You may not realize it, but a revolution in computing is under way that’s a far bigger deal than the printing press and the Internet put together. Why? Computers have always been very limited in how they could understand and interact with the real physical world. These limitations stem from how computers have always worked: as machines that follow only prewritten commands. The real world, on the other hand, is unpredictable, organic, nonbinary, and often messy. But that’s now changing with the advent of artificial intelligence and neural networks. How good is this new technology? It was just used to read minds…
My first exposure to AI was the computer on the original Star Trek that could hold a whole conversation. Next up was Isaac Asimov’s Robot series, beginning with the story collection I, Robot. These tales of human-like robots captured my imagination so much that I couldn’t wait to get my own robot. However, as the decades passed and the computers around me got ever more sophisticated, my Star Trek computer and android servant never really materialized. Why? Serial processing. Let me explain.
Your desktop computer is basically just an instruction-following machine. It processes one instruction at a time, really fast, which is called serial processing. If you don’t write the exact instructions to tell it what to do, it’s lost. However, that’s not how our human brains work. They’re massive parallel processors. Information comes in along millions of parallel channels, each of which connects to hundreds more, which in turn connect to many more, and so on. The information forms a pattern in how it excites or inhibits all of these mesh-like connections, which is called a neural network. Because the network starts as a blank slate, it learns by adjusting the strength of those connections until patterns in the incoming information become patterns in the network itself.
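To make the idea concrete, here’s a toy sketch of the simplest possible “neural network”: a single artificial neuron that starts as a blank slate and learns a pattern from examples. This is purely illustrative (it has nothing to do with the mind-reading study below, which used far larger networks); it learns the simple OR pattern using the classic perceptron learning rule.

```python
# Toy illustration only: one artificial "neuron" learning the OR pattern.
# It starts with zero knowledge and adjusts its connection strengths
# (weights) each time it makes a mistake.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights for a single neuron from (inputs, target) pairs."""
    w = [0.0, 0.0]  # blank slate: no knowledge of the pattern yet
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            # The neuron "fires" (outputs 1) if its weighted sum is positive
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out
            # Strengthen or weaken connections based on the error,
            # loosely analogous to excitation/inhibition in the brain
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# The OR pattern: output 1 if either input is 1
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(examples)
for (x1, x2), target in examples:
    print((x1, x2), "->", predict(w, b, x1, x2))
```

Notice that nobody wrote an explicit rule for OR anywhere in the code; the neuron discovered it from the examples. Real neural networks stack millions of such units, which is what lets them find patterns no programmer could spell out by hand.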
These new neural network programs (aka AI or artificial intelligence) are great at recognizing complex patterns. In fact, they’re so good at it, they can now see patterns in data that we’re not nearly smart enough to recognize. Which brings us back to how to read minds with AI. Click on my video below to learn more:
While we’ve already seen machines that can learn to read simple black and white shapes and letters in our minds, the new study took a much bigger leap—seeing how much human thought detail a machine really could learn to visualize. Researchers showed subjects both natural (e.g., duck, airplane, etc.) and artificial (e.g., colored shapes, letters, etc.) images and used functional MRI to measure the subjects’ visual cortical activity (activity in the part of the brain responsible for processing visual information). In some tests, scans were done while subjects viewed the images; in other tests, subjects were asked to remember the images.
Using a method they termed novel image reconstruction, the researchers generated images similar to those the subjects were viewing. Remembered images could not be reconstructed at nearly the same level of detail, but the researchers suggested this was due to our inability to remember visual details exactly as they appear. In other words, understandably, the brain is more focused and engaged while we are actually viewing an image. To fully grasp the fascinating results of this study, click on the study link above to see comparisons of the actual images and the depth of detail the AI computer was able to reconstruct.
This isn’t the first time I’ve shared impressive results on artificial intelligence. Of course, any AI that might impact medical science, especially in the field of orthopedics, is of particular interest to me. In fact, we have an experimental AI model ourselves that can help us with candidacy grading for patients with knee arthritis. It uses a neural network to predict how well a patient is likely to respond to stem cell treatment based on the chemical makeup of the synovial fluid in the knee. Check out the video below for more info:
On a broader scale, AI technology is already taking the ever-changing world of medicine much further than our human brains can take it, turbocharging our research by recognizing complex patterns and revealing relationships in scientific data that we’d never uncover on our own. While the technology demonstrated in our feature study today still has a long way to go, as quickly as AI is advancing, in a few decades we’re likely to consider novel image reconstruction as archaic as we consider mechanical calculators today.
The upshot? Forget the Internet bubble of the ’90s. It will be a mere historical blip compared to the AI wave now mounting offshore and heading our way. Because AI will drive our cars, organize our lives, and eventually perform our medical research, it’s truly our last invention. Once it gets sophisticated enough, we mere humans won’t be able to keep up. The good news is that, as long as we can control it and use it for our benefit, AI will revolutionize medicine. It will diagnose disease long before we even know there’s a problem. It will eventually detect the tiny changes in biomechanics and chemistry that, a decade later, would result in knee arthritis. And yes, with a little luck, I’ll eventually get my robot!
About the Author
Christopher J. Centeno, M.D. is an international expert and specialist in regenerative medicine and the clinical use of mesenchymal stem cells in orthopedics. He is board certified in physical medicine and rehabilitation and in pain management through The American Board of Physical Medicine and Rehabilitation.…