If you read this blog, you’ll see that I have often said that AI would change medicine and everything else. One way that will begin is by studying the brain and how it works and then replicating that in machines. This morning’s blog is about an AI system that replicates how we navigate in natural environments, which may seem like a simple thing, but it’s not something any computer does well yet.
I first noticed artificial neural networks back in the ’90s, at the dawn of the Internet. I’m not sure why, but the field just fascinated me. I guess it was the idea that a computer could be trained rather than programmed. The problem was that back then, this was really esoteric stuff, and it stayed that way until about five years ago, when the big boys (Google, Microsoft, Facebook, and others) began to understand that artificial intelligence (AI) could give them a competitive edge. Almost overnight, neural networks went from obscurity to prime time.
What is a neural network? It’s a computer program (or, increasingly, a hardware chip) that mimics how the human brain processes information. It has artificial neural connections that are either strengthened or down-regulated as information comes in. It begins as a blank slate; you feed it data and give it a goal, and the connections that help it meet that goal are reinforced until the network takes shape. Basically, it learns the way a human does.
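To make that concrete, here’s a minimal sketch of the idea: a single artificial “neuron” (a perceptron) learning the OR function from examples. The function names and data are illustrative, not from any particular AI library; real neural networks stack millions of these units, but the strengthen-or-weaken-the-connections loop is the same.

```python
# A minimal sketch: one artificial neuron learning the OR function.
# All names here are illustrative, not from any real AI framework.
def train_neuron(examples, epochs=20, lr=0.1):
    weights = [0.0, 0.0]  # "blank slate" connection strengths
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            # Weighted sum of inputs, thresholded to fire (1) or not (0)
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            error = target - output
            # Connections are strengthened or weakened based on the error
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# "Feed it data and give it a goal": examples of the OR function
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_neuron(data)
```

Nobody wrote an OR rule into the program; the weights took shape from the examples alone, which is the trained-not-programmed distinction in miniature.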
This tech is used everywhere right now. For example, our phones use it to get better at translating speech into text. In medicine, it’s used to find which rare genetic mutations in tumors respond best to a specific chemotherapy drug. Even UPS uses this kind of software to plan its drivers’ routes to save on gas.
Grid cells form a complex neural network in the entorhinal cortex of the brain that allows us to self-navigate, a kind of biological GPS. More simply, they let us build a mental map of a new location just by wandering around it, fascinatingly, even in complete darkness or with no visual or external markers to go by (termed dead-reckoning navigation). A similar group of cells in the hippocampus, called place cells, is responsible for remembering specific locations, so it’s believed the two work together somehow to keep us directionally grounded in space and time.
In addition, these cells are constantly mapping our environment, recording it to memory, and providing the easiest ways for us to get from point A to point B. Place cells were discovered in the 1970s, but it would take a few decades before grid cells were found, and the discovery earned its scientists the Nobel Prize in 2014.
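The dead-reckoning idea above is easy to illustrate in code: you can estimate where you are purely by accumulating your own movements, with no landmarks at all, which is the computation grid cells are thought to support. This toy example (my own, not from the study) just adds up heading-and-distance steps.

```python
import math

# Toy dead reckoning (path integration): estimate position purely by
# accumulating your own movements, with no external landmarks.
def path_integrate(moves, start=(0.0, 0.0)):
    """moves: list of (heading_in_degrees, distance) steps."""
    x, y = start
    for heading, dist in moves:
        rad = math.radians(heading)
        x += dist * math.cos(rad)
        y += dist * math.sin(rad)
    return (x, y)

# Walk 3 m east, then 4 m north: we "know" we're at (3, 4), which is
# 5 m from the start, without ever looking at a landmark.
pos = path_integrate([(0, 3.0), (90, 4.0)])
```

Of course, in the brain (and in robots) small errors in each step accumulate, which is part of why place cells and external landmarks are thought to periodically correct the running estimate.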
We all know people who have no sense of direction whatsoever and others who can instantly tell you which way is north, south, east, or west from any location, so clearly some of us have more active grid cells than others. And with Google Maps and other GPS devices always at our fingertips in today’s technology-driven world, one has to wonder whether our grid cells as a human population aren’t much less active than our ancestors’ were. Now, scientists have even found a way to create grid cell-like navigational functionality in AI. Let’s take a look at the new study.
Using the grid cells in the brain as a model, researchers set out to replicate a grid-like navigational network in AI. AI learns through reinforcement, which means it’s taught much like a child. In the new study, researchers first trained the AI neural network to perform path integration the way grid cells do. Grid cells fire in a hexagonal pattern and select the most efficient path from one place to the next.
With deep reinforcement learning, the AI network was then able to determine the shortest paths from one point to another. In other words, it mapped out shortcuts much like the grid cell networks in the human brain do. So training the AI on a grid cell-like pattern similar to the one the human brain uses replicated one of the brain’s higher functions: shortcut navigation.
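For readers who want a feel for how reinforcement learning discovers shortest paths, here is a vastly simplified stand-in: tabular Q-learning on a tiny grid world. To be clear, the actual study used deep networks with grid cell-like representations; this sketch (my own example, with made-up parameters) only shows the underlying reinforcement idea, where a small cost per move and a reward at the goal teach the agent the shortest route.

```python
import random

# A much-simplified stand-in for deep reinforcement learning:
# tabular Q-learning on a 4x4 grid. A small penalty per move and a
# reward at the goal push the agent toward the shortest path.
SIZE = 4
GOAL = (3, 3)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def step(state, action):
    r = max(0, min(SIZE - 1, state[0] + action[0]))
    c = max(0, min(SIZE - 1, state[1] + action[1]))
    return (r, c)

def train(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.1):
    random.seed(0)
    q = {}  # (state, action) -> estimated long-term value
    for _ in range(episodes):
        state = (0, 0)
        while state != GOAL:
            if random.random() < epsilon:      # explore occasionally
                action = random.choice(ACTIONS)
            else:                              # otherwise act greedily
                action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
            nxt = step(state, action)
            reward = 1.0 if nxt == GOAL else -0.01  # small cost per move
            best_next = max(q.get((nxt, a), 0.0) for a in ACTIONS)
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return q

def greedy_path(q):
    state, path = (0, 0), [(0, 0)]
    while state != GOAL and len(path) < 20:
        state = step(state, max(ACTIONS, key=lambda a: q.get((state, a), 0.0)))
        path.append(state)
    return path

q = train()
path = greedy_path(q)  # after training: a 6-move shortest route to (3, 3)
```

Early episodes are essentially random wandering; over many episodes the value estimates come to favor the efficient route, which is the "taught like a child" dynamic in its simplest form.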
It’s not hard to see that this new technology will soon find its way into our phones. Google and Amazon would love to have an accurate map of your home or office, noting where all of your stuff lives. How you navigate around this environment will tell them how to best sell you more stuff, using AI programs that find the little details that we humans may miss.
If you’re a regular follower of the blog, you know that I’m a fan of AI. Here are a few studies I’ve covered over the past couple of years.
If you have some time, feel free to watch my video below of my lecture on the knee microenvironment and our own AI neural network (the next frontier of regenerative medicine outcomes) at the Interventional Orthopedics Foundation last year.
The upshot? One of the most fascinating areas of research right now is studying how the brain works and replicating that in AI. This will take many forms, but one day in the not-too-distant future, we will see artificial intelligence systems that make today’s look quaint. While many fear that day, for modern humans it will be the equivalent of the dawn of the printing press: a transformative moment when everything after is different from everything before.
About the Author
Christopher J. Centeno, M.D. is an international expert and specialist in regenerative medicine and the clinical use of mesenchymal stem cells in orthopedics. He is board certified in physical medicine and rehabilitation as well as in pain management through The American Board of Physical Medicine and Rehabilitation.…