Elon Musk recently announced that Neuralink’s next project will be a “Blindsight” cortical implant to restore vision: “The resolution will be low at first, like early Nintendo graphics, but eventually it could exceed normal human vision.”
Unfortunately, this claim rests on the misconception that neurons in the brain are like pixels on a screen. It’s no wonder engineers often assume “more pixels equals better vision.” After all, that’s how phone screens and monitors work.
In our newly published research, we created a computational model of human vision to simulate the kind of vision a high-resolution cortical implant could provide. A movie of a cat at a resolution of 45,000 pixels is sharp and clear. In a movie generated using a simplified version of our model, simulating 45,000 cortical electrodes, each stimulating a single neuron, the cat is still recognizable, but most of the scene’s details are lost.
The movie generated from electrodes is so blurry because neurons in the human visual cortex do not represent tiny dots or pixels. Instead, each neuron has a specific receptive field: the location and pattern a visual stimulus must have to make that neuron fire. Electrically stimulating a single neuron produces a blob whose appearance corresponds to that neuron’s receptive field. Even the smallest electrode, one that stimulates a single neuron, will produce a blob about the width of your pinky finger held at arm’s length.
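The idea that stimulating one neuron evokes a blob shaped like its receptive field can be sketched in a few lines of Python. A Gabor patch is a standard textbook model of an early visual cortical receptive field; the sizes and parameters below are illustrative placeholders, not values from our published model.

```python
import numpy as np

def gabor_rf(size=64, sigma=8.0, theta=0.0, wavelength=16.0):
    """An oriented Gabor patch: a common model of a V1 receptive field."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    # Rotate coordinates into the neuron's preferred orientation.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))   # local, limited extent
    carrier = np.cos(2 * np.pi * x_t / wavelength)       # oriented stripes
    return envelope * carrier

# Stimulating this one neuron is modeled as evoking a "phosphene"
# whose shape is simply the receptive field itself: an oriented blob,
# not a single pixel.
rf = gabor_rf(theta=np.pi / 4)
phosphene = rf  # one electrode, one neuron, one blob
```

The key design point is that the smallest achievable percept is the blob `phosphene`, whose extent is set by `sigma`, no matter how small the electrode is.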
Think about what happens when you look at a single star in the night sky. Each point in space is represented by thousands of neurons with overlapping receptive fields. A tiny spot of light, like a star, results in a complex firing pattern across all these neurons.
To generate the visual experience of seeing a single star with cortical stimulation, you would need to reproduce a pattern of neural responses similar to that produced by natural vision.
To do this, you would obviously need thousands of electrodes. But you would also need to replicate the correct pattern of neuronal responses, which requires knowing the receptive field of each neuron. Our simulations show that knowing the location of each neuron’s receptive field in space isn’t enough. If you don’t also know the orientation and size of each receptive field, the star becomes a fuzzy mess.
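This argument can be illustrated with a toy simulation (a minimal sketch with made-up parameters, not the published model): record each neuron’s natural response to a single bright point, then render the resulting percept twice, once through the true oriented receptive fields, and once through round blobs placed at the same locations, as if only each field’s position were known.

```python
import numpy as np

SIZE = 48
STAR = (24, 24)  # position of the single bright point ("star")

def iso_gaussian(cx, cy, sigma, size=SIZE):
    """A round blob: what you'd render knowing only the RF's location."""
    y, x = np.mgrid[0:size, 0:size]
    return np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * sigma**2))

def oriented_rf(cx, cy, theta, size=SIZE):
    """An elongated Gaussian standing in for an oriented cortical RF."""
    y, x = np.mgrid[0:size, 0:size]
    xr = (x - cx) * np.cos(theta) + (y - cy) * np.sin(theta)
    yr = -(x - cx) * np.sin(theta) + (y - cy) * np.cos(theta)
    return np.exp(-(xr**2 / (2 * 4.5**2) + yr**2 / (2 * 1.5**2)))

def spread(percept):
    """Energy-weighted mean squared distance from the star: bigger = fuzzier."""
    y, x = np.mgrid[0:SIZE, 0:SIZE]
    d2 = (x - STAR[1])**2 + (y - STAR[0])**2
    return float((percept * d2).sum() / percept.sum())

rng = np.random.default_rng(0)
star = np.zeros((SIZE, SIZE))
star[STAR] = 1.0

# A population of neurons with overlapping receptive fields around the star.
centers = [(rng.uniform(14, 34), rng.uniform(14, 34)) for _ in range(200)]
thetas = [rng.uniform(0, np.pi) for _ in centers]
true_rfs = [oriented_rf(cx, cy, th) for (cx, cy), th in zip(centers, thetas)]

# Each neuron's natural response to the star is its RF sampled at the star.
responses = [float((rf * star).sum()) for rf in true_rfs]

# Percept when those responses are rendered through the TRUE oriented fields...
true_percept = sum(r * rf for r, rf in zip(responses, true_rfs))
# ...versus round blobs at the same locations (orientation and size unknown).
guess_rfs = [iso_gaussian(cx, cy, 8.0) for cx, cy in centers]
guess_percept = sum(r * rf for r, rf in zip(responses, guess_rfs))

print(f"spread with true RFs:    {spread(true_percept):.1f}")
print(f"spread with guessed RFs: {spread(guess_percept):.1f}")  # fuzzier
```

With the true fields, the weighted sum concentrates back near the star; with location-only guesses, the same responses smear into a much wider blur.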
Thus, even a single star – a single bright pixel – generates an extremely complex neural response in the visual cortex. Imagine the even more complex pattern of cortical stimulation required to accurately reproduce natural vision.
Some scientists have suggested that natural vision could be produced by stimulating the correct combination of electrodes. Unfortunately, no one has yet proposed a sensible way to determine the receptive field of each individual neuron in a given blind patient. Without that information, there is no way to see the stars. Vision from cortical implants will remain grainy and imperfect, regardless of the number of electrodes.
Sight restoration is not just an engineering problem. To predict what kind of vision a device will provide, you need to know how the technology interfaces with the complexities of the human brain.
How we created our virtual patients
In our work as computational neuroscientists, we develop simulations that predict the perceptual experience of patients undergoing sight restoration.
We have previously created a model to predict the perceptual experience of retinal implant patients. To create a virtual patient predicting what cortical implant patients would see, we simulated the neurophysiological architecture of the brain area responsible for the first stage of visual processing. Our model approximates how receptive field size increases from central to peripheral vision, as well as the fact that each neuron has a unique receptive field.
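The increase in receptive field size with distance from the center of gaze is often approximated as a simple linear function of eccentricity. The sketch below uses that common approximation; the intercept and slope are illustrative placeholders, not the values from our model.

```python
def rf_size_deg(eccentricity_deg, intercept=0.1, slope=0.15):
    """Approximate receptive-field diameter (degrees of visual angle)
    as a linear function of eccentricity. Parameters are illustrative."""
    return intercept + slope * eccentricity_deg

# Fields near the center of gaze are tiny; peripheral fields are much
# larger, so an implant's effective "resolution" falls off away from
# fixation even if electrodes are spaced evenly on the cortex.
for ecc in (0, 5, 20, 40):
    print(f"{ecc:>2} deg out -> RF ~{rf_size_deg(ecc):.2f} deg wide")
```

Under these placeholder parameters, a field at 40 degrees eccentricity is dozens of times wider than one at fixation, which is why a uniform electrode grid cannot deliver uniform acuity.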
Our model successfully predicted data that described participants’ perceptual experience across a wide range of studies of cortical stimulation in humans. After confirming that our model could predict current data, we used it to make predictions about the quality of vision of possible future cortical implants.
Models like ours are an example of virtual prototyping, which involves using computer systems to improve product design. These models can facilitate the development of new technology and evaluate device performance. Our study shows that they can also offer more realistic expectations of the kind of vision that bionic eyes could provide.
First, do no harm
In our nearly 20 years of researching bionic eyes, we’ve seen the complexity of the human brain defeat company after company. Patients pay the cost when these devices fail, left stranded with orphaned technologies in their eyes or brains.
The Food and Drug Administration could mandate that vision recovery technology companies must develop failure plans that minimize harm to patients when technologies stop working. Possibilities include requiring companies implanting neuroelectronic devices in patients to participate in technology escrow agreements and carry insurance to ensure continued medical care and technology support if they go bankrupt.
If cortical implants can achieve anything close to the resolution of our simulations, that would still be an achievement worth celebrating. Poor and imperfect vision would change the lives of thousands of people suffering from incurable blindness. But this is a time for cautious rather than blind optimism.
This article is republished from The Conversation, a non-profit, independent news organization that brings you reliable facts and analysis to help you make sense of our complex world. Written by: Ione Fine, University of Washington and Geoffrey Boynton, University of Washington
Ione Fine receives funding from the National Institutes of Health’s National Eye Institute (grant R01 EY014645).
Geoffrey Boynton receives funding from the National Institutes of Health (grant R01 EY014645).