As cool as fMRI-enabled brain decoding is, one important factor limiting its power is resolution, both spatial and temporal. While fMRI is a very powerful technique, the data generated are still not directly indicative of neuronal activity at the single-cell level. What if we could directly record neuronal activity and use this to reconstruct visual scene information instead?
This is exactly what many neuroscientists have attempted to do, with remarkable success, using brain-computer interfaces (BCIs). By interfacing the brain's electrical activity with a computer, neuroscientists can directly record neuronal responses and decode the causative stimulus. As early as 1991, William Bialek and colleagues successfully estimated the nature of a visual stimulus based on electrophysiologically recorded responses of neurons in the visual cortex of blowflies. Then in 1998, Yang Dan and colleagues reported the reconstruction of cat vision using electrodes implanted in the lateral geniculate nucleus.
Fast forward to the present, and modern BCI is truly the stuff of sci-fi. Take the example of a neural prosthetic device developed at Caltech, which enables a patient with severe spinal cord injury to control a robotic arm merely by forming the intention to do so. An electrode array implanted in the posterior parietal cortex "listens in" on the neural signals generated when the patient intends to make a movement. This wire-tapped electrical information is then processed by a computer into instructions that make a robotic arm perform the movement the patient intended. If an electrode array were implanted in the visual areas of the human brain instead of the parietal cortex, could the results match, or even surpass, what Gallant's team managed to do with fMRI?
An answer to this question may not be that distant.
In a recent PLOS Computational Biology article, Kai Miller and colleagues describe a pioneering method to decode human visual perception in near-real-time. The researchers used a technique called electrocorticography (ECoG), which involves implanting electrodes on the surface of the brain, enabling them to directly record neuronal electrical potentials while subjects (epileptic patients) viewed still images of faces and houses. While other groups had previously attempted to decode visual stimuli using ECoG, they always used stimuli with pre-determined start times. However, natural vision doesn't happen at neatly pre-defined times; real-world visual stimuli are mostly spontaneous. To address this challenge, Miller et al. developed a novel computational method to predict in near-real-time whether a subject was viewing a house or a face, using only the ECoG signal. Strikingly, their prediction had an accuracy rate of 96% with a timing error of only 20 ms. While we still have a way to go before entire movies can be faithfully reconstructed using ECoG, these decoding studies show us an exciting path into the future. What's more, in an earlier PLOS Biology paper, Brian Pasley and colleagues managed to reconstruct actual speech from ECoG signals acquired from the human auditory cortex. Here's a representative sample of their reconstructed audio.
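Miller et al.'s actual algorithm is far more sophisticated (it classifies face versus house stimuli from broadband spectral changes), but the core challenge they tackle, spotting when a stimulus-evoked response occurs in a continuous recording with no pre-defined trial onsets, can be illustrated with a toy sketch. Everything below (the simulated trace, window size, threshold, sampling rate) is illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy continuous "ECoG" trace: baseline noise with two embedded events.
# Each event raises the signal's variance, a crude stand-in for the
# broadband power increase that a visual stimulus evokes in real ECoG.
fs = 200                                      # samples per second (assumed)
trace = rng.normal(0.0, 1.0, 6 * fs)
onsets_true = [2 * fs, 4 * fs]                # known event onsets (samples)
for onset in onsets_true:
    trace[onset:onset + fs] += rng.normal(0.0, 3.0, fs)

def detect_events(signal, fs, win_s=0.25, threshold=4.0):
    """Slide a short window over the signal and flag each onset where
    the windowed power first jumps above threshold."""
    step = int(win_s * fs)
    onsets = []
    prev_active = False
    for start in range(0, len(signal) - step, step):
        power = np.var(signal[start:start + step])
        active = power > threshold
        if active and not prev_active:          # rising edge = new event
            onsets.append(start)
        prev_active = active
    return onsets

detected = detect_events(trace, fs)
print(detected)   # detected onsets, in samples
```

A real decoder would replace the variance threshold with a trained classifier applied to spectral features at each window, which is what lets it report *what* is being viewed as well as *when*, but the sliding-window structure is the part that removes the need for pre-determined stimulus start times.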