The technology is still in its infancy, primitive and even crude, but two grids of 16 microelectrodes implanted atop the brain of a volunteer with severe epileptic seizures have had some success decoding his brain signals into a small set of words, thanks to work out of the University of Utah.
The volunteer, who was already having part of his skull temporarily removed so doctors could implant larger electrodes (see button-like numbers in photo) to locate the source of his seizures, agreed to also have these two grids of tiny electrodes implanted over the speech centers of his brain.
His brain signals were then recorded as he repeatedly read 10 words out loud: yes, no, hungry, thirsty, more, less, hot, cold, hello, goodbye.
When the researchers compared two brain-signal patterns, such as those generated as he said "yes" and "no," they could distinguish between the two correctly up to 90 percent of the time. But when examining all 10 patterns, they picked out the correct word only 28 to 48 percent of the time. While better than chance (1 in 10), the results show that much work remains.
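Those two numbers are measured against different baselines: a random guesser choosing between two words is right half the time, while one choosing among ten is right only one time in ten. A quick, purely illustrative sanity check of those chance levels:

```python
import random

random.seed(1)
WORDS = ["yes", "no", "hungry", "thirsty", "more",
         "less", "hot", "cold", "hello", "goodbye"]

def chance_accuracy(candidates, trials=100_000):
    """Accuracy of guessing uniformly at random among the candidates."""
    hits = 0
    for _ in range(trials):
        truth = random.choice(candidates)  # the word actually spoken
        guess = random.choice(candidates)  # a blind guess
        hits += truth == guess
    return hits / trials

print(f"2-way chance:  {chance_accuracy(WORDS[:2]):.1%}")   # close to 50%
print(f"10-way chance: {chance_accuracy(WORDS):.1%}")       # close to 10%
```

So 90 percent on the two-word task and 28 to 48 percent on the ten-word task are both well above their respective chance levels, even though the second figure looks much lower.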
"This is proof of concept," says Bradley Greger, an assistant professor of bioengineering, in a news release. "We've proven these signals can tell you what the person is saying well above chance, but we need to be able to do more words with more accuracy before it is something a patient really might find useful."
The study, published in the October issue of the Journal of Neural Engineering, involved a new kind of nonpenetrating microelectrode that is placed on top of specific sites of the brain. Called microECoGs, they are essentially a small version of the electrodes used for electrocorticography.
The researchers say that the EEG electrodes used to record brain waves are too big, and record too many overlapping signals, to decode speech, while the nonpenetrating microelectrodes are smaller and, because they do not pierce brain tissue, safer.
In this study, the electrodes within each grid were spaced a millimeter apart, and each grid was placed over one of two speech areas of the brain: the facial motor cortex, which controls movements of the mouth, lips, tongue and face, and Wernicke's area, which is tied to language comprehension.
During one-hour sessions on four consecutive days, each word was repeated 31 to 96 times, depending on how long the patient could continue. The researchers then compared the brain-signal patterns recorded across the repetitions of each word, and used those patterns to try to identify words from the signals alone.
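The article does not detail the decoding method, but the general idea, averaging repeated recordings of each word into a template and labeling a new recording by its closest template, can be sketched with a toy nearest-centroid classifier. Everything below is synthetic and illustrative; it is a stand-in for the study's actual analysis, not a reproduction of it:

```python
import numpy as np

rng = np.random.default_rng(0)
WORDS = ["yes", "no", "hungry", "thirsty", "more",
         "less", "hot", "cold", "hello", "goodbye"]
N_ELECTRODES = 32          # two 16-electrode grids
REPS = 40                  # repetitions per word (31 to 96 in the study)

# Synthetic data: each word gets a fixed "true" activity pattern,
# and every recording of it is that pattern plus noise.
true_patterns = {w: rng.normal(size=N_ELECTRODES) for w in WORDS}

def record(word):
    """Simulate one noisy recording of the given word."""
    return true_patterns[word] + rng.normal(scale=1.5, size=N_ELECTRODES)

# Build one template per word by averaging its repetitions.
templates = {w: np.mean([record(w) for _ in range(REPS)], axis=0)
             for w in WORDS}

def decode(signal):
    """Label a recording with the word whose template is closest."""
    return min(WORDS, key=lambda w: np.linalg.norm(signal - templates[w]))

# Accuracy over fresh recordings; chance for 10 words is 10 percent.
trials = [(w, decode(record(w))) for w in WORDS for _ in range(20)]
accuracy = sum(truth == guess for truth, guess in trials) / len(trials)
print(f"10-way accuracy: {accuracy:.0%}")
```

On this clean synthetic data the classifier does far better than the study's 28 to 48 percent, which is the point: real cortical recordings are far noisier and less separable than a toy simulation.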
The most important takeaway may be that this is possible at all: that closely spaced microelectrodes can actually capture speech-related signals from single, column-shaped processing units of neurons in the human brain.
Whether brain-signal patterns vary widely from one person to another remains to be seen; if they do, that could prove problematic for patients who have never been able to speak, and whose word patterns therefore could never be recorded and decoded in the first place. For now, the team is focusing on bigger grids that can capture more data.
"It means it works, and we now need to refine it so that people with locked-in syndrome could really communicate," Gregor says. "The obvious next step--and this is what we are doing right now--is to do it with bigger microelectrode grids. We can make the grid bigger, have more electrodes, and get a tremendous amount of data out of the brain, which probably means more words and better accuracy."