A new tool developed by bioengineers at Caltech has been shown to be the best yet at translating brain signals generated from internal speech. While it has only been tested in two patients so far, with further development the technology could allow people who are unable to speak to communicate using only their thoughts.
Brain-machine interfaces (BMIs) are already doing incredible things. Versions of these systems, which link the brain’s electrical signals to an output device that seeks to either replace or restore a function in the body, have been used to help paralyzed patients walk and, in the case of Neuralink’s first experimental subject, control a computer “telepathically”.
One of the major use cases of this technology is in assisting communication. For people who cannot speak – because of a neurological disease or brain injury, for example – BMIs offer a way to regain their voice.
Such devices, like the one famously used by the late Stephen Hawking, have some limitations. One is that it’s difficult to capture the natural rhythm of speech – though scientists are working on that, with a little help from Pink Floyd. Another is that a lot of speech BMIs require the user to attempt to say words out loud, which is not possible for everyone. The ideal solution would be to find a way of decoding internal speech, so someone would only have to imagine saying a word. Some advances in this area have been made, but it’s proven very challenging and results have been mixed.
Now, the team at Caltech has developed a system that has the potential to decode internal speech with a higher degree of accuracy than ever before.
Microelectrode arrays were implanted into the brains of two male patients with tetraplegia (paralysis of all four limbs), a 33-year-old and a 39-year-old. The team targeted the primary somatosensory cortex and the supramarginal gyrus (SMG), a region of the brain that hasn’t been explored in previous speech BMI studies.
The interface was trained on six real words (battlefield, cowboy, python, spoon, swimming, telephone) and two made-up words (“nifzig” and “bindip”), to test whether words needed to have meaning for the system to work effectively. The participants were either shown each word on a screen or heard it spoken aloud, then asked to imagine saying the word for 1.5 seconds, and finally to say it out loud.
Although these two participants were physically able to speak, “This technology would be particularly useful for people that have no means of movement anymore,” first author Sarah Wandelt told Nature News. “For instance, we can think about a condition like locked-in syndrome.”
The BMI allowed the researchers to decode, in real time, the activity in the SMG as the participants thought of each word. For one participant, the accuracy reached 79 percent, “only slightly less accurate than the decoding of vocalized speech,” Wandelt and co-author David Bjånes explain in a briefing on their work; for the other participant, it was 23 percent.
The technology will need to be further refined and tested on a larger group of people using more words, but the study does demonstrate that the SMG is a promising brain region to target.
“Even if this result could not be replicated in the second participant, this study is important because it is to my knowledge the first achievement of a real-time speech brain-computer interface based on single unit recordings in the SMG,” commented Blaise Yvert of The Grenoble Institute of Neuroscience, who was not involved in the study.
Next, the team wants to find out if the BMI can distinguish between letters of the alphabet, and Wandelt and Bjånes also suggest that decoding individual sound units of speech, called phonemes, could be a promising approach.
“This proof-of-concept study of high-performance decoding of internal speech will certainly be of great interest to researchers working to push the boundaries of BMIs and other therapeutic devices for people who can no longer speak,” added Giacomo Ariani, the paper’s Associate Editor.
The study is published in the journal Nature Human Behaviour.