Researchers have developed a decoder that can reconstruct people’s thoughts by analyzing their brain scans. Unlike techniques that rely on surgically implanted electrodes to decipher mental activity, this new approach uses functional magnetic resonance imaging (fMRI) recordings, offering a non-invasive means of decoding continuous language.
Speaking to The Scientist, neuroscientist Alexander Huth from the University of Texas at Austin said that “if you had asked any cognitive neuroscientist in the world twenty years ago if this was doable, they would have laughed you out of the room.” Describing their breakthrough in a study that has yet to undergo peer review, Huth and his colleagues explain how their decoder could be applied to “future multipurpose brain-computer interfaces.”
Such devices are typically used as communication aids by people who are unable to speak, featuring electrode arrays capable of detecting the real-time firing patterns of individual neurons. In contrast, Huth’s method uses fMRI to observe changes in blood flow around the brain and map these against users’ thoughts.
The researchers trained their algorithm by scanning the brains of three volunteers as they listened to 16 hours of podcasts and stories. From these fMRI recordings, the decoder learned to predict which patterns of brain activity correspond to particular semantic representations.
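The broad recipe is to fit an “encoding model” that predicts fMRI responses from semantic features of heard words, then decode by searching for the word sequence whose predicted response best matches the recorded one. The sketch below is a heavily simplified, hypothetical illustration of that idea using synthetic data and plain ridge regression; the array sizes, regularization value, and candidate search are assumptions for illustration, not the study’s actual pipeline.

```python
# A minimal sketch of the general approach (not the authors' code):
# 1) fit a linear "encoding model" mapping semantic features -> fMRI responses,
# 2) decode by picking the candidate stimulus whose predicted response
#    best matches the observed one.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for real data; sizes are invented for illustration.
n_train, n_voxels, n_feats = 5000, 200, 64
X_train = rng.normal(size=(n_train, n_feats))            # semantic features
W_true = rng.normal(size=(n_feats, n_voxels))
Y_train = X_train @ W_true + rng.normal(scale=0.5, size=(n_train, n_voxels))

# 1) Encoding model: ridge regression from features to voxel responses.
lam = 10.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_feats),
                    X_train.T @ Y_train)

def predicted_response(features):
    """Predict the voxel response pattern for one feature vector."""
    return features @ W

def score(candidate_feats, observed):
    """Negative squared error between predicted and observed responses."""
    return -np.sum((predicted_response(candidate_feats) - observed) ** 2)

# 2) A trivial search over candidate feature vectors; the real decoder
# instead proposes word continuations with a language model and keeps
# the best-scoring sequences (beam search).
observed = Y_train[0]
candidates = X_train[:100]
best = max(range(len(candidates)), key=lambda i: score(candidates[i], observed))
print("best-matching candidate index:", best)  # 0: observed came from it
```

Because fMRI measures slow blood-flow changes rather than individual words, the decoder recovers the gist of a sequence rather than a word-for-word transcript, which is consistent with the limitations the authors report below.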
“This decoder generates intelligible word sequences that recover the meaning of perceived speech, imagined speech, and even silent videos, demonstrating that a single language decoder can be applied to a range of semantic tasks,” write the authors in their preprint.
In addition to accurately predicting the phrases being listened to, the algorithm could also correctly interpret short stories that participants recounted in their heads, indicating that this approach may be suitable for use by those who can’t communicate out loud.
Because it is not fully known which cortical circuits represent language, the researchers trained their decoder on three separate brain networks: the classical language network, the parietal-temporal-occipital association network, and the prefrontal network. Fascinatingly, they found that each of these groupings could be used to decode word sequences, suggesting that it may be possible to interpret thoughts by focusing on any one of these networks independently.
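To get a feel for what decoding from each network independently might look like computationally, here is a hypothetical sketch that fits a separate ridge-regression decoder on synthetic voxel responses for each of the three networks; the voxel counts and the fit_decoder() helper are illustrative assumptions, not the authors’ method.

```python
# Hedged sketch: fit and evaluate a separate decoder per cortical network,
# using synthetic data. Network voxel counts are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_feats = 2000, 32
features = rng.normal(size=(n_samples, n_feats))  # shared semantic features

networks = {
    "classical language": 120,
    "parietal-temporal-occipital": 150,
    "prefrontal": 100,
}

def fit_decoder(X, Y, lam=10.0):
    """Ridge regression mapping voxel responses back to semantic features."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

for name, n_vox in networks.items():
    # Simulate this network's voxel responses to the same stimuli.
    W_enc = rng.normal(size=(n_feats, n_vox))
    responses = features @ W_enc + rng.normal(scale=0.5, size=(n_samples, n_vox))
    # Train a decoder on this network alone and check feature recovery.
    W_dec = fit_decoder(responses, features)
    recovered = responses @ W_dec
    r = np.corrcoef(recovered.ravel(), features.ravel())[0, 1]
    print(f"{name}: feature recovery r = {r:.2f}")
```

In this toy setup each network carries enough signal to recover the features on its own, mirroring the paper’s finding that any one of the three networks could support decoding independently.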
Despite these impressive findings, the study authors note that “while our decoder successfully reconstructs the meaning of language stimuli, it often fails to recover exact words.”
According to Huth, the system struggles the most with pronouns and distinguishing first-person from third-person speech. “[It] knows what’s happening pretty accurately, but not who is doing the things,” he said.
Finally, the researchers sought to address concerns over mental privacy by testing whether the decoder could be used to decipher someone’s thoughts without their consent or cooperation. Happily, they discovered that the algorithm was incapable of reconstructing users’ semantic thoughts when they distracted themselves by naming and imagining animals.
The authors also note that a decoder that had been trained on one person’s brain scans could not be used to reconstruct language from another person.
[H/T: The Scientist]