News Release

Improved brain decoder holds promise for communication in people with aphasia

Restoring some language for people with aphasia, a condition that affects actor Bruce Willis and about a million other Americans, could involve AI.

Peer-Reviewed Publication

University of Texas at Austin

Image: Brain activity while watching videos

Brain activity like this, measured in an fMRI machine, can be used to train a brain decoder to decipher what a person is thinking about. In this latest study, UT Austin researchers developed a method to adapt their brain decoder to new users far faster than the original training required, even when the user has difficulty comprehending language.

Credit: Jerry Tang/University of Texas at Austin

People with aphasia—a brain disorder affecting about a million people in the U.S.—struggle to turn their thoughts into words and to comprehend spoken language.

A pair of researchers at The University of Texas at Austin have demonstrated an AI-based tool that can translate a person’s thoughts into continuous text without requiring the person to comprehend spoken words. And training the tool on a person’s own unique patterns of brain activity takes only about an hour. This builds on the team’s earlier work creating a brain decoder that required many hours of training on a person’s brain activity as they listened to audio stories. The latest advance suggests it may be possible, with further refinement, for brain-computer interfaces to improve communication in people with aphasia.

“Being able to access semantic representations using both language and vision opens new doors for neurotechnology, especially for people who struggle to produce and comprehend language,” said Jerry Tang, postdoctoral researcher at UT in the lab of Alex Huth and first author on a paper describing the work in Current Biology. “It gives us a way to create language-based brain computer interfaces without requiring any amount of language comprehension.”

In earlier work, the team trained a system, including a transformer model similar to the kind used by ChatGPT, to translate a person’s brain activity into continuous text. The resulting semantic decoder can produce text whether a person is listening to an audio story, thinking about telling a story, or watching a silent video that tells a story. But there are limitations. To train this brain decoder, participants had to lie motionless in an fMRI scanner for about 16 hours while listening to podcasts, an impractical process for most people and potentially impossible for someone with deficits in comprehending spoken language. And the original brain decoder worked only for the people it was trained on.
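For readers curious about the general mechanics, the sketch below illustrates the encoding-model style of decoding described above on purely synthetic data: fit a model that predicts brain responses from story features, then pick the candidate text whose predicted response best matches what was actually measured. The dimensions, variable names, and scoring rule are illustrative stand-ins, not the published implementation.

# Minimal sketch of encoding-model decoding on synthetic data (illustrative only).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train_trs, n_voxels, n_features = 500, 200, 32   # toy dimensions

# Pretend these are language-model features of the story the person heard,
# time-aligned to the fMRI volumes recorded during training.
train_features = rng.normal(size=(n_train_trs, n_features))
true_weights = rng.normal(size=(n_features, n_voxels))
train_brain = train_features @ true_weights + 0.5 * rng.normal(size=(n_train_trs, n_voxels))

# 1) Fit an encoding model: predict each voxel's response from story features.
encoder = Ridge(alpha=1.0).fit(train_features, train_brain)

# 2) Decode by scoring candidates: the candidate whose predicted brain
#    response best matches the newly measured response wins.
def score_candidate(candidate_features, measured_brain):
    predicted = encoder.predict(candidate_features[None, :])
    return -np.linalg.norm(predicted - measured_brain)  # higher is better

measured = (rng.normal(size=n_features) @ true_weights)[None, :]
candidates = [rng.normal(size=n_features) for _ in range(10)]
best = max(candidates, key=lambda c: score_candidate(c, measured))
print("picked candidate with score", score_candidate(best, measured))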

With this latest work, the team has developed a method to adapt the existing brain decoder, trained the hard way, to a new person using only about an hour of training in an fMRI scanner while that person watches short, silent videos, such as Pixar shorts. The researchers developed a converter algorithm that learns how to map the brain activity of a new person onto the brain activity of someone whose data were previously used to train the decoder, yielding comparable decoding in a fraction of the training time.
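The converter idea can likewise be illustrated with a toy sketch: because the new user and a previously trained reference user watch the same videos, their time-locked brain responses can be linearly mapped, and the converted activity is then handed to the existing decoder. The regression choice, dimensions, and variable names below are assumptions for illustration, not the paper's actual algorithm.

# Minimal sketch of aligning a new user to a reference user on synthetic data (illustrative only).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_trs, n_new_voxels, n_ref_voxels = 300, 150, 200   # toy sizes standing in for ~1 hour of shared movie data

# Simulated responses of the new user and the reference user to the SAME videos.
shared_signal = rng.normal(size=(n_trs, 40))
new_user_brain = shared_signal @ rng.normal(size=(40, n_new_voxels)) + 0.3 * rng.normal(size=(n_trs, n_new_voxels))
ref_user_brain = shared_signal @ rng.normal(size=(40, n_ref_voxels)) + 0.3 * rng.normal(size=(n_trs, n_ref_voxels))

# Learn a converter that maps the new user's activity into the reference user's
# voxel space, where the already-trained decoder operates.
converter = Ridge(alpha=10.0).fit(new_user_brain, ref_user_brain)

# At decoding time: record the new user, convert, then feed the converted
# activity to the existing decoder (decoder not shown here).
new_recording = new_user_brain[:5]
converted = converter.predict(new_recording)
print("converted shape:", converted.shape)   # (5, n_ref_voxels), ready for the existing decoder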

Huth said this work reveals something profound about how our brains work: our thoughts transcend language.

“This points to a deep overlap between what things happen in the brain when you listen to somebody tell you a story, and what things happen in the brain when you watch a video that’s telling a story,” said Huth, associate professor of computer science and neuroscience and senior author. “Our brain treats both kinds of story as the same. It also tells us that what we’re decoding isn’t actually language. It’s representations of something above the level of language that aren’t tied to the modality of the input.”

The researchers noted that, just as with their original brain decoder, the improved system works only with cooperative participants who take part willingly in training. If participants on whom the decoder has been trained later resist—for example, by thinking other thoughts—the results become unusable. This reduces the potential for misuse.

While their latest test subjects were neurologically healthy, the researchers also ran analyses that simulated the patterns of brain lesions seen in people with aphasia and showed that the decoder could still translate the story a person was perceiving into continuous text. This suggests the approach could eventually work for people with aphasia.

They are now working with Maya Henry, an associate professor in UT’s Dell Medical School and Moody College of Communication who studies aphasia, to test whether their improved brain decoder works for people with aphasia.

“It’s been fun and rewarding to think about how to create the most useful interface and make the model-training procedure as easy as possible for the participants,” Tang said. “I’m really excited to continue exploring how our decoder can be used to help people.”

This work was supported by the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health, the Whitehall Foundation, the Alfred P. Sloan Foundation and the Burroughs Wellcome Fund.

