Researchers Developed An AI Brain Decoder That Can Read Your Thoughts And Turn Them Into Written Text Without Much Training


A “brain decoder” that uses artificial intelligence to translate thoughts into written text has undergone some new improvements.

Scientists came up with a converter algorithm that can quickly adapt existing brain decoders to new users. One day, the findings could help people with aphasia, a brain disorder that affects the ability to communicate.

A brain decoder uses machine learning to convert a person’s thoughts into text based on how their brain responds to stories they have heard.
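In rough terms, the decoder learns a statistical mapping between patterns of brain activity and the meaning of words. The Python sketch below illustrates that general idea by fitting a simple regression from simulated fMRI responses to word embeddings. It is a simplified, hypothetical illustration, not the study's actual pipeline, and every name, shape, and value in it is made up.

```python
# Minimal sketch of the idea behind a brain decoder: learn a mapping from
# fMRI voxel responses to language features, then pick the candidate word
# whose features best match a new brain response. Illustrative only; all
# data here is random and all shapes are hypothetical.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_timepoints, n_voxels, n_features = 500, 1000, 64

# Training data: brain responses recorded while a participant listens to
# stories (X), paired with semantic features of the words heard (Y).
X_train = rng.standard_normal((n_timepoints, n_voxels))
Y_train = rng.standard_normal((n_timepoints, n_features))

# Ridge regression keeps the high-dimensional voxel weights stable.
model = Ridge(alpha=1.0).fit(X_train, Y_train)

# Decoding: predict semantic features from a new brain response, then
# choose the vocabulary word whose embedding scores highest.
vocab = ["story", "house", "river", "music"]        # hypothetical vocabulary
vocab_embeddings = rng.standard_normal((len(vocab), n_features))

new_response = rng.standard_normal((1, n_voxels))
predicted = model.predict(new_response)             # shape (1, n_features)
similarity = vocab_embeddings @ predicted.T         # dot-product similarity
print("decoded word:", vocab[int(similarity.argmax())])
```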

However, earlier versions of the decoder required individuals to listen to stories for hours while inside an MRI machine. Additionally, they only worked for the people they were trained on.

“People with aphasia oftentimes have some trouble understanding language as well as producing language,” said Alexander Huth, a co-author of the study and a computational neuroscientist from the University of Texas at Austin.

“So if that’s the case, then we might not be able to build models for their brain at all by watching how their brain responds to stories they listen to.”

In a new study, the research team looked into how to overcome this issue. First, they trained a brain decoder on a few participants using the old method—by collecting functional MRI data while the participants listened to 10 hours of stories on the radio.

Next, the team trained two converter algorithms on data from the initial group of participants and a new group. One of the algorithms used data collected during 70 minutes of listening to radio stories.

The other used data from participants who spent 70 minutes watching silent short films that had nothing to do with the radio stories.



Then, the researchers employed a technique known as functional alignment to map how the different groups of participants’ brains responded to the same audio and film stories.

That information was utilized to train the decoder to work with the brains of the second set of participants without needing to gather hours of data.
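To get a feel for what functional alignment can look like, here is a minimal, hypothetical sketch: it learns a linear “converter” between two people’s simulated brain responses to the same stimulus, so a decoder trained on one brain can be applied to the other. The use of orthogonal Procrustes, along with all the data and shapes, is an illustrative assumption rather than the study’s exact method.

```python
# A hedged sketch of functional alignment: when two people experience the
# same stimulus, learn a linear "converter" from the new participant's
# voxel space into the reference participant's voxel space. A decoder
# trained on the reference brain can then be reused. Illustrative only.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(1)
n_timepoints, n_voxels = 300, 500   # stand-in for ~70 minutes of shared data

# Responses to the SAME stimulus (e.g., a silent film) from both people.
ref_brain = rng.standard_normal((n_timepoints, n_voxels))
new_brain = rng.standard_normal((n_timepoints, n_voxels))

# Find the orthogonal transform that best maps the new brain's responses
# onto the reference brain's responses.
R, _ = orthogonal_procrustes(new_brain, ref_brain)

# Any future response from the new participant can now be converted into
# the reference space and fed to the already-trained decoder.
converted = new_brain @ R
error = np.linalg.norm(converted - ref_brain) / np.linalg.norm(ref_brain)
print(f"relative alignment error: {error:.3f}")
```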

The team went on to test the decoders with a short story that none of the participants had heard before. The decoder’s predictions were slightly more accurate for the original group of participants.

But for those who used the converters, the words the decoder predicted were still related to the ones from the short story.

“The really surprising and cool thing was that we can do this even not using language data,” Huth said.

“So we can have data that we collect just while somebody’s watching silent videos, and then we can use that to build this language decoder for their brain.”

Moving forward, the team plans to test the converter on people with aphasia and help them communicate the way they want to.

The study was published in the journal Current Biology.

