
Technology

Mind-reading device uses AI to turn brainwaves into audible speech

By Chelsea Whyte

24 April 2019


Signals from the brain can be converted into sounds by a computer

Mopic/Alamy

Electrodes on the brain have been used to translate brainwaves into words spoken by a computer, which could be useful in the future to help people who have lost the ability to speak.

When you speak, your brain sends signals from the motor cortex to the muscles in your jaw, lips and larynx to coordinate their movement and produce a sound.

“The brain translates the thoughts of what you want to say into movements of the vocal tract, and that’s what we’re trying to decode,” says Edward Chang at the University of California, San Francisco (UCSF). He and his colleagues created a two-step process to decode those thoughts using an array of electrodes surgically placed onto the part of the brain that controls movement, and a computer simulation of a vocal tract to reproduce the sounds of speech.

In their study, they worked with five participants who had electrodes on the surface of their motor cortex as part of their treatment for epilepsy. These people were asked to read 101 sentences aloud, containing words and phrases that covered all the sounds in English, while the team recorded the signals sent from the motor cortex during speech.

There are about 100 muscles used to produce speech, and they are controlled by a combination of neurons firing at once, so it’s not as simple as mapping signals from one electrode to one muscle to sort out what the brain is telling the mouth to do. So, the team trained an algorithm to reproduce the sound of a spoken word from the collection of signals sent to the lips, jaw and tongue.
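The two-step pipeline described above can be sketched in miniature. This is a toy illustration only, not the team's actual model (they trained recurrent neural networks on real cortical recordings): stage one maps simulated electrode signals to articulator positions, stage two maps those positions to acoustic features, and each stage is fitted separately by least squares on the simulated data.

```python
import numpy as np

rng = np.random.default_rng(0)

n_electrodes, n_articulators, n_acoustic = 16, 4, 8
T = 100  # time steps of simulated recording

# Hypothetical ground-truth linear mappings used only to generate toy data
A = rng.normal(size=(n_articulators, n_electrodes))  # neural -> articulatory
B = rng.normal(size=(n_acoustic, n_articulators))    # articulatory -> acoustic

neural = rng.normal(size=(n_electrodes, T))          # simulated electrode signals
articulatory = A @ neural                            # stage 1: vocal-tract movement
acoustic = B @ articulatory                          # stage 2: speech acoustics

# "Train" each stage separately by ordinary least squares
A_hat = np.linalg.lstsq(neural.T, articulatory.T, rcond=None)[0].T
B_hat = np.linalg.lstsq(articulatory.T, acoustic.T, rcond=None)[0].T

# Decode: neural signals -> estimated movement -> estimated acoustics
decoded = B_hat @ (A_hat @ neural)
print(np.allclose(decoded, acoustic, atol=1e-6))  # True (noiseless toy data)
```

With noiseless linear data the two stages are recovered exactly; the real problem is nonlinear and noisy, which is why the study needed neural networks and minutes of training speech per participant.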


Electrodes like this were used to record brain activity

UCSF

The team says “robust performance” was possible when training the device on just 25 minutes of speech, but the decoder improved with more data. For this study, they trained the decoder on each participant’s spoken language to produce audio from their brain signals.

Once they had generated audio files based on the signals, the team asked hundreds of native English speakers to listen to the output sentences and identify the words from a set of 10, 25 or 50 choices.

The listeners transcribed 43 per cent of the trials perfectly when they had 25 words to choose from, and 21 per cent perfectly when they had 50 choices. One listener provided a perfect transcription for 82 sentences with the smaller word list and 60 with the larger.
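The percentages quoted above come from counting the trials transcribed with no errors at all. A minimal sketch of that metric, using made-up transcripts (the real study used hundreds of listeners choosing from pools of 10, 25 or 50 candidate words):

```python
def percent_perfect(transcripts, references):
    """Percentage of trials where the transcription matches the reference exactly."""
    perfect = sum(t == r for t, r in zip(transcripts, references))
    return 100.0 * perfect / len(references)

# Hypothetical example trials; "rodent" for "rabbit" echoes the kind of
# near-miss the researchers describe below.
references  = ["the ship sailed", "bob ran home", "a rabbit hopped", "call me soon"]
transcripts = ["the ship sailed", "bob ran home", "a rodent hopped", "call me soon"]

print(percent_perfect(transcripts, references))  # 75.0
```

Note this all-or-nothing score understates intelligibility: a single wrong word fails the trial even when, as the researchers note, the gist survives.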

“Many of the mistaken words were similar in meaning to the sound of the original word – rodent for rabbit – therefore we found in many cases the gist of the sentence was able to be understood,” says team member Josh Chartier at UCSF. He says the artificial neural network did well at decoding fricatives – sounds like the ‘sh’ in ‘ship’ – but had a harder time with plosives, such as the ‘b’ sound in ‘bob’.

“It’s intelligible enough if you have some choice, but if you don’t have those choices, it might not be,” says Marc Slutzky at Northwestern University in Illinois. “To be fair, for an ultimate clinical application in a paralysed patient, if they can’t say anything, even having a vocabulary of a few hundred words could be a huge advance.”

That may be possible in the future, he says, as the team showed that an algorithm trained on one person鈥檚 speech output could be used to decode words from another participant.

The team also asked one person to mimic speech by moving their mouth without making any sounds. The system did not work as well as it did with spoken words, but they were still able to decode some intelligible speech from the mimed words.

Similar devices have been created that attempt to decode brain signals directly into sound, skipping the simulation of motion around the mouth and vocal tract, but it’s still unclear which approach is most effective.

This device doesn’t rely on the signals that create sound, but only on those that control motor functions, which are still sent even if someone is paralysed. So it could be useful for people who were once able to speak but lost that ability due to surgery or motor disorders such as ALS, in which people lose control of their muscles.

Nature
