
Scientists Create Speech from Brain Signals

San Francisco, California —

Scientists have found a way to use brain signals to make a computer speak the words a person is trying to say. Their method could one day help people who have lost the ability to speak.

Some illnesses or injuries can cause people to lose the ability to speak. In many cases, a person’s brain may be fine, but they are unable to control the parts of their body used for speaking.

An eye-tracking system that uses visible light, marking the center of the iris (red), the glint (green), and the output vector (blue). Systems like this let a person “type” by moving their eyes from letter to letter to spell out words.
(Source: Z22/Björn Markmann. [CC BY 3.0], via Wikimedia Commons.)

There are some ways for these people to communicate, but they are slow. One method allows a person to “type” by moving their eyes from letter to letter to spell out words. The top speed with this method is about 10 words per minute. Normal human speech is about 150 words per minute.

Much recent research has focused on a direct connection between someone’s brain and a computer. This is called a “Brain Computer Interface” (BCI). For many BCIs, people have wires attached to their brains. This allows scientists to track the electrical signals in the brain and connect them to computers. BCIs have led to some amazing discoveries, but they haven’t made communicating much faster.

Gopala Anumanchipalli, Ph.D., holding an array of intracranial electrodes of the type used to record brain activity in the study. A “Brain Computer Interface” (BCI) like this lets scientists track the electrical signals in the brain and connect them to computers.
(Source: UCSF.)

Scientists at the University of California, San Francisco (UCSF) decided to focus on the muscles people were trying to use when they spoke.

The UCSF scientists worked with a group of five people with epilepsy. Epilepsy is a condition where unusual electrical activity in the brain can cause problems with a person’s control of their body or senses. These people could speak normally, but already had temporary BCIs.

The scientists recorded the brain signals of the people as they read hundreds of sentences like “Is this seesaw safe?” The collection of sentences included all the sounds used in English.

There are about 100 muscles in the tongue, lips, jaw, and throat that are used for speaking. The scientists knew roughly what the shape of the mouth would have to be to make each sound. This allowed them to figure out how the brain signals controlled the speaking muscles.
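
To picture what that mapping might look like, here is a small, made-up Python table that pairs a few English sounds with rough mouth positions. The sound names and numbers are illustrative assumptions for this article, not data from the study.

```python
# Illustrative only: a tiny, made-up table of speech sounds and rough
# mouth positions (0.0 = fully open/relaxed, 1.0 = fully closed/raised).
# The real study tracked many more features across roughly 100 speaking muscles.
MOUTH_SHAPES = {
    "s":  {"lips_closed": 0.1, "tongue_raised": 0.9, "jaw_open": 0.2},
    "ah": {"lips_closed": 0.0, "tongue_raised": 0.1, "jaw_open": 0.9},
    "m":  {"lips_closed": 1.0, "tongue_raised": 0.2, "jaw_open": 0.1},
}

def describe(sound: str) -> str:
    """Return a short description of the mouth position for a sound."""
    shape = MOUTH_SHAPES[sound]
    return ", ".join(f"{feature}={value}" for feature, value in shape.items())

print(describe("ah"))  # lips_closed=0.0, tongue_raised=0.1, jaw_open=0.9
```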

A still from a video showing brain signals controlling an artificial vocal tract.
(Source: UCSF.)

With that information they could “decode” the brain signals to find out how the person was moving their mouth. Then the scientists were able to “synthesize” (create computer speech sounds), based on the position of the speaking muscles. Special computer programs helped in this process. The scientists were surprised at how close to real speech the synthesized speech was.
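
To show how the two steps fit together, here is a minimal Python sketch of the idea: brain recordings go in, estimated mouth movements come out of the first step, and computer speech comes out of the second. This is not the UCSF software; the class names, array sizes, and placeholder math are assumptions made only to show the order of the steps.

```python
# A minimal conceptual sketch (not the study's code) of the two-stage idea:
# brain signals -> estimated speaking-muscle movements -> synthesized speech.
import numpy as np

class ArticulationDecoder:
    """Stage 1: map recorded brain signals to speaking-muscle movements."""
    def decode(self, brain_signals: np.ndarray) -> np.ndarray:
        # In the real study a trained computer model does this step;
        # here a placeholder linear mapping stands in for it.
        weights = np.zeros((brain_signals.shape[1], 12))  # 12 pretend articulator features
        return brain_signals @ weights

class SpeechSynthesizer:
    """Stage 2: turn estimated mouth movements into an audio waveform."""
    def synthesize(self, movements: np.ndarray) -> np.ndarray:
        # Placeholder: a real synthesizer predicts sound from the vocal-tract shape.
        return np.zeros(len(movements) * 80)  # pretend audio samples

def brain_to_speech(brain_signals: np.ndarray) -> np.ndarray:
    movements = ArticulationDecoder().decode(brain_signals)  # "decode"
    audio = SpeechSynthesizer().synthesize(movements)        # "synthesize"
    return audio

# Example: 200 time steps of recordings from 64 electrodes (made-up sizes).
audio = brain_to_speech(np.random.randn(200, 64))
```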

As a test, one person didn’t make sounds when he read the sentences. He just moved his mouth as if he were saying them. The system was still able to re-create the sounds he was trying to make, based on his brain patterns.

The study was led by the laboratory of Dr. Edward Chang (above), who has been studying how the brain produces and analyzes speech for over a decade and hopes to develop a speech prosthesis to restore the voices of people who have lost speech to paralysis and other forms of neurological damage. Dr. Chang said that the study shows for the first time that it is possible to create “entire spoken sentences based on an individual’s brain activity.”
(Source: Steve Babuljak, UCSF.)

One important discovery is that even though each person’s brain signals are different, the muscle movements used to make each sound are the same for everyone. That will make it easier for a system like this to help many people.

There’s much to learn before a system like this could be used in everyday life, but it’s still exciting progress.
