
Scientists use AI to reconstruct brain activity into speech

While still in its early stages, this breakthrough could help people with diseases like ALS communicate more easily.

Illustration of a brain as a neural network. (Alfred Pasieka / Science Photo Library / Getty Images)

Thinking into a machine that understands you and speaks aloud on your behalf may become a reality in the near future.

Researchers from three teams obtained brain activity data either during brain tumor removal surgeries or through electrodes implanted in epilepsy patients' brains to pinpoint the origins of seizures, then trained AI models to translate that data into speech, Science Magazine reported Tuesday.

At this point, the computer models must be trained separately for each individual, because the neural signals associated with speech apparently differ from person to person. More dauntingly, capturing data this precise requires opening the skull.
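A minimal sketch of why decoders must be fit per subject (this is an illustration, not any team's actual method): if each brain encodes the same speech with a different mapping from electrodes to audio, a decoder fit on one subject's recordings fails on another's. All names and the simulated data below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

n_frames, n_electrodes, n_bins = 400, 64, 32
# Target audio, represented as spectrogram frames the decoder must reconstruct.
audio = rng.normal(size=(n_frames, n_bins))

# Simulate two subjects whose brains encode the same speech differently:
# each gets its own random electrodes-to-audio encoding plus recording noise.
neural = {}
for name in ("A", "B"):
    encoding = rng.normal(size=(n_electrodes, n_bins))  # hypothetical per-subject encoding
    neural[name] = audio @ encoding.T + 0.01 * rng.normal(size=(n_frames, n_electrodes))

def train_decoder(x, y):
    """Least-squares map from neural features to audio frames."""
    W, *_ = np.linalg.lstsq(x, y, rcond=None)
    return W

def mse(x, W):
    """Mean squared reconstruction error against the target audio."""
    return float(np.mean((x @ W - audio) ** 2))

# Fit a decoder on subject A only.
W_A = train_decoder(neural["A"], audio)

err_same = mse(neural["A"], W_A)   # applied to the subject it was fit on: low error
err_cross = mse(neural["B"], W_A)  # applied to a different subject: much higher error
```

Running this, `err_same` stays near the simulated noise floor while `err_cross` is orders of magnitude larger, mirroring why each patient needs an individually trained model.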

The reconstructed speech was accurate and intelligible between 40 and 80 percent of the time. However, none of the researchers has yet figured out how to decode imagined speech -- that is, the brain signals produced when people speak silently or hear a voice in their head.

One approach scientists may consider is to meet in the middle: the subject listens to the computer-generated speech and adjusts their thoughts to get the desired results, while the neural network is simultaneously trained to better understand how that person thinks.

If successful, this technology could give patients with amyotrophic lateral sclerosis (ALS) a new and easier way of communicating. Physicist Stephen Hawking, who had ALS and passed away last year, communicated by using his cheek muscles to control a speech-generating device.