Scientists have found a way to convert thoughts into speech

This ground-breaking tech could help give voice to the voiceless.

Mark Serrels Editorial Director

Speaking is great.

Sure, it's an effective method of communication, but what if it isn't a possibility? What if you have some form of paralysis or neurological impairment and literally can't speak?

Researchers at the University of California think they've made a crucial first step toward solving the problem. Using a "state-of-the-art brain-machine interface", neuroscientists at UC San Francisco say they have successfully converted brain waves into literal speech.

By placing electrodes on the heads of study participants, the researchers measured brain activity and fed that data into a "virtual vocal tract": an anatomically detailed computer simulation designed to mimic the movements of the lips, jaw, tongue and larynx. The end result: something that genuinely resembles human speech.

"For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual's brain activity," said Edward Chang, a professor of neurological surgery and the lead author on this study. "This is an exhilarating proof of principle that with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss."

The research, published in Nature, was led by speech scientist Gopala Anumanchipalli and Josh Chartier, a bioengineering graduate student. The potential benefits are clear: technology like this could grant certain people a second, or even a first, chance at vocalising their thoughts. The concept, however, is still in its infancy, and there's plenty of work left to do.

"We're quite good at synthesizing slower speech sounds like 'sh' and 'z' as well as maintaining the rhythms and intonations of speech and the speaker's gender and identity," explained Chartier, "but some of the more abrupt sounds like 'b's and 'p's get a bit fuzzy. Still, the levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what's currently available."

What's currently available is time-consuming and doesn't come close to reflecting the intimacy and clarity of live back-and-forth discourse. Technology like this could potentially make that sort of conversation a reality for people who currently can't speak.

"People who can't move their arms and legs have learned to control robotic limbs with their brains," Chartier said. "We are hopeful that one day people with speech disabilities will be able to learn to speak again using this brain-controlled artificial vocal tract."