How AI Restored the Voice of a Paralyzed Woman

At 30 years old, Ann suffered a sudden brainstem stroke that paralyzed her and took away her ability to speak. For years she communicated with a device that let her type using slight head movements. A collaboration between researchers at UC San Francisco and UC Berkeley has now given her new hope: a brain-computer interface (BCI) that can synthesize speech and facial expressions from her brain signals. The technology decodes those signals into text at nearly 80 words per minute, a significant improvement over her previous device’s roughly 14 words per minute.

A breakthrough brain implant and digital avatar allow a stroke survivor to speak with facial expressions for the first time in 18 years.

The technology involves a thin array of 253 electrodes placed on the brain’s surface, where it captures the signals that would otherwise drive the muscles of speech. To train the system, Ann repeated phrases until the AI learned to recognize the patterns of brain activity associated with individual speech sounds. Rather than recognizing whole words, the system decodes phonemes, the basic sound units of speech, which makes it both faster and more accurate.
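
To make the phoneme idea concrete, here is a minimal Python sketch of the last step of such a pipeline: turning a decoded stream of phonemes into words with a pronunciation lexicon. The lexicon, the phoneme labels, and the greedy matching are illustrative assumptions rather than the researchers’ actual decoder, which relies on neural networks trained on Ann’s brain recordings; the point is simply that a classifier only has to distinguish a few dozen phonemes instead of an entire vocabulary.

```python
# Toy sketch (not the researchers' code): why decoding phonemes instead of
# whole words keeps the problem small. A classifier only needs to tell apart
# a few dozen English phonemes, and a pronunciation lexicon then maps phoneme
# sequences to words. The tiny lexicon and phoneme labels below are made up.

# Hypothetical pronunciation lexicon: word -> phoneme sequence (ARPAbet-like).
LEXICON = {
    "hello": ("HH", "AH", "L", "OW"),
    "how":   ("HH", "AW"),
    "are":   ("AA", "R"),
    "you":   ("Y", "UW"),
}

def decode_words(phonemes):
    """Greedily segment a decoded phoneme stream into words.

    `phonemes` stands in for the classifier's output; a real system would
    use a language model rather than greedy longest-match segmentation.
    """
    words, i = [], 0
    while i < len(phonemes):
        match = None
        # Try the longest pronunciation that matches at position i.
        for word, pron in sorted(LEXICON.items(), key=lambda kv: -len(kv[1])):
            if tuple(phonemes[i:i + len(pron)]) == pron:
                match = (word, len(pron))
                break
        if match is None:          # unrecognized phoneme: skip one symbol
            i += 1
            continue
        words.append(match[0])
        i += match[1]
    return " ".join(words)

# Example: a phoneme stream the decoder might emit for "hello how are you".
stream = ["HH", "AH", "L", "OW", "HH", "AW", "AA", "R", "Y", "UW"]
print(decode_words(stream))        # -> "hello how are you"
```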

A distinctive feature of this technology is its ability to recreate Ann’s own voice, using AI and a recording from her wedding, so she can “speak” in a voice that sounds like her own. The team also animated a digital avatar of Ann that mimics facial expressions decoded from her brain signals. The ultimate goal is to make the system wireless, giving people like Ann greater independence and richer social interaction.
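
As a rough illustration of the avatar side, the sketch below shows how per-expression confidence scores (the kind a brain-signal classifier might produce) could be blended into a handful of avatar blendshape weights. The expression categories, blendshape names, and mixing weights are invented for this example; the actual system drives a much richer face-animation rig.

```python
# Illustrative sketch only: how decoded expression scores could drive a
# digital avatar's face. The categories, blendshape names, and per-expression
# weights are assumptions made for this example.
from dataclasses import dataclass

@dataclass
class AvatarFrame:
    jaw_open: float      # 0.0 (closed) .. 1.0 (fully open)
    smile: float
    brow_raise: float

def expression_to_blendshapes(scores):
    """Blend per-expression confidence scores into avatar blendshape weights."""
    # Hypothetical contribution of each decoded expression to each blendshape.
    basis = {
        "happy":     AvatarFrame(jaw_open=0.2, smile=0.9, brow_raise=0.3),
        "surprised": AvatarFrame(jaw_open=0.7, smile=0.1, brow_raise=0.9),
        "neutral":   AvatarFrame(jaw_open=0.1, smile=0.1, brow_raise=0.1),
    }
    total = sum(scores.values()) or 1.0
    frame = AvatarFrame(0.0, 0.0, 0.0)
    for name, weight in scores.items():
        b = basis[name]
        frame.jaw_open   += b.jaw_open   * weight / total
        frame.smile      += b.smile      * weight / total
        frame.brow_raise += b.brow_raise * weight / total
    return frame

# Example: the decoder is 80% confident the signal corresponds to "happy".
print(expression_to_blendshapes({"happy": 0.8, "neutral": 0.2}))
```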

Alex Patel

An accomplished editor specializing in AI articles. Born and raised in New York City, he graduated from Columbia University and has worked at tech companies in the US and Canada.
