Full-body paralysis has long been a cruel prison sentence for those afflicted. Many victims of paralysis have fully functioning brains but no physical way to communicate. Thankfully, researchers in California have engineered a device that allows a digital avatar to express patients’ thoughts verbally.
According to Neoscope, two teams of researchers have worked together to invent a machine capable of decoding a patient’s brain signals and turning them into speech and facial expressions, which are then expressed through a digital avatar of the patient. In other words, people with paralysis may soon have a way to communicate with loved ones far more precisely than the crude interface technology currently available to them allows.
“Our goal is to restore a full, embodied way of communicating, which is the most natural way for us to talk with others,” said Edward Chang, chair of neurological surgery at the University of California, San Francisco (UCSF). The brain implant developed by Chang’s team has, in tests, allowed patients to “talk” at a rate of up to 80 words a minute just by thinking—although the average is more like 60-70 words a minute.
That’s still a long way from the roughly 160 words per minute most humans naturally produce while speaking, but it’s an incredibly promising benchmark, more than three times the previous record for thought-to-speech translation.
The foundation of the implant is an AI algorithm that converts signals from the brain into text to be spoken by the avatar. The algorithm was trained by observing the electrical signals coming from a patient’s brain as they repeated certain words and phrases to themselves. Rather than whole words, the algorithm looks for distinct units of sound called phonemes.
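To make the phoneme idea concrete, here is a deliberately toy sketch: a nearest-centroid classifier over made-up “neural feature” vectors. The phoneme templates and frame values are invented for illustration; the actual UCSF system learns far richer representations from real cortical recordings with deep neural networks.

```python
import math

# Toy stand-in for learned phoneme "templates": each phoneme is associated
# with a centroid in a made-up 3-dimensional neural-feature space.
# (Hypothetical values -- the real system learns these from brain recordings.)
PHONEME_CENTROIDS = {
    "HH": (0.9, 0.1, 0.0),
    "AH": (0.1, 0.9, 0.1),
    "L":  (0.0, 0.2, 0.9),
    "OW": (0.5, 0.5, 0.5),
}

def classify_phoneme(features):
    """Return the phoneme whose centroid is nearest to the feature vector."""
    return min(PHONEME_CENTROIDS,
               key=lambda p: math.dist(features, PHONEME_CENTROIDS[p]))

def decode_sequence(frames):
    """Map a sequence of feature frames to phonemes, collapsing repeats."""
    phonemes = []
    for frame in frames:
        p = classify_phoneme(frame)
        if not phonemes or phonemes[-1] != p:
            phonemes.append(p)
    return phonemes

frames = [(0.85, 0.15, 0.05), (0.88, 0.10, 0.00),  # "HH"-like frames
          (0.15, 0.85, 0.10),                      # "AH"-like frame
          (0.05, 0.25, 0.85),                      # "L"-like frame
          (0.50, 0.45, 0.55)]                      # "OW"-like frame
print(decode_sequence(frames))  # -> ['HH', 'AH', 'L', 'OW'] ("hello"-ish)
```

Working at the phoneme level is what lets a system like this cover a 125,000-word vocabulary: it only needs to recognize a few dozen sound units, not every word individually.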
To help the digital avatar produce something close to actual speech, the researchers paired the implant with animation software driven by a custom AI that translates a patient’s speech signals into the facial movements that accompany each sound.
The researchers then combined that with a reconstruction of the patient’s own voice, allowing the animated doppelganger to look and sound the way the patient would if they were speaking themselves.
The foundation of the implant is an AI algorithm that converts signals from the brain into text to be spoken by the avatar.
The only drawback? The prototype implant has a vocabulary of 125,000 words but an error rate of close to 24%. While that doesn’t diminish the groundbreaking work the researchers have achieved, it does mean that in its current state, the technology has the potential to frustrate patients attempting to use it to communicate.
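For context, speech-decoding error of this kind is conventionally reported as word error rate: the word-level edit distance between the decoded sentence and the intended one, divided by the length of the intended sentence. A minimal sketch (the example sentence is invented, not from the study):

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution out of six intended words:
print(word_error_rate("i want to see my family", "i want to sea my family"))
```

By this measure, an error rate near 24% means roughly one word in four comes out wrong, which makes clear why further refinement matters for everyday conversation.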
Fortunately, now that the method has been shown to work in human subjects, the researchers can focus on fine-tuning the process and reducing the error rate.
With all of the fears surrounding AI in recent years, it’s refreshing to see the technology put to good use. This new method of allowing people with paralysis to communicate through digital avatars wouldn’t have been possible without artificial intelligence, and it proves that, in the medical field at least, AI can be used to better society as a whole.