Most experimental brain-computer interfaces (BCIs) used to synthesize human speech have been implanted in the brain areas that translate the intention to speak into the muscle movements that produce it. A patient has to physically attempt to speak for these implants to work, which is exhausting for severely paralyzed people.
To get around this, researchers at Stanford University built a BCI that can decode inner speech, the kind we engage in during silent reading and use for all our internal monologues. The problem is that those inner monologues often involve things we don't want others to hear. To keep their BCI from spilling the patients' most private thoughts, the researchers designed a first-of-its-kind "mental privacy" safeguard.
Overlapping signals
The reason nearly all neural prostheses used for speech are designed to decode attempted speech is that researchers first tried the same approach that worked for controlling artificial limbs: recording from the area of the brain responsible for controlling muscles. "Attempted movements produced a very strong signal, and we thought it could also be used for speech," says Benyamin Meschede Abramovich Krasa, a neuroscientist at Stanford University who, along with Erin M. Kunz, was a co-lead author of the study.
The vocal tract, after all, is driven by muscle movements. Fishing the signals that engage those muscles out of the brain seemed like a good way to bypass the challenge of decoding higher-level language processing, which we still don't fully understand.
But for people with ALS or tetraplegia, attempting to speak takes real effort. That's why Krasa's team changed course and tried decoding inner, or silent, speech, which never engages the muscles at all.
The work started with collecting data to train AI algorithms meant to take the neural signals involved in inner speech and translate them into words. The team worked with four participants, each almost completely paralyzed, who had microelectrode arrays implanted in slightly different areas of the motor cortex. They were given tasks that involved listening to recorded words or reading words silently.
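To make that training setup concrete, here is a minimal, hypothetical sketch of what such a decoder could look like: neural activity from each silent-reading trial is reduced to a feature vector, and a classifier learns to map those vectors to the cued words. This is not the Stanford team's actual pipeline, and every name, shape, and parameter below is an assumption made purely for illustration:

```python
# Toy sketch only: map simulated "neural" feature vectors to a tiny word
# vocabulary. The real system is far more sophisticated; channel counts,
# bin sizes, and the vocabulary here are all hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

N_CHANNELS = 96    # electrodes per microelectrode array (assumed)
N_TIME_BINS = 20   # 20 ms bins over a 400 ms trial window (assumed)
VOCAB = ["yes", "no", "water", "hello", "help"]

def simulate_trial(word_idx: int) -> np.ndarray:
    """Fake neural features: a word-specific pattern plus noise."""
    template = np.sin(
        np.linspace(0, np.pi * (word_idx + 1), N_CHANNELS * N_TIME_BINS)
    )
    return template + rng.normal(scale=0.8, size=template.shape)

# Build a labeled dataset of (features, word) pairs, as if recorded while
# a participant silently read each cued word 200 times.
X = np.stack(
    [simulate_trial(w) for w in range(len(VOCAB)) for _ in range(200)]
)
y = np.repeat(np.arange(len(VOCAB)), 200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# A simple linear classifier stands in for the team's decoder.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2%}")
print("decoded word:", VOCAB[clf.predict(X_test[:1])[0]])
```

In a real system, the features would be spike counts or spike-band power recorded from the implanted arrays, and a simple classifier like this would give way to a sequence model paired with a language model that pieces individual decoding decisions into fluent text.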