Elon Musk promised Neuralink would bring superhuman abilities and minds merged with AI. Then he fueled a runaway hype train for his brain implant technology, which has racked up a grisly record in monkey trials and some success with human subjects. But for all of the hype, he's still further from his goal than he is from Mars. And that's because his relentless ambition is once again hitting the wall of scientific reality.
The heart of the issue is how brain-computer interfaces (BCIs) translate thought into results. Neuralink’s products have all been brain-to-cursor interfaces, which allow patients to control a mouse with their minds. But Neuralink’s competitors have raced ahead with newer BCIs that translate thought directly to speech. Turns out that’s a more promising approach — enough to convince Neuralink to quietly invest in BCIs that focus on speech.
Musk has a strong record of overpromising and underdelivering, and his biggest quagmire may end up being his pursuit of a grand, unified vision of a human-AI-hybrid technology. When it comes to the human mind, he’s underestimated and oversimplified the steps it will take to make meaningful brain-computer interfaces a reality for patients who really need them.
BCIs are similar, but there’s a big difference
All BCIs connect a brain to a computer with wires or Bluetooth. They track the tiny bursts of electricity that neurons use to talk to each other, then try to make sense of those signals to predict what the user wants to do next. The key difference between BCIs is the kind of behavior they're trying to decode.
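To make that pipeline concrete, here is a minimal sketch in Python. Everything in it, from the spike threshold to the decoder weights, is a hypothetical stand-in; real implants record from many more electrodes and use far more sophisticated models than a linear readout.

```python
import numpy as np

def detect_spikes(voltage, threshold=-3.5):
    """Flag samples where the signal crosses a negative threshold
    (in standard deviations) -- a crude stand-in for spike detection."""
    z = (voltage - voltage.mean()) / voltage.std()
    return z < threshold

def firing_rates(spikes, bin_size=50):
    """Count threshold crossings in fixed-width bins: the 'make sense
    of the bursts' step that turns raw electricity into features."""
    n_bins = len(spikes) // bin_size
    return spikes[: n_bins * bin_size].reshape(n_bins, bin_size).sum(axis=1)

rng = np.random.default_rng(0)

# Simulated voltage traces from 8 hypothetical electrodes.
rates = np.stack(
    [firing_rates(detect_spikes(rng.normal(size=2000))) for _ in range(8)]
)  # shape: (8 channels, 40 time bins)

# Hypothetical linear decoder: in a real system these weights are learned
# during calibration sessions; here they are random placeholders.
weights = rng.normal(size=(2, 8))
cursor_velocity = weights @ rates  # x and y velocity for each time bin
print(cursor_velocity.shape)       # (2, 40)
```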
A motor BCI, like the one Neuralink has been building, helps users guide a cursor across a computer screen. Speech BCIs, by contrast, translate brain signals into sounds and the small building blocks of words called phonemes. In the span of five years, speech BCIs have reached milestones that rival the achievements of the two-decade-old motor BCI technology. A 2019 study reported that a speech BCI could predict what a person planned to say when given only a few options to choose from. By 2024, a speech BCI let a 45-year-old ALS patient communicate at 97 percent accuracy.
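As a toy illustration of that phoneme step, the sketch below takes made-up per-frame phoneme probabilities, the kind a trained classifier might output, and assembles them into a word. The phoneme set, probabilities, and one-entry lexicon are all invented; real systems decode continuously and lean on a language model rather than a lookup table.

```python
import numpy as np

# Hypothetical classifier output: one probability distribution over a tiny
# phoneme set per time frame ("_" stands for silence).
PHONEMES = ["G", "UH", "D", "_"]
frame_probs = np.array([
    [0.85, 0.05, 0.05, 0.05],  # frame 1: most likely "G"
    [0.10, 0.80, 0.05, 0.05],  # frame 2: most likely "UH"
    [0.05, 0.10, 0.80, 0.05],  # frame 3: most likely "D"
    [0.05, 0.05, 0.10, 0.80],  # frame 4: most likely silence
])

# Greedy decoding: pick the likeliest phoneme per frame, drop the silences.
decoded = [PHONEMES[i] for i in frame_probs.argmax(axis=1) if PHONEMES[i] != "_"]

# Stand-in for a language model: a one-word lexicon lookup.
LEXICON = {("G", "UH", "D"): "good"}
print(LEXICON.get(tuple(decoded), "?"))  # -> good
```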
In November 2025, Neuralink patient Brad Smith showed The Verge his motor BCI. He thought about moving his arm, which ALS had left him unable to move, and the computer cursor moved across the screen instead. For speech BCIs, it's words or chunks of words: patients think about speaking the word “good,” for example, and the word appears on the screen. It is not mind reading; it is detecting what they're trying to say.
Here is the catch: Both versions are technically motor BCIs. The underlying neuroscience is the same. If you move your pinky, your brain sends signals down to the muscles in that finger. If you talk, your brain sends similar signals down to your tongue and the other muscles that help you form sounds. The BCI detects which muscle the user is thinking about moving, whether tongue or finger, and predicts what they're trying to do or say.
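A short sketch makes the point: the front end is identical, and only the final readout differs. Both “heads” below use made-up random weights purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
features = rng.normal(size=64)  # binned firing rates from motor-cortex electrodes

# Cursor head (a motor BCI like Neuralink's): regress the features
# onto a 2D cursor velocity.
W_cursor = rng.normal(size=(2, 64))
velocity = W_cursor @ features

# Speech head: classify the very same kind of features into phoneme
# probabilities (roughly 40 phonemes in English).
W_speech = rng.normal(size=(40, 64))
logits = W_speech @ features
phoneme_probs = np.exp(logits - logits.max())
phoneme_probs /= phoneme_probs.sum()  # softmax

print(velocity.shape, phoneme_probs.shape)  # (2,) (40,)
```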
A pivot forward