Last month, Waymo released new safety statistics, boasting that its fleet of vehicles had covered 96 million miles as of June. And the results from all those miles traveled are, on their face, staggering. The company claims that its driverless vehicles are 91 percent less likely to be involved in crashes resulting in serious injury compared to an "average human driver of the same distance."

That doesn't mean there have been no incidents. Between mid-February and mid-August of this year, the Alphabet-owned company's vehicles were involved in a total of 45 crashes that were reported to the government. However, as Understanding AI points out, a "large majority of these crashes were clearly not Waymo's fault, including 24 crashes where the Waymo wasn't moving at all and another seven where the Waymo was rear-ended by another vehicle."

Tellingly, three of them involved a Waymo passenger flinging open their door without looking, injuring a passing cyclist or scooter rider. Another involved a Waymo having one of its wheels fall off. In other words, in many cases the accidents weren't the fault of Waymo's self-driving tech at all.

In sum, the data suggests that Waymo's robotaxis are astonishingly safe, perhaps far safer than we give them credit for.

The Atlantic, in fact, draws an interesting contrast: in many ways, Waymo's approach to AI is proving to be quite a bit safer than the OpenAI-style chatbots taking the world by storm, which have been spreading disinformation in droves, trapping users in dangerous mental health crises, and even encouraging teenagers to kill themselves.

"I like to tell people that if Waymo worked as well as ChatGPT, they'd be dead," University of South Carolina School of Law self-driving-car expert Bryant Walker Smith told the magazine.

Waymo's safety record also stands out compared to its direct competition.
Tesla, for instance, which has been attempting to encroach on Waymo's turf with its own robotaxis in Austin and San Francisco, has long garnered a reputation for deploying flawed driver assistance software that has been linked to dozens of deaths and hundreds of collisions. Its robotaxi launch has already continued that pattern: while the company reported only three crashes to the National Highway Traffic Safety Administration (NHTSA) in its first month, it was operating a fleet of just "ten to 20" vehicles at the time, according to Musk, covering only 7,000 total miles traveled. For that little driving, three crashes is a considerable rate.

General Motors' Cruise didn't fare much better. The former Waymo competitor imploded after one of its vehicles trapped a woman underneath it and dragged her along for 20 feet. GM ultimately pulled the plug in late 2024, and has since cut most of the unit's staff.

"Waymo has moved the slowest and the most deliberately," Smith told The Atlantic.

At the same time, there are reasons to be skeptical of Waymo's safety statistics. For one, its vehicles are carefully maintained and far newer than the average car on public roads. There have also been numerous instances of its fleet acting erratically, from failing to follow a construction worker's hand signals to smashing into delivery robots.

A major caveat to Waymo's safety claims is that only Waymo employees and a "growing number of guests" can ride on highways, as company spokesperson Chris Bonelli told The Atlantic. After all, driving at highway speeds comes with novel risks compared to navigating city streets.

Waymo's vehicles are also significantly more expensive to run than human-operated taxis, undermining the company's efforts to save costs on human labor. They've also been shown to take longer to reach their destinations.
Still, it'd be nice to see more AI companies behaving like Waymo: with an utmost commitment to safety, even if that slows down how quickly their products can be deployed and monetized.

More on Waymo: Family Baffled By Waymo Robotaxis Constantly Hanging Out in Front of Their House