
Why Sierra the Supercomputer Had to Die


Things Fall Apart

Rolling out this week: WIRED’s journalistic commissions on technological decommissions—from broken-down electric cars to falling-down space stations.

According to the TOP500, which ranks these mega-machines, Sierra was once the second-fastest supercomputer in the world. She was conceived in a Chicago hotel conference room more than a decade ago, at a technical discussion for officials from America’s national labs. The ultimate designer baby, Sierra was assembled from thousands of IBM Power9 CPUs and Nvidia Volta V100 GPUs—a daring, offbeat architecture for Livermore at the time.

Like other supercomputers, Sierra was girthy. She was composed of thousands of compute nodes, stored one on top of another in racks—basically cabinets—that held up her processing innards. She had 240 of these racks, spread across roughly 7,000 square feet. All of this was needed to support her life’s main occupation: performing specialized, super-high-security simulations for the National Nuclear Security Administration. At the time of her death sentence, her processing power ranked a still-respectable 23rd in the world.

Now, why did Sierra have to die? After all, an enormous amount of time and resources went into Frankensteining her together. The lab’s leadership won’t confirm how much she cost to build, but she was expensive—the government spent at least $325 million on her and her fraternal twin, a supercomputer called Summit at Oak Ridge National Laboratory in Tennessee. (Summit was decommissioned in late 2024.) Also, she still totally worked. “At the end of the life of a machine, you could think, Oh, we have all these sunk costs. You should just keep running the machine forever,” says John Allen, the lab’s organizational information security officer. But that’s wrong. “Its good and faithful service is over, and we have to move on.”