A year ago today, AI giant Anthropic’s Chief Information Security Officer, Jason Clinton, made a bold pronouncement: within the next year, AI-powered employees would begin traipsing around the virtual innards of big companies around the world.
Speaking to Axios in 2025, Clinton said these AI entities would have their own “memories,” as well as specialized roles within companies, which of course would come with a company ID number and login credentials.
“In that world, there are so many problems that we haven’t solved yet from a security perspective that we need to solve,” the CISO told Axios.
Clinton’s forecast was obviously meant as a warning to the information security world. But as the past year has shown, it was also dead wrong, and Clinton is far from the only tech executive to “warn” us about the rise of autonomous AI.
Today, agentic AI — the buzz term for Clinton’s AI-powered virtual employees — is struggling to rise to the challenge, as critical security failures and pointless PR stunts pile up. One study that surfaced earlier this year argued that AI agents could “never” be reliable or accurate tools. If true, that means their ability to deliver productive returns to the broader economy has been, and continues to be, vastly overstated, or dare we say: overhyped.
The CISO’s prognostication fits an emerging pattern at Anthropic. In March of last year, Anthropic CEO Dario Amodei predicted that within six months AI would be “writing 90 percent of code.” Six months later, it was clear that bold prediction had failed utterly, as studies began to show AI coding tools actually slow software engineers down with their often shoddy output.
Given the glaring financial incentive these executives have to glaze their AIs’ near-term trajectory, it’s clear that the tech industry elite are not good-faith messengers with valuable insights to share, but desperate PR men scrambling to keep the AI train chugging along while any profits on massive investments remain a distant fantasy.
More on Anthropic: Claude Leak Shows That Anthropic Is Tracking Users’ Vulgar Language and Deems Them “Negative”