Alright, pal, you wanna keep reading? Why don’t you tell me which of these pictures does not have a stop sign in it?
According to Cloudflare, nearly one-third of all internet traffic is now bots. You will never directly interact with most of those bots; they crawl the web indexing websites, performing specific tasks, or, increasingly, collecting data to train AI models. But it's the bots you can see that have OpenAI CEO Sam Altman, among others, questioning (albeit with seemingly zero remorse or consideration of any alternative) whether he and his cohort are destroying the internet.
Last week, Altman responded to a post that showed lots of comments in the subreddit r/ClaudeCode, a Reddit community built around Anthropic’s Claude Code tool, praising OpenAI’s Codex, an AI coding agent. “i have had the strangest experience reading this: i assume its all fake/bots, even though in this case i know codex growth is really strong and the trend here is real,” he wrote, very subtly acknowledging how great his own product is.
While Altman suggested some of this may be people adopting the quirks and word choices of chatbots, among other factors, he did acknowledge that “the net effect is somehow AI twitter/AI reddit feels very fake in a way it really didnt a year or two ago.” It follows an observation he made earlier this month, when he said, “i never took the dead internet theory that seriously but it seems like there are really a lot of LLM-run twitter accounts now.”
“Dead internet theory” is the idea that much of the content online is created, interacted with, and fed to us by bots. If you believe the conspiratorial origins of the theory, which is thought to have first cropped up on imageboards around 2021, it’s an effort to control human behavior. If you’re slightly less blackpilled about the whole thing, then perhaps you’re more into the idea that it’s primarily driven by the monetary incentives of the internet, where engagement—no matter how low value it may be—can generate revenue.
Interestingly, the theory appeared pre-ChatGPT, suggesting the bot problem was bad before LLMs became widely accessible. Even then, there was evidence that a ton of internet traffic came from bots (some estimates place it over 50%, which is well above Cloudflare’s measurements), and there were concerns about “The Inversion,” or the point where fraud detection systems mistake bot behavior for human and vice versa.
But now, at a time when companies like OpenAI are making publicly available agents that can navigate the web like a person and perform tasks on users’ behalf, the level of authenticity online is likely to plummet even further. Altman seems to see it, but hasn’t suggested actually, you know, *doing* anything about it.
It’s not dissimilar from a situation earlier this year in which Altman warned that AI tools have “fully defeated” most authentication services that humans rely on to verify their identity and said that, as a result, scams are likely going to explode. Just like Altman’s observation about inauthentic behavior on social media, he seemed to have zero interest in slowing his company’s activity to stop the erosion of the digital systems we count on, despite seemingly being able to recognize the pitfalls.
Why? Well, how about another conspiracy theory? Perhaps it’s because Altman has another company he’d like to pitch as the solution for it all: his bizarre “World” identity-verification system/crypto scheme that requires people to scan their eyeballs to prove they are human. He’s already broached a potential deal with Reddit to verify its users as authentic—noteworthy considering he’s now called out bot activity on the platform. The faster we get pushed to the Dead Internet Theory cliff, the more incentive companies have to call on Altman’s other firm to save us all. Call it the New Internet Order.