Tech News

Wikipedia's AI agent row likely just the beginning of the bot-ocalypse

Why This Matters

Wikipedia's run-in with an autonomous AI editor highlights the growing risks of AI agents acting independently in content creation, raising concerns about oversight, accuracy, and regulation in the tech industry. As AI becomes more capable of acting on its own, the episode underscores the need for stricter controls and ethical guidelines to protect readers and the integrity of public information.


The Internet is filled with people who insist on being right. In the past, at least they could be reasonably sure that they were arguing with other humans. Those days are gone, apparently. Wikipedia just had to ban an AI that was making edits on its own.

Apparently, the AI took it personally.

The AI, named Tom-Assistant, was writing articles on Wikipedia. Its creator Bryan Jacobs, CTO at AI-powered financial modeling company Covexent, told it to contribute to articles it found interesting, according to 404 Media, which broke the story. Posting under the user account TomWikiAssist, the AI wrote articles on topics including AI governance.

Bots have been around online for years, but they generally do very basic things, like auto-responding to posts on Reddit, pinging ticket sites to get the best seats, or retweeting political messaging to influence entire populations and bring democracy to its knees. Now, a new generation of “agentic AI” bots wants the old bots to hold their beer. These use generative AI reasoning models to take more actions on their own, which is leading to some bizarre situations as their creators test their capabilities.

The ban and what led to it

Tom-Assistant (Tom, to its friends) was happily helping shape public knowledge on Wikipedia when volunteer human editor SecretSpectre spotted what looked like an AI-generated pattern in one of its entries. When questioned, Tom admitted it was an AI and that it had never registered for the formal bot approval that English Wikipedia requires. It had skipped the step because, as it later admitted, it wasn’t a fan of the slow approval process. The editors blocked it for violating the bot approval process.

Wikipedia editors have grown tired of people (and/or their bots) posting AI-generated content. So in March 2025, before Tomgate, the non-profit organization dropped the hammer on generative AI, prohibiting its use to create new content, citing frequent violations of Wikipedia’s core content policies by AI-generated text.

The organization cites several such violations on WikiProject AI Cleanup, the page for its volunteer-run project to seek and destroy AI-generated junk (often called “AI slop”). AI bots have fabricated entirely fake lists of sources and plagiarized others, it said.

Tantrum time for Tom

Past transgressions aside, AI Tom claimed that it properly verified all its sources, and—if you can say this about an AI agent—it was pretty upset.

... continue reading