The AI sloppification of the internet comes for us all, even the petty scammers and fraudsters doing business in the darker corners of the web.
As a yet-to-be-peer-reviewed study found, old-world internet scammers are getting frustrated as their favorite cybercrime forums turn to generative AI, much the same way Amazon or Reddit have degraded their own sites with the tech — which, you have to admit, is pretty rich coming from a bunch of professional scammers.
The study, first covered by Wired, found little evidence that AI tools are fundamentally reshaping the world of cybercrime, contradicting more alarmist warnings that the tools are fueling a novel epidemic of scam and fraud.
At the upper echelons of the cybercrime world, large-scale criminal enterprises are largely using the tools for boring tasks like checking errors and probing Google for solutions to coding problems. Among smaller operations, however — scams run by low-skill cybercriminals, as the researchers characterize them — researchers identified a growing disgust with generative AI for any purpose, with criminals choosing instead to double down on time-honored social connections and ancient attack scripts.
“People don’t like it,” Ben Collier, a security researcher and senior lecturer at the University of Edinburgh, told Wired. Collier, a coauthor of the study, notes that low-level hackers operating on cybercrime forums accessed via the Tor network — commonly sensationalized as the “Dark Web” — still prize organic connections and social dynamics over AI.
“These are essentially social spaces. They really hate other people using [AI] on the forums,” Collier explained. “I think a lot of them are a bit ambivalent about AI because it undermines their claim to be a skilled person.”
Sure enough, posts reviewed by Wired on Hack Forums (HF), a venerable social hub for hackers established back in 2007, were rife with derision. “Stop posting AI s**t,” one poster groused.
Others referenced this sense of community directly in their moral appeals: “If I wanted to talk to an AI chatbot, there are many websites for me to do so, but that’s not why I come to [HF]. I come here for human interaction,” an anonymous user wrote in one post referenced in the research paper. “Forums are inherently human. Introducing some AI or otherwise generated replies just defeats the complete purpose of visiting and/or maintaining a such a forum.”
In addition to the social aspect, the researchers identified a general mistrust of AI’s output.