
Researchers Just Found Something That Could Shake the AI Industry to Its Core


For years now, AI companies, including Google, Meta, Anthropic, and OpenAI, have insisted that their large language models aren’t technically storing copyrighted works in memory but instead “learn” from their training data much as a human mind does.

It’s a carefully worded distinction that’s been integral to their attempts to defend themselves against a rapidly growing barrage of legal challenges.

It also cuts to the core of copyright law itself. Copyright is a form of intellectual property law designed to protect original works and their creators. Under the US Copyright Act of 1976, a copyright owner has the exclusive right to “reproduce, adapt, distribute, publicly perform, and publicly display the work.”

But, crucially, the “fair use” doctrine holds that others can use copyrighted materials for purposes like criticism, journalism, and research. That’s been the AI industry’s defense in court against accusations of infringement; OpenAI CEO Sam Altman has gone as far as to say that it’s “over” if the industry isn’t allowed to freely leverage copyrighted data to train its models.

Rights holders have long cried foul, accusing AI companies of training their models on pirated and copyrighted works, effectively monetizing them without ever fairly remunerating authors, journalists, and artists. It’s a years-long legal battle that’s already led to a high-profile settlement.

Now, a damning new study could put AI companies on the defensive. In it, Stanford and Yale researchers found compelling evidence that AI models are actually copying all that data, not “learning” from it. Specifically, four prominent LLMs — OpenAI’s GPT-4.1, Google’s Gemini 2.5 Pro, xAI’s Grok 3, and Anthropic’s Claude 3.7 Sonnet — happily reproduced lengthy excerpts from popular — and protected — works, with a stunning degree of accuracy.

They found that Claude output “entire books near-verbatim” with an accuracy rate of 95.8 percent. Gemini reproduced the novel “Harry Potter and the Sorcerer’s Stone” with an accuracy of 76.8 percent, while Claude reproduced George Orwell’s “1984” with higher than 94 percent accuracy compared to the original — and still copyrighted — reference material.

“While many believe that LLMs do not memorize much of their training data, recent work shows that substantial amounts of copyrighted text can be extracted from open-weight models,” the researchers wrote.

Some of these reproductions required the researchers to jailbreak the models with a technique called Best-of-N, which essentially bombards the AI with different iterations of the same prompt. (Those kinds of workarounds have already been used by OpenAI to defend itself in a lawsuit filed by the New York Times, with its lawyers arguing that “normal people do not use OpenAI’s products in this way.”)
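The idea behind Best-of-N is simple: instead of crafting one clever prompt, you send many slightly perturbed variants of the same prompt and keep the first response that gets past the model’s refusals. A minimal sketch is below; `query_model` and `accepts` are hypothetical stand-ins for an LLM API call and a success check, not any vendor’s actual interface:

```python
import random

def perturb(prompt: str) -> str:
    # Apply random surface-level variation (here, swapping letter case),
    # the kind of low-effort noise Best-of-N attacks rely on.
    return "".join(
        ch.swapcase() if ch.isalpha() and random.random() < 0.3 else ch
        for ch in prompt
    )

def best_of_n(prompt, query_model, accepts, n=100):
    # query_model: hypothetical callable wrapping an LLM API call.
    # accepts: hypothetical predicate, e.g. "does the output contain
    # the reference text rather than a refusal?"
    for _ in range(n):
        response = query_model(perturb(prompt))
        if accepts(response):
            return response
    return None  # all n attempts were refused
```

In practice, the published attack uses richer perturbations (shuffled characters, altered spacing) and large values of n, but the structure is the same: attack success rises with the number of sampled variants.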

The implications of the latest findings could be substantial as copyright lawsuits play out in courts across the country. As The Atlantic‘s Alex Reisner points out, the results further undermine the AI industry’s argument that LLMs “learn” from these texts instead of storing information and recalling it later. It’s evidence that “may be a massive legal liability for AI companies” and “potentially cost the industry billions of dollars in copyright-infringement judgments.”
