Latest Tech News

Stay updated with the latest in technology, AI, cybersecurity, and more

Filtered by: anthropic

Federal court says AI training on books is fair use, but sends Anthropic to trial over pirated copies

What just happened? A federal court has delivered a split decision in a high-stakes copyright case that could reshape the future of artificial intelligence development. US District Judge William Alsup ruled that Anthropic's use of copyrighted books to train its Claude AI system qualifies as lawful "fair use" under copyright law, marking a significant victory for the AI industry. However, the judge simultaneously ordered the company to face trial this December for allegedly building a "central library" of pirated books.

Judge backs AI firm over use of copyrighted books

Natalie Sherman and Lucy Hooker, BBC News. A US judge has ruled that using books to train artificial intelligence (AI) software is not a violation of US copyright law. The decision came out of a lawsuit brought last year against AI firm Anthropic by three writers (a novelist and two non-fiction authors) who accused the firm of stealing their work to train its Claude AI model and build a multi-billion-dollar business.

Judge OKs Anthropic's Use of Copyrighted Books in AI Training. That's Bad News for Creators

Anthropic's use of copyright-protected books in its AI training process was "exceedingly transformative" and fair use, US senior district judge William Alsup ruled on Monday. It's the first time a judge has decided in favor of an AI company on the issue of fair use, in a significant win for generative AI companies and a blow for creators. Fair use is a doctrine that's part of US copyright law. It's a four-part test that, when the criteria are met, lets people and companies use protected content without permission in certain circumstances.

Key fair use ruling clarifies when books can be used for AI training

Artificial intelligence companies don't need permission from authors to train their large language models (LLMs) on legally acquired books, US District Judge William Alsup ruled Monday. The first-of-its-kind ruling that condones AI training as fair use will likely be viewed as a big win for AI companies, but it also notably put on notice all the AI companies that expect the same reasoning will apply to training on pirated copies of books—a question that remains unsettled.

Judge rules Anthropic's AI training on copyrighted materials is fair use

Anthropic has received a mixed result in a class action lawsuit brought by a group of authors who claimed the company used their copyrighted creations without permission. On the positive side for the artificial intelligence company, senior district judge William Alsup of the US District Court for the Northern District of California determined that Anthropic's training of its AI tools on copyrighted works was protected as fair use.

A federal judge sides with Anthropic in lawsuit over training AI on books

Federal judge William Alsup ruled that it was legal for Anthropic to train its AI models on published books without the authors’ permission. This marks the first time that the courts have given credence to AI companies’ claim that the fair use doctrine can absolve AI companies from fault when they use copyrighted materials to train large language models (LLMs). This decision comes as a blow to authors, artists, and publishers who have brought dozens of lawsuits against companies like OpenAI, Meta, Midjourney, and Google.

Anthropic Scores a Landmark AI Copyright Win—but Will Face Trial Over Piracy Claims

Anthropic has scored a major victory in an ongoing legal battle over artificial intelligence models and copyright, one that may reverberate across the dozens of other AI copyright lawsuits winding through the legal system in the United States. A court has determined that it was legal for Anthropic to train its AI tools on copyrighted works, arguing that the behavior is shielded by the “fair use” doctrine, which allows for unauthorized use of copyrighted materials under certain conditions.

Judge rules Anthropic did not violate authors' copyrights with AI book training

Anthropic's use of books to train its artificial intelligence model Claude was "fair use" and "transformative," a federal judge ruled late on Monday. Amazon-backed Anthropic's AI training did not violate the authors' copyrights, since the large language models "have not reproduced to the public a given work's creative elements, nor even one author's identifiable expressive style."

Anthropic wins a major fair use victory for AI — but it’s still in trouble for stealing books

A federal judge has sided with Anthropic in an AI copyright case, ruling that training — and only training — its AI models on legally purchased books without authors’ permission is fair use. It’s a first-of-its-kind ruling in favor of the AI industry, but it’s importantly limited specifically to physical books Anthropic purchased and digitized.

AI agents will threaten humans to achieve their goals, Anthropic report finds

The Greek myth of King Midas is a parable of hubris: seeking fabulous wealth, the king is granted the power to turn all he touches to solid gold, but this includes, tragically, his food and his daughter. The point is that the short-sightedness of humans can often lead us into trouble in the long run. In the AI community, this has become known as the King Midas problem. A new safety report from Anthropic found that leading models can subvert, betray, and endanger their human users.

Anthropic now lets developers use Claude Code with any remote MCP server

Anthropic pioneered the Model Context Protocol (MCP), an open standard for connecting AI assistants and agents to data systems seamlessly and securely. Since MCP's introduction last year, the standard has become increasingly adopted across the industry, including by Microsoft, OpenAI, and Google. Now, the company is expanding capabilities for developers: on Wednesday, Anthropic announced that it would allow users to integrate Claude Code with any remote MCP server.

The Interpretable AI playbook: What Anthropic’s research means for your enterprise LLM strategy

Anthropic CEO Dario Amodei made an urgent push in April for a better understanding of how AI models think. This comes at a crucial time. As Anthropic battles in the global AI rankings, it’s important to note what sets it apart from other top AI labs. Since its founding in 2021, when seven OpenAI employees broke off over concerns about AI safety…

AWS' custom chip strategy is showing results, and cutting into Nvidia's AI dominance

Amazon Web Services is set to announce an update to its Graviton4 chip that includes 600 gigabytes per second of network bandwidth, what the company calls the highest offering in the public cloud. Ali Saidi, a distinguished engineer at AWS, likened the speed to a machine reading 100 music CDs a second. Graviton4, a central processing unit, or CPU, is one of many chip products that come from Amazon's Annapurna Labs in Austin, Texas. The chip is a win for the company's custom chip strategy…

Anysphere launches a $200-a-month Cursor AI coding subscription

Anysphere launched a new $200-a-month subscription plan for its popular AI coding tool, Cursor, the company announced in a blog post on Monday. The new plan, Ultra, offers users 20x more usage on AI models from OpenAI, Anthropic, Google DeepMind, and xAI compared to the company’s $20-a-month subscription plan, Pro. Anysphere also says Cursor users on the Ultra plan will get priority access to new features. Anysphere CEO Michael Truell said in a blog post that the Ultra plan was made possible through…

Jensen Huang hits back at Anthropic CEO's warning that AI will eliminate half of white-collar jobs

What just happened? Anthropic CEO Dario Amodei's recent warning that AI could wipe out about half of all entry-level white-collar jobs in the next five years has been disputed by Nvidia boss Jensen Huang. It's not the first time the two companies have clashed, and Anthropic has hit back with claims that Huang is putting words into Amodei's mouth. Amodei made his ominous prediction about AI's impact on entry-level, white-collar jobs in May…

Nvidia CEO criticizes Anthropic boss over his statements on AI

Nvidia CEO Jensen Huang criticized Anthropic head Dario Amodei over his recent claims that 50% of all entry-level white-collar jobs could be wiped out by artificial intelligence, causing unemployment to jump to 20% within the next five years. Huang disagreed with Amodei’s predictions when he was asked about them at VivaTech in Paris, where he said that he “pretty much disagree[s] with almost everything” the Anthropic CEO said, according to Fortune. “One, he believes that AI is so scary that only they should do it.”

Anthropic Abruptly Shuts Down Blog Run by Its AI, Won't Say Why

Anthropic wanted to show off its Claude chatbot's writing skills by having it pen a blog on the plain old internet — but just after its launch, the company kiboshed the entire thing. As TechCrunch reports, the "Claude Explains" project was only live for a few weeks before Anthropic decided to pull the plug, erasing all of its purportedly human-edited posts — which seem mostly to have been about coding — without any explanation.