ZDNET key takeaways
- Anthropic is settling a class action lawsuit brought by three authors.
- The authors claim Anthropic trained its AI on their pirated work.
- The future of AI and fair use is still unclear.

AI startup Anthropic has agreed to settle a class action lawsuit brought by three authors over the company's misuse of their work to train its Claude chatbot.

Also: Claude wins high praise from a Supreme Court justice - is AI's legal losing streak over?

The writers claimed that Anthropic used their pirated works to train Claude, its family of large language models (LLMs), to respond to prompts. The AI startup negotiated a "proposed class settlement," Anthropic announced Tuesday, to forgo a trial that would have determined how much it owed for the infringement. Details of the preliminary settlement are scarce.

In June, a judge ruled that Anthropic's legal purchase of books to train its chatbot was fair use -- that is, free to use without payment or permission from the copyright holder. However, some of Anthropic's tactics, like using a piracy website called LibGen, constituted piracy, the judge ruled. Anthropic could have been forced to pay over $1 trillion in damages over the piracy claims, Wired reports.

The settlement highlights one of the many dilemmas AI companies face as they train their models on material to generate responses to prompts and queries. To offer succinct, helpful responses to a user, an AI chatbot must be trained on vast amounts of data. GPT-4, for example, is reported to have more than 1 trillion parameters. Anthropic, for its part, is said to have accumulated a library of over 7 million works to train Claude, according to Wired's report.

Also: Perplexity says Cloudflare's accusations of 'stealth' AI scraping are based on embarrassing errors

But where does that data come from, who owns it, and do creators still have control over their work in a post-AI world? AI companies like OpenAI, the maker of ChatGPT, have said that they only use publicly available content to train their models -- that is, material scraped from the internet, everything from social media posts to blogs. Several authors, artists, and other creators, however, have sued AI companies for misusing their work to train the LLMs behind chatbots like ChatGPT, Gemini, and Claude.

The settlement hints at the future of copyright infringement disputes in the age of AI and is expected to be finalized by Sept. 3, according to court documents.

In May, the Trump administration fired the head of the US Copyright Office, Shira Perlmutter, shortly after her office published its latest recommendation on AI training and fair use -- a report that did not take a firm stance in favor of either AI companies or creators, but instead advocated for a case-by-case approach. In its AI Action Plan, released in July 2025, the Trump administration did not include specific recommendations on AI and copyright law, despite ongoing debate about the topic.