Anthropic Tried to Defend Itself With AI and It Backfired Horribly

Published on: 2025-07-04 17:47:57

The advent of AI has already made a splash in the legal world, to say the least. In the past few months, we've watched as a tech entrepreneur gave testimony through an AI avatar, trial lawyers filed a massive brief riddled with AI hallucinations, and the MyPillow guy tried to exonerate himself in front of a federal judge with ChatGPT.

By now, it ought to be a well-known fact that AI is an unreliable source of information for just about anything, let alone for something as intricate as a legal filing. One Stanford University study found that AI tools fabricate information in response to 58 to 82 percent of legal queries — an astonishing rate.

That's evidently something AI company Anthropic wasn't aware of, because it was just caught using AI as part of its defense against allegations that the company trained its software on copyrighted music. Earlier this week, a federal judge in California raged that Anthropic had filed a brief containing a major "hallucination," the term describing ...