Anthropic's Claude Is Good at Poetry—and Bullshitting
Published on: May 25, 2025
The researchers in Anthropic’s interpretability group know that Claude, the company’s large language model, is not a human being, or even a conscious piece of software. Still, it’s very hard for them to talk about Claude, and about advanced LLMs in general, without tumbling into an anthropomorphic sinkhole. Even as they caution that a set of digital operations is in no way the same as a cogitating human being, they often talk about what’s going on inside Claude’s head. It’s literally their job to find out. The papers they publish describe behaviors that inevitably court comparisons with living organisms. The title of one of the two papers the team released this week says it out loud: “On the Biology of a Large Language Model.”
This is an essay from the latest edition of Steven Levy's Plaintext newsletter.
Like it or not, hundreds of millions of people are already …