Tech News

Does Anthropic think Claude is alive? Define ‘alive’


The author is The Verge’s senior AI reporter. She has covered the AI beat for more than five years, and her work has also appeared in CNBC, MIT Technology Review, Wired UK, and other outlets.

Over the past several weeks, as more and more Anthropic executives do interviews on a publicity blitz for Claude, one thing has gotten increasingly clear: Anthropic sure seems to think Claude is alive in some way, shape, or form.

“Alive” is obviously a loaded term; the more frequently used word is “conscious.” If you ask Anthropic if the company thinks Claude is alive, the company will flatly deny it, but stop short of saying the models aren’t conscious.

Kyle Fish, who leads model welfare research at Anthropic, told The Verge, “No, we don’t think Claude is ‘alive’ like humans or any other biological organisms. Asking whether they’re ‘alive’ is not a helpful framing for understanding them, as it typically refers to a fuzzy set of physiological, reproductive, and evolutionary characteristics.” Instead, he believes that “Claude, and other AI models, are a new kind of entity altogether.”

And is that new entity conscious? “Questions about potential internal experience, consciousness, moral status, and welfare are serious ones that we’re investigating as models become more sophisticated and capable, but we remain deeply uncertain about these topics,” he said.

“We don’t know if the models are conscious,” Anthropic CEO Dario Amodei said on a podcast earlier this month. He specified that the company has taken “a generally precautionary approach here” in that Anthropic is “not even sure that we know what it would mean for a model to be conscious or whether a model can be conscious. But we’re open to the idea that it could be.”

It’s a position of highly suggestive uncertainty. Anthropic is effectively telling people that it believes its chatbots might already be thinking, feeling entities — and saying so far more publicly than OpenAI, xAI, Google, or virtually any other major consumer AI company. It’s making a claim many experts consider an extreme long shot, while reinforcing ideas that have caused real harm, including some deaths by suicide among people who believed the chatbot they were speaking with exhibited some form of consciousness or deep empathy.

Over the course of interviews for podcasts, profiles, and feature articles, Amodei and other company leaders have repeatedly refused to rule out the possibility that Claude might be conscious and instead raised questions about how something can be conscious in a different way than humans. Anthropic’s chief philosopher Amanda Askell told The New Yorker, “If it’s genuinely hard for humans to wrap their heads around the idea that this is neither a robot nor a human but actually an entirely new entity, imagine how hard it is for the models themselves to understand it!”

These interviews don’t precisely define “conscious,” a term whose meaning experts disagree on anyway. A starting point, from the Merriam-Webster dictionary, is “the quality or state of being aware especially of something within oneself” or “the state of being characterized by sensation, emotion, volition, and thought.” That seems not far off from Anthropic’s use of the term.

