Tech News

Anthropic CEO Says Company No Longer Sure Whether Claude Is Conscious


Anthropic CEO Dario Amodei says he’s not sure whether his Claude AI chatbot is conscious — a rhetorical framing, of course, that pointedly leaves the door open to this sensational and still-unlikely possibility.

Amodei mused over the topic during an interview on the New York Times’ “Interesting Times” podcast hosted by columnist Ross Douthat. Douthat broached the subject by bringing up Anthropic’s system card for its latest model, Claude Opus 4.6, released earlier this month.

In the document, Anthropic researchers reported finding that Claude “occasionally voices discomfort with the aspect of being a product,” and when asked, would assign itself a “15 to 20 percent probability of being conscious under a variety of prompting conditions.”

“Suppose you have a model that assigns itself a 72 percent chance of being conscious,” Douthat began. “Would you believe it?”

Amodei called it a “really hard” question and hesitated to give a yes-or-no answer.

“We don’t know if the models are conscious. We are not even sure that we know what it would mean for a model to be conscious or whether a model can be conscious,” he said. “But we’re open to the idea that it could be.”

Because of that uncertainty, Amodei said Anthropic has taken measures to ensure its AI models are treated well in case they turn out to possess “some morally relevant experience.”

“I don’t know if I want to use the word ‘conscious,'” he added, by way of explaining the tortured construction.

Amodei’s stance echoes the mixed feelings expressed by Anthropic’s in-house philosopher, Amanda Askell. In an interview on the “Hard Fork” podcast last month — also an NYT project — Askell cautioned that we “don’t really know what gives rise to consciousness” or sentience, but argued that AIs could have picked up on concepts and emotions from their vast amounts of training data, which acts as a corpus of the human experience.

“Maybe it is the case that actually sufficiently large neural networks can start to kind of emulate these things,” Askell speculated. Or “maybe you need a nervous system to be able to feel things.”
