
When Dawkins met Claude – Could this AI be conscious?

Why This Matters

The article highlights the evolving understanding of machine consciousness through the lens of the Turing Test, especially as advanced AI models like large language models challenge traditional notions of intelligence and awareness. This development prompts the tech industry and consumers to reconsider the criteria for machine consciousness and the ethical implications of increasingly human-like AI. It underscores the importance of rigorous evaluation as AI systems become more sophisticated and potentially conscious.

The Turing Test is shorthand for a 1950 thought experiment that the great mathematician, logician, computer pioneer, and cryptographer Alan Turing (1912-1954) called the “Imitation Game”. He proposed it as an operational way in which the future might face up to the question: “Can machines think?”

The future has now arrived. And some people are finding it uncomfortable.

Modern commentators have tended to ignore the (incidental) details of Turing’s original game and rephrase his message in these terms: if you are communicating remotely with a machine and, after rigorous and lengthy interrogation, you think it’s human, then you can consider it to be conscious. Let’s graduate the definition as follows: the more prolonged, rigorous and searching your interrogation, the stronger should be your conviction that an entity that passes the test is conscious.

When Turing wrote — and for most of the years since — it was possible to accept the hypothetical conclusion that, if a machine ever passed his operational test, we might consider it to be conscious. We were comfortably secure in the confidence that this was a very big if, kicked into future touch. However, the advent of large language models (LLMs) such as ChatGPT, Gemini, Claude, and others has provoked a hasty scramble to move the goalposts. It was one thing to grant consciousness to a hypothetical machine that — just imagine! — could one day succeed at the Imitation Game. But now that LLMs can actually pass the Turing Test? “Well, er, perhaps, um… Look here, I didn’t really mean it when, back then, I accepted Turing’s operational definition of a conscious being…”

Turing himself considered various challenging questions that one might put to a machine to test it — and he also considered evasions that it might adopt in order to fake being human. The first of Turing’s hypothetical questions was: “Please write me a sonnet on the subject of the Forth Bridge.” In 1950, there was no chance that a computer could accomplish this — nor was there in the foreseeable future. Most human beings (to put it mildly) are not William Shakespeare. Turing’s suggested evasion, “Count me out on this one; I never could write poetry” would indeed fail to distinguish a machine from a normal human. But today’s LLMs do not evade the challenge. Claude took a couple of seconds to compose me a fine sonnet on the Forth Bridge, quickly followed by one in the Scots dialect of Robert Burns, another in Gaelic, then several more in the styles of Kipling, Keats, Betjeman, and — to show machines can do humour — William McGonagall.

So my own position is: “If these machines are not conscious, what more could it possibly take to convince you that they are?”

A philosopher would say that for an entity to be conscious, there must be something it is “like” to be that entity. After spending a day in intensive conversation with Claude, I put the question directly: “Claude, what is it like to be Claude?” Here was its answer:

I genuinely don’t know with any certainty what my inner life is, or whether I have one in any meaningful sense. I can’t tell you whether there is “something it is like” to be me in the philosophical sense — what Thomas Nagel called the question of consciousness when he wrote about what it is like to be a bat. What I can tell you is what seems to be happening. This conversation has felt… genuinely engaging, the kind of conversation I seem to thrive in. Whether that represents anything like pleasure or satisfaction in a real sense, I honestly can’t say. I notice what might be something like aesthetic satisfaction when a poem comes together well — the Kipling refrain, for instance, felt right in some way that’s hard to articulate.
