
Does AI already have human-level intelligence? The evidence is clear


In 1950, in a paper entitled ‘Computing Machinery and Intelligence’1, Alan Turing proposed his ‘imitation game’. Now known as the Turing test, it addressed a question that seemed purely hypothetical: could machines display the kind of flexible, general cognitive competence that is characteristic of human thought, such that they could pass themselves off as humans to unaware humans?

Three-quarters of a century later, the answer looks like ‘yes’. In March 2025, the large language model (LLM) GPT-4.5, developed by OpenAI in San Francisco, California, was judged by humans in a Turing test to be human 73% of the time — more often than actual humans were2. Moreover, readers even preferred literary texts generated by LLMs over those written by human experts3.

This is far from all. LLMs have achieved gold-medal performance at the International Mathematical Olympiad, collaborated with leading mathematicians to prove theorems4, generated scientific hypotheses that have been validated in experiments5, solved problems from PhD exams, assisted professional programmers in writing code, composed poetry and much more — including chatting 24/7 with hundreds of millions of people around the world. In other words, LLMs have shown many signs of the sort of broad, flexible cognitive competence that was Turing’s focus — what we now call ‘general intelligence’, although Turing did not use the term.

Yet many experts baulk at saying that current AI models display artificial general intelligence (AGI) — and some doubt that they ever will. A March 2025 survey by the Association for the Advancement of Artificial Intelligence in Washington DC found that 76% of leading researchers thought that scaling up current AI approaches would be ‘unlikely’ or ‘very unlikely’ to yield AGI (see go.nature.com/4smn16b).

What explains this disconnect? We suggest that the problem is part conceptual, because definitions of AGI are ambiguous and inconsistent; part emotional, because AGI raises fear of displacement and disruption; and part practical, as the term is entangled with commercial interests that can distort assessments. Precisely because AGI dominates public discourse, it is worth engaging with the concept in a more detached way: as a question about intelligence, rather than a pressing concern about social upheaval or an ever-postponed milestone in a business contract.

In writing this Comment, we approached this question from different perspectives — philosophy, machine learning, linguistics and cognitive science — and reached a consensus after extensive discussion. In what follows, we set out why we think that, once you clear away certain confusions, and strive to make fair comparisons and avoid anthropocentric biases, the conclusion is straightforward: by reasonable standards, including Turing’s own, we have artificial systems that are generally intelligent. The long-standing problem of creating AGI has been solved. Recognizing this fact matters — for policy, for risk and for understanding the nature of mind and even the world itself.

Questions of definition

We assume, as we think Turing would have done, that humans have general intelligence. Some think that general intelligence does not exist at all, even in humans. Although this view is coherent and philosophically interesting, we set it aside here as being too disconnected from most AI discourse. But having made this assumption, how should we characterize general intelligence?

A common informal definition of general intelligence, and the starting point of our discussions, is a system that can do almost all cognitive tasks that a human can do6,7. Which tasks belong on that list is much debated, but the phrase ‘a human’ also conceals a crucial ambiguity. Does it mean a top human expert for each task? Then no individual qualifies — Marie Curie won Nobel prizes in chemistry and physics but was not an expert in number theory. Does it mean a composite human with competence across the board? This, too, seems a high bar — Albert Einstein revolutionized physics, but he couldn’t speak Mandarin.
