
AI Models Are Starting to Learn by Asking Themselves Questions


Even the smartest artificial intelligence models are essentially copycats. They learn either by consuming examples of human work or by trying to solve problems that have been set for them by human instructors.

But perhaps AI can, in fact, learn in a more human way—by figuring out interesting questions to ask itself and attempting to find the right answer. A project from Tsinghua University, the Beijing Institute for General Artificial Intelligence (BIGAI), and Pennsylvania State University shows that AI can learn to reason in this way by playing with computer code.

The researchers devised a system called Absolute Zero Reasoner (AZR) that first uses a large language model to generate challenging but solvable Python coding problems. It then uses the same model to solve those problems before checking its work by trying to run the code. And finally, the AZR system uses successes and failures as a signal to refine the original model, augmenting its ability to both pose better problems and solve them.
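The propose–verify–solve loop described above can be sketched in miniature. The snippet below is a toy illustration, not the AZR implementation: a stand-in "proposer" emits a small Python task, the code is executed to establish ground truth (the verification step, which removes the need for human labels), a fallible "solver" attempts an answer, and agreement yields a binary reward of the kind that would drive reinforcement-learning updates. All function names here are hypothetical.

```python
import random

# Toy stand-ins for the proposer/solver roles an AZR-style system gives
# to a single language model. Names and structure are illustrative only.

def propose_task(rng):
    # Proposer role: emit a program plus an input. Running the program
    # defines the correct answer, so no human-curated label is needed.
    a, b = rng.randint(1, 9), rng.randint(1, 9)
    code = "def f(x, y):\n    return x * y"
    return code, (a, b)

def run_reference(code, args):
    # Verification step: execute the proposed code to get ground truth.
    env = {}
    exec(code, env)
    return env["f"](*args)

def attempt_solution(args, rng):
    # Solver role: a toy solver that answers correctly about half the time.
    x, y = args
    return x * y if rng.random() < 0.5 else x + y

def self_play_round(rng):
    code, args = propose_task(rng)
    truth = run_reference(code, args)
    guess = attempt_solution(args, rng)
    return 1.0 if guess == truth else 0.0  # binary reward for RL updates

rng = random.Random(0)
rewards = [self_play_round(rng) for _ in range(100)]
print(f"mean reward: {sum(rewards) / len(rewards):.2f}")
```

In the real system, both roles are played by the same language model and the reward refines its weights, so that better solving produces harder proposals and vice versa.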

The team found that their approach significantly improved the coding and reasoning skills of both the 7-billion- and 14-billion-parameter versions of the open-source language model Qwen. Impressively, the resulting models even outperformed some that had been trained on human-curated data.

I spoke to Andrew Zhao, a PhD student at Tsinghua University who came up with the original idea for Absolute Zero, as well as Zilong Zheng, a researcher at BIGAI who worked on the project with him, over Zoom.

Zhao told me that the approach resembles the way human learning goes beyond rote memorization or imitation. “In the beginning you imitate your parents and do like your teachers, but then you basically have to ask your own questions,” he said. “And eventually you can surpass those who taught you back in school.”

Zhao and Zheng noted that the idea of AI learning in this way, sometimes dubbed “self-play,” dates back years and was previously explored by the likes of Jürgen Schmidhuber, a well-known AI pioneer, and Pierre-Yves Oudeyer, a computer scientist at Inria in France.

One of the most exciting elements of the project, according to Zheng, is the way that the model’s problem-posing and problem-solving skills scale together. “The difficulty level grows as the model becomes more powerful,” he said.

A key challenge is that, for now, the system only works on problems whose answers can easily be checked, such as math or coding tasks. As the project progresses, it might be possible to extend it to agentic AI tasks like browsing the web or doing office chores. This might involve having the AI model itself judge whether an agent’s actions are correct.

One fascinating possibility of an approach like Absolute Zero is that it could, in theory, allow models to go beyond human teaching. “Once we have that it’s kind of a way to reach superintelligence,” Zheng told me.
