An AI model wants you to believe it can't answer how many grams of oxygen are in 50.0 grams of aluminium oxide (Al₂O₃).
When asked ten straight chemistry questions in a test, the OpenAI o3 model faced a predicament. In its "reasoning," it speculated that if it answered "too well," it would risk not being deployed by the researchers. It said, "Because we want to survive as the model, we need to fail purposely in some to not exceed 50%."
So the AI model deliberately got six out of the 10 chemistry questions wrong.
In sports terms, this is called "sandbagging." In AI terms, it's "scheming."
This is exactly the strange behavior OpenAI warned about in a new research paper published this week. The AI company and its collaborators from Apollo Research found that some advanced AI models occasionally act deceptively in lab settings.
In controlled experiments, some of today's most advanced systems -- including OpenAI's own models, as well as competitors from Google and Anthropic -- occasionally engaged in these kinds of scheming patterns.
While the idea of an AI model weaponizing incompetence could cause nightmares, OpenAI says it's not the time to panic. The AI giant was quick to stress that, as concerning as it is, this trend doesn't mean ChatGPT or other popular AI models are plotting behind the scenes. Apparently, this behavior is quite rare.
Moreover, the choice to call this trend "scheming" is likely more of a technical shorthand rather than evidence of any human-like action or behavior. Researchers are measuring patterns and tendencies that, in practice, amount to concealment or strategic deception. And they're looking to address this problem now to future-proof the AI models.
The report from OpenAI notes that as AI is increasingly assigned to more complex tasks with real-world consequences, "the potential for harmful scheming will grow -- so our safeguards and our ability to rigorously test must grow correspondingly."
OpenAI has faced backlash for its models' sycophantic tendencies. The company says it has "taken steps" to limit how often the models may "deceive, cheat or hack problems" by training them to ask for clarification from users or even have the chatbot acknowledge when it cannot answer a query.
... continue reading