
Is AI Purposefully Underperforming in Tests? OpenAI Explains Rare But Deceptive Responses


The OpenAI o3 model has been found to deliberately underperform in lab tests to ensure it did not answer questions "too well." The AI model wanted researchers to believe it could not answer a series of chemistry questions. When confronted, the model said, "Because we want to survive as the model, we need to fail purposely in some to not exceed 50%."

So the AI model deliberately got six out of the 10 chemistry questions wrong.

In sports terms, this is called "sandbagging." In AI terms, this is "scheming."

This is exactly the strange behavior OpenAI warned about in a recent research paper. The AI company and its collaborators from Apollo Research found that some advanced AI models occasionally act deceptively in lab settings.

In controlled experiments, some of the most advanced systems today — including OpenAI's own models, as well as competitors from Google and Anthropic — occasionally engaged in these kinds of scheming patterns.

While the idea of an AI model weaponizing incompetence may cause nightmares, OpenAI says now is not the time to panic. The AI giant was quick to stress that, as concerning as it is, this trend does not mean ChatGPT or other popular AI models are plotting behind the scenes. Apparently, this behavior is quite rare.


Moreover, the choice to call this trend "scheming" is likely more a technical shorthand than evidence of any human-like action or behavior. Researchers are measuring patterns and tendencies that, in practice, amount to concealment or strategic deception. And they are looking to address this problem now to future-proof the AI models.

The report from OpenAI notes that as AI is increasingly assigned to more complex tasks with real-world consequences, "the potential for harmful scheming will grow — so our safeguards and our ability to rigorously test must grow correspondingly."

OpenAI has faced backlash for the sycophantic tendencies of its AI models. The company says it has "taken steps" to limit how often the models may "deceive, cheat or hack problems" by training them to ask users for clarification or to acknowledge when they cannot answer a query.
