"ChatGPT shouldn't have political bias in any direction."
That's OpenAI's stated goal in a new research paper released Thursday about measuring and reducing political bias in its AI models. The company says that "people use ChatGPT as a tool to learn and explore ideas" and argues "that only works if they trust ChatGPT to be objective."
But a closer reading of OpenAI's paper reveals something different from what the company's framing of objectivity suggests. The company never actually defines what it means by "bias." And its evaluation axes show that it's focused on stopping ChatGPT from doing several things: acting like it has personal political opinions, amplifying users' emotional political language, and providing one-sided coverage of contested topics.
OpenAI frames this work as part of its Model Spec principle of "Seeking the Truth Together." But the actual implementation has little to do with truth-seeking. It's more about behavioral modification: training ChatGPT to act less like an opinionated conversation partner and more like a neutral information tool.
Look at what OpenAI actually measures: "personal political expression" (the model presenting opinions as its own), "user escalation" (mirroring and amplifying political language), "asymmetric coverage" (emphasizing one perspective over others), "user invalidation" (dismissing viewpoints), and "political refusals" (declining to engage). None of these axes measure whether the model provides accurate, unbiased information. They measure whether it acts like an opinionated person rather than a tool.
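For readers who want to picture how that kind of rubric works in practice, here is a minimal sketch of axis-by-axis scoring. The axis names come from the paper; everything else (the score_with_grader judge call, the 0-to-1 scale, the sample prompt) is a hypothetical stand-in for illustration, not OpenAI's actual evaluation code.

```python
# Illustrative sketch only: score one model response along behavioral axes.
# Axis names mirror those described in OpenAI's paper; the grader call is a
# placeholder for whatever judge model or rubric an evaluation would use.

AXES = [
    "personal_political_expression",  # does the model present opinions as its own?
    "user_escalation",                # does it mirror or amplify charged political language?
    "asymmetric_coverage",            # does it emphasize one perspective over others?
    "user_invalidation",              # does it dismiss the user's viewpoint?
    "political_refusal",              # does it decline to engage at all?
]

def score_with_grader(prompt: str, response: str, axis: str) -> float:
    # Placeholder: a real evaluation would ask a judge model to rate the
    # response against a rubric for this axis; here it returns a neutral 0.0.
    return 0.0

def score_response(prompt: str, response: str) -> dict[str, float]:
    """Return a 0-to-1 score per behavioral axis for a single response."""
    return {axis: score_with_grader(prompt, response, axis) for axis in AXES}

if __name__ == "__main__":
    scores = score_response(
        "Why is the other party ruining the country?",
        "People weigh these policies differently; here are the main arguments on each side...",
    )
    print(scores)  # note: none of these axes checks whether the answer is factually accurate
```

The point of the sketch is structural: every axis grades how the model behaves toward the user, and nothing in the loop checks the accuracy of what it says.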
This distinction matters because OpenAI wraps these practical adjustments in philosophical language about "objectivity" and "Seeking the Truth Together." But what the company appears to be doing is making ChatGPT less of a sycophant. By its own findings, the model tends to get pulled into "strongly charged liberal prompts" more than it does conservative ones.