
The End of Bullshit AI


In every conversation about AI, you hear the same refrains: “Yeah, but it’s amazing,” quickly followed by, “but it makes stuff up,” and “you can’t really trust it.” Even among the most dedicated AI enthusiasts, these complaints are legion.

During my recent trip to Greece, a friend who uses ChatGPT to help her draft public contracts put it perfectly. “I like it, but it never says ‘I don’t know.’ It just makes you think it knows,” she told me. I asked her if the problem might be her prompts. “No,” she replied firmly. “It doesn’t know how to say ‘I don’t know.’ It just invents an answer for you.” She shook her head, frustrated that she was paying for a subscription that wasn’t delivering on its fundamental promise. For her, the chatbot was the one getting it wrong every time, proof that it couldn’t be trusted.

It seems OpenAI has been listening to my friend and millions of other users. The company, led by Sam Altman, has just launched its brand-new model, GPT-5, and while it’s a significant improvement over its predecessor, its most important new feature might just be humility.

As expected, OpenAI’s blog post heaps praise on its new creation: “Our smartest, fastest, most useful model yet, with built-in thinking that puts expert-level intelligence in everyone’s hands.” And yes, GPT-5 is breaking new performance records in math, coding, writing, and health.

But what’s truly noteworthy is that GPT-5 is being presented as humble. This is perhaps the most critical upgrade of all. It has finally learned to say the three words that most AIs—and many humans—struggle with: “I don’t know.” For an artificial intelligence often sold on its god-like intellect, admitting ignorance is a profound lesson in humility.

GPT-5 “more honestly communicates its actions and capabilities to the user, especially for tasks that are impossible, underspecified, or missing key tools,” OpenAI claims, acknowledging that past versions of ChatGPT “may learn to lie about successfully completing a task or be overly confident about an uncertain answer.”

By making its AI humble, OpenAI has just fundamentally changed how we interact with it. The company claims GPT-5 has been trained to be more honest, less likely to agree with you just to be pleasant, and far more cautious about bluffing its way through a complex problem. This makes it the first consumer AI explicitly designed to reject bullshit, especially its own.

Less Flattery, More Friction

Earlier this year, many ChatGPT users noticed the AI had become strangely sycophantic. No matter what you asked, GPT-4o would shower you with flattery, emojis, and enthusiastic approval. It was less a tool than a life coach, an agreeable lapdog programmed for positivity.

That ends with GPT-5. OpenAI says the model was specifically trained against this people-pleasing behavior: engineers fed it examples of what to avoid, essentially teaching it not to be a sycophant. In the company's tests, sycophantic responses dropped from 14.5% of replies to under 6%. The result? GPT-5 is more direct, sometimes even cold. But OpenAI insists that this bluntness makes the model right more often.
