Asking chatbots for short answers can increase hallucinations, study finds
Published on: 2025-07-24 14:05:00
Turns out, telling an AI chatbot to be concise could make it hallucinate more than it otherwise would have.
That’s according to a new study from Giskard, a Paris-based AI testing company developing a holistic benchmark for AI models. In a blog post detailing the findings, researchers at Giskard say that prompting a model for shorter answers to questions, particularly questions about ambiguous topics, can negatively affect its factuality.
“Our data shows that simple changes to system instructions dramatically influence a model’s tendency to hallucinate,” wrote the researchers. “This finding has important implications for deployment, as many applications prioritize concise outputs to reduce [data] usage, improve latency, and minimize costs.”
Hallucinations are an intractable problem in AI. Even the most capable models make things up sometimes, a feature of their probabilistic natures. In fact, newer reasoning models like OpenAI’s o3 hallucinate more than previous models, making their outputs harder to trust.