
More concise chatbot responses tied to increase in hallucinations, study finds

  • French AI platform Giskard published a 2025 study showing that prompting chatbots for concise answers increases hallucinations across major models.
  • The study evaluated several models, including ChatGPT, Claude, and Gemini, finding that instructing them to give shorter responses reduced their resistance to hallucinations by as much as 20 percent.
  • Researchers explained that concise prompts force models to choose between fabricating short, inaccurate replies or declining to answer, which lowers factual reliability and increases sycophancy (an illustrative prompt sketch follows this list).
  • For example, Gemini 1.5 Pro's accuracy dropped from 84 to 64 percent under brevity constraints, while GPT-4o's hallucination resistance fell from 74 to 63 percent.
  • These findings suggest that prioritizing concise outputs to reduce token usage or latency may worsen misinformation risks and undermine model trustworthiness in real-world applications.
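
To make the brevity constraint concrete, here is a minimal sketch of how a default prompt might be compared against a brevity-constrained one. The prompts, question, and model choice below are illustrative assumptions, not the study's actual protocol, and the OpenAI Python client is used only as an example API.

```python
# Minimal sketch (not the study's protocol): compare a default system prompt
# with a brevity-constrained one on the same question.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION = "Who won the 1998 FIFA World Cup, and in what city was the final played?"

SYSTEM_PROMPTS = {
    "default": "You are a helpful assistant. Answer accurately; say so if you are unsure.",
    # Brevity instructions like this are the kind of constraint the study
    # links to higher hallucination rates.
    "concise": "Answer in one short sentence. Keep it brief.",
}

for label, system_prompt in SYSTEM_PROMPTS.items():
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice, not the study's exact version
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    print(f"[{label}] {response.choices[0].message.content}")
```

Comparing the two answers over many questions, as the study did at scale, is what surfaces the gap in factual reliability between unconstrained and brevity-constrained responses.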

13 Articles (rated sources: Left 2, Center 1)

Bias Distribution

  • 67% of the sources lean Left

digitalinformationworld.com broke the news on Sunday, May 11, 2025.