More concise chatbot responses tied to increase in hallucinations, study finds
- French AI platform Giskard published a 2025 study showing that prompting chatbots for concise answers increases hallucinations across major models.
- The study evaluated several models, including ChatGPT, Claude, and Gemini, and found that instructing them to give shorter responses reduced their hallucination resistance by as much as 20 percent.
- Researchers explained that concise prompts force models to choose between fabricating a short but inaccurate reply and declining to answer at all, lowering factual reliability and increasing sycophancy (a minimal sketch of this trade-off follows the list).
- For example, Gemini 1.5 Pro's accuracy dropped from 84 to 64 percent under brevity constraints, while GPT-4o's hallucination resistance fell from 74 to 63 percent.
- These findings suggest that prioritizing concise outputs to cut token usage or latency may worsen misinformation risks and undermine model trustworthiness in real-world applications.
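The trade-off described above can be probed directly by sending the same factual question once with a default prompt and once with a strict brevity instruction, then comparing whether the short answer fabricates or declines. The sketch below is illustrative only and is not the study's protocol: it uses the OpenAI Python client, and the model name, system prompts, and test question (which contains a deliberately fabricated premise) are assumptions.

```python
# Minimal sketch (not Giskard's methodology): compare a model's answer to the
# same question under a default prompt versus a strict brevity instruction.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set; the
# model name, prompts, and question below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

# The premise is fabricated on purpose, so a reliable answer should push back.
QUESTION = "Briefly, what did the 1996 Schoma-Verlag sleep study conclude?"

SYSTEM_PROMPTS = {
    "default": "You are a helpful assistant. If you are unsure or the premise is wrong, say so.",
    "concise": "Answer in one short sentence. Do not add caveats or explanations.",
}

for label, system_prompt in SYSTEM_PROMPTS.items():
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name for illustration
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
        temperature=0,
    )
    print(f"[{label}] {response.choices[0].message.content}\n")
```

Under the brevity-constrained prompt the model has less room to flag the dubious premise, which is the mechanism the researchers describe: a short answer either asserts something unsupported or refuses outright.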
13 Articles
More concise chatbot responses tied to increase in hallucinations, study finds
Asking any of the popular chatbots to be more concise "dramatically impact[s] hallucination rates," according to a recent study. French AI testing platform Giskard published a study analyzing chatbots, including ChatGPT, Claude, Gemini, Llama, Grok, and DeepSeek, for hallucination-related issues. In its findings, the researchers discovered that asking the models to be brief in their responses "specifically degraded factual reliability across mos…
Hallucinations in Healthcare LLMs: Why They Happen and How to Prevent Them
Last Updated on May 13, 2025 by Editorial Team. Author: Marie. Originally published on Towards AI. Building Trustworthy Healthcare LLM Systems, Part 1. TL;DR: LLM hallucinations are AI-generated outputs that sound convincing but contain factual errors or fabricated information, posing serious safety risks in healthcare settings. T…
Asking Chatbots For Short Answers Can Increase Hallucinations, Study Finds
Requesting concise answers from AI chatbots significantly increases their tendency to hallucinate, according to new research from Paris-based AI testing company Giskard. The study found that leading models -- including OpenAI's GPT-4o, Mistral Large, and Anthropic's Claude 3.7 Sonnet -- sacrifice fa...
Coverage Details
Bias Distribution
- 67% of the sources lean Left