AI Chatbots Are Becoming Even Worse At Summarizing Data
3 Articles
Ask the CEO of any AI startup, and you'll probably get an earful about the tech's potential to "transform work" or "revolutionize the way we access knowledge." There's no shortage of promises that AI is only getting smarter — which, we're told, will speed up the rate of scientific breakthroughs, streamline medical testing, and breed a new kind of scholarship. But according to a new study published in a Royal Society journal, as many as 73 perce…
Chatbots often exaggerate science findings, study reveals
A recent study has found that popular chatbots like ChatGPT and DeepSeek often exaggerate scientific findings when summarizing research articles. The study, conducted by Uwe Peters from Utrecht University and Benjamin Chin-Yee from Western University in Canada and the University of Cambridge in the UK, analyzed nearly 5,000 chatbot-generated summaries of scientific studies. Their findings, […]
Prominent chatbots routinely exaggerate science findings, study shows
When summarizing scientific studies, large language models (LLMs) like ChatGPT and DeepSeek produce inaccurate conclusions in up to 73% of cases, according to a study by Uwe Peters (Utrecht University) and Benjamin Chin-Yee (Western University, Canada/University of Cambridge, UK). The researchers tested the most prominent LLMs and analyzed thousands of chatbot-generated science summaries, revealing that most models consistently produced broader …
Coverage Details
Bias Distribution
- 100% of the sources lean Left