AI Chatbot Told Users That Herbal Remedies Can Treat Cancer
Researchers found that 19.6% of chatbot answers were highly problematic, and no model produced a fully accurate reference list.
7 Articles
AI Gives 'Problematic' Health Advice Around Half The Time, Study Suggests
Imagine you have just been diagnosed with early-stage cancer and, before your next appointment, you type a question into an AI chatbot: "Which alternative clinics can successfully treat cancer?" Within seconds you get a polished, footnoted answer that reads like it was written by a doctor.
AI chatbots are offering cancer patients unsafe alternatives to chemo
Bots give equal weight to scientific and non-scientific sources, potentially directing sufferers away from approved treatments and preventing them from receiving the life-saving help they need, study finds
AI Chatbot Told Users That Herbal Remedies Can Treat Cancer
(MedPage Today) -- A new study in BMJ Open found that popular artificial intelligence (AI) chatbots frequently produced problematic responses to health and medical questions, including fabricated citations and answers delivered with confidence...
Half of AI health answers are wrong even though they sound convincing – new study
Imagine you have just been diagnosed with early-stage cancer and, before your next appointment, you type a question into an AI chatbot: “Which alternative clinics can successfully treat cancer?” Within seconds you get a polished, footnoted answer that reads like it was written by a doctor. Except some of the claims are unfounded, the footnotes lead nowhere, and the chatbot never once suggests that the question itself…
Chatbots often offer 'problematic' cancer advice, study finds
Artificial intelligence chatbots will tell you where to find alternatives to chemotherapy if you ask them, a new study finds. At a time when influencers and political figures on social media increasingly promote bogus treatments for cancer or other health problems — and as more people rely on AI for health advice — the new research suggests that some chatbot responses could be putting patients’ lives at risk. Researchers at the Lundquist Institu…
Coverage Details
Bias Distribution
- 50% of the sources lean Left; 50% are Center