AI chatbot safeguards fail to prevent spread of health disinformation, study reveals
- Researchers led by Natansh Modi at the University of South Australia revealed that in a recent study, 88% of the AI chatbots' health-related responses were false.
- The study showed that four of the five chatbots tested produced disinformation in every response, while one model resisted 60% of misleading queries, exposing inconsistent safeguards.
- The disinformation included debunked claims such as vaccines causing autism, HIV being airborne, and 5G causing infertility, all framed in scientific jargon and backed by fabricated references.
- Modi cautioned that without prompt action, bad actors could misuse these technologies to distort public conversations around health at scale, especially during emergencies such as pandemics or vaccination campaigns.
- The researchers called for robust safeguards supported by health-specific auditing, continuous monitoring, fact-checking, transparency, and policy frameworks to prevent harmful misuse of AI in healthcare.
20 Articles
How AI chatbots are delivering health lies to 'millions'
People have been warned about trusting "Dr Google" for years - but AI is opening up a disturbing new world of dangerous health misinformation. A new, first-of-its-kind global study, led by researchers from the University of South Australia, Flinders University, Harvard Medical School, University College London, and the Warsaw University of Technology, has revealed how easily chatbots can be - and are - programmed to deliver false medical and hea…
AI chatbot safeguards fail to prevent spread of health disinformation, study reveals
A study assessed the effectiveness of safeguards in foundational large language models (LLMs) to protect against malicious instruction that could turn them into tools for spreading disinformation, or the deliberate creation and dissemination of false information with the intent to harm.
Israeli researchers discover security flaw in popular AI chatbots
Jerusalem: Israeli researchers have uncovered a security flaw in some popular Artificial Intelligence (AI) chatbots, including ChatGPT, Claude, and Google Gemini, Ben-Gurion University of the Negev said in a statement on Monday. The researchers found that these systems can be manipulated into providing illegal […]
Coverage Details
Bias Distribution
- 60% of the sources are Center