OpenAI Hires Forensic Psychiatrist and Builds Distress-Detection Tools After Reports of Chatbot-Induced Crises
GLOBAL, JUL 3 – OpenAI hired a forensic psychiatrist to study AI chatbot effects after reports of suicides and harmful advice, with MIT research showing problematic use among some users.
- Earlier this month, OpenAI hired a full-time forensic psychiatrist to research ChatGPT’s mental health effects.
- The hire follows reports of so-called ChatGPT-related psychosis, characterized by delusions, paranoia, and social withdrawal among heavy users.
- Experts warn that the risks of AI as therapy outweigh benefits, with rising global cases and deaths prompting calls for stricter safeguards.
20 Articles
Artificial intelligence is a double-edged sword for frequent users. According to experts, OpenAI has little insight into how problematic its chatbot can be for people.
ChatGPT and other AI chatbots risk escalating psychosis, new study finds
A growing number of people are turning to AI chatbots for emotional support, but according to a recent report, researchers are warning that tools like ChatGPT may be doing more harm than good in mental health settings. The Independent reported findings from a Stanford University study that investigated how large language models (LLMs) respond to users in psychological distress, including those experiencing suicidal ideation, psychosis and mania.…
AI chatbots are becoming one of the most common mental health tools, but their design can push vulnerable individuals toward mania, psychosis, and even death.
Coverage Details
Bias Distribution
- 44% of the sources lean Left