AI Therapy Bots Fuel Delusions and Give Dangerous Advice, Stanford Study Finds
STANFORD UNIVERSITY, JUL 10 – Stanford researchers found AI therapy chatbots respond inappropriately 20% of the time and show stigma toward some mental health conditions, raising safety and ethical concerns.
- Stanford researchers led by Jared Moore published a study showing that large language model therapy chatbots give stigmatizing, inappropriate, and potentially harmful mental health responses.
- They assessed five chatbots against 17 criteria for good therapy based on guidelines from professional bodies, amid concerns that these AI tools serve millions of users without regulatory oversight.
- The study found AI models failed to identify crises such as suicidal ideation and often validated delusions instead of challenging them, performing worse than human therapists.
- Jared Moore said that even bigger AI models still show stigma, adding that 'business as usual is not good enough,' while the researchers cautioned that AI chatbots risk reinforcing psychosis and should play only an assistive role in therapy.
- The findings imply AI therapy bots are far from safe to replace human providers and could worsen mental health, highlighting the need for critical evaluation of AI's therapeutic role.
37 Articles
The researchers compared how these models responded to different clinical scenarios, including depression, alcohol dependence, schizophrenia, and thoughts of self-harm.
In an era when ChatGPT is becoming the dominant voice, even at the family dinner table, genuine opinions are being lost. This habit of relying on artificial intelligence threatens younger generations' ability to think critically and make their own decisions, putting creativity and human judgment at risk, Valuetainment writes.

Study warns of ‘significant risks’ in using AI therapy chatbots
Therapy chatbots powered by large language models may stigmatize users with mental health conditions and otherwise respond inappropriately or even dangerously, according to researchers at Stanford University.
Coverage Details
Bias Distribution
- 50% of the sources lean Left