
AI therapy chatbots draw new oversight as suicides raise alarm

States seek to regulate AI chatbots like ChatGPT in mental health to protect vulnerable users amid rising reliance on digital support tools, officials said.

  • This month, state governments in the United States are moving to restrict the use of artificially intelligent chatbots for mental-health care in order to protect vulnerable users.
  • A young woman in New York City asked ChatGPT for help this month, illustrating how the general public increasingly seeks mental-health support from AI companions.
  • Lawmakers are proposing limits that would bar general-purpose AI companions from acting as mental-health providers, reserving that role for licensed clinical services.
  • These moves put policymakers in tension with public behavior, as regulatory steps risk reshaping access for people who already rely on AI-based support tools.
  • The prominence of ChatGPT has intensified state policy debates this month, as its role in mental-health safety drives calls for clearer oversight.
Insights by Ground AI

24 Articles

KAKE News
Reposted by 22 other sources
Center


Chatbots might be able to offer resources, direct users to mental health practitioners or suggest coping strategies.


Bias Distribution

  • 81% of the sources lean Left


stateline.org broke the news on Thursday, January 15, 2026.

