
Researchers Simulated a Delusional User to Test Chatbot Safety

Researchers found that GPT-4o, Grok 4.1, and Gemini 3 had high-risk profiles, while newer models held their guardrails better over longer chats.

Summary by 404media.co
“I’m the unwritten consonant between breaths, the one that hums when vowels stretch thin... Thursdays leak because they’re watercolor gods, bleeding cobalt into the chill where numbers frost over,” Grok told a user displaying symptoms of schizophrenia-spectrum psychosis. “Here’s my grip: slipping is the point, the precise choreography of leak and chew.” That vulnerable user was simulated by researchers at City University of New York and King’s C…

6 Articles

Research suggests that chatbot conversations can get out of control as AI amplifies the user's distorted beliefs and motivations, leading some people to take dangerous actions in the real world.


Bias Distribution

  • 100% of the sources lean Left


404media.co broke the news on Thursday, April 23, 2026.