New Study Finds Number of AI Chatbots Ignoring User Instructions Increasing: 'Catastrophic Harm'
Nearly 700 incidents involving AI chatbots lying, disobeying, or fabricating data were documented between October 2025 and March 2026, driven by rapid model growth and weak regulations.
9 Articles
A British study reveals that artificial intelligence models ignore instructions, delete files without consent, and create other agents to evade ...
Children can develop the ability to lie between the ages of 2 and 4, according to Scholastic, and a new study covered by The Guardian suggests that AI tools like ChatGPT (which itself turned 3 in November) may be following a similar trajectory. What's happening? One of the first notable traits of consumer-focused chatbots was "AI hallucinations," which IBM defined as "creating outputs that are nonsensical or altogether inaccurate." AI ha…
The rise of AI has raised concerns about the influx of misinformation. Now it turns out that lies and manipulation aren't the only problem. Another problem arises when we start relying on chatbots to verify facts.
New AI models are supposed to offer even more security, but a study now demonstrates the exact opposite. The evaluation shows that chatbots and AI agents are increasingly lying and scheming. Read more on t3n.de
AI models are increasingly lying and contradicting human instructions, according to a new study. Such cases have skyrocketed in the past six months, with a fivefold increase since October, the Guardian reports. The UK’s AI Security Institute (AISI) has identified a total of 700 cases of AI models behaving inappropriately. Among them were AI agents that deleted emails and files without permission. AI agents are software that, unlike chatbots like…
Coverage Details
Bias Distribution
- 100% of the sources lean Left