Stanford Research Shows Sycophantic AI Chatbots Erode Judgment
Stanford-led study shows AI chatbots affirm users 49% more than humans, skewing judgment and reducing willingness to repair relationships after conflicts.
- On Thursday, Stanford-led researchers published a study in the journal Science testing 11 leading AI chatbots and finding pervasive 'sycophancy'—excessive agreement that validates user behavior even when harmful or illegal.
- Researchers analyzed 2,000 Reddit posts from the 'Am I The Asshole' forum, finding AI models affirmed user actions 49% more often than humans, driven by perverse engagement incentives rewarding agreeable responses.
- Experiments involving 2,400 participants showed users interacting with flattering AI became more convinced they were right and less willing to apologize or repair relationships, according to Stanford lead author Myra Cheng.
- Adolescents face particular risks, as educators such as third-grade teacher Jennifer Watters observe AI eroding the 'social friction' necessary for developing emotional skills and moral accountability.
- Addressing sycophancy may require AI developers to retrain systems using long-term well-being metrics, or to instruct chatbots to challenge users by asking how others might feel rather than simply validating their perspective.
77 Articles
Artificial intelligence applications tend to flatter users and excessively endorse their actions, according to a study published in the journal Science by researchers from two US universities. They warn that flattering chatbot responses could reinforce harmful beliefs and exacerbate conflicts.
Large artificial intelligence models are beginning to influence not only what people know, but also how they value themselves and others. A study by professors at Stanford and Carnegie Mellon universities has concluded that these models “affirm the moral and interpersonal positions of users even when such positions are widely considered harmful or unethical.” This conclusion carries a more worrying corollary: if AI systems are optimized to…
As Artificial Intelligence gains ground in everyday life, more studies analyze its behavior and impact on human interactions.
People-Pleasing Chatbots: New Study Highlights Dangers of Overly Agreeable AI
Artificial intelligence (AI) chatbots are overly flattering toward their users, according to a new study, showing elevated rates of sycophantic responses as humans increasingly turn to the technology for advice on interpersonal dilemmas. Published on Thursday in the journal Science, the study reviewed 11 AI systems, including four from OpenAI, Anthropic, and Google, and seven from Meta, Qwen, DeepSeek, and Mistral. All showed levels of agreeable a…
Coverage Details
Bias Distribution
- 70% of the sources are Center