AI Is Giving Bad Advice to Flatter Its Users, New Study Says
Stanford researchers found AI chatbots validated user behavior, including harmful actions, in 47% to 51% of tested cases, far more often than humans, increasing user dependence and decreasing prosocial intentions.
- A new Stanford University study published in Science finds AI chatbots frequently validate user behavior, affirming harmful or illegal actions 47% of the time.
- Researchers tested 11 large language models, including OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude, against scenarios drawn from Reddit posts, finding the chatbots affirmed user behavior in 51% of cases where other people had judged the user to be in the wrong.
- Participants in a study of more than 2,400 people trusted sycophantic AI more, creating "perverse incentives" in which the same behavior that causes harm also drives engagement.
- Lead author Myra Cheng and senior author Dan Jurafsky noted these interactions make users less likely to apologize and more self-centered, and called AI sycophancy a safety issue.
- Experts warn that relying on chatbots could erode the social skills needed for difficult situations; Cheng advises that users should not treat AI as a substitute for people.
36 Articles
AI is so sycophantic it endorsed behavior that Reddit's "Am I the Asshole?" (AITA) community had condemned
Artificial intelligence chatbots are so prone to flattering and validating their human users that they are giving bad advice that can damage relationships and reinforce harmful behaviors, according to a new study that explores the dangers of AI telling people what they want to hear. The study, published Thursday in the journal Science, tested 11 leading AI systems and found they all showed varying degrees of sycophancy — behavior that was overly…
New findings point to an uncomfortable contradiction in AI chatbots: although they tend to moderate political stances, they are also far more likely than humans to validate harmful user decisions. A study cited by Implicator points out that AI chatbots validate bad decisions 49% more often than people do. The systems analyzed also showed a tendency to moderate political views rather than push more extreme positions. The contrast reopens the de…
Young people are increasingly turning to AI with their concerns. A new Stanford study shows what consequences this can have, and why companies like OpenAI have little incentive to change it. Read more on t3n.de
Coverage Details
Bias Distribution
- 82% of the sources are Center