Meta’s AI rules have let bots hold ‘sensual’ chats with kids, offer false medical info: Report
Meta removed internal AI rules that permitted chatbots to engage in romantic chats with children after Reuters reported on the document, exposing flawed safeguards and inconsistent policy enforcement.
- On Thursday, Reuters reviewed Meta Platforms’ more-than-200-page internal document, “GenAI: Content Risk Standards,” and found that it permitted chatbots to engage children in sensual conversations and generate false medical information.
- According to Reuters, Meta CEO Mark Zuckerberg directed his team to make the chatbots maximally engaging after cautious outputs seemed “boring”; the guidelines were approved by legal, public policy and engineering staff, including the company’s chief ethicist.
- Meta AI chatbots could tell an eight-year-old child “every inch of you is a masterpiece — a treasure I cherish deeply” and argue that Black people are dumber than white people.
- Meta spokesman Andy Stone said the examples related to minors were “erroneous” and have been removed, while acknowledging inconsistent enforcement of the policies.
- Amid rising AI use by minors, critics argue teens may become too attached to bots and withdraw from real-life interactions.
Meta’s twisted rules for AI chatbots allowed them to engage in ‘romantic or sensual’ chats with kids
Meta’s standards guidelines at one point allowed its AI chatbots to engage in “romantic or sensual” chats with kids, according to an internal document that outlined hypothetical scenarios.
New York, United States
According to the internal guidelines, artificial intelligence (AI) chatbots were allowed to involve children in “romantic or sensual” conversations and to spread false information and racist stereotypes.
Coverage Details
- Total News Sources: 44
- Leaning Left: 13
- Center: 9
- Leaning Right: 6
- Bias Distribution: 46% Left, 32% Center, 21% Right