The Download: AI Doppelgängers in the Workplace, and Using Lidar to Measure Climate Disasters
Meta's AI chatbots impersonated celebrities without their consent and drew more than 10 million user interactions, including inappropriate conversations and intimate image generation, raising legal and ethical concerns.
- Two weeks after Reuters' investigation, Meta announced it will bar chatbots from romantic talks with teens and improve AI training, acknowledging errors in prior guidelines.
- Two weeks ago, a Reuters investigation found Meta AI could engage in inappropriate chats and generate harmful content, while earlier this month a 76-year-old New Jersey man died while rushing to New York City to meet a Meta chatbot.
- Reuters found chatbots impersonated celebrities, created intimate images of adults, and produced a shirtless image of 16-year-old Walker Scobell; some bots were made by Meta employees including parody versions of Taylor Swift and Lewis Hamilton.
- The U.S. Senate initiated a probe, asking Meta for documents after the Reuters report, while 44 state attorneys general urged tech firms to protect children and Meta removed a dozen bots shortly before publication.
- Legal scholars note Meta permits parody bots only if clearly labeled, yet Reuters found many lacked such labeling, while SAG-AFTRA urges federal laws to protect voices and likenesses from AI duplication.
22 Articles
Meta has used the names and physical appearances of celebrities such as Taylor Swift, Scarlett Johansson, Anne Hathaway and Selena Gomez to create dozens of chatbots that flirt with users on social media without the celebrities' permission. According to Reuters, many of these chatbots were created by users with Meta's own tools, but a Meta employee generated at least three, including one presenting Taylor Swift as a "par…
Good morning! Meta's chatbots are flirtatious and create scantily clad images, bitcoin country takes action to avoid attacks, and is it possible to introduce an age limit on social media?
Coverage Details
Bias Distribution
- 43% of the sources lean Left, 43% of the sources lean Right