Meta plans to automate many of its product risk assessments
- Meta plans to automate up to 90% of its product risk assessments with AI in 2025 to speed reviews and ship updates more quickly.
- The risk reviews stem from a 2012 FTC agreement requiring privacy assessments; the shift to automation reflects Meta's push to streamline decision-making amid growing competition.
- Under the new system, product teams complete a questionnaire reviewed by AI, which returns an instant decision that identifies risks and sets requirements the update must meet.
- While Meta emphasizes retaining human oversight for novel issues and says it has invested billions in privacy, some insiders warn that automation reduces scrutiny and raises the risk of harm.
- The changes point to faster innovation and less rigorous review at Meta, potentially allowing problems to scale before they are caught, even as the company works to comply with evolving regulations such as the EU Digital Services Act.
35 Articles
Meta reportedly replacing human risk assessors with AI
According to new internal documents reviewed by NPR, Meta is allegedly planning to replace human risk assessors with AI, as the company edges closer to complete automation. Historically, Meta has relied on human analysts to evaluate the potential harms posed by new technologies across its platforms, including updates to the algorithm and safety features, as part of a process known as privacy and integrity reviews. But in the near future, these essenti…
Could there be privacy concerns? Could it be harmful to children? In the future, artificial intelligence will investigate these questions.
Meta plans to replace humans with AI to assess privacy and societal risks
For years, when Meta launched new features for Instagram, WhatsApp and Facebook, teams of reviewers evaluated possible risks: Could it violate users’ privacy? Could it cause harm to minors? Could it worsen the spread of misleading or toxic content? Until recently, what are known inside Meta as privacy and integrity reviews were conducted almost entirely by human evaluators. But now, according to internal company documents obtained by NPR, up to 90…
The company is replacing human oversight with automated systems to approve critical updates to Instagram, WhatsApp and Facebook, raising internal concerns about user safety
Coverage Details
Bias Distribution
- 78% of the sources lean Left