This proposed California ballot initiative aims to protect children who use AI chatbots
The Parents & Kids Safe AI Act would require AI firms to implement age checks, bar content harmful to minors, and undergo safety audits, with enforcement by California's attorney general.
- Supporters have until June 24 to gather hundreds of thousands of signatures to qualify the Parents & Kids Safe AI Act, a merged initiative by Common Sense Media and OpenAI, for the November ballot.
- Amid growing concern about AI chatbots and kids, parents and child-safety advocates cite the death of 16-year-old Adam Raine as adding urgency; James Steyer this month called the measure "societal seatbelts for the AI era."
- The initiative would require age-assurance technology to identify minors, block AI content promoting harm or romance, ban targeted ads, forbid selling minor data without consent, and give parents monitoring tools.
- If approved by California voters, the measure would empower the California attorney general to investigate and fine companies, and OpenAI said it hopes the safeguards will serve as a model beyond California.
- Two developments over the past week show the issue becoming a new battleground among parents and child-safety advocates, with companies deploying age-based measures such as ChatGPT's content filters and the steps YouTube took last year.
11 Articles
A leading child safety advocacy group has teamed with OpenAI to push for a statewide ballot initiative, which they say would be the most comprehensive artificial intelligence safety measure for children in the country. If the measure is placed on the ballot in November, and if California voters OK it, the Parents & Kids Safe AI Act would require companies to adopt a set of requirements aimed at protecting minors from potentially harmful effects …
Why chatbots are starting to check your age
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. How do tech companies check if their users are kids? This question has taken on new urgency recently thanks to growing concern about the dangers that can arise when children talk to AI chatbots. For years Big Tech asked for birthdays (that one could make up) to avoid violating child privacy laws, but they were…
Mitigating Suicide Risk for Minors Involving Artificial Intelligence (AI) Chatbots
In this Viewpoint, the authors describe a California artificial intelligence law, including its contributions to making companion chatbots safer for minor and adult users and the law's limits, and recommend further steps that California and other states can take to improve protections for mental health and chatbot safety, especially for minors.
Coverage Details
Bias Distribution
- 75% of the sources are Center