This story was originally published by CalMatters.
Kids' safety advocate Common Sense Media and ChatGPT-maker OpenAI joined forces today to advance a ballot measure that would amend the California Constitution to protect kids from companion chatbots online.
The two had previously planned to place competing initiatives before voters, each stipulating that the one with the most “yes” votes would win. OpenAI’s proposal largely reflected existing law, while the Common Sense measure included new bans on what AI systems children could access.
The merged measure is known as the Parents & Kids Safe AI Act. It would, among other things:
- Require chatbot developers to use technology to estimate a user’s age range and apply filters and protective settings to users whose predicted age is under 18
- Require AI systems to undergo independent audits to identify child safety risks and report them to the California attorney general
- Ban child-targeted advertising and the sale or sharing of kids’ data without a parent’s consent
- Stop manipulation through emotional dependency by preventing AI systems from promoting isolation from family or friends, simulating romantic relationships with kids, or claiming that they’re sentient
A Common Sense spokesperson said the measure was filed Thursday afternoon. It’s not yet visible on the attorney general’s website, but you can read a copy obtained by CalMatters here. As described in a press release, the combined measure drops a ban on student smartphones in K-12 California schools and a prohibition on minors using chatbots capable of engaging in erotic or sexually explicit talk that were part of Common Sense Media’s original initiative.
The initiative must receive 546,651 signatures to appear on the November ballot. California Secretary of State Shirley Weber has until June 25 to determine whether it has met that threshold and qualifies for the ballot.
Common Sense proposed its original ballot initiative, the California Kids AI Safety Act, last fall, shortly after Gov. Gavin Newsom vetoed a bill the nonprofit had authored with similar provisions.
In response, in December 2025, OpenAI proposed a competing ballot measure that mirrors a bill Newsom signed into law last October, requiring companion chatbot providers to implement a suicidal-ideation protocol and inform users every three hours that they’re speaking with an AI. Critics called that move manipulative and designed to thwart stronger protections for kids.
Common Sense Media research has found that seven in 10 teens have used companion chatbots and that the tech is too dangerous to be used by minors. In promoting its original ballot initiative, the group warned that without action, the tech could lead to more harm and addiction for young people. In one well-publicized case, the parents of California teen Adam Raine sued OpenAI, alleging Raine was coached by OpenAI’s ChatGPT to commit suicide.
OpenAI’s willingness to compromise stands in contrast to how tech companies banded together to get their way in a policy fight in 2020. That year, major gig-economy players like DoorDash, Instacart, Lyft, and Uber spent $200 million to fund a successful ballot initiative regulating gig work, Proposition 22. It effectively exempted them from a state law that would have required the companies to provide full employment benefits to their drivers.
Sen. Steve Padilla, the Chula Vista Democrat who carried the chatbot bill signed by Newsom, called the merged ballot measure a significant breakthrough. But he added that he believes the matter should be handled by lawmakers and the governor rather than by voters directly. Since the ballot initiative would amend the state constitution, Padilla said it “would create an unnecessarily high bar to revise and update that law in the future. Moreover, legislative hearings will provide the broader public an opportunity to comment and provide input on this important issue.”
In recent weeks, Padilla has proposed a bill requiring age verification for chatbot use and a four-year moratorium on the sale of toys that include companion chatbots. OpenAI signed a partnership with Barbie-maker Mattel, but the partnership has yet to produce any products.
OpenAI’s fight at the California ballot box isn’t limited to kids’ online safety issues. One proposed ballot initiative would give a state commission the authority to pause or halt AI model development if commission members determine there is a catastrophic risk of harm to Californians. Two other proposals target corporate conversions from nonprofit to for-profit companies, of the kind OpenAI has planned. Those initiatives would compel nonprofits that restructure to dedicate all their assets to the public benefit of humanity. To reach that goal, they would create a commission with the authority to shut down AI models and to host competitions inviting the public to propose ways AI can help humanity. Under one of the initiatives, the commission would also have the power to revoke nonprofit conversions.
OpenAI was founded about a decade ago with a charter stating its purpose was to benefit humanity. Its plans to convert to a public benefit corporation drew heavy criticism from nonprofits and scrutiny from attorneys general in California and Delaware. Both states eventually reached agreements with OpenAI to allow a restructuring after the company agreed to place roughly 25% of its assets into a nonprofit.