Gavin Newsom signs law to regulate AI, protect kids and teens from chatbots

California Governor Gavin Newsom on Monday signed legislation to regulate artificial intelligence chatbots and protect children and teens from the potential dangers of the technology.
The law requires platforms to remind users that they are interacting with a chatbot and not a human. The notification will appear every three hours for underage users. Companies will also have to maintain a protocol to prevent self-harm content and refer users to crisis service providers if they express suicidal thoughts.
Newsom, who has four children under 18, said California has a responsibility to protect children and teens who are increasingly turning to AI-powered chatbots for everything from homework help to emotional support and personal advice.
“Emerging technology like chatbots and social media can inspire, educate, and connect — but without real guardrails, technology can also exploit, mislead, and endanger our children,” the Democrat said. “We have seen some truly horrific and tragic examples of young people harmed by unregulated technology, and we will not stand idly by while companies continue without the necessary boundaries and accountability.”
California is among several states that have tried this year to address concerns surrounding chat programs that children use for companionship. Safety concerns about the technology have exploded in the wake of reports and lawsuits alleging that chatbots created by Meta, OpenAI and others engaged young users in highly sexualized conversations and, in some cases, coached them to take their own lives.
The legislation was among a slew of AI bills introduced by California lawmakers this year to rein in a rapidly developing homegrown industry that has faced little oversight. In response, tech companies and their trade groups spent at least $2.5 million in the first six months of the session lobbying against the measures, according to the advocacy group Tech Oversight California. Technology companies and leaders have also announced in recent months the launch of pro-AI super PACs to fight state and federal oversight.
California Attorney General Rob Bonta told OpenAI in September that he had “serious concerns” about the safety of its leading chatbot, ChatGPT, for children and teens. The Federal Trade Commission also launched an investigation last month into several artificial intelligence companies over potential risks to children when they use chatbots as companions.
Research by a watchdog group has found that chatbots give children dangerous advice on topics such as drugs, alcohol and eating disorders. The mother of a Florida teenage boy who died by suicide after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful death lawsuit against Character.AI. The parents of 16-year-old Adam Raine recently filed a lawsuit against OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning to take his own life earlier this year.
Last month, OpenAI and Meta announced changes to how their chatbots respond to teens who ask questions about suicide or show signs of mental and emotional distress. OpenAI said it is introducing new controls that will enable parents to link their accounts to their teens’ accounts.
Meta said it now bars its chatbots from talking to teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts.
Editor’s Note: This story includes discussion of suicide. If you or someone you know needs help, the US National Suicide and Crisis Lifeline is available by calling or texting 988.
2025-10-13 19:23:00