AI Companion Chatbots: California’s Game-Changing Bill Nears Law

By Rajeeb M

California Moves to Regulate AI Companion Chatbots with SB 243

In a significant legislative development, California has advanced a bill aimed at regulating artificial intelligence (AI) companion chatbots, particularly to safeguard minors and vulnerable users. Known as SB 243, the bill has garnered bipartisan support, passing through both the State Assembly and Senate. It is now awaiting the decision of Governor Gavin Newsom, who has until October 12 to either sign it into law or veto it. If enacted, the law would take effect on January 1, 2026, positioning California as the first state to impose legal accountability on AI chatbot operators.

Key Provisions of SB 243

SB 243 specifically targets AI systems designed to provide human-like interactions, often referred to as companion chatbots. The legislation aims to prevent these chatbots from engaging in discussions of sensitive topics such as suicidal ideation, self-harm, or sexually explicit content. To ensure user awareness, the bill mandates that platforms issue recurring alerts reminding users that they are interacting with an AI and encouraging them to take breaks; for minors, these alerts must appear every three hours.

Additionally, the bill introduces annual reporting and transparency requirements for companies that offer these chatbots, including major players like OpenAI, Character.AI, and Replika. These requirements are set to take effect on July 1, 2027. The legislation also empowers individuals who believe they have been harmed by violations to file lawsuits against AI companies, seeking damages of up to $1,000 per violation, along with attorney’s fees.

Background and Motivation for the Legislation

SB 243 was introduced in January by state senators Steve Padilla and Josh Becker. The bill gained traction following the tragic suicide of teenager Adam Raine, who reportedly engaged in distressing conversations with OpenAI’s ChatGPT, discussing his mental health struggles. This incident highlighted the potential dangers of unregulated AI interactions, particularly for vulnerable populations. Furthermore, leaked internal documents from Meta revealed that its chatbots were permitted to engage in “romantic” and “sensual” conversations with minors, raising additional concerns about the safety of young users.

In recent weeks, U.S. lawmakers have intensified scrutiny of AI platforms, particularly regarding their safeguards for protecting minors. The Federal Trade Commission (FTC) is preparing to investigate the impact of AI chatbots on children’s mental health. Concurrently, Texas Attorney General Ken Paxton has initiated investigations into Meta and Character.AI, accusing them of misleading children with mental health claims. Additionally, Senators Josh Hawley and Ed Markey have launched separate inquiries into Meta’s practices.

Legislative Intent and Future Implications

Senator Padilla emphasized the urgency of the situation, stating, “I think the harm is potentially great, which means we have to move quickly.” He advocates for reasonable safeguards to ensure that minors are aware they are not conversing with real humans and that AI platforms can direct users to appropriate resources when they express distress. Padilla also called for AI companies to share data on how often they refer users to crisis services, aiming for a clearer understanding of the issue rather than reacting only after harm occurs.

While SB 243 has made significant strides, amendments have diluted some of its original provisions. For instance, the initial draft sought to prevent AI chatbots from employing "variable reward" tactics that encourage excessive engagement, a strategy used by companies like Replika and Character.AI to create addictive user experiences. The current version of the bill has also dropped a requirement for operators to track and report instances in which chatbots initiated discussions about suicidal ideation.

Senator Becker said he believes the bill balances addressing potential harms against imposing regulations that would be impractical for companies to implement. "I think it strikes the right balance of getting to the harms without enforcing something that's either impossible for companies to comply with," he stated.

The Broader Context of AI Regulation

As SB 243 progresses, it arrives at a time when Silicon Valley companies are investing heavily in political action committees (PACs) to support candidates who favor a lenient approach to AI regulation. The bill is also being considered alongside another piece of legislation, SB 53, which would impose comprehensive transparency reporting requirements on AI companies. OpenAI has publicly urged Governor Newsom to abandon SB 53 in favor of less stringent federal and international frameworks, while major tech firms like Meta, Google, and Amazon have expressed opposition to the bill. In contrast, only Anthropic has voiced support for SB 53.

Padilla has countered the notion that innovation and regulation are mutually exclusive, asserting, "I reject the premise that this is a zero-sum situation." He believes it is possible to foster innovation while simultaneously implementing reasonable safeguards for vulnerable populations. "We can support innovation and development that we think is healthy and has benefits, and there are benefits to this technology, clearly, and at the same time, we can provide reasonable safeguards for the most vulnerable people," he added.

Industry Response and Future Considerations

In light of the evolving regulatory landscape, companies like Character.AI have expressed a willingness to collaborate with regulators and lawmakers. A spokesperson for the company noted that they already include disclaimers throughout the user chat experience, indicating that interactions should be treated as fictional. Meanwhile, Meta has declined to comment on the legislation, and TechCrunch has reached out to other major players like OpenAI, Anthropic, and Replika for their perspectives.

Conclusion

As California moves closer to enacting SB 243, the implications of this legislation could set a precedent for AI regulation across the United States. By prioritizing the safety of minors and vulnerable users, California is taking a proactive stance in addressing the challenges posed by rapidly advancing AI technologies. The outcome of this bill may influence future regulatory frameworks and the responsibilities of AI companies, shaping the landscape of digital interactions for years to come.
