California’s new AI laws – and how they compare to other states
Artificial intelligence is reshaping every corner of society, from the way we work and learn to how we communicate, govern, and create. As the technology accelerates, states across the U.S. are racing to decide how it should be governed. In 2025, California took a commanding lead by introducing one of the most ambitious and comprehensive AI regulatory packages in the nation. The following explores what California has enacted, how these laws compare to other states' efforts, and what the changes may mean for companies, policymakers, and the public.
California has passed a landmark set of AI laws in 2025 that positions the state as one of the most proactive regulators of advanced artificial intelligence in the United States. The legislation combines requirements for “frontier AI” safety and disclosure, a first-in-the-nation framework regulating companion chatbots, updates to the state’s AI transparency law, and guidance from the Attorney General on how existing laws apply to AI. Together, these measures represent a comprehensive approach to AI oversight that goes beyond what most other U.S. states have enacted.
The Transparency in Frontier Artificial Intelligence Act, or SB 53, focuses on very powerful AI systems and the firms that develop them. It requires large frontier developers to publish safety frameworks that demonstrate adherence to national and international standards and industry best practices. Companies must report “critical safety incidents” to state authorities so that catastrophic or systemically dangerous failures are documented and addressed. The law also establishes whistleblower protections, allowing employees to report safety concerns without fear of retaliation, and encourages safer research through public compute initiatives such as the proposed “CalCompute.” By targeting catastrophic and long-tail risks, SB 53 extends oversight beyond everyday harms such as bias or privacy violations to broader systemic safety concerns.
SB 243, the companion chatbot law, regulates AI systems designed to simulate human companionship, particularly those that interact emotionally with users. Operators of these systems must provide clear disclosures so users understand they are interacting with AI, and must implement age-verification safeguards to protect minors from harmful content. The law also mandates safety protocols for crisis situations, requiring platforms to respond appropriately when users express suicidal thoughts or indicate a risk of self-harm. By focusing on the emotional and psychological impact of AI, the law addresses potential harms that are often overlooked in traditional AI regulation.
AB 853 updates California’s AI Transparency Act, clarifying which providers are covered and extending compliance timelines. It strengthens disclosure requirements for AI-generated content, ensuring users can differentiate between human-created and AI-generated materials, and calls for tools that allow verification of content origins. The overarching goal is to reduce deception and increase accountability in the deployment of generative AI systems.
California has paired these obligations with significant enforcement powers. The Attorney General and other state agencies can pursue civil penalties for violations, and the state has signaled plans to enhance technical capacity to support enforcement. This combination of detailed obligations and active enforcement makes California a bellwether for AI regulation in the United States.
Compared to other states, California’s approach is both broader and more prescriptive. Colorado has taken a risk-based approach that focuses on high-risk AI systems, such as those used in hiring, housing, or health care, and requires risk assessments and nondiscrimination safeguards. Texas enacted the Responsible Artificial Intelligence Governance Act, or TRAIGA, which establishes a governance framework and bans certain uses of AI, including behavioral manipulation and some unlawful deepfakes, but it is generally viewed as more business-friendly than California’s prescriptive safety and disclosure rules. Most other states continue to pass sector-specific laws, such as prohibitions on deepfakes and AI-generated child sexual abuse material, disclosure requirements in hiring, and biometric restrictions, rather than adopting a comprehensive framework.
Overall, the U.S. landscape remains a patchwork of regulations, with California emerging as the most aggressive and prescriptive state, Colorado emphasizing a risk-based model, and Texas and other states opting for narrower governance measures.
For companies, researchers, and users in California, these laws have immediate implications. Developers of large AI models must prepare public safety frameworks, establish internal reporting systems, and anticipate mandatory incident reporting. Operators of companion chatbots need to implement age verification, clear user disclosures, and crisis protocols ahead of the law’s effective dates. Researchers may benefit from public compute initiatives but must also document compliance and ensure whistleblower-safe reporting mechanisms are in place. These laws reflect a growing recognition that AI presents both opportunities and risks, and they aim to create a regulatory environment that encourages innovation while protecting public safety and vulnerable populations.
California’s 2025 AI legislation is notable for combining frontier AI oversight with concrete consumer protections and transparency requirements. It is likely to influence corporate compliance practices, inspire legislation in other states, and set the tone for how AI is deployed responsibly in the United States. By taking a proactive and multifaceted approach, California is seeking to lead in both innovation and the safe, ethical use of artificial intelligence.
As America grapples with how to shape the future of AI, California has signaled that innovation and regulation are not opposing forces but partners. Its 2025 laws represent an early attempt to build guardrails around rapidly advancing technology while still encouraging growth and discovery. Whether other states follow this model or chart their own paths will help determine the national character of AI governance for years to come.