Life-altering decisions can happen in seconds. We make them every day, often without realizing their impact. Now artificial intelligence is beginning to do the same, making choices faster and at a scale we are only starting to understand.
The risks that come with that speed and scale are no longer theoretical. California’s new SB 53 law was created to ensure those moments remain transparent, accountable, and safe.
The law focuses on highly advanced, general-purpose AI systems known as frontier AI: models capable of reasoning, generating, and acting across diverse domains.
While frontier AI holds significant transformative potential in industries such as healthcare and finance, it also poses substantial risks to public safety and national security.
Today, frontier AI largely refers to large language models (LLMs) and general-purpose, agentic AI. OpenAI’s GPT-4, Google’s Gemini, and Anthropic’s Claude 3.5 are examples of frontier AI. Future models that cross certain training-compute thresholds (SB 53 uses 10^26 floating-point operations) or capability levels will also fall under this category.
Currently, no general-purpose AI law exists at the federal level. However, the passage of California’s and Colorado’s AI laws may pressure the U.S. Congress to enact a unified federal AI law to prevent companies from facing a patchwork of differing state regulations.
The law primarily applies to the largest frontier AI companies (those with annual revenues above $500 million), essentially the major LLM developers. Although it is a California law, it will function much like a national law, since all large U.S. AI companies are either headquartered in California or conduct business within the state.
While the law is mandatory only for these large frontier AI organizations, its provisions will likely influence smaller AI developers and, importantly, large U.S. enterprises that deploy or rely on AI technologies.
The law focuses on promoting transparency, safety, and accountability regarding potential or actual catastrophic risks associated with AI. Catastrophic risks are foreseeable situations in which a frontier model materially contributes to the death of or serious injury to more than 50 people, or to more than $1 billion in property damage, in a single incident.
Examples include a model providing expert-level assistance in creating or releasing a chemical, biological, radiological, or nuclear weapon; carrying out a cyberattack or other serious criminal conduct without meaningful human oversight; or evading the control of its developer or user.
A catastrophic outcome does not need to occur for the law to be violated. It is sufficient for a frontier AI system to lack appropriate safeguards or to evade established controls.
Transparency: The law requires large frontier AI companies to publish a framework on their websites outlining how they identify, assess, and mitigate catastrophic risks. This includes defining risk thresholds, mitigation strategies, and cybersecurity measures. Companies must also disclose how they incorporate national, international, and industry-consensus best practices into their processes.
Safety: The law requires large frontier AI companies to report critical safety incidents to California’s Office of Emergency Services and provides a channel for the public to report potential incidents as well.
Accountability: The law provides protections for whistleblowers who disclose significant health or safety risks posed by large frontier AI models.
The legislation does not dictate how AI systems must be engineered. Instead, it establishes requirements for transparency, risk disclosure, and accountability in the event of incidents.
SB 53 also establishes a consortium, known as CalCompute, to create a public cloud computing cluster that offers free or low-cost access to AI computing resources. This cluster is expected to benefit academics and startups that lack the financial capacity of major AI firms. The consortium’s goal is to promote AI development that is safe, ethical, equitable, and sustainable.
Unlike most AI laws outside the European Union, SB 53 specifies significant penalties for non-compliance.
Penalties scale with the severity and intent of the violation, with civil penalties of up to $1 million per violation, enforced by the California Attorney General.
The law targets large frontier AI developers, not organizations that simply use AI. It also does not apply to organizations that use retrieval-augmented generation (RAG) to enhance large language models. However, if an organization deploys RAG in a way that introduces catastrophic risk beyond that of a standalone model, voluntary compliance may be a prudent measure.
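For readers less familiar with RAG, the sketch below shows, in simplified form, how a retrieval-augmented pipeline feeds external documents into an LLM’s prompt; that injection step is where a deployment can take on risk beyond that of the standalone model. The retriever, document store, and generate function are hypothetical placeholders for illustration, not any specific vendor’s API.

```python
# Minimal, illustrative RAG sketch (hypothetical components, not a specific vendor API).
# It shows where external content enters the model's context -- the step that can
# change a deployment's risk profile relative to the standalone model.

from dataclasses import dataclass


@dataclass
class Document:
    source: str
    text: str


def retrieve(query: str, store: list[Document], k: int = 2) -> list[Document]:
    """Toy keyword-overlap retriever standing in for a vector search service."""
    def score(doc: Document) -> int:
        return len(set(query.lower().split()) & set(doc.text.lower().split()))
    return sorted(store, key=score, reverse=True)[:k]


def build_prompt(query: str, docs: list[Document]) -> str:
    """Inject retrieved passages into the prompt sent to the LLM."""
    context = "\n".join(f"[{d.source}] {d.text}" for d in docs)
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"


def generate(prompt: str) -> str:
    """Placeholder for a call to a frontier LLM."""
    return f"(model response based on a {len(prompt)}-character prompt)"


if __name__ == "__main__":
    store = [
        Document("policy-handbook", "Frontier model deployments require documented safeguards."),
        Document("incident-log", "A prior incident involved an agent evading its shutdown control."),
    ]
    question = "What safeguards apply to frontier model deployments?"
    print(generate(build_prompt(question, retrieve(question, store))))
```

In a real deployment the retriever would typically be backed by a vector database and the generate step by a hosted frontier model; the relevant governance question, per the paragraph above, is whether that external data path introduces catastrophic risk beyond the standalone model.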
More broadly, the law’s emphasis on transparency, public disclosure, and adherence to established AI standards will likely create pressure for all organizations—particularly those in regulated industries such as banking, finance, insurance, and healthcare—to adopt similar governance practices.
Below are several national, international, and industry standards that companies should monitor or align with:
| Title | Scope | Notes |
| --- | --- | --- |
| Organization for Economic Cooperation and Development (OECD) – Principles for Trustworthy AI | 47 nations (including the U.S.) | First intergovernmental AI guidelines; directionally useful but non-binding |
| ISO/IEC 42001 – AI Management System | International | Framework for AI management systems (non-binding) |
| EU AI Act | European Union | Took effect August 1, 2024; mandates AI audits; includes significant penalties though enforcement is still ramping up |
| White House – Blueprint for an AI Bill of Rights | United States | Non-binding guidance from the Biden administration; influential for federal and state-level policy |
| White House – Removing Barriers to American Leadership in Artificial Intelligence | United States | Executive directive modifying earlier guidance and altering regulatory tone |
| Colorado AI Act | Colorado | First comprehensive state-level AI law; limited enforcement power; effective February 2026; serves as a model for other states |
Industry-specific best practices also continue to evolve. For instance, the NAIC Model Bulletin on the Use of AI Systems (2023) offers guidance on ethical AI use in insurance.
Celsior provides AI Strategy and Governance consulting for organizations in banking, insurance, healthcare, and other sectors. As part of this offering, we conduct AI audits.
We also assist with AI implementation through talent provisioning, custom training, or defined technology solutions.