What you need to know about California SB 53: The Frontier AI Transparency Act

Life-altering decisions can happen in seconds. We make them every day, often without realizing their impact. Now artificial intelligence is beginning to do the same, making choices faster and at a scale we are only starting to understand.

That risk is no longer theoretical. California’s new SB 53 law was created to ensure those moments remain transparent, accountable, and safe.

The law focuses on highly advanced, general-purpose AI systems known as frontier AI, which are capable of reasoning, generating content, and acting across diverse domains.

While frontier AI is considered to have significant transformative potential in industries such as healthcare and finance, it also poses substantial risks to public safety and national security.

Today, frontier AI largely refers to large language models (LLMs) and general-purpose, agentic AI. OpenAI’s GPT-4, Google Gemini, and Anthropic’s Claude 3.5 are examples of frontier AI. Future models that meet certain computational thresholds or capabilities may also fall under this category.

Currently, no general-purpose AI law exists at the federal level. However, the passage of California’s and Colorado’s AI laws may pressure the U.S. Congress to enact a unified federal AI law to prevent companies from facing a patchwork of differing state regulations.

Who does the law apply to

The law primarily applies to the largest frontier AI companies, essentially the major LLM developers. Although it is a California law, it will function much like a national law since all large U.S. AI companies are either headquartered in California or conduct business within the state.

While the law is mandatory only for these large frontier AI organizations, its provisions will likely influence smaller AI developers and, importantly, large U.S. enterprises that deploy or rely on AI technologies.

What the law does

The law focuses on promoting transparency, safety, and accountability regarding potential or actual catastrophic risks associated with AI. Catastrophic risks involve foreseeable situations that could lead to:

  • The death or serious injury of more than 50 people
  • More than $1 billion in property damage or loss

Examples include:

  • The creation of chemical, biological, or nuclear weapons
  • Autonomous harmful conduct that would constitute a serious crime such as murder, assault, extortion, or theft
  • Loss of control by AI developers or users
  • Deceptive techniques that significantly increase catastrophic risk

A catastrophic outcome does not need to occur for the law to be violated. It is sufficient for a frontier AI system to lack appropriate safeguards or to evade established controls.

Transparency, safety, and accountability

Transparency: The law requires large frontier AI companies to publish a framework on their websites outlining how they identify, assess, and mitigate catastrophic risks. This includes defining risk thresholds, mitigation strategies, and cybersecurity measures. Companies must also disclose how they incorporate national, international, and industry-consensus best practices into their processes.

Safety: The law enables large frontier AI companies and the public to report potential critical safety incidents to government authorities.

Accountability: The law provides protections for whistleblowers who disclose significant health or safety risks posed by large frontier AI models.

The legislation does not dictate how AI systems must be engineered. Instead, it establishes requirements for transparency, risk disclosure, and accountability in the event of incidents.
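
For teams thinking about how to operationalize these publication requirements, the sketch below shows one hypothetical way to track the elements described above in structured form. It is written in Python purely for illustration; the field names and the completeness check are assumptions, not a format SB 53 prescribes.

    # A hypothetical structure for tracking the elements SB 53 asks large frontier
    # AI developers to publish. Field names are illustrative assumptions only;
    # the law does not prescribe a publication format.
    frontier_framework = {
        "risk_identification": "How catastrophic risks are identified and assessed",
        "risk_thresholds": "Thresholds that trigger additional review or mitigation",
        "mitigation_strategies": "Safeguards and deployment restrictions",
        "cybersecurity_measures": "Controls protecting model weights and infrastructure",
        "standards_alignment": "National, international, and industry-consensus practices followed",
        "incident_reporting": "Process for reporting critical safety incidents to authorities",
    }

    # A simple completeness check an internal review might run before publication.
    missing = [section for section, content in frontier_framework.items() if not content]
    print("Missing sections:", missing if missing else "none")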

Innovation

SB 53 also establishes a consortium to develop CalCompute, a public cloud computing cluster offering free or low-cost access to AI computing resources. This cluster is expected to benefit academics and startups that lack the financial capacity of major AI firms. The consortium’s goal is to promote AI development that is safe, ethical, equitable, and sustainable.

What are the law’s penalties

Unlike most AI laws outside the European Union, SB 53 specifies significant penalties for non-compliance.

Penalties depend on the severity and intent of the violation:

  • Unknowing violations that pose no material risk, as well as auditor violations, can result in fines of up to $10,000.
  • Knowing violations involving a material risk of a catastrophic event can result in fines of up to $1 million for a first offense and up to $10 million for subsequent violations.

What does this mean for organizations that use AI

The law targets large frontier AI developers, not organizations that simply use AI. It also does not apply to organizations that use retrieval-augmented generation (RAG) to enhance large language models. However, if an organization deploys RAG in a way that introduces catastrophic risk beyond that of a standalone model, voluntary compliance may be a prudent measure.
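
For readers less familiar with the term, the sketch below illustrates the basic RAG pattern: retrieve documents relevant to a question, then pass them to the model as added context. It is a minimal Python illustration; the keyword-based search and sample documents are stand-ins for the embedding-based vector search and hosted LLM a real deployment would use, and the underlying model itself is not modified.

    # A minimal, self-contained sketch of retrieval-augmented generation (RAG).
    # The keyword-overlap scoring and the document list are illustrative stand-ins;
    # real deployments use embedding-based vector search and a hosted LLM.

    def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
        # Rank documents by word overlap with the question (a stand-in for
        # embedding similarity search).
        q_words = set(question.lower().split())
        ranked = sorted(
            documents,
            key=lambda d: len(q_words & set(d.lower().split())),
            reverse=True,
        )
        return ranked[:top_k]

    def build_prompt(question: str, context: list[str]) -> str:
        # Augment the question with retrieved context. RAG only adds grounding
        # data at query time; the model's weights are unchanged.
        joined = "\n".join(f"- {c}" for c in context)
        return f"Answer using only this context:\n{joined}\n\nQuestion: {question}"

    if __name__ == "__main__":
        docs = [
            "SB 53 requires large frontier AI developers to publish a risk framework.",
            "RAG retrieves relevant documents and passes them to the model as context.",
        ]
        question = "What does RAG add to a language model?"
        prompt = build_prompt(question, retrieve(question, docs))
        print(prompt)  # This prompt would then be sent to the organization's chosen LLM.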

More broadly, the law’s emphasis on transparency, public disclosure, and adherence to established AI standards will likely create pressure for all organizations—particularly those in regulated industries such as banking, finance, insurance, and healthcare—to adopt similar governance practices.

Among the national and international standards companies should monitor or align with are the NIST AI Risk Management Framework (AI RMF 1.0) and ISO/IEC 42001, the international standard for AI management systems.

Industry-specific best practices also continue to evolve. For instance, the NAIC Model Bulletin on the Use of AI Systems (2023) offers guidance on ethical AI use in insurance.

How Celsior can help with AI governance, transparency, and safety

Celsior provides AI Strategy and Governance consulting for organizations in banking, insurance, healthcare, and other sectors. As part of this offering, we conduct AI audits that include:

  • Evaluating current AI initiatives based on reliability, effectiveness, ethics, and compliance
  • Prioritizing and remediating audit gaps according to potential penalties, business impact, reputational risk, and company values
  • Defining processes to evaluate the governance of new AI projects during business approval
  • Establishing a recurring audit and compliance review process

We also assist with AI implementation through talent provisioning, custom training, or defined technology solutions.
