California Governor Newsom Signs Groundbreaking AI Legislation into Law

On September 29, 2025, California Governor Gavin Newsom signed into law the Transparency in Frontier Artificial Intelligence Act (SB-53) (the “Act”). The Act imposes new transparency and safety requirements on developers of “frontier” artificial intelligence (“AI”) models and establishes whistleblower protections for their employees. The Act aims to prevent catastrophic risks from the use of frontier models, increase public and government oversight of the technology, and protect employees who report serious problems with it. The Act will go into effect on January 1, 2026.

Relevant Definitions

The Act introduces several novel definitions, including:

  • “AI model”: an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.
  • “Critical safety incident”: any of the following: (1) unauthorized access to, modification of, or exfiltration of the model weights of a foundation model that results in death, bodily injury, or damage to, or loss of, property; (2) harm resulting from the materialization of a catastrophic risk; (3) loss of control of a foundation model causing death or bodily injury; or (4) a foundation model that uses deceptive techniques against the frontier developer to subvert the controls or monitoring of its frontier developer outside of the context of an evaluation designed to elicit this behavior and in a manner that demonstrates materially increased catastrophic risk.
  • “Catastrophic risk”: a foreseeable and material risk that a frontier developer’s development, storage, use or deployment of a frontier model will materially contribute to the death of, or serious injury to, more than 50 people, or more than $1 billion in damage to, or loss of, property, arising from a single incident involving a frontier model doing any of the following: (1) providing expert-level assistance in the creation or release of a chemical, biological, radiological or nuclear weapon; (2) engaging in conduct with no meaningful human oversight, intervention or supervision that is either a cyberattack or, if the conduct had been committed by a human, would constitute the crime of murder, assault, extortion or theft, including theft by false pretense; or (3) evading the control of its frontier developer or user.
  • “Frontier developer”: a person who has trained a frontier model.
  • “Foundation model”: an AI model that is: (1) trained on a broad data set; (2) designed for generality of output; and (3) adaptable to a wide range of distinctive tasks.
  • “Frontier model”: a foundation model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations (a rough way to estimate training compute against this threshold is sketched after these definitions).
  • “Frontier AI framework”: documented technical and organizational protocols to manage, assess, and mitigate catastrophic risks.
  • “Large frontier developer”: a frontier developer with more than $500 million in annual gross revenues.
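
To put the 10^26 threshold in perspective, developers often estimate total training compute with the rule of thumb that training operations ≈ 6 × (model parameters) × (training tokens). The sketch below applies that heuristic to a hypothetical model; the 6·N·D approximation and the example figures are illustrative assumptions rather than anything prescribed by the Act, and an actual determination should rest on the developer’s own compute accounting.

# Rough check against the Act's 10^26-operation "frontier model" threshold.
# Assumption: training compute is approximated as 6 * parameters * training tokens,
# a widely used heuristic that is NOT part of the Act's text.

FRONTIER_THRESHOLD_OPS = 1e26  # 10^26 integer or floating-point operations


def estimated_training_ops(parameters: float, training_tokens: float) -> float:
    """Estimate total training operations using the 6 * N * D rule of thumb."""
    return 6 * parameters * training_tokens


def may_be_frontier_model(parameters: float, training_tokens: float) -> bool:
    """Return True if the rough estimate exceeds the Act's threshold."""
    return estimated_training_ops(parameters, training_tokens) > FRONTIER_THRESHOLD_OPS


if __name__ == "__main__":
    # Hypothetical example: 1 trillion parameters trained on 20 trillion tokens.
    ops = estimated_training_ops(1e12, 20e12)
    print(f"Estimated training operations: {ops:.2e}")  # ~1.20e+26
    print("Exceeds 10^26 threshold:", ops > FRONTIER_THRESHOLD_OPS)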

Key Requirements

The Act’s key requirements include:

  • Safety Framework. A large frontier developer must create and publish a “frontier AI framework” that describes how the large frontier developer:
    • Incorporates national standards, international standards and industry-consensus best practices into its frontier AI framework.
    • Defines thresholds used to identify and assess whether a frontier model has capabilities that could pose a catastrophic risk (which may include multiple-tiered thresholds).
    • Applies mitigations to address the potential for catastrophic risks based on the results of assessments required by the Act.
    • Reviews assessments and adequacy of mitigations as part of the decision to deploy a frontier model or use it extensively internally.
    • Uses third parties to assess the potential for catastrophic risks and the effectiveness of mitigations of catastrophic risks.
    • Revisits and updates the frontier AI framework (including any criteria that trigger updates and how the large frontier developer determines when its frontier models have been substantially modified enough to trigger the disclosures required under the Act).
    • Implements cybersecurity practices to secure unreleased “model weights” (i.e., a numerical parameter in a frontier model that is adjusted through training and that helps determine how inputs are transformed into outputs) from unauthorized modification or transfer by internal or external parties.
    • Identifies and responds to “critical safety incidents.”
    • Institutes internal governance practices to ensure implementation of the above-described processes.
    • Assesses and manages catastrophic risk resulting from the internal use of its frontier models, including risks resulting from a frontier model circumventing oversight mechanisms.
  • Risk Assessments. Before, or concurrently with, deploying a new (or substantially modified) frontier model, a large frontier developer must: (1) conduct an assessment of catastrophic risks (“risk assessment”) in accordance with the developer’s frontier AI framework and (2) clearly and conspicuously publish on its website a transparency report.
    • The transparency report must include: (1) a link to the frontier developer’s website; (2) a method to communicate with the developer; (3) the frontier model’s release date; (4) the languages supported by the frontier model; (5) the modalities of output supported by the frontier model; (6) the intended uses of the frontier model; (7) any generally applicable restrictions or conditions on uses of the frontier model; (8) a summary of catastrophic risks identified in the developer’s risk assessment; (9) the results of the risk assessment; (10) whether third parties were involved in the risk assessment; and (11) other steps taken to fulfill the requirements of the frontier AI framework with respect to the frontier model.
  • Risk Assessment Submissions to Office of Emergency Services. Every three months (or on another reasonable schedule communicated to the California Governor’s Office of Emergency Services (“Cal OES”)), large frontier developers must provide Cal OES with a summary of any assessment of catastrophic risk from internal use of their frontier models. They must also report any critical safety incidents (e.g., loss of control, unauthorized access to or exfiltration of frontier models) within 15 days of the incident, or within 24 hours if there is an imminent risk of death or serious physical injury. Reports are kept confidential and exempt from public records laws to protect trade secrets.
  • Government Oversight and CalCompute. The California Government Operations Agency must establish a consortium to develop a public cloud computing cluster to be known as “CalCompute.” CalCompute is to foster safe, ethical, and equitable AI research.
  • Whistleblower Protections. Frontier developers may not retaliate against employees who disclose information about activities that pose a specific and substantial danger to the public health or safety resulting from a catastrophic risk. Frontier developers must give employees clear notice of these rights and maintain a reasonable internal process through which a covered employee may anonymously disclose relevant information. Employees who experience retaliation can seek injunctive relief and attorneys’ fees.
  • Enforcement and Penalties for Noncompliance. The Act empowers the California Attorney General to enforce compliance with the Act. Certain violations of the Act may result in civil penalties of up to $1 million per violation.

Any entity developing or training advanced AI models should promptly assess whether it is covered by the Act before the Act’s effective date of January 1, 2026, and take appropriate steps to comply with the Act’s requirements.
