On September 29, 2025, California Governor Gavin Newsom signed into law the Transparency in Frontier Artificial Intelligence Act (SB-53) (the “Act”). The Act establishes new transparency and safety requirements, as well as whistleblower protections, for “frontier” artificial intelligence (“AI”) models. The Act aims to prevent catastrophic risks from the use of frontier models, increase public and government oversight of the technology, and protect employees who report serious problems with it. The Act will go into effect on January 1, 2026.
Relevant Definitions
The Act introduces several novel definitions, including:
- “AI model”: an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.
- “Critical safety incident”: any of the following: (1) unauthorized access to, modification of, or exfiltration of the model weights of a foundation model that results in death, bodily injury, or damage to, or loss of, property; (2) harm resulting from the materialization of a catastrophic risk; (3) loss of control of a foundation model causing death or bodily injury; or (4) a foundation model that uses deceptive techniques against the frontier developer to subvert the controls or monitoring of its frontier developer outside of the context of an evaluation designed to elicit this behavior and in a manner that demonstrates materially increased catastrophic risk.
- “Catastrophic risk”: a foreseeable and material risk that a frontier developer’s development, storage, use or deployment of a frontier model will materially contribute to the death of, or serious injury to, more than 50 people, or more than $1 billion in damage to, or loss of, property arising from a single incident involving a frontier model doing any of the following: (1) providing expert-level assistance in the creation or release of a chemical, biological, radiological or nuclear weapon; (2) engaging in conduct with no meaningful human oversight, intervention or supervision that is either a cyberattack or, if the conduct had been committed by a human, would constitute the crime of murder, assault, extortion or theft, including theft by false pretense; or (3) evading the control of its frontier developer or user.
- “Frontier developer”: a person who has trained a frontier model.
- “Foundation model”: an AI model that is: (1) trained on a broad data set; (2) designed for generality of output; and (3) adaptable to a wide range of distinctive tasks.
- “Frontier model”: a foundation model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations.
- “Frontier AI framework”: documented technical and organizational protocols to manage, assess, and mitigate catastrophic risks.
- “Large frontier developer”: a frontier developer with more than $500 million in annual gross revenues.
Key Requirements
The Act’s key requirements include:
- Safety Framework. A large frontier developer must create and publish a “frontier AI framework” that describes how it:
  - Incorporates national standards, international standards, and industry-consensus best practices into its frontier AI framework.
  - Defines thresholds used to identify and assess whether a frontier model has capabilities that could pose a catastrophic risk (which may include multiple-tiered thresholds).
  - Applies mitigations to address the potential for catastrophic risks based on the results of assessments required by the Act.
  - Reviews assessments and the adequacy of mitigations as part of the decision to deploy a frontier model or use it extensively internally.
  - Uses third parties to assess the potential for catastrophic risks and the effectiveness of mitigations of those risks.
  - Revisits and updates the frontier AI framework (including any criteria that trigger updates and how the large frontier developer determines when its frontier models are substantially modified enough to trigger the Act’s disclosure requirements).
  - Implements cybersecurity practices to secure unreleased “model weights” (i.e., the numerical parameters in a frontier model that are adjusted through training and that help determine how inputs are transformed into outputs) from unauthorized modification or transfer by internal or external parties.
  - Identifies and responds to “critical safety incidents.”
  - Institutes internal governance practices to ensure implementation of the above-described processes.
  - Assesses and manages catastrophic risk resulting from the internal use of its frontier models, including risks resulting from a frontier model circumventing oversight mechanisms.
- Risk Assessments. Before, or concurrently with, deploying a new (or substantially modified) frontier model, a large frontier developer must: (1) conduct an assessment of catastrophic risks (“risk assessment”) in accordance with the developer’s frontier AI framework and (2) clearly and conspicuously publish on its website a transparency report.
  - The transparency report must include: (1) a link to the frontier developer’s website; (2) a method to communicate with the developer; (3) the frontier model’s release date; (4) the languages supported by the frontier model; (5) the modalities of output supported by the frontier model; (6) the intended uses of the frontier model; (7) any generally applicable restrictions or conditions on uses of the frontier model; (8) a summary of catastrophic risks identified in the developer’s risk assessment; (9) the results of the risk assessment; (10) whether third parties were involved in the risk assessment; and (11) other steps taken to fulfill the requirements of the frontier AI framework with respect to the frontier model.
- Risk Assessment Submissions to Office of Emergency Services. Every three months (or on another reasonable schedule communicated to the California Governor’s Office of Emergency Services (“Cal OES”)), large frontier developers must provide Cal OES with a summary of any assessment of catastrophic risk resulting from internal use of their frontier models. They must also report any critical safety incidents (e.g., loss of control of, or unauthorized access to or exfiltration of, frontier models) within 15 days of the incident, or within 24 hours if the incident poses an imminent risk of death or serious physical injury. These reports are kept confidential and are exempt from public records laws to protect trade secrets.
- Government Oversight and CalCompute. The California Government Operations Agency must establish a consortium to develop a public cloud computing cluster to be known as “CalCompute,” which is intended to foster safe, ethical, and equitable AI research.
- Whistleblower Protections. Frontier developers may not retaliate against employees who disclose information about activities that pose a specific and substantial danger to the public health or safety resulting from a catastrophic risk. Frontier developers must provide clear notice to employees of their rights under these provisions, and large frontier developers must also provide a reasonable internal process through which a covered employee may anonymously disclose relevant information. Employees who experience retaliation may seek injunctive relief and attorneys’ fees.
- Enforcement and Penalties for Noncompliance. The California Attorney General is empowered to enforce the Act, and certain violations may result in civil penalties of up to $1 million per violation.
Any entity developing or training advanced AI models should promptly assess whether it is covered by the Act before the Act’s effective date of January 1, 2026, and take appropriate steps to comply with the Act’s requirements.