California SB 53 could finally check Big AI power: here’s how
TL;DR:
- SB 53 targets “frontier” AI and large developers with new disclosures.
- Developers with more than $500M in annual revenue must publish safety frameworks.
- Critical safety incidents must be reported within 15 days.
- A state-backed CalCompute plan aims to widen access to compute.
- Newsom’s decision will influence national AI rules and timing.
California lawmakers sent SB 53 to Governor Gavin Newsom in mid-September 2025. The bill creates the Transparency in Frontier Artificial Intelligence Act, aimed at the largest AI developers building “frontier” foundation models. It adds public disclosure duties, incident reporting, and whistleblower protections. It also lays groundwork for CalCompute, a state-affiliated cloud cluster intended to widen access to compute.
TechCrunch reports the bill is written to focus on major AI developers, not small startups, with a revenue threshold that clearly covers firms like OpenAI and Google DeepMind. A management-side law firm analysis adds that lawmakers narrowed the scope compared with last year’s failed attempt, pairing transparency with compute access and worker protections. The bill text defines key thresholds by both compute used to train models and developer revenue, which limits who must comply.
What SB 53 requires
Publish a frontier AI safety framework. Large frontier developers must write, follow, and publicly post an internal framework that explains how they assess catastrophic risks, apply mitigations, review decisions before deployment, and secure unreleased model weights. The framework must be reviewed and updated at least yearly, with clear change logs if it is modified.
File a transparency report before deployment. Before or at the time of deployment, frontier developers must publish a report that covers release details, supported languages and modalities, intended uses and restrictions, and summaries of catastrophic risk assessments, including any third-party evaluations. A model or system card that includes this content counts.
Report critical safety incidents on a clock. If a developer discovers a qualifying incident, it must report it to California’s Office of Emergency Services within 15 days. If there is imminent risk of death or serious injury, disclosure to the appropriate authority must happen within 24 hours. The state will publish anonymized, aggregated statistics yearly starting January 1, 2027.
Protect whistleblowers. Covered employees who work on critical safety risks can report concerns to authorities or through internal channels without retaliation. Large frontier developers must provide an anonymous intake process with periodic updates to the reporter and board-level visibility. Courts can award fees and grant injunctions in retaliation cases.
Define who is covered. The statute defines a “frontier model” as a foundation model trained with more than 10^26 operations. A “large frontier developer” is a frontier developer with more than 500 million dollars in annual gross revenue. These definitions can be updated by the Department of Technology from 2027 onward to match technical progress.
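To make the two thresholds concrete, here is a minimal sketch in Python of how a coverage check might look. It assumes a simplified reading of the definitions above; the constants mirror the bill’s numbers, but the function names and structure are illustrative, not statutory language.

```python
# Hypothetical sketch of SB 53's coverage thresholds as summarized above.
# The constants mirror the bill's definitions; the function names and
# structure are illustrative, not statutory language.

FRONTIER_COMPUTE_THRESHOLD_OPS = 10**26        # training compute, in operations
LARGE_DEVELOPER_REVENUE_USD = 500_000_000      # annual gross revenue, in dollars


def is_frontier_model(training_ops: float) -> bool:
    """Foundation model trained with more than 10^26 operations."""
    return training_ops > FRONTIER_COMPUTE_THRESHOLD_OPS


def is_large_frontier_developer(training_ops: float, annual_revenue_usd: float) -> bool:
    """Frontier developer whose annual gross revenue exceeds $500 million."""
    return is_frontier_model(training_ops) and annual_revenue_usd > LARGE_DEVELOPER_REVENUE_USD


# Example: a model trained with 3e26 operations by a developer with $2B in revenue
print(is_large_frontier_developer(3e26, 2_000_000_000))  # True
```

In practice, counting training operations and consolidated revenue is a legal and accounting question, not a one-line boolean, but the sketch shows how the two tests compound: only developers that clear both lines face the full disclosure duties.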
Create a reporting and research loop. Developers must send periodic summaries of internal catastrophic risk assessments to the state. The Office of Emergency Services will run the reporting system and can share information with other agencies, with protections for trade secrets and security.
The CalCompute piece
Separate sections stand up a consortium to design “CalCompute,” a public cloud cluster to foster safe, equitable AI research and development. The idea is to broaden access to compute outside the largest companies, with academic, labor, advocacy, and technical seats on the consortium. It becomes operative only once funded.
Why supporters say SB 53 matters
Supporters, including the bill’s author, argue this is the first state framework that forces leading labs to show their work on catastrophic risk, while giving researchers affordable compute and protecting insiders who raise red flags. They say the combination of public safety disclosures, timelines for incident reporting, and whistleblower protections will reduce the chance that a high-capability model causes severe harm.
Analysts also call out the practical effect of thresholds. By focusing on models trained above a compute line and developers above a revenue line, the bill aims to avoid crushing small companies while still capturing the firms most able to cause broad harm. That design is deliberate, according to early coverage.
The counterarguments
Industry critics warn that a single state could set de facto national rules, forcing multi-state companies to follow California’s template. A senior White House adviser recently argued that California should not set AI rules for the country, signaling possible federal preemption efforts. Others question whether the bill focuses too much on tail risks and not enough on immediate harms such as bias, copyright, or employment impacts.
What happens next
On September 18, 2025, the author’s office said the bill awaits the governor’s decision. If signed, agencies must stand up the reporting portal and the CalCompute process, and developers will need to publish frameworks and system cards before new deployments. Expect guidance, FAQs, and possibly legal challenges over scope and trade secret redactions. Employers should track definitions that the Department of Technology can update starting in 2027.
Quick checklist: if you are a covered AI developer
- Map whether your next model exceeds the 10^26 operations threshold.
- Confirm consolidated revenue against the $500 million bar.
- Draft a public frontier AI framework, including governance and cyber controls.
- Prepare a system card or transparency report with risk summaries.
- Stand up a 15-day incident-report workflow and a 24-hour emergency path (see the sketch after this checklist).
- Create an anonymous whistleblower channel and board reporting cadence.
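For the incident-reporting item in particular, a minimal deadline calculation might look like the sketch below. It encodes only the 15-day and 24-hour windows described earlier; the function and field names are hypothetical, and a real workflow would also track the recipient agency, evidence, and follow-up obligations.

```python
# Illustrative deadline math for the incident-reporting timelines described
# above: a 15-day report to the Office of Emergency Services, plus a 24-hour
# disclosure when there is imminent risk of death or serious injury.
# This is a sketch for internal planning, not legal guidance.
from datetime import datetime, timedelta


def incident_deadlines(discovered_at: datetime, imminent_risk: bool) -> dict:
    """Return hypothetical internal deadlines for a critical safety incident."""
    deadlines = {"standard_report_due": discovered_at + timedelta(days=15)}
    if imminent_risk:
        deadlines["emergency_disclosure_due"] = discovered_at + timedelta(hours=24)
    return deadlines


# Example: an incident discovered at 9:00 a.m. on October 1, 2025, with imminent risk
print(incident_deadlines(datetime(2025, 10, 1, 9, 0), imminent_risk=True))
```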
Why it matters
California often sets de facto national standards. SB 53 blends disclosure, timelines, and worker protections with a plan to widen access to compute. If signed, it will push the largest developers to show their safety math before release, while preserving space for startups. It could also accelerate federal debates over preemption and harmonization.
Sources:
- TechCrunch, Why California’s SB 53 might provide a meaningful check on big AI companies, https://techcrunch.com/2025/09/19/why-californias-sb-53-might-provide-a-meaningful-check-on-big-ai-companies/, 2025-09-19
- Fisher Phillips, California Lawmakers Pass Landmark AI Transparency Law for Frontier Models, https://www.fisherphillips.com/en/news-insights/california-lawmakers-pass-landmark-ai-transparency-law-for-frontier-models.html, 2025-09-15
- California Legislature, SB 53 bill text, https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260SB53, 2025-09-20
- Senator Scott Wiener’s office, WHAT THEY ARE SAYING: Landmark AI Bill Awaits Newsom’s Decision, https://sd11.senate.ca.gov/news/what-they-are-saying-senator-wieners-landmark-ai-bill-awaits-newsoms-decision, 2025-09-18
- Vox, This California bill will require transparency from AI companies, https://www.vox.com/future-perfect/461340/sb53-california-ai-bill-catastrophic-risk-explained, 2025-09-14
- POLITICO, Trump adviser: We don’t want California to set AI rules for the country, https://www.politico.com/news/2025/09/16/we-dont-want-california-to-set-the-rules-for-ai-across-the-country-trump-adviser-says-00565251, 2025-09-16

