California AI Safety Legislation SB 1047 Vetoed Despite Comprehensive Safety Provisions
Medium
California Governor Gavin Newsom vetoed SB 1047, comprehensive AI safety legislation that would have required mandatory safety testing and assessments for large AI models, following intense industry lobbying against the bill.
Category
Other
Industry
Government
Status
Resolved
Date Occurred
Sep 29, 2024
Date Reported
Sep 29, 2024
Jurisdiction
US
AI Provider
Other/Unknown
Application Type
other
Harm Type
operational
Human Review in Place
Unknown
Litigation Filed
No
Regulatory Body
California State Government
AI regulation · California legislation · SB 1047 · AI safety · Governor veto · technology policy · regulatory framework
Full Description
On September 29, 2024, California Governor Gavin Newsom vetoed Senate Bill 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, marking a significant setback for comprehensive AI safety regulation in the United States. The bill, authored by Senator Scott Wiener, represented the most ambitious attempt at AI regulation at the state level and would have established mandatory safety standards for covered AI models, defined as those trained with computational power exceeding 10^26 operations at a cost of more than $100 million.
SB 1047 included several key provisions that drew both support from AI safety advocates and fierce opposition from major technology companies. The legislation would have required developers to conduct safety assessments before training covered models, implement safety protocols to prevent catastrophic risks, and establish whistleblower protections for employees reporting safety concerns. The bill also proposed creating a new state agency, the Frontier Model Division, within the Department of Technology to oversee compliance and enforcement.
The legislative battle over SB 1047 exposed deep divisions within California's technology sector and the broader AI community. Major technology companies including OpenAI, Google, and Meta lobbied against the bill, arguing it would stifle innovation and drive AI development out of California. They contended the legislation was premature and could harm California's competitive position in the global AI race. Anthropic, after securing amendments, offered qualified support, saying the bill's benefits likely outweighed its costs. AI safety researchers, some academics, and advocacy groups backed the bill as a necessary first step toward preventing potential catastrophic risks from advanced AI systems.
Governor Newsom's veto message emphasized his administration's commitment to AI safety while expressing concern about the bill's approach. He argued that SB 1047 was too prescriptive and could inadvertently hinder beneficial AI innovation. Newsom indicated he would work with the legislature on alternative approaches that balance safety concerns with the need to maintain California's leadership in AI development. The governor also noted the importance of federal coordination on AI regulation and suggested that state-level action should complement rather than potentially conflict with national approaches.
The veto of SB 1047 has significant implications for AI regulation nationwide, as California's approach often influences other states and federal policy. The failed legislation highlighted ongoing challenges in regulating rapidly evolving AI technology, including questions about appropriate regulatory frameworks, the balance between innovation and safety, and the role of state versus federal oversight. Industry observers noted that the intense lobbying and public debate around the bill demonstrated the growing recognition of AI's potential risks and benefits, even as policymakers struggled to develop effective regulatory responses.
Root Cause
Governor Gavin Newsom vetoed the comprehensive AI safety legislation, citing concerns that its prescriptive approach to regulating AI development would stifle innovation and leave insufficient flexibility for emerging technologies.
Mitigation Analysis
The vetoed bill would have required safety testing protocols, whistleblower protections, and mandatory safety assessments for AI models costing over $100 million to train. Alternative approaches, such as industry self-regulation, federal coordination, or more flexible state frameworks, could provide safety oversight while preserving innovation incentives.
Lessons Learned
The defeat of California's comprehensive AI safety legislation demonstrates the significant political and economic challenges facing AI regulation. It highlights the need for more collaborative approaches among government, industry, and safety advocates to develop effective oversight frameworks.
Sources
California governor vetoes controversial AI safety bill
Reuters · Sep 29, 2024 · news
California Governor Vetoes Sweeping A.I. Safety Bill
The New York Times · Sep 29, 2024 · news