California Governor Gavin Newsom's veto of SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, reignited the debate about how best to regulate the rapidly evolving field of artificial intelligence. Newsom's veto illustrates a cautious approach to regulating a new technology and could pave the way for more pragmatic AI safety policies.
The robust debate SB 1047 sparked, imperfect as it was, is an encouraging sign that policymakers are awakening to AI's profound implications for society. But this conversation must be grounded in research, shaped by meaningful, consensus-driven collaboration, and informed by carefully evaluated tradeoffs. Getting this balance right is essential to unlocking AI's transformative potential while safeguarding against its possible perils.
SB 1047 sought to introduce new safeguards against potential risks by imposing stringent requirements on developers of large-scale AI models. The bill would have mandated safety protocols intended to prevent mass harm, authorized the attorney general to bring civil actions for violations, and established a new liability framework holding model developers accountable for downstream misuse.
Those policy prescriptions drew surprisingly bipartisan opposition, ranging from AI leaders, including Google, Meta, Microsoft, and OpenAI, to politicians such as former Speaker of the House Nancy Pelosi. These critics argued that targeting a general-purpose technology rather than specific high-risk applications would stifle innovation and research.
The liability provisions in particular departed from established norms in the governance of general-purpose technologies, threatening to create uncertainties that could slow progress. Holding model developers liable for downstream misuse would be akin to penalizing automobile manufacturers for accidents involving their vehicles, even though the automobile, too, is a general-purpose technology with diverse applications.
Newsom ultimately agreed, stating:
While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions—so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.
The governor’s decision to veto SB 1047 reflects a cautious approach to regulating a technology still in its formative stages. It may also represent an important shift in the AI safety conversation. The once-dominant narrative of existential and catastrophic risks is being replaced with a more measured and nuanced dialogue. This change was apparent in discussions around AI governance held during the United Nations General Assembly, which were notably devoid of the apocalyptic rhetoric that characterized much of the conversation last year.
The veto should not be misinterpreted as a free pass for the AI industry or a declaration that no existing or emerging risks remain. We know these systems can amplify bias in its many forms, including racial, gender, and political bias. Several research studies suggest that models can engage in worrisome deceptive behavior. As these models grow increasingly sophisticated, there are legitimate concerns about frontier risks, but developing strong AI models remains our best path to bolstering defensive capabilities.
Today's more rational and pragmatic conversations about AI safety acknowledge the importance of mitigating risks while also recognizing and leveraging the transformative benefits AI can bring across many domains. That shift is crucial because legislation like SB 1047 failed to account for these tradeoffs. Slowing the development and deployment of AI on the basis of hypothetical risks imposes significant costs by delaying or deterring beneficial innovations. Put another way, there are real harms, in some cases measured in lost lives, from slowing work on drug discovery, treatment of rare diseases, and personalized tutoring.
The best policy approach at this moment is to prioritize more research, including fundamental studies that aim to understand how these models actually work, since even their creators often struggle to fully grasp their mechanisms. Additional investment in research, collaboration, and the implementation of safety measures will be crucial for responsible AI development, and greater transparency, particularly for open-source models, is also essential. Finally, creating structures and systems that foster stronger consensus among AI developers, civil society, policymakers, and researchers will help navigate the tensions in values that accompany important policy tradeoffs.
For its part, the AI industry must pivot from reactive lobbying to proactive outreach and education that inform public policy. More than 694 AI-related bills have been introduced across 45 states this year, some well informed and others less so. There is an urgent need to build policymakers' understanding of these fast-evolving technologies to ensure smarter legislation and better-informed regulation.
It is encouraging to see the shift toward a more pragmatic and research-driven approach to AI governance. Safeguards are needed as AI rapidly advances, but getting the balance right is essential to realizing AI's immense potential to improve lives while navigating its challenges responsibly.