The debate around California’s SB 1047, a bill aimed at mitigating AI catastrophes, has intensified now that it has cleared the state legislature and moved to Governor Gavin Newsom for consideration. With a September 30 deadline to decide, Newsom must weigh the extreme dangers AI technology might pose, including loss of human life, against the risk of stifling the state’s flourishing AI sector.
Crafted by State Senator Scott Wiener, the bill targets the prevention of disasters attributed to large AI models, envisioning scenarios from fatalities to cyberattacks causing financial damage exceeding $500 million.
It’s crucial to understand that currently, only a handful of AI models exist that would fall under this legislation, and no AI-related cyberattacks of such magnitude have occurred to date. The legislation is forward-looking, focusing on potential future threats rather than existing problems.
Under SB 1047, AI developers would be held accountable for the harms caused by their creations, a liability model comparable to efforts to hold gun manufacturers responsible for mass shootings. It empowers the California attorney general to pursue legal action against AI firms for severe damages, and in cases of reckless behavior, a company could be ordered to halt operations. Additionally, regulated models must incorporate a fail-safe mechanism allowing them to be shut down in perilous situations.
This legislation stands on the cusp of significantly altering the landscape of America’s AI industry, awaiting only Newsom’s signature to be enacted.
Reasons Newsom Might Approve It
Wiener has advocated for increased accountability within Silicon Valley, arguing from historical lessons on tech regulation. Newsom’s decision could pivot on a willingness to set a stringent regulatory framework for AI, reinforcing accountability in the tech sector.
Notably, some AI leaders, like Elon Musk, have expressed cautious support for SB 1047.
Offering a similarly tempered endorsement, Sophia Velastegui, previously Microsoft’s chief AI officer, praises the bill as a balanced approach while advocating for broader institutional oversight of AI across the nation. “The legislation isn’t flawless, but it’s a step in the right direction,” Velastegui remarked to TechCrunch.
Anthropic has not officially endorsed the bill but sees merit in it, particularly after its suggestions were incorporated. Among the improvements it cited in a letter to Governor Newsom was limiting lawsuits to cases where damage has actually occurred.
Potential Reasons for a Veto by Newsom
Robust opposition from the tech community could push Newsom toward a veto. Vetoing the bill would spare him political fallout while deferring the contentious issue to future leadership or federal intervention.
Andreessen Horowitz’s Martin Casado has spotlighted the radical departure this bill represents from three decades of software policy, centering liability not on applications but on the underlying infrastructure, a move unprecedented in tech regulation.
There’s a collective apprehension within the tech domain, with significant voices like Speaker Nancy Pelosi, OpenAI, and a spectrum of AI luminaries urging Newsom against signing, fearing it could dampen the state’s AI innovation engine.
Indeed, the potential cooling effect on the startup ecosystem, a critical engine of economic growth, poses a significant concern, with entities like the U.S. Chamber of Commerce advocating for a veto, highlighting AI’s foundational role in the nation’s economic expansion.
Should SB 1047 Become Law
Signing SB 1047 into law would not bring immediate change. The real shifts would begin on January 1, 2025, when tech companies would be required to draft safety reports for their AI models, paving the way for possible injunctions against hazardous AI operations.
By 2026, the establishment of the Board of Frontier Models would usher in a new phase, focusing on compliance and safer AI development practices. This year would also introduce mandatory safety audits for AI model developers, further embedding a culture of responsibility within the industry.
Come 2027, model developers could expect to receive official guidance on secure AI operations from the Board of Frontier Models.
Consequences of a Veto on SB 1047
A veto from Newsom would align with OpenAI’s preference for federal oversight on AI, potentially paving the way for broader, national regulation.
Recently, OpenAI and Anthropic laid the foundation for federal AI regulation by agreeing to provide the AI Safety Institute early access to their advanced models, as revealed in a press release.
This partnership exemplifies a broader historical trend of collaboration between Silicon Valley and federal agencies, a tradition that some argue serves both innovation and national interest better than stringent state-level mandates.