
California’s Assembly Approves AI Legislation SB 1047; Concerns Arise Over Potential Veto from Governor

by admin

Latest Update: The Appropriations Committee of California has made substantial revisions to SB 1047, which were adopted on Thursday, August 15. You can find detailed discussions of these changes here.

While the narrative of AI systems wreaking havoc belongs more to science fiction than real life, some legislators are eager to set precautionary measures in place before this fiction turns into an unwelcome reality. SB 1047, a legislative proposal in California, aims to mitigate potential disasters stemming from AI technology before they occur. After securing passage through the state senate in August, it now hovers on the brink of either being signed into law or vetoed by Governor Gavin Newsom of California.

Although the initiative appears universally beneficial, SB 1047 has faced criticism from a broad spectrum of entities in Silicon Valley, encompassing venture capitalists, major tech associations, researchers, and entrepreneurial pioneers. Amidst a national flurry of AI-related legislation, California’s bid for regulating Safe and Secure Innovation in Frontier Artificial Intelligence Models stands out as particularly divisive. The reasons for this are multifaceted.

What is SB 1047’s Objective?

The purpose of SB 1047 is to prevent the most advanced AI models from being misused to cause "critical harms" to humanity.

For instance, the bill defines "critical harms" as scenarios in which a bad actor uses an AI model to engineer a weapon capable of mass casualties, or to orchestrate a cyberattack causing more than $500 million in damages (for perspective, the CrowdStrike outage is estimated to have cost well over $5 billion). It requires developers, the companies that build these models, to implement safety protocols robust enough to prevent such outcomes.

Which Models and Entities Fall Under This Regulation?

SB 1047's rules apply only to the very largest AI models: those that cost at least $100 million to train and that use 10^26 FLOPs (floating-point operations, in total) during training. That benchmark implies enormous computational resources, though OpenAI CEO Sam Altman has said GPT-4's training cost was in that range. These thresholds could be adjusted as needed.
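Taken literally, the coverage criteria amount to a simple two-condition check. The sketch below is purely illustrative; the function name and inputs are invented here and are not part of the bill:

```python
def covered_by_sb1047(training_cost_usd: float, training_flops: float) -> bool:
    """Illustrative only: does a model meet SB 1047's coverage thresholds?

    The bill covers models whose training run cost at least $100 million
    AND used at least 10^26 floating-point operations.
    """
    COST_FLOOR_USD = 100_000_000  # $100 million training spend
    COMPUTE_FLOOR = 1e26          # 10^26 floating-point operations (total, not per second)
    return training_cost_usd >= COST_FLOOR_USD and training_flops >= COMPUTE_FLOOR

# A $150M, 2x10^26-FLOP training run would be covered;
# a $50M run, or one using only 10^25 FLOPs, would not.
```

Note that both conditions must hold, and the bill allows the thresholds themselves to be revised over time.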

Currently, only a handful of companies have released AI products large enough to meet those requirements, though giants like OpenAI, Google, and Microsoft are likely to cross the threshold soon. AI models' accuracy and capability have generally improved with scale, a trend expected to continue. Mark Zuckerberg has said the next generation of Meta's Llama will require ten times more computational power, which would bring it under SB 1047's purview.

The legislation specifies that in the context of open-source models and their offshoots, the initial creator holds responsibility unless a subsequent developer invests an additional $10 million in modifying the original model.

Moreover, SB 1047 prescribes the implementation of a safety framework for AI products that includes an “emergency stop” feature to deactivate the AI model entirely. Developers are compelled to devise testing protocols that evaluate AI-related risks, along with the obligation to engage independent auditors annually to review their AI safety measures.

The criterion for compliance is the “reasonable assurance” of these protocols’ efficacy in averting critical harms—which does not extend to guaranteeing absolute certainty, a notion inherently unattainable.

Enforcement and Compliance: How and by Whom?

Compliance oversight would fall to a newly established entity, the Board of Frontier Models in California. Any new public AI model aligning with SB 1047’s criteria would require certification, inclusive of a documented safety protocol.

This Board, comprised of nine appointees from various sectors including the AI industry, the open-source sphere, and academia—selected by the governor and the state legislature—would offer counsel to the attorney general of California on potential infractions and guide AI developers on safety practices.

The developer's chief technology officer must submit an annual certification assessing the AI model's risk factors, the effectiveness of its safety protocol, and the company's compliance with SB 1047. Any "AI safety incident" must be reported to the Board of Frontier Models within 72 hours of its detection.

Should a developer’s safety precautions be deemed inadequate, SB 1047 authorizes California’s attorney general to initiate an injunctive action against the developer. This could result in halting the operation or training of the AI model.

If an AI model contributes to a catastrophic event, California's attorney general can sue the company, with civil penalties scaled to the model's training cost: up to 10% of that cost for a first violation and up to 30% for subsequent ones (roughly $10 million and $30 million, respectively, for a model at the $100 million coverage floor).
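Since the penalty ceiling scales with training cost, it can be sketched as a one-line calculation. This is an assumption-laden illustration, not the bill's text: it takes the commonly reported reading that penalties cap at 10% of training cost for a first violation and 30% thereafter, and the helper name is hypothetical:

```python
def max_civil_penalty_usd(training_cost_usd: float, is_first_violation: bool) -> float:
    """Hypothetical helper: upper bound on SB 1047 civil penalties.

    Assumes penalties cap at 10% of the model's training cost for a first
    violation and 30% for subsequent ones, per common reporting on the bill.
    """
    rate = 0.10 if is_first_violation else 0.30
    return rate * training_cost_usd

# For a model at the $100M coverage floor:
# first-violation cap is $10M, subsequent cap is $30M,
# which matches the dollar figures cited in coverage of the bill.
```

Under this reading, a model that cost $1 billion to train would face caps of $100 million and $300 million, which is why the dollar amounts are best understood as percentages rather than fixed fines.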

Additionally, the bill ensures whistleblower protection for individuals disclosing information about dangerous AI models to the attorney general of California.

Advocates of SB 1047: Their Perspective

California state senator Scott Wiener, the bill's author, who represents San Francisco, told TechCrunch that SB 1047 is an attempt to protect citizens proactively by learning from past lapses in technology policy, notably around social media and data privacy.

Wiener emphasizes acting preemptively rather than waiting for harm to occur. He points to how little technology legislation has passed at the federal level in recent decades, arguing that it falls to California to set a precedent.

Wiener says he has discussed the bill with leading labs, including OpenAI and Meta. Prominent AI researchers Geoffrey Hinton and Yoshua Bengio, sometimes called "godfathers of AI," along with the Center for AI Safety, have backed the bill, arguing that AI's existential risks should be treated with as much gravity as pandemics or nuclear war.

Dan Hendrycks, an advocate for the bill, faced scrutiny over a potential conflict of interest tied to his AI risk-assessment startup; he has since divested his stake to demonstrate his impartiality and has called on critics to do the same.

After the bill incorporated suggested amendments from Anthropic, its CEO Dario Amodei offered cautious support, acknowledging its potential merits, and Elon Musk has given it a tentative endorsement as well.

The Counterargument: Critics of SB 1047

The opposition to SB 1047 largely stems from Silicon Valley, voicing concerns over the legislation’s impacts.

Critics argue that the bill's burdensome requirements and shifting thresholds could chill AI development and stifle innovation, a sentiment echoed by venture capital firm a16z, which fears a dampening effect on the AI sector.

Fei-Fei Li, alongside other influential AI figures, has voiced apprehensions regarding the bill’s potential to hinder the burgeoning AI ecosystem, suggesting that the legislation could disproportionately affect startups and the research community by casting a pall over open source contributions.

Furthermore, prominent voices from Big Tech, including a trade group encompassing Google, Apple, and Amazon, alongside U.S. Representative Ro Khanna, have expressed skepticism towards the bill’s approach, suggesting it could curtail innovation and arguing for a more nuanced, federally oriented regulatory framework.

What Lies Ahead?

The bill now awaits Governor Gavin Newsom's signature or veto, with a decision due by the end of September. Senator Wiener says he is uncertain which way Newsom leans.

Should SB 1047 be enacted, its implementation would not be immediate; the formation of the Board of Frontier Models is slated for 2026. Moreover, the bill is poised to encounter legal challenges, potentially from some of its current detractors.

Clarification: An earlier version of this article misstated SB 1047's provisions on responsibility for derivative AI models. The bill currently holds the developer of a fine-tuned model responsible if their modification spend is triple that of the original model's development cost.

Compiled by Techarena.au.