
California’s SB 1047 Legislation Seeks to Forestall AI Missteps, Though Silicon Valley Predicts It May Prompt a Crisis

by admin

Update: The California Appropriations Committee approved SB 1047 with major amendments, altering the legislation on Thursday, August 15. Details on the changes can be found here.

Outside of science fiction, there has been no case of an AI system killing people or launching a large-scale cyberattack. Nevertheless, some lawmakers want preemptive safeguards in place before such a future can arrive. SB 1047, a California legislative proposal, aims to prevent catastrophes involving AI systems before they occur. The bill is headed for a decisive vote in the California Senate toward the end of August.

Despite its good intentions, SB 1047 has drawn opposition across Silicon Valley, from venture capitalists and major tech trade groups to researchers and startup founders. Among the many AI bills moving through legislatures nationwide, California's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act stands out as particularly divisive. Here's why.

The Objectives of SB 1047

The aim of SB 1047 is to prevent the misuse of sophisticated AI models in causing significant harm to humanity.

It spells out scenarios of "significant harm," such as using an AI model to build a weapon that causes mass casualties, or to carry out a cyberattack causing more than $500 million in damages. (For comparison, the CrowdStrike outage was estimated to have cost over $5 billion.) The legislation holds model developers accountable for establishing safety measures robust enough to prevent such outcomes.

Eligibility Under SB 1047

SB 1047's rules would apply only to the largest AI models: those costing at least $100 million to train and using 10^26 floating-point operations (FLOPS) during training. OpenAI's Sam Altman has said GPT-4's training cost was in that range. These thresholds could be raised as deemed necessary.

Few companies today have released AI products large enough to meet these criteria, but leading tech companies such as OpenAI, Google, and Microsoft likely soon will. The prevailing trend is that AI models become more capable as they grow, a trajectory expected to persist. Mark Zuckerberg has said the next generation of Meta's Llama will require a significant increase in compute, which could bring it under SB 1047's jurisdiction.

For open-source models and their derivatives, the bill holds the original developer responsible unless another developer spends more than three times as much creating a derivative model.
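The coverage and derivative-model rules above can be sketched as a simple check. This is a hypothetical illustration only: the dollar and FLOPS figures come from the bill as described in this article, while the function names and structure are invented for clarity.

```python
# Illustrative sketch of SB 1047's coverage rules as described above.
# Thresholds are from the bill; everything else is hypothetical.

COST_THRESHOLD_USD = 100_000_000   # minimum training cost for coverage
FLOP_THRESHOLD = 10 ** 26          # minimum training compute for coverage

def is_covered_model(training_cost_usd: float, training_flops: float) -> bool:
    """A model falls under SB 1047 only if it meets both thresholds."""
    return (training_cost_usd >= COST_THRESHOLD_USD
            and training_flops >= FLOP_THRESHOLD)

def derivative_developer_responsible(original_cost_usd: float,
                                     derivative_cost_usd: float) -> bool:
    """For open-source derivatives, responsibility shifts to the new
    developer only if their training spend exceeds three times the
    original model's development cost."""
    return derivative_cost_usd > 3 * original_cost_usd
```

Note that both conditions must hold for coverage: a cheap model trained with enormous compute, or an expensive model trained with little compute, would fall outside the bill as described here.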

Additionally, the bill mandates an “emergency stop” feature for all applicable AI models and directs developers to establish testing protocols and engage independent auditors on an annual basis to review AI safety measures.

Compliance must yield “reasonable assurance” against significant harms, albeit recognizing that absolute certainty is unattainable.

Enforcement and Implementation

The Frontier Model Division (FMD) of California would enforce compliance, requiring AI models to be certified along with their safety protocols in writing.

The FMD would consist of a five-member board representing various sectors, advising the state’s attorney general on violations and safety guidelines for AI model developers.

Developers must annually validate their AI models’ risk assessment and compliance with SB 1047 to the FMD. In case of an AI safety incident, notification to the FMD is required within 72 hours. Failure to adhere to these requirements could result in substantial fines, scaling with the training costs of the AI model.

The legislation also protects whistleblowers who report unsafe AI practices to the attorney general.

Supporters’ Perspective

State Senator Scott Wiener, the bill’s author, views SB 1047 as a proactive step to safeguard citizens against the potential risks of AI, learning from past mistakes with social media and privacy breaches.

Although the bill would have an international reach, Wiener highlights the lack of federal action on tech legislation, positioning California as a pioneering state in this domain.

Prominent AI researchers and safety advocates have endorsed the bill, emphasizing the necessity to mitigate existential risks posed by AI on par with global threats like pandemics and nuclear warfare.

A recent controversy touched on potential conflicts of interest: a prominent backer of the bill held a stake in a startup that might benefit from the bill's audit requirements, and later divested that stake to underscore a commitment to AI safety.

Critics’ View

Opposition to SB 1047 is strong among technology firms and venture capital entities, citing concerns over stifling innovation and imposing arbitrary thresholds that could hinder startups.

Critics, including notable academics and industry leaders, argue that the bill may adversely affect open-source projects and research, based on exaggerated threats. They also worry about the practical implications for smaller entities operating within the AI space.

Moreover, there’s a fear that the bill could set a precedent for restrictive tech legislation, echoing concerns raised by previous efforts to regulate technology at the state level.

Looking Forward

With a final vote in the state legislature pending, the fate of SB 1047 hangs in the balance. Amendments proposed by industry stakeholders are under consideration, suggesting that even some critics are willing to work with the bill's backers.

Should it pass the legislature, the bill would head to Governor Gavin Newsom's desk for review. With implementation delayed until 2026 and legal challenges likely on the horizon, SB 1047's path from statute into practice will be closely watched.

Correction: An earlier version inaccurately described the provisions related to responsibility for derivative AI models. SB 1047 clarifies that responsibility lies with the developer of a derivative model only if the expenditure on training exceeds three times that of the original model’s development.

Compiled by Techarena.au.
