California’s SB 1047 Legislation Seeks to Avert AI Catastrophes, Yet Silicon Valley Foresees Potential Backfire

by admin

While AI systems launching fatal attacks or waging cyber warfare may sound like the plot of a science-fiction film, the possibility has prompted some legislators to act preemptively. SB 1047, a proposed California bill, aims to head off disasters enabled by AI before they happen, and it faces a decisive vote in the California Senate in late August.

Intriguingly, these preventive measures have drawn criticism from across Silicon Valley, from startup founders to large technology corporations and investors. Amid the many AI bills under consideration nationwide, California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act has become a flashpoint, drawing a wide range of parties into the debate.

Function and Scope of SB 1047

SB 1047 seeks to prevent large AI models from being used to cause “critical harms” against humanity.

The bill gives examples of such “critical harms”: a bad actor using an AI model to build a weapon that causes mass casualties, or to orchestrate a cyberattack inflicting more than $500 million in damages (for comparison, the CrowdStrike outage is estimated to have caused upwards of $5 billion in damages). The bill makes developers, the entities that build the models, liable for implementing sufficient safety protocols to prevent these outcomes.

Eligibility Criteria for Models and Entities

SB 1047’s rules apply only to the largest AI models: those that cost at least $100 million to train and that use 10^26 floating-point operations (FLOPs) during training. OpenAI’s GPT-4 is cited as a model that roughly meets the cost benchmark. The thresholds may be adjusted in the future.

Today only a handful of companies have AI products large enough to meet these criteria, but giants such as OpenAI, Google, and Microsoft are on the cusp of doing so. AI models tend to become more capable as they scale, a trend expected to continue: Mark Zuckerberg has said the next generation of Meta’s Llama will require roughly 10x more computing power, which would likely bring it under SB 1047’s purview.
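To make the compute threshold concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes the common approximation that training a dense transformer takes roughly 6 × parameters × tokens floating-point operations; the bill defines its own accounting, and the model size and token count below are made up for illustration.

```python
# Rough check of a training run against SB 1047's 10^26-FLOP threshold.
# Uses the common ~6 * parameters * tokens estimate for dense-transformer
# training compute; the figures below are hypothetical, not any real model.

FLOP_THRESHOLD = 1e26  # SB 1047's training-compute threshold

def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Approximate total FLOPs to train a dense transformer."""
    return 6 * parameters * tokens

flops = estimated_training_flops(parameters=1.8e12, tokens=15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")   # ~1.62e+26
print("Above SB 1047 threshold:", flops >= FLOP_THRESHOLD)  # True
```

Under these assumptions, a hypothetical frontier-scale run lands just above the 10^26 line, which is why the threshold is described as catching only the very largest models.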

As for open-source models and their derivatives, the bill shifts accountability from the original creator to any party that spends $25 million developing or enhancing the model.

The bill also mandates a safety protocol to prevent misuse of covered AI products, including an “emergency stop” capability that can shut the AI model down entirely. Developers must additionally create testing procedures that address the model’s risks and hire third-party auditors annually to assess their AI safety practices. The standard is “reasonable assurance” that the protocols will prevent critical harms, not absolute certainty, which is impossible to provide.
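The bill specifies the emergency-stop capability without prescribing how it must be built. Purely as an illustration, a minimal sketch of one such pattern might look like the following; every name here is hypothetical.

```python
# Hypothetical sketch of an "emergency stop": a process-wide flag that the
# serving path checks before every inference call. Illustrative only; the
# bill specifies the capability, not an implementation.

import threading

_emergency_stop = threading.Event()

def trigger_emergency_stop() -> None:
    """Operator-facing kill switch: refuse all further inference."""
    _emergency_stop.set()

def serve_request(prompt: str) -> str:
    if _emergency_stop.is_set():
        raise RuntimeError("model disabled: emergency stop engaged")
    return run_model(prompt)

def run_model(prompt: str) -> str:
    # Stand-in for the actual model call.
    return f"(output for: {prompt!r})"

print(serve_request("hello"))   # works normally
trigger_emergency_stop()
# serve_request("hello") would now raise RuntimeError
```

A real deployment would need the flag to propagate across machines and to halt in-flight work, which is part of why critics question how burdensome the requirement would be in practice.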

Enforcement and Compliance

A newly established entity, the Frontier Model Division (FMD), would oversee adherence to the bill’s guidelines. Each new AI model that matches SB 1047’s criteria needs to secure certification, evidencing compliance with established safety protocols.

The FMD would be governed by a five-member board drawn from the AI industry, the open-source community, and academia, appointed by California’s governor and legislature. The board would advise the state attorney general on potential violations and issue safety guidance to AI developers.

Developers must submit an annual certification to the FMD assessing their AI models’ potential risks and the effectiveness of their safety protocols. If an “AI safety incident” occurs, it must be reported to the FMD within 72 hours. Noncompliance can trigger civil actions, with penalties scaled to training cost: up to $10 million for a first violation involving a model that cost $100 million to train, escalating with subsequent violations.
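As a quick sanity check of that arithmetic, the $10 million figure corresponds to a cap of 10% of training-compute cost for a first violation; the 30% rate for subsequent violations below is an assumption based on reporting about the bill, not legal guidance.

```python
# Illustrative penalty math behind the $10M figure above, assuming caps of
# 10% of training-compute cost for a first violation and 30% for subsequent
# ones (assumed rates; not legal guidance).

def max_penalty(training_cost_usd: float, prior_violations: int) -> float:
    rate = 0.10 if prior_violations == 0 else 0.30
    return rate * training_cost_usd

print(f"First violation:      ${max_penalty(100e6, 0):,.0f}")  # $10,000,000
print(f"Subsequent violation: ${max_penalty(100e6, 1):,.0f}")  # $30,000,000
```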

The bill also includes whistleblower protections for those who disclose information about unsafe AI models to the California Attorney General.

Advocates for SB 1047

State Senator Scott Wiener, the bill’s author, frames SB 1047 as a lesson from past technology policy failures: lawmakers have typically waited for harms to occur and then reacted, and he argues for acting before the damage is done.

Wiener also notes that the bill’s reach extends beyond California, since it applies to any company doing business in the state, underscoring California’s role in setting regulatory precedent for the technology industry.

Prominent AI researchers Geoffrey Hinton and Yoshua Bengio have endorsed SB 1047, representing the segment of the AI community most worried about catastrophic outcomes from unchecked AI development. The Center for AI Safety, which has likened AI risk to other societal-scale threats such as pandemics and nuclear war, has also backed the bill.

Dan Hendrycks of the Center for AI Safety sees the bill as essential to keeping the industry on a sustainable footing should a serious safety incident occur. After facing scrutiny over a potential conflict of interest involving his startup Gray Swan, Hendrycks divested his equity stake, signaling a commitment to transparent advocacy for AI safety.

Critics of SB 1047

The bill has, however, incited strong opposition from numerous sectors within Silicon Valley.

The venture firm a16z argues the bill is impractical, warning that it would stifle innovation and saddle startups with burdensome requirements. Fei-Fei Li, a leading figure in AI, cautions that the bill could damage California’s budding AI ecosystem, though her own engagements and investments in AI ventures have been raised as a point of contention.

Controversy also surrounds the bill’s impact on open-source development. Because open models are easy to modify and deploy, critics fear developers could be held liable when others adapt their models for malicious purposes, a prospect they say would chill research. Leading industry figures and academic researchers share the concern that the bill could inadvertently hamper technological progress.

Major technology firms, the bill’s direct targets, are also wary, arguing that SB 1047 could curtail free speech and push tech innovation out of California. Tech executives and trade organizations echo the concern; many have historically opposed state-level regulation, preferring federal measures instead.

Looking Forward

As SB 1047 heads toward a vote in the California Senate, its fate hinges on pending amendments and legislative review. Anthropic, among others, has proposed modifications, signaling a willingness to engage constructively despite reservations about the bill as currently written.

Even if the Senate passes the bill and Governor Gavin Newsom signs it, enactment would not be immediate: the FMD is scheduled to be formed in 2026. And if it does pass, legal challenges are widely anticipated, signaling a contentious road ahead for SB 1047 and its stakeholders.
