The author of California’s SB 1047, the most hotly debated AI safety bill of 2024, is back with a new AI bill that could shake up Silicon Valley.
California State Senator Scott Wiener introduced a new piece of legislation on Friday aimed at protecting employees at leading AI labs, allowing them to speak out if they believe their company’s AI systems pose a “critical risk” to society. The bill, SB 53, would also create CalCompute, a public cloud computing cluster intended to give researchers and startups the computational resources they need to develop AI that serves the public interest.
Wiener’s previous bill, California’s SB 1047, ignited a nationwide conversation about managing expansive AI systems that could lead to disastrous outcomes. The intention behind SB 1047 was to avert potential calamities stemming from large AI models, including loss of life or massive cyberattacks causing damages exceeding $500 million. However, Governor Gavin Newsom ultimately rejected the bill in September, articulating that SB 1047 was not the best path forward.
The fallout from the SB 1047 debate turned contentious, with some leaders in Silicon Valley arguing that the bill would diminish the United States’ competitive stance in the global AI arena. They contended that it was fueled by exaggerated fears of catastrophic AI scenarios reminiscent of science fiction. In contrast, Senator Wiener accused certain venture capitalists of waging a “propaganda campaign” against his legislation, highlighting a claim from Y Combinator that SB 1047 could imprison startup founders—a statement experts deemed misleading.
SB 53 essentially distills the less contentious elements of SB 1047—like whistleblower protections and the establishment of the CalCompute cluster—and reintroduces them through this new AI legislation.
Importantly, Wiener does not shy away from the issue of existential AI risks in SB 53. The proposed bill explicitly offers protections for whistleblowers who suspect their employers are developing AI systems that pose a “critical risk.” It defines critical risk as a “foreseeable or material risk that a developer’s creation, storage, or deployment of a foundation model, as specified, may lead to the death or serious injury of over 100 individuals, or cause damage exceeding $1 billion to financial rights or property.”
Moreover, SB 53 prohibits developers of leading frontier AI models, a group that would likely include OpenAI, Anthropic, and xAI, from retaliating against employees who disclose concerning findings to California’s Attorney General, federal authorities, or other employees. The bill also requires these developers to respond to whistleblowers about the specific internal practices they flag.
Regarding CalCompute, SB 53 sets the stage for a team to develop a public cloud computing cluster. This team would comprise representatives from the University of California, alongside other public and private researchers, who would advise on aspects such as the construction of CalCompute, its size, and the users and organizations entitled to access it.
At this point, SB 53 is still in the early stages of the legislative process. It must be reviewed and approved by California’s legislature before it can reach Governor Newsom’s desk. Lawmakers will undoubtedly be watching Silicon Valley’s response to the newly proposed bill.
However, passing AI safety legislation in 2025 may prove more challenging than in 2024. While California enacted 18 AI-related laws in 2024, there’s a sense that the momentum behind the AI doom narrative has waned.
At the Paris AI Action Summit, Vice President J.D. Vance indicated that America is focused on AI innovation rather than safety. While the CalCompute initiative proposed by SB 53 may be seen as a step forward for AI development, the impact of legislative efforts addressing existential AI risks remains uncertain for 2025.
Compiled by Techarena.au.