
OpenAI Commits to Granting Early Access to Its Upcoming Model to the U.S. AI Safety Institute

by admin

OpenAI CEO Sam Altman revealed that the company has reached an agreement with the U.S. AI Safety Institute, a government body tasked with assessing and mitigating risks in AI technologies, to grant the institute early access to its next major generative AI model for safety testing.

Altman shared the news in a post on X late Thursday, offering few specifics. But the announcement, together with a similar arrangement reached with the United Kingdom's AI safety body in June, suggests a concerted effort by OpenAI to counter the perception that it has sidelined AI safety concerns in favor of advancing its generative AI capabilities.

In May, a pivotal shift occurred at OpenAI with the dissolution of a team dedicated to developing safeguards against the potential threat of "superintelligent" AI systems acting autonomously. Reports indicated that OpenAI had deprioritized the team's safety research in a push for product launches, culminating in the departure of the team's co-leads, Jan Leike (now leading safety research at Anthropic) and OpenAI co-founder Ilya Sutskever (who has since founded Safe Superintelligence Inc., a company focused on AI safety).

Facing mounting scrutiny, OpenAI announced it would phase out its restrictive non-disparagement agreements, which implicitly discouraged whistleblowing. It also pledged to form a safety commission and to commit 20% of its computational resources to safety research, a promise originally made to the now-disbanded safety team but never fulfilled. In May, Altman reaffirmed this commitment and announced that the non-disparagement clauses would be voided for both current and former employees.

Yet these actions have not fully assuaged critics, especially after it emerged that every member appointed to the safety commission came from within the company, Altman included. More recently, a senior AI safety executive was reassigned to a different role within the organization.

Concerns from five senators, led by Hawaii Democrat Brian Schatz, were formally expressed in a recent letter to Altman. In response, OpenAI’s chief strategy officer, Jason Kwon, stated today that the company is “committed to implementing rigorous safety measures at every phase of our operations.”

OpenAI's agreement with the U.S. AI Safety Institute comes at a notable moment: the company has endorsed the Future of Innovation Act, a bill introduced in the Senate that would establish the Safety Institute as the authority on AI standards and guidelines. Taken together, these moves could be read as an effort by OpenAI to shape regulatory practices and standards at the national level.

Notably, Altman serves on the U.S. Department of Homeland Security’s Artificial Intelligence Safety and Security Board, which advises on the safe and secure advancement and application of AI across crucial national infrastructures. Furthermore, OpenAI has markedly increased its federal lobbying efforts, allocating $800,000 in the first half of 2024, up from $260,000 for the entirety of 2023.

The U.S. AI Safety Institute operates under the Commerce Department’s National Institute of Standards and Technology and collaborates with a consortium of industry leaders, including Anthropic and major technology companies such as Google, Microsoft, Meta, Apple, Amazon, and Nvidia. This group endeavors to address objectives outlined in President Joe Biden’s October AI executive order, focusing on areas such as AI red-teaming, capability assessments, risk management, and the safeguarding and integrity of synthetic content.

Compiled by Techarena.au.
