Former OpenAI Policy Lead Slams Company for ‘Altering’ Its AI Safety Narrative

by admin

Miles Brundage, a prominent former policy researcher at OpenAI, took to social media on Wednesday to criticize the company for allegedly “distorting the history” of its approach to deploying potentially risky AI systems.

Just days earlier, OpenAI had released a report detailing its current stance on AI safety and alignment, the practice of engineering AI systems to behave in explainable and beneficial ways. In the report, OpenAI describes the emergence of Artificial General Intelligence (AGI) as a “progressive journey” that necessitates “iterative deployment and learning” to better harness AI technologies.

OpenAI noted, “In a non-linear world, safety insights arise from exercising significant caution with current systems relative to their apparent capabilities, which is precisely how we managed the rollout of [the AI model] GPT-2.” The organization further stated, “We now believe the first AGI is merely one milestone along a continuum of increasingly effective systems […] In a continuous framework, the best path to developing the next safe and beneficial system is to learn from the current one.”

Brundage, however, contends that GPT-2 did warrant considerable caution at the time of its release, and that this caution was entirely consistent with OpenAI’s current philosophy of iterative deployment.

“The gradual rollout of GPT-2, a process I was a part of, aligns seamlessly with OpenAI’s present view on iterative deployment,” Brundage stated in a post on X. “The model was released in phases, with insights shared at each stage. Numerous security experts at that time appreciated our cautious approach.”

Brundage joined OpenAI in 2018, led its policy research department for several years, and was a key member of the “AGI readiness” team, which focused on the responsible deployment of language generation technologies, including OpenAI’s AI chatbot, ChatGPT.

Announced in 2019, GPT-2 was a precursor to the AI systems that power ChatGPT. It could answer questions, summarize content, and generate text of a quality often indistinguishable from human writing.

While its functionalities may seem rudimentary by today’s standards, GPT-2 was a crucial advancement at the time. Citing concerns over potential misuse, OpenAI initially withheld the model’s source code, allowing only selected news outlets to access a limited demo.

This decision drew mixed reactions within the AI community. Many experts argued that the risks posed by GPT-2 had been exaggerated and that there was little evidence the model could be misused in the ways OpenAI described. The AI publication The Gradient even published an open letter urging OpenAI to release the model, arguing it was too significant to keep under wraps.

OpenAI eventually released a partial version of GPT-2 six months after its announcement, followed by the full model several months later. Brundage believes this staged approach was appropriate.

“What aspect of the GPT-2 rollout was based on viewing AGI as non-linear? Absolutely none,” he remarked in a post on X. “What evidence exists to suggest this caution was ‘disproportionate’ beforehand? In hindsight, it may have seemed fine, but that doesn’t imply it would have been responsible to proceed recklessly given the information available then.”

Brundage is concerned that the document aims to set a precedent in which “concerns are viewed as overly alarmist” and overwhelming evidence of imminent threats is required before anyone acts. He calls this attitude “extremely perilous” for advanced AI technologies.

“If I were still part of OpenAI, I would be questioning the rationale behind the document’s tone and what OpenAI aims to achieve by dismissing caution in such an imbalanced manner,” Brundage further noted.

Historically, OpenAI has faced accusations of focusing on “flashy products” at the cost of safety and of hastily releasing products to stay ahead of competitors. Last year, the company disbanded its AGI readiness team, resulting in the departure of several AI safety and policy researchers who moved to rival firms.

The competitive landscape has intensified, particularly with the Chinese AI organization DeepSeek capturing attention with its publicly available R1 model, which has achieved performance metrics similar to OpenAI’s o1 “reasoning” model on several key benchmarks. OpenAI’s CEO Sam Altman has acknowledged that DeepSeek has diminished the technological edge previously held by OpenAI and has indicated that OpenAI would expedite some releases to maintain competitiveness.

OpenAI reportedly loses billions of dollars annually and has projected that those losses could triple to $14 billion by 2026. A quicker product release cycle could improve OpenAI’s short-term financial outlook, but at a potential long-term cost to safety. Experts, including Brundage, question whether that trade-off is justified.

Compiled by Techarena.au.