
Sam Altman: OpenAI Has Missed the Mark on Open Source History

by admin

Wrapping up a day filled with product unveilings, OpenAI’s team—including CEO Sam Altman—engaged with the public during a comprehensive Reddit AMA held on Friday.

OpenAI finds itself in a challenging scenario. The organization is combating views that it is falling behind in the AI competition, especially against Chinese firms like DeepSeek, which OpenAI claims may have appropriated its intellectual property. To address this, the AI developer has been striving to strengthen its connections with Washington while simultaneously embarking on a significant data center initiative and reportedly preparing for one of the largest financing efforts ever undertaken.

Altman acknowledged that DeepSeek has diminished OpenAI’s earlier advantage in AI innovation, expressing his belief that OpenAI has found itself “on the wrong side of history” regarding the open sourcing of its technologies. Although OpenAI has occasionally released open-source models, the organization typically prefers a closed-source, proprietary development model.

“[Personally, I think we need to] establish a new approach to open sourcing,” Altman remarked. “Not everyone at OpenAI shares this perspective, and it’s not our highest priority at the moment… We will create more advanced models in the future, but our lead will be smaller than it has been in prior years.”

In a subsequent comment, Kevin Weil, OpenAI’s Chief Product Officer, noted that the company is contemplating the possibility of open sourcing older models that are no longer cutting-edge. “We will certainly consider doing more of this,” he mentioned, without elaborating further.

Beyond reconsidering its release strategy, Altman said DeepSeek's approach might prompt OpenAI to reveal more of how its reasoning models, such as the o3-mini model introduced that day, arrive at their answers by exposing more of their "thought process." Currently, OpenAI's systems conceal their reasoning steps to prevent competitors from harvesting them as training data. By contrast, DeepSeek's reasoning model, R1, displays its complete chain of reasoning.

“We are actively working on revealing much more than what is currently shown — [demonstrating the model’s thought process] will happen very soon,” Weil added. “The trade-off for making everything visible relates to competitive intelligence, but we understand that many users, especially power users, desire this information, so we will strive to find an appropriate balance.”

Altman and Weil also sought to clarify misconceptions about potential price hikes for ChatGPT, the chatbot platform that serves as a launchpad for many of OpenAI's models. Altman said he hopes to eventually make ChatGPT "cheaper," if possible.

Previously, Altman revealed that OpenAI was operating at a loss with its most expensive ChatGPT offering, ChatGPT Pro, priced at $200 monthly.

In a related discussion, Weil stated that OpenAI continues to gather evidence that increased computing power leads to “better” and more effective models. This understanding is largely what is driving initiatives like Stargate, OpenAI’s recently announced large-scale data center project, according to Weil. The rising demand for compute resources is being propelled by a growing user base as well.

When discussing the potential for recursive self-improvement that these powerful models might enable, Altman indicated that he now finds the prospect of a “rapid acceleration” more likely than he had previously thought. Recursive self-improvement refers to a scenario in which an AI system enhances its intelligence and capabilities autonomously.

It should be noted that Altman has a reputation for setting expectations too high. Not long ago, he lowered OpenAI’s criteria for artificial general intelligence (AGI).

A Reddit user asked whether OpenAI's models, self-improving or not, might be used to develop harmful weapons, specifically nuclear arms. The question referenced OpenAI's recently announced collaboration with the U.S. government, under which the company will provide its models to the U.S. National Laboratories for nuclear defense research.

Weil expressed his confidence in the government’s approach.

“I’ve come to know these scientists—they are experts in AI, as well as leading researchers,” he remarked. “They are well-aware of both the capabilities and limitations of the models, and I seriously doubt they would recklessly input any model output into a nuclear computation. They’re intelligent, evidence-based people who engage in thorough experimentation and data validation.”

The OpenAI team also fielded several technical questions, including the timeline for the next reasoning model, o3 ("more than a few weeks, less than a few months," according to Altman), the launch date for the company's upcoming flagship non-reasoning model, GPT-5 ("no timeline available yet," Altman stated), and the prospects for a successor to DALL-E 3, the company's image-generating model. Released nearly two years ago, DALL-E 3 has fallen behind in several benchmark assessments as image-generation technology has advanced rapidly.

“Yes! We’re actively developing it,” Weil confirmed regarding the follow-up to DALL-E 3. “And I assure you, it will be worth the wait.”

Compiled by Techarena.au.
