Is Anthropic limiting the release of Mythos to protect the internet — or Anthropic?

by admin

Anthropic announced this week that it has restricted the public release of its latest AI model, Mythos, citing the model's exceptional ability to uncover security vulnerabilities in widely used software. Rather than launching Mythos for general use, Anthropic plans to share it exclusively with major corporations and organisations that manage critical online infrastructure, such as Amazon Web Services and JPMorgan Chase. OpenAI is reportedly considering a similar strategy for its forthcoming cybersecurity tools, aiming to help these enterprises combat threats from advanced AI in the hands of malicious actors.

While the stated aim of this selective release is cybersecurity, there are indications that it may also serve other interests. Dan Lahav, CEO of AI cybersecurity lab Irregular, emphasised that the significance of vulnerabilities identified by AI depends on how they can be exploited, questioning whether Mythos truly represents a major advance in cybersecurity.

Anthropic claims that Mythos outperforms its predecessor, Opus, in detecting exploitable vulnerabilities. However, Aisle, another AI cybersecurity startup, contends it has replicated much of Mythos's capability using smaller, open-access models. This suggests that no single deep learning model dictates the future of cybersecurity; effectiveness depends on the specific context of the task.

Opus was already regarded as a revolutionary tool in the cybersecurity realm, which points to another rationale for limiting access to powerful models: it incentivises large enterprise contracts while making it harder for smaller competitors to replicate those models through distillation, a technique that uses an advanced model's outputs to train cheaper new large language models (LLMs). David Crawshaw, CEO of startup exe.dev, argued that this strategy effectively bars smaller labs from accessing top-tier models, ensuring that revenue from large enterprises remains robust.
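To illustrate the distillation threat the labs are reacting to, here is a minimal toy sketch of the idea: a smaller "student" model is trained only on a larger "teacher" model's output distributions, never its weights. All details here (linear models, NumPy, temperature value) are illustrative assumptions, not any lab's actual method.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature T > 1 softens the distribution, a common distillation trick.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# Toy "teacher": a fixed linear classifier (4 features, 3 classes).
W_teacher = rng.normal(size=(4, 3))
X = rng.normal(size=(256, 4))
T = 2.0
soft_labels = softmax(X @ W_teacher, T)  # the teacher's outputs, queried via its "API"

# "Student": a model trained by gradient descent to minimise
# cross-entropy against the teacher's soft labels only.
W_student = np.zeros((4, 3))
lr = 0.5
for _ in range(500):
    p = softmax(X @ W_student, T)
    grad = X.T @ (p - soft_labels) / len(X)
    W_student -= lr * grad

# The student reproduces the teacher's decisions without ever
# having seen the teacher's weights.
agreement = np.mean(
    np.argmax(X @ W_student, axis=1) == np.argmax(X @ W_teacher, axis=1)
)
```

In this toy setting the student's predictions end up agreeing with the teacher's on nearly all inputs, which is why labs treat unrestricted API access to frontier models as a route by which competitors can clone their capabilities cheaply.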

This perspective aligns with a notable trend in the AI landscape: a race between frontier labs, which are developing the largest and most advanced models, and companies like Aisle, which leverage various models, including open-source options from China, as a competitive edge. Frontier labs have become increasingly protective of their models this year, with Anthropic alleging attempts by Chinese firms to replicate its technologies. A coalition of leading labs, including Anthropic, Google, and OpenAI, has formed to combat model copying.

As the threat of distillation poses challenges to the business models of frontier labs, selectively releasing models not only protects these companies’ financial interests but also highlights their enterprise offerings in a saturated market. Whether Mythos or any subsequent models genuinely endanger internet security remains uncertain. Nevertheless, Anthropic’s cautious approach in deploying this technology appears to balance responsibility with the imperative of safeguarding both the internet and the company’s interests.

