
UK Removes ‘Safety’ from AI Agency Name, Renaming it the AI Security Institute and Signing MOU with Anthropic


The U.K. government is making a decisive shift toward leveraging AI to energize its economy and industrial sector. As part of that strategy, an institution established just over a year ago for a rather different purpose is being rebranded. Today, the Department for Science, Innovation and Technology announced that the AI Safety Institute will now be known as the “AI Security Institute.” (The acronym, AISI, remains the same, as does the URL.) The change signals a shift in focus away from studying risks such as existential threats and bias in large language models, and toward cybersecurity, specifically “enhancing defenses against the challenges AI presents to national security and criminal activities.”

Alongside the announcement, the government revealed a new collaboration with Anthropic. While specific services weren’t disclosed, the memorandum of understanding indicates both parties will “explore” integrating Anthropic’s AI assistant, Claude, into public services. Anthropic also aims to contribute to scientific research and economic modeling, and at the AI Security Institute it will provide tools for evaluating AI capabilities in identifying security vulnerabilities.

“AI has the capacity to revolutionize how governments provide services to their citizens,” stated Dario Amodei, co-founder and CEO of Anthropic. “We eagerly anticipate exploring how Claude, Anthropic’s AI assistant, can assist U.K. governmental agencies in improving public services, ultimately aiming to find innovative ways to make essential information and services more efficient and accessible to residents.”

While Anthropic is the sole company spotlighted in today’s announcement—timed with a week of AI events in Munich and Paris—it is not the only firm collaborating with the government. A range of new tools introduced in January were all powered by OpenAI. At that time, Peter Kyle, the Secretary of State for Technology, indicated that the government intended to partner with various foundational AI companies, and today’s deal with Anthropic exemplifies this strategy.

The rebranding of the AI Safety Institute—launched over a year ago amid much excitement—to the AI Security Institute shouldn’t come as a surprise.

When the newly elected Labour government rolled out its AI-centric Plan for Change in January, it was striking that terms such as “safety,” “harm,” “existential,” and “threat” were entirely absent from the document.

This omission was intentional. The government’s ambition is to spark investment in a more contemporary economy, utilizing technology, particularly AI, to achieve this goal. It seeks to foster closer ties with major tech companies while also nurturing its own homegrown tech giants.

To support this agenda, the government’s messaging centers on development, AI, and more development. Civil servants will get their own AI assistant, named “Humphrey,” and are being encouraged to share data and deploy AI elsewhere to improve efficiency. Consumers, meanwhile, will receive digital wallets for accessing government documents and chatbots for assistance.

So, have concerns about AI safety been resolved? Not exactly, but the prevailing message seems to be that they cannot be allowed to overshadow the pursuit of progress.

The government asserted that despite the name change, the core mission remains unchanged.

“The changes I’m announcing today signify a natural progression in our approach to responsible AI development – enabling us to harness AI to stimulate economic growth as part of our Plan for Change,” Kyle remarked in a statement. “The AI Security Institute’s efforts will remain consistent, yet this renewed focus will help protect our citizens—and those of our allies—from anyone attempting to exploit AI against our institutions, democratic values, and way of life.”

“From the outset, the Institute’s emphasis has been on security, and we have assembled a team of scientists dedicated to assessing critical risks to the public,” added Ian Hogarth, who continues to chair the institute. “The introduction of our new criminal misuse team and deepened collaboration with the national security sector represent the next phase in addressing these risks.”

On a broader scale, it is evident that the priorities surrounding “AI safety” have shifted. The most pressing concern for the AI Safety Institute in the United States right now is the possibility of its dismantling, a sentiment echoed by U.S. Vice President J.D. Vance earlier this week in his address in Paris.

