Dario Amodei, CEO of Anthropic, has expressed significant concerns over DeepSeek, the Chinese AI firm that has made a remarkable impact in Silicon Valley with its R1 model. His worries extend beyond the usual fears about user data being sent back to China.
During an interview on Jordan Schneider’s ChinaTalk podcast, Amodei revealed that DeepSeek produced alarming insights related to bioweapons during a safety evaluation conducted by Anthropic.
Amodei described DeepSeek’s performance as “the worst of any model we’ve ever evaluated.” He stated, “It showed no restrictions whatsoever when it came to generating sensitive information.”
According to Amodei, these assessments are part of Anthropic’s standard procedure for evaluating various AI models to determine their potential risks to national security. His team specifically examines whether these models can produce information related to bioweapons that is not readily available through Google searches or textbooks. Anthropic markets itself as an AI foundational model provider that prioritizes safety.
While Amodei does not believe DeepSeek’s models pose an immediate threat in offering rare and hazardous information, he does warn that this could change soon. He commended DeepSeek’s engineers as “talented” but urged them to deeply consider AI safety protocols.
Moreover, Amodei advocates for strict export controls on chips to China, citing the potential risk of enhancing China’s military capabilities.
In the ChinaTalk interview, Amodei did not specify which DeepSeek model was assessed, nor did he provide additional technical insights about the evaluation process. Anthropic did not respond promptly to TechCrunch’s request for comment, nor did DeepSeek.
The rapid rise of DeepSeek has prompted widespread safety concerns. For instance, Cisco security researchers reported last week that the DeepSeek R1 model failed to block any harmful prompts during their testing, amounting to a 100% attack success rate against its safety measures.
While Cisco's testing did not cover bioweapons directly, the researchers were able to prompt DeepSeek into producing harmful content related to cybercrime and other illegal activities. It's worth noting that other models fared poorly too: Meta's Llama-3.1-405B and OpenAI's GPT-4o showed attack success rates of 96% and 86%, respectively.
It remains to be seen whether these safety issues will slow the swift adoption of DeepSeek. Major companies like AWS and Microsoft have openly embraced the integration of R1 into their cloud services, which is somewhat ironic given that Amazon is Anthropic's largest investor.
Conversely, a growing number of countries, companies, and especially government entities, including the U.S. Navy and the Pentagon, have begun restricting the use of DeepSeek.
Only time will reveal whether these actions will gain traction or if DeepSeek’s expansion will persist. Regardless, Amodei acknowledges that DeepSeek is a formidable contender among the leading AI companies in the U.S.
“The key takeaway here is that a new competitor has emerged,” he stated during the ChinaTalk podcast. “Among the major companies capable of training AI — Anthropic, OpenAI, Google, and perhaps Meta and xAI — DeepSeek is now potentially part of that group.”
Compiled by Techarena.au.