Anthropic’s latest flagship AI model, Claude 3.7 Sonnet, was trained at a cost of “a few tens of millions of dollars” using under 10^26 FLOPs of computational resources.
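For scale, a rough back-of-envelope calculation shows how a compute budget somewhat below 10^26 FLOPs can plausibly translate into a training bill in the tens of millions of dollars. Every figure below (the total FLOP count, per-accelerator throughput, utilization, and GPU-hour price) is an illustrative assumption, not a number disclosed by Anthropic:

```python
# Hypothetical back-of-envelope estimate of training hardware cost.
# All inputs are assumptions for illustration only.
flops_total = 3e25          # assumed budget "under 10^26" FLOPs
gpu_flops_per_sec = 1e15    # assumed ~1 PFLOP/s peak per accelerator
utilization = 0.4           # assumed effective hardware utilization
gpu_hour_cost = 2.0         # assumed rental price in $/GPU-hour

gpu_seconds = flops_total / (gpu_flops_per_sec * utilization)
gpu_hours = gpu_seconds / 3600
cost = gpu_hours * gpu_hour_cost

print(f"{gpu_hours:.2e} GPU-hours, roughly ${cost / 1e6:.0f}M")
```

Under these assumptions the run works out to about 21 million GPU-hours and a cost on the order of $40 million, i.e. "a few tens of millions of dollars." The point is only that the reported FLOP count and dollar figure are mutually consistent, not that these were the actual inputs.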
This information comes from Wharton professor Ethan Mollick, who shared a clarification via an X post on Monday after being contacted by Anthropic’s public relations team. “I was informed by Anthropic that Sonnet 3.7 would not qualify as a 10^26 FLOP model and that its training cost was a few tens of millions of dollars,” he stated, “although future models are expected to be significantly larger.”
TechCrunch reached out to Anthropic for confirmation but did not receive a reply by the time of publication.
If Claude 3.7 Sonnet really did cost only “a few tens of millions of dollars” to develop, excluding additional expenses, it suggests that the cost of launching a cutting-edge model continues to fall. Claude 3.5 Sonnet, released in the fall of 2024, was likewise reported to have cost a few tens of millions of dollars to train, according to Anthropic CEO Dario Amodei in a recent essay.
These figures stand in stark contrast to the hefty training expenses for leading models of 2023. OpenAI’s development of GPT-4 reportedly cost more than $100 million, according to OpenAI CEO Sam Altman. By comparison, Google is estimated to have spent nearly $200 million training its Gemini Ultra model, according to a Stanford study.
That said, Amodei anticipates that the costs for future AI models will escalate into the billions. It’s important to note that training expenses do not encompass factors such as safety testing and foundational research. Furthermore, as the AI sector shifts towards “reasoning” models that tackle complex problems over prolonged periods, the computing expenses associated with operating these models are likely to rise considerably.
Compiled by Techarena.au.

