In a policy paper released on Wednesday, former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks advised against the United States pursuing an aggressive, Manhattan Project-like initiative to develop artificial intelligence systems with “superhuman” capabilities, often referred to as AGI.
Titled “Superintelligence Strategy,” the document argues that a unilateral effort by the U.S. to dominate the realm of superintelligent AI could trigger severe backlash from China, potentially manifesting as cyberattacks that would jeopardize global stability.
The authors argue, “[A] Manhattan Project [for AGI] implies that competitors will tolerate a persistent imbalance or even catastrophic consequences rather than take steps to avert it.” They warn that an initiative aimed at building a superweapon risks provoking aggressive counter-actions and elevating tensions, ultimately undermining the very stability the strategy aims to achieve.
Authored by three leading figures in the American AI sector, this paper emerges just months after a U.S. congressional commission suggested a ‘Manhattan Project-style’ mission for AGI development, modeled after the atomic bomb program of the 1940s. U.S. Secretary of Energy Chris Wright has publicly declared that the U.S. is at “the start of a new Manhattan Project” in AI as he stood alongside OpenAI co-founder Greg Brockman at a supercomputer facility.
The Superintelligence Strategy paper counters the notion, promoted by some U.S. policy and industry leaders recently, that a government-supported program aimed at achieving AGI is essential for competing with China.
According to Schmidt, Wang, and Hendrycks, the U.S. finds itself in a kind of AGI standoff reminiscent of mutually assured destruction. Just as global powers refrain from seeking a monopoly on nuclear weapons, fearing such a bid would provoke a preemptive strike from adversaries, the authors assert that the U.S. should proceed with caution in its quest to dominate powerful AI technologies.
While comparing AI systems to nuclear weapons may seem extreme, global leaders already view AI as a significant military advantage. The Pentagon has indicated that AI is accelerating the military’s operational capabilities.
The authors introduce the idea of Mutual Assured AI Malfunction (MAIM), suggesting that governments should have the capacity to proactively deactivate threatening AI initiatives rather than waiting for adversaries to exploit AGI.
Schmidt, Wang, and Hendrycks recommend shifting the U.S. focus from “winning the race to superintelligence” to creating strategies that deter other nations from developing superintelligent AI. They propose enhancing the government’s cyber capabilities to neutralize threatening AI projects controlled by foreign powers, as well as restricting access to advanced AI chips and open-source technologies.
The co-authors outline a divide in the AI policy community between “doomers,” who believe that catastrophic outcomes from AI development are inevitable and advocate for a slowdown, and “ostriches,” who argue for accelerating AI progress and banking on favorable outcomes.
The paper proposes a third approach: a balanced strategy towards AGI development that emphasizes defensive measures.
This new strategy is particularly significant coming from Schmidt, who has previously advocated for a more aggressive U.S. stance in competing with China on advanced AI systems. Just months ago, he said that DeepSeek marked a turning point in America’s AI competition with China.
While the current administration seems determined to advance AI development in the U.S., the co-authors remind us that the implications of America’s AGI decisions are interconnected with global dynamics.
As the world observes America’s initiatives in AI, Schmidt and his co-authors suggest that adopting a more defensive posture may prove to be a wiser course of action.
Compiled by Techarena.au.