Ex-Google CEO Eric Schmidt says an AI 'Manhattan Project' is a bad idea

- Former Google CEO Eric Schmidt co-authored a paper warning the US about the dangers of an AI Manhattan Project.
- In the paper, Schmidt, Dan Hendrycks, and Alexandr Wang push for a more defensive approach.
- The authors suggest the US sabotage rival projects, rather than advance the AI frontier alone.
Some of the biggest names in AI tech say an AI "Manhattan Project" could have a destabilizing effect on the US, rather than help safeguard it.
The dire warning came from former Google CEO Eric Schmidt, Center for AI Safety director Dan Hendrycks, and Scale AI CEO Alexandr Wang. They co-authored a policy paper titled "Superintelligence Strategy," published on Wednesday.
In the paper, the tech titans urge the US to stay away from an aggressive push to develop superintelligent AI, or AGI, which the authors say could provoke international retaliation. China, in particular, "would not sit idle" while the US worked to actualize AGI, risking "a loss of control," they write.
The authors write that circumstances similar to the nuclear arms race that birthed the Manhattan Project, the secretive initiative that ended in the creation of the first atom bomb, have developed around the AI frontier.
In November 2024, for example, a bipartisan congressional committee called for a "Manhattan Project-like" program, dedicated to pumping funds into initiatives that could help the US beat out China in the race to AGI. And just a few days before the authors released their paper, US Secretary of Energy Chris Wright said the country is already "at the start of a new Manhattan Project."
"The Manhattan Project assumes that rivals will acquiesce to an enduring imbalance or omnicide rather than move to prevent it," the authors write. "What begins as a push for a superweapon and global control risks prompting hostile countermeasures and escalating tensions, thereby undermining the very stability the strategy purports to secure."
It's not just the government subsidizing AI advancements, either, according to Schmidt, Hendrycks, and Wang: private corporations are developing "Manhattan Projects" of their own. Demis Hassabis, CEO of Google DeepMind, has said he loses sleep over the possibility of ending up like Robert Oppenheimer.
"Currently, a similar urgency is evident in the global effort to lead in AI, with investment in AI training doubling every year for nearly the past decade," the authors say. "Several 'AI Manhattan Projects' aiming to eventually build superintelligence are already underway, financed by many of the most powerful corporations in the world."
The authors argue that the US already finds itself operating under conditions similar to mutually assured destruction, which refers to the idea that no nation with nuclear weapons will use its arsenal against another for fear of retribution. They write that a further effort to control the AI space could provoke retaliation from rival global powers.
Instead, the paper suggests the US could benefit from taking a more defensive approach: sabotaging "destabilizing" AI projects through methods like cyberattacks, rather than rushing to perfect its own.
To address "rival states, rogue actors, and the risk of losing control" all at once, the authors put forth a threefold strategy: deterring destabilizing projects through sabotage, restricting rogue actors' access to chips and "weaponizable AI systems," and guaranteeing US access to AI chips through domestic manufacturing.
Overall, Schmidt, Hendrycks, and Wang push for balance rather than what they call the "move fast and break things" strategy. They argue that the US has an opportunity to step back from the urgent rush of the arms race and shift to a more defensive posture.
"By methodically constraining the most destabilizing moves, states can guide AI toward unprecedented benefits rather than risk it becoming a catalyst of ruin," the authors write.