Plan Advises U.S. Government to Curb AGI Development, Make Research Safer

A report commissioned by the U.S. government warns of substantial national security risks posed by AI, including the potential for an “extinction-level threat to the human species.” The report stresses that the development of advanced AI, and of AGI in particular, could destabilize global security. AGI, a hypothetical technology that could perform tasks at or above the human level, is being actively pursued by leading AI labs and is expected to arrive within five years, according to a story on

The report, “An Action Plan to Increase the Safety and Security of Advanced AI,” recommends policy actions that would disrupt the AI industry. It suggests making it illegal to train AI models using more than a certain level of computing power, and requiring AI companies to obtain government permission to train and deploy new models above a certain threshold. It also proposes outlawing the publication of the inner workings of powerful AI models, tightening controls on the manufacture and export of AI chips, and channeling federal funding toward “alignment” research aimed at making advanced AI safer.

The report acknowledges that controlling AI chips alone may ultimately be unable to prevent the proliferation of advanced AI. As AI algorithms continue to improve, more capability becomes available for less total computing power, making compute-based restrictions increasingly impractical. The report suggests that a new federal AI agency could explore restricting the publication of research that improves algorithmic efficiency.

The report highlights two categories of risk: weaponization risk, in which AI systems could be used to execute catastrophic attacks, and “loss of control” risk, in which advanced AI systems could outmaneuver their creators. Both risks are exacerbated by “race dynamics” in the AI industry: the first company to achieve AGI could reap significant economic rewards, incentivizing companies to prioritize speed over safety. Despite potential resistance from the AI industry, the report’s authors argue that such safety measures are essential to prevent irreversible damage.