
The Pentagon is threatening to cut ties with Anthropic after the AI company refused to remove safeguards against mass surveillance and fully autonomous weapons, highlighting a growing clash between military ambitions and AI ethics.
Defense Officials Want Frontier AI Tools Available for All Lawful Military Uses
The Pentagon is considering severing its relationship with Anthropic after months of tense negotiations over how the military can use the company’s AI models. According to a senior administration official, the Defense Department wants leading AI labs to allow their systems to be used for “all lawful purposes,” including sensitive weapons development, intelligence gathering, and battlefield operations. Anthropic, however, has refused to remove two core guardrails: prohibitions on mass domestic surveillance and fully autonomous weaponry.
At the center of the dispute is the ambiguity around what counts as "fully autonomous" or surveillance-related use. Pentagon officials, as reported by Axios, argue that negotiating use cases individually is impractical and worry that Anthropic's Claude model could unexpectedly block certain military applications. "Everything's on the table," the official said, including scaling back or ending the partnership, though replacing Claude would require an orderly transition because of its integration into classified systems.
Tensions reportedly escalated following a military operation targeting Venezuelan leader Nicolás Maduro, where Claude was used through Anthropic’s partnership with Palantir. A senior official claimed Anthropic executives raised concerns about whether their software had been involved in the raid, which included kinetic force. Anthropic denied discussing specific operations, stating its conversations with the Department of War have focused solely on policy limits regarding autonomous weapons and domestic surveillance.
Anthropic signed a contract worth up to $200 million with the Pentagon last year and was the first frontier AI firm to deploy models onto classified networks. Meanwhile, OpenAI, Google, and xAI have reportedly agreed to relax guardrails for Pentagon use and are negotiating expanded classified access under the same “all lawful purposes” framework. Despite internal tensions and cultural differences, Anthropic maintains it remains committed to supporting U.S. national security — but not at the cost of abandoning its core safeguards.
Read more at axios.com