Defense Secretary Pete Hegseth has given Anthropic’s CEO until Friday to allow the American military unrestricted use of the company’s artificial intelligence (AI) technology or risk losing its government contract, according to a source familiar with the matter.
Anthropic, the creator of the Claude chatbot, is the latest company in the sector to decline to provide its technology to the American military’s new internal network. Its CEO, Dario Amodei, has repeatedly raised ethical concerns about the government’s unchecked use of AI, especially the dangers posed by fully autonomous armed drones and by AI-assisted mass surveillance that could track dissenters.
Defense officials have warned that they could designate Anthropic a supply-chain risk or invoke the Defense Production Act, effectively granting the military broader authority to use the company’s products without its approval.
This information, first reported by Axios, fuels the debate over the role of AI in national security and over its potential use in critical situations involving lethal force, sensitive information, or government surveillance. It also comes at a time when Mr. Hegseth has committed to eradicating what he calls “woke culture” within the armed forces.
“A powerful AI, capable of analyzing billions of conversations from millions of people, could assess public opinion, detect emerging pockets of disloyalty, and snuff them out,” Mr. Amodei wrote in an article published last month.
In a statement following Tuesday’s meeting, Anthropic said it had held good-faith discussions about its usage policy to ensure it could continue supporting the government’s national security mission reliably and responsibly.
Amid the enthusiasm for AI that followed the release of ChatGPT, Anthropic aligned itself closely with President Joe Biden’s administration, proposing to subject its AI systems to third-party review to guard against national security risks.
Mr. Amodei has cautioned against the potentially catastrophic dangers of AI while rejecting the label of “pessimist” on the subject. In an article published in January, he wrote that “we are considerably closer to a real danger in 2026 than in 2023,” but argued that these risks must be managed realistically and pragmatically.
Anthropic sharply criticized Nvidia, the chip manufacturer, for considering selling AI chips to China, in line with Donald Trump’s proposals to loosen export controls. The AI company nonetheless remains a close partner of Nvidia.
The Pentagon’s rapid adoption of AI underscores the need for greater congressional oversight or regulation of the technology, especially if it is used to monitor American citizens, according to Amos Toh, senior counsel for the Liberty and National Security Program at the Brennan Center at New York University.