The headline of a recent Wall Street Journal article asserts that AI is boosting the war in Iran. According to the newspaper, the United States and Israel are conducting attacks on Iran with unprecedented pace and precision, partly thanks to "a cutting-edge weapon never before deployed at this scale: artificial intelligence."
The Washington Post notes that "Anthropic's AI, Claude, is at the heart of the US campaign in Iran, even as a bitter dispute plays out" between the Pentagon and the company. Anthropic's CEO, Dario Amodei, has reportedly had tense relations with the Trump administration.
Before the bombing of Iran began, the president announced a ban on the federal administration's use of Anthropic's tools, giving agencies six months to find alternatives. OpenAI and xAI, Elon Musk's company, are expected to replace Anthropic at the Pentagon.
In the short term, however, the Pentagon may struggle to do without Claude, the chatbot integrated into the Maven military technology program since late 2024. Within a year, Claude has become a daily tool across most military branches, assisting in counterterrorism operations and a raid in Venezuela. This marks the first time it has been used in a large-scale military campaign, accelerating operations and reducing Iran's capacity to respond.
The use of AI in warfare has sparked an ethical debate over the speed of its adoption. While AI is used mainly for intelligence, military planning, and logistics, industry experts warn against over-reliance on machine-generated information, arguing that human judgment remains irreplaceable. Emelia Probasco of Georgetown University, a former Navy officer, urges greater caution in investments in this type of technological infrastructure.