The US military continued using Anthropic’s Claude artificial intelligence model during Saturday’s joint US-Israeli bombardment of Iran, despite President Trump having ordered all federal agencies to stop using the tool just hours before the strikes began, according to reports from the Wall Street Journal and Axios.
According to the Journal, military command used Claude for intelligence gathering, target selection and battlefield simulations during the massive assault on Iran. The reports highlight how deeply embedded AI tools have become in modern military operations, making a sudden withdrawal highly complex even when ordered directly by the president.
Trump moved to sever all ties with Anthropic on Friday, denouncing the company on Truth Social as a radical left AI company run by people who have no idea what the real world is about. The falling-out stems from Anthropic's objection to the US military's use of Claude during its January raid to capture the Venezuelan president, Nicolás Maduro, which the company said violated its terms of use prohibiting the model from being used for violent purposes, weapons development or surveillance.
Defence Secretary Pete Hegseth escalated the dispute further with a lengthy post on X, accusing Anthropic of arrogance and betrayal and insisting that America’s warfighters would never be held hostage by the ideological whims of Big Tech. Hegseth demanded full and unrestricted access to all of Anthropic’s AI models for every lawful military purpose.
Despite the hardline rhetoric, Hegseth acknowledged the practical difficulty of rapidly removing AI systems already woven into military infrastructure, confirming Anthropic would continue providing services for a transition period of up to six months.
With the Anthropic relationship effectively over, rival company OpenAI has moved quickly to fill the gap. Its chief executive, Sam Altman, confirmed he had reached an agreement with the Pentagon to deploy OpenAI's tools, including ChatGPT, on the department's classified network.