WSJ: US military used Anthropic Claude in Iran strike despite ban

The U.S. military relied on Anthropic’s Claude artificial intelligence system during a major strike in Iran just hours after President Donald Trump directed federal agencies to stop using the company’s technology, the Wall Street Journal reported, citing people familiar with the matter.
Military commands including U.S. Central Command (CENTCOM), which oversees operations in the Middle East, used Claude for operational support, according to the report. The tool was said to assist with intelligence assessments, target identification, and battlefield simulations, indicating how deeply advanced AI has been integrated into defense workflows even as the administration moved to sever ties with the vendor.
On Friday, the Trump administration instructed federal agencies to halt work with Anthropic and told the Department of Defense to treat the firm as a potential security risk. The directive followed a breakdown in contract negotiations after Anthropic declined to allow unrestricted military use of its models for any lawful purpose requested by defense officials.
Claude cleared for classified use under prior Pentagon agreements
Anthropic previously secured a multiyear Pentagon contract valued at up to $200 million alongside other leading AI developers. Through partnerships with Palantir and Amazon Web Services, Claude was approved for classified intelligence and operational workflows. The system was also reportedly used in earlier missions, including a January operation in Venezuela that resulted in the capture of President Nicolás Maduro.
Tensions escalated after Defense Secretary Pete Hegseth sought authorization for unrestricted military deployment of Anthropic’s models. CEO Dario Amodei refused, citing ethical limits the company would not cross even at the expense of government contracts.
Following the dispute, the Pentagon began arranging alternative providers and reached an agreement with OpenAI to field its models on classified defense networks.
[Image caption: OpenAI faces backlash after reaching deal with US military. Source: Sreemoy Talukdar]
Anthropic CEO challenges Pentagon restrictions
In an interview on Saturday, Amodei said the company opposes deploying its AI for mass domestic surveillance and fully autonomous weapons. He was responding to a U.S. government directive that labeled the firm a defense “supply chain risk” and prohibited contractors from using its products. Amodei added that some uses cross fundamental boundaries and emphasized that military decisions should remain under human control rather than be delegated entirely to machines.
