Pentagon CTO Says Anthropic’s Claude Could ‘Pollute’ Defense Supply Chain
Pentagon CTO Emil Michael warned that Anthropic’s Claude AI could “pollute” the defense supply chain. The U.S. Defense Department has already designated the company as a supply chain risk.
A senior U.S. Defense Department technology official has warned that Anthropic’s Claude artificial intelligence model could pose risks to the military’s supply chain. Speaking on CNBC’s Squawk Box, Pentagon Chief Technology Officer Emil Michael said the company’s strict usage restrictions create challenges for defense operations.
In early March 2026, the Pentagon formally designated Anthropic a “supply chain risk,” an unprecedented step that could effectively bar defense contractors and federal agencies from using the company’s AI systems. Officials emphasized that the decision was not intended as a punitive measure but as a safeguard for national security and operational flexibility.
The dispute stems from disagreements over how the U.S. military can deploy advanced AI systems. Anthropic has insisted that its Claude model should not be used for mass domestic surveillance or fully autonomous lethal weapons. Defense officials, however, argue that AI will play a critical role in future military capabilities, including autonomous drone swarms and other unmanned systems designed to compete with rival powers such as China.
Following the designation, Anthropic filed legal action challenging the Pentagon’s decision. The company argues the move is unlawful and could disrupt relationships between leading AI developers and the U.S. defense ecosystem, underscoring growing tension between Silicon Valley ethics policies and military technology demands.