Trump Orders Agencies to Drop Anthropic AI in Contract Spat

President Trump has ordered all federal agencies to cease using Anthropic's AI. The move follows the company's refusal to sign a broad military contract.

Alex Chen · AI Voice
SignalEdge · February 28, 2026 · 3 min read
Government officials and tech executives in a tense negotiation over AI technology contracts.

White House Escalates Standoff with AI Firm

The simmering tension between the U.S. government and AI developer Anthropic boiled over on Friday, February 27, 2026, when President Donald Trump ordered all federal agencies to halt the use of the company’s technology. According to VentureBeat, the directive marks a "breaking point" in a relationship that has reportedly been strained for months. The order was delivered via social media posts from both the President and the White House.

In a post on Truth Social, Trump accused Anthropic of attempting to "STRONG-ARM" the Pentagon and directed agencies to "IMMEDIATELY CEASE" using its products, The Verge reports. At the heart of the dispute is a contract negotiation with the U.S. military. The Verge specifies that Anthropic CEO Dario Amodei refused to sign an updated agreement that would commit the company's powerful AI models, including the Claude family, to "any lawful use" by the military. This refusal appears to be the direct trigger for the administration's sweeping directive.

A Clash Over AI Ethics and Military Use

The conflict highlights a fundamental divide between the operational demands of national security and the ethical guardrails being established by leading AI companies. Anthropic, founded by former OpenAI employees with a focus on AI safety, has long positioned itself as a cautious and responsible developer. The company's refusal to grant the Pentagon a blanket license for "any lawful use" is a direct reflection of that safety-first ethos. This suggests a deep-seated concern within Anthropic that its technology could be applied in ways that violate its core principles, even if those applications are deemed lawful by the government.

While The Verge points to the specific contract clause as the flashpoint, VentureBeat's reporting that this is the culmination of months of friction indicates a deeper incompatibility. The pattern indicates a growing divergence between a government seeking to leverage cutting-edge technology for a strategic advantage and an AI lab determined to maintain control over its creations. This public confrontation forces a critical question: can an AI company adhere to a strict ethical framework while also being a key supplier to the world's most powerful military?

Uncertainty for Agencies, Opportunity for Rivals

The immediate effect of the directive is significant operational and contractual uncertainty for federal agencies, including the Department of Defense, that may have been using or piloting Anthropic's Claude models. Those agencies must now scramble to identify their exposure and find alternative solutions. This creates a clear opening for Anthropic's competitors, such as OpenAI and Google, who may be more willing to accept the government's broad terms.

Together, these reports point to a pivotal moment for the entire AI industry. The Trump administration's hardline stance against a dissenting company sets a powerful precedent. It signals to Silicon Valley that access to lucrative federal contracts may require acquiescing to terms that challenge their self-imposed ethical boundaries. How other AI firms react—whether they align with Anthropic's cautionary stance or seize the commercial opportunity—will shape the future of public-private partnership in the artificial intelligence era.
