How the Trump administration is crushing Anthropic after it refused unrestricted military use of AI

· OpIndia

A public showdown between the Trump administration and Anthropic, maker of the Claude AI model, has escalated into a sweeping federal ban, a Pentagon blacklist and a state-directed squeeze that appears designed to force the company to abandon its safety restrictions.

On 27th February, US President Donald Trump announced a ban on Anthropic in a long post on Truth Social in which he called the company a threat to national security. His stand was echoed by Secretary of War Pete Hegseth in a social media post of his own. Both leaders framed the restrictions imposed by Anthropic as an unacceptable attempt by a private company to dictate how America fights wars.

The dispute between the White House and Anthropic is not about whether Anthropic should work with the US military. According to the company’s statement, it already does so extensively. The fight is about whether the US military should get access with no guardrails at all, including mass domestic surveillance and fully autonomous weapons that remove humans from the loop.

Anthropic says those two use cases are dangerous, incompatible with democratic values and beyond what today’s AI can safely deliver. However, the US President does not agree with the company’s stand.

What triggered the standoff

Anthropic’s chief executive, Dario Amodei, said in a statement that Claude is already deployed across the Department of War and other national security agencies for mission-critical work, including intelligence analysis, modelling and simulation, operational planning and cyber operations.

He further claimed that Anthropic has taken decisions aligned with national security that hurt its own revenues, including cutting off use by firms linked to the Chinese Communist Party and supporting stronger chip export controls.

Despite that, the Department of War demanded that Anthropic remove its safeguards and accept a broad standard permitting “any lawful use”. Hegseth articulated the administration’s position that suppliers cannot impose operational terms, and that “lawful” national security needs should be the only boundary.

Anthropic’s red lines, mass surveillance and fully autonomous weapons

Amodei’s statement draws a sharp line around two categories. The first is mass domestic surveillance. Anthropic says AI-driven surveillance at scale creates novel risks to fundamental liberties. It argues that the law has not caught up with AI’s ability to stitch together scattered, individually innocuous data into an intimate, comprehensive picture of a person’s life, automatically and at massive scale. In short, it fears an internal dragnet made dramatically more powerful by frontier models.

An internal dragnet is a wide, sweeping surveillance net aimed inward at a country’s own people rather than at foreign targets. In this context, it implies the state using powerful tools like AI to collect, combine and analyse massive amounts of data about citizens at scale, mapping their movements, contacts, behaviour and associations, often without individualised suspicion. In simple terms, it is a fishing net thrown over the whole population rather than a targeted investigation.

The second is fully autonomous weapons, systems that select and engage targets without human involvement. Anthropic says today’s frontier AI is not reliable enough to power such weapons safely, and that without proper oversight, these systems cannot be trusted to exercise the judgement that trained troops apply. Anthropic has reportedly offered to work on research and development to improve reliability, but the government rejected that offer.

The company’s position is not that autonomous capabilities will never be needed. It is that the technology and the oversight structures are not there yet, and the costs of being wrong are catastrophic.

Trump’s order, an immediate halt and a six-month phase-out

After the Pentagon deadline passed, Trump posted on Truth Social ordering every federal agency to immediately cease all use of Anthropic’s technology. The order included a six-month phase-out for the Department of War and other agencies where Anthropic’s tools are embedded. This move is intended to prevent disruption while forcing a rapid transition.

The confrontational language Trump used has an unmistakable political tone. He accused Anthropic of trying to strong-arm the Department of War and called it a “radical left” company. Trump warned of using “the full power of the presidency”, with “major civil and criminal consequences”, if the company did not cooperate during the phase-out.

Whatever the rhetoric, the practical effect is simple. A company that was reportedly integrated into sensitive systems now faces a government directed offboarding across the federal ecosystem.

The supply chain risk label, a chokehold on the contractor ecosystem

Trump’s ban was not the only blow. Hegseth announced that the Department of War would designate Anthropic a “supply chain risk to national security” and that, effective immediately, no contractor, supplier or partner that does business with the US military may conduct any commercial activity with Anthropic.

This can be seen as the key escalation. It is not just the government stopping its own usage. It is effectively telling the sprawling universe of defence contractors and military linked vendors that they cannot touch Anthropic at all, even at a commercial level. According to Anthropic, that sort of designation has historically been reserved for US adversaries, and its use against an American firm is unprecedented.

In effect, the label operates like a choke point. A huge slice of corporate America sells something to the Pentagon, directly or indirectly. If those firms are barred from dealing with Anthropic, the company’s access to partnerships, distribution channels, cloud arrangements and integration pipelines can be crippled.

Pressure on partners, divestment rumours and the chilling message to industry

Anthropic is not the only company facing pressure and the wrath of the “almighty” White House. According to some reports, Hegseth is pressuring major technology firms, including Nvidia, Amazon and Google, to divest their stakes and unwind partnerships with Anthropic. The logic is straightforward. If Anthropic is branded a supply chain risk, then any large firm that depends on defence business may find it safer to cut ties rather than risk procurement blowback.

The move is less a procurement decision than a coercive campaign. It should be seen as a warning shot to the entire AI industry dealing with the US government, especially on military contracts. Sign the contract on the government’s terms, or face a blacklist that can make other firms abandon you.

Anthropic’s response, court challenge and refusal to fold

Anthropic has said it will challenge the supply chain risk designation in court, calling the move legally unsound and warning that it sets a dangerous precedent for any American company that negotiates with the government. The company also says it will work to enable a smooth transition to other providers so that military planning and operations are not disrupted. However, it has insisted it cannot in good conscience remove the two safeguards.

In other words, Anthropic is offering cooperation on offboarding, but not surrender on principle.

Why this matters beyond Anthropic

This episode is not just about Anthropic and its standoff with the US government. The administration is asserting that the military must have full control over the tools it buys and that no company can impose constraints on their use. Anthropic is asserting that certain capabilities are too dangerous to enable, especially when AI is still prone to errors and hallucinations. When the use cases involve either turning the state’s gaze inward or delegating lethal decisions to machines, the lack of human oversight can be catastrophic.

Even supporters of a strong national security posture should pause at the precedent. A state that can blacklist a domestic firm as a supply chain risk for refusing to enable mass surveillance or human out of the loop weapons is a state signalling that private sector dissent on ethics will be punished through procurement power.

The most worrying part is not the rhetoric. It is the mechanism. The supply chain risk label is not just a contract dispute tool. It is designed to isolate an entity from a defence linked ecosystem. Used this way, it turns national security procurement into leverage that can reshape the AI industry by force.

The bottom line

Anthropic claims that it has built its brand on AI safety, and it has drawn clear red lines around domestic mass surveillance and fully autonomous weapons without human oversight. The Trump administration has responded with an aggressive two-step strike: a government-wide cease-use order and a Pentagon-backed “supply chain risk” designation that threatens to cut Anthropic off from partners and contractors across the defence economy.

For the moment, the message from Washington is blunt. Either an AI company gives the military unrestricted access on the government’s terms, or it risks being treated like an adversary, with consequences that can choke its business to death.
