
US Military vs. Anthropic: “Supply Chain Risk” Battle

March 23, 2026
Editor(s): Iman Ahmed
Writer(s): Benjamin Dickson, Rudra Thakkalpalli, Mannat Sachdeva

Artificial intelligence is rapidly shifting from a technological breakthrough to an instrument of power. Machine learning models have evolved from commercial novelty into a technology so strategically critical that nations will go to extraordinary lengths to control them. 

As AI systems grow in capability, the question of who deploys them, and on what terms, has become increasingly relevant. This tension is highlighted by the legal battle between Anthropic and the United States Department of Defense (DoD). After refusing to permit its AI models to be used in applications deemed unethical, Anthropic was labelled as a “supply chain risk” by the department. Anthropic’s response, and the response of its competitors, illustrates how ethical commitments can erode under coercive state power and presages the sobering implications of state dismissal of ethical guardrails in powerful technological systems.

 

Ethics vs National Security Priorities 

At the heart of the conflict between the state and the corporate giant lies a core ethical dilemma surrounding the use of artificial intelligence in military contexts. Anthropic, founded in 2021 by former OpenAI executives Dario and Daniela Amodei, was built on a commitment to the responsible use of AI. In contrast, the United States DoD has increasingly prioritised the integration of AI technologies as a crucial instrument of future warfare. This strategic alignment culminated in July 2025, when Anthropic was awarded a $200 million contract to deploy its AI systems in support of national security. To preserve its partnerships and protect sensitive data handled by Claude, Anthropic forfeited millions in revenue by restricting its technology from use by firms linked to the Chinese Communist Party, including some designated by the Pentagon as Chinese military companies. This decision elevates Claude beyond a conventional AI tool into a form of strategic infrastructure, where control over access and use carries direct implications for national security and broader geopolitical dynamics. 

Source: Reuters

The tension between Anthropic’s commitment to ethical AI constraints and the U.S. government’s objective of maximising the strategic use of AI for national security intensified in January 2026, when the company refused to breach its safety protocols. Its resistance to applications involving autonomous warfare and domestic surveillance of American citizens ultimately prompted the Trump Administration to exclude the technology from its systems. The U.S. government also indicated it could invoke the 1950 Defense Production Act, which would allow the department to make use of the technology as it sees fit. 

The potential invocation of the Defense Production Act reinforces that corporate ethical limitations are ultimately secondary to state power, especially in matters of national security. It shows that the Pentagon prioritises strategic advantage over ethical constraints and reveals a distinct power imbalance between the government and private AI companies. This dynamic was further reinforced when the DoD designated Anthropic a “supply chain risk”. Anthropic’s decision to litigate against the DoD after being classified as such signifies both a defence of its commercial reputation and a broader effort to contest the government’s authority to sanction companies that adhere to stringent ethical standards in AI deployment.

Source: LatentAI

Military Demand Reshapes AI Industry Norms 

Anthropic’s refusal to yield to the DoD’s demands did not leave the US military without options for long. In February 2026, within hours of Anthropic being labelled a “supply chain risk”, OpenAI announced that it had struck a US$200 million deal with the DoD to deploy its “advanced AI systems in classified environments”. While the timing of this development may invite scrutiny, what is perhaps more significant is the wider industry context in which the agreement took place. 

Source: ABC

OpenAI’s 2026 agreement with the DoD marked a clear reversal of its 2023 policy, which restricted the use of its models for military purposes. That policy reflected a broader industry consensus that, without robust oversight, AI systems should not be deployed in warfare, given their capacity to cause vast harm without moral discretion. As these capabilities have advanced, and as the military demand that followed has intensified, this consensus has slowly fractured. The DoD has spent around US$13.4 billion over the past year developing autonomous weapons, and another US$9 billion on data centres. As AI’s share of the war economy grows, AI firms have increasingly come to act as defense contractors, integrating into the supply chain of modern warfare and entrenching US military hegemony.

Prior to this formal policy reversal, similar deals had been struck with Elon Musk’s xAI, as well as Google and Palantir, to integrate AI and machine learning into the U.S. national security framework. The Pentagon had additionally been accessing OpenAI’s models through Microsoft’s enterprise-licensed DoD contracts, effectively bypassing the restrictions. While the 2026 agreement had not yet been formally ratified, the use of once-commercial AI to support national intelligence and military interests had already been normalised. 

With the formalisation of the deal, OpenAI CEO Sam Altman announced that the Pentagon had agreed not to use its models for fully autonomous weaponry or mass surveillance, the very restrictions over which Anthropic was deemed a risk. However, these apparent prohibitions sit within a contractual context that permits AI to be used for any legal purpose, and their enforcement is entrusted to authorities over which the government already has overarching control. 

OpenAI’s example shows that the Anthropic conflict was not isolated. Rather, it reflects a broader tension between AI ethics and government demand that every lab is now navigating, and the responses to it will shape ethical standards and AI’s economic and military role in the future.

 

Market and Regulatory Consequences

Beyond the immediate policy and legal implications of the DoD’s designation of Anthropic as a supply-chain risk, the longer-term economic consequences may prove more significant. Nada Sanders, a professor of supply chain management at Northeastern University, warns that applying a supply-chain risk designation to U.S. firms as leverage in negotiations could dampen innovation by deterring investments in guardrails if such efforts risk exclusion from government markets. Over time, this may shift incentives away from proactive safety development, as firms operating in the public sector increasingly prioritise conforming to government pressures over ensuring responsible safeguards.

The broader impact of the designation on other companies will depend in part on whether the blacklisting withstands judicial review as Anthropic continues to challenge the decision in court. However, the significance of the action lies less in its application to a single firm than in the precedent it establishes for how the government may arbitrate disputes with private-sector partners.

For investors, the exclusion raises concerns about Anthropic’s long-term profitability and exposure to regulatory risk, as the government’s actions signal potential restrictions on its future operations. Anthropic Chief Financial Officer Krishna Rao has stated that the government’s actions could cost the company several billion dollars in 2026 revenue, with losses stemming from both government and commercial customers. This uncertainty appears to be influencing customer behavior, with one fintech firm reportedly halving a US$10 million contract, explicitly citing concerns over Anthropic’s strained relationship with the Pentagon. 

While many customers reassess their relationship with Anthropic, major investors including Microsoft, Google, and Amazon have issued statements affirming that Claude will remain accessible through their cloud services outside of defense work. While these commitments provide a degree of market reassurance that Claude will not disappear overnight, they rest on considerable financial ties. In November, Anthropic pledged to spend US$30 billion on Microsoft Azure cloud services, while Microsoft agreed to invest US$5 billion in the company. These arrangements reflect the depth of financial interdependence between the two firms, but they also demonstrate how Anthropic’s continued operation hinges on the support of a small number of large companies, which could erode its autonomy over time. 

 

Who Ultimately Controls AI Development 

Anthropic’s dispute with the U.S. government ultimately reflects a deeper struggle over who controls the boundaries of artificial intelligence. Its exclusion from all government systems, following the unprecedented decision to label it a supply-chain risk, demonstrates how the U.S. government can override ethical guardrails when national security interests are at stake. This sets a powerful precedent, signalling growing pressure on AI firms to align closely with military demands, even when those demands conflict with ethical boundaries. OpenAI’s rapid agreement with the Pentagon further highlights how competitive and regulatory pressures can drive firms to recalibrate their ethical positions. Ultimately, this conflict suggests that the future of AI will be shaped less by stated principles and more by the realities of power and politics. 

 

The CAINZ Digest is published by CAINZ, a student society affiliated with the Faculty of Business at the University of Melbourne. Opinions published are not necessarily those of the publishers, printers or editors. CAINZ and the University of Melbourne do not accept any responsibility for the accuracy of information contained in the publication.
