The Pentagon versus Anthropic: when the state refuses to let Silicon Valley dictate the rules of military AI

As artificial intelligence becomes strategic infrastructure for states, an AI laboratory has, for the first time, refused an army’s conditions. That is the meaning of the conflict that now pits Anthropic against the United States Department of Defense. The American company has initiated legal proceedings against the US government after being designated a “supply chain risk”, a label usually reserved for suppliers considered threats to national security.

This decision comes after the failure of negotiations over the use of the laboratory’s artificial intelligence models in military programs, and it raises a much broader question: who should decide on the military use of artificial intelligence, states or the companies that develop it?

A disagreement over the limits of military AI

Founded in 2021 by seven former OpenAI employees, Anthropic develops the Claude family of artificial intelligence models. Launched in March 2023, the system has gradually established itself in a range of professional and institutional environments.

According to the complaint filed by the company, Claude is today the most widely deployed cutting-edge AI model within the Department of War, and the only model in this category used on certain classified US government systems.

In this context, the company is already participating in several initiatives intended to accelerate the adoption of AI in the federal administration. Anthropic is notably part of a program led by the Pentagon’s Chief Digital and Artificial Intelligence Office aimed at experimenting with the integration of commercial AI models into military operations.

Under this program, several suppliers, including Anthropic, OpenAI, Google, and xAI, can receive contracts capped at $200 million per company.

As the capabilities of generative models advance, U.S. defense agencies are seeking to leverage these technologies in critical functions, whether in intelligence analysis, data processing, cybersecurity, or operational decision support.

Claude models can, for example, be configured to analyze large volumes of information, assist with software development, or support certain operations related to cybersecurity and vulnerability detection.

But the negotiations quickly revealed a fundamental disagreement. Where the Pentagon requested access allowing any use consistent with American law, Anthropic wanted to maintain certain restrictions in its usage policy.

The laboratory, headed by Dario Amodei, has set two red lines: a ban on using its models in lethal autonomous weapons without human supervision, and a ban on using them for mass surveillance systems targeting American citizens. According to the company, these restrictions are grounded in the current technical limitations of artificial intelligence systems. Generative models can produce inaccurate answers or errors, a phenomenon often referred to as “hallucination”. For Anthropic, these systems are therefore not reliable enough to be used in contexts where lethal decisions could be made autonomously.

The company also believes that the use of these technologies in mass surveillance programs poses significant risks to civil liberties, due to the ability of AI systems to analyze unprecedented volumes of data.

For the American authorities, these limitations appear incompatible with the operational requirements of a military organization. The Department of Defense thus asked Anthropic to abandon its usage policy and accept that its models could be used for “any legal use”.

The failure of these discussions led to a radical decision by the American government.

A sanction with major economic consequences

On February 27, 2026, the President of the United States issued a directive ordering all federal agencies to immediately cease all use of Anthropic’s technologies. A few hours later, the Secretary of Defense designated the company a “supply chain risk to national security”. This decision is accompanied by a particularly heavy measure: contractors, suppliers, and partners working with the American military may no longer do business with Anthropic.

In practice, such a measure may force many companies involved in military programs to exclude Anthropic’s technologies from their systems.

The company’s complaint notes that this designation is usually reserved for situations where a supplier could be exposed to interference or sabotage by a foreign power. Anthropic says it has never been accused of such risks and points out that it holds security authorizations issued by the American government, notably under the FedRAMP program, which authorizes the use of cloud services in the federal administration. The company also says its technologies were developed in collaboration with several federal agencies and that it has participated in projects involving sensitive data analysis and cybersecurity.

Despite this, several federal agencies quickly began ending their relationships with the company.

The General Services Administration notably removed Anthropic from its platforms for supplying technology services to the government, while other administrations suspended the use of its models. The dispute escalated when the Department of Defense gave the lab an ultimatum to accept its new terms of use.

Paradoxically, however, the government decision provides that Anthropic’s services may continue to be used for a transition period of up to six months, giving administrations time to migrate to other providers.

For Anthropic, the potential economic impact is considerable. In its complaint, the company claims that the measures taken by the American government could jeopardize several hundred million dollars in short-term revenue.

Beyond direct contracts with federal agencies, the company emphasizes that many industrial partners and suppliers working with the US administration could be forced to suspend or terminate their collaborations in order to avoid any regulatory risk.

AI enters the military-technological complex

Beyond the contractual conflict, the affair illustrates a broader development: the rapid integration of artificial intelligence into military systems.

AI models like Claude can be configured to operate in a so-called “agentic” manner, that is, to perform certain tasks with a greater degree of autonomy, such as searching for information, analyzing databases or executing code. These capabilities are of particular interest to defense institutions seeking to accelerate the processing of growing volumes of operational data. Anthropic indicates, for example, that it has developed specific versions of its models intended for national security agencies, capable of working with classified information and analyzing cybersecurity and intelligence data.
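To make the notion concrete, here is a minimal sketch of that kind of “agentic” configuration using Anthropic’s public Messages API with tool use. The `search_documents` tool, its schema, the prompt, and the model identifier are illustrative assumptions, not details taken from the complaint or from any defense program; the point is simply that the model itself decides whether and how to invoke the tool.

```python
# Minimal sketch of "agentic" tool use via Anthropic's public Messages API.
# The search_documents tool, its schema, and the prompt are illustrative
# assumptions for this article; they do not reflect any real deployment.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Declare a tool the model may decide, on its own, to call.
tools = [{
    "name": "search_documents",  # hypothetical document-search tool
    "description": "Search an internal document index and return matching excerpts.",
    "input_schema": {
        "type": "object",
        "properties": {"query": {"type": "string", "description": "Search terms."}},
        "required": ["query"],
    },
}]

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # any current Claude model ID works here
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "Find recent reports on software supply chain risk."}],
)

# When the model judges that a tool call is needed, it stops with
# stop_reason == "tool_use" and returns the structured call it wants to make.
if response.stop_reason == "tool_use":
    for block in response.content:
        if block.type == "tool_use":
            print(f"Model requested {block.name} with input {block.input}")
            # A full agent loop would execute the tool, send back a
            # tool_result block in a follow-up message, and let the
            # model continue reasoning from there.
```

It is this loop, repeated autonomously over many steps and large datasets, that defense institutions hope to harness, and that makes the question of usage restrictions so contentious.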

In this context, artificial intelligence laboratories are gradually becoming potential suppliers of critical infrastructure for States.

The complaint also highlights that several US government officials have recognized the strategic importance of Anthropic’s technologies, some going so far as to describe the company as a “national champion” in the field of artificial intelligence.

A precedent for the AI industry

Regardless of the outcome of Anthropic’s lawsuit, the case could set a major precedent for the artificial intelligence industry.

According to the complaint, American authorities have explicitly criticized the company for maintaining restrictions that they consider “ideological” and incompatible with national defense needs.

Anthropic considers, on the contrary, that its decisions fall within its right to publicly express its positions on the technological limits of its systems and on the conditions of use of its products.

For the company, the issue goes well beyond a contractual dispute, and concerns the ability of a technology company to define usage limits for its own technologies, including when it comes to military applications.

Either way, the episode marks a profound evolution in the sector. AI labs are no longer just technology companies; they are becoming industrial and geopolitical actors whose decisions can influence strategic balances between states.

The central question is no longer just that of the models’ capabilities. It is now one of power: who controls the infrastructure of artificial intelligence, and who decides on its uses?