The situation seemed to have stabilized. Anthropic, whose models were among the few authorized in federal classified environments, continued its discussions with the US Department of Defense (DoD) with a view to renegotiating its contract. At first the outcome seemed open; it was not. At the heart of the dispute lay the extent of authorized uses.
A renegotiation with a broader scope
According to several sources close to the discussions, cited in particular by Bloomberg and CBS, the Pentagon wanted to modify the clauses governing the use of Anthropic's models. Tensions centered on two issues.
The first involved the analysis of massive volumes of data collected on American citizens. This was not just foreign intelligence, but data likely to include search histories, interactions with conversational assistants, geolocation data, and financial transactions, cross-referenced with other administrative or commercial databases.
Anthropic reportedly considered that this scope exceeded the limits it had set for itself. The company had written restrictions explicitly targeting mass domestic surveillance into its terms of use. The DoD, for its part, reportedly wanted to retain more flexible wording, leaving room for operational interpretation.
A first disagreement crystallized on this point.
Autonomous weapons: a disagreement less ideological than technical
The second issue concerned autonomous weapon systems. The US budget for these programs exceeds $13 billion for fiscal year 2026 and covers individual drones, coordinated swarms, and other platforms capable of selecting and engaging targets without direct human intervention.
Anthropic reportedly did not contest the principle of these programs. The sticking point was that the company's leadership considered its models not yet reliable enough to be integrated into lethal decision-making loops.
A compromise was nevertheless reportedly considered: limiting the use of the models to the cloud and excluding their direct integration into embedded "edge" systems operating in the field. The models could have contributed to upstream analysis without playing any role in the engagement decision.
Anthropic reportedly judged this distinction insufficient, because in contemporary military architectures the boundary between cloud and edge is blurring. Systems operate in interconnected networks where remote computing capabilities can influence operations in near real time. Functionally, the company argued, this split did not guarantee a clear separation of responsibility.
A hardened political context
These discussions took place in a particular institutional climate. The Trump administration has emphasized strengthening technological sovereignty and giving the executive branch greater latitude in security matters. Relations between Washington and large technology companies have gone through phases of cooperation but also of tension, particularly when the latter have invoked usage limits against federal agencies.
In this context, the DoD's push for more flexible commitments follows a logic of capability optimization. For the Pentagon, advanced AI models are force multipliers, and operational room for maneuver is a strategic parameter.
For Anthropic, these same models represent a systemic risk. The company was built on a doctrine of "safety by design", integrating safeguards from the design stage. Giving up certain clauses would have amounted to redefining its positioning.
A break with ripple effects
The failure of the negotiations led to a suspension of contractual relations. The DoD has reportedly asked its partners and contractors to end their collaborations with Anthropic, a decision that indirectly affects a broader ecosystem, including cloud infrastructure providers such as Amazon.
At the same time, OpenAI announced a separate agreement with the Pentagon, specifying that its models would be deployed "in the cloud". OpenAI employees have also reportedly voiced support for red lines comparable to those defended by Anthropic.