OpenAI shows its cards in Brussels, Anthropic keeps its hand closed

OpenAI has just agreed to what the major artificial intelligence laboratories were still refusing a few months ago: partially opening their sensitive models, notably to European institutions. Anthropic, by contrast, has decided to keep Mythos out of Brussels's reach, at least for the moment.

The difference in approach reveals a new balance of power emerging between the American laboratories developing so-called “frontier” models and the European authorities, who seek to impose oversight of technologies now considered strategic.

On Monday, OpenAI therefore played the diplomacy card, announcing that GPT-5.5-Cyber, a specialized version of its latest model, would be made accessible to selected European partners: governments, cyber authorities, companies and EU institutions, including the EU AI Office. Deployment will, however, be limited to cybersecurity teams that Sam Altman’s company will carefully vet.

The announcement comes a month after Anthropic launched Mythos, an advanced cyber model that quickly sparked concerns around the potential automation of attacks against critical infrastructure and sensitive systems.

Since then, Brussels has been trying to obtain early access to the model in order to assess its real capabilities and security mechanisms. At this stage, however, Anthropic is refusing to provide that access and is keeping Mythos out of reach of European institutions.

The European Commission confirmed this position yesterday, with Thomas Regnier, spokesperson for the European executive, stating that discussions with Anthropic were “not at the same stage” as those initiated with OpenAI. The statement is far from trivial: it means that Brussels is no longer seeking only to regulate the use of artificial intelligence after deployment, but now wants to intervene upstream, at the level of the most advanced models themselves, before their large-scale distribution.

The EU AI Office thus seeks to build a political and regulatory precedent, with the aim of obtaining institutional access to models considered systemic. The challenge is to understand their real capabilities, their limits, their security mechanisms and their potential offensive uses. Brussels also wants to have visibility on red teaming procedures, the safeguards put in place by laboratories and post-deployment surveillance mechanisms.

In other words, the Commission is gradually seeking to establish supervision of advanced artificial intelligence models comparable to the control mechanisms already applied to the nuclear, financial or critical telecommunications sectors.

Cybersecurity serves here as the first testing ground, because models like GPT-5.5-Cyber or Mythos are no longer perceived as simple software assistants. They are becoming capability multipliers, able to accelerate the discovery of vulnerabilities, the analysis of complex systems, the generation of exploits, or even certain large-scale defensive operations.

The difficulty lies in their dual nature: the same capabilities can strengthen the security of European infrastructures but also facilitate certain forms of automated attack. This is precisely what Brussels is now seeking to anticipate.

OpenAI seems to have anticipated this political shift faster than its competitors, in a strategy aimed at strengthening its institutional footing among European states. The company understands that in Europe, market access for advanced models will depend as much on regulatory cooperation as on technological quality. Laboratories able to work closely with Brussels could gain a significant advantage with administrations, large groups subject to compliance constraints, and operators of vital importance.

Anthropic is taking a much more cautious line for now. The lab is likely seeking to avoid excessive exposure of its true capabilities, as well as the creation of regulatory precedents that could pave the way for more intrusive audits. Behind this caution also lies a question central to all American laboratories: how much institutional scrutiny can a lab accept without weakening its technological edge?

The subject goes far beyond OpenAI and Anthropic: the current sequence marks a change of doctrine in the relationship between Europe and American tech. Brussels no longer wants to regulate after the fact, once technologies have been massively deployed and their effects already felt, but to intervene upstream against risks now considered critical.

Cybersecurity is probably only a starting point: tomorrow, the same debates could extend to the most advanced scientific, biological, financial or military models. Behind this sequence, a new balance of power is already taking shape between states and private actors over the control of the critical infrastructure of the AI era.