As artificial intelligence becomes a global strategic issue, the major technological powers are adopting increasingly divergent trajectories. China publishes its models and facilitates their use. The United States protects its models and restricts access. Europe, for its part, seeks to legally regulate an ecosystem it only partially masters. Three approaches, three visions of technology's role in the world order.
China bets on diffusion
In recent months, China has released a growing number of open-source models. Models such as DeepSeek, Yi, Baichuan and MiniCPM are freely downloadable, with documentation and performance sufficient for them to be adopted far beyond China's borders.
This policy is not a philanthropic gesture; it follows a logic of influence. By distributing its technological building blocks at scale, China positions itself discreetly at the heart of global digital value chains. For startups, universities and even some Western companies in emerging markets, these models have become credible alternatives to closed solutions from the United States.
Technological openness thus becomes a vector of strategic expansion. As Chinese models spread, they shape the tools, the interfaces, the data and, in the long run, the uses built on top of them.
The United States favors control
In contrast, the United States has chosen closure. OpenAI, Anthropic and Google DeepMind restrict access to their models. Interaction happens exclusively through APIs, model weights are not published, and training data remains confidential. This approach rests on two principles: protecting technological assets and controlling risk.
The arguments put forward are legitimate: the most advanced models cost hundreds of millions of dollars to train and carry real risks of misuse. But the strategy has a side effect: it leaves the global open-source space to other actors.
In a context where most potential users of AI do not have access to these proprietary tools, Chinese models fill the void. The consequence is that American technological supremacy is coupled with an absence from the open layers of the ecosystem.
Europe regulates but produces little
Europe, for its part, deploys a strategy based on regulation. The recently adopted AI Act provides an ambitious framework for governing uses of artificial intelligence according to their level of risk. Transparency, explainability and auditability are among the guardrails the European Union imposes where others favor speed.
This positioning, while consistent with European values, struggles to rest on a solid industrial base. Europe has only a few actors capable of competing with the large platforms. Initiatives such as Mistral or Aleph Alpha are promising, but still isolated. Without champions able to distribute their own models at global scale, regulation risks keeping Europe at a distance from the very ecosystem it seeks to govern.
A global strategic dilemma
Between total openness and strict closure, no option is without risk. Open-source models enable auditing, distributed innovation and the democratization of uses. But they also raise questions of security, control, even sovereignty. Conversely, closed models protect economic interests and limit misuse, but they concentrate technological power in the hands of a few actors.
Europe could embody a third way: a supervised open source, built on publication standards, responsible licenses and shared governance. But that requires investing in production, not only in standard-setting.
Beyond the models, a question of world architecture
What is at stake here goes beyond technical debate. The question is who will build the cognitive infrastructure of tomorrow. AI models are not neutral: they carry biases, priorities and value systems. To distribute a model is, inevitably, to guide how machines interpret the world.