Over the last three years, generative AI has unfolded as a story about text: chatbots and assistants, all providing answers. Spectacular progress, but confined to a field where the machine can train on massive volumes of already available data, and where the environment and the language offer a form of ready-made world.
Arthur Mensch (X2011), co-founder and CEO of Mistral, came to meet the pupils and students of X for a talk on the implications of AI and on entrepreneurship.
In his introduction, Arthur Mensch recalls the startup's beginnings: “we started in 2023. The only thing we knew how to do was train chatbots”, then “quite quickly (…) everyone realized that it was more than just chatbots”.
A new, more demanding frontier then emerges: connecting AI to simulators, then to the real world. In other words, moving beyond language as the sole environment and confronting models with physics, materials, robots, industrial constraints, and even safety.
From text to operation
The language model becomes an orchestration layer capable of “interacting with information systems, (…) browsing the web, interacting with computers”, but also “discovering on your own which system to set up, which process to automate”.
This evolution takes AI from commentary to execution. However, where text is a universe relatively tolerant of approximation, the real world imposes repeatability, robustness and responsibility. The question is no longer just to produce a plausible answer, but to act correctly within a constrained system.
Simulators as a learning ground
Simulators make it possible to train systems in controlled environments, explore scenarios, and accelerate learning cycles, without immediately exposing machines, infrastructure or patients to errors.
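The training loop described here can be sketched in miniature. The example below is purely illustrative (the simulator, policy and reward are toy assumptions, not anything Mistral has described): an agent acts in a controlled environment, receives feedback, and no physical machine is ever at risk.

```python
import random

class ToySimulator:
    """A trivial 1-D environment: drive a position toward a target."""
    def __init__(self, target=10.0):
        self.target = target
        self.position = 0.0

    def reset(self):
        self.position = 0.0
        return self.position

    def step(self, action):
        # The "physics": the action moves us, with a little noise.
        self.position += action + random.uniform(-0.1, 0.1)
        error = abs(self.target - self.position)
        reward = -error          # closer to the target = higher reward
        done = error < 0.5
        return self.position, reward, done

def run_episode(sim, policy, max_steps=100):
    """Roll out one episode in the simulator; return the total reward."""
    state = sim.reset()
    total = 0.0
    for _ in range(max_steps):
        state, reward, done = sim.step(policy(state))
        total += reward
        if done:
            break
    return total

# A naive proportional policy: move halfway to the target each step.
policy = lambda pos: 0.5 * (10.0 - pos)
score = run_episode(ToySimulator(), policy)
```

The point of the sketch is the separation of concerns: the policy can be trained and evaluated against the simulator thousands of times per second, and only then confronted with the "gap" the next section describes.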
However, Arthur Mensch emphasizes that this field remains wide open. “We don’t know very well how to create systems (…) which interact with physics simulators, which interact with the real world, which understand how to operate robots, which understand the physics of materials, which understand biotechnologies”.
This sentence sketches a landscape in which language is only one step, and in which interacting with a simulator requires, in particular, an understanding of physical rules. Interacting with the real world, in turn, requires managing the unexpected: variability, noise, wear and tear, and delays.
A simulator is an approximation of reality; the whole difficulty lies in the gap between performance observed virtually and performance obtained on a concrete system. This transition is not only technical but also organizational and economic.
Data does not fall from the sky
In industrial or scientific fields, where relevant data is neither abundant nor immediately usable, it must be collected, cleaned, structured, sometimes produced by specific instrumentation. Arthur Mensch says it bluntly: “if you want to push the capabilities of a model in materials science or quantum physics, you will have to get up early, go get data, find the simulators, set up the environments that allow the systems to be strengthened”.
This remark captures the essential point. AI applied to science and industry is not just a problem of model architecture. It is an environment problem: you must define what to measure, how, at what frequency, and under what conditions. You must be able to design pipelines, protocols and interfaces. Performance depends as much on this infrastructure as on the algorithm itself.
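What "defining what to measure, how, and under what conditions" might look like in code can be sketched as a minimal measurement spec plus a cleaning step (all names and values here are illustrative assumptions, not a real instrumentation stack):

```python
from dataclasses import dataclass

@dataclass
class MeasurementSpec:
    """What to measure, how often, and within which valid conditions."""
    quantity: str
    unit: str
    frequency_hz: float
    valid_range: tuple  # (min, max): readings outside are rejected

def clean(readings, spec):
    """Keep only readings that fall inside the spec's valid range."""
    lo, hi = spec.valid_range
    return [r for r in readings if lo <= r <= hi]

# Hypothetical sensor channel: two obvious glitches get filtered out.
spec = MeasurementSpec("temperature", "celsius", 1.0, (-40.0, 120.0))
raw = [21.5, 22.1, 999.0, -273.0, 23.4]
usable = clean(raw, spec)  # → [21.5, 22.1, 23.4]
```

Trivial as it is, this is the kind of infrastructure the quote points to: before any model sees the data, someone has decided the unit, the sampling frequency and the rejection rules.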
Verticalization and specialization
As AI moves beyond the general domain of text, it faces specific constraints. A model capable of writing a report does not spontaneously become competent in robotics or materials science. In each of these areas, units change, standards are imposed, and margins for error are tightened.
For the founder of Mistral AI, “the specialization of models and penetration into increasingly verticalized areas continues to require a lot of human thinking and strategization capacity.”
This move toward verticalization calls for hybrid teams, capable of understanding both the models and the application areas. AI does not replace expertise; on the contrary, it amplifies it, and can also put it under pressure.
Robotics and the physical world: the robustness test
Robotics presents many difficulties, because it requires combining perception, planning, control and safety. An error translates into a poorly executed action, which can damage equipment or even endanger people.
When Arthur Mensch talks about systems “that understand how to operate robots,” he is not describing an abstract capability. He speaks of an intelligence confronted with constraint. Understanding, here, does not consist of producing a coherent explanation, but of integrating mechanical parameters, energy limits, execution times, safety standards, legal responsibilities.
In this context, AI is no longer reduced to a conversational interface. It becomes an orchestration layer, located between sensors, actuators, simulators and operational procedures. It must articulate perception, decision and execution in environments where error is not simply an imprecise response, but a cost, a risk or a failure.
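The perception-decision-execution articulation described above can be made concrete with a deliberately simplified sketch (every function and limit here is a hypothetical illustration, not a real control stack): each stage is separate, and execution enforces a safety envelope rather than trusting the decision blindly.

```python
def perceive(sensor_readings):
    """Fuse raw sensor readings into a state estimate (here: an average)."""
    return sum(sensor_readings) / len(sensor_readings)

def decide(state, setpoint, max_step=1.0):
    """Choose an action toward the setpoint, clamped to actuator limits."""
    delta = setpoint - state
    return max(-max_step, min(max_step, delta))

def execute(action, safety_limit=1.0):
    """Refuse any command outside the safety envelope before actuating."""
    if abs(action) > safety_limit:
        raise ValueError("action outside safety envelope")
    return action  # in a real system: send to the actuator

# One perception -> decision -> execution cycle
state = perceive([4.9, 5.1, 5.0])      # state estimate: 5.0
action = decide(state, setpoint=8.0)   # wants +3.0, clamped to +1.0
applied = execute(action)
```

The design choice worth noting is that the safety check lives in `execute`, downstream of the decision: an error here is not an imprecise answer but a command that never reaches the actuator.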
It is precisely here that the new frontier plays out. No longer in the fluency of language, but in the ability to act under constraint, reliably, repeatably, and integrated into reality.