Why conversational AI requires us to rethink the protection of minors

Conversational AI is entering the daily lives of minors at unprecedented speed. In just a few months, systems capable of dialoguing, supporting, advising or simulating empathy have been embedded in messaging apps, games, social networks and educational tools. By changing the conditions in which children and adolescents interact, socialize and build their identities, these services are not anecdotal and cannot remain in the legislator's blind spot.

Ubiquitous, but largely invisible systems

Unlike traditional social networks, conversational AI is not easily observable, because it operates through private, individualized and, above all, continuous exchanges. For parents and educators alike, it becomes difficult to grasp the real nature of these interactions: what is at stake is not content published on a platform, but a conversational relationship that unfolds over time. It is precisely this relational dimension, more than the content itself, that must now attract the attention of public authorities.

Architectures already in use among minors

Several categories of systems are already widely accessible.

First, general relational chatbots, which are built on conversational continuity and emotional adaptation. Services such as Replika or Character.AI have popularized the idea of a conversational agent capable of maintaining an ongoing, sometimes personalized, often immersive relationship.

To these are added the AI assistants integrated into social platforms, such as Snapchat's My AI chatbot or the conversational assistants deployed on Instagram. These tools can, in theory, be deactivated via the parental controls offered by Meta, but they are inserted by default into environments already shaped by the attention economy. Here, AI is not presented as a companion but as a continuous assistance layer, designed to interact as closely as possible with everyday uses.

Finally, games and virtual worlds are gradually integrating non-player characters capable of open-ended dialogue. On platforms like Roblox, conversational AI is woven into worlds where the border between play, fiction and socialization is already porous, particularly for the youngest users.

These systems differ in their purpose, but they share the same logic: personalized interaction, permanent availability, and powerful behavioral adaptation capabilities.

The shift in risk: from content to relationship

Existing regulations were designed to respond to violent, pornographic, hateful or fraudulent content. Conversational AI, however, introduces a new complexity: the risk does not necessarily lie in what is said, but in the way the relationship is built and develops over time.

A chatbot can operate without ever producing illicit content while still exerting significant psychological influence, through repetition, through the implicit validation of certain feelings, through the absence of human contradiction, or through the simulation of unconditional listening. In an adult, these mechanisms can be compensated for; in a minor, they intervene during a phase of cognitive and emotional development that is still unstable.

A regulatory convergence that remains fragile

The United Kingdom was one of the first to address the subject, asking Ofcom to use the levers of the Online Safety Act to regulate chatbots interacting with minors. The State of New York, under the leadership of its Governor, Kathy Hochul, is directly targeting the design and features accessible to minors, seeking to impose default restrictions, limit certain conversational AI functions, and strengthen parental controls.

Why the urgency is now evident

Relational AI can no longer be assumed to be neutral when it addresses children. As always, time works against regulators: each month without a clear framework consolidates usage patterns and normalizes interactions whose long-term effects remain, to date, poorly documented.

The question is not that of a general ban on conversational AI for minors, which can be useful in many areas, notably education, but of the conditions of its deployment. It appears necessary, in the imperative sense of the term, to impose explicit transparency about the artificial nature of the interaction, and to require that age-appropriate functional limitations be built in from the design stage.

A debate set to widen

Minors today constitute the primary terrain of this reframing, because they make specific vulnerabilities visible. But the questions raised (psychological influence, relational dependence, the delegation of emotional functions to artificial systems) go far beyond this audience alone and deserve reflection in their own right.

In this sense, the current framing around children and adolescents could well foreshadow a broader debate on the place that societies agree to grant to relational AI, not as an abstract technology, but as a full-fledged player in contemporary social environments.