Chatbots and children: London wants to impose the same obligations as for social networks

The United Kingdom is opening a new front in digital regulation by calling for immediate oversight of artificial intelligence chatbots when they interact with minors. Technology Secretary Liz Kendall has urged Ofcom to use its powers under the Online Safety Act without delay to impose obligations comparable to those that already apply to social media. This shift in doctrine marks a break: conversational systems are no longer treated as simple tools, but as interactive environments posing a systemic risk to children.

This development comes as generative AI services become part of the daily lives of young users through school, gaming, personal assistance and socialization. For the British government, the issue goes beyond classic content moderation. Chatbots can generate unpredictable content, simulate empathy, blur emotional cues and expose minors to dangerous behavior. The underlying logic is one of parity: if the interactions resemble those of a social platform, the obligations must be equivalent.

The executive is now studying how automated conversations could be brought within the legal framework of the Online Safety Act, even though that legislation was not designed with generative AI in mind. Companies could be required, in the short term, to conduct minor-specific risk assessments, implement robust age detection systems, and document how their models prevent the generation of illegal or harmful content. With political backing, Ofcom is preparing to issue guidance and take enforcement action against non-compliant operators.

The government has also planned a national summit dedicated to child safety in the face of AI, bringing together experts, NGOs, platforms and regulators, with the aim of defining a doctrine of “child-safe AI by design”. At the same time, criminal legislation has already been introduced to outlaw the use of AI to create child sexual abuse material, increasing the pressure on developers to build in technical safeguards from the design stage.

With this initiative, the United Kingdom positions itself as the first regulator to treat chatbots as social spaces subject to strict controls. If this direction is confirmed, it could set an international standard and create a precedent for other jurisdictions seeking to regulate minors' growing use of AI.