In 2026, age verification will be one of the central issues of the digital economy. Under combined pressure from governments, regulators and public opinion, the Internet is shifting towards an architecture in which access to services, applications and content is conditioned on a checkpoint that determines whether a user is a minor.
This control point is now the subject of a direct confrontation between the main digital players (social platforms, app store operators, device manufacturers, AI giants) and American, European and Asian authorities, because it significantly redefines the responsibilities, economic models and balances of power that have structured the industry for more than a decade.
Meta, Snap, X, TikTok: the counter-offensive of social platforms
Social networks understood very early that the regulatory tightening exposed them directly. In the United States, Meta has lobbied extensively to shift responsibility for age verification onto app stores. Utah was the first state to adopt this model (taking effect in July 2026), followed by Texas (January 1, 2026) and Louisiana (July 2026).
Meta, Snap and X welcomed the law, which is favorable to them, in a joint statement, and more than a quarter of US states have introduced similar bills.
The offensive has also begun in Europe. Meta has launched an advertising campaign calling on the European Union to impose a unified age-control system on app stores. The company cites a survey in which 75% of European parents say they want mandatory parental authorization before under-16s can download an application. Its argument is that centralizing control at the store level would limit the collection of personal data, reduce friction for users and lessen the risks weighing on social networks.
Behind this position lies above all a redistribution of legal responsibility. With Brussels having opened formal proceedings against Meta for insufficient age controls under the Digital Services Act, shifting the burden onto Apple and Google is a diversionary strategy.
Apple, Google, Samsung: gatekeepers refuse to become identity authorities
Apple and Google reject the idea of app stores becoming bodies responsible for authenticating users' ages. Doing so, they argue, would mean managing a massive collection of sensitive data, bearing direct legal risk and turning the stores into full-fledged digital identity infrastructures.
Apple: minimizing and shifting responsibility
In 2025, Apple introduced an API that lets developers know whether a user is a minor or an adult without transmitting an exact age or date of birth. The architecture relies on account controls handled locally on the device, possibly managed by parents within the Apple ecosystem.
In recent exchanges with US lawmakers, Tim Cook insisted that Apple rejects any requirement to verify identity documents. For the Cupertino company, the priority remains data minimization, and responsibility must lie with application publishers.
Google: an even harder line
Google finds Utah-style laws worrying. The firm believes that forcing an app store to share a user's age with potentially millions of developers creates major risks: new attack surfaces, misuse of data and a lack of clear governance.
It advocates a granular model, in which only applications with a legitimate need would receive age information and the obligations would not weigh on the entire store.
Asian manufacturers, exposed without saying so
Samsung, Huawei, Oppo and Xiaomi are following the debate cautiously. Any store-centric obligation imposed in Europe or the United States would greatly complicate their compliance across already fragmented OS ecosystems.
In Asia, real-name identity regimes (notably in China and South Korea) illustrate a completely different model, which heightens the geopolitical pressure around age control.
OpenAI, Nvidia, Anthropic: AI as a new point of control
The rise of AI models introduces an additional player into the debate. AI giants are developing systems capable of estimating age from biometric, behavioral or vocal signals. These technologies could ultimately become the main point of control, natively integrated into OSes, devices or services. But they raise new questions: bias, accuracy, social acceptability, compliance with European law. They open up a space where the boundary between assistance, security and surveillance becomes difficult to draw.
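One way the accuracy and bias concerns above play out in practice can be sketched with a simple decision rule. This is a hypothetical illustration, not any vendor's actual system: an estimator returns an age with a confidence score, and low-confidence cases are escalated to a stronger verification step rather than decided automatically.

```python
# Hypothetical sketch of conservative use of an AI age estimate:
# low-confidence estimates trigger escalation (e.g. a document check)
# instead of an automatic allow/deny decision.
from dataclasses import dataclass

@dataclass
class AgeEstimate:
    estimated_age: float
    confidence: float  # 0.0 .. 1.0, the model's self-reported certainty

def access_decision(estimate: AgeEstimate,
                    adult_threshold: int = 18,
                    min_confidence: float = 0.9) -> str:
    """Map a model estimate to one of three outcomes."""
    if estimate.confidence < min_confidence:
        return "escalate"   # ask for stronger proof rather than guess
    if estimate.estimated_age >= adult_threshold:
        return "allow"
    return "deny"

print(access_decision(AgeEstimate(24.3, 0.95)))  # confident adult -> allow
print(access_decision(AgeEstimate(16.1, 0.97)))  # confident minor -> deny
print(access_decision(AgeEstimate(19.0, 0.60)))  # uncertain -> escalate
```

The threshold values are illustrative; where to set them, and who audits the model's error rates across demographic groups, is precisely the governance question the text raises.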
Regulators impose an unprecedented political rhythm
In the United States, state legislation is multiplying, often in contradictory directions. Congress is considering several bills to hold app stores accountable. The Supreme Court could, in FSC v. Paxton, redefine the constitutional framework for these obligations, with a domino effect on all age-verification laws.
In Europe, the DSA requires platforms to put in place reasonable age-assurance measures, on pain of sanctions. The European Commission has had Meta in its sights since 2024, judging its tools insufficiently effective.
France has positioned itself as a political driving force. Clara Chappaz had announced her intention to impose on social networks a strict verification obligation before any account creation, even if it meant acting unilaterally in the event of deadlock in Brussels. The precedent of pornographic sites blocked by Arcom serves as a reference. The new minister in charge of digital affairs, Anne Le Hénanff, is continuing this line, all the more so as she knows the subject well: as early as 2023, as an MP, she questioned the government on minors' use of conversational AI agents.
A consensus on the objective, a shared refusal to bear the burden
While all stakeholders agree on the need to better protect minors online, they differ profoundly on where to place the checkpoint. Social networks want to transfer it to the app stores. Store operators refuse to become identity authorities. Device manufacturers fear a model that would increase their liability. The AI giants are advancing their solutions without wanting to bear the regulatory risks and economic consequences.
To conclude, everyone recognizes the objective, but no one wishes to be its guardian.