Meta invents social-native AI: a new layer between content, recommendation and commerce

Meta AI is changing direction under the leadership of Alexandr Wang, now head of the Meta Superintelligence Lab. The group is closing a chapter of several years built around Llama. With Muse Spark, Meta begins a repositioning oriented directly toward usage.

This shift reflects a deeper reorganization: AI is no longer conceived as a standalone technological building block competing with OpenAI or Anthropic, but as a functional layer inserted into the group’s products. Meta now wants to orchestrate its ecosystem using proprietary, closed AI.

Muse Spark is “designed specifically for Meta’s products” and should “power a smarter, faster version of Meta AI.” Alexandr Wang’s objective is to build an AI capable of integrating into existing uses: a social-native AI located at the junction of content, recommendation and commerce.

AI designed for a closed ecosystem

In keeping with this logic, Muse Spark becomes a functional layer distributed across the group’s platforms. Meta says the model “powers a smarter, faster version of Meta AI” and will gradually roll out to WhatsApp, Instagram, Facebook, Messenger and smart glasses.

Meta is diffusing AI into environments that are already in massive use, a rollout that should create less friction in adoption: less visible, but just as effective at feeding the group’s economic model.

Responses built from the social graph

What sets the model apart is its main source of context. Meta says that Muse Spark will be able to cite recommendations and content that users share on Instagram, Facebook and Threads.

To do this, Meta will draw on all the content its users produce on its platforms, content that Muse Spark can re-orchestrate on top of the social graph. Responses will be built from contextualized assemblies, fed by social signals: posts, interactions, communities.

This mechanism completely redefines the circulation of content, which is no longer distributed only via the feed or internal search: it becomes an activatable building block in the generation of responses. By relying on this infrastructure, Meta consolidates a structural advantage with direct, real-time access to native content and its usage dynamics, a type of data that is difficult to replicate outside its ecosystem.
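Meta has not published how this assembly works. As a rough illustration only, a retrieval step of this kind might weight posts by interaction signals before handing the best ones to the model as context; every name, field and weight below is hypothetical, not Meta’s API:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    likes: int
    shares: int

def score(post: Post) -> float:
    # Hypothetical weighting: shares count double as an engagement signal.
    return post.likes + 2.0 * post.shares

def assemble_context(posts: list[Post], k: int = 2) -> list[str]:
    # Rank posts by social signal strength and keep the top k as
    # context snippets for response generation.
    ranked = sorted(posts, key=score, reverse=True)
    return [f"{p.author}: {p.text}" for p in ranked[:k]]

posts = [
    Post("ana", "Great ramen spot near the station", likes=120, shares=30),
    Post("bob", "Avoid the tourist-trap cafes", likes=40, shares=5),
    Post("cleo", "Hidden rooftop bar with a view", likes=200, shares=80),
]
print(assemble_context(posts))
```

The point of the sketch is the pipeline shape: user-generated content is scored by its usage dynamics, then re-orchestrated into the answer rather than surfaced through the feed.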

A use-oriented agentic architecture

Another change, just as pragmatic: to optimize its responses, Meta AI can now “launch several sub-agents in parallel”, each responsible for a specific task. For example, when planning a summer trip, one agent writes the itinerary, another compares Paris with other European capitals, and a third suggests activities suitable for children.

This architecture makes it possible to process complex requests without switching to fully autonomous systems, which are still difficult to stabilize on a large scale. A pragmatic choice aimed at improving the quality of responses by multiplying processing points, without exposing the user to the underlying technical complexity.
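Meta has not detailed the orchestration, but the fan-out pattern it describes can be sketched with ordinary thread-based parallelism. The three sub-agents below are placeholders for model calls, reusing the article’s trip example; all function names are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

# Each sub-agent stands in for a scoped model call.
def plan_itinerary(city: str) -> str:
    return f"3-day itinerary for {city}"

def compare_capitals(city: str) -> str:
    return f"{city} vs. other European capitals"

def suggest_kids_activities(city: str) -> str:
    return f"Child-friendly activities in {city}"

def answer(city: str) -> list[str]:
    agents = [plan_itinerary, compare_capitals, suggest_kids_activities]
    # Launch the sub-agents in parallel, then merge their partial
    # results into one response; the user never sees the fan-out.
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        futures = [pool.submit(agent, city) for agent in agents]
        return [f.result() for f in futures]

print(answer("Paris"))
```

The design choice matches the article’s point: each sub-agent stays narrowly scoped and supervised, so the system gains breadth without becoming a fully autonomous agent that is hard to stabilize.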

See the world rather than describe it

One of Muse Spark’s strengths is its ability to “see and understand what you look at, not just read what you type.” For example, taking a photo of an airport snack aisle would allow Meta AI to identify and classify products based on their protein content.

This ability to interpret images in context considerably broadens AI’s scope of application and allows the assistant to guide the user in real situations. It becomes even more central with smart glasses, where visual perception is part of continuous use.
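As an illustration of the snack-aisle example: once a vision model has extracted the products and their nutrition facts (hard-coded below as a stand-in for that step), the classification itself reduces to a sort. Names and values are hypothetical:

```python
# Stand-in for what a vision model would read off the labels:
# product name -> grams of protein per serving.
snacks = {"protein bar": 20, "jerky": 12, "trail mix": 6, "chips": 2}

def rank_by_protein(products: dict[str, int]) -> list[str]:
    # Highest protein per serving first.
    return sorted(products, key=products.get, reverse=True)

print(rank_by_protein(snacks))
```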

Health as a defining use case

Meta also intends to play a role in everyday life, particularly on health issues. The company emphasizes that “health is one of the main reasons why people turn to AI”, a positioning that captures frequent interactions and anchors its services in uses with high perceived value.

This ambition, however, runs up against an issue of trust. Meta’s history with personal data, punctuated by controversies and sanctions, raises questions about the group’s ability to convince users to entrust it with even more sensitive information.

Towards a “personal superintelligence”

Meta wants to build an AI that is “an assistant that can help anyone, anywhere, with what matters most to them,” “an AI that doesn’t just answer your questions, but truly understands your world because it’s built from it.”

It is a pragmatic repositioning in the face of the LLM race, in favor of more direct integration into everyday use. Meta does not seek to compete on model performance alone, but to mobilize an asset that is difficult to replicate: the social interactions produced at scale on its platforms.

Trust, a blind spot in Meta’s AI strategy

One point remains implicit: the question of trust. Meta’s history in managing personal data remains marked by recurring controversies. In this context, extending AI to sensitive areas such as health or purchasing behavior raises a central question: to what extent will users agree to share even more granular data to feed these new services?

Meta points to strengthened security and protection systems, but the issue goes beyond technical compliance alone. It touches on the perception of control, the legibility of how data is used, and the group’s ability to convince users that this new AI layer will not reproduce past ambiguities.