Building on instability: the new reality of product teams in the LLM era
At Anthropic, product teams operate under a constraint that few technology companies have faced before: the AI model they build on is in perpetual flux. Mike Krieger, the company’s Chief Product Officer and co-founder of Instagram, sums up the tension in one line: “We build a product while the model changes… and sometimes we don’t even know what the new model will do until the final days.”
In the world of LLMs, stability is no longer a prerequisite but an illusion. Each new version of the model – Claude 3.0, 3.5, 4.0 – silently transforms how the AI expresses itself, takes positions, interacts, or holds back. A slight change in training, a variation in the data, an adjustment to the architecture, and the product’s entire balance shifts. It is no longer the front end that determines the experience, but the moving behavior of the underlying engine. The foundation is no longer frozen: it lives, it learns, it reacts.
In this context, shipping a product means accepting to navigate by sight. The model and the interface evolve in parallel and cannot be frozen together. Anthropic’s teams do not have the option of freezing a model and building around it. They must work with the unknown while holding to an uncompromising standard of immediate usability. An AI that responds unexpectedly, at too much length, or conversely too curtly is enough to sour the perception of a product, even if the technology underneath has objectively improved.
To keep this instability from turning into user frustration, Anthropic has built a continuous feedback system. Every interaction with Claude can be rated, commented on, dissected. It is not just thumbs up or thumbs down, but verbatim feedback. The aggregated remarks reveal trends: a Claude that is too talkative, too agreeable, too hesitant to defend a position. This qualitative signal becomes a compass for adjusting the model’s behavior during fine-tuning.
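The loop described here – collect verbatim feedback, aggregate it, surface behavioral trends – can be sketched in a few lines. The tags and keyword heuristics below are purely illustrative assumptions for the sake of the example, not Anthropic’s actual tooling:

```python
from collections import Counter

# Hypothetical trend tags and keyword heuristics (illustrative only).
TAG_KEYWORDS = {
    "too_talkative": ["too long", "verbose", "rambling"],
    "too_agreeable": ["just agrees", "no pushback", "sycophantic"],
    "too_hesitant": ["wishy-washy", "won't commit", "hedges"],
}

def tag_feedback(verbatim: str) -> list[str]:
    """Return every trend tag whose keywords appear in one piece of feedback."""
    text = verbatim.lower()
    return [tag for tag, words in TAG_KEYWORDS.items()
            if any(w in text for w in words)]

def aggregate(verbatims: list[str]) -> Counter:
    """Count tags across a batch of feedback to reveal dominant trends."""
    counts: Counter = Counter()
    for v in verbatims:
        counts.update(tag_feedback(v))
    return counts

feedback = [
    "The answer was way too long and rambling.",
    "Claude just agrees with everything I say, no pushback.",
    "Verbose again, please tighten the responses.",
]
print(aggregate(feedback).most_common())
# → [('too_talkative', 2), ('too_agreeable', 1)]
```

In practice such tagging would be done by classifiers or by a model rather than keyword matching; the point is only that free-text remarks become a countable, comparable signal across model versions.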
But that is not all. Anthropic does not settle for optimizing technical performance. It also treats Claude’s personality as a product in its own right. Tone, rhythm, posture, response structure: the AI is designed as a character, with an identity, preferences, a way of addressing the user. The goal is no longer simply to build an interface that “hosts” a model, but to shape a presence. Claude is not neutral: he has vibes.
This approach has deep consequences for the product organization. The role of the product manager, as we knew it, is upended. It is no longer about executing a linear roadmap, but about steering a moving relationship between a user, an engine, and a constantly evolving use case. The PM becomes a dynamic conductor. They must anticipate side effects, spot micro-divergences between the design vision and the AI’s actual behavior, and maintain functional consistency without ever being able to stabilize what they ship. In this configuration, user tests no longer validate a finished product; they document a transitional state.
Mike Krieger learned this lesson about risk and change the hard way with Artifact, the personalized news startup he co-founded after Instagram. The product was elegant, the learning model effective. But the promise of personalization only materialized after several dozen articles read. For the average user, the benefit was not noticeable in the first minutes. Most abandoned the app before ever discovering its depth. “We underestimated the power of the immediate effect,” he admits. “People don’t adopt a product by betting on its potential. They adopt it because it is good right away.”
That experience deeply shapes his current vision. At Anthropic, every new feature is designed to be useful immediately, with no effort required from the user. In parallel, the teams are already preparing for a paradigm shift: Claude will no longer be just a conversational chatbot, but an agent. An active presence, capable of initiative, of silent observation, of working in the background. It will have to know when to intervene, when to stay quiet, when to offer help without intruding. This progressive agency is one of the major undertakings. It requires designing a product that is not limited to a single interaction, but is part of a continuity, a relationship.
In this shifting world, UX is no longer an interface but a composite experience. What the user experiences is determined not by a screen, but by the momentary alignment between their request, the state of the model, and the shared memory of their exchanges. The AI product becomes a continuous form of conversation. And that conversation is anything but frozen.
Building in this context requires a new discipline: operational lucidity. It is no longer about optimizing a stable foundation, but about synchronizing moving parts. Quality is no longer a state to be reached, but a balance to be maintained. The bug here is not instability. It is failing to take advantage of it.
“This is not a crisis. It’s a new standard,” Krieger concludes. In the era of evolving models, the best products will not be those that resist change, but those that turn it into raw material for creation.