What OpenAI’s CFO tells us about the company’s financial and industrial trajectory

The remarks of Sarah Friar, CFO of OpenAI, come at a time when the company faces an accumulation of questions, sometimes contradictory, about the real nature of its economic model, its monetization trajectories, and its development plan. The introduction of advertising formats in ChatGPT revived misunderstandings at the end of the week, especially since this option seemed to have been explicitly ruled out by Sam Altman in his previous statements. At the same time, the hypothesis of an IPO around 2027 is circulating ever more openly in the ecosystem, reinforcing the idea that the company has entered a phase of more structured financial communication.

It is in this context that Sarah Friar's speech takes on its full meaning: it seeks above all to restore coherence and readability where public debate tends to fragment the OpenAI model into isolated signals. Let's look at this in more detail.

From an experimental tool to a deeply rooted use

In her opening remarks, Sarah Friar reminds us that ChatGPT was launched as a simple research preview, with an exploratory intent: to understand what would happen if frontier intelligence were placed directly in the hands of the public.

A victim of its own success, what followed went far beyond the initial framework: ChatGPT was adopted very rapidly, and its use far exceeded what had been anticipated. Whether students completing an exercise, parents organizing trips or adjusting a budget, or, more broadly, users turning to ChatGPT to make sense of personal situations, prepare for medical appointments, or clarify complex decisions, personal uses multiplied significantly.

“People used ChatGPT to think more clearly when they were tired, stressed, or unsure,” says Sarah Friar, who recalls that ChatGPT's initial value lies not in professional use but in its ability to help users in their daily lives.

Entry into the workplace as an extension of personal use

Only then does the use of ChatGPT shift to the professional world, whether it is a note reworked before a meeting, a spreadsheet checked one last time, or an email reformulated to adjust its tone. This point is central to understanding the OpenAI model: adoption is personal first, before it spreads into the enterprise.

A research and deployment company

Before addressing the financial dimensions, she returns to the way OpenAI defines itself: the startup is neither a simple research laboratory nor a classic software publisher, but positions itself as a research and deployment organization. OpenAI's goal is to narrow the gap between the rapid advancement of artificial intelligence capabilities and their practical adoption by individuals, businesses, and public institutions.

This framing sheds light on the structuring choices made in recent years, whether in products or in the economic model, all resting on the conviction that research only has value if it translates into sustainable uses.

A monetization doctrine designed to last

Faced with questions about advertising and the company's financial trajectory, Sarah Friar recalls that OpenAI's economic model must evolve in proportion to the value actually delivered by AI, a principle that now rigorously permeates every revenue line.

Thus, subscriptions address continuous individual use; offers dedicated to teams rely on usage-based pricing aligned with the work actually carried out; and APIs allow developers and companies to integrate intelligence into their own products, with costs that evolve according to the results produced.

The introduction of formats linked to commerce and advertising follows this same logic. More and more users come to ChatGPT to decide which product to buy, where to go, which option to choose. At this stage, OpenAI considers that surfacing relevant suggestions creates value for both the user and commercial partners, provided these proposals are clearly identified and genuinely useful. “Monetization should feel native to the experience. If it doesn't add value, it has no place,” she specifies, implicitly recalling Altman's low appetite for advertising while stressing the need to find the most efficient model.

Compute as an economic lens

Beyond these considerations, the heart of Sarah Friar's remarks lies in the relationship she draws between compute and revenue. Between 2023 and 2025, OpenAI's computing power increased from 0.2 gigawatts to approximately 1.9 gigawatts. Over the same period, annualized revenue grew from two billion to more than twenty billion dollars.

She believes, however, that with more computing capacity, adoption and monetization would have progressed even faster. This reading places OpenAI in an industrial logic, where large-scale artificial intelligence relies on heavy infrastructure, capital commitments made in advance, and careful management of resources.
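As a back-of-the-envelope illustration (the gigawatt and revenue figures are those cited in the talk; the "revenue per gigawatt" ratio is my own derivation, not a metric OpenAI reports):

```python
# Figures cited in the talk: compute grew from ~0.2 GW (2023) to ~1.9 GW (2025),
# while annualized revenue grew from ~$2B to ~$20B over the same period.
compute_gw = {"2023": 0.2, "2025": 1.9}
revenue_bn = {"2023": 2.0, "2025": 20.0}

for year in ("2023", "2025"):
    ratio = revenue_bn[year] / compute_gw[year]
    print(f"{year}: ~${ratio:.1f}B annualized revenue per gigawatt")
# → 2023: ~$10.0B annualized revenue per gigawatt
# → 2025: ~$10.5B annualized revenue per gigawatt
```

Revenue thus grew roughly in step with compute, which is precisely the industrial reading Friar proposes: growth is gated by capacity.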

From a constraint to a managed portfolio

It is this analysis that led OpenAI to organize its compute. The company has moved from dependence on a single supplier to a diversified ecosystem, gaining both operational resilience and more visibility into its deployment capacity.

Compute thus becomes an active portfolio: the most demanding models are trained on premium infrastructure when performance is critical, while high-volume loads are directed toward more efficient environments when marginal cost becomes the priority. This segmentation helps reduce latency, increase throughput, and lower unit costs.
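The portfolio logic can be sketched as a simple routing rule. The workload names, the two tiers, and the criterion below are hypothetical, invented purely to illustrate the segmentation described; nothing here reflects OpenAI's actual scheduling:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    latency_critical: bool  # does performance outweigh marginal cost?

def route(w: Workload) -> str:
    """Send performance-critical jobs to premium capacity,
    cost-sensitive high-volume jobs to efficient capacity."""
    return "premium" if w.latency_critical else "efficient"

jobs = [
    Workload("frontier-model-training", latency_critical=True),
    Workload("bulk-batch-inference", latency_critical=False),
]
for job in jobs:
    print(f"{job.name} -> {route(job)}")
# → frontier-model-training -> premium
# → bulk-batch-inference -> efficient
```

In practice the decision would weigh many more signals (throughput targets, regional capacity, contract terms), but the principle is the same: match each load to the cheapest tier that meets its requirements.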

This strategy leads OpenAI to favor partnerships rather than direct ownership of assets. Commitments are made in installments, based on tangible demand signals, in order to support growth without rigidifying the cost structure.

This logic is reflected in the numerous partnerships that have followed one another in recent months. The most structural remains the one established with Microsoft, which provides OpenAI with a decisive share of its infrastructure via Azure. This agreement covers the training of models, their large-scale deployment, and the integration of OpenAI intelligence into Microsoft products. It allowed the company to secure critical computing capacity very early on without directly carrying the corresponding assets on its balance sheet.

As computing needs have grown, however, OpenAI has sought to reduce its reliance on a single vendor. The company has thus expanded its infrastructure ecosystem by relying in particular on CoreWeave, specialized in high-performance computing on GPUs, as well as on Oracle, with which OpenAI announced a partnership relating to additional cloud and data center capacities. These agreements aim to absorb massive loads related to training and inference, while improving medium-term visibility on available volumes.

On the hardware side, OpenAI works closely with NVIDIA, whose GPU architectures form the basis of current frontier models. Here again, this is less an isolated technological choice than an industrial alignment with supply chains capable of supporting a rapid ramp-up.

Taken together, these partnerships illustrate the strategy described by Sarah Friar, combining premium infrastructure and more efficient capacities depending on usage, and committing capital in stages, based on real demand.

Towards an operating layer for intellectual work

Above this infrastructure sits a platform covering a spectrum of uses ranging from text to image, from voice to code, through to APIs. The next stage mentioned concerns agents capable of operating continuously, maintaining context over time, and acting through several tools simultaneously.

For users, this means systems that can manage projects, coordinate plans and execute tasks. For organizations, this amounts to installing an operational layer dedicated to intellectual work.

As these uses become recurring, the economic predictability of the model strengthens, making it possible to invest over the long term.

Giving readability to a model still under construction

This perspective aims to clarify not only the model but also OpenAI's assumed development strategy. Subscriptions, APIs, compute management, the emergence of agents, and the growth of usage are presented as components of a single system still being structured. Advertising could join them if the test phase that has just begun is validated.

In a context where the hypothesis of an IPO by 2027 is increasingly mentioned, OpenAI wants to appear not as a company in search of a model, but as an organization seeking to stabilize an intelligence infrastructure, with its industrial constraints, its capital trade-offs, and its scaling requirements. A communication exercise that is likely to be repeated in the months to come, and which we will not fail to decode.