Since the emergence of ChatGPT, artificial intelligence has occupied the media and organizational landscape as rarely before. But while its performance is impressive, its mode of operation remains largely based on ingesting and interpreting masses of unstructured data. Yet in most large organizations, the decisions that really involve budgets, responsibilities or risks are first worked out using structured data, organized in tables.
In this area, as everyone has noticed, LLMs are less at ease, for the simple reason that they were designed to model language, not to compute over tables.
A language model learns to predict the next token from a sequence. This mechanism works very well for text, where information is carried by syntax, context and linguistic regularities. A table, on the contrary, encodes information in a logical structure: typed columns, cells, keys, missing values, implicit relationships between variables.
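To make the mechanism concrete, here is a deliberately simplified toy sketch: a bigram model that predicts the next token purely from successor frequencies observed in a sequence. It is a stand-in for illustration only, not how a real LLM works internally.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count, for each token, which tokens follow it and how often."""
    follows = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        follows[cur][nxt] += 1
    return follows

def predict_next(follows, token):
    """Return the most frequently observed successor of `token`, if any."""
    if token not in follows:
        return None
    return follows[token].most_common(1)[0][0]

# Toy corpus: "the" is followed twice by "cat" and once by "mat",
# so the model predicts "cat" after "the".
corpus = "the cat sat on the mat the cat ran".split()
model = train_bigram(corpus)
print(predict_next(model, "the"))  # cat
```

The point of the sketch is that the model only captures sequential regularities; nothing in it represents typed columns, keys or relationships between variables.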
To “digest” a table, an LLM generally requires it to be serialized (converted to text or JSON). Part of the structure is lost, and above all a hard constraint comes into play: the context window. An LLM can only reason about what it “sees”. Beyond a few thousand, or tens of thousands, of rows, it switches from reading to sampling, or even summarizing. Yet companies routinely work on datasets with millions, sometimes billions, of rows.
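The order of magnitude can be sketched with a back-of-the-envelope check: serialize rows to JSON text and estimate the token count against the window. The 128k-token window and the rough heuristic of four characters per token are illustrative assumptions, not figures from any particular model.

```python
import json

# Illustrative assumptions: a 128k-token context window and
# a rough heuristic of ~4 characters per token.
CONTEXT_WINDOW_TOKENS = 128_000
CHARS_PER_TOKEN = 4

def serialize_rows(rows):
    """Flatten a table into text: column types and keys become
    implicit in the string, which is the structure loss described above."""
    return "\n".join(json.dumps(r) for r in rows)

def fits_in_context(rows):
    est_tokens = len(serialize_rows(rows)) / CHARS_PER_TOKEN
    return est_tokens <= CONTEXT_WINDOW_TOKENS

# A few thousand rows fit; hundreds of thousands do not.
small = [{"id": i, "amount": i * 1.5} for i in range(1_000)]
large = [{"id": i, "amount": i * 1.5} for i in range(500_000)]
print(fits_in_context(small))  # True
print(fits_in_context(large))  # False
```

At millions or billions of rows, the gap between the serialized text and the window is several orders of magnitude, hence the switch from reading to sampling.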
And even when a table fits in the context, LLMs remain ill-suited to the operations that make tabular data valuable: reproducible calculations, large-scale aggregations, detection of weak signals in distributions, rigorous handling of missing values, outliers, breaks and regime changes. They can produce a plausible explanation, but they are less reliable when it comes to delivering a stable, strictly identical result at each execution, which is an imperative in demanding business environments.
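The reproducibility bar is easy to state in code: a deterministic aggregation returns the exact same result on every run, something a sampling-based text generator does not guarantee. A minimal sketch, with made-up transaction data:

```python
from collections import defaultdict

def monthly_totals(transactions):
    """Deterministic aggregation: the same input always yields the
    exact same output, run after run."""
    totals = defaultdict(float)
    for month, amount in transactions:
        totals[month] += amount
    # Sort so the output order is stable as well.
    return dict(sorted(totals.items()))

# Hypothetical transactions: (month, amount) pairs.
txns = [("2024-01", 120.0), ("2024-02", 80.0), ("2024-01", 45.5)]

run1 = monthly_totals(txns)
run2 = monthly_totals(txns)
assert run1 == run2  # strictly identical at each execution
print(run1)  # {'2024-01': 165.5, '2024-02': 80.0}
```

This determinism is trivial for a database engine or a script, and it is precisely the property that generative sampling makes hard to promise.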
This is precisely where so-called Large Tabular Models come in: approaches designed to reason directly about structured data. And this is precisely the positioning of Fundamental, an AI laboratory that has just emerged from the shadows by unveiling NEXUS, its foundation model designed to exploit structured data at the scale of large organizations.
The initial thesis is relatively simple: companies already have the data, often in massive quantities, but part of its value remains inaccessible because traditional predictive tools still too often operate as a cottage industry. Fundamental offers a single foundation that can be applied to a wide variety of problems (demand forecasting, pricing, churn prediction, risk, fraud) without having to rebuild the architecture at each iteration.
The technological approach rests on a break with language models. NEXUS is presented as a Large Tabular Model, not as a “spreadsheet-friendly” LLM. The objective is to produce predictions and recommendations as close as possible to business requirements, particularly in terms of reproducibility.
The other point highlighted concerns scale. Where the most widespread architectures in current AI quickly hit the limits of reading a massive dataset, Fundamental’s ambition is to process volumes that match the reality of large companies: gigantic tables drawn from transactional databases, data warehouses and management tools, sometimes with billions of rows.
The startup has therefore formed a strategic partnership with AWS to allow corporate clients to deploy NEXUS directly in their existing cloud environments. The goal is to reduce integration friction, bring the model closer to the data, and align the approach with the security and compliance requirements that, in practice, make many AI promises difficult to industrialize.
While Fundamental does not communicate about its client portfolio, it is currently recruiting an experienced Enterprise Account Manager to manage and develop relationships with Fortune 100 clients, an indication of the startup’s go-to-market approach.
Having remained very discreet until now, Fundamental announces that it has raised 255 million dollars at a valuation of 1.2 billion dollars. The round comprises a $225 million Series A led by OAK HC/FT, with the participation of VALOR EQUITY PARTNERS, BATTERY VENTURES, SALESFORCE VENTURES and HETZ VENTURES, as well as a $30 million seed raised notably from QUADRILLE CAPITAL, KIMA VENTURES, MOTIER VENTURES and business angels, including ARAVIND SRINIVAS (Perplexity), HENRIQUE DUBUGRAS (Brex) and OLIVIER POMEL (Datadog).
At the origin of Fundamental is a “business × science × execution” team. JEREMY FRAENKEL (CEO) comes from JP MORGAN, then BRIDGEWATER, and had already co-founded a startup, DRIFT (formerly Arkifi), before Fundamental. MARTA GARNELO (CSO) comes from GOOGLE DEEPMIND, where she was a researcher for more than seven years. Finally, GABRIEL SUISSA completes the team with a growth-oriented profile, having worked at JP MORGAN then GREENFIELD PARTNERS.