The enthusiasm generated by large language models has profoundly reshaped the landscape of artificial intelligence in business. Conversational interfaces, productivity assistants, content generation: uses have multiplied at great speed. But behind this excitement, a more structural question remains largely unanswered: how is AI really being integrated into the heart of business decision-making systems? In other words, into the systems where the data that drives finance, operations, human resources or the supply chain actually lives.
It is in this blind spot that Large Tabular Models (LTMs) are emerging: a family of models still barely visible in the media, but which could well constitute the next decisive step in business-oriented AI.
The persistent gap between generative AI and operational reality
LLMs excel in a specific register: language. They interpret, synthesize and produce text with unprecedented fluidity. In business, this capacity has translated into rapid gains in cross-functional activities, whether customer support, marketing, internal documentation or team assistance.
But the reality of business systems is of another nature: the majority of structuring decisions rest not on sentences or documents, but on tables: invoicing lines, sales histories, stock levels, schedules, financial indicators, customer bases, product reference data. Structured, standardized, interconnected data, often subject to strict regulatory constraints.
Applied to these environments, LLMs quickly show their limits. Their reasoning remains probabilistic, poorly constrained by business rules, and difficult to explain when high-impact decisions are at stake.
Tabular data, the silent backbone of the company
Tabular data underpins ERPs, CRMs, accounting systems, industrial planning tools and business intelligence platforms. Unlike unstructured data, its value lies not in its volume but in its relationships: dependencies between variables, temporality, exceptions, thresholds, causality.
Historically, exploiting this data has relied on specialized statistical or machine learning models, efficient but fragmented: one model per use case, extensive feature engineering, complex pipelines that are often difficult to maintain over time. An effective approach, but expensive and not very generalizable.
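This one-model-per-use-case logic can be caricatured in a few lines. The sketch below (hypothetical data, feature choices and baseline, purely for illustration) shows the kind of hand-crafted feature engineering that a single forecasting use case typically demands, and that must be redone from scratch for the next one:

```python
# Illustrative sketch of a classic one-model-per-use-case tabular pipeline.
# The data, the features and the naive baseline are hypothetical examples.

def engineer_features(sales_history):
    """Hand-crafted features for ONE use case: monthly sales forecasting."""
    feats = []
    for i in range(3, len(sales_history)):
        window = sales_history[i - 3:i]
        feats.append({
            "moving_avg_3m": sum(window) / 3,                            # smoothed demand level
            "last_delta": sales_history[i - 1] - sales_history[i - 2],   # recent trend
            "target": sales_history[i],
        })
    return feats

def fit_naive_model(feats):
    """Stand-in for a specialized model: predict the 3-month moving average."""
    errors = [abs(f["target"] - f["moving_avg_3m"]) for f in feats]
    return {"mean_abs_error": sum(errors) / len(errors)}

sales = [100, 110, 105, 120, 130, 125, 140]
features = engineer_features(sales)
model = fit_naive_model(features)
print(len(features), round(model["mean_abs_error"], 2))  # 4 rows, error 13.75
```

Every element here is bespoke: change the use case (churn instead of sales, say) and the features, the target and the evaluation all have to be rebuilt, which is exactly the maintenance cost described above.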
What the emergence of Large Tabular Models changes
Large Tabular Models aim to break with this artisanal logic. The principle is simple to state, more complex to execute: train large models specifically designed to learn tabular structure at scale, on heterogeneous data.
Unlike classic tabular models, LTMs seek to generalize. They learn not only values, but schemas, inter-table relationships, temporal dynamics and business regularities.
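To make the idea concrete, one way for a model to see schemas and not just values is to serialize a table together with its column names and types into a single sequence. The sketch below uses a hypothetical encoding format (not the input format of any specific LTM) to show the principle:

```python
# Conceptual sketch: serializing a table WITH its schema into a token
# sequence, so a model can learn structure, not just values.
# The encoding format is a hypothetical illustration, not a real LTM's input.

def serialize_table(name, columns, rows):
    tokens = ["<table>", name]
    # Schema tokens carry column names and types explicitly.
    tokens += ["<schema>"] + [f"{col}:{dtype}" for col, dtype in columns]
    # Each row is emitted as column=value pairs, preserving the link to the schema.
    for row in rows:
        tokens.append("<row>")
        tokens += [f"{col}={val}" for (col, _), val in zip(columns, row)]
    tokens.append("</table>")
    return tokens

columns = [("invoice_id", "int"), ("amount", "float"), ("status", "cat")]
rows = [(1, 250.0, "paid"), (2, 99.9, "open")]
tokens = serialize_table("invoices", columns, rows)
print(tokens[:5])  # ['<table>', 'invoices', '<schema>', 'invoice_id:int', 'amount:float']
```

Because column names and types travel with the values, a model trained on many such sequences can, in principle, transfer what it learns about an "amount" column in one system to a differently shaped table in another, which is precisely the generalization classic per-use-case models lack.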
Why this evolution is becoming possible now
Several factors converge. First, architectural advances: the adaptation of transformers to structured data, the integration of graphs and time series, and the hybridization of statistical learning with causal reasoning. Then, the maturity of corporate data infrastructures: data warehouses, data lakes, and the increased standardization of reference data.
Above all, early returns from the LLM wave have made one thing clear: the value of AI lies not only in the interface, but in its ability to improve decisions. With pressure on margins, an uncertain economic context and growing regulatory complexity, decision automation is becoming a strategic issue.
Use cases with high economic intensity
The first areas of application of LTMs appear where decisions are frequent, costly and measurable. In finance, for cash flow forecasting or the detection of accounting anomalies. In industry, for optimizing planning or anticipating supply chain disruptions. In human resources, for workforce management or retention analysis. In commerce, for dynamic pricing or the prioritization of opportunities. In all these cases, tabular data dominates, and the ability to project future scenarios constitutes a decisive advantage.
An issue of governance and sovereignty
Tabular data concentrates sensitive, often non-shareable information, which raises questions of confidentiality, compliance and control. As such, LTMs also open an industrial debate: where are these models trained, on what infrastructures, and with what level of control for the companies that use them?
For Europe, historically less dominant on general language models, this business systems-oriented approach represents a strategic opportunity.