Memory, AI's overlooked bottleneck: Vertical Compute raises 57 million euros to tackle the “memory wall”

With the rise of generative models, artificial intelligence is often described as a race for raw computing power. Announcements of giant infrastructures, GPU clusters and new accelerators dominate the technology debate. Yet a quieter constraint is gradually emerging as one of the main obstacles to AI: memory.

In traditional computer architecture, processors perform calculations while data sits in separate memory modules. This model, inherited from decades of computer engineering, requires constant movement of data between compute units and memory. With the explosion in the volumes of data that AI models manipulate, that movement is becoming increasingly costly in both latency and energy consumption.

Specialists now speak of a “memory wall”, a structural limit that appears when processor performance advances faster than memory technology. “Memory technologies are facing limits in terms of density and performance, while processor performance continues to increase,” explains Sébastien Couet, technical director of Vertical Compute. He says the data-access requirements imposed by AI workloads make it “imperative to overcome the memory wall to enable the next wave of innovation.”

This structural constraint is particularly visible in modern AI infrastructures. Large models require constant access to massive volumes of parameters and intermediate data. In current architectures, this information is generally stored in external memory, notably high-bandwidth memory (HBM), before being transferred to the processors responsible for the calculations. This involves constant exchanges between components, generating latency, energy consumption and infrastructure costs.


As models become larger, this movement of data becomes one of the dominant factors in actual system performance. In some cases, moving information consumes more resources than the computation itself.
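The claim that data movement can outweigh computation can be made concrete with a back-of-envelope energy comparison. The figures below are commonly cited order-of-magnitude estimates for an older (45 nm) process node, not measurements from Vertical Compute; exact values vary by technology, but the imbalance they illustrate is the point.

```python
# Back-of-envelope: energy to move one 32-bit word from off-chip DRAM
# versus energy to perform one 32-bit floating-point operation.
# Figures are commonly cited ~45 nm estimates (order of magnitude only).

FLOP_ENERGY_PJ = 1.0    # ~1 pJ per 32-bit floating-point add
DRAM_ACCESS_PJ = 640.0  # ~640 pJ per 32-bit off-chip DRAM read

def movement_to_compute_ratio(flops_per_word: float) -> float:
    """Energy spent moving one word, relative to the energy spent
    performing `flops_per_word` operations on it."""
    return DRAM_ACCESS_PJ / (FLOP_ENERGY_PJ * flops_per_word)

# A memory-bound kernel doing only 2 FLOPs per word fetched spends
# hundreds of times more energy on movement than on arithmetic.
print(movement_to_compute_ratio(2.0))  # → 320.0
```

On these assumptions, a kernel would need to perform several hundred operations per word fetched before computation, rather than data movement, dominated the energy budget.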

Towards a vertical memory architecture

This is precisely the problem Vertical Compute is trying to address. The startup is developing an architecture that stacks memory directly above the compute units, in a vertical structure integrated within the chip.

The goal is to drastically reduce the distance data has to travel. In traditional architectures, data can travel several millimeters, or even centimeters, between components. In a vertical architecture, these exchanges happen at the nanometer scale.
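A pure time-of-flight comparison gives a feel for what shrinking that distance buys. This is a deliberate idealization (real on-chip delays are dominated by RC effects, not signal propagation speed, so these are lower bounds meant only to show orders of magnitude), and the 0.5c wire speed is an assumption:

```python
# Idealized one-way signal propagation time over a wire.
# Assumes signals travel at ~0.5c; real interconnect delay is
# RC-dominated, so treat these as illustrative lower bounds only.

SIGNAL_SPEED_M_PER_S = 1.5e8  # assumed: roughly half the speed of light

def time_of_flight_ps(distance_m: float) -> float:
    """Minimum one-way propagation time, in picoseconds."""
    return distance_m / SIGNAL_SPEED_M_PER_S * 1e12

print(time_of_flight_ps(0.01))    # 1 cm between chips: ~67 ps
print(time_of_flight_ps(100e-9))  # 100 nm through a stack: ~0.0007 ps
```

Cutting the path from centimeters to hundreds of nanometers shortens the minimum propagation time by roughly five orders of magnitude, before any bandwidth or energy gains are counted.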

This proximity would limit the inefficiencies of data transport, improve memory bandwidth and reduce energy consumption. The technology’s designers also suggest it could approach the performance of fast memories while increasing storage density.

The principle rests on a modular, chiplet-style architecture that combines stacked memory structures and compute units in the same package. This approach could make it possible to integrate memory into computing systems more efficiently, particularly for embedded AI applications or edge computing environments.

According to the founders, this development could help reduce dependence on centralized infrastructure. Current AI systems rely heavily on data centers, partly due to the cost and complexity of the memory architectures required. More compact integration could support running models directly on devices or embedded systems.

A discreet but strategic transformation of AI infrastructure

If this approach proves out at industrial scale, it could change the way artificial intelligence systems are designed. For several years, the bulk of investment has gone to specialized processors: GPUs, AI accelerators and dedicated architectures.

Memory, although essential to overall system performance, is often treated as a secondary component. Yet the rapid growth of AI models, in both size and data requirements, makes this dimension increasingly critical.

In this context, innovations that bring memory and compute closer together could play a structural role in the next generations of computer architectures. The goal is not necessarily to replace existing processors, but to improve their efficiency by reducing one of the main bottlenecks in modern AI.

A European deeptech resulting from nanoelectronics research

Founded in 2024, Vertical Compute is a spin-off of the European research center imec, one of the world’s leading institutes specializing in nanoelectronics and semiconductor technologies. The company develops technology aimed at integrating memory and compute in a vertical architecture intended for artificial intelligence systems.

The company was founded by Sylvain Dubois, a former semiconductor project manager at Google, and Sébastien Couet, a researcher who led memory-technology research programs at imec for several years.

Vertical Compute announces that it has secured an additional 37 million euros, on top of an initial 20 million euro raise, bringing its seed round to 57 million euros. The round was led by the investment fund Quantonation, with participation from Flanders Future Techfund (managed by PMV), Wallonie Entreprendre, Sambrinvest, Noshaq, InvestBW, Drysdale Ventures and Kima Ventures; historical investors Eurazeo, XAnge, Vector Gestion, imec.xpand and imec also took part in the financing.

With a team of around twenty-five employees split between Belgium and France, the startup says it has recently completed a first test chip integrating its vertical memory architecture, a step intended to validate the industrial feasibility of its technology ahead of a broader industrialization phase.