Meta just signed a long-term deal to spend as much as $27 billion on Nebius’ AI infrastructure. This marks a real shift in how hyperscalers lock down cloud capacity for advanced AI workloads.
Over the next five years, Nebius will deliver up to $12 billion in dedicated capacity across several locations. One of these sites will even host one of the first large-scale deployments of Nvidia’s Vera Rubin AI chips.
Meta gets the option to buy up to $15 billion more in available compute from Nebius during the same period. This kind of deal shows how cloud providers and AI-focused datacenters are teaming up as the need for scalable, efficient AI compute keeps growing.
Meta and Nebius strike a multi-year AI infrastructure deal
The agreement makes Nebius a key supplier for Meta’s expanding AI plans. It also shows how big tech companies are planning far ahead to keep up with the exploding demand for powerful AI models and services.
With other huge capacity commitments in the mix, Nebius is shaping up as a major hub for high-performance AI compute in Europe and maybe even further afield.
Beyond the price tag, the deal’s structure focuses on capacity locks and flexible procurement. Meta can scale or tweak its compute needs as its AI models evolve.
This setup fits with Nebius’s fast growth since it launched in 2022 and its 2024 U.S. listing. It’s a sign that newer cloud players are chasing long-lasting, strategic partnerships with the world’s biggest AI developers.
What the deal covers: capacity, pricing, and optional compute
At the heart of it, Meta gets up to $12 billion in Nebius-provided, dedicated AI capacity at multiple sites. On top of that, Meta can buy up to $15 billion in extra compute, giving it a five-year window to scale resources as its AI workloads grow.
This split between locked-in capacity and optional compute is becoming pretty common as hyperscalers try to plan datacenter resources for the long haul.
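Purely as an illustration (the actual contract terms are not public, and all names here are hypothetical), the committed-plus-optional structure can be sketched as a toy spend model in which the reported $12 billion is reserved capacity and up to $15 billion more can be drawn on demand:

```python
# Illustrative only: a toy model of a committed-plus-optional compute contract.
# Only the two publicly reported ceilings are real; everything else is hypothetical.

COMMITTED_CAP = 12_000_000_000   # reported dedicated-capacity ceiling, in dollars
OPTIONAL_CAP = 15_000_000_000    # reported optional extra-compute ceiling, in dollars

def total_spend(committed_used: float, optional_used: float) -> float:
    """Return total contract spend, enforcing the two reported ceilings."""
    if not 0 <= committed_used <= COMMITTED_CAP:
        raise ValueError("committed spend outside the dedicated-capacity ceiling")
    if not 0 <= optional_used <= OPTIONAL_CAP:
        raise ValueError("optional spend outside the optional-compute ceiling")
    return committed_used + optional_used

# Fully drawing both tranches reproduces the headline figure: $27 billion.
print(total_spend(COMMITTED_CAP, OPTIONAL_CAP))
```

The point of the split is visible in the model: only the committed tranche is a firm obligation, while the optional tranche gives Meta headroom to scale without renegotiating.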
The framework secures hardware supply and fits Nebius's push, as a Europe-rooted cloud provider, to expand its reach in AI-first infrastructure. Meta wants to speed up AI development while keeping operational risk in check by spreading workloads across multiple locations and supply chains.
Nvidia Vera Rubin: a milestone deployment
One of the standout parts of the deal is Nebius's rollout of Nvidia's Vera Rubin AI chips at scale. Vera Rubin is Nvidia's next-generation accelerator platform, pairing the Vera CPU with the Rubin GPU, and is aimed at boosting AI training efficiency and throughput.
By bringing Vera Rubin chips into its datacenters, Nebius can offer Meta high-end compute with better performance per watt, cutting the energy cost of training and running large AI models. That's a big deal at the scale Meta operates.
Nvidia sees this partnership as more proof that its chips work in real-world, hyperscale environments. Tech like this is arriving just as companies push bigger models and more complex workloads into production.
Nebius’s broader market moves and related partnerships
Meta isn’t Nebius’s only big-name partner. Nvidia recently put $2 billion into Nebius, and Nebius has also agreed to supply Microsoft with up to $19.4 billion in compute over five years.
These multi-billion-dollar deals show Nebius has a capital-efficient path to scaling, and the broad expectation is that AI-dedicated cloud capacity will become a must-have asset worldwide.
The stock market liked the Meta deal—Nebius shares jumped, and investors seem pretty excited about where the company's headed. Citi even gave it a buy/high-risk rating, pointing to Nebius's distinctive position on total addressable market (TAM), margins, and scalable economics in AI datacenters.
Industry context: hyperscalers’ AI capex and the AI datacenter market
Meta’s move with Nebius fits into a much bigger wave of AI-driven spending among hyperscalers. Meta alone plans to spend between $115 billion and $135 billion on AI this year, part of about $700 billion that major cloud providers are investing together.
This all shows how crucial secure, scalable AI infrastructure has become for staying competitive over the next decade.
As cloud providers race for the top spot in AI performance, deals like Meta–Nebius—and Nebius’s partnerships with Microsoft and Nvidia—show a real shift toward long-term, globally distributed AI datacenters. This kind of ecosystem lets companies develop models faster, deploy them more reliably, and scale AI responsibly across industries.
Key takeaways for researchers and developers
- Strategic capacity planning—long-term contracts let hyperscalers lock in compute for future AI workloads.
- Hardware acceleration at scale—Nvidia Vera Rubin chips are rolling out, and that means newer accelerators are finally making it into production AI environments.
- Industry consolidation—big cloud players keep teaming up with specialized providers to push AI capability forward, not just to stack up more raw capacity.
- Investor interest—the market’s reaction and all the analyst buzz show how much people expect from Nebius and similar datacenter platforms.
Researchers and practitioners are watching the cloud compute landscape change fast. It's reshaping how experiments get designed, scaled, and deployed—everything from model training to real-time inference, in science and industry alike.
Here is the source article for this story: Meta signs deal worth up to $27 billion with Nebius for AI infrastructure