This blog post digs into OpenAI’s latest moves around how and where it gets the computational muscle it needs. There’s a real pivot happening: OpenAI is moving away from renting compute at a Norwegian data center and instead looking to boost Microsoft’s capacity at the Stargate Norway site in Narvik.
These changes fit into OpenAI’s shifting spending plans, ongoing regulatory headaches, and the broader scramble for AI infrastructure. Nvidia GPUs and Microsoft Azure remain at the heart of this race.
## OpenAI shifts compute strategy in Norway and beyond
The main news? OpenAI has dropped plans to rent compute directly from a Norwegian data center. Now, it’s negotiating to rent capacity from Microsoft instead.
This follows OpenAI’s recent pause on a similar project in the UK. In Narvik, the Stargate Norway campus—originally planned by UK AI cloud startup Nscale—was supposed to pack in thousands of GPUs, with Microsoft stepping up its involvement.
OpenAI had discussed renting about half of Narvik’s capacity, but those talks fizzled out without a deal. The company now says it’s working with Microsoft to get the compute it needs, tying it all back to its ongoing Azure-based spending commitments.
## What is happening with Stargate Norway and Narvik
Microsoft is taking a bigger role at Narvik under the new plan, adding more than 30,000 Nvidia Rubin GPUs at the site.
This deployment connects to Nvidia’s Vera Rubin platform, which will roll out across the UK, Norway, and elsewhere. It all points to a beefier, Microsoft-led compute backbone for OpenAI’s workloads, funded through Azure and woven into OpenAI’s scaling roadmap.
- Partner shift: OpenAI is moving from direct data-center rental in Norway to a capacity-based deal with Microsoft.
- GPU expansion: Narvik will host over 30,000 Nvidia Rubin GPUs, boosting muscle for large model training and inference.
- Platform rollout: Nvidia Vera Rubin is going live across multiple sites to smooth out cross-regional workflows.
- Strategic alignment: This change tracks with OpenAI’s Azure-based spending approach and sidesteps new, stand-alone data-center commitments.
## Implications for partners and the AI compute ecosystem
For Microsoft, the Narvik project is a big step up, making its cloud more attractive for enterprise AI workloads.
It also shows a wider cloud-provider play to secure long-term, high-volume compute deals with major AI developers chasing ever-larger models.
For Nvidia and its platforms, Narvik’s expansion hints that stand-alone regional facilities might matter less if cloud partnerships can deliver the same punch with integrated software. The Vera Rubin rollout in the UK and Norway highlights a trend: unified, cross-region AI infrastructure that prizes efficiency and interoperability.
OpenAI’s financial and strategic choices always draw scrutiny. The company has paused or slowed several projects, like the UK Stargate effort and the Sora video generator, even as it keeps chasing big long-term compute spends.
Back in March, OpenAI closed a funding round at a jaw-dropping $852 billion valuation. The company talks openly about aiming for hundreds of billions—maybe even trillions—in compute and infrastructure commitments down the road.
## Strategic spending, IPO timelines, and future infrastructure
OpenAI’s potential move toward the public markets comes with some big challenges. The company has said energy costs and regulatory hurdles really shape which projects it takes on.
OpenAI is eyeing a long-term goal of spending about $600 billion on compute by 2030. There’s even talk of maxing out infrastructure commitments at a wild $1.4 trillion.
That’s a staggering scale—honestly, it shows just how ambitious this whole AI race has become. Corporate strategy, data-center economics, and cloud partnerships all have to line up if anyone wants to keep up with the rapid growth in model size and capability.
Here is the source article for this story: OpenAI pulls back from Stargate Norway data center deal as Microsoft takes over