Marvell Targets Acquisition of Celestial AI to Accelerate Optical Compute

This post contains affiliate links, and I will be compensated if you make a purchase after clicking on my links, at no cost to you.

This article looks at Marvell Technology’s $3.25 billion acquisition of Celestial AI and digs into why the deal could shake up the connectivity backbone of next-generation AI and cloud data centers.

Let’s break down the technology behind Celestial AI’s Photonic Fabric platform. We’ll also consider why moving from copper to optics matters so much for the future of large-scale AI systems.

Marvell’s Strategic Move into Next-Generation AI Connectivity

Marvell Technology just signed a definitive agreement to buy Celestial AI, a young but ambitious developer of optical interconnect technology. The cash-and-stock deal is valued at $3.25 billion.

This isn’t just about adding another product. Marvell wants to change how data moves inside AI accelerators and throughout data center infrastructure.

Founded in 2020 by CEO David Lazovsky and COO Preet Virk, Celestial AI quickly drew a lot of attention in the optical computing world. They raised a hefty amount of venture funding, including a $250 million round in March 2025, to push their core technology forward.

Deal Structure and Timeline

Celestial AI’s shareholders will get $1 billion in cash and 27.2 million shares of Marvell stock. If all goes as planned, the deal should close by March 2026, pending regulatory approvals and the usual closing conditions.
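As a rough back-of-envelope check (my own arithmetic, not a figure from the announcement), subtracting the $1 billion cash component from the $3.25 billion headline implies a value of roughly $82.72 per Marvell share for the stock portion:

```python
# Back-of-envelope: implied per-share value of the stock component.
# Assumes the $3.25B headline equals cash plus stock at announcement.
deal_value = 3.25e9      # total cash-and-stock deal value, USD
cash_component = 1.0e9   # cash paid to Celestial AI shareholders, USD
shares_issued = 27.2e6   # Marvell shares issued to those shareholders

implied_price = (deal_value - cash_component) / shares_issued
print(f"Implied value per Marvell share: ${implied_price:,.2f}")
# → roughly $82.72
```

The actual value shareholders realize will drift with Marvell's stock price between signing and close, which is part of why cash-and-stock deals are quoted as a headline number rather than a fixed price.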

Marvell expects Celestial AI to start bringing in real revenue in late fiscal 2028. They’re aiming for an annualized run rate of about $1 billion by fiscal 2029.

What Makes Celestial AI’s Photonic Fabric Different?

AI systems today don’t just struggle with compute speed—they’re often bottlenecked by how fast data can move between processors, memory, and storage. Celestial AI’s Photonic Fabric platform looks like a real game changer here.

Most optical solutions sit at the edge of a processor package. Photonic Fabric, on the other hand, allows optical data transfer right inside GPUs or ASICs, letting data reach anywhere on the chip—not just the sides.

Beyond Co-Packaged Optics: In-Package Photonics

Traditional co-packaged optics boost bandwidth compared to regular electrical interconnects, but they still stop at the chip boundary. Celestial AI pushes this further, weaving photonics deep into the compute architecture itself.

Marvell says Photonic Fabric can deliver:

  • Up to 25× greater bandwidth than standard co-packaged optical options
  • Up to 10× lower latency, which matters a lot for tightly coupled AI training and inference

That mix of higher bandwidth and lower latency is critical for huge models, where communication overhead can become a real headache.
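To see why those two multipliers matter together, here is a toy alpha-beta cost model (transfer time = latency + size / bandwidth). The baseline figures are illustrative numbers of my own choosing, not published specs for either technology:

```python
# Toy alpha-beta model of a single link transfer: time = latency + size / bandwidth.
# Baseline figures below are illustrative assumptions, not vendor specs.
def transfer_time(size_bytes, latency_s, bandwidth_bytes_per_s):
    return latency_s + size_bytes / bandwidth_bytes_per_s

base_latency = 500e-9   # assumed 500 ns per hop for a conventional optical link
base_bw = 100e9         # assumed 100 GB/s per link

msg = 1 << 20           # a 1 MiB message, e.g. a gradient shard

t_base = transfer_time(msg, base_latency, base_bw)
# Apply the claimed gains: 10x lower latency, 25x higher bandwidth.
t_fabric = transfer_time(msg, base_latency / 10, base_bw * 25)

print(f"baseline: {t_base * 1e6:.2f} us, with claimed gains: {t_fabric * 1e6:.2f} us")
```

For small messages the latency term dominates; for large ones the bandwidth term does. Tightly coupled training traffic mixes both, lots of small synchronized exchanges plus bulk tensor transfers, so improving the two together is what moves the needle.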

Enabling Disaggregated Compute and Memory

Photonic Fabric’s architecture supports disaggregated compute and memory. Instead of locking memory and compute together, optical interconnects can link pools of GPUs, XPUs, and memory with latency close to local access.

This opens doors for:

  • More flexible resource allocation across data centers
  • Better use of accelerators and memory pools
  • Scalable, rack-level systems built as unified high-speed fabrics, not just isolated nodes
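As a deliberately simplified sketch of what disaggregation buys you (all numbers hypothetical), the key difference is that a job's memory request is checked against the whole fabric-attached pool rather than any single node's local slice:

```python
# Simplified contrast between per-node memory and a fabric-attached pool.
# All capacities are hypothetical, for illustration only.
NODES = 8
LOCAL_GB = 96                   # per-node local memory
POOL_GB = NODES * LOCAL_GB      # same total capacity, but pooled (768 GB)

def fits_locally(request_gb):
    # Locked model: the request must fit within one node's memory.
    return request_gb <= LOCAL_GB

def fits_in_pool(request_gb, free_gb=POOL_GB):
    # Disaggregated model: the request only has to fit in the shared pool.
    return request_gb <= free_gb

request = 300  # GB needed by one large-model job
print(fits_locally(request))   # False: no single node has 300 GB
print(fits_in_pool(request))   # True: the pooled 768 GB can serve it
```

The optical fabric is what makes the pooled model practical: without near-local latency to remote memory, the pool would be too slow to treat as one resource.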

Why Optical Interconnects Are Critical for AI Data Centers

AI training clusters and large inference systems are pushing copper-based electrical interconnects to their limits. As bandwidth per node keeps rising and power densities go up, copper traces inside and between racks just can’t keep up without burning too much power or running into signal issues.

Marvell really stresses the need to swap copper for optical links—not just between racks, but within racks and even inside packages. AI’s bandwidth and latency demands are only getting tougher.

Thermal Stability and 3D Architectures

Celestial AI’s Photonic Fabric is built to handle the tough thermal environments you get with multi-kilowatt XPUs and high-radix switches. Its thermal stability makes it reliable even where most optical parts would struggle.

This toughness helps with vertical co-packaging in 3D architectures, stacking compute, memory, and photonics close together. More folks see 3D integration as the way forward, since 2D scaling and old-school interconnects just aren’t cutting it anymore.

First Application: All-Optical Scale-Up Interconnects

Marvell’s first target for Photonic Fabric is all-optical scale-up interconnects. These links will connect XPUs—GPUs, AI accelerators, and custom ASICs—at high speeds across new rack-scale systems.

By rolling out high-throughput, low-latency optical links as the backbone of these platforms, Marvell wants to enable:

  • Faster model training with tightly linked accelerator clusters
  • Better performance for big, distributed inference workloads
  • A scalable path for future AI systems that need exascale-level communication

Implications for the Future of AI Infrastructure

With this acquisition, Marvell is pushing to sit right at the center of the optical transformation in data centers. Models keep getting bigger, workloads keep piling up, and honestly, moving data at optical speeds—both inside and between chips—might soon matter just as much as the compute engines themselves.

If Marvell really follows through on its roadmap, weaving Celestial AI’s Photonic Fabric into their designs could be a turning point. Optical connectivity wouldn’t just be an add-on anymore; it’d become a core design principle, stretching from the chip all the way to the rack and maybe further.

     
Here is the source article for this story: Marvell Looks to Acquire Celestial AI
