The article explores a big shift in datacenter optics. It contrasts Nvidia’s push for co-packaged optics (CPO) with the industry’s fast-moving development of Extra-dense Pluggable Optics (XPO).
CPO is seen as the long-term path for high-performance AI infrastructure. Meanwhile, XPO offers a manufacturable, near-term route to much higher lane density and bandwidth.
Andy Bechtolsheim weighs in with a nuanced take. He thinks XPO and CPO can coexist, and notes that dozens of vendors are mobilizing to speed up production in the coming years.
What co-packaged optics mean for data centers
CPO looks inevitable to many leaders who want datacenters to scale interconnects between GPUs, DPUs, switches, and other accelerators. The idea is to put optical transceivers right next to processing tiles, shrinking distances and cutting latency.
That setup boosts bandwidth density, but manufacturability at scale is still a big hurdle. Nvidia’s approach really highlights this tension.
The company already uses CPO in its Quantum X800 InfiniBand and Spectrum X800 Ethernet switches. It plans to bring CPO to its future NVSwitch-enabled “Feynman” GPU systems by 2028.
AI-focused datacenters want denser optics than traditional SFP, QSFP, and OSFP pluggables. They need higher radix, shorter interconnect paths, and support for ever-larger GPU/XPU fabrics.
The challenge? Delivering reliable, cost-effective production of high-density optical engines at scale.
Unpacking Extra-dense Pluggable Optics (XPO)
The industry is racing to develop Extra-dense Pluggable Optics (XPO) to meet the near-term needs of AI workloads. The XPO MSA initiative has broad support from companies like Arista, Microsoft, Marvell, Broadcom, Ciena, and over 100 others.
This signals a major coalition around a pluggable solution that can massively boost lane density without forcing everyone to switch to CPO. An XPO module fits into the footprint of two OSFPs and packs 64 channels at 200 Gb/s each.
The total bandwidth? 12.8 Tb/s per module, which is eight times the 1.6 Tb/s typical OSFP performance. But there’s a catch: XPO modules burn through about 400 W, so you’ll need liquid cooling and cold plates.
Despite the higher heat, XPO assemblies run about 20–25°C cooler than air-cooled OSFP-ZR equivalents at the same bandwidth. The design uses a 50 V bus bar to optimize efficiency, which lets you build really dense switch racks and use shorter cable runs.
In practice, XPO could shrink a 12-rack Ethernet deployment to just six racks. Shorter cables also mean lower fiber costs, which is a pretty compelling win for operators wrestling with growing datacenter footprints.
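The figures above hang together arithmetically. Here's a quick back-of-the-envelope sketch that checks them, assuming the baseline OSFP runs 8 lanes at 200 Gb/s (the 1.6 Tb/s cited above) and, purely for comparison, a hypothetical legacy 12 V supply:

```python
# Sanity-check the XPO figures quoted above.
XPO_LANES = 64
LANE_RATE_GBPS = 200
xpo_bw_tbps = XPO_LANES * LANE_RATE_GBPS / 1000    # 12.8 Tb/s per module

OSFP_LANES = 8                                      # assumed 8 x 200 Gb/s OSFP baseline
osfp_bw_tbps = OSFP_LANES * LANE_RATE_GBPS / 1000   # 1.6 Tb/s

# An XPO module occupies the footprint of two OSFPs, so compare per footprint:
bw_ratio = xpo_bw_tbps / osfp_bw_tbps               # 8x aggregate bandwidth
lane_density_ratio = XPO_LANES / (2 * OSFP_LANES)   # 4x lanes per footprint

# Why a 50 V bus bar helps: at fixed power, current (and I^2*R loss in the
# distribution path) falls as supply voltage rises. 12 V is a hypothetical
# legacy baseline for illustration.
MODULE_POWER_W = 400
amps_at_50v = MODULE_POWER_W / 50                   # 8 A
amps_at_12v = MODULE_POWER_W / 12                   # ~33 A

print(f"XPO: {xpo_bw_tbps:.1f} Tb/s ({bw_ratio:.0f}x bandwidth, "
      f"{lane_density_ratio:.0f}x lane density); "
      f"{amps_at_50v:.0f} A at 50 V vs {amps_at_12v:.0f} A at 12 V")
```

The lower current at 50 V is what makes dense bus-bar distribution practical inside a tightly packed rack.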
Industry momentum and roadmap
The XPO effort aims to be a practical bridge to the future. It supports any optics standard, connector, driver, retimer, or gearbox, so you get density gains without an all-in shift to CPO.
This flexibility matters as the ecosystem matures. Bechtolsheim points out that industry players can get density improvements now while still working on CPO manufacturability in parallel.
More than 20 vendors are expected to manufacture XPO modules, with volume production likely in 2027. By then, XPO could deliver the scalability that expanding AI data centers need while the industry works out the kinks in CPO manufacturing.
Practical implications for AI datacenters
AI-centric workloads benefit from higher interconnect density, less cabling, and smaller rack footprints. The trade-off is managing heat and power at these densities.
XPO’s fourfold increase in lane density and eightfold boost in aggregate bandwidth bring real advantages for big GPU/XPU fabrics. That means denser switch fabrics and more compact racks.
Operators have to weigh the benefits of liquid cooling against the costs of denser, high-power modules. From a design angle, XPO gives you a near-term path to higher radix interconnects without forcing immediate CPO adoption.
Looking further out, datacenters could use both approaches—CPO where manufacturability makes sense, and XPO where fast density gains are crucial for meeting AI demand.
Conclusion: a balanced path to denser datacenter optics
The optics landscape keeps shifting, and next-gen AI infrastructure has to balance competing priorities. CPO offers unmatched proximity and performance, but its manufacturing hurdles won't disappear overnight.
Meanwhile, the industry is rallying behind XPO, and supporting both paths looks like the smart play.
Datacenters can boost density now with XPO, then lean into co-packaged optics as fabrication technology catches up. That's a flexible, scalable way forward: less risky, more cost-effective, and well matched to where AI datacenters are headed.
Here is the source article for this story: Bechtolsheim & Friends Breathe Life Into Pluggable Optics One Last Time