ISSCC Reveals Electro-Optical Router in Chiplet Package


Today’s post digs into a pretty wild demonstration of on-package optical networking: a 28 nm CMOS electro-optical router sitting on a photonic interposer. It can set up optical paths in just 18 nanoseconds, using barely any energy and barely taking up any space.

This work pulls together optical switching, routing control, SerDes, and clocking logic, all right alongside silicon photonics. The result? You get a compact, dense endpoint that can reconfigure itself on the scale of nanoseconds.

It’s a potentially game-changing approach to linking compute and memory inside a package. Suddenly, electrical routing bottlenecks don’t seem so scary, and you still get low latency.

Technology at a Glance

What really stands out about this router isn’t just the speed. The device lives right at the boundary between electronics and optics: 28 nm CMOS routing logic built directly on a photonic interposer.

One of the big wins here is how it sets up and tears down optical paths in just tens of nanoseconds (about 18 ns for path setup). That’s fast enough for dynamic workloads that need to reconfigure on the fly.
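To get a feel for what an 18 ns setup time means in practice, here’s a quick amortization sketch. The 18 ns figure comes from the demonstration; the 10 Gb/s per-wavelength line rate is purely an illustrative assumption, not a number from the paper.

```python
SETUP_NS = 18.0    # reported optical path setup time
RATE_GBPS = 10.0   # assumed per-wavelength line rate (illustrative only)

def setup_overhead(frame_bits: int) -> float:
    """Fraction of link time spent reconfiguring, for one setup per frame."""
    # bits divided by Gb/s conveniently yields nanoseconds
    frame_ns = frame_bits / RATE_GBPS
    return SETUP_NS / (SETUP_NS + frame_ns)

# A 10,000-bit frame takes 1,000 ns to transmit at 10 Gb/s,
# so path setup costs under 2% of the link time.
print(f"{setup_overhead(10_000):.1%}")
```

Even with these rough assumptions, the point holds: once frames reach a few kilobits, nanosecond-class path setup stops being the bottleneck.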

The system supports multiple wavelengths per link. That means you can scale up and adapt as needed, without giving up on latency or energy efficiency. It’s compact, energy-aware, and flexible—exactly what you’d want if you’re trying to bring optical networking closer to compute and memory.

Honestly, after thirty years in this field, I’ll say it’s about time we saw silicon photonics and integrated optical switching really coming together with regular digital logic. Now, on-package optical networking finally looks practical for real-time applications.

Architectural Highlights

The router integrates optical switching, routing control, SerDes, and clocking logic directly alongside the silicon photonics. Everything’s packed tight, with a footprint of about 0.007 mm² per link.

Analogue drivers team up with standard-cell-based SerDes and clocking circuits, which lets you couple optical endpoints tightly to compute and memory sites.

By dropping these elements directly onto the photonic interposer, the architecture slashes electrical wiring and keeps the benefits of optical signaling for short, intra-package links.

Performance and Capabilities

  • Energy efficiency: 3.19 pJ/bit, so you can move a ton of data without blowing your power budget
  • Area efficiency: about 0.007 mm² per optical link, making dense packing a reality
  • Frame-level routing: setup and teardown at nanosecond speeds
  • Wavelength flexibility: each link can pick between 1 and 6 wavelengths on the fly
  • Integrated optical paths run across centimeter-scale interposers, supporting intra-package optical networking
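The headline figures above can be turned into a rough link budget. The energy-per-bit and area numbers are from the demonstration; the 10 Gb/s per-wavelength rate is an assumption for illustration.

```python
ENERGY_PER_BIT_J = 3.19e-12      # reported: 3.19 pJ/bit
AREA_PER_LINK_MM2 = 0.007        # reported: per optical link
RATE_PER_WL_BPS = 10e9           # assumed 10 Gb/s per wavelength (illustrative)

def link_power_w(wavelengths: int) -> float:
    """Power for one link running flat out on `wavelengths` lambdas."""
    bandwidth_bps = wavelengths * RATE_PER_WL_BPS
    return bandwidth_bps * ENERGY_PER_BIT_J

for n in (1, 6):  # the link supports 1 to 6 wavelengths
    bw_gbps = n * RATE_PER_WL_BPS / 1e9
    print(f"{n} wavelength(s): {bw_gbps:.0f} Gb/s at {link_power_w(n) * 1e3:.1f} mW"
          f" in {AREA_PER_LINK_MM2} mm²")
```

Under these assumptions a fully loaded six-wavelength link burns well under a quarter of a watt, which is what makes dense arrays of such endpoints plausible inside a package.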

Implications for HPC, AI Accelerators, and Data-Intensive Systems

This tech could really change how we see distributed memory and compute. Instead of separate islands tied together with electrical traces, you get a unified fabric.

With dynamic, low-latency optical paths inside the package, data can zip between memory and processing elements—no more paying the energy or latency price of long electrical routes.

Multi-wavelength support on each link means you can tune optical bandwidth to exactly what your workload needs. So you get peak performance without over-provisioning the network just to be safe.

In real deployments, researchers and engineers could build systems where compute, memory, and interconnect evolve together. Optical links could seriously shorten data paths, cut energy per bit, and speed up the data-hungry jobs you see in HPC and AI.

Looking Ahead: Architecture Options and Challenges

This demonstration marks a solid move toward practical on-package optical networking. Still, a few big factors will shape how and when people actually start using it.

Packaging and thermal management are huge—systems need to stay stable, even as workloads shift. Engineers also have to figure out how to reliably combine analog drivers with digital SerDes on silicon photonics platforms, which isn’t exactly trivial.

If this approach really scales, it could change how we build HPC accelerators and AI engines. Imagine a centimeter-scale optical fabric, stretching across the package and maybe even further.

The idea of treating distributed memory and compute as one fabric? That could open up some wild new architectures. It’s hard not to get excited about the possibilities for better latency, bandwidth, and energy efficiency in future data-heavy systems.

This 28 nm CMOS electro-optical router on a photonic interposer shows off nanosecond responsiveness and ultra-low energy per bit. The dense optical links make it a strong candidate for on-package networking.

For folks in research and industry, it’s a pretty compelling direction. Tightly integrated photonic endpoints could finally bring optical performance right up close to where latency and energy efficiency really matter—in the heart of computation.

 
Here is the source article for this story: ISSCC: Electro-optical router in a chiplet package
