Co-packaged Optics Enable AI Data Center Scale-Up


The recent partnership between AIchip Technologies and Ayar Labs is shaking up AI infrastructure design. These two companies have just revealed a co-packaged optics (CPO) solution that’s meant to supercharge multi-rack AI clusters.

They’re integrating optical interconnects right into the AI accelerator packaging. This move aims squarely at one of AI’s most stubborn bottlenecks: how fast data can actually move around.

With this, they’ve built a high-bandwidth, low-latency interconnect that could change how AI systems connect at massive scale. It’s pretty wild to see how far this tech has come.

Solving the AI Data Movement Bottleneck

AI models just keep getting bigger and more complicated. As a result, the data shuffling between processors, accelerators, and memory has exploded.

Copper-based interconnects are really starting to show their age: signal loss, crosstalk, and power draw all climb steeply as data rates rise. Optical links, on the other hand, can deliver way more bandwidth while using less power per bit.

From Copper to Optical Links

AIchip Technologies and Ayar Labs are going straight at this problem by ditching old copper connections. They’re swapping them out for advanced optical links.

The new co-packaged optics system puts Ayar Labs’ TeraPHY optical engines together with AIchip’s own advanced packaging tech, all on the same substrate. That close integration lets data zip around with an efficiency that just wasn’t possible before.

Unprecedented Bandwidth and Scalability

By bringing optical I/O right up to the AI accelerator’s edge, the CPO design delivers more than 100 terabits per second (Tbps) of bandwidth per accelerator. That’s not just a big number: it’s what next-gen AI workloads need, especially when you’re talking about multi-rack data centers.

256 Optical Scale-Up Ports per Device

This architecture supports more than 256 optical scale-up ports per device. That’s a ton of interconnect density.

It means organizations can scale their AI clusters to sizes that used to sound impossible. Connecting tons of compute nodes with barely any latency? Now we’re talking.
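
Here’s a quick back-of-the-envelope sanity check on those two headline figures, sketched in Python. The 100 Tbps and 256-port numbers come from the announcement; dividing one by the other, and the full-mesh sizing, are our own illustration rather than published specs.

```python
# Figures quoted in this article (both stated as lower bounds).
AGGREGATE_TBPS = 100  # optical bandwidth per accelerator
PORTS = 256           # optical scale-up ports per device

# Implied per-port rate: ~391 Gbps (our division, not a vendor spec).
per_port_gbps = AGGREGATE_TBPS * 1_000 / PORTS
print(f"Implied per-port rate: ~{per_port_gbps:.0f} Gbps")

# Illustration only: with one port per peer, a single device could
# link directly to 256 neighbors -- a 257-node full mesh with no
# switch hop in between. Real deployments also use scale-up switches,
# so actual topologies and cluster sizes will differ.
print(f"Direct full-mesh size at one port per peer: {PORTS + 1} devices")
```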

Protocol-Agnostic Design for Maximum Flexibility

The TeraPHY optical engine stands out for being protocol-agnostic. It can link up with all sorts of chiplets and interconnect fabrics, which gives system designers a lot of freedom.

Compatibility with UCIe and Beyond

This solution follows the Universal Chiplet Interconnect Express (UCIe) standard, which lets designers place flexible protocol endpoints right at the package boundary.

You can put optical interfaces next to compute tiles, memory, and accelerators without compromising performance or signal quality.
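
To make that concrete, here’s a tiny hypothetical sketch (in Python, purely as a modeling aid) of how a system designer might describe such a package. Every name in it is ours for illustration, not an AIchip or Ayar Labs part name; the point is that a protocol-agnostic optical engine is just another endpoint on the substrate.

```python
from dataclasses import dataclass

# Hypothetical model of "protocol endpoints at the package boundary".
# UCIe standardizes the die-to-die link itself; the protocol each
# endpoint carries over that link is the designer's choice.

@dataclass
class ChipletTile:
    name: str      # illustrative identifier for the tile
    role: str      # "compute", "memory", or "optical_io"
    protocol: str  # protocol mapped over the UCIe link

# One package, mixing compute, memory, and optical I/O side by side
# on the same substrate.
package = [
    ChipletTile("accel-core-0", "compute",    "vendor-fabric"),
    ChipletTile("hbm-stack-0",  "memory",     "raw-mode"),
    ChipletTile("optical-io-0", "optical_io", "vendor-fabric"),
]

# The optical engine is just another UCIe endpoint, so it can carry
# whichever protocol the rest of the fabric speaks.
for tile in package:
    print(f"{tile.name}: {tile.role} tile, {tile.protocol} over UCIe")
```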

Advantages of Co-Packaged Optics Over Pluggable Solutions

Pluggable optics force data to cross long electrical traces on the board before it reaches an optical module at the faceplate; co-packaged optics cut those trace lengths way down by moving the optical engines onto the package itself. That means less power draw and lower signal latency.

For high-performance AI processing, those two things are pretty much non-negotiable.
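
A rough worked example shows why those trace lengths matter for power. The energy-per-bit figures below are ballpark assumptions for each class of link, not numbers from the AIchip–Ayar Labs announcement; only the 100 Tbps figure comes from this article.

```python
# Illustrative link-power comparison. The pJ/bit values are assumed
# ballpark figures for each class of optical link, NOT vendor specs
# from the announcement; swap in real numbers where you have them.

BANDWIDTH_TBPS = 100         # per-accelerator figure quoted above
PLUGGABLE_PJ_PER_BIT = 15.0  # assumption: long board traces + module
CPO_PJ_PER_BIT = 5.0         # assumption: short in-package traces

def link_power_watts(tbps: float, pj_per_bit: float) -> float:
    # Power = bit rate x energy per bit
    # (1 Tbps = 1e12 bit/s and 1 pJ = 1e-12 J, so the exponents cancel).
    return tbps * pj_per_bit

pluggable_w = link_power_watts(BANDWIDTH_TBPS, PLUGGABLE_PJ_PER_BIT)
cpo_w = link_power_watts(BANDWIDTH_TBPS, CPO_PJ_PER_BIT)
print(f"Pluggable: ~{pluggable_w:.0f} W, CPO: ~{cpo_w:.0f} W, "
      f"saving ~{pluggable_w - cpo_w:.0f} W per accelerator")
```

Under those assumptions, the saving works out to roughly a kilowatt per accelerator, and that’s before you count the latency cost of the longer electrical path.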

Thermal Efficiency and Design Integration

Bringing optical links into the package helps with thermal efficiency too. The optical engines are closer to the compute elements, so it’s easier to manage heat as a whole system.

Even when workloads get heavy, performance stays solid.

Deployment and Collaboration

AIchip Technologies and Ayar Labs are already working with select customers, putting this optical interconnect tech into next-generation AI accelerators and scale-up switches. They’re sharing reference designs and build options to help partners get up and running faster.

Impact on AI Cluster Architecture

If you’re designing sprawling AI infrastructure, this co-packaged optics approach could bring some real benefits:

  • Major boosts in interconnect bandwidth
  • Lower power usage compared to copper links
  • Lower latency for those high-speed AI jobs
  • Better scalability for big, multi-rack clusters
  • Improved thermal management and reliability

Looking Ahead

The introduction of co-packaged optics into AI accelerators is really pushing high-performance computing hardware to a new level. AI workloads just keep growing, and it feels like innovations such as the AIchip–Ayar Labs CPO system are what’ll help infrastructure keep up with all that demand.

This technology boosts performance and opens the door for more energy-efficient, future-ready AI architectures. Reference designs are already making the rounds with industry leaders, so it’s not a stretch to say we’ll probably see real deployments soon.

Optical interconnects aren’t just a cool theory anymore—they’re out there, changing the way big AI clusters get built and run. AI infrastructure is about to get a whole lot faster, leaner, and honestly, just tougher.

 
Here is the source article for this story: Co-packaged optics enables AI data center scale-up
