Alchip and Ayar Labs have joined forces to shake up AI datacenter performance with a bold co-packaged optics (CPO) solution. They’re aiming to change how massive AI clusters get built and connected, sidestepping the usual headaches of copper interconnects.
By integrating optical engines with advanced ASICs, this partnership claims it can deliver bandwidth, scalability, and energy efficiency that just hasn’t been possible for next-gen AI workloads.
Breaking Through the Limits of Copper Interconnects
Copper interconnects have been the workhorse of datacenter communication for ages. But as AI systems balloon in size, copper is running into real problems—distance, bandwidth, power draw, and cooling just aren’t keeping up.
These limits can slow down multi-rack AI setups and make energy efficiency a serious challenge.
Why CPO Makes a Difference
Co-packaged optics move optical connections right up close to compute cores. That shortens electrical trace lengths, which cuts down on latency and power use.
With optics this close, you can avoid the usual downsides of pluggable optics. Suddenly, scaling up doesn’t have to mean sacrificing speed or efficiency.
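To see why shorter electrical paths matter for power, interconnect efficiency is commonly quoted in picojoules per bit. A minimal back-of-envelope sketch, using assumed round pJ/bit figures for illustration (they are not numbers from the announcement) together with the partnership's stated 100 Tbps per accelerator:

```python
# Illustrative interconnect power comparison at a fixed bandwidth.
# The pJ/bit values are assumed round numbers for illustration only,
# not vendor specifications.

BANDWIDTH_TBPS = 100  # headline per-accelerator bandwidth from the announcement

PJ_PER_BIT = {
    "long-reach copper SerDes": 15.0,  # assumed illustrative value
    "co-packaged optics": 5.0,         # assumed illustrative value
}

bits_per_second = BANDWIDTH_TBPS * 1e12
for link, pj in PJ_PER_BIT.items():
    watts = bits_per_second * pj * 1e-12
    print(f"{link}: ~{watts:.0f} W at {BANDWIDTH_TBPS} Tbps")
```

Even with these rough placeholder numbers, a few pJ/bit of savings translates into hundreds of watts per accelerator at 100 Tbps, which is why moving the optics next to the compute die matters at this scale.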
The Partnership: Alchip ASICs Meet Ayar Labs Optical Engines
This collaboration centers on plugging Ayar Labs’ TeraPHY optical engines directly into Alchip’s advanced ASIC solutions. The result? Over 100 terabits per second of bandwidth per accelerator, a substantial jump over today’s copper and pluggable-optic links.
Massive Scale for AI Clusters
With support for more than 256 optical scale-up ports per device, you can build AI clusters that behave like a single logical system. That means lower latency between nodes and smoother processing across racks.
For hyperscale AI, this kind of unified setup is a big deal.
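Combining the two headline figures above, 100+ Tbps per accelerator spread over 256+ optical ports, gives a rough lower bound on per-port bandwidth. A quick sketch of that arithmetic (both inputs are stated minimums, so the result is illustrative, not a published per-port spec):

```python
# Back-of-envelope per-port bandwidth from the announcement's two
# headline figures. Both are stated as minimums, so this is a rough
# lower-bound estimate, not a vendor specification.

TOTAL_BANDWIDTH_TBPS = 100  # ">100 Tbps per accelerator"
SCALE_UP_PORTS = 256        # ">256 optical scale-up ports per device"

per_port_gbps = TOTAL_BANDWIDTH_TBPS * 1000 / SCALE_UP_PORTS
print(f"~{per_port_gbps:.0f} Gbps per optical port (lower bound)")
# → ~391 Gbps per optical port (lower bound)
```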
Protocol-Agnostic and Highly Flexible
One of the cooler things about TeraPHY is that it’s protocol agnostic. You can integrate it with custom chiplets, fabrics, or UCIe-compatible die-to-die interconnects, so it’s ready for a bunch of architectures and needs.
Integration with Advanced Components
This flexibility lets you pair it up with:
- Compute tiles for heavy-duty processing
- Memory stacks to speed up data access
- Specialized accelerators built for AI
And you still get signal integrity and thermal efficiency—which are make-or-break for datacenter reliability.
Industry Leaders See a Turning Point
“Copper is hitting a wall,” said Erez Shaizaf, CTO of Alchip. The I/O needs of tomorrow’s AI just can’t be met with the old ways.
These aren’t just technical gripes; they’re real obstacles to scaling AI any further.
Paving the Way for Scalable, Energy-Efficient AI
Vladimir Stojanovic, CTO of Ayar Labs, called this partnership a major milestone for both hyperscalers and enterprises. By pairing optical interconnects with custom ASICs, they’re hoping to set the stage for AI clusters that connect easily, use less energy, and deliver extremely high bandwidth.
Implications for the Future of AI Datacenters
AI workloads keep getting bigger and more complex, so efficient scaling is more crucial than ever. CPO solutions like the one from Alchip and Ayar Labs could make a real difference by enabling:
- Multi-rack AI systems without the usual performance hits
- Lower costs thanks to reduced power use
- More flexibility across computing setups
- Faster rollout of new AI applications
A New Era of Connectivity
The move from copper to optical interconnects isn’t just about swapping out materials. It’s changing the way we approach datacenter design altogether.
When we embed optical capacity right into compute hardware, we open up new levels of scalability. That kind of efficiency just wasn’t possible before.
Here is the source article for this story: Co-packaged optics unveiled for AI datacentre scale-up