Ayar Labs and Wiwynn have teamed up to build optically connected, rack-scale AI systems for the next wave of hyperscale AI workloads. The project pairs Ayar Labs’ co-packaged optics (CPO) technology, built around TeraPHY optical engines powered by the SuperNova remote light source, with Wiwynn’s rack-level architecture and large-scale manufacturing expertise.
The goal is to break through copper interconnect bottlenecks with high-bandwidth optical connectivity. Each AI accelerator gets more than 100 Tbps of optical bandwidth, the design scales to 1,024 accelerators per rack, and the fabric can extend across multiple racks so that thousands of accelerators operate as a single system.
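To put those figures in perspective, here is a back-of-envelope sketch of the aggregate per-rack bandwidth implied by the announcement. The two constants come directly from the numbers quoted above; the conversion to Pbps is simple arithmetic, not a figure from the companies.

```python
# Back-of-envelope aggregate bandwidth for the quoted rack configuration.
# Inputs are the figures stated in the announcement: >100 Tbps of optical
# I/O per accelerator and up to 1,024 accelerators per rack.
PER_ACCELERATOR_TBPS = 100      # optical bandwidth per AI accelerator (Tbps)
ACCELERATORS_PER_RACK = 1024    # maximum accelerators in one rack

rack_tbps = PER_ACCELERATOR_TBPS * ACCELERATORS_PER_RACK
rack_pbps = rack_tbps / 1000    # 1 Pbps = 1,000 Tbps

print(f"Aggregate per-rack bandwidth: {rack_tbps:,} Tbps (~{rack_pbps:.1f} Pbps)")
```

At the quoted floor of 100 Tbps per accelerator, a fully populated rack would carry on the order of 100 Pbps of aggregate optical I/O, which illustrates why copper backplanes become the limiting factor at this density.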
Strategic collaboration to accelerate optical rack-scale AI
This partnership takes a full-stack approach to design, manufacturing, and deployment for hyperscale AI. By integrating advanced optical interconnects directly into the server and accelerator fabric, the companies aim to cut the latency and power losses that come with traditional copper networks.
The joint system is liquid-cooled and HVDC-enabled, built to handle the high power and density that modern AI accelerators demand. It’s also made to stay reliable in big data-center environments. More than just pushing bandwidth, they’re aiming for a scalable fabric that can tie thousands of accelerators into a single, unified AI compute platform.
Core technologies powering the solution
- CPO-enabled AI ASICs paired with TeraPHY optical engines, all powered by the SuperNova remote light source
- High-bandwidth optical interconnects that deliver per-accelerator bandwidth beyond what copper can sustain at rack scale
- Architectures that scale to 1,024 accelerators per rack and stretch across racks to create a unified fabric
- Liquid cooling and HVDC power delivery for high-power operation with better efficiency
- Support for external laser small form factor pluggable (ELSFP) light sources and advanced fiber management
- Designs that focus on manufacturability and moving from component-level wins to system-ready solutions for hyperscale operators
These technologies target the biggest bottlenecks in AI data paths, paving the way for faster and more energy-efficient model training and inference at scale. The focus on CPO, optical engines, and remote light sources signals a move away from traditional backplanes toward integrated optical-electrical systems designed to work at data-center scale.
System architecture, integration, and deployment considerations
Wiwynn brings deep experience in board design, system integration, and high-volume rack delivery, backed by a global manufacturing footprint and a track record of shipping servers to more than 750 data centers.
Ayar Labs adds its optical-layer expertise, especially the CPO approach that puts optics right next to compute substrates. Together, they’re not just chasing performance. They’re wrestling with real-world deployment issues: thermal management, fiber handling, and serviceability. Hyperscale operators need to trust the tech before they roll it out at scale.
The partners highlight key deployment factors: clean integration of CPO-enabled AI ASICs into existing data-center environments, robust thermal management for dense, optically and thermally coupled components, and power efficiency from HVDC-enabled, liquid-cooled designs. Fiber routing, connector reliability, and ease of maintenance matter just as much for operators running thousands of racks over the long haul.
Go-to-market strategy, demonstrations, and industry impact
The firms plan to showcase their joint AI CPO solution at the Optical Fiber Communication Conference (OFC) in Los Angeles, March 15–19, 2026, with private briefings for select customers, press, and analysts.
This early-access move gives key stakeholders a chance to validate performance, reliability, and deployment workflows before the broader market sees it. The announcement paints the collaboration as a push to speed up optical rack-scale infrastructure and cut interconnect bottlenecks.
They’re promising meaningful gains in performance and energy efficiency, especially for hyperscale AI deployments. That’s a bold claim, but one much of the industry is eager to see play out.
By combining advanced CPO technology with a scalable, manufacturable rack architecture, the Ayar Labs–Wiwynn partnership aims to create unified, low-latency fabrics. These fabrics could support some of the most demanding AI models now and in the future.
For data-center operators eyeing next-gen AI platforms, this collaboration points to a possible path toward higher throughput, lower latency, and better power efficiency across multi-rack AI fleets. The potential is real, though production deployments will be the true test.
With hyperscale AI workloads growing in size and complexity, optical rack-scale solutions like this could become a foundation for efficient, scalable data centers. The OFC demonstration will be the first public test of whether this partnership shifts the industry or remains just another showcase.
Here is the source article for this story: Ayar Labs, Wiwynn team up to develop co-packaged optics rack-scale AI infrastructure