POET Technologies and Lumilens are teaming up to bring an Electrical-Optical Interposer (EOI) platform to market. They’re focused on wafer-level photonic integration for AI and hyperscale data centers.
This deal covers development, supply, and manufacturing plans. The goal? Speed up high-volume optical engines for next-gen GPU interconnects and near-package/co-packaged optics.
There’s already a $50 million purchase order on the table. Over five years, that number could climb past $500 million if things go as planned.
The roadmap highlights 800G and 1.6T transceivers. They’re also eyeing broader scenarios for deploying AI at scale.
What the EOI Platform Aims to Solve
The EOI approach leans into active-alignment-free, wafer-scale manufacturing. The idea is to cut costs, improve yield, and enable more capital-efficient production of optical engines.
By tightly linking silicon, photonics, and packaging, this tech aims straight at the optical bottleneck that’s holding back GPU interconnect bandwidth in massive data centers. The partners want to bring a semiconductor-style discipline to optical engine manufacturing, which feels overdue in this space.
They’re aiming for a platform that can handle denser optical links while keeping capital spending per bit in check. That’s a big deal for data centers where performance and cost really matter for staying competitive and sustainable.
Financial Structure and Milestones
- Initial and potential total commercial value: Lumilens kicked things off with a $50 million purchase order for EOI-based optical engines. If all goes well, total purchases could top $500 million over five years.
- Warrant arrangement: POET gave Lumilens a warrant to buy up to 22,921,408 common shares at $8.25 per share. 2,292,140 shares are exercisable now, and the rest vest as Lumilens hits purchase milestones. The warrant runs for nine years.
- Milestones and scale-up: Share exercise is tied to cumulative purchases, so the equity potential grows as the program and manufacturing capacity expand.
- Timeline: They're aiming for engineering samples in late 2026. If development and qualification go smoothly, production ramps up in 2027.
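The warrant terms above invite some back-of-the-envelope arithmetic. Here's a minimal sketch of the math, under one loud assumption: the companies haven't disclosed the actual milestone schedule, so the linear vesting between the $50 million initial order and the $500 million five-year ceiling is purely hypothetical, not from the source.

```python
# Rough model of the warrant terms described above.
# ASSUMPTION (not from the source): unvested shares vest pro rata
# with cumulative purchases between $50M and $500M.

TOTAL_WARRANT_SHARES = 22_921_408
VESTED_NOW = 2_292_140          # ~10% exercisable immediately
STRIKE_PRICE = 8.25             # USD per share
INITIAL_PO = 50_000_000         # initial purchase order, USD
PURCHASE_CEILING = 500_000_000  # potential five-year total, USD

def vested_shares(cumulative_purchases: float) -> int:
    """Shares exercisable at a given cumulative purchase level,
    under the hypothetical linear-vesting assumption."""
    if cumulative_purchases <= INITIAL_PO:
        return VESTED_NOW
    frac = min(1.0, (cumulative_purchases - INITIAL_PO)
               / (PURCHASE_CEILING - INITIAL_PO))
    unvested = TOTAL_WARRANT_SHARES - VESTED_NOW
    return VESTED_NOW + int(unvested * frac)

# Cash POET would receive if the full warrant were exercised:
full_exercise_proceeds = TOTAL_WARRANT_SHARES * STRIKE_PRICE
print(f"Full-exercise proceeds: ${full_exercise_proceeds:,.0f}")
print(f"Vested at $275M cumulative purchases: {vested_shares(275e6):,}")
```

Whatever the real schedule looks like, the fixed numbers hold: a fully exercised warrant would bring POET roughly $189 million at the $8.25 strike, which is the dilution trade-off flagged in the risks section below.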
Roadmap and Deployment Timeline
The agreement lays out a clear path from engineering to full-scale production, with milestones pegged to qualification and capacity. Plans include 800G and 1.6T pluggable transceivers, with moves toward Near-Package Optics (NPO) and Co-Packaged Optics (CPO) for future AI deployments.
If everything lines up—technical progress, manufacturing, the works—this could really shake up how GPU interconnects are done. Higher bandwidth, lower cost per bit, and faster time-to-market for the big cloud players? It sounds ambitious, but that’s what they’re shooting for.
Engineering samples should show up in late 2026. After that, qualification and scale-up will decide how well the partners can meet the wild pace of hyperscale data centers and the relentless growth of GPU fleets.
Industry Implications and Risks
The partners are chasing what they call a semiconductor-style discipline for optical engines. With the EOI program, they’re aiming for a real shift in how photonics-enabled data-center components get designed, built, and integrated.
If it works, the platform might cut costs and boost yield at scale. That’s a big deal, especially since GPU interconnects for AI workloads have hit some stubborn bottlenecks.
But let’s not ignore the risks that come with the investment and equity-linked terms. The warrant could mean future dilution for POET shareholders.
Commercial value depends on whether the modules qualify and if they can actually ramp up manufacturing to meet demand. Both companies admit the timeline and financial results are just projections.
There are plenty of wild cards—technology can stall, supply chains get messy, and customers might not jump on board as quickly as hoped.
If you’re watching the photonics-in-the-cloud scene, this collaboration feels like a sign of the times. It’s all about merging silicon, photonics, and packaging at wafer scale to break open high-bandwidth AI infrastructure.
Will the EOI platform make the leap from engineering samples to a solid, scalable production line? The next 18–24 months should tell us if it can really support hyperscale GPU fleets and the bigger AI economy.
Here is the source article for this story: Up to $500M light-speed push to ease AI’s data traffic jam