High-Clockrate Free-Space Optical Neural Network from Berkeley, USC, TU Berlin


Researchers from UC Berkeley, USC, and TU Berlin have introduced a new way to approach neural-network computation. They call it FAST-ONN, which stands for fanout spatial time-of-flight optical neural network.

This system is built for high-clock-rate in-memory computing using free-space optics. The researchers combined dense VCSEL input modulators with high-pixel-count spatial light modulators.

By doing this, they can store and manipulate weights directly in the optical domain. It allows signed weights, single-shot inference, and parallel processing in a 3D setup.

They’ve shown gigahertz-scale clock rates and striking throughput numbers. On feature-extraction benchmarks such as YOLO, for example, they report speeds up to 100 million frames per second.

That could signal a pathway toward energy-efficient, real-time AI at the edge. There’s also a lot of excitement around photonic reprogrammability, which might allow optical-domain backward-propagation training.

This could mean both inference and on-device learning get much faster for things like autonomous vehicles and remote robotics. The study, published in Light: Science & Applications in 2026, suggests that combining established VCSEL technology with spatial light modulation could scale free-space optical computing without losing parallelism or precision.

What FAST-ONN Is and Why It Matters

FAST-ONN pushes computation into the optics themselves. It uses a fanout arrangement and spatial time-of-flight to perform neural operations in memory.

The system moves weight storage and computation into free-space optics. That lets it hit very high clock rates while keeping parallelism and accuracy strong.

It’s really aimed at edge devices, where power, latency, and form factor matter most. There’s potential here for real-time deep neural network (DNN) tasks without relying only on digital accelerators.

FAST-ONN also supports signed weights and a three-dimensional optical architecture for parallel, differential readout. This setup allows for accurate single-shot inference and can scale as device counts go up.

Photonic reprogrammability is another highlight. It could enable backward-propagation training right inside the optical domain, so you don’t have to keep shuttling data back to digital processors for updates.
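To see why signed weights need special handling in optics, note that light intensity is nonnegative. One standard workaround, which the differential readout described above suggests, is to split each signed weight matrix into two nonnegative planes and subtract the two detector readings. Here’s a minimal NumPy sketch of that idea; it’s an illustration of the principle, not the authors’ implementation:

```python
import numpy as np

def split_signed(W):
    """Split a signed weight matrix into two nonnegative planes.
    Optical power can't be negative, so W = W_plus - W_minus."""
    return np.maximum(W, 0.0), np.maximum(-W, 0.0)

def differential_readout(x, W):
    """Simulate a signed matrix-vector product with two intensity channels.
    x: nonnegative input intensities (e.g., VCSEL output powers).
    Returns the differential photodetector output, equal to W @ x."""
    W_plus, W_minus = split_signed(W)
    i_plus = W_plus @ x    # light collected on the "+" detector bank
    i_minus = W_minus @ x  # light collected on the "-" detector bank
    return i_plus - i_minus

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))          # signed weights, as set on a modulator
x = rng.uniform(0.0, 1.0, size=8)    # nonnegative optical inputs

assert np.allclose(differential_readout(x, W), W @ x)
```

Both intensity channels propagate in parallel, so the subtraction costs no extra clock cycles, which is what makes single-shot signed inference plausible.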

Core Technology Components

  • Dense VCSEL transmitter arrays for high-speed input modulation. These let the system access the optical channel quickly.
  • High-pixel-count spatial light modulators to set weights in free space, making large weight matrices possible without classic memory slowdowns.
  • Three-dimensional optical architecture that enables fanout pathways and parallel differential readout for robust inference.
  • Ability to store and process signed weights right in the optical domain, which gives more flexibility in representations.
  • In-system photonic reprogrammability for on-device learning and optical-domain backpropagation.
  • Extreme channel parallelism and device-count scalability to reach billions of convolutions per second.
  • Low-latency operation with ultralow power use compared to digital systems at similar throughput.
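To make the convolution claim in the list above concrete: a convolutional layer can be lowered onto a parallel matrix-vector engine by unrolling image patches into columns (the classic im2col trick). This is a common lowering, not necessarily the exact mapping the authors use, but it shows how one optical multiply per frame can cover a whole feature map:

```python
import numpy as np

def im2col(image, k):
    """Lower a 2D image into patch columns so a k x k convolution
    becomes a single matrix-vector-style product (one shot per frame)."""
    h, w = image.shape
    cols = [image[i:i + k, j:j + k].ravel()
            for i in range(h - k + 1) for j in range(w - k + 1)]
    return np.stack(cols, axis=1)  # shape (k*k, num_positions)

rng = np.random.default_rng(1)
image = rng.uniform(size=(6, 6))
kernel = rng.normal(size=(3, 3))   # signed weights, set once on the SLM

patches = im2col(image, 3)               # electronic pre-processing
feature_map = kernel.ravel() @ patches   # the (simulated) optical multiply
feature_map = feature_map.reshape(4, 4)

# Check against a direct sliding-window convolution (valid mode):
direct = np.array([[np.sum(image[i:i + 3, j:j + 3] * kernel)
                    for j in range(4)] for i in range(4)])
assert np.allclose(feature_map, direct)
```

Because the weights sit in the modulator rather than in electronic memory, the same kernel can be reapplied every clock cycle with no weight-fetch traffic, which is the in-memory-computing angle.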

Performance, Benchmarks, and Real-World Implications

FAST-ONN hit about 100 million frames per second on convolutional feature extraction benchmarks using a YOLO-like task. That’s a real eye-opener for high-throughput perception workloads.

The photonic backbone enables fast, energy-efficient processing and keeps precision across parallel channels. That’s especially important for edge deployments and latency-sensitive jobs.

The system can scale to billions of convolutions per second by combining parallel channels with a large device count. It also supports in-system training through photonic reprogrammability, so backward-propagation can happen directly in the optical domain.
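As a back-of-the-envelope check on how clock rate and parallelism multiply out, here’s a small calculation. Every specific count below is an illustrative assumption, not a figure from the paper; the point is only that gigahertz clocks times thousands of parallel channels lands in this throughput regime:

```python
# Illustrative scaling arithmetic: throughput = clock rate x parallel channels.
# All counts below are assumptions for the sketch, not figures from the paper.
clock_rate_hz = 1e9        # gigahertz-scale modulation clock
parallel_channels = 1000   # assumed number of simultaneous optical channels

convolutions_per_second = clock_rate_hz * parallel_channels
print(f"{convolutions_per_second:.0e} convolutions per second")  # 1e+12

frame_ops = 10_000         # assumed convolutions needed per frame
frames_per_second = convolutions_per_second / frame_ops
print(f"{frames_per_second:.0e} frames per second")  # 1e+08
```

Under these assumed numbers the arithmetic lands at 10^12 convolutions and 10^8 frames per second, in the same ballpark as the figures reported in the article.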

This could help cut down on training slowdowns from data movement and electronic conversion. All in all, the hardware looks well-suited for real-time tasks in autonomous vehicles, remote robotics, and other edge AI settings where power and latency really matter.

Outlook: Impact on Research and Industry

Published in Light: Science & Applications (2026), FAST-ONN sketches out a practical way to scale up free-space optical computing for high-throughput, low-power neural-network inference and training.

The authors claim that by pairing mature VCSEL technology with spatial light modulation, we can push clock rates into the gigahertz range, and maybe even beyond, without losing the parallelism or precision that modern DNNs need.

If this tech keeps moving forward and finds its way into current edge platforms, it could totally change the energy and latency landscape for real-time AI at the edge.

We might see more capable autonomous systems, better remote operation, and fewer of those annoying compute bottlenecks that slow everything down.

Here is the source article for this story: IMC: Free-Space Optical Neural Network With High Clockrate (Berkeley, USC, TU Berlin)
