A recent experiment in quantum astronomy demonstrates a proof-of-principle way to connect distant optical telescopes using quantum memories and entanglement. The researchers aim to overcome a long-standing limitation of optical interferometry, the rapid loss of single photons in transit to a central detector, by performing interference without physically recombining the light paths.
This idea builds on a theoretical lineage dating back to Gottesman and colleagues’ 2012 proposal and Lukin’s 2019 refinement. Both envisioned a quantum repeater architecture to entangle detectors at separate sites and capture astronomical information through entanglement rather than direct path recombination.
The latest study uses solid-state qubits based on silicon-vacancy centers in diamond. These qubits extend coherence times via nuclear spins, so stored entanglement can wait for an incoming photon. It’s a pretty clever workaround for the timing problem that’s tripped up older designs.
Quantum-linked telescopes: a proof-of-principle
The centerpiece here is a demonstrator that stores entanglement and uses it to link detectors over a long baseline. Traditional optical interferometry struggles beyond baselines of a few hundred meters because too many photons are lost on the way to the central beam combiner.
This approach aims to extend the usable baseline by removing the need to physically merge light paths at a single detector. The experiment shows heralded entanglement generated from a central laser source and tested across 1.5 kilometers of fiber linking adjacent laboratories. It nudges the concept from theory toward a real lab demonstration of quantum-assisted interferometry.
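Why does baseline length matter so much? An interferometer's angular resolution scales roughly as the observing wavelength divided by the baseline, so longer baselines mean sharper images. The sketch below is a minimal illustration of that standard scaling; the 550 nm wavelength and the baseline values are assumed for illustration, not taken from the experiment.

```python
import math

def angular_resolution_mas(wavelength_m, baseline_m):
    """Diffraction-limited angular resolution theta ~ lambda / B,
    converted from radians to milliarcseconds."""
    RAD_TO_MAS = 180 / math.pi * 3600 * 1000  # radians -> degrees -> arcsec -> mas
    return wavelength_m / baseline_m * RAD_TO_MAS

wavelength = 550e-9  # visible light, ~550 nm (illustrative choice)
for baseline in (300.0, 1500.0, 10_000.0):
    theta = angular_resolution_mas(wavelength, baseline)
    print(f"B = {baseline:>7.0f} m -> {theta:.4f} mas")
```

Going from a few-hundred-meter baseline to kilometers improves resolution by the same factor, which is the payoff the entanglement-based scheme is chasing.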
Background: A lineage from Gottesman to quantum memories
In 2012, Daniel Gottesman and colleagues proposed using a central source of entangled photons as a quantum repeater to entangle telescopes at distant sites. Their idea was to let an incoming astronomical photon influence the measurement without needing an actual path recombination at the detector.
But practical implementation hit a wall: distributing entanglement at rates matching the telescope's spectral bandwidth was beyond reach. In 2019, Mikhail Lukin and collaborators proposed a crucial tweak: add quantum memories so the entangled state could be stored and released when the photon arrives. That bridged the timing gap that previously limited feasibility.
The present experiment takes this evolution further by using quantum memories—solid-state qubits with long coherence times—to hold entanglement until the astronomical signal is ready. It’s a nice example of theory finally meeting hardware, at least in the lab.
How the experiment works
Researchers used silicon-vacancy (SiV) centers in diamond as the primary qubits. The key innovation is mapping the electron spin's interaction with photons onto long-lived nuclear spins.
This dramatically extends coherence times and preserves quantum information long enough to synchronize distant detectors. A central laser source generates heralded entanglement, which acts as a “repeater” link between two sites.
By storing this entanglement in memory, the system prepares a ready state that can be correlated with incoming photons from an astronomical source. This lets them extract interference information without physically steering light through every path.
- Heralded entanglement generated by a central light source
- Detectors at separate sites linked via quantum memories
- Measurement of photon events above vacuum fluctuations over 1.5 km of fiber using synthetic light sources
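The timing argument behind the memory step can be made concrete with a toy calculation. Every number below (attempt rate, heralding success probability, coherence times) is an illustrative assumption, not a figure from the experiment; the point is only that stored entanglement must survive, on average, until the next usable photon arrives, which is why mapping to long-lived nuclear spins matters.

```python
import math

# Illustrative parameters -- NOT values from the experiment.
attempt_rate_hz = 50_000.0  # entanglement attempts per second (assumed)
success_prob = 1e-4         # heralding success per attempt (assumed)
t2_electron_s = 1e-4        # electron-spin coherence time (assumed)
t2_nuclear_s = 1.0          # nuclear-spin coherence time (assumed)

# Mean wait between heralded entanglement events.
mean_wait_s = 1.0 / (attempt_rate_hz * success_prob)

# Fraction of stored entanglement surviving that wait,
# modeling decoherence as simple exponential decay exp(-t / T2).
survival_electron = math.exp(-mean_wait_s / t2_electron_s)
survival_nuclear = math.exp(-mean_wait_s / t2_nuclear_s)

print(f"mean wait: {mean_wait_s * 1000:.0f} ms")
print(f"electron-spin survival: {survival_electron:.2e}")
print(f"nuclear-spin survival:  {survival_nuclear:.2f}")
```

With these assumed numbers the electron spin alone would decohere long before the wait is over, while a nuclear-spin memory retains most of the stored entanglement, which is the qualitative advantage the experiment exploits.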
Current limitations and the road ahead
Experts say this milestone is still a proof-of-principle, not something you can use for astronomy tomorrow. Major challenges include boosting entanglement generation rates and addressing the narrow bandwidth of vacancy centers, which limits compatibility with broadband astronomical signals.
Even though the 1.5 km fiber demonstration is a solid step, the rate and bandwidth requirements for a working telescope network are nowhere near met. Parallel efforts, such as similar results reported by Jian-Wei Pan's group in China, are promising but have not yet been peer reviewed.
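To see why the rate gap looms so large, here is a back-of-the-envelope comparison. Both numbers are illustrative assumptions chosen only to convey scale: an optical signal with roughly GHz-scale spectral bandwidth would ideally be matched by a comparable entanglement distribution rate, while current heralded-entanglement experiments typically operate many orders of magnitude slower.

```python
# Back-of-the-envelope rate comparison -- both numbers are
# illustrative assumptions, not measurements from the experiment.
required_rate_hz = 1e9       # ~GHz, to match an optical spectral bandwidth (assumed)
demonstrated_rate_hz = 10.0  # heralded entanglement events per second (assumed)

gap = required_rate_hz / demonstrated_rate_hz
print(f"rate gap: ~{gap:.0e}x")
```

Closing a gap of this size is the kind of "big advance in entanglement distribution efficiency" the researchers point to.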
The work adds a meaningful repeater capability to quantum-assisted interferometry. But turning this into a scalable, field-ready system will need big advances in photon–qubit interfaces, memory lifetimes, and entanglement distribution efficiency. Still, it’s hard not to feel a bit optimistic watching theory inch closer to reality.
What this means for astronomy and the field
The reported results make a pretty strong case that quantum memories and entanglement might unlock real long-baseline interferometry, pushing past today’s physical limits. If future research can crack the rate and bandwidth bottlenecks, telescope networks could get higher angular resolution—without having to physically send every photon to one central detector.
This shift could transform how we capture high-resolution images of distant objects, from black hole shadows to exoplanet atmospheres. It opens up more flexible and scalable options for interferometric setups, which is honestly pretty exciting for anyone following the field.
- Potential impact: Sharper astronomical images and more baseline choices for future observatories.
- Key hurdles: Boosting entanglement generation rates, improving memory bandwidth, and making sure vacancy-center systems stay robust and work across broad bandwidths.
- Next milestones: Showing this works at higher rates, integrating with actual astronomical photon streams, and getting peer-reviewed validation from other teams.
Here is the source article for this story: Quantum memories could help make long-baseline optical astronomy a reality