Optical illusions have always fascinated scientists. They shine a light on the hidden assumptions baked into our perception.
This article looks at how modern artificial intelligence, especially deep neural networks, has become a fresh tool for studying these illusions. By examining how both humans and machines are fooled by the same visual tricks, researchers are finding new clues about how the brain predicts, interprets, and sometimes just plain misreads what we see.
Optical Illusions as Windows into Visual Processing
For a long time, people called optical illusions simple “mistakes” made by our eyes and brains. These days, vision scientists see them as more than that: they’re evidence of the clever shortcuts the brain uses to handle a flood of sensory data.
Illusions show off the rules and assumptions that usually help us make sense of the world quickly. These shortcuts aren’t really flaws. They’re adaptive, helping us grab the important stuff from messy, incomplete, and constantly shifting environments.
When an illusion trips us up, it points to the internal models our brains lean on—and shows where those models can go off track.
Why Illusions Matter to Neuroscience
Illusions are valuable because they reveal the mechanisms that stay hidden during normal perception. By looking at when and how our perception fails, researchers can test different ideas about how vision works.
Deep Neural Networks as Ethical Testbeds
One huge challenge in studying human perception is ethics. You can’t just tinker with people’s brains or their lifelong sensory input.
Deep neural networks (DNNs) are a different story. Scientists can poke, prod, and mess with these models in ways that would never fly with human participants.
Some DNNs trained on natural images even fall for the same illusions as people do. That’s pretty wild, and it hints that similar computational principles might drive both artificial and biological vision.
PredNet and Predictive Coding
Eiji Watanabe’s group worked with a model called PredNet, which draws from predictive coding theory. They trained PredNet on video from head-mounted cameras, mimicking what a moving person might see.
When they showed PredNet motion illusions like the classic rotating snakes, it was fooled in the same situations that trip up humans, which suggests that perception leans heavily on prediction.
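Strip away the video frames and recurrent layers, and the heart of predictive coding is a simple loop: hold an internal estimate, send a prediction down, and nudge the estimate until the prediction error shrinks. The NumPy sketch below is a toy illustration of that loop, not PredNet itself; the dimensions, weights, and step size are invented for demonstration.

```python
import numpy as np

# Toy predictive-coding loop: an internal estimate r is refined so that the
# top-down prediction W @ r matches the incoming signal x. Illustrative only.
rng = np.random.default_rng(0)

n_input, n_latent = 16, 4
W = rng.normal(scale=0.3, size=(n_input, n_latent))  # generative (prediction) weights
x = rng.normal(size=n_input)                          # one "sensory" input vector

r = np.zeros(n_latent)   # the network's current belief about the input's causes
lr = 0.1                 # inference step size

for step in range(50):
    prediction = W @ r        # top-down prediction of the input
    error = x - prediction    # bottom-up prediction error
    r += lr * (W.T @ error)   # adjust the belief to reduce the error

print("remaining prediction error:", np.linalg.norm(x - W @ r))
```

Roughly speaking, a full model like PredNet stacks many such error-minimizing layers and runs them over successive video frames rather than a single static vector.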
Where AI Vision Still Differs from Human Vision
Still, there are big differences. PredNet treats every part of the scene as moving at once, whereas people experience the illusory motion more strongly in their peripheral vision than where they are looking directly.
This points to a key gap: today's DNNs don't have human-like attention, and no current model can reproduce the full range of visual illusions we know about.
Complementary, Not Identical, Models
AI models don’t replace human studies. They’re more like helpful sidekicks, letting scientists isolate specific computational ideas, while human perception is also shaped by attention, experience, and having a body that moves around.
Beyond Classical Neural Networks
Some researchers are trying out new architectures. Ivan Maksymov, for example, built a quantum-inspired neural network that switches between the two readings of bistable figures such as the Necker cube or the Rubin vase.
The timing of these switches actually lines up with human experience. That doesn’t mean the brain runs on quantum mechanics, but maybe quantum-style models can capture some of the brain’s decision-like quirks.
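Maksymov's network itself is quantum-inspired, but the behavior it reproduces, spontaneous flips between two stable interpretations, can also be sketched with a conventional rivalry-style model: two units inhibit each other while slowly fatiguing, so dominance alternates on its own. The Python toy below is that classical sketch, offered purely for comparison; every parameter is invented for demonstration and nothing here reflects the quantum-inspired model's actual equations.

```python
import numpy as np

# Classical sketch of bistable perception: two units (the two readings of a
# Necker cube) suppress each other while slowly adapting, so the dominant
# interpretation flips spontaneously. Illustrative only; not Maksymov's model.
rng = np.random.default_rng(1)

dt, T = 0.01, 60.0                      # time step and total duration (seconds)
steps = int(T / dt)

r = np.array([0.6, 0.1])                # activity of the two interpretations
a = np.zeros(2)                         # slow adaptation ("fatigue") variables
drive, inhibition, adapt_gain = 1.0, 2.0, 3.0
tau_r, tau_a, noise = 0.05, 2.0, 0.05

dominant = np.empty(steps, dtype=int)
for t in range(steps):
    inp = drive - inhibition * r[::-1] - adapt_gain * a   # competition + fatigue
    r += dt / tau_r * (-r + np.clip(inp, 0.0, None)) \
         + np.sqrt(dt) * noise * rng.normal(size=2)
    a += dt / tau_a * (-a + r)                            # adaptation tracks activity
    dominant[t] = int(r[1] > r[0])

switches = np.count_nonzero(np.diff(dominant))
print(f"perceptual switches in {T:.0f} s: {switches}")
```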
Experience, Environment, and Perception
Context shapes human perception too. Astronaut studies show that visual biases shift in microgravity, which changes how illusions play out. So, perception isn’t set in stone—it’s learned and flexible.
AI systems can mimic these extreme environments, offering a safe and flexible way to test how different conditions mess with visual interpretation.
Looking Ahead
Researchers are blending optical illusions, artificial intelligence, and neuroscience to dig deeper into how we build up visual reality. Theories like predictive coding are getting a fresh look—and honestly, it’s fascinating to watch them evolve.
AI doesn’t see the world like we do, not yet anyway. Still, it’s turning into a surprisingly valuable partner as we try to figure out why our brains interpret what we see the way they do.
Here is the source article for this story: AI can now ‘see’ optical illusions. What does it tell us about our own brains?