Optical illusions have always done well on the internet – who hasn’t felt their brain do a somersault watching the spinning dancer?
To understand why our brain is tricked, scientists at the University of Cambridge’s Adaptive Brain Lab built a neural network designed to see – and to perceive motion – the same way a human does.
To build the network, called MotionNet, the team of researchers, led by neuroscientist Reuben Rideaux, used data collected since the 1950s in studies of human motion perception. The next step was to train the network to estimate the speed and direction of image sequences, as our eyes do.
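The study’s actual stimuli and architecture are described in the paper, but the basic idea of training on labelled moving-image sequences can be sketched with a toy generator. Everything below (function name, parameters, the 1-D “moving dot” format) is illustrative, not taken from the study:

```python
import numpy as np

def make_sequence(n_frames=6, width=32, speed=1, seed=None):
    """Toy 1-D moving-dot stimulus plus its ground-truth velocity label.

    A network like MotionNet would be trained on many such sequences to
    regress speed and direction; this is a hypothetical stand-in for the
    real stimuli used in the study.
    """
    rng = np.random.default_rng(seed)
    start = int(rng.integers(0, width))
    frames = np.zeros((n_frames, width))
    for t in range(n_frames):
        # The dot drifts by `speed` pixels per frame (sign = direction).
        frames[t, (start + t * speed) % width] = 1.0
    return frames, speed

frames, label = make_sequence(speed=2, seed=0)
# `label` (+2: rightward at 2 px/frame) is the target the network learns to predict
```

The sign of the label encodes direction and its magnitude encodes speed, which is exactly the pair of quantities the text says the network was trained to estimate.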
Outside the brain
“It is very difficult to directly measure what is going on inside our brain when we perceive motion – even our best medical technology cannot show us the entire system at work. With MotionNet, we can explore characteristics of human visual processing that cannot be measured directly in the brain,” said Rideaux, the study’s lead author, in a statement.
Trained to mimic how humans process images (especially those that deceive our perception), the artificial intelligence reveals how spatial and temporal information is combined in our brain, producing what we see – or think we see – when we watch moving images.
The experiment involved a phenomenon called phi: the apparent motion created when two nearby visual stimuli are presented alternately at a relatively high frequency.
“For example, if there is a black dot on the left of a screen that disappears while another black dot appears on the right, we will ‘see’ the dot moving from left to right – this is called phi movement. But if the dot that appears on the right is white on a dark background, we ‘see’ the dot moving from right to left, in what is known as reverse phi movement,” the statement explains.
This setup was presented to MotionNet. Because it was trained to see as a human brain does, the AI saw and interpreted dots “walking” when, in fact, nothing was moving – a classic perceptual error.
The difference is that, with MotionNet, the researchers could trace the process the network followed to (mis)interpret what it was seeing. According to Rideaux, reverse phi activated neurons tuned to the direction opposite the actual motion.
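Why inverting the contrast of the second dot flips the signalled direction can be illustrated with a toy opponent correlation detector. This is a crude Reichardt-style sketch of the sign flip, not MotionNet’s actual architecture:

```python
import numpy as np

def opponent_motion(frame1, frame2):
    """Crude opponent motion detector for 1-D frames.

    Correlates frame2 against frame1 shifted one pixel right vs. left;
    positive output signals rightward motion, negative signals leftward.
    """
    evidence_right = np.sum(frame1 * np.roll(frame2, -1))  # frame2 matches frame1 moved right
    evidence_left = np.sum(frame1 * np.roll(frame2, 1))    # frame2 matches frame1 moved left
    return evidence_right - evidence_left

f1 = np.zeros(16)
f1[5] = 1.0             # first frame: dot on the left
f2 = np.roll(f1, 1)     # second frame: dot one pixel to the right

print(opponent_motion(f1, f2))   # positive: phi, signalled as rightward
print(opponent_motion(f1, -f2))  # negative: reverse phi, signalled as leftward
```

Negating the second frame (a white dot instead of a black one) flips the sign of the correlation, so detectors tuned to the opposite direction respond – the same pattern Rideaux reports in the network’s neurons.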
The researchers also found that the apparent speed of reverse phi motion depends on the spacing between the dots, contrary to what they predicted: dots that are farther apart appear to move more slowly, and the closer together they are, the faster they seem to move.
“We have known about reverse phi movement for a long time, but the new model generated a completely new prediction about how we experience it, in a way that no one has ever tested before,” said Rideaux.
Visual perception of motion is acquired by the brain as we grow up, whether we are judging whether there is a gap in traffic to cross the street safely or catching a moving ball. As adults, we do this automatically: in a fraction of a second, the brain processes cues such as changing light – but that doesn’t always work.
The findings of the study, now published in the Journal of Vision, still need to be validated in research with humans but, according to the study’s co-author, neuroscientist Andrew Welchman, “we hope to fill many gaps in the current understanding of how this part of our brain works, and knowing which part of the brain to focus on will save time.”