What can self-driving cars learn from the brain?

5 Aug 2024

Self-driving cars perform most driving tasks autonomously, with little or no human involvement.

These vehicles need to be able to monitor their environment and adjust their behaviour accordingly, in much the same way people do when they move their eyes.

For many of us, this level of vehicular autonomy is a little uncomfortable.

To date, the technology has not inspired enough confidence for people to willingly let a vehicle take complete control. A primary challenge in developing autonomous vehicles is that tracking every object and action around the vehicle, with no room for error, is extremely difficult.

While engineers have struggled to deliver this kind of fully autonomous, precise and rapid response to the objects that may appear around a moving vehicle, the human brain does it effortlessly in real time, using quite simple calculations many times per second.

Researchers at UQ’s Queensland Brain Institute, Professor Jason Mattingley and Dr Will Harrison, set out to understand how our brains are able to keep track of visual information across the thousands of eye movements we make every day.

“We wanted to know how the brain maintains visual stability across planned and unplanned eye movements and shifts of attention,” Professor Mattingley said.

“We wanted to test the hypothesis that to maintain visual stability, the brain needs to predict the sensory consequences of the next visual shift.

“To do this we developed a mathematical model, inspired by recent neurophysiological findings, that estimates how visual attention is allocated to the real world positions of objects.”
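To make that idea concrete, here is a minimal Python sketch of allocating attention in world-centred rather than eye-centred coordinates. It is an illustrative toy, not the researchers’ published model: the flat 2-D geometry, the function name and the variable names are all assumptions made for this example.

import numpy as np

def attended_world_position(gaze_position, retinal_position):
    # Map an attended location from eye-centred (retinal) coordinates
    # into world-centred coordinates. In this toy 2-D geometry, the world
    # position of a target is simply the current gaze position plus the
    # target's offset on the retina.
    return np.asarray(gaze_position) + np.asarray(retinal_position)

# Example: gaze directed at (10, 5) degrees, target 3 degrees to the right of fixation
print(attended_world_position(gaze_position=(10.0, 5.0), retinal_position=(3.0, 0.0)))
# -> [13.  5.]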

Although we rely heavily on sensory information to inform our view of the world, sensory systems themselves, such as the eyes, encode only a limited amount of information.

The brain actively constructs a high-definition representation of the world based on the limited information it receives from the retina, the light-sensitive surface at the back of our eyes.

Each time our eyes shift to sample a new part of the visual scene, the image of what we see jumps across the retina. But we don’t perceive this abrupt jump from one image to the next when we scan a room. Rather, the brain coordinates the shift in attention and the movement of the eyes to maintain a stable reference frame.

Based on the size of the planned eye movement and the accompanying shift in visual attention, the brain calculates how to update this sensory representation so that our percept of the surroundings remains clear and stable.
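To see how simple such a calculation can be, here is a minimal Python sketch of this kind of remapping. Again, it is an illustrative toy rather than the published model; in this assumed 2-D geometry, the predicted post-saccadic position of a target on the retina is just its current retinal position minus the planned saccade vector.

import numpy as np

def predicted_retinal_position(current_retinal_position, saccade_vector):
    # Toy predictive remapping: the object's world position does not change,
    # so moving the eye by `saccade_vector` shifts the object's retinal
    # position by the same amount in the opposite direction.
    return np.asarray(current_retinal_position) - np.asarray(saccade_vector)

# Example: a target 3 degrees right of fixation, just before a 5-degree rightward saccade
print(predicted_retinal_position((3.0, 0.0), (5.0, 0.0)))
# -> [-2.  0.]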

“The model was incredibly effective because it demonstrated that the brain coordinates visual attention, perception, and action using much simpler processes than previously thought,” said Dr Harrison, who is now at the University of the Sunshine Coast.

“Rather than predict shifts in visual attention, the brain uses lower-order visual processing systems.

“It takes the easiest, quickest route to maintain visual stability.

“Picture your brain as an expert pilot, manoeuvring eye movements and attention, dynamically updating its plans with real-time information to maintain clear and stable vision at all times.”

Computational models of how the brain dynamically and precisely coordinates visual attention could help advance the perception systems that self-driving cars rely on in the race for fully autonomous vehicles.

This research was published in Proceedings of the National Academy of Sciences.
