How the brain inspires AI

Scientists know how powerful and efficient our brains are, and if artificial intelligence is to approach or even match human intelligence, it makes sense to take inspiration from nature.

Most current AI learns using artificial neural networks, which emulate many structural aspects of how neurons are organised in the brain.

Neuroscientists still don’t know exactly how our brains process all of the information we take in, or how they decide what’s important to learn. But studies, some dating back to the 1950s, show that information from our senses travels up and down through different layers of processing in the brain.

For example, when looking at a cat, your eye detects the image through the retina. Information about that image is then transferred to the thalamus, a part of the brain important for relaying sensory information and regulating sleep and consciousness. The signal then travels sequentially through multiple areas of the neocortex.

At each level, different features of the cat image are processed. This all happens in a third of a second, as we recognise the shape in the image as a cat.

Artificial deep networks operate in a similar way. The ‘deep’ part of the name refers to the fact they can have many layers, which is one reason we need powerful computers to make them work.
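As a rough sketch (purely illustrative, with made-up layer sizes and random connection strengths rather than any real system), a deep network is just a stack of layers, each passing its output on to the next, a little like the successive brain areas described above:

```python
import numpy as np

# Illustrative sketch only: a "deep" network is a stack of layers,
# each feeding its output to the next, like successive brain areas.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)  # a simple on/off-style response for each unit

# Invented layer sizes, e.g. 784 input values (image pixels) down to 10 output scores.
layer_sizes = [784, 128, 64, 10]
weights = [rng.normal(0, 0.1, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    # Information flows through the layers in sequence.
    for w in weights:
        x = relu(x @ w)
    return x

output = forward(rng.normal(size=784))  # a fake "image" of 784 pixel values
print(output.shape)                     # (10,): one score per category
```

Here the connection strengths are random, so the output is meaningless; real networks learn those strengths from data, which is what the next sections describe.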

Artificial neural networks 

Another similarity between AI and the brain involves neurons, the cells of the brain. The equivalent components in deep neural networks are called ‘units’. Like neurons, these units are connected to each other, providing a way for information to move between layers.

And, as with neurons, the strength of the connections between units in a deep network can change. In the brain, the more a group of connected neurons is used, the stronger that pathway becomes; the less it is used, the weaker it becomes. These changes in strength occur because of a process called plasticity, the ability of the brain to adapt or respond to repeated stimulation, which underlies learning.

Deep networks also learn by adjusting the strength of the connections between units. After the network processes an input image (e.g. a picture of a cat), its output is checked and, if it made a mistake (e.g. it failed to detect the cat), the connections are adjusted so that it will be better at recognising the cat next time.

Over time, if the network is trained on a sufficient number of images, it learns to find cats even in pictures it has never seen before.
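To make that concrete, here is a toy sketch (purely illustrative, using invented data and a single ‘unit’ rather than a full deep network) of learning by adjusting connection strengths whenever the output is wrong:

```python
import numpy as np

# Toy sketch: a single unit learns to answer "cat or not?" from made-up
# two-number "images". Not any real system; just the adjust-on-error idea.
rng = np.random.default_rng(1)

# Fake data: 200 examples with 2 features each; label is 1 ("cat") if their sum > 0.
X = rng.normal(size=(200, 2))
y = (X.sum(axis=1) > 0).astype(float)

w = np.zeros(2)   # connection strengths start neutral
b = 0.0
lr = 0.5          # how much to adjust after each pass over the data

for epoch in range(100):
    pred = 1 / (1 + np.exp(-(X @ w + b)))  # the network's current guesses
    error = pred - y                       # where it got things wrong
    # Strengthen or weaken each connection in proportion to its share of the error.
    w -= lr * (X.T @ error) / len(X)
    b -= lr * error.mean()

pred = 1 / (1 + np.exp(-(X @ w + b)))
accuracy = ((pred > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")  # close to 1.0 on this toy problem
```

Each pass nudges the connection strengths so the next guess is a little better; scaled up to millions of connections and images, the same principle lets a deep network find cats in pictures it has never seen.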

Reward-based learning

In 2015, researchers at DeepMind (now part of Google’s parent company, Alphabet) created a deep neural network that learned to play Atari video games. The program wasn’t told the basic rules or the purpose of a game, such as shooting down spaceships. Instead, it was designed only to increase its ‘reward’, in the form of a higher game score. Through trial and error, the AI learned to play those games. This is a different type of learning from detecting images of cats.

In biology, reinforcement (or reward-based) learning is a fundamental part of how the brain works. It’s similar to the way you’d train a dog with treats to perform a desired action.
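A bare-bones sketch of this idea (not DeepMind’s Atari system; just an invented two-choice ‘slot machine’ with hidden payout rates) might look like this:

```python
import numpy as np

# Illustrative reward-based learning: the agent tries actions, collects rewards,
# and gradually learns which action tends to pay off more.
rng = np.random.default_rng(2)

true_payouts = [0.3, 0.8]  # hidden chance of a reward for each of two actions
q = np.zeros(2)            # the agent's learned estimate of each action's value
lr, explore = 0.1, 0.1

for step in range(2000):
    # Mostly pick the action that currently looks best, but explore occasionally.
    if rng.random() < explore:
        action = int(rng.integers(2))
    else:
        action = int(np.argmax(q))
    reward = float(rng.random() < true_payouts[action])  # 1 point or nothing
    # Nudge the estimate for the chosen action towards the reward it produced.
    q[action] += lr * (reward - q[action])

print(q)  # the second action should end up valued more highly
```

The agent is never told which choice is better; it simply tries things, keeps score, and comes to prefer the more rewarding action, much as the Atari-playing network learned from its game score.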

“The human brain is the only existing proof that the sort of general intelligence we’re trying to build is even possible,” Demis Hassabis, CEO of DeepMind, said in an interview with The Verge, “so we think it’s worth putting the effort in to try and understand how it achieves these capabilities.”

Efficient brains

An adult brain weighs about 1.3 kg, and while that’s only about 2% of total body weight, it uses about 20% of the body’s resting energy to power its roughly 100 billion neurons. Compared with computers, though, our brains are incredibly efficient, even with all the sleep they require. The brain needs only about 15 watts of power to perform its basic functions; compare that with IBM’s Watson supercomputer, which needed 90,000 watts to defeat Jeopardy! champions in 2011.

Is AI smarter than a human brain?

The answer to that question is...sometimes.

At some tasks—such as playing Atari video games, chess or Go, or identifying cats in YouTube videos—today’s deep networks can closely match or even outperform humans. Some types of image processing and pattern detection are now performed better by AI, such as knowing whether chest X-rays show signs of pneumonia. But, if you took the incredible AlphaGo network and asked it to identify cat videos on YouTube, you’d see it fail. Even the Atari-playing AI could only perform well at one task at a time: once it was trained on a new game, it forgot everything it knew about the previous one.

In other words, AI right now is extremely task-specific. 

The quest for artificial general intelligence, capable of reproducing the full range of human cognitive abilities, is considerably more difficult. Although scientific and tech figures such as the late Professor Stephen Hawking and Elon Musk have talked up the dangers of super-intelligent AI, people working in the field, like DeepMind CEO Demis Hassabis, are far more restrained. The AI experts will tell you we are a long way from any such thing.
