Summary: Biological neural networks sculpted by evolution provide a scaffolding that facilitates quick and easy learning.
Artificial intelligence (AI) still has a lot to learn from animal brains, says Cold Spring Harbor Laboratory (CSHL) neuroscientist Anthony Zador. Now, he’s hoping that lessons from neuroscience can help the next generation of artificial intelligence overcome some particularly difficult barriers.
Anthony Zador, M.D., Ph.D., has spent his career working to describe, down to the individual neuron, the complex neural networks that make up a living brain. But he started his career studying artificial neural networks (ANNs). ANNs, which are the computing systems behind the recent AI revolution, are inspired by the branching networks of neurons in animal and human brains. However, this broad concept is usually where the inspiration ends.
In a perspective piece recently published in Nature Communications, Zador describes how improved learning algorithms are allowing AI systems to achieve superhuman performance on an increasing range of complex problems like chess and poker. Yet machines are still stumped by what we consider the simplest problems.
Solving this paradox may finally enable robots to learn how to do something as organic as stalking prey or building a nest, or even something as human and mundane as doing the dishes, a task that Google CEO Eric Schmidt once called “literally the number one request… but an extraordinarily difficult problem” for a robot.
“The things that we find hard, like abstract thought or chess playing, are actually not the hard thing for machines. The things that we find easy, like interacting with the physical world, that’s what’s hard,” Zador explained. “The reason that we think it’s easy is that we had half a billion years of evolution that has wired up our circuits so that we do it effortlessly.”
That’s why Zador writes that the secret to quick learning might not be a perfected general learning algorithm. Instead, he suggests that biological neural networks sculpted by evolution provide a kind of scaffolding that facilitates quick and easy learning of specific kinds of tasks, usually those crucial for survival.
For an example, Zador points to your backyard.
“You have squirrels that can jump from tree to tree within a few weeks after birth, but we don’t have mice learning the same thing. Why not?” Zador said. “It’s because one is genetically predetermined to become a tree-dwelling creature.”
Zador suggests that one result of this genetic predisposition is the innate circuitry that helps guide an animal’s early learning. However, these scaffolding networks are far less generalized than the perceived panacea of machine learning that most AI experts are pursuing. If ANNs identified and adapted similar sets of circuitry, Zador argues, the future’s household robots might just one day surprise us with clean dishes.
About this neuroscience research article
Source: CSHL
Media Contacts: Sara Roncero-Menendez – CSHL
Image Source: The image is credited to Patra Kongsirimongkolchai/Pond5.
A critique of pure learning and what artificial neural networks can learn from animal brains
Artificial neural networks (ANNs) have undergone a revolution, catalyzed by better supervised learning algorithms. However, in stark contrast to young animals (including humans), training such networks requires enormous numbers of labeled examples, leading to the belief that animals must rely instead mainly on unsupervised learning. Here we argue that most animal behavior is not the result of clever learning algorithms—supervised or unsupervised—but is encoded in the genome. Specifically, animals are born with highly structured brain connectivity, which enables them to learn very rapidly. Because the wiring diagram is far too complex to be specified explicitly in the genome, it must be compressed through a “genomic bottleneck”. The genomic bottleneck suggests a path toward ANNs capable of rapid learning.
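The abstract's core idea, that a wiring diagram too large to specify explicitly must be compressed through a compact genome, can be illustrated with a minimal sketch. This is not the paper's model: the network sizes, the random linear "developmental program," and all names here are illustrative assumptions, chosen only to show how far fewer genomic parameters than connections can still specify a full connectivity matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "wiring diagram": the connection strengths of one network layer.
n_pre, n_post = 50, 40           # presynaptic / postsynaptic neurons
n_weights = n_pre * n_post       # 2000 connections to specify

# Hypothetical "genome": far fewer parameters than connections.
genome_size = 60
genome = rng.normal(size=genome_size)

# A fixed "developmental program" that expands the genome into weights.
# In the paper's framing, evolution tunes the genome; development decodes it.
decoder = rng.normal(size=(n_weights, genome_size)) / np.sqrt(genome_size)

def develop(genome: np.ndarray) -> np.ndarray:
    """Decode a compact genome into a full connectivity matrix."""
    return (decoder @ genome).reshape(n_pre, n_post)

weights = develop(genome)
compression = n_weights / genome_size

print(f"connections: {n_weights}, genome params: {genome_size}, "
      f"compression: {compression:.1f}x")
```

Under these toy numbers, 60 genomic parameters specify 2,000 connection weights, a roughly 33x compression; the point of the bottleneck argument is that such compressed specifications could give ANNs the structured initial wiring that lets animals learn rapidly.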