Neural network learning techniques stem from the dynamics of the brain. Credit: Neuroscience News

Outsmarting AI: Brain’s Wide, Shallow Learning Redefines Efficiency

Summary: Recent research contrasts the learning mechanisms of the human brain with those of deep learning in AI. Despite having fewer layers and slower, noisier dynamics, the brain can perform complex classification tasks as effectively as AI with hundreds of layers.

This study suggests that the brain’s efficiency lies in its wide, shallow architecture, akin to a broad building with few floors. The findings challenge current AI models and indicate a need for a shift in advanced GPU technology to better mimic the brain’s structure and learning methods.

Key Facts:

  1. The human brain’s shallow architecture efficiently performs complex tasks, in contrast to the deep, multi-layered structures of AI.
  2. This study introduces the concept of a wide shallow network, similar to the brain’s structure, as a potential model for AI development.
  3. Current GPU technology, designed for deep learning architectures, needs adaptation to implement wide shallow networks effectively.

Source: Bar-Ilan University

Neural network learning techniques stem from the dynamics of the brain. However, these two scenarios, brain learning and deep learning, are intrinsically different. One of the most prominent differences is the number of layers each one possesses.

Deep learning architectures typically consist of numerous layers, often extended to hundreds, enabling efficient learning of complex classification tasks. By contrast, the brain consists of only a few layers, yet despite this shallow architecture and its slow, noisy dynamics, it can perform complex classification tasks efficiently.
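To make the contrast concrete, here is a minimal, hypothetical sketch (in PyTorch, not taken from the study itself) of a deep, narrow classifier next to a wide, shallow one; the layer counts and widths are illustrative assumptions only.

```python
import torch.nn as nn

# Hypothetical illustration only: a deep, narrow stack of layers
# versus a shallow but very wide one, both mapping 3072 inputs
# (e.g., a flattened 32x32 RGB image) to 100 class scores.

def deep_narrow(depth=20, width=128, n_in=3072, n_out=100):
    # "Skyscraper": many hidden layers, each relatively narrow.
    layers = [nn.Linear(n_in, width), nn.ReLU()]
    for _ in range(depth - 2):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, n_out))
    return nn.Sequential(*layers)

def wide_shallow(width=8192, n_in=3072, n_out=100):
    # "Very wide building with few floors": one very wide hidden layer.
    return nn.Sequential(
        nn.Linear(n_in, width), nn.ReLU(),
        nn.Linear(width, n_out),
    )
```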

The key question driving new research is the possible mechanism underlying the brain’s efficient shallow learning — one that enables it to perform classification tasks with the same accuracy as deep learning.

In an article published today in Physica A, researchers from Bar-Ilan University in Israel show how such shallow learning mechanisms can compete with deep learning.

“Instead of a deep architecture, like a skyscraper, the brain consists of a wide shallow architecture, more like a very wide building with only very few floors,” said Prof. Ido Kanter, of Bar-Ilan’s Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, who led the research.

“The capability to correctly classify objects increases as the architecture becomes deeper, with more layers. In contrast, the brain’s shallow mechanism indicates that a wider network better classifies objects,” said Ronit Gross, an undergraduate student and one of the key contributors to this work.

“Wider and higher architectures represent two complementary mechanisms,” she added.

Nevertheless, realizing very wide shallow architectures that imitate the brain’s dynamics requires a shift in the properties of advanced GPU technology, which can accelerate deep architectures but fails in the implementation of wide shallow ones.

About this AI and neuroscience research news

Author: Elana Oberlander
Source: Bar-Ilan University
Contact: Elana Oberlander – Bar-Ilan University
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Efficient shallow learning mechanism as an alternative to deep learning” by Ido Kanter et al. Physica A: Statistical Mechanics and its Applications


Abstract

Efficient shallow learning mechanism as an alternative to deep learning

Deep learning architectures comprising tens or even hundreds of convolutional and fully-connected hidden layers differ greatly from the shallow architecture of the brain.

Here, we demonstrate that by increasing the relative number of filters per layer of a generalized shallow architecture, the error rates decay as a power law to zero. Additionally, a quantitative method to measure the performance of a single filter shows that each filter identifies small clusters of possible output labels, with additional noise selected as labels outside the clusters.
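As an illustration of what a “generalized shallow architecture” with a scalable number of filters per layer might look like, here is a hypothetical LeNet-style CIFAR-100 model with a width multiplier k; the exact layer sizes are assumptions for illustration, not the authors’ configuration.

```python
import torch.nn as nn

class WideLeNet3(nn.Module):
    """Hypothetical LeNet-style CIFAR-100 classifier whose per-layer
    filter counts all scale with a width multiplier k."""
    def __init__(self, k=1, num_classes=100):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 6 * k, kernel_size=5), nn.ReLU(),       # 32x32 -> 28x28
            nn.MaxPool2d(2),                                      # 28x28 -> 14x14
            nn.Conv2d(6 * k, 16 * k, kernel_size=5), nn.ReLU(),   # 14x14 -> 10x10
            nn.MaxPool2d(2),                                      # 10x10 -> 5x5
        )
        self.classifier = nn.Linear(16 * k * 5 * 5, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))
```

Increasing k multiplies the filters in every layer while the depth stays fixed, which is the sense in which such a network is wide but shallow.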

This average noise per filter also decays for a given generalized architecture as a power law with an increasing number of filters per layer, forming the underlying mechanism of efficient shallow learning.
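A hedged numerical sketch of how such a power-law decay could be estimated: the filter counts and error values below are synthetic, generated to follow an assumed exponent, and serve only to show the log-log fitting step, not results from the paper.

```python
import numpy as np

# Synthetic illustration: suppose error ~ A * filters^(-b).
filters = np.array([16, 32, 64, 128, 256, 512])
errors = 0.9 * filters ** -0.35          # assumed exponent b = 0.35

# Estimate the exponent from a straight-line fit in log-log space.
slope, intercept = np.polyfit(np.log(filters), np.log(errors), 1)
print(f"estimated power-law exponent: {-slope:.2f}")   # ~0.35
```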

The results are supported by the training of the generalized LeNet-3, VGG-5, and VGG-16 on CIFAR-100 and suggest an increase in the noise power law exponent for deeper architectures. The presented underlying shallow learning mechanism calls for its further quantitative examination using various databases and shallow architectures.
