Overcoming ‘Catastrophic Forgetting’: A Leap in AI Continuous Learning

Summary: Researchers are investigating a significant hurdle in machine learning known as “catastrophic forgetting,” a phenomenon where AI systems lose information from previous tasks while learning new ones.

The research shows that, like humans, AI remembers information better when faced with diverse tasks rather than those sharing similar features. Insights from the study could help improve continuous learning in AI systems, advancing their capabilities to mimic human learning processes and enhance performance.

Key Facts:

  1. “Catastrophic forgetting” is a challenge in AI systems, where they forget information from previous tasks while learning new ones.
  2. Artificial neural networks remember information better when presented with a variety of tasks, rather than tasks sharing similar attributes.
  3. The study’s insights could bridge the gap between machine learning and human learning, potentially leading to more sophisticated AI systems.

Source: Ohio State University

Memories can be as tricky to hold onto for machines as they can be for humans.

To help understand why artificial agents develop holes in their own cognitive processes, electrical engineers at The Ohio State University have analyzed how a process called “continual learning” affects the systems’ overall performance. 


Continual learning is when a computer is trained to continuously learn a sequence of tasks, using its accumulated knowledge from old tasks to better learn new tasks. 

Yet one major hurdle scientists still need to overcome to reach that goal is the machine learning equivalent of memory loss – a process known in AI agents as “catastrophic forgetting.”

As artificial neural networks are trained on one new task after another, they tend to lose the information gained from those previous tasks, an issue that could become problematic as society comes to rely on AI systems more and more, said Ness Shroff, an Ohio Eminent Scholar and professor of computer science and engineering at The Ohio State University.
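Catastrophic forgetting is easy to reproduce even in a toy model. The sketch below is a hypothetical illustration, not the team’s experimental setup: a single linear model is trained with plain gradient descent on two conflicting regression tasks in sequence, and its error on the first task climbs back up after it learns the second.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(w_true, n=200):
    """Synthetic regression task: targets generated by w_true."""
    X = rng.normal(size=(n, 5))
    return X, X @ w_true

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def train(w, X, y, lr=0.05, steps=300):
    """Plain full-batch gradient descent on squared error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_a = rng.normal(size=5)   # task A's "true" weights
w_b = -w_a                 # task B directly conflicts with task A
Xa, ya = make_task(w_a)
Xb, yb = make_task(w_b)

w = np.zeros(5)
w = train(w, Xa, ya)
loss_a_before = mse(w, Xa, ya)   # near zero: task A is learned
w = train(w, Xb, yb)
loss_a_after = mse(w, Xa, ya)    # large again: task A is "forgotten"

print(f"task A error before B: {loss_a_before:.2e}, after B: {loss_a_after:.2e}")
```

Because the two tasks pull the same weights in opposite directions, fitting task B overwrites what was learned for task A; that is the forgetting the researchers study, in miniature.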

“As automated driving applications or other robotic systems are taught new things, it’s important that they don’t forget the lessons they’ve already learned for our safety and theirs,” said Shroff. “Our research delves into the complexities of continuous learning in these artificial neural networks, and what we found are insights that begin to bridge the gap between how a machine learns and how a human learns.” 

Researchers found that, in the same way people might struggle to recall contrasting facts about similar scenarios yet remember inherently different situations with ease, artificial neural networks recall information better when faced with diverse tasks in succession rather than ones that share similar features, Shroff said. 

The team, including Ohio State postdoctoral researchers Sen Lin and Peizhong Ju and professors Yingbin Liang and Shroff, will present their research this month at the 40th annual International Conference on Machine Learning in Honolulu, Hawaii, a flagship conference in machine learning. 

While it can be challenging to teach autonomous systems to exhibit this kind of dynamic, lifelong learning, possessing such capabilities would allow scientists to scale up machine learning algorithms at a faster rate as well as easily adapt them to handle evolving environments and unexpected situations. Essentially, the goal for these systems would be for them to one day mimic the learning capabilities of humans.

Traditional machine learning algorithms are trained on data all at once, but this team’s findings showed that factors like task similarity, negative and positive correlations between tasks, and even the order in which an algorithm is taught a task affect how long an artificial network retains certain knowledge. 

For instance, to optimize an algorithm’s memory, said Shroff, dissimilar tasks should be taught early on in the continual learning process. This method expands the network’s capacity for new information and improves its ability to subsequently learn more similar tasks down the line. 

Their work is particularly important as understanding the similarities between machines and the human brain could pave the way for a deeper understanding of AI, said Shroff. 

“Our work heralds a new era of intelligent machines that can learn and adapt like their human counterparts,” he said. 

Funding: The study was supported by the National Science Foundation and the Army Research Office. 

About this artificial intelligence and learning research news

Author: Tatyana Woodall
Source: Ohio State University
Contact: Tatyana Woodall – Ohio State University
Image: The image is credited to Neuroscience News
