Summary: Using a deep learning algorithm dubbed ‘DeepSqueak’, researchers have been able to pick up and decode rodent vocalizations.
Source: UW Medicine.
Many researchers know that mice and rats are social and chatty. They spend all day communicating with each other, but what are they really saying? Not only are many rodent vocalizations inaudible to humans, but existing computer programs for detecting them are also flawed: they pick up other noises, are slow to analyze data, and rely on inflexible, rules-based algorithms to detect calls.
Two young scientists at the University of Washington School of Medicine developed a software program called DeepSqueak, which lifts this technological barrier and promotes broad adoption of rodent vocalization research.
This program takes an audio signal and transforms it into an image, or sonogram. By reframing an audio problem as a visual one, the researchers could take advantage of state-of-the-art machine vision algorithms developed for self-driving cars. DeepSqueak represents the first use of deep artificial neural networks in squeak detection.
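The audio-to-image reframing described above can be sketched in a few lines. This is an illustrative example, not DeepSqueak's actual code: it generates a synthetic ultrasonic-range tone standing in for a mouse call (real recordings would be loaded from file) and converts it into a 2-D time-frequency image that a machine-vision detector could consume.

```python
import numpy as np
from scipy import signal

fs = 250_000  # sample rate (Hz), high enough to capture ultrasonic calls
t = np.arange(0, 0.1, 1 / fs)
# Synthetic 60 kHz tone standing in for a mouse ultrasonic vocalization
audio = np.sin(2 * np.pi * 60_000 * t)

# Short-time Fourier transform yields a 2-D time-frequency array (a sonogram)
freqs, times, spec = signal.spectrogram(audio, fs=fs, nperseg=512)

# `spec` can now be treated as an image and passed to a vision model
# (the paper reports using a Faster-RCNN detector for this step)
peak_freq = freqs[np.argmax(spec.mean(axis=1))]
print(peak_freq)  # close to the 60 kHz tone frequency
```

Because the vocalization now lives in an image, object-detection networks built for photographs can localize calls as bounding boxes in time and frequency.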
The program is highlighted in a recent paper published in Neuropsychopharmacology and was presented at Neuroscience 2018.
“DeepSqueak uses biomimetic algorithms that learn to isolate vocalizations by being given labeled examples of vocalizations and noise,” said co-author Russell Marx. Marx is a technician in the Neumaier lab, which investigates complex behaviors relating to stress and addiction, and created the program with Kevin Coffey, whose specialty is studying the psychological aspects of drugs.
So what have the researchers found out so far?
“The animals have a rich repertoire of calls, around 20 kinds,” said Coffey, a postdoctoral fellow in the Neumaier lab.
“With drugs of abuse, you see both positive and negative calls,” Coffey said, explaining the complicated nature of addiction.
Coffey said the rodents seem happiest when they are anticipating a reward, such as sugar, or are playing with their peers. Interestingly, when two male mice are together, he said, they make the same calls over and over.
However, when they sense a female mouse nearby, their vocalizations are more complex, as if they are singing a courtship song. This effect is even more dramatic when the male mouse can smell but not see the female mouse. This observation suggests that male mice have distinct songs for different stages of courtship.
John Neumaier, professor of psychiatry and behavioral sciences at the UW School of Medicine, head of the Division of Psychiatric Neurosciences and associate director of the Alcohol and Drug Abuse Institute, says his goal is to develop treatments for withdrawal from alcohol or opioids. He said DeepSqueak is going to help his lab get there much faster, and he credits his two young researchers for doing something no one has been able to do yet: making ultrasonic vocalization analysis convenient, affordable and widely available.
“If scientists can understand better how drugs change brain activity to cause pleasure or unpleasant feelings, we could devise better treatments for addiction,” he said.
Source: Bobbi Nodell – UW Medicine
Publisher: Organized by NeuroscienceNews.com.
Image Source: NeuroscienceNews.com image is credited to Alice Gray.
Original Research: Abstract for “DeepSqueak: a deep learning-based system for detection and analysis of ultrasonic vocalizations” by Kevin R. Coffey, Russell G. Marx & John F. Neumaier in Neuropsychopharmacology. Published January 4, 2019.
DeepSqueak: a deep learning-based system for detection and analysis of ultrasonic vocalizations
Rodents engage in social communication through a rich repertoire of ultrasonic vocalizations (USVs). Recording and analysis of USVs has broad utility during diverse behavioral tests and can be performed noninvasively in almost any rodent behavioral model to provide rich insights into the emotional state and motor function of the test animal. Despite strong evidence that USVs serve an array of communicative functions, technical and financial limitations have been barriers for most laboratories to adopt vocalization analysis. Recently, deep learning has revolutionized the field of machine hearing and vision, by allowing computers to perform human-like activities including seeing, listening, and speaking. Such systems are constructed from biomimetic, “deep”, artificial neural networks. Here, we present DeepSqueak, a USV detection and analysis software suite that can perform human quality USV detection and classification automatically, rapidly, and reliably using cutting-edge regional convolutional neural network architecture (Faster-RCNN). DeepSqueak was engineered to allow non-experts easy entry into USV detection and analysis yet is flexible and adaptable with a graphical user interface and offers access to numerous input and analysis features. Compared to other modern programs and manual analysis, DeepSqueak was able to reduce false positives, increase detection recall, dramatically reduce analysis time, optimize automatic syllable classification, and perform automatic syntax analysis on arbitrarily large numbers of syllables, all while maintaining manual selection review and supervised classification. DeepSqueak allows USV recording and analysis to be added easily to existing rodent behavioral procedures, hopefully revealing a wide range of innate responses to provide another dimension of insights into behavior when combined with conventional outcome measures.
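The abstract's mention of automatic syntax analysis over large numbers of syllables can be illustrated with a toy sketch. This is not DeepSqueak's implementation; the syllable labels below are hypothetical, and the example simply counts bigram transitions between classified syllable types, the most basic form of syntax analysis.

```python
from collections import Counter

# Hypothetical sequence of classified syllable types from one recording;
# the labels are made up for illustration
syllables = ["trill", "flat", "trill", "step-up", "trill", "flat"]

# Count each syllable-to-syllable transition (bigram)
transitions = Counter(zip(syllables, syllables[1:]))

print(transitions[("trill", "flat")])  # → 2
```

Transition counts like these can be normalized into a transition-probability matrix, letting researchers compare call "grammar" across conditions such as drug exposure or social context.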