Deep Learning Algorithm Can Hear Alcohol in Voice

Summary: New AI technology can instantly determine whether a person is above the legal alcohol limit by analyzing a 12-second clip of their voice.

Source: La Trobe University

La Trobe University researchers have developed an artificial intelligence (AI) algorithm that could work alongside expensive and potentially biased breath testing devices in pubs and clubs.

The technology can instantly determine whether a person has exceeded the legal alcohol limit using only a 12-second recording of their voice.

In a paper published in the journal Alcohol, a study led by Ph.D. student Abraham Albert Bonela, supervised by Professor Emmanuel Kuntsche of the Center for Alcohol Policy Research and Associate Professor Zhen He of the Department of Computer Science and Information Technology at La Trobe University, describes the development of the Audio-based Deep Learning Algorithm to Identify Alcohol Inebriation (ADLAIA), which can determine an individual’s intoxication status based on a 12-second recording of their speech.

According to Albert Bonela, acute alcohol intoxication impairs cognitive and psychomotor abilities, leading to various public health hazards such as road traffic accidents and alcohol-related violence.

“Intoxicated individuals are usually identified by measuring their blood alcohol concentration (BAC) using breathalyzers that are expensive and labor-intensive,” Albert Bonela said.


“A test that could simply rely on someone speaking into a microphone would be a game changer.”

The algorithm was developed and tested using a dataset of 12,360 audio clips of inebriated and sober speakers. According to the researchers, ADLAIA was able to identify inebriated speakers, those with a BAC of 0.05% or higher, with an accuracy of almost 70%. The algorithm performed better, at almost 76%, when identifying intoxicated speakers with a BAC higher than 0.12%.
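
The article does not describe ADLAIA's internal architecture, but a common way to build this kind of speech classifier is to convert each 12-second clip into a mel-spectrogram and pass it through a small convolutional network. The sketch below (PyTorch; the architecture, sample rate, and file path are illustrative assumptions, not the authors' model) shows the general shape of such a pipeline.

```python
# Minimal sketch of a spectrogram-based "inebriated vs. sober" classifier.
# This is NOT the authors' ADLAIA model; the architecture, sample rate,
# and file path are illustrative assumptions only.
import torch
import torch.nn as nn
import torchaudio

SAMPLE_RATE = 16_000   # assumed sample rate
CLIP_SECONDS = 12      # the study uses 12-second recordings

mel = torchaudio.transforms.MelSpectrogram(sample_rate=SAMPLE_RATE, n_mels=64)

class SpeechClassifier(nn.Module):
    """Tiny CNN over mel-spectrograms; outputs a probability of inebriation."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, 1)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        spec = mel(waveform).unsqueeze(1)     # (batch, 1, n_mels, time)
        feats = self.conv(spec).flatten(1)    # (batch, 32)
        return torch.sigmoid(self.fc(feats))  # probability of inebriation

# Hypothetical usage on a single clip ("clip.wav" is a placeholder path):
waveform, sr = torchaudio.load("clip.wav")
waveform = torchaudio.functional.resample(waveform, sr, SAMPLE_RATE)
waveform = waveform.mean(dim=0, keepdim=True)[:, : SAMPLE_RATE * CLIP_SECONDS]
model = SpeechClassifier()  # untrained here; real use requires training on labelled clips
print(f"P(inebriated) = {model(waveform).item():.2f}")
```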

The researchers suggest that one potential future application of ADLAIA would be integration into mobile applications, allowing it to be used in environments such as bars and sports stadiums to obtain instantaneous results about an individual's inebriation status.

“Being able to identify intoxicated individuals solely based on their speech would be a much cheaper alternative to current systems where breath-based alcohol testing in these places is expensive and often unreliable,” Albert Bonela said.

“Upon further improvement in its overall performance, ADLAIA could be integrated into mobile applications and used as a preliminary tool for identifying alcohol-inebriated individuals.”

About this AI research news

Author: Press Office
Source: La Trobe University
Contact: Press Office – La Trobe University
Image: The image is in the public domain

Original Research: Closed access.
“Audio-based Deep Learning Algorithm to Identify Alcohol Inebriation” by Abraham Albert Bonela et al. Alcohol


Abstract

Audio-based Deep Learning Algorithm to Identify Alcohol Inebriation

Background

Acute alcohol intoxication impairs cognitive and psychomotor abilities leading to various public health hazards such as road traffic accidents and alcohol-related violence. Intoxicated individuals are usually identified by measuring their blood alcohol concentration (BAC) using breathalysers that are expensive and labour-intensive. In this paper, we developed the Audio-based Deep Learning Algorithm to Identify Alcohol Inebriation (ADLAIA) that can instantly predict an individual’s intoxication status based on a 12-second recording of their speech.

Methods

ADLAIA was trained on a publicly available German Alcohol Language Corpus that comprises a total of 12,360 audio clips of inebriated and sober speakers (total of 162, aged 21-64, 47.7% female). ADLAIA’s performance was determined by computing the unweighted average recall (UAR) and accuracy of inebriation prediction.
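
For readers unfamiliar with the metric, the unweighted average recall (UAR) is the mean of the per-class recalls (recall on sober clips and recall on inebriated clips), which, unlike plain accuracy, is not inflated by class imbalance. A minimal sketch of how both figures could be computed from predictions, using scikit-learn and made-up labels (1 = inebriated, 0 = sober):

```python
# Sketch of the two evaluation metrics reported in the paper: accuracy and
# unweighted average recall (UAR). The labels below are made up for illustration.
from sklearn.metrics import accuracy_score, recall_score

y_true = [1, 1, 0, 0, 1, 0, 0, 1]   # 1 = inebriated (BAC >= 0.05%), 0 = sober
y_pred = [1, 0, 0, 0, 1, 0, 1, 1]   # hypothetical model outputs

accuracy = accuracy_score(y_true, y_pred)
# "macro" recall averages recall over the two classes with equal weight,
# which is exactly the unweighted average recall (UAR).
uar = recall_score(y_true, y_pred, average="macro")

print(f"accuracy = {accuracy:.2%}, UAR = {uar:.2%}")
```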

Results

ADLAIA was able to identify inebriated speakers—with a BAC of 0.05% or higher—with a UAR of 68.09% and accuracy of 67.67%. ADLAIA had a higher performance (UAR of 75.7%) in identifying intoxicated speakers (BAC > 0.12%).

Conclusion

Being able to identify intoxicated individuals solely based on their speech, ADLAIA could be integrated into mobile applications and used in environments (such as bars and sports stadiums) to get instantaneous results about the inebriation status of individuals.
