Researchers have created a digital audio platform that can modify the emotional tone of people’s voices while they are talking, to make them sound happier, sadder or more fearful. New results show that while listening to their altered voices, participants’ emotional states changed in accordance with the new emotion.
“Very little is known about the mechanisms behind the production of vocal emotion”, says lead author Jean-Julien Aucouturier from the French National Centre for Scientific Research (CNRS), France.
“Previous research has suggested that people try to manage and control their emotions, for example hold back an expression or reappraise feelings. We wanted to investigate what kind of awareness people have of their own emotional expressions.”
In an initial study using a novel digital audio platform, published in Proceedings of the National Academy of Sciences (PNAS), participants read a short story aloud while hearing their own altered voice, sounding happier, sadder or more fearful, through a headset.
The study found that the participants were unaware that their voices were being manipulated, while their emotional state changed in accordance with the manipulated emotion portrayed. This indicates that people do not always control their own voice to meet a specific goal, and that they listen to their own voice to learn how they are feeling.
“The relationship between the expression and experience of emotions has been a long-standing topic of disagreement in the field of psychology”, says Petter Johansson, one of the authors from Lund University, Sweden. “This is the first evidence of direct feedback effects on emotional experience in the auditory domain.”
The emotional manipulations were created by digital audio processing algorithms that simulate acoustic components of emotional vocalisations. For example, the happy manipulation modifies the pitch of a speaker’s voice using pitch shifting and inflection to make it sound more positive, modifies its dynamic range using compression to make it sound more confident, and modifies its spectral content using high-pass filtering to make it sound more excited.
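The article does not include the platform’s actual code, but the three “happy” components it describes can be approximated in a minimal sketch. Everything below is an illustrative assumption, not the authors’ implementation: the pitch shift is a naive resampling approach (a real-time system would use a phase vocoder to preserve duration), and the threshold, ratio, and cutoff values are arbitrary examples.

```python
import numpy as np
from scipy.signal import butter, lfilter, resample

SR = 16000  # sample rate in Hz (illustrative)

def pitch_shift(x, semitones):
    """Naive pitch shift via resampling (note: this also shortens the signal;
    a production system would use a duration-preserving phase vocoder)."""
    factor = 2 ** (semitones / 12)
    return resample(x, int(len(x) / factor))

def compress(x, threshold=0.3, ratio=4.0):
    """Simple static dynamic-range compression: reduce gain above a threshold."""
    y = x.copy()
    over = np.abs(y) > threshold
    y[over] = np.sign(y[over]) * (threshold + (np.abs(y[over]) - threshold) / ratio)
    return y

def high_pass(x, cutoff=1000.0, sr=SR):
    """Second-order Butterworth high-pass filter to brighten the spectrum."""
    b, a = butter(2, cutoff / (sr / 2), btype="high")
    return lfilter(b, a, x)

def happy_transform(x):
    """Chain the three described components: raise pitch, compress, brighten."""
    return high_pass(compress(pitch_shift(x, semitones=0.5)))

# Demo on a synthetic 440 Hz tone standing in for a voice signal
t = np.linspace(0, 1, SR, endpoint=False)
voice = 0.8 * np.sin(2 * np.pi * 440 * t)
out = happy_transform(voice)
```

The order of operations here (pitch, then dynamics, then spectrum) is one plausible chaining; the article does not specify how the real platform sequences or parameterises its stages.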
The researchers believe this novel audio platform opens up many new areas of experimentation.
“Previously, this kind of emotion manipulation has not been done on running speech, only on recorded segments”, explains Jean-Julien Aucouturier. “We are making a version of the voice manipulation platform available as open-source on our website, and we invite anyone to download and experiment with the tools.”
For applications outside academia, co-author Katsumi Watanabe from Waseda University and the University of Tokyo in Japan suggests the platform could serve therapeutic purposes, for example in mood disorders, by inducing positive attitude change as patients retell affective memories or redescribe emotionally laden events in a modified tone of voice. It might also be possible to enhance the emotional impact of karaoke or live singing performances, or to alter the emotional atmosphere of conversations in online meetings and gaming.
About this psychology research
The study was conducted by researchers at the Science and Technology of Music and Sound Lab (STMS), (IRCAM/CNRS/UPMC) and the LEAD Lab (CNRS/University of Burgundy) in France, Lund University in Sweden, and Waseda University and the University of Tokyo in Japan.
Source: Cecilia Schubert – Lund University
Image Credit: The image is credited to Science Team
Original Research: Full open access research for “Covert digital manipulation of vocal emotion alter speakers’ emotional states in a congruent direction” by Jean-Julien Aucouturier, Petter Johansson, Lars Hall, Rodrigo Segnini, Lolita Mercadié, and Katsumi Watanabe in PNAS. Published online January 11, 2016. doi:10.1073/pnas.1506552113
Covert digital manipulation of vocal emotion alter speakers’ emotional states in a congruent direction
Research has shown that people often exert control over their emotions. By modulating expressions, reappraising feelings, and redirecting attention, they can regulate their emotional experience. These findings have contributed to a blurring of the traditional boundaries between cognitive and emotional processes, and it has been suggested that emotional signals are produced in a goal-directed way and monitored for errors like other intentional actions. However, this interesting possibility has never been experimentally tested. To this end, we created a digital audio platform to covertly modify the emotional tone of participants’ voices while they talked in the direction of happiness, sadness, or fear. The results showed that the audio transformations were perceived as natural examples of the intended emotions, but the great majority of the participants nevertheless remained unaware that their own voices were being manipulated. This finding indicates that people are not continuously monitoring their own voice to make sure that it meets a predetermined emotional target. Instead, as a consequence of listening to their altered voices, the emotional state of the participants changed in congruence with the emotion portrayed, as measured by both self-report and skin conductance level. This change is the first evidence, to our knowledge, of peripheral feedback effects on emotional experience in the auditory domain. As such, our result reinforces the wider framework of self-perception theory: that we often use the same inferential strategies to understand ourselves as those that we use to understand others.