Image: Faces displaying a range of emotions, recognized in real time by the wirelessly connected PSiFI system. Credit: Neuroscience News

Wearable Tech Reads Human Emotions

Summary: Researchers unveiled a pioneering technology capable of real-time human emotion recognition, promising transformative applications in wearable devices and digital services.

The system, known as the personalized skin-integrated facial interface (PSiFI), combines verbal and non-verbal cues captured by a self-powered, stretchable sensor and processes the data efficiently for wireless transfer.

This breakthrough, supported by machine learning, accurately identifies emotions even under mask-wearing conditions and has been applied in a VR “digital concierge” scenario, showcasing its potential to personalize user experiences in smart environments. The development is a significant stride towards enhancing human-machine interactions by integrating complex emotional data.

Key Facts:

  1. Innovative Emotion Recognition System: UNIST’s research team developed a multi-modal system that integrates verbal and non-verbal expressions for real-time emotion recognition.
  2. Self-Powered and Stretchable Sensor: The PSiFI system utilizes a novel sensor that is self-powered, facilitating the simultaneous capture and integration of diverse emotional data without external power sources.
  3. Practical Applications in VR: Demonstrated in a VR environment, the technology provides personalized services based on user emotions, indicating its vast potential in digital concierge services and beyond.

Source: UNIST

A groundbreaking technology that can recognize human emotions in real time has been developed by Professor Jiyun Kim and his research team in the Department of Material Science and Engineering at UNIST.

This innovative technology is poised to revolutionize various industries, including next-generation wearable systems that provide services based on emotions.

Understanding and accurately extracting emotional information has long been a challenge due to the abstract and ambiguous nature of human affects such as emotions, moods, and feelings.

To address this, the research team has developed a multi-modal human emotion recognition system that combines verbal and non-verbal expression data to efficiently utilize comprehensive emotional information.

At the core of this system is the personalized skin-integrated facial interface (PSiFI) system, which is self-powered, facile, stretchable, and transparent. It features a first-of-its-kind bidirectional triboelectric strain and vibration sensor that enables the simultaneous sensing and integration of verbal and non-verbal expression data.
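To make the data-fusion idea concrete, here is a minimal Python sketch, not the team's actual pipeline, of how a synchronized strain window (facial expression) and vibration window (speech) might be reduced to one feature vector. The sample rates, window length, and summary statistics are all assumptions introduced for illustration:

```python
import numpy as np

# Hypothetical acquisition parameters; the study's actual rates are not
# given in this article.
STRAIN_RATE_HZ = 100      # facial strain channel (non-verbal cues)
VIB_RATE_HZ = 1000        # vocal-cord vibration channel (verbal cues)
WINDOW_S = 1.0

def fuse_window(strain: np.ndarray, vibration: np.ndarray) -> np.ndarray:
    """Concatenate simple per-channel statistics into one feature vector.

    strain:    shape (int(STRAIN_RATE_HZ * WINDOW_S),) raw strain signal
    vibration: shape (int(VIB_RATE_HZ * WINDOW_S),)  raw vibration signal
    """
    def stats(x: np.ndarray) -> np.ndarray:
        return np.array([x.mean(), x.std(), x.min(), x.max(),
                         np.abs(np.diff(x)).mean()])  # crude dynamics cue
    return np.concatenate([stats(strain), stats(vibration)])

# One fused sample: 5 strain features + 5 vibration features.
rng = np.random.default_rng(0)
x = fuse_window(rng.normal(size=100), rng.normal(size=1000))
print(x.shape)  # (10,)
```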

The system is fully integrated with a data processing circuit for wireless data transfer, enabling real-time emotion recognition.

Utilizing machine learning algorithms, the developed technology demonstrates accurate and real-time human emotion recognition tasks, even when individuals are wearing masks. The system has also been successfully applied in a digital concierge application within a virtual reality (VR) environment.

The technology is based on the phenomenon of “friction charging” (the triboelectric effect), in which surfaces acquire opposite electrical charges upon contact and separation. Notably, the sensor generates its own electrical signal, requiring no external power source or complex measuring devices for data acquisition.
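As a rough illustration of what such self-generated signals might look like downstream, the sketch below detects contact-and-separation events as threshold crossings in a triboelectric voltage trace. The threshold and the synthetic trace are invented for the example, not taken from the study:

```python
import numpy as np

def detect_events(voltage: np.ndarray, thresh: float = 0.5) -> np.ndarray:
    """Return indices where the rectified voltage first crosses `thresh`,
    i.e. candidate friction-charging events (either polarity)."""
    above = np.abs(voltage) > thresh
    # An event starts where the signal goes from below- to above-threshold.
    return np.flatnonzero(above & ~np.roll(above, 1))

# Synthetic trace: baseline noise plus two friction-charging pulses.
t = np.linspace(0, 1, 1000)
v = 0.05 * np.random.default_rng(1).normal(size=t.size)
v[200:220] += 1.0   # pulse from one deformation event
v[700:720] -= 0.8   # opposite-polarity pulse (bidirectional sensing)
print(detect_events(v))  # approximately [200, 700]
```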

Professor Kim commented, “Based on these technologies, we have developed a skin-integrated facial interface (PSiFI) system that can be customized for individuals.” The team utilized a semi-curing technique to manufacture a transparent conductor for the friction charging electrodes. Additionally, a personalized mask was created using a multi-angle shooting technique, combining flexibility, elasticity, and transparency.

The research team successfully integrated the detection of facial muscle deformation and vocal cord vibrations, enabling real-time emotion recognition. The system’s capabilities were demonstrated in a virtual reality “digital concierge” application, where customized services based on users’ emotions were provided.
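The real-time loop can be pictured roughly as follows: time-aligned strain and vibration windows arrive wirelessly, are reduced to features, and are passed to a classifier. Everything in this sketch, the emotion labels, the placeholder classifier, and the simulated packets, is hypothetical:

```python
import numpy as np

EMOTIONS = ["neutral", "happy", "sad", "angry", "surprised"]  # illustrative set

def classify(features: np.ndarray) -> str:
    """Placeholder for a trained model (see the training sketch below)."""
    return EMOTIONS[int(np.abs(features).sum() * 10) % len(EMOTIONS)]

def stream_windows(n_windows: int, size: int = 100):
    """Stand-in for wirelessly received, time-aligned sensor packets."""
    rng = np.random.default_rng(2)
    for _ in range(n_windows):
        yield rng.normal(size=size), rng.normal(size=size)  # strain, vibration

# Real-time-style loop: featurize each incoming window pair, then classify.
for strain_win, vib_win in stream_windows(3):
    feats = np.array([strain_win.mean(), strain_win.std(),
                      vib_win.mean(), vib_win.std()])
    print(classify(feats))
```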

Jin Pyo Lee, the first author of the study, stated, “With this developed system, it is possible to implement real-time emotion recognition with just a few learning steps and without complex measurement equipment. This opens up possibilities for portable emotion recognition devices and next-generation emotion-based digital platform services in the future.”

The research team conducted real-time emotion recognition experiments, collecting multimodal data such as facial muscle deformation and voice. The system exhibited high emotional recognition accuracy with minimal training. Its wireless and customizable nature ensures wearability and convenience.
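The “minimal training” claim can be illustrated with a small calibration-style fit. The sketch below uses scikit-learn and synthetic, well-separated clusters in place of real sensor features; the class labels, sample counts, and feature dimension are assumptions:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
emotions = ["neutral", "happy", "angry"]        # illustrative subset
X_parts, y = [], []
for i, label in enumerate(emotions):
    center = np.zeros(10)
    center[i] = 3.0                             # separable toy clusters
    X_parts.append(rng.normal(loc=center, size=(5, 10)))  # 5 samples/class
    y += [label] * 5
X = np.vstack(X_parts)

clf = SVC(kernel="rbf").fit(X, y)               # "a few learning steps"
test = rng.normal(loc=np.r_[3.0, np.zeros(9)], size=(1, 10))
print(clf.predict(test))                        # expected: ['neutral']
```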

Furthermore, the team applied the system to VR environments, utilizing it as a “digital concierge” for various settings, including smart homes, private movie theaters, and smart offices. The system’s ability to identify individual emotions in different situations enables the provision of personalized recommendations for music, movies, and books.
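A toy version of that concierge behavior might look like the following. The emotion set, the catalog, and the emotion-to-recommendation mapping are all invented for illustration; the article does not specify how the real system maps emotions to content:

```python
# Map a recognized emotion to a content recommendation per setting.
RECOMMENDATIONS = {
    "happy":   {"music": "an upbeat playlist", "movie": "a comedy"},
    "sad":     {"music": "a calming playlist", "movie": "a feel-good drama"},
    "angry":   {"music": "an ambient playlist", "movie": "a light documentary"},
    "neutral": {"music": "your favorites", "movie": "what you were watching"},
}

def concierge(emotion: str, setting: str = "smart home") -> str:
    rec = RECOMMENDATIONS.get(emotion, RECOMMENDATIONS["neutral"])
    return f"[{setting}] emotion={emotion}: try {rec['music']}, or {rec['movie']}."

print(concierge("sad", "private movie theater"))
```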

Professor Kim emphasized, “For effective interaction between humans and machines, human-machine interface (HMI) devices must be capable of collecting diverse data types and handling complex integrated information. This study exemplifies the potential of using emotions, which are complex forms of human information, in next-generation wearable systems.”

The research was conducted in collaboration with Professor Pooi See Lee of Nanyang Technological University in Singapore and was supported by the National Research Foundation of Korea (NRF) and the Korea Institute of Materials Science (KIMS) under the Ministry of Science and ICT.

About this emotion and neurotech research news

Author: JooHyeon Heo
Source: UNIST
Contact: JooHyeon Heo – UNIST
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Encoding of multi-modal emotional information via personalized skin-integrated wireless facial interface” by Jiyun Kim et al. Nature Communications


Abstract

Encoding of multi-modal emotional information via personalized skin-integrated wireless facial interface

Human affects such as emotions, moods, and feelings are increasingly considered key parameters for enhancing the interaction of humans with diverse machines and systems. However, their intrinsically abstract and ambiguous nature makes it challenging to accurately extract and exploit emotional information.

Here, we develop a multi-modal human emotion recognition system which can efficiently utilize comprehensive emotional information by combining verbal and non-verbal expression data.

This system is built around a personalized skin-integrated facial interface (PSiFI) that is self-powered, facile, stretchable, and transparent, featuring a first-of-its-kind bidirectional triboelectric strain and vibration sensor that, for the first time, enables verbal and non-verbal expression data to be sensed and combined. It is fully integrated with a data processing circuit for wireless data transfer, allowing real-time emotion recognition to be performed.

With the help of machine learning, various human emotion recognition tasks are performed accurately and in real time, even while a mask is worn, and a digital concierge application is demonstrated in a VR environment.
