A Systematic Review on Affective Computing: Emotion Models, Databases, and Recent Advances

by Yan Wang, Wei Song, Wei Tao, Antonio Liotta, Dawei Yang, Xinlei Li, Shuyong Gao, Yixuan Sun, Weifeng Ge, Wei Zhang, Wenqiang Zhang

Released as an article.



Affective computing plays a key role in human-computer interaction, entertainment, teaching, safe driving, and multimedia integration. Major breakthroughs have recently been made in affective computing (i.e., emotion recognition and sentiment analysis). Affective computing is realized from unimodal or multimodal data, consisting primarily of physical information (e.g., textual, audio, and visual data) and physiological signals (e.g., EEG and ECG signals). Physical-based affect recognition attracts more researchers because many public databases are available. However, it is hard to reveal an inner emotion that is purposely hidden behind facial expressions, audio tones, body gestures, etc. Physiological signals can yield more precise and reliable emotional results; yet the difficulty of acquiring physiological signals also hinders their practical application. Thus, fusing physical information and physiological signals can provide useful features of emotional states and lead to higher accuracy. Instead of focusing on one specific field of affective analysis, we systematically review recent advances in affective computing and taxonomize unimodal affect recognition as well as multimodal affective analysis. First, we introduce two typical emotion models, followed by commonly used databases for affective computing. Next, we survey and taxonomize state-of-the-art unimodal affect recognition and multimodal affective analysis in terms of their detailed architectures and performance. Finally, we discuss some important aspects of affective computing and its applications, and conclude this review with the most promising future directions, such as the establishment of baseline datasets, fusion strategies for multimodal affective analysis, and unsupervised learning models.
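The abstract's point about fusing physical information and physiological signals can be illustrated with a minimal sketch of two common fusion strategies. This is not the authors' method, only a generic illustration: `late_fusion` averages per-modality class probabilities (decision-level fusion), and `early_fusion` concatenates modality feature vectors before classification (feature-level fusion). The weight `w` and the toy probability vectors are assumptions for the example.

```python
import numpy as np

def late_fusion(probs_physical, probs_physiological, w=0.5):
    """Decision-level fusion: weighted average of per-modality
    class-probability vectors (e.g., visual model vs. EEG model)."""
    return w * probs_physical + (1 - w) * probs_physiological

def early_fusion(feat_physical, feat_physiological):
    """Feature-level fusion: concatenate modality features into one
    vector, to be fed to a single downstream classifier."""
    return np.concatenate([feat_physical, feat_physiological])

# Toy example with 3 emotion classes (values are illustrative only).
p_visual = np.array([0.6, 0.3, 0.1])  # hypothetical facial-expression model
p_eeg = np.array([0.2, 0.7, 0.1])     # hypothetical EEG model
fused = late_fusion(p_visual, p_eeg)
print(fused)                  # [0.4 0.5 0.1]
print(int(np.argmax(fused)))  # predicted class index: 1
```

The two modalities disagree on the top class here; the fused decision follows the EEG model because its confidence is higher, which is the basic rationale the abstract gives for combining modalities.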

Archived Files and Locations

application/pdf  2.0 MB
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2022-03-14
Version   v1
Language   en
arXiv  2203.06935v1