Emotion Recognition Implementation with Multimodalities of Face, Voice and EEG

Article Information

Type
Academic journal
Author
Miracle Udurume, Angela Caliwag, Wansu Lim, Gwigon Kim (Kumoh National Institute of Technology)
Journal
Journal of Information and Communication Convergence Engineering (JICCE), The Korea Institute of Information and Communication Engineering, Vol. 20, No. 3 (KCI Candidate Journal, SCOPUS)
Published
2022.9
Pages
174-180 (7 pages)


Abstract

Emotion recognition is an essential component of complete interaction between humans and machines. The challenges of emotion recognition arise from the different forms in which emotions are expressed, such as visual, sound, and physiological signals. Recent advancements in the field show that combined modalities, such as visual, voice, and electroencephalography (EEG) signals, lead to better results than single modalities used separately. Previous studies have explored the use of multiple modalities for accurate emotion prediction; however, few studies address real-time implementation because of the difficulty of running multiple modalities of emotion recognition simultaneously. In this study, we propose an emotion recognition system for real-time implementation. Our model is built with a multithreading block that runs each modality in a separate thread for continuous synchronization. First, we achieved emotion recognition for each modality separately before enabling the multithreaded system. To verify the correctness of the results, we compared the accuracy of unimodal and multimodal emotion recognition in real time. The experimental results demonstrate real-time user emotion recognition with the proposed model, as well as the effectiveness of multimodality: our multimodal model achieved an accuracy of 80.1%, compared with unimodal accuracies of 70.9%, 54.3%, and 63.1%.
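The abstract describes a multithreading block in which each modality (face, voice, EEG) runs in its own thread and the results are synchronized before fusion. The sketch below illustrates that general design under stated assumptions: the `predict_*` functions are hypothetical stand-ins returning fixed probability vectors (not the paper's trained models), and late fusion by simple averaging is assumed, since the abstract does not specify the fusion rule.

```python
import threading
import queue

# Hypothetical sketch of a multithreaded multimodal pipeline: each modality
# runs in its own thread, posts its emotion-probability vector to a shared
# queue, and the main thread joins all workers before fusing the results.
# The predict_* functions are placeholders, not the paper's actual models.

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def predict_face():
    return [0.7, 0.1, 0.1, 0.1]   # placeholder probabilities

def predict_voice():
    return [0.5, 0.2, 0.2, 0.1]   # placeholder probabilities

def predict_eeg():
    return [0.6, 0.1, 0.2, 0.1]   # placeholder probabilities

def run_modality(name, predictor, out_q):
    # Each modality thread computes its prediction independently.
    out_q.put((name, predictor()))

def multimodal_predict():
    out_q = queue.Queue()
    workers = [
        threading.Thread(target=run_modality, args=(name, fn, out_q))
        for name, fn in [("face", predict_face),
                         ("voice", predict_voice),
                         ("eeg", predict_eeg)]
    ]
    for t in workers:
        t.start()
    for t in workers:
        t.join()  # synchronization point: wait for all modalities

    preds = dict(out_q.get() for _ in workers)
    # Assumed late fusion: average the per-modality probability vectors.
    fused = [sum(p[i] for p in preds.values()) / len(preds)
             for i in range(len(EMOTIONS))]
    return EMOTIONS[fused.index(max(fused))], fused

label, probs = multimodal_predict()
print(label)  # "happy" with these placeholder scores
```

In a real-time loop this predict-join-fuse cycle would repeat per frame; joining all three threads before fusing is one simple way to realize the "continuous synchronization" the abstract mentions.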

Contents

Abstract
Ⅰ. INTRODUCTION
Ⅱ. PROPOSED METHOD
Ⅲ. RESULTS AND DISCUSSION
Ⅳ. CONCLUSIONS
REFERENCES


Recommendations

It is an article recommended by DBpia according to the article similarity. Check out the related articles!

Related Authors

Comments(0)

0

Write first comments.

UCI(KEPA) : I410-ECN-0101-2023-004-000406968