Coupled Multimodal Emotional Feature Analysis Based on Broad-Deep Fusion Networks in Human-Robot Interaction

dc.authorscopusidWitold Pedrycz / 58861905800
dc.authorwosidWitold Pedrycz / HJZ-2779-2023
dc.contributor.authorChen, Luefeng
dc.contributor.authorLi, Min
dc.contributor.authorWu, Min
dc.contributor.authorPedrycz, Witold
dc.contributor.authorHirota, Kaoru
dc.date.accessioned2025-04-18T10:31:47Z
dc.date.available2025-04-18T10:31:47Z
dc.date.issued2024
dc.departmentİstinye University, Faculty of Engineering and Natural Sciences, Department of Computer Engineering
dc.description.abstractA coupled multimodal emotional feature analysis (CMEFA) method based on broad-deep fusion networks is proposed, which divides multimodal emotion recognition into two layers. First, facial emotional features and gesture emotional features are extracted using the broad and deep learning fusion network (BDFN). Since the two modalities are not completely independent of each other, canonical correlation analysis (CCA) is used to analyze and extract the correlation between the emotional features, and a coupling network is established to recognize emotion from the extracted bimodal features. Both simulation and application experiments are completed. In simulation experiments on the Bimodal Face and Body Gesture Database (FABO), the recognition rate of the proposed method is 1.15% higher than that of support vector machine recursive feature elimination (SVMRFE) (without considering the unbalanced contribution of features). Moreover, the multimodal recognition rate of the proposed method is 21.22%, 2.65%, 1.61%, 1.54%, and 0.20% higher than those of the fuzzy deep neural network with sparse autoencoder (FDNNSA), ResNet-101 + GFK, C3D + MCB + DBN, the hierarchical classification fusion strategy (HCFS), and the cross-channel convolutional neural network (CCCNN), respectively. In addition, preliminary application experiments are carried out on our developed emotional social robot system, in which the robot recognizes the emotions of eight volunteers from their facial expressions and body gestures. © 2012 IEEE.
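As an illustration of the CCA-based coupling step described in the abstract, the following minimal Python sketch projects two sets of per-sample features into a shared, maximally correlated subspace and classifies the fused result. This is not the paper's BDFN or coupling network: the feature dimensions, the random stand-in features, scikit-learn's CCA, and the logistic-regression classifier are all assumptions made for illustration.

# Minimal sketch of CCA-based bimodal feature fusion (illustration only;
# not the paper's BDFN/coupling network). Feature dimensions, the random
# stand-in features, and the classifier head are assumed, not taken from
# the paper.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples = 200
face_feats = rng.normal(size=(n_samples, 64))     # hypothetical facial features
gesture_feats = rng.normal(size=(n_samples, 32))  # hypothetical gesture features
labels = rng.integers(0, 6, size=n_samples)       # hypothetical emotion classes

# Project both modalities into a shared subspace where their correlation
# is maximized, so the two feature sets are no longer treated as independent.
cca = CCA(n_components=16)
face_c, gesture_c = cca.fit_transform(face_feats, gesture_feats)

# Couple the correlated projections by concatenation and classify.
fused = np.concatenate([face_c, gesture_c], axis=1)
clf = LogisticRegression(max_iter=1000).fit(fused, labels)
print("training accuracy:", clf.score(fused, labels))

In the paper itself, the inputs would be BDFN-extracted facial and gesture features and the classifier a trained coupling network; the sketch only shows how CCA aligns the two modalities before fusion.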
dc.identifier.citationChen, L., Li, M., Wu, M., Pedrycz, W., & Hirota, K. (2024). Coupled multimodal emotional feature analysis based on broad-deep fusion networks in human–robot interaction. IEEE Transactions on Neural Networks and Learning Systems, 35(7), 9663–9673.
dc.identifier.doi10.1109/TNNLS.2023.3236320
dc.identifier.endpage9673
dc.identifier.issn2162-237X
dc.identifier.issue7
dc.identifier.pmid37021991
dc.identifier.scopus2-s2.0-85147283251
dc.identifier.scopusqualityQ1
dc.identifier.startpage9663
dc.identifier.urihttp://dx.doi.org/10.1109/TNNLS.2023.3236320
dc.identifier.urihttps://hdl.handle.net/20.500.12713/7106
dc.identifier.volume35
dc.identifier.wosWOS:001128303600001
dc.identifier.wosqualityQ1
dc.indekslendigikaynakScopus
dc.indekslendigikaynakPubMed
dc.indekslendigikaynakWeb of Science
dc.institutionauthorPedrycz, Witold
dc.institutionauthoridWitold Pedrycz / 0000-0002-9335-9930
dc.language.isoen
dc.publisherInstitute of Electrical and Electronics Engineers Inc.
dc.relation.ispartofIEEE Transactions on Neural Networks and Learning Systems
dc.relation.publicationcategoryArticle - International Refereed Journal - Institutional Academic Staff
dc.rightsinfo:eu-repo/semantics/closedAccess
dc.subjectBroad Learning
dc.subjectDeep Feature Fusion
dc.subjectDeep Neural Networks
dc.subjectHuman-Robot Interaction
dc.subjectMultimodal Emotion Recognition
dc.titleCoupled Multimodal Emotional Feature Analysis Based on Broad-Deep Fusion Networks in Human-Robot Interaction
dc.typeArticle
