Convolutional Features-Based Broad Learning With LSTM for Multidimensional Facial Emotion Recognition in Human-Robot Interaction

dc.authoridHirota, Kaoru/0000-0002-8118-9815
dc.authoridLi, Min/0000-0002-5773-8976
dc.authoridWu, Min/0000-0002-0668-8315
dc.contributor.authorChen, Luefeng
dc.contributor.authorLi, Min
dc.contributor.authorWu, Min
dc.contributor.authorPedrycz, Witold
dc.contributor.authorHirota, Kaoru
dc.date.accessioned2024-05-19T14:41:14Z
dc.date.available2024-05-19T14:41:14Z
dc.date.issued2024
dc.departmentİstinye Üniversitesien_US
dc.description.abstractConvolutional feature-based broad learning with long short-term memory (CBLSTM) is proposed to recognize multidimensional facial emotions in human-robot interaction. The CBLSTM model consists of convolution and pooling layers, broad learning (BL), and a long short-term memory network. It aims to capture the depth, width, and time-scale information of facial emotion through the three parts of the model, so as to realize multidimensional facial emotion recognition. CBLSTM adopts the BL structure after the convolution and pooling layers to replace the original random mapping and extract features with stronger representation ability, which significantly reduces the computation time of the facial emotion recognition network. Moreover, incremental learning is adopted, so the model can be quickly reconstructed without a complete retraining process. Experiments are conducted on three databases: CK+, MMI, and SFEW2.0. The experimental results show that the proposed CBLSTM model using multidimensional information produces higher recognition accuracy than the model without time-scale information: 1.30% higher on the CK+ database and 1.06% higher on the MMI database. The computation time is 9.065 s, which is significantly shorter than that reported for the convolutional neural network (CNN). In addition, the proposed method improves on state-of-the-art methods, raising the recognition rate by 3.97%, 1.77%, and 0.17% over CNN-SIPS, HOG-TOP, and CMACNN on the CK+ database; by 5.17%, 5.14%, and 3.56% over TLMOS, ALAW, and DAUGN on the MMI database; and by 7.08% and 2.98% over CNNVA and QCNN on the SFEW2.0 database.en_US
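The abstract describes the three components of CBLSTM (convolution and pooling layers for depth, broad learning for width, and LSTM for time scale). The following is a minimal sketch of such a pipeline in PyTorch, under assumed toy settings (48x48 grayscale frames, 7 emotion classes, illustrative layer widths); it is not the authors' implementation: the broad-learning nodes are approximated here by fixed random linear mappings, and the ridge-regression output solution and incremental-learning update are omitted.

# Minimal sketch (not the authors' code): a CBLSTM-like pipeline assuming
# 48x48 grayscale face frames, 7 emotion classes, and toy layer sizes.
import torch
import torch.nn as nn


class CBLSTMSketch(nn.Module):
    """Conv/pooling features -> broad-learning-style mapping -> LSTM -> classifier."""

    def __init__(self, num_classes=7, feat_nodes=128, enh_nodes=256, lstm_hidden=64):
        super().__init__()
        # Depth information: convolution and pooling layers.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        conv_dim = 32 * 12 * 12  # feature size for a 48x48 input
        # Width information: feature and enhancement nodes in the style of broad
        # learning, approximated here as fixed (non-trained) linear mappings.
        self.feature_nodes = nn.Linear(conv_dim, feat_nodes)
        self.enhance_nodes = nn.Linear(feat_nodes, enh_nodes)
        for p in list(self.feature_nodes.parameters()) + list(self.enhance_nodes.parameters()):
            p.requires_grad_(False)
        # Time-scale information: LSTM over the frame sequence.
        self.lstm = nn.LSTM(feat_nodes + enh_nodes, lstm_hidden, batch_first=True)
        self.classifier = nn.Linear(lstm_hidden, num_classes)

    def forward(self, frames):
        # frames: (batch, time, 1, 48, 48)
        b, t = frames.shape[:2]
        x = frames.reshape(b * t, *frames.shape[2:])
        z = self.conv(x).flatten(1)                 # convolutional features
        f = torch.tanh(self.feature_nodes(z))       # BL feature nodes
        e = torch.tanh(self.enhance_nodes(f))       # BL enhancement nodes
        wide = torch.cat([f, e], dim=1).reshape(b, t, -1)
        out, _ = self.lstm(wide)                    # temporal dynamics
        return self.classifier(out[:, -1])          # emotion logits


if __name__ == "__main__":
    model = CBLSTMSketch()
    clip = torch.randn(2, 8, 1, 48, 48)  # 2 clips of 8 frames each
    print(model(clip).shape)             # torch.Size([2, 7])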
dc.description.sponsorshipNational Natural Science Foundation of China [61973286, 61603356, 61733016]; 111 Project [B17040]; Fundamental Research Funds for the Central Universities; China University of Geosciences [201839]en_US
dc.description.sponsorshipThis work was supported in part by the National Natural Science Foundation of China under Grant 61973286, Grant 61603356, and Grant 61733016; in part by the 111 Project under Grant B17040; and in part by the Fundamental Research Funds for the Central Universities, China University of Geosciences under Grant 201839.en_US
dc.identifier.doi10.1109/TSMC.2023.3301001
dc.identifier.endpage75en_US
dc.identifier.issn2168-2216
dc.identifier.issn2168-2232
dc.identifier.issue1en_US
dc.identifier.scopus2-s2.0-85168681758en_US
dc.identifier.scopusqualityQ1en_US
dc.identifier.startpage64en_US
dc.identifier.urihttps://doi.org/10.1109/TSMC.2023.3301001
dc.identifier.urihttps://hdl.handle.net/20.500.12713/5081
dc.identifier.volume54en_US
dc.identifier.wosWOS:001060667900001en_US
dc.identifier.wosqualityN/Aen_US
dc.indekslendigikaynakWeb of Scienceen_US
dc.indekslendigikaynakScopusen_US
dc.language.isoenen_US
dc.publisherIEEE-Inst Electrical Electronics Engineers Incen_US
dc.relation.ispartofIEEE Transactions on Systems, Man, and Cybernetics: Systemsen_US
dc.relation.publicationcategoryArticle - International Peer-Reviewed Journal - Institutional Academic Staffen_US
dc.rightsinfo:eu-repo/semantics/closedAccessen_US
dc.snmz20240519_kaen_US
dc.subjectEmotion Recognitionen_US
dc.subjectHuman-Robot Interactionen_US
dc.subjectLong Short-Term Memory (LSTM)en_US
dc.titleConvolutional Features-Based Broad Learning With LSTM for Multidimensional Facial Emotion Recognition in Human-Robot Interactionen_US
dc.typeArticleen_US

Files