Yazar "Hirota, K." seçeneğine göre listele
Listeleniyor 1 - 4 / 4
Item: Broad-deep network-based fuzzy emotional inference model with personal information for intention understanding in human–robot interaction (Elsevier Ltd, 2024) Li, M.; Chen, L.; Wu, M.; Hirota, K.; Pedrycz, W.
A broad-deep fusion network-based fuzzy emotional inference model with personal information (BDFEI) is proposed for emotional intention understanding in human–robot interaction. It aims to understand students' intentions in the university teaching scene. Initially, we employ convolution and maximum pooling for feature extraction. Subsequently, we apply the ridge regression algorithm for emotional behavior recognition, which mitigates the complex network structures and slow network updates often associated with deep learning. Moreover, we use multivariate analysis of variance to identify the key personal-information factors influencing intentions and to calculate their influence coefficients. Finally, a fuzzy inference method is employed to gain a comprehensive understanding of intentions. Our experimental results demonstrate the effectiveness of the BDFEI model: compared to the existing FDNNSA, ResNet-101+GFK, and HCFS models, it achieves superior accuracy on the FABO database, surpassing them by 12.21%, 1.89%, and 0.78%, respectively. Furthermore, experiments on our self-built database yield 82.00% accuracy in intention understanding, confirming the efficacy of our emotional intention inference model. © 2024 Elsevier Ltd

Item: Partially Occluded Face Expression Recognition with CBAM-Based Residual Network for Teaching Scene (Institute of Electrical and Electronics Engineers Inc., 2023) Bai, Y.; Chen, L.; Li, M.; Wu, M.; Pedrycz, W.; Hirota, K.
In this paper, a deep residual network based on the convolutional block attention module (CBAM) is proposed for feature extraction from partially occluded face expression data. The method addresses the difficulty of extracting features from locally occluded faces by using CBAM to focus on the regions and channels of the occluded face data that carry important information. A multi-task cascaded convolutional network (MTCNN) first localizes the key facial emotion regions, deep emotion features are then extracted by the CBAM-ResNet network, and the final emotion labels are generated. The effectiveness of the method is verified on the RAF-DB dataset and the occluded CK+ dataset. The accuracy on the RAF-DB dataset is 76.3%, which is 3.74% and 1.64% higher than the accuracies of the RGBT and WLS-RF methods, respectively. Application experiments carried out in a real teaching scenario verify the applicability of the algorithm in that setting. © 2023 IEEE.
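The CBAM block named in the preceding item is a published, general-purpose attention module (channel attention followed by spatial attention). The following is a minimal PyTorch sketch of such a block as it might be attached to a ResNet feature map; the reduction ratio, kernel size, and feature shapes are illustrative assumptions, not the configuration reported in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Shared MLP applied to both the average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1))
        mx = self.mlp(F.adaptive_max_pool2d(x, 1))
        return torch.sigmoid(avg + mx)

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        # Channel-wise average and max maps, concatenated along the channel axis.
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x):
        x = x * self.ca(x)   # re-weight channels
        x = x * self.sa(x)   # re-weight spatial locations
        return x

# Usage: attach the module to an intermediate feature map of a residual network.
feats = torch.randn(8, 256, 14, 14)   # dummy occluded-face features
out = CBAM(256)(feats)                # same shape, attention-re-weighted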
Item: Plate Shape Prediction Based on Data-Driven in Roll Quenching Process (Institute of Electrical and Electronics Engineers Inc., 2023) Hu, L.; Chen, L.; Hu, J.; Wu, M.; Pedrycz, W.; Hirota, K.
In steel plate production, predicting the plate shape is of great significance for producing high-quality, consistently stable plates. This paper presents a model that predicts both the defect types and the flatness of the plate, providing theoretical support for setting process parameters in roller quenching production. First, the quenching process parameters are analyzed to identify their characteristics. Then, the K-Means clustering algorithm and correlation analysis are employed to process the quenching process parameters. A gradient boosting decision tree (GBDT) model is used to predict the defect types and flatness of the steel plates. Finally, industrial production data are used for experimental validation, and the results verify the reliability of the proposed method. © 2023 IEEE.

Item: Skeleton-Based Multi-Stream Adaptive Graph Convolutional Network for Indoor Scene Action Recognition (Institute of Electrical and Electronics Engineers Inc., 2023) Li, J.; Chen, L.; Li, M.; Wu, M.; Pedrycz, W.; Hirota, K.
With the rapid advances in computer vision, human action recognition has gradually received attention, but current methods still exhibit problems in indoor environments. The human skeleton, as the framework of human motion, contains high-quality actional feature information, and skeleton-based action recognition effectively avoids interference from indoor background noise, giving it advantages in indoor action recognition. The outstanding performance of graph convolutional networks on graph-structured data has led to their rapid development and wide application in skeleton-based action recognition. However, second-order skeletal information, which also contains a large amount of actional feature information, is not effectively utilized, and the manually predefined topology of the human skeleton graph has limitations and cannot reflect interactions between limbs. To address these problems, this article designs an adaptive weighted multi-stream graph convolutional network (AM-GCN) based on skeletal information, using an attention mechanism to enhance the network's ability to extract actional features, an adaptive layer to make graph construction more flexible, and a dual-stream architecture to incorporate second-order skeletal features. Experiments on the NTU-RGB+D dataset show that the proposed method achieves good results. © 2023 IEEE.
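The adaptive layer described in the last item above follows a common pattern in skeleton-based graph convolutional networks: a fixed, predefined skeleton adjacency is combined with a learnable adjacency so the network can model limb interactions absent from the predefined graph. The sketch below is a minimal PyTorch illustration of that pattern under assumed shapes and initialization; it is not the paper's AM-GCN implementation.

import torch
import torch.nn as nn

class AdaptiveGraphConv(nn.Module):
    def __init__(self, in_channels, out_channels, adjacency):
        super().__init__()
        # Fixed, normalized skeleton adjacency (V x V), stored as a buffer.
        self.register_buffer("A", adjacency)
        # Learnable residual adjacency, initialized at zero so training
        # starts from the predefined topology.
        self.B = nn.Parameter(torch.zeros_like(adjacency))
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # x: (batch, channels, frames, joints)
        adj = self.A + self.B                      # adaptive graph
        x = torch.einsum("nctv,vw->nctw", x, adj)  # aggregate neighboring joint features
        return self.relu(self.conv(x))

# Usage with dummy first-order (joint coordinate) data: 25 joints, 64 frames.
V = 25
A = torch.eye(V)                    # placeholder adjacency; a real skeleton
                                    # graph would encode bone connectivity
layer = AdaptiveGraphConv(3, 64, A)
joints = torch.randn(4, 3, 64, V)   # (batch, xyz, frames, joints)
print(layer(joints).shape)          # torch.Size([4, 64, 64, 25])

A second stream fed with second-order (bone vector) features and fused with the joint stream would give the dual-stream structure the abstract describes.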