Center transfer for supervised domain adaptation

Date

2023

Publisher

Springer

Access Rights

info:eu-repo/semantics/openAccess

Abstract

Domain adaptation (DA) is a popular strategy for pattern recognition and classification tasks. It leverages a large amount of data from the source domain to help train the model applied in the target domain. Supervised domain adaptation (SDA) approaches are desirable when only a few labeled samples from the target domain are available, and they can be easily adopted in many real-world applications where data collection is expensive. In this study, we propose a new supervision signal, the center transfer loss (CTL), to efficiently align features under the SDA setting in deep learning (DL). Unlike most previous SDA methods, which rely on pairing up training samples, the proposed loss is trainable with a single-stream input under the mini-batch strategy. During training, the CTL serves two main functions that improve the performance of DL models: aligning the domains and increasing the discriminative power of the features. CTL also waives the hyper-parameter needed to balance these two functions, a second improvement over previous approaches. Extensive experiments on well-known public datasets show that the proposed method outperforms recent state-of-the-art approaches.
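
The abstract describes CTL only at a high level; the exact formulation is given in the paper. As a rough illustration of the shared-center idea, the hypothetical PyTorch sketch below implements a center-loss-style objective in which the class centers are shared by both domains, so a single pull-to-center term simultaneously compacts each class and draws same-class source and target features together, with no hyper-parameter to balance the two effects. The class name CenterTransferLoss and all details below are assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class CenterTransferLoss(nn.Module):
    # Hypothetical sketch: one learnable center per class, shared across
    # source and target domains. Pulling every feature toward its class
    # center (i) compacts each class (discriminability) and (ii) aligns
    # same-class features from both domains (domain alignment).
    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # features: (batch, feat_dim) from a single mixed-domain mini-batch
        # labels:   (batch,) class indices; no source/target pairing needed
        batch_centers = self.centers[labels]
        return ((features - batch_centers) ** 2).sum(dim=1).mean()

# Example use with a mixed mini-batch of source and (few) target samples:
ctl = CenterTransferLoss(num_classes=10, feat_dim=128)
features = torch.randn(32, 128)        # backbone embeddings
labels = torch.randint(0, 10, (32,))   # ground-truth labels for all samples
loss = ctl(features, labels)           # added to the usual classification loss

Because the centers are shared rather than duplicated per domain, the single-stream mini-batch can mix source and target samples freely, which matches the pairing-free training described in the abstract.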

Keywords

Supervised Domain Adaptation, Deep Learning, Center Transfer Loss, Transfer Learning

Source

Applied Intelligence

WoS Quartile

Q2

Citation

Huang, X., Zhou, N., Huang, J., Zhang, H., Pedrycz, W., & Choi, K. S. (2023). Center transfer for supervised domain adaptation. Applied Intelligence, 1–17.