Authors: Jadoon, H.K.; Jamil, A.; Zulfiqar, A.; Hameed, A.A.
Date accessioned: 2024-05-19
Date available: 2024-05-19
Date issued: 2023
ISBN: 9798350322347
DOI: https://doi.org/10.1109/AIBThings58340.2023.10292483
Handle: https://hdl.handle.net/20.500.12713/4242
Sponsors: Central Michigan University (CMU); IEEE
Conference: 2023 IEEE International Conference on Artificial Intelligence, Blockchain, and Internet of Things (AIBThings 2023), 16-17 September 2023 (conference code: 194014)
Abstract: Image classification poses a fundamental challenge in deep learning, especially in scenarios where labeled data is scarce but unlabeled data is abundant. Precise pseudo-labels are crucial to facilitate classification in such situations. One common approach uses binary classifiers in a one-vs-all strategy to assign pseudo-labels to unlabeled data, offering the advantage of tailored predictions for each class. However, this method faces challenges, including class imbalance, which often requires oversampling to resolve, and extended training times due to the multiple binary classifiers. Our proposed approach addresses the inherent class imbalance of the one-vs-all method, eliminating the need for oversampling. We achieve this by training a single multi-class classifier through a combination of binary classifiers, transfer learning, and fine-tuning, while enforcing a stringent prediction threshold for pseudo-labels. This transition to a single multi-class classifier significantly reduces both training duration and storage demands. Our model's effectiveness is rigorously evaluated on two diverse datasets, MNIST and Fashion MNIST, achieving test accuracies of 95.59% and 84.84%, respectively, for pseudo-label generation. © 2023 IEEE.
Language: English
Rights: info:eu-repo/semantics/closedAccess
Keywords: Binary Classifiers; Class Imbalance; Image Classification; Pseudo-Label Generation; Semi-Supervised Learning
Title: Enhancing the Multiclass Image Classification Accuracy using Binary Classifiers for Semi-Supervised Learning
Type: Conference Object
Scopus ID: 2-s2.0-85178515043
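
Note: The abstract's central mechanism, accepting a pseudo-label for an unlabeled sample only when the classifier's prediction clears a stringent confidence threshold, can be illustrated with a minimal sketch. The code below is not the paper's implementation: scikit-learn's digits dataset stands in for MNIST, logistic regression stands in for the fine-tuned multi-class classifier, and the 0.95 threshold, the 10% labeled split, and all variable names are illustrative assumptions.

    # Minimal sketch of confidence-thresholded pseudo-label generation for
    # semi-supervised learning. Assumptions (not from the paper): the digits
    # dataset stands in for MNIST, logistic regression stands in for the
    # fine-tuned multi-class classifier, and the 0.95 threshold is illustrative.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)

    # Simulate a scarce-label setting: 10% labeled, 90% treated as unlabeled.
    X_lab, X_unlab, y_lab, y_unlab_hidden = train_test_split(
        X, y, train_size=0.1, stratify=y, random_state=0
    )

    # Train a single multi-class classifier on the small labeled pool.
    clf = LogisticRegression(max_iter=2000)
    clf.fit(X_lab, y_lab)

    # Predict class probabilities on the unlabeled pool and keep only the
    # samples whose top-class confidence exceeds a stringent threshold.
    probs = clf.predict_proba(X_unlab)
    confidence = probs.max(axis=1)
    pseudo_labels = clf.classes_[probs.argmax(axis=1)]
    THRESHOLD = 0.95  # illustrative value, not taken from the paper
    keep = confidence >= THRESHOLD

    print(f"Pseudo-labeled {keep.sum()} of {len(X_unlab)} unlabeled samples")
    print(f"Pseudo-label accuracy: "
          f"{(pseudo_labels[keep] == y_unlab_hidden[keep]).mean():.4f}")

    # Retrain on labeled data augmented with the high-confidence pseudo-labels.
    X_aug = np.vstack([X_lab, X_unlab[keep]])
    y_aug = np.concatenate([y_lab, pseudo_labels[keep]])
    clf_final = LogisticRegression(max_iter=2000).fit(X_aug, y_aug)

A stringent threshold of this kind trades coverage for precision: fewer unlabeled samples receive pseudo-labels, but the accepted labels are more likely to be correct, which is the property the abstract emphasizes for its reported pseudo-label accuracies.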