Robust semi-supervised classification based on data augmented online ELMs with deep features
Hu, Xiaochang; Zeng, Yujun; Xu, Xin; Zhou, Sihang; Liu, Li (2021-07-21)
Hu, X., Zeng, Y., Xu, X., Zhou, S., & Liu, L. (2021). Robust semi-supervised classification based on data augmented online ELMs with deep features. Knowledge-Based Systems, 229, 107307. https://doi.org/10.1016/j.knosys.2021.107307
© 2021 Published by Elsevier B.V. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/.
https://urn.fi/URN:NBN:fi-fe2022022821164
Abstract
One important strategy in semi-supervised learning is to use the predicted pseudo labels of unlabeled data to reduce the dependence of supervised learning algorithms on ground-truth labels. However, the performance of such semi-supervised methods relies heavily on the quality of the pseudo labels. To address this issue, a robust semi-supervised classification method, named data augmented online extreme learning machines (ELMs) with deep features (DF-DAELM), is proposed. The method first extracts features and infers labels for unlabeled data through self-training. Then, with the learned features and inferred labels, two noise-robust shallow classifiers based on data augmentation (i.e., SLI-OELM and CR-OELM) are proposed to eliminate the adverse effects of label noise on classifier training. Specifically, inspired by label smoothing, SLI-OELM applies stochastic linear interpolation as a data augmentation scheme to improve the robustness of ELM-based classifiers. Furthermore, building on the smoothness assumption, the proposed CR-OELM uses an ℓ₂-norm consistency regularization term to implicitly weight noisy samples. Comprehensive experiments demonstrate that DF-DAELM achieves competitive or better performance on CIFAR-10/100 and SVHN compared with related state-of-the-art methods. Meanwhile, experimental results on the MNIST dataset with different noise levels and sample scales demonstrate the superior performance of the proposed classifiers, especially when the sample scale is small (≤ 20K) and the noise is strong (40%–80%).
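
For intuition, the following is a minimal sketch of how a mixup-style stochastic linear interpolation step might be combined with a regularized ELM readout trained on pseudo-labeled features. The abstract does not give the exact formulation of SLI-OELM, so the function names (sli_augment, train_elm_readout), the Beta-distributed mixing coefficient, and the closed-form ridge solution below are illustrative assumptions, not the authors' implementation.

# Illustrative sketch only: all names and parameters are assumptions for illustration.
import numpy as np

def sli_augment(X, Y, alpha=0.2, rng=None):
    """Mixup-style stochastic linear interpolation of features and one-hot labels."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha, size=(X.shape[0], 1))   # per-sample mixing coefficients
    perm = rng.permutation(X.shape[0])                   # random pairing of samples
    X_mix = lam * X + (1.0 - lam) * X[perm]              # interpolate inputs
    Y_mix = lam * Y + (1.0 - lam) * Y[perm]              # interpolate (soft) labels
    return X_mix, Y_mix

def train_elm_readout(H, T, lam=1e-2):
    """Ridge-regression ELM output weights: beta = (H^T H + lam I)^-1 H^T T."""
    d = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(d), H.T @ T)

# Tiny usage example with random hidden-layer features and (possibly noisy) pseudo labels.
rng = np.random.default_rng(0)
H = rng.standard_normal((128, 64))                 # hidden-layer or deep features
T = np.eye(10)[rng.integers(0, 10, size=128)]      # one-hot pseudo labels
H_aug, T_aug = sli_augment(H, T, alpha=0.2, rng=rng)
beta = train_elm_readout(np.vstack([H, H_aug]), np.vstack([T, T_aug]))
pred = (H @ beta).argmax(axis=1)                   # class predictions from the ELM readout

Interpolating both features and labels yields soft targets, which is why the abstract relates this augmentation to label smoothing; the interpolated pairs dilute the influence of any single mislabeled sample on the output weights.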