Pseudo-Labeling and Confirmation Bias in Deep Semi-Supervised Learning


Eric Arazo, Diego Ortego, Paul Albert, Noel O’Connor, Kevin McGuinness

Publication Type: 
Refereed Original Article
Semi-supervised learning, i.e. jointly learning from labeled and unlabeled samples, is an active research topic due to its key role in relaxing human annotation constraints. In the context of image classification, recent advances in learning from unlabeled samples focus mainly on consistency regularization methods, which encourage invariant predictions for different perturbations of unlabeled samples. We, conversely, propose to learn from unlabeled data by generating soft pseudo-labels from the network predictions. We show that naive pseudo-labeling overfits to incorrect pseudo-labels due to the so-called confirmation bias, and demonstrate that label noise and mixup augmentation are effective regularization techniques for reducing it. The proposed approach achieves state-of-the-art results on CIFAR-10/100 and Mini-ImageNet despite being much simpler than other state-of-the-art methods. These results demonstrate that pseudo-labeling can outperform consistency regularization methods, contrary to what was assumed in previous work. Source code is available at
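The two ingredients named in the abstract can be illustrated with a minimal NumPy sketch: soft pseudo-labels are simply the network's softmax outputs on unlabeled samples, and mixup forms convex combinations of sample pairs and their (soft) labels with a Beta-distributed mixing coefficient. All function names and the toy data below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def soft_pseudo_labels(logits):
    """Soft pseudo-labels: the network's class-probability predictions
    on unlabeled samples (rows sum to 1)."""
    return softmax(logits)

def mixup(x, y, alpha=1.0, rng=None):
    """Mixup regularization: mix each sample (and its soft label) with a
    randomly paired sample, using lambda ~ Beta(alpha, alpha)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y + (1 - lam) * y[perm]
    return x_mix, y_mix

# Toy example: logits for 4 unlabeled samples over 3 classes,
# and stand-in "images" as flat feature vectors.
logits = np.array([[2.0, 0.5, 0.1],
                   [0.1, 3.0, 0.2],
                   [1.0, 1.0, 1.0],
                   [0.2, 0.1, 4.0]])
y_soft = soft_pseudo_labels(logits)
x = np.arange(12, dtype=float).reshape(4, 3)
x_mix, y_mix = mixup(x, y_soft, alpha=4.0)
```

Because mixup mixes the soft labels as well as the inputs, the mixed targets remain valid probability distributions, which is what makes the combination with soft pseudo-labels straightforward.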
Research Group: 
Dublin City University (DCU)