SelfMatch: Combining Contrastive Self-Supervision and Consistency for Semi-Supervised Learning
Byoungjip Kim, Jinho Choo, Yeong-Dae Kwon, Seongho Joe, Seungjai Min, Youngjune Gwon

Abstract
This paper introduces SelfMatch, a semi-supervised learning method that combines the power of contrastive self-supervised learning and consistency regularization. SelfMatch consists of two stages: (1) self-supervised pre-training based on contrastive learning and (2) semi-supervised fine-tuning based on augmentation consistency regularization. We empirically demonstrate that SelfMatch achieves state-of-the-art results on standard benchmark datasets such as CIFAR-10 and SVHN. For example, on CIFAR-10 with 40 labeled examples, SelfMatch achieves 93.19% accuracy, outperforming strong previous methods such as MixMatch (52.46%), UDA (70.95%), ReMixMatch (80.9%), and FixMatch (86.19%). We note that SelfMatch can close the gap between supervised learning (95.87%) and semi-supervised learning (93.19%) using only a few labels per class.
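To make the two-stage recipe concrete, below is a minimal PyTorch sketch of the two training objectives, assuming a SimCLR-style NT-Xent loss for the contrastive pre-training stage and a FixMatch-style pseudo-labeling consistency loss for the fine-tuning stage. All names and hyperparameters here (`ntxent_loss`, `fixmatch_consistency_loss`, the temperature and confidence threshold) are illustrative assumptions, not taken from the paper's implementation.

```python
# Sketch of the two SelfMatch-style training objectives (illustrative only).
import torch
import torch.nn.functional as F

def ntxent_loss(z1, z2, temperature=0.5):
    """Stage 1 (assumed SimCLR-style): contrastive loss over two augmented views.

    z1, z2: (B, D) embeddings of the same batch under two different augmentations.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, D), unit-norm
    sim = z @ z.t() / temperature                        # pairwise cosine similarities
    n = z.size(0)
    sim.fill_diagonal_(float('-inf'))                    # exclude self-similarity
    # Positive pair of row i is the other view of the same image, i.e. (i + B) mod 2B.
    targets = torch.arange(n, device=z.device).roll(n // 2)
    return F.cross_entropy(sim, targets)

def fixmatch_consistency_loss(logits_weak, logits_strong, threshold=0.95):
    """Stage 2 (assumed FixMatch-style): pseudo-label the weakly augmented view,
    enforce consistency on the strongly augmented view, keep confident labels only."""
    probs = torch.softmax(logits_weak.detach(), dim=1)
    conf, pseudo = probs.max(dim=1)
    mask = (conf >= threshold).float()                   # confidence-thresholded mask
    loss = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (loss * mask).mean()

if __name__ == "__main__":
    B, D, C = 8, 128, 10
    z1, z2 = torch.randn(B, D), torch.randn(B, D)
    print("stage-1 contrastive loss:", ntxent_loss(z1, z2).item())
    lw, ls = torch.randn(B, C), torch.randn(B, C)
    print("stage-2 consistency loss:", fixmatch_consistency_loss(lw, ls).item())
```

In the full pipeline described by the abstract, Stage 1 would optimize only the contrastive term on unlabeled data; Stage 2 would then fine-tune the pre-trained encoder, typically combining a supervised cross-entropy term on the few labeled examples with the consistency term on unlabeled ones.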
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| semi-supervised-image-classification-on-cifar | SelfMatch | Percentage error: 4.06±0.08 |
| semi-supervised-image-classification-on-cifar-6 | SelfMatch | Percentage error: 4.87±0.26 |
| semi-supervised-image-classification-on-cifar-7 | SelfMatch | Percentage error: 6.81±1.08 |