HyperAI

EnAET: A Self-Trained framework for Semi-Supervised and Supervised Learning with Ensemble Transformations

Xiao Wang, Daisuke Kihara, Jiebo Luo, Guo-Jun Qi


Abstract

Deep neural networks have been successfully applied to many real-world applications. However, such successes rely heavily on large amounts of labeled data, which are expensive to obtain. Recently, many semi-supervised learning methods have been proposed and have achieved excellent performance. In this study, we propose a new EnAET framework that further improves existing semi-supervised methods with self-supervised information. To the best of our knowledge, all current semi-supervised methods improve performance through prediction consistency and confidence ideas. We are the first to explore the role of self-supervised representations in semi-supervised learning under a rich family of transformations. Consequently, our framework can integrate the self-supervised information as a regularization term to further improve all current semi-supervised methods. In the experiments, we use MixMatch, the current state-of-the-art method in semi-supervised learning, as a baseline to test the proposed EnAET framework. We adopt the same hyper-parameters across different datasets, which greatly improves the generalization ability of the EnAET framework. Experimental results on different datasets demonstrate that the proposed EnAET framework greatly improves the performance of current semi-supervised algorithms. Moreover, the framework also improves supervised learning by a large margin, including in the extremely challenging scenario of only 10 images per class. The code and experiment records are available at https://github.com/maple-research-lab/EnAET.
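The abstract's central idea — adding a self-supervised transformation-regression term as a regularizer on top of a MixMatch-style semi-supervised objective — can be sketched as follows. This is an illustrative NumPy outline, not the authors' implementation: the function names, tensor shapes, and loss weights (`lam`, `gammas`) are assumptions for the sketch.

```python
import numpy as np

def semi_supervised_loss(pred_labeled, y, pred_unlabeled, pred_unlabeled_aug):
    """Baseline semi-supervised objective (MixMatch-style sketch)."""
    # supervised cross-entropy on the labeled batch
    ce = -np.mean(np.log(pred_labeled[np.arange(len(y)), y] + 1e-12))
    # consistency: predictions on an unlabeled sample and its augmented
    # view should agree (L2 distance between the two prediction sets)
    consistency = np.mean((pred_unlabeled - pred_unlabeled_aug) ** 2)
    return ce, consistency

def aet_regularizer(theta_true, theta_pred):
    """Auto-Encoding Transformation (AET) term: a decoder regresses the
    parameters of the transformation applied to the input; the error is
    the self-supervised regularizer."""
    return np.mean((theta_true - theta_pred) ** 2)

def enaet_total_loss(ce, consistency, aet_terms, lam=1.0, gammas=None):
    """Combine the baseline loss with an ensemble of AET regularizers,
    one per transformation family (weights are illustrative)."""
    gammas = gammas if gammas is not None else [1.0] * len(aet_terms)
    return ce + lam * consistency + sum(g * t for g, t in zip(gammas, aet_terms))
```

Because the AET term only adds extra loss components, it can be bolted onto any existing semi-supervised objective without changing the classifier itself, which is why the abstract claims it can improve all current semi-supervised methods.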

Code Repositories

maple-research-lab/EnAET (official, PyTorch)
wang3702/EnAET (PyTorch)

Benchmarks

| Benchmark | Methodology | Metric |
| --- | --- | --- |
| image-classification-on-cifar-10 | EnAET | Percentage correct: 98.01 |
| image-classification-on-cifar-100 | EnAET | Percentage correct: 83.13 |
| image-classification-on-stl-10 | EnAET | Percentage correct: 95.48 |
| image-classification-on-svhn | EnAET | Percentage error: 2.22 |
| semi-supervised-image-classification-on-3 | EnAET | Percentage correct: 92.4 |
| semi-supervised-image-classification-on-cifar | EnAET | Percentage error: 4.18 |
| semi-supervised-image-classification-on-cifar-2 | EnAET (WRN-28-2) | Percentage error: 26.93±0.21 |
| semi-supervised-image-classification-on-cifar-2 | EnAET (WRN-28-2-Large) | Percentage error: 22.92 |
| semi-supervised-image-classification-on-cifar-3 | EnAET | Percentage correct: 41.27 |
| semi-supervised-image-classification-on-cifar-4 | EnAET | Percentage correct: 68.17 |
| semi-supervised-image-classification-on-stl | EnAET | Accuracy: 95.48 |
| semi-supervised-image-classification-on-stl-1 | EnAET | Accuracy: 91.96 |
| semi-supervised-image-classification-on-svhn | EnAET | Accuracy: 97.58 |
| semi-supervised-image-classification-on-svhn-1 | EnAET | Accuracy: 96.79 |
