Masking meets Supervision: A Strong Learning Alliance

Byeongho Heo, Taekyung Kim, Sangdoo Yun, Dongyoon Han

Abstract

Pre-training with randomly masked inputs has emerged as a new trend in self-supervised learning. However, supervised learning still struggles to adopt masking augmentations, primarily because of unstable training. In this paper, we propose a novel way to incorporate masking augmentations, dubbed Masked Sub-branch (MaskSub). MaskSub consists of a main-branch and a sub-branch, the latter being a part of the former. During training, the main-branch follows conventional recipes, while the sub-branch receives intensive masking augmentations. MaskSub mitigates the adverse effects of masking through a relaxed loss function similar to a self-distillation loss. Our analysis shows that MaskSub improves performance, with the training loss converging faster than in standard training, which suggests that our method stabilizes the training process. We further validate MaskSub across diverse training scenarios and models, including DeiT-III training, MAE finetuning, CLIP finetuning, BERT training, and hierarchical architectures (ResNet and Swin Transformer). Our results show that MaskSub consistently achieves impressive performance gains in all cases. MaskSub provides a practical and effective solution for introducing additional regularization under various training recipes. Code available at https://github.com/naver-ai/augsub
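
The two-branch idea can be sketched in a few lines of PyTorch. The sketch below is illustrative, not the official implementation: the `random_patch_mask` helper, the 0.5 mask ratio, the patch size, and the KL-divergence form of the relaxed self-distillation loss are all assumptions made for clarity; consult the naver-ai/augsub repository for the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def random_patch_mask(images, mask_ratio, patch=16):
    """Zero out a random subset of non-overlapping patches (illustrative
    masking; assumes height and width are divisible by `patch`)."""
    b, c, h, w = images.shape
    keep = torch.rand(b, 1, h // patch, w // patch, device=images.device) >= mask_ratio
    mask = keep.float().repeat_interleave(patch, 2).repeat_interleave(patch, 3)
    return images * mask

def masksub_step(model, images, labels, mask_ratio=0.5):
    """One MaskSub-style training step (hedged sketch, not the official code)."""
    # Main branch: conventional supervised training on the unmasked input.
    logits_main = model(images)
    loss_main = F.cross_entropy(logits_main, labels)

    # Sub-branch: the same model applied to an intensively masked view.
    logits_sub = model(random_patch_mask(images, mask_ratio))

    # Relaxed loss: self-distillation toward the main branch's prediction
    # (stop-gradient), instead of forcing hard labels onto the masked view.
    with torch.no_grad():
        target = F.softmax(logits_main, dim=-1)
    loss_sub = F.kl_div(F.log_softmax(logits_sub, dim=-1), target,
                        reduction="batchmean")

    return loss_main + loss_sub
```

The stop-gradient on the main-branch prediction is the key design choice in this sketch: the heavily masked sub-branch is pulled toward the full-input prediction rather than the hard label, which is what relaxes the loss and keeps training stable.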

Code Repositories

naver-ai/augsub (official, PyTorch)

Benchmarks

| Benchmark | Methodology | Params | Top-1 Accuracy |
|---|---|---|---|
| Image Classification on ImageNet | ViT-B @224 (DeiT-III + AugSub) | 86.6M | 84.2% |
| Image Classification on ImageNet | ViT-L @224 (DeiT-III + AugSub) | 304M | 85.3% |
| Image Classification on ImageNet | ViT-H @224 (DeiT-III + AugSub) | 632M | 85.7% |
| Self-Supervised Image Classification on ImageNet (finetuned) | MAE + AugSub finetune (ViT-B/16) | 87M | 83.9% |
| Self-Supervised Image Classification on ImageNet (finetuned) | MAE + AugSub finetune (ViT-L/16) | 304M | 86.1% |
| Self-Supervised Image Classification on ImageNet (finetuned) | MAE + AugSub finetune (ViT-H/14) | 632M | 87.2% |
