Exploring Target Representations for Masked Autoencoders

Xingbin Liu, Jinghao Zhou, Tao Kong, Xianming Lin, Rongrong Ji

Abstract

Masked autoencoders have become popular training paradigms for self-supervised visual representation learning. These models randomly mask a portion of the input and reconstruct the masked portion according to assigned target representations. In this paper, we first show that a careful choice of the target representation is unnecessary for learning good representations, since different targets tend to yield similarly behaved models. Driven by this observation, we propose a multi-stage masked distillation pipeline and use a randomly initialized model as the teacher, enabling us to effectively train high-capacity models without any effort to carefully design target representations. Interestingly, we further explore using teachers of larger capacity, obtaining distilled students with remarkable transfer ability. On different tasks of classification, transfer learning, object detection, and semantic segmentation, the proposed method to perform masked knowledge distillation with bootstrapped teachers (dBOT) outperforms previous self-supervised methods by nontrivial margins. We hope our findings, as well as the proposed method, could motivate people to rethink the roles of target representations in pre-training masked autoencoders. The code and pre-trained models are publicly available at https://github.com/liuxingbin/dbot.
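
To make the pipeline concrete, below is a minimal, hypothetical PyTorch sketch of multi-stage masked distillation with a bootstrapped teacher. It is not the official dBOT implementation: `ToyEncoder`, `distill_one_stage`, and `dbot_like_pretrain` are illustrative names, the encoder is a toy stand-in for a ViT, and the masking scheme, loss, and stage schedule are simplified assumptions.

```python
# Minimal sketch of multi-stage masked knowledge distillation with bootstrapped teachers.
# NOT the official dBOT code; architecture, masking, loss, and schedule are simplified.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyEncoder(nn.Module):
    """Toy stand-in for a ViT backbone: embeds patch tokens, applies a Transformer encoder."""
    def __init__(self, num_patches=196, patch_dim=768, dim=256, depth=2):
        super().__init__()
        self.embed = nn.Linear(patch_dim, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))

    def forward(self, patches, mask):
        # patches: (B, N, patch_dim); mask: (B, N) bool, True = masked position
        x = self.embed(patches)
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
        return self.blocks(x)  # (B, N, dim) token features

def distill_one_stage(student, teacher, loader, epochs=1, mask_ratio=0.75, lr=1e-4):
    """One stage: the frozen teacher sees the full input; the student sees a masked
    view and regresses the teacher's features at the masked positions."""
    teacher.eval()
    opt = torch.optim.AdamW(student.parameters(), lr=lr)
    for _ in range(epochs):
        for patches in loader:                      # patches: (B, N, patch_dim)
            B, N, _ = patches.shape
            mask = torch.rand(B, N) < mask_ratio    # random patch mask
            with torch.no_grad():
                target = teacher(patches, torch.zeros_like(mask))  # unmasked targets
            pred = student(patches, mask)
            loss = F.smooth_l1_loss(pred[mask], target[mask])
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student

def dbot_like_pretrain(loader, num_stages=3):
    """Bootstrapped pipeline: stage 0 distills from a randomly initialized teacher;
    each later stage re-initializes the student and distills from the previous student."""
    teacher = ToyEncoder()                  # random init, never trained directly
    for _ in range(num_stages):
        student = ToyEncoder()              # fresh student each stage
        student = distill_one_stage(student, teacher, loader)
        teacher = copy.deepcopy(student)    # bootstrap: student becomes next teacher
        for p in teacher.parameters():
            p.requires_grad_(False)
    return teacher

# Toy usage with random "patch" features standing in for image patches:
fake_loader = [torch.randn(4, 196, 768) for _ in range(2)]
encoder = dbot_like_pretrain(fake_loader, num_stages=2)
```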

Code Repositories

liuxingbin/dbot (official PyTorch implementation)

Benchmarks

| Benchmark | Methodology | Metrics |
|---|---|---|
| Image Classification on ImageNet | dBOT ViT-B (CLIP as teacher) | Top-1 Accuracy: 85.7% |
| Image Classification on ImageNet | dBOT ViT-H (CLIP as teacher) | Top-1 Accuracy: 88.2% |
| Image Classification on ImageNet | dBOT ViT-L (CLIP as teacher) | Top-1 Accuracy: 87.8% |
| Instance Segmentation on COCO | dBOT ViT-B (CLIP) | mask AP: 46.2 |
| Instance Segmentation on COCO | dBOT ViT-L (CLIP) | mask AP: 48.8 |
| Instance Segmentation on COCO | dBOT ViT-L | mask AP: 48.3 |
| Instance Segmentation on COCO | dBOT ViT-B | mask AP: 46.3 |
| Object Detection on COCO | dBOT ViT-B (CLIP) | box mAP: 53.6 |
| Object Detection on COCO | dBOT ViT-L (CLIP) | box mAP: 56.8 |
| Object Detection on COCO | dBOT ViT-B | box mAP: 53.5 |
| Object Detection on COCO | dBOT ViT-L | box mAP: 56.1 |
| Self-Supervised Image Classification on ImageNet | dBOT (ViT-H/14) | Params: 632M; Top-1 Accuracy: 88.0% |
| Semantic Segmentation on ADE20K | dBOT ViT-B | Validation mIoU: 50.8 |
| Semantic Segmentation on ADE20K | dBOT ViT-L (CLIP) | Validation mIoU: 56.2 |
| Semantic Segmentation on ADE20K | dBOT ViT-L | Validation mIoU: 55.2 |
| Semantic Segmentation on ADE20K | dBOT ViT-B (CLIP) | Validation mIoU: 52.9 |
