Contrastive Tuning: A Little Help to Make Masked Autoencoders Forget

Johannes Lehner, Benedikt Alkin, Andreas Fürst, Elisabeth Rumetshofer, Lukas Miklautz, Sepp Hochreiter


Abstract

Masked Image Modeling (MIM) methods, like Masked Autoencoders (MAE), efficiently learn a rich representation of the input. However, for adapting to downstream tasks, they require a sufficient amount of labeled data since their rich features encode not only objects but also less relevant image background. In contrast, Instance Discrimination (ID) methods focus on objects. In this work, we study how to combine the efficiency and scalability of MIM with the ability of ID to perform downstream classification in the absence of large amounts of labeled data. To this end, we introduce Masked Autoencoder Contrastive Tuning (MAE-CT), a sequential approach that utilizes the implicit clustering of the Nearest Neighbor Contrastive Learning (NNCLR) objective to induce abstraction in the topmost layers of a pre-trained MAE. MAE-CT tunes the rich features such that they form semantic clusters of objects without using any labels. Notably, MAE-CT does not rely on hand-crafted augmentations and frequently achieves its best performance while using only minimal augmentations (crop & flip). Further, MAE-CT is compute efficient, as it requires at most 10% overhead compared to MAE re-training. Applied to large and huge Vision Transformer (ViT) models, MAE-CT surpasses previous self-supervised methods trained on ImageNet in linear probing, k-NN, and low-shot classification accuracy as well as in unsupervised clustering accuracy. With ViT-H/16, MAE-CT achieves a new state-of-the-art linear probing accuracy of 82.2%.
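The central idea, tuning the topmost layers of a pre-trained MAE with an NNCLR-style nearest-neighbor contrastive objective, can be illustrated with a short sketch. The code below is a minimal, hedged illustration rather than the paper's implementation: the `NNCLRHead` module, its dimensions, queue size, and temperature are illustrative assumptions, and the pre-trained MAE encoder is assumed to be any ViT that returns a [CLS]-style feature vector.

```python
# Minimal sketch of NNCLR-style contrastive tuning on top of pre-trained MAE
# features. Hyperparameters (proj_dim, queue_size, temperature) are
# illustrative, not the paper's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NNCLRHead(nn.Module):
    """Projector + predictor with a support queue for nearest-neighbor positives."""

    def __init__(self, embed_dim=768, proj_dim=256, hidden_dim=2048,
                 queue_size=65536, temperature=0.2):
        super().__init__()
        self.projector = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim), nn.BatchNorm1d(hidden_dim), nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, proj_dim),
        )
        self.predictor = nn.Sequential(
            nn.Linear(proj_dim, hidden_dim), nn.BatchNorm1d(hidden_dim), nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, proj_dim),
        )
        self.temperature = temperature
        # FIFO queue of past projections that serves as the support set.
        self.register_buffer("queue", F.normalize(torch.randn(queue_size, proj_dim), dim=1))
        self.register_buffer("queue_ptr", torch.zeros(1, dtype=torch.long))

    @torch.no_grad()
    def _nearest_neighbor(self, z):
        # Replace each embedding by its most similar entry in the support queue.
        sim = z @ self.queue.t()                       # (B, Q) cosine similarities
        return self.queue[sim.argmax(dim=1)]           # (B, proj_dim)

    @torch.no_grad()
    def _enqueue(self, z):
        ptr = int(self.queue_ptr)
        idx = torch.arange(ptr, ptr + z.shape[0], device=z.device) % self.queue.shape[0]
        self.queue[idx] = z.detach()
        self.queue_ptr[0] = (ptr + z.shape[0]) % self.queue.shape[0]

    def loss(self, feat_a, feat_b):
        """Symmetric InfoNCE between nearest neighbors of one view and predictions of the other."""
        z_a = F.normalize(self.projector(feat_a), dim=1)
        z_b = F.normalize(self.projector(feat_b), dim=1)
        p_a = F.normalize(self.predictor(z_a), dim=1)
        p_b = F.normalize(self.predictor(z_b), dim=1)
        nn_a = self._nearest_neighbor(z_a)             # stop-gradient positives
        nn_b = self._nearest_neighbor(z_b)
        self._enqueue(z_a)

        def info_nce(pos, pred):
            logits = pos @ pred.t() / self.temperature  # (B, B), diagonal = positive pairs
            labels = torch.arange(pred.shape[0], device=pred.device)
            return F.cross_entropy(logits, labels)

        return 0.5 * (info_nce(nn_a, p_b) + info_nce(nn_b, p_a))


# Toy check with random features standing in for MAE [CLS] embeddings of two crops.
head = NNCLRHead(embed_dim=768, queue_size=4096)
feat_a, feat_b = torch.randn(8, 768), torch.randn(8, 768)
print(head.loss(feat_a, feat_b))   # scalar contrastive-tuning loss
```

In the spirit of MAE-CT, only the topmost encoder blocks and a head like this would receive gradients during tuning, while the two views of each image come from minimal augmentations such as crop & flip.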

Code Repositories

ml-jku/mae-ct (official, PyTorch)

Benchmarks

Benchmark | Methodology | Metrics
image-clustering-on-imagenet | MAE-CT (ViT-H/16, best) | Accuracy: 58.0, NMI: 81.8
image-clustering-on-imagenet | MAE-CT (ViT-H/16, mean) | Accuracy: 57.1, NMI: 81.7
image-clustering-on-imagenet-dog-15 | MAE-CT (best, ViT-H/16 backbone, image size 224) | Accuracy: 0.943, ARI: 0.879, NMI: 0.904
image-clustering-on-imagenet-dog-15 | MAE-CT (mean, ViT-H/16 backbone, image size 224) | Accuracy: 0.874, ARI: 0.821, NMI: 0.882
self-supervised-image-classification-on | MAE-CT (ViT-H/16) | Params: 632M, Top-1 Accuracy: 82.2%
self-supervised-image-classification-on | MAE-CT (ViT-L/16) | Params: 307M, Top-1 Accuracy: 81.5%
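For reference, the clustering metrics reported above (Accuracy, NMI, ARI) are typically computed as in the hedged sketch below; this is the standard evaluation recipe, not necessarily the paper's exact protocol. The `clustering_accuracy` helper is an illustrative assumption, while NMI and ARI come from scikit-learn.

```python
# Sketch of standard clustering evaluation: NMI and ARI via scikit-learn,
# clustering accuracy via Hungarian matching of predicted clusters to classes.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score


def clustering_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Best accuracy over all one-to-one assignments of clusters to classes."""
    n = int(max(y_true.max(), y_pred.max())) + 1
    cost = np.zeros((n, n), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1                               # co-occurrence counts
    row, col = linear_sum_assignment(cost, maximize=True)
    return cost[row, col].sum() / y_true.size


# Toy example with hypothetical labels: clusters are permuted but consistent.
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 0, 0, 2, 2])
print(clustering_accuracy(y_true, y_pred))            # 1.0
print(normalized_mutual_info_score(y_true, y_pred))   # 1.0
print(adjusted_rand_score(y_true, y_pred))            # 1.0
```

Because predicted clusters are matched one-to-one to ground-truth classes before scoring, a consistent but permuted cluster assignment still yields an accuracy of 1.0.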
