
Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models

Andy Zhou Jindong Wang Yu-Xiong Wang Haohan Wang

Abstract

We propose a conceptually simple and lightweight framework for improving the robustness of vision models through the combination of knowledge distillation and data augmentation. We address the conjecture that larger models do not make for better teachers by showing strong gains in out-of-distribution robustness when distilling from pretrained foundation models. Following this finding, we propose Discrete Adversarial Distillation (DAD), which leverages a robust teacher to generate adversarial examples and a VQGAN to discretize them, creating more informative samples than standard data augmentation techniques. We provide a theoretical framework for the use of a robust teacher in the knowledge distillation with data augmentation setting and demonstrate strong gains in out-of-distribution robustness and clean accuracy across different student architectures. Notably, our method adds minor computational overhead compared to similar techniques and can be easily combined with other data augmentations for further improvements.
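
The abstract describes the DAD training loop only at a high level; the sketch below illustrates one plausible reading of it in PyTorch, the framework of the official repository. The PGD attack settings, the `encode()`/`decode()` VQGAN interface, and the cross-entropy-plus-KL loss mixing are illustrative assumptions rather than the paper's exact formulation; `teacher`, `student`, and `vqgan` are placeholder modules, with the teacher assumed to return class logits.

```python
# Minimal sketch of a Discrete Adversarial Distillation (DAD) training step,
# under the assumptions stated above. Not the official implementation.
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=4 / 255, alpha=1 / 255, steps=5):
    """Maximize the model's loss on x with a small PGD attack (hypothetical settings)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv


def dad_step(student, teacher, vqgan, x, y, temperature=2.0, kd_weight=1.0):
    """One DAD-style step: attack the robust teacher, discretize, then distill."""
    # 1) Craft adversarial examples against the robust (frozen) teacher.
    x_adv = pgd_attack(teacher, x, y)

    # 2) Discretize them with a VQGAN; encode()/decode() stand in for whatever
    #    interface the actual VQGAN checkpoint exposes.
    with torch.no_grad():
        x_disc = vqgan.decode(vqgan.encode(x_adv))
        teacher_logits = teacher(x_disc)

    # 3) Loss: cross-entropy on the clean batch plus a temperature-scaled KL term
    #    matching the teacher on the discretized adversarial view.
    ce = F.cross_entropy(student(x), y)
    kd = F.kl_div(
        F.log_softmax(student(x_disc) / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return ce + kd_weight * kd
```

In the paper's setting the teacher is a pretrained vision-language foundation model, and the discretized adversarial images serve as the data augmentation in an otherwise standard distillation pipeline.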

Code Repositories

lapisrocks/DiscreteAdversarialDistillation (Official implementation, PyTorch)

Benchmarks

Benchmark | Methodology | Metric
Domain Generalization on ImageNet-A | Discrete Adversarial Distillation (ResNet-50) | Top-1 Accuracy: 7.7%
Domain Generalization on ImageNet-A | Discrete Adversarial Distillation (ViT-B, 224) | Top-1 Accuracy: 31.8%
Domain Generalization on ImageNet-R | Discrete Adversarial Distillation (ViT-B, 224) | Top-1 Error Rate: 34.9
Domain Generalization on ImageNet-Sketch | Discrete Adversarial Distillation (ViT-B, 224) | Top-1 Accuracy: 46.1%
Image Classification on ImageNet | Discrete Adversarial Distillation (ViT-B, 224) | Top-1 Accuracy: 81.9%
Image Classification on ImageNet-V2 | Discrete Adversarial Distillation (ViT-B, 224) | Top-1 Accuracy: 71.7%
