PromptKD: Unsupervised Prompt Distillation for Vision-Language Models

Zheng Li, Xiang Li, Xinyi Fu, Xin Zhang, Weiqiang Wang, Shuo Chen, Jian Yang

Abstract

Prompt learning has emerged as a valuable technique for enhancing vision-language models (VLMs) such as CLIP on downstream tasks in specific domains. Existing work mainly focuses on designing various learning forms of prompts, neglecting the potential of prompts as effective distillers for learning from larger teacher models. In this paper, we introduce an unsupervised domain prompt distillation framework, which transfers the knowledge of a larger teacher model to a lightweight target model through prompt-driven imitation using unlabeled domain images. Specifically, our framework consists of two distinct stages. In the initial stage, we pre-train a large CLIP teacher model using domain (few-shot) labels. After pre-training, we exploit the decoupled-modality property of CLIP by computing and storing the text features as class vectors only once through the teacher text encoder. In the subsequent stage, the stored class vectors are shared between the teacher and student image encoders for computing the predicted logits. We then align the logits of the teacher and student models via KL divergence, encouraging the student image encoder to produce probability distributions similar to the teacher's through the learnable prompts. The proposed prompt distillation process eliminates the reliance on labeled data, enabling the algorithm to leverage a vast amount of unlabeled images within the domain. Finally, the well-trained student image encoder and the pre-stored text features (class vectors) are utilized for inference. To the best of our knowledge, we are the first to (1) perform unsupervised domain-specific prompt-driven knowledge distillation for CLIP, and (2) establish a practical pre-storing mechanism of text features as shared class vectors between teacher and student. Extensive experiments on 11 datasets demonstrate the effectiveness of our method.
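The second-stage objective can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example of the KL-based prompt distillation step described in the abstract: the teacher's text features are assumed to be pre-computed once as `class_vectors`, both image encoders score unlabeled images against these shared vectors, and only the student (with its learnable prompts) receives gradients. Function and variable names are illustrative; the official implementation is in the zhengli97/promptkd repository.

```python
# Sketch of one unsupervised prompt-distillation step (hypothetical names).
import torch
import torch.nn.functional as F

def distill_step(student_encoder, teacher_encoder, images, class_vectors, tau=1.0):
    """Compute the distillation loss on a batch of unlabeled domain images.

    class_vectors: (num_classes, dim) text features computed once by the
    teacher text encoder and shared by both image encoders.
    """
    with torch.no_grad():
        # Frozen teacher image encoder produces reference logits.
        t_feat = F.normalize(teacher_encoder(images), dim=-1)
        t_logits = t_feat @ class_vectors.t() / tau

    # Student image encoder (containing the learnable prompts) is trainable.
    s_feat = F.normalize(student_encoder(images), dim=-1)
    s_logits = s_feat @ class_vectors.t() / tau

    # KL divergence aligns the student's class distribution with the teacher's.
    loss = F.kl_div(F.log_softmax(s_logits, dim=-1),
                    F.softmax(t_logits, dim=-1),
                    reduction="batchmean") * (tau ** 2)
    return loss
```

At inference time, only the student image encoder and the pre-stored class vectors are needed, so the text encoder is never run again.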

Code Repositories

zhengli97/promptkd (official, PyTorch)

Benchmarks

Benchmark | Methodology | Metric
prompt-engineering-on-caltech-101 | PromptKD | Harmonic mean: 97.77
prompt-engineering-on-dtd | PromptKD | Harmonic mean: 77.94
prompt-engineering-on-eurosat | PromptKD | Harmonic mean: 89.14
prompt-engineering-on-fgvc-aircraft | PromptKD | Harmonic mean: 45.17
prompt-engineering-on-food-101 | PromptKD | Harmonic mean: 93.05
prompt-engineering-on-imagenet | PromptKD | Harmonic mean: 77.62
prompt-engineering-on-oxford-102-flower | PromptKD | Harmonic mean: 90.24
prompt-engineering-on-oxford-iiit-pet-dataset | PromptKD | Harmonic mean: 97.15
prompt-engineering-on-stanford-cars-1 | PromptKD | Harmonic mean: 83.13
prompt-engineering-on-sun397 | PromptKD | Harmonic mean: 82.60
prompt-engineering-on-ucf101 | PromptKD | Harmonic mean: 86.10
