Self-regulating Prompts: Foundational Model Adaptation without Forgetting

Muhammad Uzair Khattak, Syed Talal Wasim, Muzammal Naseer, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan

Abstract

Prompt learning has emerged as an efficient alternative for fine-tuning foundational models, such as CLIP, for various downstream tasks. Conventionally trained using the task-specific objective, i.e., cross-entropy loss, prompts tend to overfit downstream data distributions and find it challenging to capture task-agnostic general features from the frozen CLIP. This leads to the loss of the model's original generalization capability. To address this issue, our work introduces a self-regularization framework for prompting called PromptSRC (Prompting with Self-regulating Constraints). PromptSRC guides the prompts to optimize for both task-specific and task-agnostic general representations using a three-pronged approach: (a) regulating prompted representations via mutual agreement maximization with the frozen model, (b) regulating with a self-ensemble of prompts over the training trajectory to encode their complementary strengths, and (c) regulating with textual diversity to mitigate the sample diversity imbalance with the visual branch. To the best of our knowledge, this is the first regularization framework for prompt learning that avoids overfitting by jointly attending to pre-trained model features, the training trajectory during prompting, and textual diversity. PromptSRC explicitly steers the prompts to learn a representation space that maximizes performance on downstream tasks without compromising CLIP's generalization. We perform extensive experiments on 4 benchmarks, where PromptSRC overall performs favorably compared to existing methods. Our code and pre-trained models are publicly available at: https://github.com/muzairkhattak/PromptSRC.
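Two of the three regularizers described above can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: the function names, the `mu`/`sigma` choices, and the exact loss form are assumptions made here for clarity.

```python
import numpy as np

def mutual_agreement_loss(prompted_feats, frozen_feats):
    """Regularizer (a): an L1 consistency term pulling features produced
    with learned prompts toward those of the frozen CLIP encoder."""
    return np.abs(prompted_feats - frozen_feats).mean()

def gaussian_prompt_ensemble(prompt_snapshots, mu=None, sigma=3.0):
    """Regularizer (b): Gaussian-weighted ensemble of prompt snapshots
    saved along the training trajectory. `mu` and `sigma` are
    illustrative hyperparameters, not values from the paper."""
    snapshots = np.stack(prompt_snapshots)            # (epochs, prompt_dim)
    t = np.arange(len(prompt_snapshots), dtype=float)
    if mu is None:
        mu = t.mean()                                 # center weights mid-training
    w = np.exp(-0.5 * ((t - mu) / sigma) ** 2)
    w /= w.sum()                                      # convex combination of snapshots
    return (w[:, None] * snapshots).sum(axis=0)
```

In the full objective, these terms would be added to the standard cross-entropy loss; the text-side diversity regularizer (c), which averages text features over multiple prompt templates, is omitted here for brevity.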

Code Repositories

- muzairkhattak/promptsrc (official, PyTorch)
- asif-hanif/vafa (PyTorch)

Benchmarks

| Benchmark | Methodology | Metric | Value |
|---|---|---|---|
| prompt-engineering-on-caltech-101 | PromptSRC | Harmonic mean | 96.02 |
| prompt-engineering-on-dtd | PromptSRC | Harmonic mean | 71.75 |
| prompt-engineering-on-eurosat | PromptSRC | Harmonic mean | 82.32 |
| prompt-engineering-on-fgvc-aircraft | PromptSRC | Harmonic mean | 40.15 |
| prompt-engineering-on-food-101 | PromptSRC | Harmonic mean | 91.10 |
| prompt-engineering-on-imagenet | PromptSRC | Harmonic mean | 74.01 |
| prompt-engineering-on-imagenet-a | PromptSRC | Top-1 accuracy % | 50.90 |
| prompt-engineering-on-imagenet-r | PromptSRC | Top-1 accuracy % | 77.80 |
| prompt-engineering-on-imagenet-s | PromptSRC | Top-1 accuracy % | 49.55 |
| prompt-engineering-on-imagenet-v2 | PromptSRC | Top-1 accuracy % | 64.35 |
| prompt-engineering-on-oxford-102-flower | PromptSRC | Harmonic mean | 85.95 |
| prompt-engineering-on-oxford-iiit-pet-dataset | PromptSRC | Harmonic mean | 96.30 |
| prompt-engineering-on-stanford-cars-1 | PromptSRC | Harmonic mean | 76.58 |
| prompt-engineering-on-sun397 | PromptSRC | Harmonic mean | 80.52 |
| prompt-engineering-on-ucf101 | PromptSRC | Harmonic mean | 82.74 |
