Yan Luo; Yongkang Wong; Mohan S. Kankanhalli; Qi Zhao

Abstract
Benefiting from deep learning research and large-scale datasets, saliency prediction has achieved significant success in the past decade. However, it remains challenging to predict saliency maps for images in new domains that lack sufficient data for data-hungry models. To solve this problem, we propose a few-shot transfer learning paradigm for saliency prediction, which enables efficient transfer of knowledge learned from existing large-scale saliency datasets to a target domain with limited labeled examples. Specifically, a very small number of target domain examples are used as a reference while training a model on a source domain dataset, so that the training process converges to a local minimum that favors the target domain. The learned model is then further fine-tuned on the reference examples. The proposed framework is gradient-based and model-agnostic. We conduct comprehensive experiments and ablation studies on various source-domain and target-domain pairs. The results show that the proposed framework achieves significant performance improvements. The code is publicly available at \url{https://github.com/luoyan407/n-reference}.
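The two-stage idea described above (reference-steered source training, then fine-tuning on the reference) can be sketched on a toy problem. The code below is a minimal illustration, not the authors' exact algorithm: it uses hypothetical 1-D linear "models", and a simple convex combination of source and reference gradients stands in for the paper's gradient-based steering scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy domains (for illustration only):
# source follows y = 2x, target follows y = 3x.
X_src = rng.normal(size=(200, 1)); y_src = 2.0 * X_src[:, 0]
X_ref = rng.normal(size=(5, 1));   y_ref = 3.0 * X_ref[:, 0]  # n = 5 reference shots

w = np.zeros(1)  # model parameter

def grad(w, X, y):
    # Gradient of the mean squared error 0.5 * mean((Xw - y)^2)
    return X.T @ (X @ w - y) / len(y)

# Stage 1: train on the source domain, steered by the reference gradient
# (alpha blends the two gradients; the paper's scheme may differ).
alpha, lr = 0.3, 0.1
for _ in range(300):
    g = (1 - alpha) * grad(w, X_src, y_src) + alpha * grad(w, X_ref, y_ref)
    w -= lr * g

# Stage 2: fine-tune on the few reference examples alone.
for _ in range(300):
    w -= lr * grad(w, X_ref, y_ref)

print(round(float(w[0]), 2))  # ends near the target-domain solution (~3.0)
```

Because stage 1 already biases the solution toward the target domain, stage 2 needs only the handful of reference examples to finish adapting, which is the intuition behind the few-shot transfer paradigm.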
Benchmarks
| Benchmark | Methodology | AUC | CC | NSS |
|---|---|---|---|---|
| few-shot-transfer-learning-on-salicon | DINet+FT\|Ref | 0.8051 | 0.6121 | 1.5077 |
| few-shot-transfer-learning-on-salicon | ResNet+FT\|Ref | 0.7983 | 0.5817 | 1.4272 |
| few-shot-transfer-learning-on-salicon-1 | DINet+FT\|Ref | 0.8200 | 0.6468 | 1.6085 |
| few-shot-transfer-learning-on-salicon-2 | DINet+FT\|Ref | 0.8276 | 0.6605 | 1.6439 |
| few-shot-transfer-learning-on-salicon-3 | DINet+FT\|Ref | 0.8494 | 0.7442 | 1.8831 |