
Efficient parametrization of multi-domain deep neural networks

Sylvestre-Alvise Rebuffi; Hakan Bilen; Andrea Vedaldi

Abstract

A practical limitation of deep neural networks is their high degree of specialization to a single task and visual domain. Recently, inspired by the successes of transfer learning, several authors have proposed to learn instead universal, fixed feature extractors that, used as the first stage of any deep network, work well for several tasks and domains simultaneously. Nevertheless, such universal features are still somewhat inferior to specialized networks. To overcome this limitation, in this paper we propose to consider instead universal parametric families of neural networks, which still contain specialized problem-specific models, differing only by a small number of parameters. We study different designs for such parametrizations, including series and parallel residual adapters, joint adapter compression, and parameter allocations, and empirically identify the ones that yield the highest compression. We show that, in order to maximize performance, it is necessary to adapt both shallow and deep layers of a deep network, but the required changes are very small. We also show that these universal parametrizations are very effective for transfer learning, where they outperform traditional fine-tuning techniques.
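The central building block described here is the residual adapter: a small, domain-specific 1x1 convolution attached to each shared 3x3 convolution of a residual network, either in series (after it) or in parallel (alongside it), so that only the adapter weights change from one domain to the next. The following is a minimal PyTorch-style sketch of a parallel adapter under stated assumptions; the class name ParallelResidualAdapter, the zero initialization, and the parameter-freezing logic are illustrative choices, not the authors' exact implementation (see the linked repositories for that).

```python
import torch
import torch.nn as nn

class ParallelResidualAdapter(nn.Module):
    """Wraps a shared (frozen) 3x3 convolution with a small
    domain-specific 1x1 adapter applied in parallel. Illustrative sketch."""

    def __init__(self, base_conv: nn.Conv2d):
        super().__init__()
        self.base_conv = base_conv              # shared across domains, kept frozen
        for p in self.base_conv.parameters():
            p.requires_grad = False
        # 1x1 convolution: the only per-domain parameters in this block
        self.adapter = nn.Conv2d(
            base_conv.in_channels, base_conv.out_channels,
            kernel_size=1, stride=base_conv.stride, bias=False)
        # Zero init (an assumption): the adapted block starts out identical
        # to the shared convolution and is then trained on the new domain.
        nn.init.zeros_(self.adapter.weight)

    def forward(self, x):
        # Parallel form: adapter response is added to the shared conv's output.
        # A series adapter would instead be applied after the shared conv.
        return self.base_conv(x) + self.adapter(x)

# Hypothetical usage: wrap a 3x3 convolution of a ResNet block for a new domain
block_conv = nn.Conv2d(64, 64, kernel_size=3, padding=1, bias=False)
adapted = ParallelResidualAdapter(block_conv)
out = adapted(torch.randn(1, 64, 32, 32))       # output shape: (1, 64, 32, 32)
```

In this sketch only the 1x1 adapter weights (a small fraction of the 3x3 filter bank) are trainable per domain, which is what makes the family of domain-specific models cheap to store relative to full fine-tuning.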

Code Repositories

SLrepo/residual_adapter (PyTorch, mentioned in GitHub)
srebuffi/residual_adapters (PyTorch, mentioned in GitHub)

Benchmarks

Benchmark | Methodology | Metrics
continual-learning-on-visual-domain-decathlon | Series Res. adapt. | decathlon discipline (Score): 3159
continual-learning-on-visual-domain-decathlon | Parallel Res. adapt. | Avg. Accuracy: 78.07; decathlon discipline (Score): 3412
