Hongxu Yin, Arash Vahdat, Jose Alvarez, Arun Mallya, Jan Kautz, Pavlo Molchanov

Abstract
We introduce A-ViT, a method that adaptively adjusts the inference cost of vision transformers (ViT) for images of different complexity. A-ViT achieves this by automatically reducing the number of tokens in vision transformers that are processed in the network as inference proceeds. We reformulate Adaptive Computation Time (ACT) for this task, extending halting to discard redundant spatial tokens. The appealing architectural properties of vision transformers enable our adaptive token reduction mechanism to speed up inference without modifying the network architecture or inference hardware. We demonstrate that A-ViT requires no extra parameters or sub-network for halting, as we base the learning of adaptive halting on the original network parameters. We further introduce distributional prior regularization that stabilizes training compared to prior ACT approaches. On the image classification task (ImageNet1K), we show that our proposed A-ViT yields high efficacy in filtering informative spatial features and cutting down on the overall compute. The proposed method improves the throughput of DeiT-Tiny by 62% and DeiT-Small by 38% with only a 0.3% accuracy drop, outperforming prior art by a large margin. Project page at https://a-vit.github.io/
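The abstract describes ACT-style halting in which each spatial token accumulates a halting score across transformer blocks, derived from the existing token embeddings rather than an extra sub-network, and is dropped once that score saturates. The snippet below is a minimal sketch of that idea, assuming a PyTorch-style list of transformer blocks operating on tensors of shape (batch, tokens, dim); the function names, the sigmoid-of-first-channel halting rule, and the gamma/beta constants are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def halt_from_embedding(tokens, gamma=1.0, beta=-10.0):
    # Per-token halting probability taken from the first embedding channel,
    # so no additional halting parameters or sub-network are introduced.
    return torch.sigmoid(gamma * tokens[..., 0] + beta)

@torch.no_grad()
def adaptive_forward(blocks, tokens, eps=0.01):
    # blocks: iterable of transformer blocks mapping (B, N, D) -> (B, N, D)
    B, N, _ = tokens.shape
    cum_halt = torch.zeros(B, N, device=tokens.device)                 # accumulated halting score
    active = torch.ones(B, N, dtype=torch.bool, device=tokens.device)  # tokens still being processed

    for block in blocks:
        # Halted tokens keep their last value; active tokens are updated.
        updated = block(tokens)
        tokens = torch.where(active.unsqueeze(-1), updated, tokens)

        # Accumulate halting scores for tokens that are still active.
        cum_halt = cum_halt + halt_from_embedding(tokens) * active.float()

        # A token halts once its cumulative score reaches 1 - eps.
        active = active & (cum_halt < 1.0 - eps)
        if not active.any():
            break
    return tokens
```

In the actual method the halted tokens are removed from subsequent computation to realize the speedup; this sketch only masks their updates to keep the control flow visible.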
Benchmarks
| Benchmark | Method | GFLOPs | Top-1 Accuracy |
|---|---|---|---|
| Efficient ViTs on ImageNet-1K (DeiT-S) | A-ViT | 3.6 | 78.6 |