Focal Self-attention for Local-Global Interactions in Vision Transformers

Jianwei Yang, Chunyuan Li, Pengchuan Zhang, Xiyang Dai, Bin Xiao, Lu Yuan, Jianfeng Gao

Abstract

Recently, Vision Transformer and its variants have shown great promise on various computer vision tasks. The ability to capture short- and long-range visual dependencies through self-attention is arguably the main source of this success. But it also brings challenges due to the quadratic computational overhead, especially for high-resolution vision tasks (e.g., object detection). In this paper, we present focal self-attention, a new mechanism that incorporates both fine-grained local and coarse-grained global interactions. With this new mechanism, each token attends to its closest surrounding tokens at fine granularity and to tokens far away at coarse granularity, and thus can capture both short- and long-range visual dependencies efficiently and effectively. With focal self-attention, we propose a new variant of Vision Transformer models, called Focal Transformer, which achieves superior performance over the state-of-the-art vision Transformers on a range of public image classification and object detection benchmarks. In particular, our Focal Transformer models with a moderate size of 51.1M and a larger size of 89.8M achieve 83.5 and 83.8 Top-1 accuracy, respectively, on ImageNet classification at 224x224 resolution. Using Focal Transformers as backbones, we obtain consistent and substantial improvements over the current state-of-the-art Swin Transformers for 6 different object detection methods trained with standard 1x and 3x schedules. Our largest Focal Transformer yields 58.7/58.9 box mAPs and 50.9/51.3 mask mAPs on COCO mini-val/test-dev, and 55.4 mIoU on ADE20K semantic segmentation, creating new SoTA on three of the most challenging computer vision tasks.

Benchmarks

Benchmark | Methodology | Metrics
instance-segmentation-on-coco | Focal-L (HTC++, multi-scale) | AP50: 75.4, AP75: 56.5, APL: 64.2, APS: 35.6, mask AP: 51.3
instance-segmentation-on-coco-minival | Focal-L (HTC++, multi-scale) | mask AP: 50.9
object-detection-on-coco | Focal-L (DyHead, multi-scale) | box mAP: 58.9
object-detection-on-coco-minival | Focal-L (DyHead, multi-scale) | AP50: 77.2, APL: 73.4, box AP: 58.7
semantic-segmentation-on-ade20k | Focal-L (UperNet, ImageNet-22k pretrain) | Validation mIoU: 55.40
semantic-segmentation-on-ade20k-val | Focal-L (UperNet, ImageNet-22k pretrain) | mIoU: 55.4
