Who's Waldo? Linking People Across Text and Images
Claire Yuqing Cui; Apoorv Khandelwal; Yoav Artzi; Noah Snavely; Hadar Averbuch-Elor

Abstract
We present a task and benchmark dataset for person-centric visual grounding, the problem of linking between people named in a caption and people pictured in an image. In contrast to prior work in visual grounding, which is predominantly object-based, our new task masks out the names of people in captions in order to encourage methods trained on such image-caption pairs to focus on contextual cues (such as rich interactions between multiple people), rather than learning associations between names and appearances. To facilitate this task, we introduce a new dataset, Who's Waldo, mined automatically from image-caption data on Wikimedia Commons. We propose a Transformer-based method that outperforms several strong baselines on this task, and are releasing our data to the research community to spur work on contextual models that consider both vision and language.
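As a rough illustration of the masking step described above, the sketch below replaces detected person names in a caption with a generic placeholder, so that a model trained on the result cannot memorize name-appearance associations. The use of spaCy's NER and the `[NAME]` placeholder token are assumptions for illustration, not the dataset's published pipeline.

```python
import spacy

# Load a small English pipeline with NER (requires:
#   python -m spacy download en_core_web_sm).
nlp = spacy.load("en_core_web_sm")

def mask_names(caption: str, placeholder: str = "[NAME]") -> str:
    """Replace every detected person name in the caption with a placeholder."""
    doc = nlp(caption)
    pieces, last = [], 0
    for ent in doc.ents:
        if ent.label_ == "PERSON":
            pieces.append(caption[last:ent.start_char])
            pieces.append(placeholder)
            last = ent.end_char
    pieces.append(caption[last:])
    return "".join(pieces)

# e.g. "[NAME] greets [NAME] at the G7 summit."
print(mask_names("Angela Merkel greets Barack Obama at the G7 summit."))
```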
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| Person-Centric Visual Grounding on Who's Waldo | Who's Waldo | Accuracy: 63.5% |
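The accuracy figure above is the fraction of annotated name-to-person links that a method predicts correctly. Below is a minimal sketch of such a metric, assuming predictions and ground truth are mappings from masked-name indices to detected-box indices; the benchmark's official evaluation protocol may differ in its handling of ties and unannotated names.

```python
def linking_accuracy(predictions: dict, ground_truth: dict) -> float:
    """Fraction of annotated name-to-person links predicted correctly.

    Both dicts map a caption's masked-name index to the index of a
    detected person box; only names with ground-truth annotations are
    scored, and a missing prediction counts as incorrect.
    """
    if not ground_truth:
        return 0.0
    correct = sum(
        predictions.get(name_idx) == box_idx
        for name_idx, box_idx in ground_truth.items()
    )
    return correct / len(ground_truth)

# Two of three annotated links predicted correctly -> 0.667
print(linking_accuracy({0: 2, 1: 1}, {0: 2, 1: 1, 2: 3}))
```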