Connecting Vision and Language with Localized Narratives

Jordi Pont-Tuset, Jasper Uijlings, Soravit Changpinyo, Radu Soricut, Vittorio Ferrari

Abstract

We propose Localized Narratives, a new form of multimodal image annotations connecting vision and language. We ask annotators to describe an image with their voice while simultaneously hovering their mouse over the region they are describing. Since the voice and the mouse pointer are synchronized, we can localize every single word in the description. This dense visual grounding takes the form of a mouse trace segment per word and is unique to our data. We annotated 849k images with Localized Narratives: the whole COCO, Flickr30k, and ADE20K datasets, and 671k images of Open Images, all of which we make publicly available. We provide an extensive analysis of these annotations showing they are diverse, accurate, and efficient to produce. We also demonstrate their utility on the application of controlled image captioning.
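
Because every trace point carries a timestamp and every word carries its utterance interval, recovering the trace segment for a word reduces to a time-window lookup over the mouse trace. The sketch below illustrates this on the released JSON Lines annotations; the field names ('timed_caption', 'traces', etc.) follow the published annotation format as documented in the project repository, and the local file path is hypothetical.

```python
import json

def word_trace_segments(loc_narr):
    """Return (word, trace segment) pairs for one Localized Narrative.

    Assumed fields, per the released annotation format:
      - 'timed_caption': list of {'utterance', 'start_time', 'end_time'}
      - 'traces': list of mouse strokes, each a list of {'x', 'y', 't'},
        with 'x'/'y' normalized image coordinates and 't' in seconds.
    """
    # Flatten individual mouse strokes into one time-ordered point list.
    points = sorted((p for trace in loc_narr['traces'] for p in trace),
                    key=lambda p: p['t'])
    segments = []
    for word in loc_narr['timed_caption']:
        # A word's trace segment is every pointer position recorded while
        # that word was being spoken.
        seg = [(p['x'], p['y']) for p in points
               if word['start_time'] <= p['t'] <= word['end_time']]
        segments.append((word['utterance'], seg))
    return segments

# Hypothetical local copy of one of the released .jsonl annotation files.
with open('coco_val_localized_narratives.jsonl') as f:
    narrative = json.loads(next(f))
for utterance, seg in word_trace_segments(narrative)[:5]:
    print(utterance, f'({len(seg)} trace points)')
```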

Code Repositories

google/localized-narratives (official)
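
A minimal loading sketch using the repository's Python helper module. The DataLoader class and its download_annotations / load_annotations methods are taken from the repository README; the exact names, signatures, and split identifiers are assumptions to verify against the current code.

```python
import localized_narratives  # helper module shipped with google/localized-narratives

local_dir = 'localized_narratives_data'  # any local directory for the annotations

# Assumed interface, per the repository README (verify before relying on it).
data_loader = localized_narratives.DataLoader(local_dir)
data_loader.download_annotations('coco_val')  # fetches the .jsonl annotation files
for loc_narr in data_loader.load_annotations('coco_val', max_num_annotations=10):
    print(loc_narr.image_id, loc_narr.caption[:60])
```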

Benchmarks

Benchmark: image-captioning-on-localized-narratives
Methodology: RCNN + trace positions
Metrics: CIDEr: 106.5
