Jointly Discovering Visual Objects and Spoken Words from Raw Sensory Input

David Harwath, Adrià Recasens, Dídac Surís, Galen Chuang, Antonio Torralba, James Glass

Abstract

In this paper, we explore neural network models that learn to associate segments of spoken audio captions with the semantically relevant portions of natural images that they refer to. We demonstrate that these audio-visual associative localizations emerge from network-internal representations learned as a by-product of training to perform an image-audio retrieval task. Our models operate directly on the image pixels and speech waveform, and do not rely on any conventional supervision in the form of labels, segmentations, or alignments between the modalities during training. We perform analysis using the Places 205 and ADE20k datasets demonstrating that our models implicitly learn semantically-coupled object and word detectors.
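
The retrieval objective that drives these localizations scores an image-caption pair by pooling a "matchmap" of local similarities between the image branch's spatial feature map and the audio branch's temporal feature sequence. Below is a minimal NumPy sketch of that matchmap and the SISA/MISA/SIMA pooling variants described in the paper; the shapes, variable names, and random features are illustrative assumptions, since the real embeddings are produced by the model's image and audio CNNs.

```python
import numpy as np

def matchmap(image_feats, audio_feats):
    """image_feats: (H, W, D) spatial embeddings from the image branch.
    audio_feats: (T, D) temporal embeddings from the audio branch.
    Returns an (H, W, T) tensor of local dot-product similarities."""
    return np.einsum('hwd,td->hwt', image_feats, audio_feats)

def sisa(M):
    """SISA pooling: average the matchmap over all locations and time steps."""
    return M.mean()

def misa(M):
    """MISA pooling: max over image locations, then average over time."""
    return M.max(axis=(0, 1)).mean()

def sima(M):
    """SIMA pooling: max over time, then average over image locations."""
    return M.max(axis=2).mean()

# Toy example with random features standing in for network outputs.
rng = np.random.default_rng(0)
img = rng.normal(size=(14, 14, 1024))   # hypothetical conv feature map
aud = rng.normal(size=(128, 1024))      # hypothetical per-frame embeddings
M = matchmap(img, aud)
print(M.shape, sisa(M), misa(M), sima(M))
```

Thresholding a caption word's slice of the matchmap over the image grid is what yields the emergent object localizations the paper analyzes.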

Benchmarks

Benchmark                                  Methodology  Metrics
sound-prompted-semantic-segmentation-on    DAVENet      mAP: 16.8, mIoU: 18.1
speech-prompted-semantic-segmentation-on   DAVENet      mAP: 32.2, mIoU: 26.3
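
For reference, mIoU (mean intersection-over-union) averages the per-class overlap between predicted and ground-truth segmentation masks. The following is a minimal sketch of the metric over integer label maps; the function name and toy example are illustrative, not taken from the benchmark's evaluation code.

```python
import numpy as np

def miou(pred, target, num_classes):
    """Mean intersection-over-union across classes.
    pred, target: integer label maps of identical shape."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:          # class absent from both maps; skip it
            continue
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

# Toy 4x4 example with two classes.
pred = np.array([[0, 0, 1, 1]] * 4)
target = np.array([[0, 1, 1, 1]] * 4)
print(miou(pred, target, num_classes=2))  # 0.583...: mean of 0.5 and 0.667
```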
