Multimodal Clustering Networks for Self-supervised Learning from Unlabeled Videos


Abstract

Multimodal self-supervised learning is getting more and more attention as it allows not only to train large networks without human supervision but also to search and retrieve data across various modalities. In this context, this paper proposes a self-supervised training framework that learns a common multimodal embedding space that, in addition to sharing representations across different modalities, enforces a grouping of semantically similar instances. To this end, we extend the concept of instance-level contrastive learning with a multimodal clustering step in the training pipeline to capture semantic similarities across modalities. The resulting embedding space enables retrieval of samples across all modalities, even from unseen datasets and different domains. To evaluate our approach, we train our model on the HowTo100M dataset and evaluate its zero-shot retrieval capabilities in two challenging domains, namely text-to-video retrieval and temporal action localization, showing state-of-the-art results on four different datasets.
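The two ingredients named in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: an InfoNCE-style instance-level contrastive loss between paired video and text embeddings, plus a plain k-means step over the joint embeddings that supplies the semantic clusters. Function names, the temperature value, and the loss combination are all assumptions for illustration.

```python
import numpy as np

def contrastive_loss(video_emb, text_emb, temperature=0.07):
    # InfoNCE-style instance-level loss: the matching video/text pair in a
    # batch is the positive, every other pair is a negative.
    # (Temperature 0.07 is a common choice, not taken from the paper.)
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = v @ t.T / temperature
    # Row-wise log-softmax; positives sit on the diagonal.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

def kmeans_centroids(embeddings, k, iters=10, seed=0):
    # Plain k-means over embeddings pooled from all modalities; the
    # centroids stand in for the semantic clusters used during training.
    rng = np.random.default_rng(seed)
    centroids = embeddings[rng.choice(len(embeddings), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid, then recompute means.
        dists = ((embeddings[:, None] - centroids[None]) ** 2).sum(-1)
        assign = np.argmin(dists, axis=1)
        for j in range(k):
            members = embeddings[assign == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, assign
```

In an MCN-style pipeline, one would combine the contrastive term with a clustering term (e.g. distance of each embedding to its assigned centroid) in the batch loss and refresh the centroids periodically during training; the sketch above only shows the two components in isolation.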

Code Repositories

brian7685/Multimodal-Clustering-Network (official, PyTorch)

Benchmarks

Benchmark: long-video-retrieval-background-removed
Methodology: MCN
Metrics:
  Cap. Avg. R@1: 53.4
  Cap. Avg. R@5: 75.0
  Cap. Avg. R@10: 81.4

