End-to-End Learning of Visual Representations from Uncurated Instructional Videos
Antoine Miech, Jean-Baptiste Alayrac, Lucas Smaira, Ivan Laptev, Josef Sivic, Andrew Zisserman

Abstract
Annotating videos is cumbersome, expensive and not scalable. Yet, many strong video models still rely on manually annotated data. With the recent introduction of the HowTo100M dataset, narrated videos now offer the possibility of learning video representations without manual supervision. In this work we propose a new learning approach, MIL-NCE, capable of addressing misalignments inherent to narrated videos. With this approach we are able to learn strong video representations from scratch, without the need for any manual annotation. We evaluate our representations on a wide range of four downstream tasks over eight datasets: action recognition (HMDB-51, UCF-101, Kinetics-700), text-to-video retrieval (YouCook2, MSR-VTT), action localization (YouTube-8M Segments, CrossTask) and action segmentation (COIN). Our method outperforms all published self-supervised approaches for these tasks as well as several fully supervised baselines.
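The MIL-NCE objective mentioned in the abstract combines multiple-instance learning with noise-contrastive estimation: each video clip is matched against a *bag* of temporally close narration candidates rather than a single caption, which absorbs the misalignment between speech and visuals. Below is a minimal NumPy sketch of this idea; the embedding shapes and the `pos_mask` convention are illustrative assumptions, not the paper's released implementation.

```python
import numpy as np

def mil_nce_loss(video_emb, text_emb, pos_mask):
    """Sketch of a MIL-NCE-style loss.

    video_emb: (B, D) clip embeddings.
    text_emb:  (N, D) candidate narration embeddings.
    pos_mask:  (B, N) boolean; True marks each clip's bag of
               temporally close (positive) narrations.
    """
    sim = video_emb @ text_emb.T                 # (B, N) similarity logits
    sim = sim - sim.max(axis=1, keepdims=True)   # stabilize the softmax
    exp = np.exp(sim)
    pos = (exp * pos_mask).sum(axis=1)           # sum over the positive bag
    denom = exp.sum(axis=1)                      # positives + all negatives
    return float(-np.log(pos / denom).mean())
```

Summing exponentiated scores over the whole bag (rather than picking one caption) is what lets the loss reward a clip as long as *some* nearby narration matches it, which is the key difference from a standard NCE/InfoNCE contrastive loss.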
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| action-recognition-on-rareact | HT100M S3D | mWAP: 30.5 |
| action-segmentation-on-coin | CBT | Frame accuracy: 53.9 |
| action-segmentation-on-coin | MIL-NCE | Frame accuracy: 61.0 |
| long-video-retrieval-background-removed-on | MIL-NCE | Cap. Avg. R@1: 43.1, R@5: 68.6, R@10: 79.1 |
| zero-shot-video-retrieval-on-msr-vtt | MIL-NCE | text-to-video R@1: 9.9, R@5: 24.0, R@10: 32.4, Mean Rank: 29.5 |
| zero-shot-video-retrieval-on-youcook2 | MIL-NCE | text-to-video R@1: 15.1, R@5: 38.0, R@10: 51.2, Mean Rank: 10 |