Yue Yang; Artemis Panagopoulou; Qing Lyu; Li Zhang; Mark Yatskar; Chris Callison-Burch

Abstract
Understanding what sequence of steps is needed to complete a goal can help artificial intelligence systems reason about human activities. Past work in NLP has examined the task of goal-step inference for text. We introduce the visual analogue. We propose the Visual Goal-Step Inference (VGSI) task, where a model is given a textual goal and must choose which of four images represents a plausible step towards that goal. With a new dataset harvested from wikiHow consisting of 772,277 images representing human actions, we show that our task is challenging for state-of-the-art multimodal models. Moreover, the multimodal representation learned from our data can be effectively transferred to other datasets like HowTo100M, increasing VGSI accuracy by 15-20%. Our task will facilitate multimodal reasoning about procedural events.
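To make the task format concrete, the following is a minimal sketch of VGSI evaluation as 4-way multiple choice: each candidate step image is scored against the textual goal and the highest-scoring image is selected. The `goal_embedding` and `candidate_embeddings` inputs assume some multimodal encoder pair (e.g., a CLIP-style text and image encoder); the encoders and the cosine-similarity scoring are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def predict_step(goal_embedding: torch.Tensor,
                 candidate_embeddings: torch.Tensor) -> int:
    """Pick the candidate image whose embedding is most similar to the goal text.

    goal_embedding:       (d,)   embedding of the textual goal
    candidate_embeddings: (4, d) embeddings of the four candidate step images
    """
    sims = F.cosine_similarity(goal_embedding.unsqueeze(0),
                               candidate_embeddings, dim=-1)
    return int(sims.argmax().item())

def vgsi_accuracy(goal_embs: torch.Tensor,
                  cand_embs: torch.Tensor,
                  labels: torch.Tensor) -> float:
    """goal_embs: (N, d), cand_embs: (N, 4, d), labels: (N,) index of the true step."""
    preds = torch.tensor([predict_step(g, c) for g, c in zip(goal_embs, cand_embs)])
    return (preds == labels).float().mean().item()
```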
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| vgsi-on-wikihow-image | Triplet Network | Accuracy: 0.7494 |
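The methodology listed above is a triplet network. As a rough illustration of how such an objective could be trained on VGSI-style data, the sketch below treats the goal text as the anchor, the correct step image as the positive, and a step image from another goal as the negative. The encoder modules, batch layout, and margin value are assumptions for illustration, not the reported model's exact configuration.

```python
import torch
import torch.nn as nn

# Standard triplet margin loss; the margin of 1.0 is an illustrative default.
triplet_loss = nn.TripletMarginLoss(margin=1.0)

def training_step(text_encoder: nn.Module,
                  image_encoder: nn.Module,
                  goal_tokens: torch.Tensor,
                  pos_images: torch.Tensor,
                  neg_images: torch.Tensor) -> torch.Tensor:
    """Compute the triplet loss for one batch of (goal, positive step, negative step)."""
    anchor = text_encoder(goal_tokens)      # (B, d) goal-text embeddings
    positive = image_encoder(pos_images)    # (B, d) embeddings of correct step images
    negative = image_encoder(neg_images)    # (B, d) embeddings of distractor step images
    return triplet_loss(anchor, positive, negative)
```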