Analyzing Generalization of Vision and Language Navigation to Unseen Outdoor Areas
Raphael Schumann, Stefan Riezler

Abstract
Vision and language navigation (VLN) is a challenging visually-grounded language understanding task. Given a natural language navigation instruction, a visual agent interacts with a graph-based environment equipped with panorama images and tries to follow the described route. Most prior work has been conducted in indoor scenarios where best results were obtained for navigation on routes that are similar to the training routes, with sharp drops in performance when testing on unseen environments. We focus on VLN in outdoor scenarios and find that in contrast to indoor VLN, most of the gain in outdoor VLN on unseen data is due to features like junction type embedding or heading delta that are specific to the respective environment graph, while image information plays a very minor role in generalizing VLN to unseen outdoor areas. These findings show a bias to specifics of graph representations of urban environments, demanding that VLN tasks grow in scale and diversity of geographical environments.
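The abstract attributes most of the gain on unseen outdoor areas to two graph-specific features: a junction type embedding and a heading delta. As a minimal sketch of what such features could look like, the snippet below buckets a node's out-degree into a learned embedding and computes the signed angular difference between the agent's heading and each outgoing edge. The module name `GraphFeatures` and its interface are illustrative assumptions, not the paper's actual code.

```python
import torch
import torch.nn as nn

class GraphFeatures(nn.Module):
    """Illustrative sketch of junction type embedding and heading delta."""

    def __init__(self, max_degree: int = 4, embed_dim: int = 16):
        super().__init__()
        # Junction type ~ number of outgoing edges at the current node,
        # capped at max_degree and mapped to a learned embedding.
        self.junction_embedding = nn.Embedding(max_degree + 1, embed_dim)

    def forward(self, out_headings: list, agent_heading: float):
        # Junction type embedding: bucket the node's out-degree.
        degree = min(len(out_headings), self.junction_embedding.num_embeddings - 1)
        junction_vec = self.junction_embedding(torch.tensor(degree))

        # Heading delta: signed angular difference in degrees, wrapped to
        # [-180, 180), between the agent's heading and each outgoing edge.
        deltas = torch.tensor(
            [((h - agent_heading + 180.0) % 360.0) - 180.0 for h in out_headings]
        )
        return junction_vec, deltas
```

For example, at a three-way junction with edges at 0, 90, and 270 degrees and an agent heading of 85 degrees, `forward` returns heading deltas of -85, 5, and -175, so the edge with the smallest absolute delta (90 degrees) is the one "ahead" of the agent.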
Benchmarks
| Benchmark | Model | Task Completion (TC) |
|---|---|---|
| vision-and-language-navigation-on-map2seq | ORAR | 45.1 |
| vision-and-language-navigation-on-map2seq | ORAR + junction type + heading delta | 46.7 |
| vision-and-language-navigation-on-map2seq | RCONCAT | 14.7 |
| vision-and-language-navigation-on-map2seq | GA (Gated Attention) | 17.0 |
| vision-and-language-navigation-on-touchdown | ORAR + junction type + heading delta | 29.1 |
| vision-and-language-navigation-on-touchdown | ORAR | 24.2 |
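The metric in the table, Task Completion (TC), is typically computed in Touchdown-style evaluation as the percentage of episodes in which the agent stops at the goal node or a node directly adjacent to it. The sketch below assumes that convention; the `graph_neighbors` callable is a hypothetical interface for illustration.

```python
def task_completion(episodes, graph_neighbors) -> float:
    """Percentage of completed episodes (Touchdown-style TC, assumed).

    episodes: list of (stop_node, goal_node) pairs, one per evaluated route.
    graph_neighbors: callable mapping a node id to the set of adjacent node ids.
    """
    completed = sum(
        1
        for stop, goal in episodes
        # An episode counts as completed if the agent stops at the goal
        # node itself or at one of its immediate neighbors in the graph.
        if stop == goal or stop in graph_neighbors(goal)
    )
    return 100.0 * completed / max(len(episodes), 1)
```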