Learning to Stop: A Simple yet Effective Approach to Urban Vision-Language Navigation

Jiannan Xiang, Xin Eric Wang, William Yang Wang


Abstract

Vision-and-Language Navigation (VLN) is a natural language grounding task in which an agent learns to follow language instructions and navigate to specified destinations in real-world environments. A key challenge is to recognize and stop at the correct location, especially in complicated outdoor environments. Existing methods treat the STOP action the same as all other actions, which leads to undesirable behavior: the agent often fails to stop at the destination even when it is on the right path. We therefore propose Learning to Stop (L2Stop), a simple yet effective policy module that treats STOP separately from the other actions. Our approach achieves a new state of the art on the challenging urban VLN dataset Touchdown, outperforming the baseline by 6.89% (absolute improvement) on Success weighted by Edit Distance (SED).
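The core idea can be sketched in code: instead of one policy head scoring STOP alongside the movement actions, a separate binary head decides whether to stop, and the navigation head only chooses among the remaining actions. The following is a minimal illustrative sketch, not the paper's implementation; the class name, feature dimension, action set, and randomly initialized linear heads are all assumptions standing in for learned components.

```python
import numpy as np

# Hypothetical sketch of an L2Stop-style two-branch policy.
# A shared state feature feeds two heads: a binary "stop" head and a
# navigation head over the remaining movement actions.

rng = np.random.default_rng(0)

NAV_ACTIONS = ["FORWARD", "LEFT", "RIGHT"]  # assumed action set for illustration

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

class L2StopPolicy:
    def __init__(self, feat_dim):
        # Randomly initialized linear heads stand in for trained layers.
        self.w_stop = rng.normal(size=(feat_dim, 2))            # [continue, stop]
        self.w_nav = rng.normal(size=(feat_dim, len(NAV_ACTIONS)))

    def act(self, state_feat):
        # The stop head makes its decision first, independently of the
        # navigation head, so STOP never competes with movement logits.
        stop_probs = softmax(state_feat @ self.w_stop)
        if stop_probs[1] > 0.5:
            return "STOP"
        nav_probs = softmax(state_feat @ self.w_nav)
        return NAV_ACTIONS[int(nav_probs.argmax())]

policy = L2StopPolicy(feat_dim=8)
print(policy.act(rng.normal(size=8)))
```

Keeping STOP out of the shared softmax means the stop decision can be supervised and thresholded on its own, which matches the paper's motivation: the agent can be on the right path yet still mis-rank STOP against movement actions.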

Benchmarks

Benchmark: vision-and-language-navigation-on-touchdown

  Methodology: ARC            — Task Completion (TC): 14.13
  Methodology: ARC + L2STOP   — Task Completion (TC): 16.68
