First return, then explore

Adrien Ecoffet, Joost Huizinga, Joel Lehman, Kenneth O. Stanley, Jeff Clune

Abstract

The promise of reinforcement learning is to solve complex sequential decision problems autonomously by specifying a high-level reward function only. However, reinforcement learning algorithms struggle when, as is often the case, simple and intuitive rewards provide sparse and deceptive feedback. Avoiding these pitfalls requires thoroughly exploring the environment, but creating algorithms that can do so remains one of the central challenges of the field. We hypothesize that the main impediment to effective exploration originates from algorithms forgetting how to reach previously visited states ("detachment") and from failing to first return to a state before exploring from it ("derailment"). We introduce Go-Explore, a family of algorithms that addresses these two challenges directly through the simple principles of explicitly remembering promising states and first returning to such states before intentionally exploring. Go-Explore solves all heretofore unsolved Atari games and surpasses the state of the art on all hard-exploration games, with orders of magnitude improvements on the grand challenges Montezuma's Revenge and Pitfall. We also demonstrate the practical potential of Go-Explore on a sparse-reward pick-and-place robotics task. Additionally, we show that adding a goal-conditioned policy can further improve Go-Explore's exploration efficiency and enable it to handle stochasticity throughout training. The substantial performance gains from Go-Explore suggest that the simple principles of remembering states, returning to them, and exploring from them are a powerful and general approach to exploration, an insight that may prove critical to the creation of truly intelligent learning agents.
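
The remember/return/explore loop described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation (see the repositories below): it assumes a classic Gym-style API (`reset()` returns an observation, `step(a)` returns `(obs, reward, done, info)`), a deterministic environment so that replaying a stored action sequence reliably "returns" to a cell, and a hypothetical `cell_of` function standing in for the paper's downsampled state representation.

```python
import random


def cell_of(obs):
    """Hypothetical cell mapping: reduce an observation to a coarse,
    hashable representation. The paper downsamples Atari frames; here
    we simply hash the raw bytes of a NumPy observation as a stand-in."""
    return hash(obs.tobytes()) % 10_000


def go_explore(env, n_iterations=1_000, explore_steps=100):
    """Sketch of Go-Explore's core loop: remember promising states,
    first return to one, then explore from it."""
    obs = env.reset()
    # Archive: cell -> best known trajectory reaching it and its score.
    archive = {cell_of(obs): {"actions": [], "score": 0.0}}

    for _ in range(n_iterations):
        # 1. Select a promising cell (uniformly here; the paper weights
        #    selection toward under-visited cells).
        entry = archive[random.choice(list(archive))]

        # 2. "First return": replay the stored actions to reach the cell,
        #    which relies on the determinism assumption above.
        env.reset()
        score = 0.0
        for action in entry["actions"]:
            obs, reward, done, _ = env.step(action)
            score += reward

        # 3. "Then explore": take exploratory (random) actions from there.
        actions = list(entry["actions"])
        for _ in range(explore_steps):
            action = env.action_space.sample()
            obs, reward, done, _ = env.step(action)
            actions.append(action)
            score += reward

            # 4. Remember any cell that is new, or that was just reached
            #    with a higher score than before.
            c = cell_of(obs)
            if c not in archive or score > archive[c]["score"]:
                archive[c] = {"actions": list(actions), "score": score}
            if done:
                break

    return archive
```

In the paper's terms, the archive prevents detachment (promising states are never forgotten), while the replay-before-exploring step prevents derailment (exploration only begins once the state has actually been reached again).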

Code Repositories

qgallouedec/lge (PyTorch; mentioned in GitHub)
uber-research/go-explore (official; TensorFlow; mentioned in GitHub)

Benchmarks

| Benchmark | Methodology | Metric |
| --- | --- | --- |
| Atari 2600 Berzerk | Go-Explore | Score: 197376 |
| Atari 2600 Bowling | Go-Explore | Score: 260 |
| Atari 2600 Centipede | Go-Explore | Score: 1422628 |
| Atari 2600 Freeway | Go-Explore | Score: 34 |
| Atari 2600 Gravitar | Go-Explore | Score: 7588 |
| Atari 2600 Montezuma's Revenge | Go-Explore | Score: 43791 |
| Atari 2600 Pitfall | Go-Explore | Score: 6954 |
| Atari 2600 Private Eye | Go-Explore | Score: 95756 |
| Atari 2600 Skiing | Go-Explore | Score: -3660 |
| Atari 2600 Solaris | Go-Explore | Score: 19671 |
| Atari 2600 Venture | Go-Explore | Score: 2281 |
| Atari Games (all) | Go-Explore | Mean Human Normalized Score: 4989.94% |
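
The final row reports a mean human-normalized score across games. The page does not define the metric; the standard convention in the Atari literature (an assumption here) normalizes each game's raw score against random-play and human baselines, then averages over games:

```latex
\mathrm{HNS}_g = \frac{s_g^{\text{agent}} - s_g^{\text{random}}}{s_g^{\text{human}} - s_g^{\text{random}}} \times 100\%,
\qquad
\overline{\mathrm{HNS}} = \frac{1}{|G|} \sum_{g \in G} \mathrm{HNS}_g
```

Under this convention, a mean of 4989.94% means that, averaged over the benchmark games, Go-Explore scores roughly fifty times the human baseline relative to random play.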
