HyperAI

Atari Games on Atari 2600: Krull

Metrics

Score

Results

Performance results of various models on this benchmark

| Model Name | Score | Paper Title | Repository |
|---|---|---|---|
| Rainbow+SEER | 3277.5 | Improving Computational Efficiency in Visual Reinforcement Learning via Stored Embeddings | - |
| GDI-I3 | 97575 | GDI: Rethinking What Makes Reinforcement Learning Different From Supervised Learning | - |
| VPN | 15930 | Value Prediction Network | - |
| Prior hs | 6872.8 | Prioritized Experience Replay | - |
| NoisyNet-Dueling | 10754 | Noisy Networks for Exploration | - |
| Gorila | 6363.1 | Massively Parallel Methods for Deep Reinforcement Learning | - |
| Prior+Duel noop | 10374.4 | Dueling Network Architectures for Deep Reinforcement Learning | - |
| CGP | 9086.8 | Evolving simple programs for playing Atari games | - |
| Prior+Duel hs | 7658.6 | Deep Reinforcement Learning with Double Q-learning | - |
| A3C FF hs | 5560.0 | Asynchronous Methods for Deep Reinforcement Learning | - |
| Duel hs | 8051.6 | Dueling Network Architectures for Deep Reinforcement Learning | - |
| Duel noop | 11451.9 | Dueling Network Architectures for Deep Reinforcement Learning | - |
| Best Learner | 3371.5 | The Arcade Learning Environment: An Evaluation Platform for General Agents | - |
| DreamerV2 | 50061 | Mastering Atari with Discrete World Models | - |
| Persistent AL | 8689.81 | Increasing the Action Gap: New Operators for Reinforcement Learning | - |
| A3C LSTM hs | 5911.4 | Asynchronous Methods for Deep Reinforcement Learning | - |
| DQN noop | 8422.3 | Deep Reinforcement Learning with Double Q-learning | - |
| GDI-H3 | 594540 | Generalized Data Distribution Iteration | - |
| ASL DDQN | 10422.5 | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | - |
| MuZero | 269358.27 | Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model | - |