
Atari Games on Atari 2600: Kangaroo

Metrics

Score

Results

Performance results of various models on this benchmark

| Model Name | Score | Paper Title | Repository |
|---|---|---|---|
| NoisyNet-Dueling | 15227 | Noisy Networks for Exploration | - |
| QR-DQN-1 | 15356 | Distributional Reinforcement Learning with Quantile Regression | - |
| GDI-I3 | 14500 | GDI: Rethinking What Makes Reinforcement Learning Different From Supervised Learning | - |
| Ape-X | 1416 | Distributed Prioritized Experience Replay | - |
| MuZero | 16763.60 | Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model | - |
| DQN hs | 4496.0 | Deep Reinforcement Learning with Double Q-learning | - |
| Nature DQN | 6740.0 | Human level control through deep reinforcement learning | - |
| MuZero (Res2 Adam) | 13838 | Online and Offline Reinforcement Learning by Planning with a Learned Model | - |
| Advantage Learning | 10809.16 | Increasing the Action Gap: New Operators for Reinforcement Learning | - |
| IQN | 15487 | Implicit Quantile Networks for Distributional Reinforcement Learning | - |
| Prior noop | 16200.0 | Prioritized Experience Replay | - |
| Prior+Duel noop | 1792.0 | Dueling Network Architectures for Deep Reinforcement Learning | - |
| Agent57 | 24034.16 | Agent57: Outperforming the Atari Human Benchmark | - |
| ASL DDQN | 13027 | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | - |
| SARSA | 8.8 | - | - |
| A3C FF hs | 94.0 | Asynchronous Methods for Deep Reinforcement Learning | - |
| ES FF (1 hour) noop | 11200.0 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning | - |
| DDQN (tuned) hs | 11204.0 | Deep Reinforcement Learning with Double Q-learning | - |
| DDQN (tuned) noop | 12992.0 | Dueling Network Architectures for Deep Reinforcement Learning | - |
| A3C FF (1 day) hs | 106.0 | Asynchronous Methods for Deep Reinforcement Learning | - |
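
The Score metric is the game score (episode return) an agent obtains when playing Atari 2600 Kangaroo. Below is a minimal sketch of how such a score could be measured, assuming Gymnasium 1.0+ with ale-py installed; the random policy is a placeholder for a trained agent, and the published results additionally depend on evaluation-protocol details (e.g. no-op or human starts, frame limits) not reproduced here.

```python
# Sketch: average episode score on the ALE Kangaroo environment.
# Assumes `pip install gymnasium ale-py`; the random action is a stand-in
# for a real policy such as the agents listed in the table above.
import gymnasium as gym
import ale_py

gym.register_envs(ale_py)  # registers the ALE/... environment IDs


def evaluate(n_episodes: int = 10) -> float:
    env = gym.make("ALE/Kangaroo-v5")
    returns = []
    for _ in range(n_episodes):
        obs, info = env.reset()
        done, episode_return = False, 0.0
        while not done:
            action = env.action_space.sample()  # replace with policy(obs)
            obs, reward, terminated, truncated, info = env.step(action)
            episode_return += reward
            done = terminated or truncated
        returns.append(episode_return)
    env.close()
    return sum(returns) / len(returns)


if __name__ == "__main__":
    print(f"Mean Kangaroo score over evaluation episodes: {evaluate():.1f}")
```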