Atari Games On Atari 2600 Wizard Of Wor
Metric: Score

Results: performance results of various models on this benchmark.

| Model Name | Score | Paper Title | Repository |
| --- | --- | --- | --- |
| Prior noop | 4802.0 | Prioritized Experience Replay | - |
| A2C + SIL | 7088.3 | Self-Imitation Learning | - |
| Prior hs | 5727.0 | Prioritized Experience Replay | - |
| R2D2 | 144362.7 | Recurrent Experience Replay in Distributed Reinforcement Learning | - |
| MuZero (Res2 Adam) | 100096.6 | Online and Offline Reinforcement Learning by Planning with a Learned Model | - |
| DreamerV2 | 12851 | Mastering Atari with Discrete World Models | - |
| ES FF (1 hour) noop | 3480.0 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning | - |
| Best Learner | 1981.3 | The Arcade Learning Environment: An Evaluation Platform for General Agents | - |
| DDQN+Pop-Art noop | 483.0 | Learning values across many orders of magnitude | - |
| GDI-H3 | 63735 | Generalized Data Distribution Iteration | - |
| Gorila | 10431.0 | Massively Parallel Methods for Deep Reinforcement Learning | - |
| DQN noop | 2704.0 | Deep Reinforcement Learning with Double Q-learning | - |
| Ape-X | 46204 | Distributed Prioritized Experience Replay | - |
| GDI-I3 | 64239 | Generalized Data Distribution Iteration | - |
| Agent57 | 157306.41 | Agent57: Outperforming the Atari Human Benchmark | - |
| Prior+Duel noop | 12352.0 | Dueling Network Architectures for Deep Reinforcement Learning | - |
| UCT | 105500 | The Arcade Learning Environment: An Evaluation Platform for General Agents | - |
| NoisyNet-Dueling | 9149 | Noisy Networks for Exploration | - |
| Nature DQN | 3393.0 | Human level control through deep reinforcement learning | - |
| DDQN (tuned) hs | 6201.0 | Deep Reinforcement Learning with Double Q-learning | - |
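Model-name suffixes such as "noop" and "hs" refer to the standard Atari evaluation protocols: no-op starts and human starts, respectively. For context on the Score column, it is the undiscounted return an agent accumulates over an episode of the game. The sketch below is not part of the leaderboard; it is a minimal illustration of how such an episode score would be measured, assuming a recent gymnasium and ale-py install (environment id `ALE/WizardOfWor-v5`) and using a random policy as a stand-in for a trained agent.

```python
import gymnasium as gym
import ale_py  # provides the Atari (ALE) environments

gym.register_envs(ale_py)          # register the "ALE/..." ids with gymnasium
env = gym.make("ALE/WizardOfWor-v5")

obs, info = env.reset(seed=0)
episode_score = 0.0
done = False
while not done:
    # Placeholder policy: a benchmarked agent would choose the action here.
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    episode_score += reward        # Score = undiscounted sum of game rewards
    done = terminated or truncated

print(f"Episode score: {episode_score}")
env.close()
```

Reported leaderboard numbers are typically averages of this per-episode score over many evaluation episodes under the protocol indicated by the model-name suffix.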