
Atari Games on Atari 2600: Time Pilot

Metrics

Score

Results

Performance results of various models on this benchmark are listed in the table below; a minimal sketch of how the Score metric is measured follows the table.

| Model Name | Score | Paper Title | Repository |
| --- | --- | --- | --- |
| Advantage Learning | 8969.12 | Increasing the Action Gap: New Operators for Reinforcement Learning | - |
| A3C FF hs | 12679.0 | Asynchronous Methods for Deep Reinforcement Learning | - |
| Best Learner | 3741.2 | The Arcade Learning Environment: An Evaluation Platform for General Agents | - |
| GDI-I3 | 216770 | Generalized Data Distribution Iteration | - |
| NoisyNet-Dueling | 17301 | Noisy Networks for Exploration | - |
| DDQN (tuned) noop | 8339.0 | Dueling Network Architectures for Deep Reinforcement Learning | - |
| Duel noop | 11666.0 | Dueling Network Architectures for Deep Reinforcement Learning | - |
| POP3D | 3770.33 | Policy Optimization With Penalized Point Probability Distance: An Alternative To Proximal Policy Optimization | - |
| Nature DQN | 5947.0 | Human level control through deep reinforcement learning |  |
| QR-DQN-1 | 10345 | Distributional Reinforcement Learning with Quantile Regression | - |
| DNA | 12774 | DNA: Proximal Policy Optimization with a Dual Network Architecture | - |
| Rational DQN Average | 17632 | Adaptive Rational Activations to Boost Deep Reinforcement Learning | - |
| MuZero | 476763.90 | Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model | - |
| UCT | 63854.5 | The Arcade Learning Environment: An Evaluation Platform for General Agents | - |
| ES FF (1 hour) noop | 4970.0 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning | - |
| IDVQ + DRSC + XNES | 4600 | Playing Atari with Six Neurons | - |
| Bootstrapped DQN | 9079.4 | Deep Exploration via Bootstrapped DQN | - |
| R2D2 | 445377.3 | Recurrent Experience Replay in Distributed Reinforcement Learning | - |
| DDQN+Pop-Art noop | 4870.0 | Learning values across many orders of magnitude | - |
| CGP | 12040 | Evolving simple programs for playing Atari games | - |
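
The Score metric is the undiscounted game score an agent accumulates over one episode of the ALE Time Pilot environment, typically averaged over many evaluation episodes in the papers above. The following is a minimal sketch of how such a score can be measured, assuming the Gymnasium and ale-py packages and their `ALE/TimePilot-v5` environment ID (these package and environment names are illustrative assumptions, not part of the original page):

```python
# Minimal sketch: measure one episode's score on Time Pilot with a random policy.
# Assumes `pip install gymnasium ale-py`; a real evaluation would replace the
# random action with a trained agent and average over many episodes.
import gymnasium as gym
import ale_py

gym.register_envs(ale_py)  # expose the ALE/* environment IDs to Gymnasium

env = gym.make("ALE/TimePilot-v5")
obs, info = env.reset(seed=0)

episode_score = 0.0
done = False
while not done:
    action = env.action_space.sample()  # placeholder for an agent's policy
    obs, reward, terminated, truncated, info = env.step(action)
    episode_score += reward             # Score = sum of per-step game rewards
    done = terminated or truncated

print(f"Episode score: {episode_score}")
env.close()
```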