HyperAI

Atari Games on Atari 2600: Bowling

Metrics

Score

Results

Performance results of various models on this benchmark

| Model Name | Score | Paper Title | Repository |
|---|---|---|---|
| Advantage Learning | 57.41 | Increasing the Action Gap: New Operators for Reinforcement Learning | - |
| GDI-I3 | 201.9 | GDI: Rethinking What Makes Reinforcement Learning Different From Supervised Learning | - |
| DDQN (tuned) hs | 69.6 | Deep Reinforcement Learning with Double Q-learning | - |
| DNA | 181 | DNA: Proximal Policy Optimization with a Dual Network Architecture | - |
| A3C LSTM hs | 41.8 | Asynchronous Methods for Deep Reinforcement Learning | - |
| CGP | 85.8 | Evolving simple programs for playing Atari games | - |
| GDI-H3 | 205.2 | Generalized Data Distribution Iteration | - |
| IMPALA (deep) | 59.92 | IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures | - |
| Duel noop | 65.5 | Dueling Network Architectures for Deep Reinforcement Learning | - |
| ASL DDQN | 62.4 | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | - |
| DQN noop | 50.4 | Deep Reinforcement Learning with Double Q-learning | - |
| Ape-X | 17.6 | Distributed Prioritized Experience Replay | - |
| IQN | 86.5 | Implicit Quantile Networks for Distributional Reinforcement Learning | - |
| Duel hs | 65.7 | Dueling Network Architectures for Deep Reinforcement Learning | - |
| QR-DQN-1 | 77.2 | Distributional Reinforcement Learning with Quantile Regression | - |
| A3C FF hs | 35.1 | Asynchronous Methods for Deep Reinforcement Learning | - |
| DDQN (tuned) noop | 68.1 | Dueling Network Architectures for Deep Reinforcement Learning | - |
| RUDDER | 179 | RUDDER: Return Decomposition for Delayed Rewards | - |
| Persistent AL | 71.59 | Increasing the Action Gap: New Operators for Reinforcement Learning | - |
| Gorila | 54 | Massively Parallel Methods for Deep Reinforcement Learning | - |