HyperAI超神经

Atari Games On Atari 2600 Ice Hockey

Evaluation Metric

Score

Evaluation Results

Performance of each model on this benchmark

Comparison Table

| Model Name | Score |
| --- | --- |
| deep-reinforcement-learning-with-double-q | 0.5 |
| impala-scalable-distributed-deep-rl-with | 3.48 |
| dna-proximal-policy-optimization-with-a-dual | 7.2 |
| human-level-control-through-deep | -1.6 |
| noisy-networks-for-exploration | 3 |
| deep-reinforcement-learning-with-double-q | -1.6 |
| mastering-atari-go-chess-and-shogi-by | 67.04 |
| evolving-simple-programs-for-playing-atari | 4 |
| implicit-quantile-networks-for-distributional | 0.2 |
| generalized-data-distribution-iteration | 44.94 |
| a-distributional-perspective-on-reinforcement | -3.5 |
| the-arcade-learning-environment-an-evaluation | -9.5 |
| mastering-atari-with-discrete-world-models-1 | 26 |
| agent57-outperforming-the-atari-human | 63.64 |
| self-imitation-learning | -2.4 |
| distributional-reinforcement-learning-with-1 | -1.7 |
| asynchronous-methods-for-deep-reinforcement | -1.7 |
| train-a-real-world-local-path-planner-in-one | -3.6 |
| prioritized-experience-replay | -0.2 |
| asynchronous-methods-for-deep-reinforcement | -4.7 |
| policy-optimization-with-penalized-point | 4.12 |
| gdi-rethinking-what-makes-reinforcement | 44.94 |
| dueling-network-architectures-for-deep | -2.7 |
| generalized-data-distribution-iteration | 481.9 |
| evolution-strategies-as-a-scalable | -4.1 |
| asynchronous-methods-for-deep-reinforcement | -2.8 |
| massively-parallel-methods-for-deep | -1.7 |
| learning-values-across-many-orders-of | -4.1 |
| recurrent-experience-replay-in-distributed | 79.3 |
| 模型 30 | -3.2 |
| dueling-network-architectures-for-deep | -0.4 |
| increasing-the-action-gap-new-operators-for | -1.24 |
| dueling-network-architectures-for-deep | 0.5 |
| deep-exploration-via-bootstrapped-dqn | -1.3 |
| deep-reinforcement-learning-with-double-q | -1.9 |
| the-arcade-learning-environment-an-evaluation | 39.4 |
| fully-parameterized-quantile-function-for | 17.3 |
| dueling-network-architectures-for-deep | -1.3 |
| online-and-offline-reinforcement-learning-by | 41.66 |
| deep-reinforcement-learning-with-double-q | -2.5 |
| increasing-the-action-gap-new-operators-for | -0.25 |
| distributed-prioritized-experience-replay | 33 |
| prioritized-experience-replay | 1.3 |
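Since the metric is the raw in-game Score (higher is better), the entries can be ranked programmatically. A minimal Python sketch, using a handful of (model slug, score) pairs taken from the comparison table above:

```python
# Rank Ice Hockey leaderboard entries by raw Score, higher is better.
# (model slug, score) pairs copied from the comparison table above.
results = [
    ("the-arcade-learning-environment-an-evaluation", -9.5),
    ("agent57-outperforming-the-atari-human", 63.64),
    ("generalized-data-distribution-iteration", 481.9),
    ("mastering-atari-go-chess-and-shogi-by", 67.04),
    ("recurrent-experience-replay-in-distributed", 79.3),
    ("online-and-offline-reinforcement-learning-by", 41.66),
]

# Sort descending so the best-scoring model comes first.
leaderboard = sorted(results, key=lambda entry: entry[1], reverse=True)

for rank, (model, score) in enumerate(leaderboard, start=1):
    print(f"{rank}. {model}: {score}")
```

This yields the same ordering the full table implies: the generalized-data-distribution-iteration entry at 481.9 leads, followed by recurrent-experience-replay-in-distributed at 79.3.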