OpenAI Gym on Hopper-v4
Metrics
Average Return
Results
Performance results of various models on this benchmark.
| Model Name | Average Return | Paper Title | Repository |
|---|---|---|---|
| DDPG | 1290.24 | Continuous control with deep reinforcement learning | - |
| MEow | 3332.99 | Maximum Entropy Reinforcement Learning via Energy-Based Normalizing Flow | - |
| TD3 | 3319.98 | Addressing Function Approximation Error in Actor-Critic Methods | - |
| PPO | 790.77 | Proximal Policy Optimization Algorithms | - |
| SAC | 2882.56 | Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor | - |
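
The Average Return metric is typically the undiscounted episode reward summed over a rollout and averaged across evaluation episodes. Below is a minimal sketch of how such an evaluation can be run on Hopper-v4 with the `gymnasium` package; the `policy` callable and episode count are illustrative assumptions, not part of the benchmark entries above, which use trained DDPG/TD3/SAC/PPO/MEow agents.

```python
import gymnasium as gym
import numpy as np

def average_return(policy, num_episodes=10, seed=0):
    """Roll out `policy` for several episodes on Hopper-v4 and return the mean episode return."""
    env = gym.make("Hopper-v4")
    returns = []
    for episode in range(num_episodes):
        obs, _ = env.reset(seed=seed + episode)
        done, episode_return = False, 0.0
        while not done:
            action = policy(obs)  # hypothetical stand-in: maps observation -> action
            obs, reward, terminated, truncated, _ = env.step(action)
            episode_return += reward
            done = terminated or truncated
        returns.append(episode_return)
    env.close()
    return float(np.mean(returns))

if __name__ == "__main__":
    # Example with a random policy for illustration only; expect far lower scores than the table.
    probe_env = gym.make("Hopper-v4")
    random_policy = lambda obs: probe_env.action_space.sample()
    print(average_return(random_policy, num_episodes=5))
```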