Discrete and Continuous Action Representation for Practical RL in Video Games

Olivier Delalleau; Maxim Peter; Eloi Alonso; Adrien Logut

Abstract

While most current research in Reinforcement Learning (RL) focuses on improving the performance of algorithms in controlled environments, the use of RL under constraints like those met in the video game industry is rarely studied. Operating under such constraints, we propose Hybrid SAC, an extension of the Soft Actor-Critic algorithm able to handle discrete, continuous and parameterized actions in a principled way. We show that Hybrid SAC can successfully solve a high-speed driving task in one of our games, and is competitive with the state of the art on parameterized-action benchmark tasks. We also explore the impact of using normalizing flows to enrich the expressiveness of the policy at minimal computational cost, and identify a potential undesired effect of SAC when used with normalizing flows, which may be addressed by optimizing a different objective.
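This page gives no implementation details beyond the linked repository, but the core idea, a single policy that emits both a discrete action and its continuous parameters, can be sketched compactly. Below is a minimal PyTorch sketch (PyTorch to match the linked repository). The class name HybridPolicy, the layer sizes, and the sampling details are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical, Normal


class HybridPolicy(nn.Module):
    """Toy hybrid policy head: a shared trunk feeds a categorical head for
    the discrete action and a squashed-Gaussian head for its continuous
    parameters. Names and sizes are assumptions, not the paper's network."""

    def __init__(self, obs_dim, n_discrete, n_continuous, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.logits = nn.Linear(hidden, n_discrete)      # discrete head
        self.mu = nn.Linear(hidden, n_continuous)        # continuous mean
        self.log_std = nn.Linear(hidden, n_continuous)   # continuous log-std

    def forward(self, obs):
        h = self.trunk(obs)

        # Discrete component: categorical distribution over action types.
        disc = Categorical(logits=self.logits(h))
        a_disc = disc.sample()

        # Continuous component: tanh-squashed Gaussian, as in standard SAC.
        std = self.log_std(h).clamp(-20, 2).exp()        # usual SAC log-std clamp
        cont = Normal(self.mu(h), std)
        u = cont.rsample()                               # reparameterized sample
        a_cont = torch.tanh(u)

        # The joint log-probability factorizes:
        #   log pi(d, c | s) = log pi(d | s) + log pi(c | s),
        # with the standard tanh change-of-variables correction on the
        # continuous part; this is the quantity SAC's entropy bonus needs.
        cont_logp = cont.log_prob(u).sum(-1) \
            - torch.log1p(-a_cont.pow(2) + 1e-6).sum(-1)
        log_prob = disc.log_prob(a_disc) + cont_logp
        return a_disc, a_cont, log_prob


# Usage: one forward pass yields a (discrete, continuous) action tuple.
policy = HybridPolicy(obs_dim=8, n_discrete=3, n_continuous=2)
a_d, a_c, logp = policy(torch.randn(4, 8))  # batch of 4 observations
```

The full Hybrid SAC formulation also has to account for both action components on the critic and entropy sides; the repository linked below can be consulted for a complete treatment.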

Code Repositories

nisheeth-golakiya/hybrid-sac (PyTorch)

Benchmarks

Benchmark                                      Methodology   Metrics
control-with-prametrised-actions-on-half      Hybrid SAC    Goal Probability: 0.639
control-with-prametrised-actions-on-platform  Hybrid SAC    Return: 0.981
control-with-prametrised-actions-on-robot     Hybrid SAC    Goal Probability: 0.728
