SMAC+: Smac On Smac Off Superhard Parallel
Metrics: Median Win Rate
Results
Performance results of various models on this benchmark
| Model Name | Median Win Rate | Paper Title | Repository |
| --- | --- | --- | --- |
| VDN | 0.0 | Value-Decomposition Networks For Cooperative Multi-Agent Learning | - |
| DRIMA | 0.0 | Disentangling Sources of Risk for Distributional Multi-Agent Reinforcement Learning | - |
| DDN | 0.0 | DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning | - |
| DIQL | 0.0 | DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning | - |
| COMA | 0.0 | Counterfactual Multi-Agent Policy Gradients | - |
| DMIX | 0.0 | DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning | - |
| IQL | 0.0 | The StarCraft Multi-Agent Challenges+: Learning of Multi-Stage Tasks and Environmental Factors without Precise Reward Functions | - |
| MASAC | 0.0 | Decomposed Soft Actor-Critic Method for Cooperative Multi-Agent Reinforcement Learning | - |
| QMIX | 0.0 | QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning | - |
| QTRAN | 0.0 | QTRAN: Learning to Factorize with Transformation for Cooperative Multi-Agent Reinforcement Learning | - |
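The Median Win Rate reported above is, in SMAC-style evaluations, typically the median of test win rates collected over independent evaluation runs (e.g. separate training seeds). A minimal sketch of that computation is shown below; the `win_rates_by_seed` values are hypothetical and only illustrate the aggregation, not results from this benchmark.

```python
import statistics

def median_win_rate(win_rates_by_seed):
    """Median of per-seed test win rates, each a fraction in [0, 1]."""
    return statistics.median(win_rates_by_seed)

# Hypothetical win rates from five independent seeds on one scenario.
print(median_win_rate([0.0, 0.0, 0.0, 0.03, 0.06]))  # -> 0.0
```

Because the median is used, a method must succeed on the majority of seeds to score above 0.0, which is why occasional wins on a super-hard scenario do not register in this column.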