---
license: apache-2.0
task_categories:
  - reinforcement-learning
tags:
  - mujoco
  - gymnasium
  - robotics
  - benchmark
  - ppo
  - sanskrit
pretty_name: MuJoCo SOTA Benchmark
size_categories:
  - n<1K
---

# MuJoCo SOTA Benchmark

Standard MuJoCo continuous control benchmarks from [Gymnasium](https://gymnasium.farama.org/), used to evaluate reinforcement learning algorithms.

## Benchmark Environments

| Environment    | Obs Dim | Act Dim | CleanRL SOTA | ParamTatva Best  |
|----------------|---------|---------|--------------|------------------|
| Hopper-v5      | 11      | 3       | 2,382 +/- 271 | 3,183.2 (134%)  |
| Walker2d-v5    | 17      | 6       | ~4,000       | 4,918.5 (123%)   |
| HalfCheetah-v5 | 17      | 6       | ~6,000       | 5,803.9 (97%)    |
| Reacher-v5     | 8       | 2       | ~-4          | -4.2 (~100%)     |
| Ant-v5         | 27      | 8       | ~5,000       | 886.6 (training) |
| Humanoid-v5    | 348     | 17      | ~5,000       | 573.8 (training) |
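The percentage column is the ratio of the ParamTatva best score to the CleanRL SOTA score. A minimal sketch of that computation, using values copied from the table (the `pct_of_sota` helper is ours, not part of the dataset):

```python
# Recompute the "% of CleanRL SOTA" column from the table above.
def pct_of_sota(best: float, sota: float) -> float:
    return 100.0 * best / sota

results = {
    # env_id: (CleanRL SOTA, ParamTatva best), from the table
    "Hopper-v5": (2382.0, 3183.2),
    "Walker2d-v5": (4000.0, 4918.5),
    "HalfCheetah-v5": (6000.0, 5803.9),
}
for env_id, (sota, best) in results.items():
    print(f"{env_id}: {pct_of_sota(best, sota):.0f}% of SOTA")
```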

## How to Evaluate

```bash
pip install torch "gymnasium[mujoco]"
```

```python
import torch
import gymnasium as gym

# `Agent` is the policy network class used during training
# (e.g. a CleanRL-style PPO agent); import it from the training script.

# Load checkpoint
checkpoint = torch.load("hopper_v5_sota.pt", map_location="cpu")
agent = Agent(obs_dim=11, act_dim=3)
agent.load_state_dict(checkpoint["model_state_dict"])
agent.eval()

# Evaluate over 100 episodes
env = gym.make("Hopper-v5")
returns = []
for ep in range(100):
    obs, _ = env.reset()
    total = 0.0
    done = False
    while not done:
        with torch.no_grad():
            action = agent.get_action(torch.FloatTensor(obs))
        obs, reward, term, trunc, _ = env.step(action.numpy())
        total += reward
        done = term or trunc
    returns.append(total)

print(f"Mean: {sum(returns) / len(returns):.1f}")
```
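To report results in the "mean +/- std" form used in the table above, the per-episode returns can be summarized with a small helper (a sketch using only the standard library; `summarize` is our own name):

```python
import statistics

# Summarize evaluation returns as (mean, sample standard deviation),
# matching the "mean +/- std" convention used in the benchmark table.
def summarize(returns: list[float]) -> tuple[float, float]:
    mean = statistics.mean(returns)
    std = statistics.stdev(returns) if len(returns) > 1 else 0.0
    return mean, std

m, s = summarize([3100.0, 3200.0, 3250.0])
print(f"Mean: {m:.1f} +/- {s:.1f}")
```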


## License

Apache 2.0

ParamTatva.org 2026