ppo-LunarLander-v2 / results.json
Upload PPO LunarLander-v2 trained agent v3, 3000000 steps/iters with batch size 128
a829ecf
157 Bytes
{
  "mean_reward": 269.0164832,
  "std_reward": 71.95355661999263,
  "is_deterministic": true,
  "n_eval_episodes": 10,
  "eval_datetime": "2023-06-26T13:04:15.955112"
}
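A file with this shape can be produced by aggregating per-episode returns from an evaluation run. The sketch below is a minimal, stdlib-only illustration: the `episode_rewards` values are hypothetical placeholders, not the actual returns behind the metrics above (those were presumably generated by an evaluation helper such as Stable-Baselines3's `evaluate_policy` during the Hugging Face upload).

```python
import json
import statistics
from datetime import datetime

# Hypothetical per-episode returns for n_eval_episodes = 10 deterministic
# rollouts; the real results.json was computed from the trained PPO agent.
episode_rewards = [251.3, 310.2, 198.7, 287.5, 265.0,
                   330.1, 240.8, 275.6, 220.4, 301.9]

results = {
    "mean_reward": statistics.mean(episode_rewards),
    # Population standard deviation (ddof=0), matching numpy's np.std default.
    "std_reward": statistics.pstdev(episode_rewards),
    "is_deterministic": True,
    "n_eval_episodes": len(episode_rewards),
    "eval_datetime": datetime.now().isoformat(),
}

# Serialize in the same single-line format as the file above.
with open("results.json", "w") as f:
    json.dump(results, f)
```

The mean/std pair summarizes policy quality compactly; LunarLander-v2 is conventionally considered solved around a mean return of 200, so a mean near 269 with a std of about 72 indicates a mostly successful but still variable policy.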