Flight_Luntik-v2 / results.json
Dmitriy007
Upload PPO Flight_Luntik-v3 trained agent + make_vec_env, total_timesteps=1100000, n_envs=32
commit 46c14e4
{
  "mean_reward": 272.3192417158427,
  "std_reward": 15.234374487504649,
  "is_deterministic": true,
  "n_eval_episodes": 10,
  "eval_datetime": "2022-12-29T12:57:57.402956"
}
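A file of this shape is typically produced by evaluating the trained agent for `n_eval_episodes` episodes and aggregating the per-episode returns. The sketch below shows, with stdlib only, how such a `results.json` could be assembled; the `episode_rewards` values here are hypothetical placeholders, not the actual rollouts behind the numbers above (those would come from running the PPO policy, e.g. via stable-baselines3's `evaluate_policy`).

```python
import json
import math
from datetime import datetime

# Hypothetical per-episode returns from 10 deterministic evaluation rollouts.
# (Placeholder values for illustration only.)
episode_rewards = [265.1, 280.4, 251.9, 290.2, 270.0,
                   275.5, 260.3, 288.7, 268.1, 273.4]

mean_reward = sum(episode_rewards) / len(episode_rewards)
# Population standard deviation (ddof=0), matching numpy's np.std default,
# which is what evaluation utilities commonly report as std_reward.
std_reward = math.sqrt(
    sum((r - mean_reward) ** 2 for r in episode_rewards) / len(episode_rewards)
)

results = {
    "mean_reward": mean_reward,
    "std_reward": std_reward,
    "is_deterministic": True,          # greedy (deterministic) policy actions
    "n_eval_episodes": len(episode_rewards),
    "eval_datetime": datetime.now().isoformat(),
}

with open("results.json", "w") as f:
    json.dump(results, f)
```

The reported `mean_reward ± std_reward` pair (here 272.32 ± 15.23) is the usual leaderboard summary for an evaluated agent.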