Allejandro committed (verified)
Commit a0bd2fd · 1 parent: e52949b

Update README.md

Files changed (1): README.md (+40 −3)
README.md CHANGED
@@ -26,12 +26,49 @@ This is a trained model of a **PPO** agent playing **LunarLander-v2**
 using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
 
 ## Usage (with Stable-baselines3)
-TODO: Add your code
-
 
 ```python
-from stable_baselines3 import ...
+import gymnasium as gym
+
+from stable_baselines3 import PPO
+from stable_baselines3.common.env_util import make_vec_env
+from stable_baselines3.common.evaluation import evaluate_policy
+from stable_baselines3.common.monitor import Monitor
 from huggingface_sb3 import load_from_hub
 
+
+# Create the environment
+env = make_vec_env("LunarLander-v2", n_envs=16)
+
+# Define a PPO agent with an MlpPolicy architecture.
+# We use MlpPolicy because the input is a vector;
+# if we had frames as input, we would use CnnPolicy.
+model = PPO(
+    "MlpPolicy",
+    env=env,
+    n_steps=1024,
+    batch_size=64,
+    n_epochs=4,
+    gamma=0.999,
+    gae_lambda=0.98,
+    ent_coef=0.01,
+    verbose=1,
+)
+
+# Train it for 1,000,000 timesteps
+model.learn(total_timesteps=1000000)
+
+# Specify a file name for the model and save it to file
+model_name = "ppo-LunarLander-v2"
+model.save(model_name)
+
+# Evaluate the agent
+# Create a new, monitored environment for evaluation
+eval_env = Monitor(gym.make("LunarLander-v2"))
+
+# Evaluate the model with 10 evaluation episodes and deterministic=True
+mean_reward, std_reward = evaluate_policy(model=model, env=eval_env, n_eval_episodes=10, deterministic=True)
+
+# Print the results
+print(mean_reward)
+print(std_reward)
 ...
 ```
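As a note on the evaluation step in the diff above: `evaluate_policy` returns the mean and the standard deviation of the per-episode returns collected over `n_eval_episodes` rollouts. A minimal sketch of that computation, using made-up episode returns (the numbers below are hypothetical, not results from this model):

```python
import statistics

# Hypothetical per-episode returns, standing in for what
# evaluate_policy would collect over its evaluation rollouts
episode_returns = [210.0, 245.5, 198.2, 260.1, 231.7]

mean_reward = statistics.mean(episode_returns)
# Population standard deviation (divide by N, not N-1),
# matching the np.std that evaluate_policy reports
std_reward = statistics.pstdev(episode_returns)

print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```

For context when reading the printed numbers: per the environment documentation, LunarLander-v2 is considered solved at an average return of 200 points.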