oussamab2n committed (verified)
Commit f7c3275
1 parent: f2ef258

Upload trained A2C model on CartPole-v1

README.md CHANGED
@@ -1,37 +1,37 @@
1
- ---
2
- library_name: stable-baselines3
3
- tags:
4
- - CartPole-v1
5
- - deep-reinforcement-learning
6
- - reinforcement-learning
7
- - stable-baselines3
8
- model-index:
9
- - name: A2C
10
- results:
11
- - task:
12
- type: reinforcement-learning
13
- name: reinforcement-learning
14
- dataset:
15
- name: CartPole-v1
16
- type: CartPole-v1
17
- metrics:
18
- - type: mean_reward
19
- value: 500.00 +/- 0.00
20
- name: mean_reward
21
- verified: false
22
- ---
23
-
24
- # **A2C** Agent playing **CartPole-v1**
25
- This is a trained model of a **A2C** agent playing **CartPole-v1**
26
- using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
27
-
28
- ## Usage (with Stable-baselines3)
29
- TODO: Add your code
30
-
31
-
32
- ```python
33
- from stable_baselines3 import ...
34
- from huggingface_sb3 import load_from_hub
35
-
36
- ...
37
- ```
 
1
+ ---
2
+ library_name: stable-baselines3
3
+ tags:
4
+ - CartPole-v1
5
+ - deep-reinforcement-learning
6
+ - reinforcement-learning
7
+ - stable-baselines3
8
+ model-index:
9
+ - name: A2C
10
+ results:
11
+ - task:
12
+ type: reinforcement-learning
13
+ name: reinforcement-learning
14
+ dataset:
15
+ name: CartPole-v1
16
+ type: CartPole-v1
17
+ metrics:
18
+ - type: mean_reward
19
+ value: 500.00 +/- 0.00
20
+ name: mean_reward
21
+ verified: false
22
+ ---
23
+
24
+ # **A2C** Agent playing **CartPole-v1**
25
+ This is a trained model of an **A2C** agent playing **CartPole-v1**
26
+ using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
27
+
28
+ ## Usage (with Stable-baselines3)
29
+ TODO: Add your code
30
+
31
+
32
+ ```python
33
+ from stable_baselines3 import ...
34
+ from huggingface_sb3 import load_from_hub
35
+
36
+ ...
37
+ ```
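The usage block in the new README is still a TODO. A minimal loading sketch is shown below; the repo id `oussamab2n/a2c-CartPole-v1` is a hypothetical placeholder, while the `a2c-cartpole.zip` filename matches the archive uploaded in this commit:

```python
# Minimal loading sketch (repo_id is a guess; filename comes from this commit).
import gymnasium as gym
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub(
    repo_id="oussamab2n/a2c-CartPole-v1",  # hypothetical repo id
    filename="a2c-cartpole.zip",           # archive uploaded in this commit
)
model = A2C.load(checkpoint)

# Roll out one episode with the trained policy.
env = gym.make("CartPole-v1")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```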
a2c-cartpole.zip CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:85df6dd3ae0efc4d0ee4dae2a719c033a10042712ce6bae3d8029ba1e840c8fb
3
- size 101008
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d34cf78d0d71eb2aef6cec9575c412dc23bca6889cb3e85b603b146d6b78f922
3
+ size 101554
a2c-cartpole/data CHANGED
@@ -4,20 +4,20 @@
4
  ":serialized:": "gAWVOwAAAAAAAACMIXN0YWJsZV9iYXNlbGluZXMzLmNvbW1vbi5wb2xpY2llc5SMEUFjdG9yQ3JpdGljUG9saWN5lJOULg==",
5
  "__module__": "stable_baselines3.common.policies",
6
  "__doc__": "\n Policy class for actor-critic algorithms (has both policy and value prediction).\n Used by A2C, PPO and the likes.\n\n :param observation_space: Observation space\n :param action_space: Action space\n :param lr_schedule: Learning rate schedule (could be constant)\n :param net_arch: The specification of the policy and value networks.\n :param activation_fn: Activation function\n :param ortho_init: Whether to use or not orthogonal initialization\n :param use_sde: Whether to use State Dependent Exploration or not\n :param log_std_init: Initial value for the log standard deviation\n :param full_std: Whether to use (n_features x n_actions) parameters\n for the std instead of only (n_features,) when using gSDE\n :param use_expln: Use ``expln()`` function instead of ``exp()`` to ensure\n a positive standard deviation (cf paper). It allows to keep variance\n above zero and prevent it from growing too fast. In practice, ``exp()`` is usually enough.\n :param squash_output: Whether to squash the output using a tanh function,\n this allows to ensure boundaries when using gSDE.\n :param features_extractor_class: Features extractor to use.\n :param features_extractor_kwargs: Keyword arguments\n to pass to the features extractor.\n :param share_features_extractor: If True, the features extractor is shared between the policy and value networks.\n :param normalize_images: Whether to normalize images or not,\n dividing by 255.0 (True by default)\n :param optimizer_class: The optimizer to use,\n ``th.optim.Adam`` by default\n :param optimizer_kwargs: Additional keyword arguments,\n excluding the learning rate, to pass to the optimizer\n ",
7
- "__init__": "<function ActorCriticPolicy.__init__ at 0x0000016682DF1120>",
8
- "_get_constructor_parameters": "<function ActorCriticPolicy._get_constructor_parameters at 0x0000016682DF11B0>",
9
- "reset_noise": "<function ActorCriticPolicy.reset_noise at 0x0000016682DF1240>",
10
- "_build_mlp_extractor": "<function ActorCriticPolicy._build_mlp_extractor at 0x0000016682DF12D0>",
11
- "_build": "<function ActorCriticPolicy._build at 0x0000016682DF1360>",
12
- "forward": "<function ActorCriticPolicy.forward at 0x0000016682DF13F0>",
13
- "extract_features": "<function ActorCriticPolicy.extract_features at 0x0000016682DF1480>",
14
- "_get_action_dist_from_latent": "<function ActorCriticPolicy._get_action_dist_from_latent at 0x0000016682DF1510>",
15
- "_predict": "<function ActorCriticPolicy._predict at 0x0000016682DF15A0>",
16
- "evaluate_actions": "<function ActorCriticPolicy.evaluate_actions at 0x0000016682DF1630>",
17
- "get_distribution": "<function ActorCriticPolicy.get_distribution at 0x0000016682DF16C0>",
18
- "predict_values": "<function ActorCriticPolicy.predict_values at 0x0000016682DF1750>",
19
  "__abstractmethods__": "frozenset()",
20
- "_abc_impl": "<_abc._abc_data object at 0x0000016682DEF580>"
21
  },
22
  "verbose": 1,
23
  "policy_kwargs": {
@@ -30,21 +30,21 @@
30
  "weight_decay": 0
31
  }
32
  },
33
- "num_timesteps": 10000,
34
- "_total_timesteps": 10000,
35
  "_num_timesteps_at_start": 0,
36
  "seed": null,
37
  "action_noise": null,
38
- "start_time": 1739281209040567000,
39
  "learning_rate": 0.0007,
40
  "tensorboard_log": null,
41
  "_last_obs": {
42
  ":type:": "<class 'numpy.ndarray'>",
43
- ":serialized:": "gAWVhQAAAAAAAACMEm51bXB5LmNvcmUubnVtZXJpY5SMC19mcm9tYnVmZmVylJOUKJYQAAAAAAAAAOMWg79KyFK951vfPRKusj2UjAVudW1weZSMBWR0eXBllJOUjAJmNJSJiIeUUpQoSwOMATyUTk5OSv////9K/////0sAdJRiSwFLBIaUjAFDlHSUUpQu"
44
  },
45
  "_last_episode_starts": {
46
  ":type:": "<class 'numpy.ndarray'>",
47
- ":serialized:": "gAWVdAAAAAAAAACMEm51bXB5LmNvcmUubnVtZXJpY5SMC19mcm9tYnVmZmVylJOUKJYBAAAAAAAAAACUjAVudW1weZSMBWR0eXBllJOUjAJiMZSJiIeUUpQoSwOMAXyUTk5OSv////9K/////0sAdJRiSwGFlIwBQ5R0lFKULg=="
48
  },
49
  "_last_original_obs": null,
50
  "_episode_num": 0,
@@ -54,36 +54,13 @@
54
  "_stats_window_size": 100,
55
  "ep_info_buffer": {
56
  ":type:": "<class 'collections.deque'>",
57
- ":serialized:": "gAWV5QsAAAAAAACMC2NvbGxlY3Rpb25zlIwFZGVxdWWUk5QpS2SGlFKUKH2UKIwBcpRHQDsAAAAAAACMAWyUSxuMAXSUR0ARYoQWepXIdX2UKGgGR0A7AAAAAAAAaAdLG2gIR0ARm9ytFKChdX2UKGgGR0A3AAAAAAAAaAdLF2gIR0ARwLmZE2HddX2UKGgGR0A+AAAAAAAAaAdLHmgIR0AR+hHskY4ydX2UKGgGR0BIgAAAAAAAaAdLMWgIR0ASTQY1pCa7dX2UKGgGR0AuAAAAAAAAaAdLD2gIR0ASZ6KLsKLLdX2UKGgGR0A5AAAAAAAAaAdLGWgIR0ASlLGrCFbndX2UKGgGR0AyAAAAAAAAaAdLEmgIR0ASvqzJIUaidX2UKGgGR0BBgAAAAAAAaAdLI2gIR0AS+hHskY4ydX2UKGgGR0A7AAAAAAAAaAdLG2gIR0ATL7gsK9f1dX2UKGgGR0BDgAAAAAAAaAdLJ2gIR0ATenjyWiUQdX2UKGgGR0A3AAAAAAAAaAdLF2gIR0ATo29+PRzBdX2UKGgGR0A9AAAAAAAAaAdLHWgIR0AT0oiLVFx5dX2UKGgGR0A6AAAAAAAAaAdLGmgIR0AT/YukDZDidX2UKGgGR0A4AAAAAAAAaAdLGGgIR0AUK6DoQnQZdX2UKGgGR0BEgAAAAAAAaAdLKWgIR0AUc1cdHUc5dX2UKGgGR0A4AAAAAAAAaAdLGGgIR0AUnEP1+RYBdX2UKGgGR0A3AAAAAAAAaAdLF2gIR0AU0YQ8OkLydX2UKGgGR0A5AAAAAAAAaAdLGWgIR0AVAqmTC+DfdX2UKGgGR0BWwAAAAAAAaAdLW2gIR0AVo287IT4+dX2UKGgGR0A6AAAAAAAAaAdLGmgIR0AV0HzH0btJdX2UKGgGR0BQQAAAAAAAaAdLQWgIR0AWRTjvNNahdX2UKGgGR0A5AAAAAAAAaAdLGWgIR0AWb779AHE/dX2UKGgGR0BJAAAAAAAAaAdLMmgIR0AWxL127nPndX2UKGgGR0BGgAAAAAAAaAdLLWgIR0AXFqh11W8zdX2UKGgGR0BXQAAAAAAAaAdLXWgIR0AXwakyk9EDdX2UKGgGR0BVQAAAAAAAaAdLVWgIR0AYYWepXIU8dX2UKGgGR0A6AAAAAAAAaAdLGmgIR0AYjGm1pj+adX2UKGgGR0BBAAAAAAAAaAdLImgIR0AYzO2RaHKwdX2UKGgGR0AyAAAAAAAAaAdLEmgIR0AY8tAcDKYBdX2UKGgGR0BQgAAAAAAAaAdLQmgIR0AZd+7UXpGGdX2UKGgGR0A1AAAAAAAAaAdLFWgIR0AZo/eLvTgEdX2UKGgGR0AuAAAAAAAAaAdLD2gIR0AZvZDiOvMbdX2UKGgGR0A6AAAAAAAAaAdLGmgIR0AZ7bmEGqxUdX2UKGgGR0AyAAAAAAAAaAdLEmgIR0AaEYk3S8aodX2UKGgGR0A9AAAAAAAAaAdLHWgIR0AaPZGrjo6kdX2UKGgGR0BAAAAAAAAAaAdLIGgIR0AadeKKpDNRdX2UKGgGR0BCgAAAAAAAaAdLJWgIR0AarTMJQcghdX2UKGgGR0BCgAAAAAAAaAdLJWgIR0Aa9u0kWykcdX2UKGgGR0BWQAAAAAAAaAdLWWgIR0AbhUMoc7yQdX2UKGgGR0BGAAAAAAAAaAdLLGgIR0Ab2TvAoG6gdX2UKGgGR0BEAAAAAAAAaAdLKGgIR0AcIe/5+H8CdX2UKGgGR0BAAAAAAAAAaAdLIGgIR0AcX2Dg62fDdX2UKGgGR0BEgAAAAAAAaAdLKWgIR0Acntx+8XendX2UKGgGR0A9AAAAAAAAaAdLHWgIR0Ac0Qsf7rLRdX2UKGgGR0A/AAAAAAAAaAdLH2gIR0AdDG5tm+TNdX2UKGgGR0BDAAAAAAAAaAdLJmgIR0AdRcbBGhEjdX2UKGgGR0BQwAAAAAAAaAdLQ2gIR0Adv6GgzxgBdX2UKGgGR0BRQAAAAAAAaAdLRWgIR0AeN3A2ycCpdX2UKGgGR0BIAAAAAAAAaAdLMGgIR0AelJ8OTaCddX2UKGgGR0BHgAAAAAAAaAdLL2gIR0Ae6qwQlKK6dX2UKGgGR0BHAAAAAAAAaAdLLmgIR0AfPZ39rGipdX2UKGgGR0BUAAAAAAAAaAdLUGgIR0Af43n6l+EzdX2UKGgGR0BBgAAAAAAAaAdLI2gIR0AgEPd2xIJ7dX2UKGgGR0A5AAAAAAAAaAdLGWgIR0AgK5eZ5Rj0dX2UKGgGR0BiIAAAAAAAaAdLkWgIR0Agry1eBxxUdX2UKGgGR0BDgAAAAAAAaAdLJ2gIR0Agz27nPmgbdX2UKGgGR0BCAAAAAAAAaAdLJGgIR0Ag7J4jbBXTdX2UKGgGR0BQQAAAAAAAaAdLQWgIR0AhLyz5XU6QdX2UKGgGR0BHAAAAAAAAaAdLLmgIR0AhXALy+YdAdX2UKGgGR0BSQAAAAAAAaAdLSWgIR0AhnAN5MURGdX2UKGgGR0A+AAAAAAAAaAdLHmgIR0AhvlFMIu5CdX2UKGgGR0A+AAAAAAAAaAdLHmgIR0Ah3xWDHwPRdX2UKGgGR0BCAAAAAAAAaAdLJGgIR0Ah/1bJOnEVdX2UKGgGR0BFgAAAAAAAaAdLK2gIR0AiISGrS3LFdX2UKGgGR0A+AAAAAAAAaAdLHmgIR0AiOLMLWqcWdX2UKGgGR0A7AAAAAAAAaAdLG2gIR0AiVmEGqxTsdX2UKGgGR0BMgAAAAAAAaAdLOWgIR0Aih4fwI+nqdX2UKGgGR0A5AAAAAAAAaAdLGWgIR0AinIWgvlEJdX2UKGgGR0BBAAAAAAAAaAdLImgIR0Aiwml67dzodX2UKGgGR0A+AAAAAAAAaAdLHmgIR0Ai4Btk4FRpdX2UKGgGR0BSQAAAAAAAaAdLSWgIR0AjLK5CngpCdX2UKGgGR0BEAAAAAAAAaAdLKGgIR0AjTwCKaXrudX2UKGgGR0BHAAAAAAAAaAdLLmgIR0Ajd27FsHjZdX2UKGgGR0BKgAAAAAAAaAdLNWgIR0Ajp5Rjz7MxdX2UKGgGR0BLgAAAAAAAaAdLN2gIR0Aj2j+rELpidX2UKGgGR0BYQAAAAAAAaAdLYWgIR0AkLTGYKIBSdX2UKGgGR0BqYAAAAAAAaAdL02gIR0Ak5okAxSHedX2UKGgGR0BigAAAAAAAaAdLlGgIR0AlbDHfdhy9dX2UKGgGR0BjgAAAAAAAaAdLnGgIR0Al9eyAxzq9dX2UKGgGR0BcgAAAAAAAaAdLcmgIR0AmW8/UvwmWdX2UKGgGR0ByUAAAAAAAaAdNJQFoCEdAJ2OXeFcps3V9lChoBkdAbeAAAAAAAGgHS+9oCEdAKERa5f+junV9lChoBkdAXsAAAAAAAGgHS3toCEdAKMPXkHUtqnV9lChoBkdAVgAAAAAAAGgHS1hoCEdAKRcMEzO5a3V9lChoBkdAWcAAAAAAAGgHS2doCEdAKW+jdpItlXV9lChoBkdASoAAAAAAAGgHSzVoCEdA
KZwzUI9kjHV9lChoBkdAU0AAAAAAAGgHS01oCEdAKeVstTUAk3V9lChoBkdATAAAAAAAAGgHSzhoCEdAKhF0YCQtBnV9lChoBkdAYMAAAAAAAGgHS4ZoCEdAKotSAH3UQXV9lChoBkdAZ2AAAAAAAGgHS7toCEdAKzIIWxhUi3V9lChoBkdAYaAAAAAAAGgHS41oCEdAK68Aiml67nV9lChoBkdAYQAAAAAAAGgHS4hoCEdALCGwRoRIz3V9lChoBkdAeGAAAAAAAGgHTYYBaAhHQC16Q5myxA11fZQoaAZHQGXAAAAAAABoB0uuaAhHQC4XPTodMkB1fZQoaAZHQGZAAAAAAABoB0uyaAhHQC6y4e9zwMJ1fZQoaAZHQGnAAAAAAABoB0vOaAhHQC9pKDkELYx1fZQoaAZHQHcwAAAAAABoB01zAWgIR0AwVdt2s7uEdX2UKGgGR0B6oAAAAAAAaAdNqgFoCEdAMRNMbm2b5XV9lChoBkdAeeAAAAAAAGgHTZ4BaAhHQDHISh8IAwR1ZS4="
58
  },
59
  "ep_success_buffer": {
60
  ":type:": "<class 'collections.deque'>",
61
  ":serialized:": "gAWVIAAAAAAAAACMC2NvbGxlY3Rpb25zlIwFZGVxdWWUk5QpS2SGlFKULg=="
62
  },
63
- "_n_updates": 2000,
64
- "n_steps": 5,
65
- "gamma": 0.99,
66
- "gae_lambda": 1.0,
67
- "ent_coef": 0.0,
68
- "vf_coef": 0.5,
69
- "max_grad_norm": 0.5,
70
- "rollout_buffer_class": {
71
- ":type:": "<class 'abc.ABCMeta'>",
72
- ":serialized:": "gAWVNgAAAAAAAACMIHN0YWJsZV9iYXNlbGluZXMzLmNvbW1vbi5idWZmZXJzlIwNUm9sbG91dEJ1ZmZlcpSTlC4=",
73
- "__module__": "stable_baselines3.common.buffers",
74
- "__annotations__": "{'observations': <class 'numpy.ndarray'>, 'actions': <class 'numpy.ndarray'>, 'rewards': <class 'numpy.ndarray'>, 'advantages': <class 'numpy.ndarray'>, 'returns': <class 'numpy.ndarray'>, 'episode_starts': <class 'numpy.ndarray'>, 'log_probs': <class 'numpy.ndarray'>, 'values': <class 'numpy.ndarray'>}",
75
- "__doc__": "\n Rollout buffer used in on-policy algorithms like A2C/PPO.\n It corresponds to ``buffer_size`` transitions collected\n using the current policy.\n This experience will be discarded after the policy update.\n In order to use PPO objective, we also store the current value of each state\n and the log probability of each taken action.\n\n The term rollout here refers to the model-free notion and should not\n be used with the concept of rollout used in model-based RL or planning.\n Hence, it is only involved in policy and value function training but not action selection.\n\n :param buffer_size: Max number of element in the buffer\n :param observation_space: Observation space\n :param action_space: Action space\n :param device: PyTorch device\n :param gae_lambda: Factor for trade-off of bias vs variance for Generalized Advantage Estimator\n Equivalent to classic advantage when set to 1.\n :param gamma: Discount factor\n :param n_envs: Number of parallel environments\n ",
76
- "__init__": "<function RolloutBuffer.__init__ at 0x0000016682B43A30>",
77
- "reset": "<function RolloutBuffer.reset at 0x0000016682B43AC0>",
78
- "compute_returns_and_advantage": "<function RolloutBuffer.compute_returns_and_advantage at 0x0000016682B43B50>",
79
- "add": "<function RolloutBuffer.add at 0x0000016682B43BE0>",
80
- "get": "<function RolloutBuffer.get at 0x0000016682B43C70>",
81
- "_get_samples": "<function RolloutBuffer._get_samples at 0x0000016682B43D00>",
82
- "__abstractmethods__": "frozenset()",
83
- "_abc_impl": "<_abc._abc_data object at 0x0000016682B0B480>"
84
- },
85
- "rollout_buffer_kwargs": {},
86
- "normalize_advantage": false,
87
  "observation_space": {
88
  ":type:": "<class 'gymnasium.spaces.box.Box'>",
89
  ":serialized:": "gAWV/gEAAAAAAACMFGd5bW5hc2l1bS5zcGFjZXMuYm94lIwDQm94lJOUKYGUfZQojAVkdHlwZZSMBW51bXB5lIwFZHR5cGWUk5SMAmY0lImIh5RSlChLA4wBPJROTk5K/////0r/////SwB0lGKMBl9zaGFwZZRLBIWUjANsb3eUjBJudW1weS5jb3JlLm51bWVyaWOUjAtfZnJvbWJ1ZmZlcpSTlCiWEAAAAAAAAACamZnAAACA/1B31r4AAID/lGgLSwSFlIwBQ5R0lFKUjA1ib3VuZGVkX2JlbG93lGgTKJYEAAAAAAAAAAEAAQCUaAiMAmIxlImIh5RSlChLA4wBfJROTk5K/////0r/////SwB0lGJLBIWUaBZ0lFKUjARoaWdolGgTKJYQAAAAAAAAAJqZmUAAAIB/UHfWPgAAgH+UaAtLBIWUaBZ0lFKUjA1ib3VuZGVkX2Fib3ZllGgTKJYEAAAAAAAAAAEAAQCUaB1LBIWUaBZ0lFKUjAhsb3dfcmVwcpSMMVstNC44ICAgICAgICAgICAgICAgLWluZiAtMC40MTg4NzkwMyAgICAgICAgLWluZl2UjAloaWdoX3JlcHKUjC1bNC44ICAgICAgICAgICAgICAgaW5mIDAuNDE4ODc5MDMgICAgICAgIGluZl2UjApfbnBfcmFuZG9tlE51Yi4=",
@@ -101,16 +78,39 @@
101
  },
102
  "action_space": {
103
  ":type:": "<class 'gymnasium.spaces.discrete.Discrete'>",
104
- ":serialized:": "gAWV/QAAAAAAAACMGWd5bW5hc2l1bS5zcGFjZXMuZGlzY3JldGWUjAhEaXNjcmV0ZZSTlCmBlH2UKIwBbpSMFW51bXB5LmNvcmUubXVsdGlhcnJheZSMBnNjYWxhcpSTlIwFbnVtcHmUjAVkdHlwZZSTlIwCaTiUiYiHlFKUKEsDjAE8lE5OTkr/////Sv////9LAHSUYkMIAgAAAAAAAACUhpRSlIwFc3RhcnSUaAhoDkMIAAAAAAAAAACUhpRSlIwGX3NoYXBllCmMBWR0eXBllGgLjAJpOJSJiIeUUpQoSwNoD05OTkr/////Sv////9LAHSUYowKX25wX3JhbmRvbZROdWIu",
105
  "n": "2",
106
  "start": "0",
107
  "_shape": [],
108
  "dtype": "int64",
109
  "_np_random": null
110
  },
111
- "n_envs": 1,
 
112
  "lr_schedule": {
113
  ":type:": "<class 'function'>",
114
- ":serialized:": "gAWVLQMAAAAAAACMF2Nsb3VkcGlja2xlLmNsb3VkcGlja2xllIwOX21ha2VfZnVuY3Rpb26Uk5QoaACMDV9idWlsdGluX3R5cGWUk5SMCENvZGVUeXBllIWUUpQoSwFLAEsASwFLA0sTQwx0AIgAfACDAYMBUwCUToWUjAVmbG9hdJSFlIwScHJvZ3Jlc3NfcmVtYWluaW5nlIWUjD1jOlxQWVRIT05cbGliXHNpdGUtcGFja2FnZXNcc3RhYmxlX2Jhc2VsaW5lczNcY29tbW9uXHV0aWxzLnB5lIwIPGxhbWJkYT6US2JDAgwAlIwOdmFsdWVfc2NoZWR1bGWUhZQpdJRSlH2UKIwLX19wYWNrYWdlX1+UjBhzdGFibGVfYmFzZWxpbmVzMy5jb21tb26UjAhfX25hbWVfX5SMHnN0YWJsZV9iYXNlbGluZXMzLmNvbW1vbi51dGlsc5SMCF9fZmlsZV9flGgOdU5OaACMEF9tYWtlX2VtcHR5X2NlbGyUk5QpUpSFlHSUUpRoAIwSX2Z1bmN0aW9uX3NldHN0YXRllJOUaCB9lH2UKGgYaA+MDF9fcXVhbG5hbWVfX5SMIWdldF9zY2hlZHVsZV9mbi48bG9jYWxzPi48bGFtYmRhPpSMD19fYW5ub3RhdGlvbnNfX5R9lIwOX19rd2RlZmF1bHRzX1+UTowMX19kZWZhdWx0c19flE6MCl9fbW9kdWxlX1+UaBmMB19fZG9jX1+UTowLX19jbG9zdXJlX1+UaACMCl9tYWtlX2NlbGyUk5RoAihoByhLAUsASwBLAUsBSxNDBIgAUwCUaAkpjAFflIWUaA6MBGZ1bmOUS4ZDAgQBlIwDdmFslIWUKXSUUpRoFU5OaBwpUpSFlHSUUpRoImg8fZR9lChoGGgzaCWMGWNvbnN0YW50X2ZuLjxsb2NhbHM+LmZ1bmOUaCd9lGgpTmgqTmgraBloLE5oLWgvRz9G8AaNuLrHhZRSlIWUjBdfY2xvdWRwaWNrbGVfc3VibW9kdWxlc5RdlIwLX19nbG9iYWxzX1+UfZR1hpSGUjCFlFKUhZRoRF2UaEZ9lHWGlIZSMC4="
115
  }
116
  }
 
4
  ":serialized:": "gAWVOwAAAAAAAACMIXN0YWJsZV9iYXNlbGluZXMzLmNvbW1vbi5wb2xpY2llc5SMEUFjdG9yQ3JpdGljUG9saWN5lJOULg==",
5
  "__module__": "stable_baselines3.common.policies",
6
  "__doc__": "\n Policy class for actor-critic algorithms (has both policy and value prediction).\n Used by A2C, PPO and the likes.\n\n :param observation_space: Observation space\n :param action_space: Action space\n :param lr_schedule: Learning rate schedule (could be constant)\n :param net_arch: The specification of the policy and value networks.\n :param activation_fn: Activation function\n :param ortho_init: Whether to use or not orthogonal initialization\n :param use_sde: Whether to use State Dependent Exploration or not\n :param log_std_init: Initial value for the log standard deviation\n :param full_std: Whether to use (n_features x n_actions) parameters\n for the std instead of only (n_features,) when using gSDE\n :param use_expln: Use ``expln()`` function instead of ``exp()`` to ensure\n a positive standard deviation (cf paper). It allows to keep variance\n above zero and prevent it from growing too fast. In practice, ``exp()`` is usually enough.\n :param squash_output: Whether to squash the output using a tanh function,\n this allows to ensure boundaries when using gSDE.\n :param features_extractor_class: Features extractor to use.\n :param features_extractor_kwargs: Keyword arguments\n to pass to the features extractor.\n :param share_features_extractor: If True, the features extractor is shared between the policy and value networks.\n :param normalize_images: Whether to normalize images or not,\n dividing by 255.0 (True by default)\n :param optimizer_class: The optimizer to use,\n ``th.optim.Adam`` by default\n :param optimizer_kwargs: Additional keyword arguments,\n excluding the learning rate, to pass to the optimizer\n ",
7
+ "__init__": "<function ActorCriticPolicy.__init__ at 0x7dd2e615d1c0>",
8
+ "_get_constructor_parameters": "<function ActorCriticPolicy._get_constructor_parameters at 0x7dd2e615d260>",
9
+ "reset_noise": "<function ActorCriticPolicy.reset_noise at 0x7dd2e615d300>",
10
+ "_build_mlp_extractor": "<function ActorCriticPolicy._build_mlp_extractor at 0x7dd2e615d3a0>",
11
+ "_build": "<function ActorCriticPolicy._build at 0x7dd2e615d440>",
12
+ "forward": "<function ActorCriticPolicy.forward at 0x7dd2e615d4e0>",
13
+ "extract_features": "<function ActorCriticPolicy.extract_features at 0x7dd2e615d580>",
14
+ "_get_action_dist_from_latent": "<function ActorCriticPolicy._get_action_dist_from_latent at 0x7dd2e615d620>",
15
+ "_predict": "<function ActorCriticPolicy._predict at 0x7dd2e615d6c0>",
16
+ "evaluate_actions": "<function ActorCriticPolicy.evaluate_actions at 0x7dd2e615d760>",
17
+ "get_distribution": "<function ActorCriticPolicy.get_distribution at 0x7dd2e615d800>",
18
+ "predict_values": "<function ActorCriticPolicy.predict_values at 0x7dd2e615d8a0>",
19
  "__abstractmethods__": "frozenset()",
20
+ "_abc_impl": "<_abc._abc_data object at 0x7dd2e613b340>"
21
  },
22
  "verbose": 1,
23
  "policy_kwargs": {
 
30
  "weight_decay": 0
31
  }
32
  },
33
+ "num_timesteps": 500000,
34
+ "_total_timesteps": 500000,
35
  "_num_timesteps_at_start": 0,
36
  "seed": null,
37
  "action_noise": null,
38
+ "start_time": 1740778610633784151,
39
  "learning_rate": 0.0007,
40
  "tensorboard_log": null,
41
  "_last_obs": {
42
  ":type:": "<class 'numpy.ndarray'>",
43
+ ":serialized:": "gAWV9QAAAAAAAACMEm51bXB5LmNvcmUubnVtZXJpY5SMC19mcm9tYnVmZmVylJOUKJaAAAAAAAAAABDBwr7jcDa913moPBVvXDxPsai+/NwdvahoEDxFr5O8HLWkvj8oDr13pZ68YM82urJqxr5mdRK+p+sAPUV0Ij4CASy+Hg5hvjHzXLwu4N496KxzvqpHEL4QLCk9zLaNPXVm8DxINw+9cIEWPQSCIb2xwL2+TScmPnNmMjx1tG6+lIwFbnVtcHmUjAVkdHlwZZSTlIwCZjSUiYiHlFKUKEsDjAE8lE5OTkr/////Sv////9LAHSUYksISwSGlIwBQ5R0lFKULg=="
44
  },
45
  "_last_episode_starts": {
46
  ":type:": "<class 'numpy.ndarray'>",
47
+ ":serialized:": "gAWVewAAAAAAAACMEm51bXB5LmNvcmUubnVtZXJpY5SMC19mcm9tYnVmZmVylJOUKJYIAAAAAAAAAAAAAAAAAAEAlIwFbnVtcHmUjAVkdHlwZZSTlIwCYjGUiYiHlFKUKEsDjAF8lE5OTkr/////Sv////9LAHSUYksIhZSMAUOUdJRSlC4="
48
  },
49
  "_last_original_obs": null,
50
  "_episode_num": 0,
 
54
  "_stats_window_size": 100,
55
  "ep_info_buffer": {
56
  ":type:": "<class 'collections.deque'>",
57
+ ":serialized:": "gAWVRAwAAAAAAACMC2NvbGxlY3Rpb25zlIwFZGVxdWWUk5QpS2SGlFKUKH2UKIwBcpRHQH9AAAAAAACMAWyUTfQBjAF0lEdAYVynCO3lS3V9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGFtCaqjrRl1fZQoaAZHQH9AAAAAAABoB030AWgIR0Bhb9XPqs2fdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYXjNTtLL6nV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGF7UbLlmvp1fZQoaAZHQH9AAAAAAABoB030AWgIR0Bhfv0XgtOEdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYYPOu7pV0nV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGGMZAQg9vF1fZQoaAZHQH9AAAAAAABoB030AWgIR0BhkuS6lLvkdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYaUb7TDwY3V9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGGoP6TGHYZ1fZQoaAZHQH9AAAAAAABoB030AWgIR0Bhsk/0NBnjdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYbWrEtNBW3V9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGG6oGY8dPt1fZQoaAZHQH9AAAAAAABoB030AWgIR0BhwA5FPSDzdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYckSHM2WIHV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGHMwsf7rLR1fZQoaAZHQH9AAAAAAABoB030AWgIR0Bh2g71ZkkKdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYdwweNkvsnV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGHinZsbedl1fZQoaAZHQH9AAAAAAABoB030AWgIR0Bh5IdbPhQ4dX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYeb7v5P/JnV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGHp4cebNKR1fZQoaAZHQH9AAAAAAABoB030AWgIR0Bh71tbcGkfdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYfOZKnNxEXV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGIAen62v0R1fZQoaAZHQH9AAAAAAABoB030AWgIR0BiAgMQVbiZdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYgdje9Ba93V9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGIJI1UEPlN1fZQoaAZHQH9AAAAAAABoB030AWgIR0BiC5shxHXmdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYg5hd+ocaXV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGIVBpYcNpd1fZQoaAZHQH9AAAAAAABoB030AWgIR0BiGaiItUXIdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYicLdepn6HV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGIo2YnfEXN1fZQoaAZHQH9AAAAAAABoB030AWgIR0BiLm6y0KJEdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYjAvOhTOxHV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGIyr56+nIh1fZQoaAZHQH9AAAAAAABoB030AWgIR0BiNVq8DjiodX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYjsmCROk+HV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGI/TVUdaMd1fZQoaAZHQH9AAAAAAABoB030AWgIR0BiS9GEwnIAdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYk1tZ3cHnnV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGJUMwco6S11fZQoaAZHQH9AAAAAAABoB030AWgIR0BiVdV1fVqfdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYlh+85CF9XV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGJbjC53C9B1fZQoaAZHQH9AAAAAAABoB030AWgIR0BiYY20iQkpdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYmXgdfb9InV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGJzVcMVk+Z1fZQoaAZHQH9AAAAAAABoB030AWgIR0BidQagmJFcdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYnqb7TDwY3V9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGJ8+y7f51x1fZQoaAZHQH9AAAAAAABoB030AWgIR0Bif+7BfrrxdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYoLmbsniN3V9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGKJEdvKlpJ1fZQoaAZHQH9AAAAAAABoB030AWgIR0BijRLPD50sdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYplh6Skj5nV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGKa9oN/e+F1fZQoaAZHQH9AAAAAAABoB030AWgIR0BioJfMOf/WdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYqKjk+5e7nV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGKlst03fhx1fZQoaAZHQH9AAAAAAABoB030AWgIR0BiqOK4x1xLdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYq6UornTzHV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGKy35WRzRx1fZQoaAZHQH9AAAAAAABoB030AWgIR0BiwFPrOZ9edX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYsHo371qWXV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGLIOnVG0/p1fZQoaAZHQH9AAAAAAABoB030AWgIR0BiyfT1CgK4dX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYs0iGFi8WnV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGLQgksz2vl1fZQoaAZHQH9AAAAAAABoB030AWgIR0Bi1y57PY4AdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYtt9Hc1wYXV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGLoWjXWe6J1fZQoaAZHQH9AAAAAAABoB030AWgIR0Bi6iMkyDZldX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYvC/IsAeaXV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGLy9b5dnkF1fZQoaAZHQH9AAAAAAABoB030AWgIR0Bi9hgLJCBxdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYvlGxUvPC3V9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGL/hBZ6lch1fZQoaAZHQH9AAAAAAABoB030AWgIR0BjA+UnogV5dX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYxRoIv8IiXV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGMXYCQtBfN1fZQoaAZHQH9AAAAAAABoB030AWgIR0BjIAgDA8B/
dX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYyMCROk+HXV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGMmooE0SAZ1fZQoaAZHQH9AAAAAAABoB030AWgIR0BjK0tsenyedX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYzRf8/D+BHV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGM5ZUtI0651fZQoaAZHQH9AAAAAAABoB030AWgIR0BjS3CIk7fYdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAY05cZccENnV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGNX7KaG5+Z1fZQoaAZHQH9AAAAAAABoB030AWgIR0BjWrrLQokSdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAY1+MPz4DcXV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGNkgctGus91fZQoaAZHQH9AAAAAAABoB030AWgIR0BjbSZ0CA+ZdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAY3GTzundf3V9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGN/FEZzgdh1fZQoaAZHQH9AAAAAAABoB030AWgIR0BjgM+qzZ6EdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAY4fLOiWVvHVlLg=="
58
  },
59
  "ep_success_buffer": {
60
  ":type:": "<class 'collections.deque'>",
61
  ":serialized:": "gAWVIAAAAAAAAACMC2NvbGxlY3Rpb25zlIwFZGVxdWWUk5QpS2SGlFKULg=="
62
  },
63
+ "_n_updates": 12500,
 
64
  "observation_space": {
65
  ":type:": "<class 'gymnasium.spaces.box.Box'>",
66
  ":serialized:": "gAWV/gEAAAAAAACMFGd5bW5hc2l1bS5zcGFjZXMuYm94lIwDQm94lJOUKYGUfZQojAVkdHlwZZSMBW51bXB5lIwFZHR5cGWUk5SMAmY0lImIh5RSlChLA4wBPJROTk5K/////0r/////SwB0lGKMBl9zaGFwZZRLBIWUjANsb3eUjBJudW1weS5jb3JlLm51bWVyaWOUjAtfZnJvbWJ1ZmZlcpSTlCiWEAAAAAAAAACamZnAAACA/1B31r4AAID/lGgLSwSFlIwBQ5R0lFKUjA1ib3VuZGVkX2JlbG93lGgTKJYEAAAAAAAAAAEAAQCUaAiMAmIxlImIh5RSlChLA4wBfJROTk5K/////0r/////SwB0lGJLBIWUaBZ0lFKUjARoaWdolGgTKJYQAAAAAAAAAJqZmUAAAIB/UHfWPgAAgH+UaAtLBIWUaBZ0lFKUjA1ib3VuZGVkX2Fib3ZllGgTKJYEAAAAAAAAAAEAAQCUaB1LBIWUaBZ0lFKUjAhsb3dfcmVwcpSMMVstNC44ICAgICAgICAgICAgICAgLWluZiAtMC40MTg4NzkwMyAgICAgICAgLWluZl2UjAloaWdoX3JlcHKUjC1bNC44ICAgICAgICAgICAgICAgaW5mIDAuNDE4ODc5MDMgICAgICAgIGluZl2UjApfbnBfcmFuZG9tlE51Yi4=",
 
78
  },
79
  "action_space": {
80
  ":type:": "<class 'gymnasium.spaces.discrete.Discrete'>",
81
+ ":serialized:": "gAWV2wAAAAAAAACMGWd5bW5hc2l1bS5zcGFjZXMuZGlzY3JldGWUjAhEaXNjcmV0ZZSTlCmBlH2UKIwBbpSMFW51bXB5LmNvcmUubXVsdGlhcnJheZSMBnNjYWxhcpSTlIwFbnVtcHmUjAVkdHlwZZSTlIwCaTiUiYiHlFKUKEsDjAE8lE5OTkr/////Sv////9LAHSUYkMIAgAAAAAAAACUhpRSlIwFc3RhcnSUaAhoDkMIAAAAAAAAAACUhpRSlIwGX3NoYXBllCmMBWR0eXBllGgOjApfbnBfcmFuZG9tlE51Yi4=",
82
  "n": "2",
83
  "start": "0",
84
  "_shape": [],
85
  "dtype": "int64",
86
  "_np_random": null
87
  },
88
+ "n_envs": 8,
89
+ "n_steps": 5,
90
+ "gamma": 0.99,
91
+ "gae_lambda": 1.0,
92
+ "ent_coef": 0.0,
93
+ "vf_coef": 0.5,
94
+ "max_grad_norm": 0.5,
95
+ "rollout_buffer_class": {
96
+ ":type:": "<class 'abc.ABCMeta'>",
97
+ ":serialized:": "gAWVNgAAAAAAAACMIHN0YWJsZV9iYXNlbGluZXMzLmNvbW1vbi5idWZmZXJzlIwNUm9sbG91dEJ1ZmZlcpSTlC4=",
98
+ "__module__": "stable_baselines3.common.buffers",
99
+ "__annotations__": "{'observations': <class 'numpy.ndarray'>, 'actions': <class 'numpy.ndarray'>, 'rewards': <class 'numpy.ndarray'>, 'advantages': <class 'numpy.ndarray'>, 'returns': <class 'numpy.ndarray'>, 'episode_starts': <class 'numpy.ndarray'>, 'log_probs': <class 'numpy.ndarray'>, 'values': <class 'numpy.ndarray'>}",
100
+ "__doc__": "\n Rollout buffer used in on-policy algorithms like A2C/PPO.\n It corresponds to ``buffer_size`` transitions collected\n using the current policy.\n This experience will be discarded after the policy update.\n In order to use PPO objective, we also store the current value of each state\n and the log probability of each taken action.\n\n The term rollout here refers to the model-free notion and should not\n be used with the concept of rollout used in model-based RL or planning.\n Hence, it is only involved in policy and value function training but not action selection.\n\n :param buffer_size: Max number of element in the buffer\n :param observation_space: Observation space\n :param action_space: Action space\n :param device: PyTorch device\n :param gae_lambda: Factor for trade-off of bias vs variance for Generalized Advantage Estimator\n Equivalent to classic advantage when set to 1.\n :param gamma: Discount factor\n :param n_envs: Number of parallel environments\n ",
101
+ "__init__": "<function RolloutBuffer.__init__ at 0x7dd2e652df80>",
102
+ "reset": "<function RolloutBuffer.reset at 0x7dd2e652e020>",
103
+ "compute_returns_and_advantage": "<function RolloutBuffer.compute_returns_and_advantage at 0x7dd2e652e0c0>",
104
+ "add": "<function RolloutBuffer.add at 0x7dd2e652e200>",
105
+ "get": "<function RolloutBuffer.get at 0x7dd2e652e2a0>",
106
+ "_get_samples": "<function RolloutBuffer._get_samples at 0x7dd2e652e340>",
107
+ "__abstractmethods__": "frozenset()",
108
+ "_abc_impl": "<_abc._abc_data object at 0x7dd2e64d1100>"
109
+ },
110
+ "rollout_buffer_kwargs": {},
111
+ "normalize_advantage": false,
112
  "lr_schedule": {
113
  ":type:": "<class 'function'>",
114
+ ":serialized:": "gAWVIAQAAAAAAACMF2Nsb3VkcGlja2xlLmNsb3VkcGlja2xllIwOX21ha2VfZnVuY3Rpb26Uk5QoaACMDV9idWlsdGluX3R5cGWUk5SMCENvZGVUeXBllIWUUpQoSwFLAEsASwFLBUsTQzSVAZcAdAEAAAAAAAAAAAAAAgCJAXwApgEAAKsBAAAAAAAAAACmAQAAqwEAAAAAAAAAAFMAlE6FlIwFZmxvYXSUhZSMEnByb2dyZXNzX3JlbWFpbmluZ5SFlIxJL3Vzci9sb2NhbC9saWIvcHl0aG9uMy4xMS9kaXN0LXBhY2thZ2VzL3N0YWJsZV9iYXNlbGluZXMzL2NvbW1vbi91dGlscy5weZSMCDxsYW1iZGE+lIwhZ2V0X3NjaGVkdWxlX2ZuLjxsb2NhbHM+LjxsYW1iZGE+lEtiQxr4gAClZahOqE7QO03RLE7ULE7RJk/UJk+AAJRDAJSMDnZhbHVlX3NjaGVkdWxllIWUKXSUUpR9lCiMC19fcGFja2FnZV9flIwYc3RhYmxlX2Jhc2VsaW5lczMuY29tbW9ulIwIX19uYW1lX1+UjB5zdGFibGVfYmFzZWxpbmVzMy5jb21tb24udXRpbHOUjAhfX2ZpbGVfX5SMSS91c3IvbG9jYWwvbGliL3B5dGhvbjMuMTEvZGlzdC1wYWNrYWdlcy9zdGFibGVfYmFzZWxpbmVzMy9jb21tb24vdXRpbHMucHmUdU5OaACMEF9tYWtlX2VtcHR5X2NlbGyUk5QpUpSFlHSUUpRoAIwSX2Z1bmN0aW9uX3NldHN0YXRllJOUaCN9lH2UKGgajAg8bGFtYmRhPpSMDF9fcXVhbG5hbWVfX5SMIWdldF9zY2hlZHVsZV9mbi48bG9jYWxzPi48bGFtYmRhPpSMD19fYW5ub3RhdGlvbnNfX5R9lIwOX19rd2RlZmF1bHRzX1+UTowMX19kZWZhdWx0c19flE6MCl9fbW9kdWxlX1+UaBuMB19fZG9jX1+UTowLX19jbG9zdXJlX1+UaACMCl9tYWtlX2NlbGyUk5RoAihoByhLAUsASwBLAUsBSxNDCJUBlwCJAVMAlGgJKYwBX5SFlGgOjARmdW5jlIwZY29uc3RhbnRfZm4uPGxvY2Fscz4uZnVuY5RLhkMI+IAA2A8SiAqUaBKMA3ZhbJSFlCl0lFKUaBdOTmgfKVKUhZR0lFKUaCVoQX2UfZQoaBqMBGZ1bmOUaCmMGWNvbnN0YW50X2ZuLjxsb2NhbHM+LmZ1bmOUaCt9lGgtTmguTmgvaBtoME5oMWgzRz9G8AaNuLrHhZRSlIWUjBdfY2xvdWRwaWNrbGVfc3VibW9kdWxlc5RdlIwLX19nbG9iYWxzX1+UfZR1hpSGUjCFlFKUhZRoSl2UaEx9lHWGlIZSMC4="
115
  }
116
  }
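The serialized `a2c-cartpole/data` above shows that the new checkpoint was trained with 8 parallel environments for 500,000 timesteps using the default A2C hyperparameters (n_steps=5, gamma=0.99, learning_rate=0.0007, RMSprop optimizer). A hedged training sketch under those settings, assuming the standard `MlpPolicy` (which resolves to the serialized `ActorCriticPolicy` here), could look like:

```python
# Training sketch reconstructed from the hyperparameters serialized in a2c-cartpole/data;
# this is not the author's original script.
from stable_baselines3 import A2C
from stable_baselines3.common.env_util import make_vec_env

env = make_vec_env("CartPole-v1", n_envs=8)  # n_envs: 8 in the new checkpoint

model = A2C(
    "MlpPolicy",
    env,
    learning_rate=7e-4,   # learning_rate: 0.0007
    n_steps=5,            # n_steps: 5
    gamma=0.99,           # gamma: 0.99
    gae_lambda=1.0,       # gae_lambda: 1.0
    ent_coef=0.0,         # ent_coef: 0.0
    vf_coef=0.5,          # vf_coef: 0.5
    max_grad_norm=0.5,    # max_grad_norm: 0.5
    verbose=1,
)
model.learn(total_timesteps=500_000)  # _total_timesteps: 500000
model.save("a2c-cartpole")
```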
a2c-cartpole/policy.optimizer.pth CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:4596d34bd2ea939177ae1f567918148f21a7a91d216666454d1bebb38859f660
3
  size 43406
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ea8d3578867e41481445cd5d85104a8949cccc352c8c962b5c81a8385f44587e
3
  size 43406
a2c-cartpole/policy.pth CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:bc33bd4fb1b9cb00b0d7e58f449d9f10ffd25117220d113d098cd431afd63f52
3
  size 41074
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:00e05ea7f1dd5e53540957125dbf4bf469deaaaad7e44c975173e5169e65a4ae
3
  size 41074
a2c-cartpole/pytorch_variables.pth CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:fb4dde0c1ad63b7740276006a06cc491b21b407ea6c889928c223ec77ddad79f
3
  size 864
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0c35cea3b2e60fb5e7e162d3592df775cd400e575a31c72f359fb9e654ab00c5
3
  size 864
a2c-cartpole/system_info.txt CHANGED
@@ -1,9 +1,9 @@
1
- - OS: Windows-10-10.0.19045-SP0 10.0.19045
2
- - Python: 3.10.6
3
  - Stable-Baselines3: 2.5.0
4
- - PyTorch: 2.6.0+cpu
5
  - GPU Enabled: False
6
  - Numpy: 1.26.4
7
- - Cloudpickle: 3.0.0
8
  - Gymnasium: 1.0.0
9
- - OpenAI Gym: 0.26.2
 
1
+ - OS: Linux-6.1.85+-x86_64-with-glibc2.35 # 1 SMP PREEMPT_DYNAMIC Thu Jun 27 21:05:47 UTC 2024
2
+ - Python: 3.11.11
3
  - Stable-Baselines3: 2.5.0
4
+ - PyTorch: 2.5.1+cu124
5
  - GPU Enabled: False
6
  - Numpy: 1.26.4
7
+ - Cloudpickle: 3.1.1
8
  - Gymnasium: 1.0.0
9
+ - OpenAI Gym: 0.25.2
config.json CHANGED
@@ -1 +1 @@
1
- {"policy_class": {":type:": "<class 'abc.ABCMeta'>", ":serialized:": "gAWVOwAAAAAAAACMIXN0YWJsZV9iYXNlbGluZXMzLmNvbW1vbi5wb2xpY2llc5SMEUFjdG9yQ3JpdGljUG9saWN5lJOULg==", "__module__": "stable_baselines3.common.policies", "__doc__": "\n Policy class for actor-critic algorithms (has both policy and value prediction).\n Used by A2C, PPO and the likes.\n\n :param observation_space: Observation space\n :param action_space: Action space\n :param lr_schedule: Learning rate schedule (could be constant)\n :param net_arch: The specification of the policy and value networks.\n :param activation_fn: Activation function\n :param ortho_init: Whether to use or not orthogonal initialization\n :param use_sde: Whether to use State Dependent Exploration or not\n :param log_std_init: Initial value for the log standard deviation\n :param full_std: Whether to use (n_features x n_actions) parameters\n for the std instead of only (n_features,) when using gSDE\n :param use_expln: Use ``expln()`` function instead of ``exp()`` to ensure\n a positive standard deviation (cf paper). It allows to keep variance\n above zero and prevent it from growing too fast. In practice, ``exp()`` is usually enough.\n :param squash_output: Whether to squash the output using a tanh function,\n this allows to ensure boundaries when using gSDE.\n :param features_extractor_class: Features extractor to use.\n :param features_extractor_kwargs: Keyword arguments\n to pass to the features extractor.\n :param share_features_extractor: If True, the features extractor is shared between the policy and value networks.\n :param normalize_images: Whether to normalize images or not,\n dividing by 255.0 (True by default)\n :param optimizer_class: The optimizer to use,\n ``th.optim.Adam`` by default\n :param optimizer_kwargs: Additional keyword arguments,\n excluding the learning rate, to pass to the optimizer\n ", "__init__": "<function ActorCriticPolicy.__init__ at 0x0000016682DF1120>", "_get_constructor_parameters": "<function ActorCriticPolicy._get_constructor_parameters at 0x0000016682DF11B0>", "reset_noise": "<function ActorCriticPolicy.reset_noise at 0x0000016682DF1240>", "_build_mlp_extractor": "<function ActorCriticPolicy._build_mlp_extractor at 0x0000016682DF12D0>", "_build": "<function ActorCriticPolicy._build at 0x0000016682DF1360>", "forward": "<function ActorCriticPolicy.forward at 0x0000016682DF13F0>", "extract_features": "<function ActorCriticPolicy.extract_features at 0x0000016682DF1480>", "_get_action_dist_from_latent": "<function ActorCriticPolicy._get_action_dist_from_latent at 0x0000016682DF1510>", "_predict": "<function ActorCriticPolicy._predict at 0x0000016682DF15A0>", "evaluate_actions": "<function ActorCriticPolicy.evaluate_actions at 0x0000016682DF1630>", "get_distribution": "<function ActorCriticPolicy.get_distribution at 0x0000016682DF16C0>", "predict_values": "<function ActorCriticPolicy.predict_values at 0x0000016682DF1750>", "__abstractmethods__": "frozenset()", "_abc_impl": "<_abc._abc_data object at 0x0000016682DEF580>"}, "verbose": 1, "policy_kwargs": {":type:": "<class 'dict'>", ":serialized:": "gAWVgQAAAAAAAAB9lCiMD29wdGltaXplcl9jbGFzc5SME3RvcmNoLm9wdGltLnJtc3Byb3CUjAdSTVNwcm9wlJOUjBBvcHRpbWl6ZXJfa3dhcmdzlH2UKIwFYWxwaGGURz/vrhR64UeujANlcHOURz7k+LWI42jxjAx3ZWlnaHRfZGVjYXmUSwB1dS4=", "optimizer_class": "<class 'torch.optim.rmsprop.RMSprop'>", "optimizer_kwargs": {"alpha": 0.99, "eps": 1e-05, "weight_decay": 0}}, "num_timesteps": 10000, "_total_timesteps": 10000, "_num_timesteps_at_start": 0, "seed": null, 
"action_noise": null, "start_time": 1739281209040567000, "learning_rate": 0.0007, "tensorboard_log": null, "_last_obs": {":type:": "<class 'numpy.ndarray'>", ":serialized:": "gAWVhQAAAAAAAACMEm51bXB5LmNvcmUubnVtZXJpY5SMC19mcm9tYnVmZmVylJOUKJYQAAAAAAAAAOMWg79KyFK951vfPRKusj2UjAVudW1weZSMBWR0eXBllJOUjAJmNJSJiIeUUpQoSwOMATyUTk5OSv////9K/////0sAdJRiSwFLBIaUjAFDlHSUUpQu"}, "_last_episode_starts": {":type:": "<class 'numpy.ndarray'>", ":serialized:": "gAWVdAAAAAAAAACMEm51bXB5LmNvcmUubnVtZXJpY5SMC19mcm9tYnVmZmVylJOUKJYBAAAAAAAAAACUjAVudW1weZSMBWR0eXBllJOUjAJiMZSJiIeUUpQoSwOMAXyUTk5OSv////9K/////0sAdJRiSwGFlIwBQ5R0lFKULg=="}, "_last_original_obs": null, "_episode_num": 0, "use_sde": false, "sde_sample_freq": -1, "_current_progress_remaining": 0.0, "_stats_window_size": 100, "ep_info_buffer": {":type:": "<class 'collections.deque'>", ":serialized:": "gAWV5QsAAAAAAACMC2NvbGxlY3Rpb25zlIwFZGVxdWWUk5QpS2SGlFKUKH2UKIwBcpRHQDsAAAAAAACMAWyUSxuMAXSUR0ARYoQWepXIdX2UKGgGR0A7AAAAAAAAaAdLG2gIR0ARm9ytFKChdX2UKGgGR0A3AAAAAAAAaAdLF2gIR0ARwLmZE2HddX2UKGgGR0A+AAAAAAAAaAdLHmgIR0AR+hHskY4ydX2UKGgGR0BIgAAAAAAAaAdLMWgIR0ASTQY1pCa7dX2UKGgGR0AuAAAAAAAAaAdLD2gIR0ASZ6KLsKLLdX2UKGgGR0A5AAAAAAAAaAdLGWgIR0ASlLGrCFbndX2UKGgGR0AyAAAAAAAAaAdLEmgIR0ASvqzJIUaidX2UKGgGR0BBgAAAAAAAaAdLI2gIR0AS+hHskY4ydX2UKGgGR0A7AAAAAAAAaAdLG2gIR0ATL7gsK9f1dX2UKGgGR0BDgAAAAAAAaAdLJ2gIR0ATenjyWiUQdX2UKGgGR0A3AAAAAAAAaAdLF2gIR0ATo29+PRzBdX2UKGgGR0A9AAAAAAAAaAdLHWgIR0AT0oiLVFx5dX2UKGgGR0A6AAAAAAAAaAdLGmgIR0AT/YukDZDidX2UKGgGR0A4AAAAAAAAaAdLGGgIR0AUK6DoQnQZdX2UKGgGR0BEgAAAAAAAaAdLKWgIR0AUc1cdHUc5dX2UKGgGR0A4AAAAAAAAaAdLGGgIR0AUnEP1+RYBdX2UKGgGR0A3AAAAAAAAaAdLF2gIR0AU0YQ8OkLydX2UKGgGR0A5AAAAAAAAaAdLGWgIR0AVAqmTC+DfdX2UKGgGR0BWwAAAAAAAaAdLW2gIR0AVo287IT4+dX2UKGgGR0A6AAAAAAAAaAdLGmgIR0AV0HzH0btJdX2UKGgGR0BQQAAAAAAAaAdLQWgIR0AWRTjvNNahdX2UKGgGR0A5AAAAAAAAaAdLGWgIR0AWb779AHE/dX2UKGgGR0BJAAAAAAAAaAdLMmgIR0AWxL127nPndX2UKGgGR0BGgAAAAAAAaAdLLWgIR0AXFqh11W8zdX2UKGgGR0BXQAAAAAAAaAdLXWgIR0AXwakyk9EDdX2UKGgGR0BVQAAAAAAAaAdLVWgIR0AYYWepXIU8dX2UKGgGR0A6AAAAAAAAaAdLGmgIR0AYjGm1pj+adX2UKGgGR0BBAAAAAAAAaAdLImgIR0AYzO2RaHKwdX2UKGgGR0AyAAAAAAAAaAdLEmgIR0AY8tAcDKYBdX2UKGgGR0BQgAAAAAAAaAdLQmgIR0AZd+7UXpGGdX2UKGgGR0A1AAAAAAAAaAdLFWgIR0AZo/eLvTgEdX2UKGgGR0AuAAAAAAAAaAdLD2gIR0AZvZDiOvMbdX2UKGgGR0A6AAAAAAAAaAdLGmgIR0AZ7bmEGqxUdX2UKGgGR0AyAAAAAAAAaAdLEmgIR0AaEYk3S8aodX2UKGgGR0A9AAAAAAAAaAdLHWgIR0AaPZGrjo6kdX2UKGgGR0BAAAAAAAAAaAdLIGgIR0AadeKKpDNRdX2UKGgGR0BCgAAAAAAAaAdLJWgIR0AarTMJQcghdX2UKGgGR0BCgAAAAAAAaAdLJWgIR0Aa9u0kWykcdX2UKGgGR0BWQAAAAAAAaAdLWWgIR0AbhUMoc7yQdX2UKGgGR0BGAAAAAAAAaAdLLGgIR0Ab2TvAoG6gdX2UKGgGR0BEAAAAAAAAaAdLKGgIR0AcIe/5+H8CdX2UKGgGR0BAAAAAAAAAaAdLIGgIR0AcX2Dg62fDdX2UKGgGR0BEgAAAAAAAaAdLKWgIR0Acntx+8XendX2UKGgGR0A9AAAAAAAAaAdLHWgIR0Ac0Qsf7rLRdX2UKGgGR0A/AAAAAAAAaAdLH2gIR0AdDG5tm+TNdX2UKGgGR0BDAAAAAAAAaAdLJmgIR0AdRcbBGhEjdX2UKGgGR0BQwAAAAAAAaAdLQ2gIR0Adv6GgzxgBdX2UKGgGR0BRQAAAAAAAaAdLRWgIR0AeN3A2ycCpdX2UKGgGR0BIAAAAAAAAaAdLMGgIR0AelJ8OTaCddX2UKGgGR0BHgAAAAAAAaAdLL2gIR0Ae6qwQlKK6dX2UKGgGR0BHAAAAAAAAaAdLLmgIR0AfPZ39rGipdX2UKGgGR0BUAAAAAAAAaAdLUGgIR0Af43n6l+EzdX2UKGgGR0BBgAAAAAAAaAdLI2gIR0AgEPd2xIJ7dX2UKGgGR0A5AAAAAAAAaAdLGWgIR0AgK5eZ5Rj0dX2UKGgGR0BiIAAAAAAAaAdLkWgIR0Agry1eBxxUdX2UKGgGR0BDgAAAAAAAaAdLJ2gIR0Agz27nPmgbdX2UKGgGR0BCAAAAAAAAaAdLJGgIR0Ag7J4jbBXTdX2UKGgGR0BQQAAAAAAAaAdLQWgIR0AhLyz5XU6QdX2UKGgGR0BHAAAAAAAAaAdLLmgIR0AhXALy+YdAdX2UKGgGR0BSQAAAAAAAaAdLSWgIR0AhnAN5MURGdX2UKGgGR0A+AAAAAAAAaAdLHmgIR0AhvlFMIu5CdX2UKGgGR0A+AAAAAAAAaAdLHmgIR0Ah3xWDHwPRdX2UKGgGR0BCAAAAAAAAaAdLJGgIR0Ah/1bJOnEVdX2UKGgGR0BFgAAAAAAAaAdLK2gIR0AiISGrS3LFdX2UKGgGR0A+AAAAAAAAaAdLHmgIR0AiOLMLW
qcWdX2UKGgGR0A7AAAAAAAAaAdLG2gIR0AiVmEGqxTsdX2UKGgGR0BMgAAAAAAAaAdLOWgIR0Aih4fwI+nqdX2UKGgGR0A5AAAAAAAAaAdLGWgIR0AinIWgvlEJdX2UKGgGR0BBAAAAAAAAaAdLImgIR0Aiwml67dzodX2UKGgGR0A+AAAAAAAAaAdLHmgIR0Ai4Btk4FRpdX2UKGgGR0BSQAAAAAAAaAdLSWgIR0AjLK5CngpCdX2UKGgGR0BEAAAAAAAAaAdLKGgIR0AjTwCKaXrudX2UKGgGR0BHAAAAAAAAaAdLLmgIR0Ajd27FsHjZdX2UKGgGR0BKgAAAAAAAaAdLNWgIR0Ajp5Rjz7MxdX2UKGgGR0BLgAAAAAAAaAdLN2gIR0Aj2j+rELpidX2UKGgGR0BYQAAAAAAAaAdLYWgIR0AkLTGYKIBSdX2UKGgGR0BqYAAAAAAAaAdL02gIR0Ak5okAxSHedX2UKGgGR0BigAAAAAAAaAdLlGgIR0AlbDHfdhy9dX2UKGgGR0BjgAAAAAAAaAdLnGgIR0Al9eyAxzq9dX2UKGgGR0BcgAAAAAAAaAdLcmgIR0AmW8/UvwmWdX2UKGgGR0ByUAAAAAAAaAdNJQFoCEdAJ2OXeFcps3V9lChoBkdAbeAAAAAAAGgHS+9oCEdAKERa5f+junV9lChoBkdAXsAAAAAAAGgHS3toCEdAKMPXkHUtqnV9lChoBkdAVgAAAAAAAGgHS1hoCEdAKRcMEzO5a3V9lChoBkdAWcAAAAAAAGgHS2doCEdAKW+jdpItlXV9lChoBkdASoAAAAAAAGgHSzVoCEdAKZwzUI9kjHV9lChoBkdAU0AAAAAAAGgHS01oCEdAKeVstTUAk3V9lChoBkdATAAAAAAAAGgHSzhoCEdAKhF0YCQtBnV9lChoBkdAYMAAAAAAAGgHS4ZoCEdAKotSAH3UQXV9lChoBkdAZ2AAAAAAAGgHS7toCEdAKzIIWxhUi3V9lChoBkdAYaAAAAAAAGgHS41oCEdAK68Aiml67nV9lChoBkdAYQAAAAAAAGgHS4hoCEdALCGwRoRIz3V9lChoBkdAeGAAAAAAAGgHTYYBaAhHQC16Q5myxA11fZQoaAZHQGXAAAAAAABoB0uuaAhHQC4XPTodMkB1fZQoaAZHQGZAAAAAAABoB0uyaAhHQC6y4e9zwMJ1fZQoaAZHQGnAAAAAAABoB0vOaAhHQC9pKDkELYx1fZQoaAZHQHcwAAAAAABoB01zAWgIR0AwVdt2s7uEdX2UKGgGR0B6oAAAAAAAaAdNqgFoCEdAMRNMbm2b5XV9lChoBkdAeeAAAAAAAGgHTZ4BaAhHQDHISh8IAwR1ZS4="}, "ep_success_buffer": {":type:": "<class 'collections.deque'>", ":serialized:": "gAWVIAAAAAAAAACMC2NvbGxlY3Rpb25zlIwFZGVxdWWUk5QpS2SGlFKULg=="}, "_n_updates": 2000, "n_steps": 5, "gamma": 0.99, "gae_lambda": 1.0, "ent_coef": 0.0, "vf_coef": 0.5, "max_grad_norm": 0.5, "rollout_buffer_class": {":type:": "<class 'abc.ABCMeta'>", ":serialized:": "gAWVNgAAAAAAAACMIHN0YWJsZV9iYXNlbGluZXMzLmNvbW1vbi5idWZmZXJzlIwNUm9sbG91dEJ1ZmZlcpSTlC4=", "__module__": "stable_baselines3.common.buffers", "__annotations__": "{'observations': <class 'numpy.ndarray'>, 'actions': <class 'numpy.ndarray'>, 'rewards': <class 'numpy.ndarray'>, 'advantages': <class 'numpy.ndarray'>, 'returns': <class 'numpy.ndarray'>, 'episode_starts': <class 'numpy.ndarray'>, 'log_probs': <class 'numpy.ndarray'>, 'values': <class 'numpy.ndarray'>}", "__doc__": "\n Rollout buffer used in on-policy algorithms like A2C/PPO.\n It corresponds to ``buffer_size`` transitions collected\n using the current policy.\n This experience will be discarded after the policy update.\n In order to use PPO objective, we also store the current value of each state\n and the log probability of each taken action.\n\n The term rollout here refers to the model-free notion and should not\n be used with the concept of rollout used in model-based RL or planning.\n Hence, it is only involved in policy and value function training but not action selection.\n\n :param buffer_size: Max number of element in the buffer\n :param observation_space: Observation space\n :param action_space: Action space\n :param device: PyTorch device\n :param gae_lambda: Factor for trade-off of bias vs variance for Generalized Advantage Estimator\n Equivalent to classic advantage when set to 1.\n :param gamma: Discount factor\n :param n_envs: Number of parallel environments\n ", "__init__": "<function RolloutBuffer.__init__ at 0x0000016682B43A30>", "reset": "<function RolloutBuffer.reset at 0x0000016682B43AC0>", "compute_returns_and_advantage": "<function RolloutBuffer.compute_returns_and_advantage at 0x0000016682B43B50>", "add": "<function RolloutBuffer.add at 0x0000016682B43BE0>", "get": "<function RolloutBuffer.get at 
0x0000016682B43C70>", "_get_samples": "<function RolloutBuffer._get_samples at 0x0000016682B43D00>", "__abstractmethods__": "frozenset()", "_abc_impl": "<_abc._abc_data object at 0x0000016682B0B480>"}, "rollout_buffer_kwargs": {}, "normalize_advantage": false, "observation_space": {":type:": "<class 'gymnasium.spaces.box.Box'>", ":serialized:": "gAWV/gEAAAAAAACMFGd5bW5hc2l1bS5zcGFjZXMuYm94lIwDQm94lJOUKYGUfZQojAVkdHlwZZSMBW51bXB5lIwFZHR5cGWUk5SMAmY0lImIh5RSlChLA4wBPJROTk5K/////0r/////SwB0lGKMBl9zaGFwZZRLBIWUjANsb3eUjBJudW1weS5jb3JlLm51bWVyaWOUjAtfZnJvbWJ1ZmZlcpSTlCiWEAAAAAAAAACamZnAAACA/1B31r4AAID/lGgLSwSFlIwBQ5R0lFKUjA1ib3VuZGVkX2JlbG93lGgTKJYEAAAAAAAAAAEAAQCUaAiMAmIxlImIh5RSlChLA4wBfJROTk5K/////0r/////SwB0lGJLBIWUaBZ0lFKUjARoaWdolGgTKJYQAAAAAAAAAJqZmUAAAIB/UHfWPgAAgH+UaAtLBIWUaBZ0lFKUjA1ib3VuZGVkX2Fib3ZllGgTKJYEAAAAAAAAAAEAAQCUaB1LBIWUaBZ0lFKUjAhsb3dfcmVwcpSMMVstNC44ICAgICAgICAgICAgICAgLWluZiAtMC40MTg4NzkwMyAgICAgICAgLWluZl2UjAloaWdoX3JlcHKUjC1bNC44ICAgICAgICAgICAgICAgaW5mIDAuNDE4ODc5MDMgICAgICAgIGluZl2UjApfbnBfcmFuZG9tlE51Yi4=", "dtype": "float32", "_shape": [4], "low": "[-4.8 -inf -0.41887903 -inf]", "bounded_below": "[ True False True False]", "high": "[4.8 inf 0.41887903 inf]", "bounded_above": "[ True False True False]", "low_repr": "[-4.8 -inf -0.41887903 -inf]", "high_repr": "[4.8 inf 0.41887903 inf]", "_np_random": null}, "action_space": {":type:": "<class 'gymnasium.spaces.discrete.Discrete'>", ":serialized:": "gAWV/QAAAAAAAACMGWd5bW5hc2l1bS5zcGFjZXMuZGlzY3JldGWUjAhEaXNjcmV0ZZSTlCmBlH2UKIwBbpSMFW51bXB5LmNvcmUubXVsdGlhcnJheZSMBnNjYWxhcpSTlIwFbnVtcHmUjAVkdHlwZZSTlIwCaTiUiYiHlFKUKEsDjAE8lE5OTkr/////Sv////9LAHSUYkMIAgAAAAAAAACUhpRSlIwFc3RhcnSUaAhoDkMIAAAAAAAAAACUhpRSlIwGX3NoYXBllCmMBWR0eXBllGgLjAJpOJSJiIeUUpQoSwNoD05OTkr/////Sv////9LAHSUYowKX25wX3JhbmRvbZROdWIu", "n": "2", "start": "0", "_shape": [], "dtype": "int64", "_np_random": null}, "n_envs": 1, "lr_schedule": {":type:": "<class 'function'>", ":serialized:": "gAWVLQMAAAAAAACMF2Nsb3VkcGlja2xlLmNsb3VkcGlja2xllIwOX21ha2VfZnVuY3Rpb26Uk5QoaACMDV9idWlsdGluX3R5cGWUk5SMCENvZGVUeXBllIWUUpQoSwFLAEsASwFLA0sTQwx0AIgAfACDAYMBUwCUToWUjAVmbG9hdJSFlIwScHJvZ3Jlc3NfcmVtYWluaW5nlIWUjD1jOlxQWVRIT05cbGliXHNpdGUtcGFja2FnZXNcc3RhYmxlX2Jhc2VsaW5lczNcY29tbW9uXHV0aWxzLnB5lIwIPGxhbWJkYT6US2JDAgwAlIwOdmFsdWVfc2NoZWR1bGWUhZQpdJRSlH2UKIwLX19wYWNrYWdlX1+UjBhzdGFibGVfYmFzZWxpbmVzMy5jb21tb26UjAhfX25hbWVfX5SMHnN0YWJsZV9iYXNlbGluZXMzLmNvbW1vbi51dGlsc5SMCF9fZmlsZV9flGgOdU5OaACMEF9tYWtlX2VtcHR5X2NlbGyUk5QpUpSFlHSUUpRoAIwSX2Z1bmN0aW9uX3NldHN0YXRllJOUaCB9lH2UKGgYaA+MDF9fcXVhbG5hbWVfX5SMIWdldF9zY2hlZHVsZV9mbi48bG9jYWxzPi48bGFtYmRhPpSMD19fYW5ub3RhdGlvbnNfX5R9lIwOX19rd2RlZmF1bHRzX1+UTowMX19kZWZhdWx0c19flE6MCl9fbW9kdWxlX1+UaBmMB19fZG9jX1+UTowLX19jbG9zdXJlX1+UaACMCl9tYWtlX2NlbGyUk5RoAihoByhLAUsASwBLAUsBSxNDBIgAUwCUaAkpjAFflIWUaA6MBGZ1bmOUS4ZDAgQBlIwDdmFslIWUKXSUUpRoFU5OaBwpUpSFlHSUUpRoImg8fZR9lChoGGgzaCWMGWNvbnN0YW50X2ZuLjxsb2NhbHM+LmZ1bmOUaCd9lGgpTmgqTmgraBloLE5oLWgvRz9G8AaNuLrHhZRSlIWUjBdfY2xvdWRwaWNrbGVfc3VibW9kdWxlc5RdlIwLX19nbG9iYWxzX1+UfZR1hpSGUjCFlFKUhZRoRF2UaEZ9lHWGlIZSMC4="}, "system_info": {"OS": "Windows-10-10.0.19045-SP0 10.0.19045", "Python": "3.10.6", "Stable-Baselines3": "2.5.0", "PyTorch": "2.6.0+cpu", "GPU Enabled": "False", "Numpy": "1.26.4", "Cloudpickle": "3.0.0", "Gymnasium": "1.0.0", "OpenAI Gym": "0.26.2"}}
 
1
+ {"policy_class": {":type:": "<class 'abc.ABCMeta'>", ":serialized:": "gAWVOwAAAAAAAACMIXN0YWJsZV9iYXNlbGluZXMzLmNvbW1vbi5wb2xpY2llc5SMEUFjdG9yQ3JpdGljUG9saWN5lJOULg==", "__module__": "stable_baselines3.common.policies", "__doc__": "\n Policy class for actor-critic algorithms (has both policy and value prediction).\n Used by A2C, PPO and the likes.\n\n :param observation_space: Observation space\n :param action_space: Action space\n :param lr_schedule: Learning rate schedule (could be constant)\n :param net_arch: The specification of the policy and value networks.\n :param activation_fn: Activation function\n :param ortho_init: Whether to use or not orthogonal initialization\n :param use_sde: Whether to use State Dependent Exploration or not\n :param log_std_init: Initial value for the log standard deviation\n :param full_std: Whether to use (n_features x n_actions) parameters\n for the std instead of only (n_features,) when using gSDE\n :param use_expln: Use ``expln()`` function instead of ``exp()`` to ensure\n a positive standard deviation (cf paper). It allows to keep variance\n above zero and prevent it from growing too fast. In practice, ``exp()`` is usually enough.\n :param squash_output: Whether to squash the output using a tanh function,\n this allows to ensure boundaries when using gSDE.\n :param features_extractor_class: Features extractor to use.\n :param features_extractor_kwargs: Keyword arguments\n to pass to the features extractor.\n :param share_features_extractor: If True, the features extractor is shared between the policy and value networks.\n :param normalize_images: Whether to normalize images or not,\n dividing by 255.0 (True by default)\n :param optimizer_class: The optimizer to use,\n ``th.optim.Adam`` by default\n :param optimizer_kwargs: Additional keyword arguments,\n excluding the learning rate, to pass to the optimizer\n ", "__init__": "<function ActorCriticPolicy.__init__ at 0x7dd2e615d1c0>", "_get_constructor_parameters": "<function ActorCriticPolicy._get_constructor_parameters at 0x7dd2e615d260>", "reset_noise": "<function ActorCriticPolicy.reset_noise at 0x7dd2e615d300>", "_build_mlp_extractor": "<function ActorCriticPolicy._build_mlp_extractor at 0x7dd2e615d3a0>", "_build": "<function ActorCriticPolicy._build at 0x7dd2e615d440>", "forward": "<function ActorCriticPolicy.forward at 0x7dd2e615d4e0>", "extract_features": "<function ActorCriticPolicy.extract_features at 0x7dd2e615d580>", "_get_action_dist_from_latent": "<function ActorCriticPolicy._get_action_dist_from_latent at 0x7dd2e615d620>", "_predict": "<function ActorCriticPolicy._predict at 0x7dd2e615d6c0>", "evaluate_actions": "<function ActorCriticPolicy.evaluate_actions at 0x7dd2e615d760>", "get_distribution": "<function ActorCriticPolicy.get_distribution at 0x7dd2e615d800>", "predict_values": "<function ActorCriticPolicy.predict_values at 0x7dd2e615d8a0>", "__abstractmethods__": "frozenset()", "_abc_impl": "<_abc._abc_data object at 0x7dd2e613b340>"}, "verbose": 1, "policy_kwargs": {":type:": "<class 'dict'>", ":serialized:": "gAWVgQAAAAAAAAB9lCiMD29wdGltaXplcl9jbGFzc5SME3RvcmNoLm9wdGltLnJtc3Byb3CUjAdSTVNwcm9wlJOUjBBvcHRpbWl6ZXJfa3dhcmdzlH2UKIwFYWxwaGGURz/vrhR64UeujANlcHOURz7k+LWI42jxjAx3ZWlnaHRfZGVjYXmUSwB1dS4=", "optimizer_class": "<class 'torch.optim.rmsprop.RMSprop'>", "optimizer_kwargs": {"alpha": 0.99, "eps": 1e-05, "weight_decay": 0}}, "num_timesteps": 500000, "_total_timesteps": 500000, "_num_timesteps_at_start": 0, "seed": null, "action_noise": null, "start_time": 1740778610633784151, 
"learning_rate": 0.0007, "tensorboard_log": null, "_last_obs": {":type:": "<class 'numpy.ndarray'>", ":serialized:": "gAWV9QAAAAAAAACMEm51bXB5LmNvcmUubnVtZXJpY5SMC19mcm9tYnVmZmVylJOUKJaAAAAAAAAAABDBwr7jcDa913moPBVvXDxPsai+/NwdvahoEDxFr5O8HLWkvj8oDr13pZ68YM82urJqxr5mdRK+p+sAPUV0Ij4CASy+Hg5hvjHzXLwu4N496KxzvqpHEL4QLCk9zLaNPXVm8DxINw+9cIEWPQSCIb2xwL2+TScmPnNmMjx1tG6+lIwFbnVtcHmUjAVkdHlwZZSTlIwCZjSUiYiHlFKUKEsDjAE8lE5OTkr/////Sv////9LAHSUYksISwSGlIwBQ5R0lFKULg=="}, "_last_episode_starts": {":type:": "<class 'numpy.ndarray'>", ":serialized:": "gAWVewAAAAAAAACMEm51bXB5LmNvcmUubnVtZXJpY5SMC19mcm9tYnVmZmVylJOUKJYIAAAAAAAAAAAAAAAAAAEAlIwFbnVtcHmUjAVkdHlwZZSTlIwCYjGUiYiHlFKUKEsDjAF8lE5OTkr/////Sv////9LAHSUYksIhZSMAUOUdJRSlC4="}, "_last_original_obs": null, "_episode_num": 0, "use_sde": false, "sde_sample_freq": -1, "_current_progress_remaining": 0.0, "_stats_window_size": 100, "ep_info_buffer": {":type:": "<class 'collections.deque'>", ":serialized:": "gAWVRAwAAAAAAACMC2NvbGxlY3Rpb25zlIwFZGVxdWWUk5QpS2SGlFKUKH2UKIwBcpRHQH9AAAAAAACMAWyUTfQBjAF0lEdAYVynCO3lS3V9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGFtCaqjrRl1fZQoaAZHQH9AAAAAAABoB030AWgIR0Bhb9XPqs2fdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYXjNTtLL6nV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGF7UbLlmvp1fZQoaAZHQH9AAAAAAABoB030AWgIR0Bhfv0XgtOEdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYYPOu7pV0nV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGGMZAQg9vF1fZQoaAZHQH9AAAAAAABoB030AWgIR0BhkuS6lLvkdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYaUb7TDwY3V9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGGoP6TGHYZ1fZQoaAZHQH9AAAAAAABoB030AWgIR0Bhsk/0NBnjdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYbWrEtNBW3V9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGG6oGY8dPt1fZQoaAZHQH9AAAAAAABoB030AWgIR0BhwA5FPSDzdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYckSHM2WIHV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGHMwsf7rLR1fZQoaAZHQH9AAAAAAABoB030AWgIR0Bh2g71ZkkKdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYdwweNkvsnV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGHinZsbedl1fZQoaAZHQH9AAAAAAABoB030AWgIR0Bh5IdbPhQ4dX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYeb7v5P/JnV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGHp4cebNKR1fZQoaAZHQH9AAAAAAABoB030AWgIR0Bh71tbcGkfdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYfOZKnNxEXV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGIAen62v0R1fZQoaAZHQH9AAAAAAABoB030AWgIR0BiAgMQVbiZdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYgdje9Ba93V9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGIJI1UEPlN1fZQoaAZHQH9AAAAAAABoB030AWgIR0BiC5shxHXmdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYg5hd+ocaXV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGIVBpYcNpd1fZQoaAZHQH9AAAAAAABoB030AWgIR0BiGaiItUXIdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYicLdepn6HV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGIo2YnfEXN1fZQoaAZHQH9AAAAAAABoB030AWgIR0BiLm6y0KJEdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYjAvOhTOxHV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGIyr56+nIh1fZQoaAZHQH9AAAAAAABoB030AWgIR0BiNVq8DjiodX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYjsmCROk+HV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGI/TVUdaMd1fZQoaAZHQH9AAAAAAABoB030AWgIR0BiS9GEwnIAdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYk1tZ3cHnnV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGJUMwco6S11fZQoaAZHQH9AAAAAAABoB030AWgIR0BiVdV1fVqfdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYlh+85CF9XV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGJbjC53C9B1fZQoaAZHQH9AAAAAAABoB030AWgIR0BiYY20iQkpdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYmXgdfb9InV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGJzVcMVk+Z1fZQoaAZHQH9AAAAAAABoB030AWgIR0BidQagmJFcdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYnqb7TDwY3V9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGJ8+y7f51x1fZQoaAZHQH9AAAAAAABoB030AWgIR0Bif+7BfrrxdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYoLmbsniN3V9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGKJEdvKlpJ1fZQoaAZHQH9AAAAAAABoB030AWgIR0BijRLPD50sdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYplh6Skj5nV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGKa9oN/e+F1fZQoaAZHQH9AAAAAAABoB030AWgIR0BioJfMOf/WdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYqKjk+5e7nV9lChoBkdAf0
AAAAAAAGgHTfQBaAhHQGKlst03fhx1fZQoaAZHQH9AAAAAAABoB030AWgIR0BiqOK4x1xLdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYq6UornTzHV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGKy35WRzRx1fZQoaAZHQH9AAAAAAABoB030AWgIR0BiwFPrOZ9edX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYsHo371qWXV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGLIOnVG0/p1fZQoaAZHQH9AAAAAAABoB030AWgIR0BiyfT1CgK4dX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYs0iGFi8WnV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGLQgksz2vl1fZQoaAZHQH9AAAAAAABoB030AWgIR0Bi1y57PY4AdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYtt9Hc1wYXV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGLoWjXWe6J1fZQoaAZHQH9AAAAAAABoB030AWgIR0Bi6iMkyDZldX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYvC/IsAeaXV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGLy9b5dnkF1fZQoaAZHQH9AAAAAAABoB030AWgIR0Bi9hgLJCBxdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYvlGxUvPC3V9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGL/hBZ6lch1fZQoaAZHQH9AAAAAAABoB030AWgIR0BjA+UnogV5dX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYxRoIv8IiXV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGMXYCQtBfN1fZQoaAZHQH9AAAAAAABoB030AWgIR0BjIAgDA8B/dX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYyMCROk+HXV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGMmooE0SAZ1fZQoaAZHQH9AAAAAAABoB030AWgIR0BjK0tsenyedX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAYzRf8/D+BHV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGM5ZUtI0651fZQoaAZHQH9AAAAAAABoB030AWgIR0BjS3CIk7fYdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAY05cZccENnV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGNX7KaG5+Z1fZQoaAZHQH9AAAAAAABoB030AWgIR0BjWrrLQokSdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAY1+MPz4DcXV9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGNkgctGus91fZQoaAZHQH9AAAAAAABoB030AWgIR0BjbSZ0CA+ZdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAY3GTzundf3V9lChoBkdAf0AAAAAAAGgHTfQBaAhHQGN/FEZzgdh1fZQoaAZHQH9AAAAAAABoB030AWgIR0BjgM+qzZ6EdX2UKGgGR0B/QAAAAAAAaAdN9AFoCEdAY4fLOiWVvHVlLg=="}, "ep_success_buffer": {":type:": "<class 'collections.deque'>", ":serialized:": "gAWVIAAAAAAAAACMC2NvbGxlY3Rpb25zlIwFZGVxdWWUk5QpS2SGlFKULg=="}, "_n_updates": 12500, "observation_space": {":type:": "<class 'gymnasium.spaces.box.Box'>", ":serialized:": "gAWV/gEAAAAAAACMFGd5bW5hc2l1bS5zcGFjZXMuYm94lIwDQm94lJOUKYGUfZQojAVkdHlwZZSMBW51bXB5lIwFZHR5cGWUk5SMAmY0lImIh5RSlChLA4wBPJROTk5K/////0r/////SwB0lGKMBl9zaGFwZZRLBIWUjANsb3eUjBJudW1weS5jb3JlLm51bWVyaWOUjAtfZnJvbWJ1ZmZlcpSTlCiWEAAAAAAAAACamZnAAACA/1B31r4AAID/lGgLSwSFlIwBQ5R0lFKUjA1ib3VuZGVkX2JlbG93lGgTKJYEAAAAAAAAAAEAAQCUaAiMAmIxlImIh5RSlChLA4wBfJROTk5K/////0r/////SwB0lGJLBIWUaBZ0lFKUjARoaWdolGgTKJYQAAAAAAAAAJqZmUAAAIB/UHfWPgAAgH+UaAtLBIWUaBZ0lFKUjA1ib3VuZGVkX2Fib3ZllGgTKJYEAAAAAAAAAAEAAQCUaB1LBIWUaBZ0lFKUjAhsb3dfcmVwcpSMMVstNC44ICAgICAgICAgICAgICAgLWluZiAtMC40MTg4NzkwMyAgICAgICAgLWluZl2UjAloaWdoX3JlcHKUjC1bNC44ICAgICAgICAgICAgICAgaW5mIDAuNDE4ODc5MDMgICAgICAgIGluZl2UjApfbnBfcmFuZG9tlE51Yi4=", "dtype": "float32", "_shape": [4], "low": "[-4.8 -inf -0.41887903 -inf]", "bounded_below": "[ True False True False]", "high": "[4.8 inf 0.41887903 inf]", "bounded_above": "[ True False True False]", "low_repr": "[-4.8 -inf -0.41887903 -inf]", "high_repr": "[4.8 inf 0.41887903 inf]", "_np_random": null}, "action_space": {":type:": "<class 'gymnasium.spaces.discrete.Discrete'>", ":serialized:": "gAWV2wAAAAAAAACMGWd5bW5hc2l1bS5zcGFjZXMuZGlzY3JldGWUjAhEaXNjcmV0ZZSTlCmBlH2UKIwBbpSMFW51bXB5LmNvcmUubXVsdGlhcnJheZSMBnNjYWxhcpSTlIwFbnVtcHmUjAVkdHlwZZSTlIwCaTiUiYiHlFKUKEsDjAE8lE5OTkr/////Sv////9LAHSUYkMIAgAAAAAAAACUhpRSlIwFc3RhcnSUaAhoDkMIAAAAAAAAAACUhpRSlIwGX3NoYXBllCmMBWR0eXBllGgOjApfbnBfcmFuZG9tlE51Yi4=", "n": "2", "start": "0", "_shape": [], "dtype": "int64", "_np_random": null}, "n_envs": 8, "n_steps": 5, "gamma": 0.99, "gae_lambda": 1.0, "ent_coef": 0.0, "vf_coef": 0.5, "max_grad_norm": 0.5, "rollout_buffer_class": {":type:": "<class 'abc.ABCMeta'>", ":serialized:": 
"gAWVNgAAAAAAAACMIHN0YWJsZV9iYXNlbGluZXMzLmNvbW1vbi5idWZmZXJzlIwNUm9sbG91dEJ1ZmZlcpSTlC4=", "__module__": "stable_baselines3.common.buffers", "__annotations__": "{'observations': <class 'numpy.ndarray'>, 'actions': <class 'numpy.ndarray'>, 'rewards': <class 'numpy.ndarray'>, 'advantages': <class 'numpy.ndarray'>, 'returns': <class 'numpy.ndarray'>, 'episode_starts': <class 'numpy.ndarray'>, 'log_probs': <class 'numpy.ndarray'>, 'values': <class 'numpy.ndarray'>}", "__doc__": "\n Rollout buffer used in on-policy algorithms like A2C/PPO.\n It corresponds to ``buffer_size`` transitions collected\n using the current policy.\n This experience will be discarded after the policy update.\n In order to use PPO objective, we also store the current value of each state\n and the log probability of each taken action.\n\n The term rollout here refers to the model-free notion and should not\n be used with the concept of rollout used in model-based RL or planning.\n Hence, it is only involved in policy and value function training but not action selection.\n\n :param buffer_size: Max number of element in the buffer\n :param observation_space: Observation space\n :param action_space: Action space\n :param device: PyTorch device\n :param gae_lambda: Factor for trade-off of bias vs variance for Generalized Advantage Estimator\n Equivalent to classic advantage when set to 1.\n :param gamma: Discount factor\n :param n_envs: Number of parallel environments\n ", "__init__": "<function RolloutBuffer.__init__ at 0x7dd2e652df80>", "reset": "<function RolloutBuffer.reset at 0x7dd2e652e020>", "compute_returns_and_advantage": "<function RolloutBuffer.compute_returns_and_advantage at 0x7dd2e652e0c0>", "add": "<function RolloutBuffer.add at 0x7dd2e652e200>", "get": "<function RolloutBuffer.get at 0x7dd2e652e2a0>", "_get_samples": "<function RolloutBuffer._get_samples at 0x7dd2e652e340>", "__abstractmethods__": "frozenset()", "_abc_impl": "<_abc._abc_data object at 0x7dd2e64d1100>"}, "rollout_buffer_kwargs": {}, "normalize_advantage": false, "lr_schedule": {":type:": "<class 'function'>", ":serialized:": "gAWVIAQAAAAAAACMF2Nsb3VkcGlja2xlLmNsb3VkcGlja2xllIwOX21ha2VfZnVuY3Rpb26Uk5QoaACMDV9idWlsdGluX3R5cGWUk5SMCENvZGVUeXBllIWUUpQoSwFLAEsASwFLBUsTQzSVAZcAdAEAAAAAAAAAAAAAAgCJAXwApgEAAKsBAAAAAAAAAACmAQAAqwEAAAAAAAAAAFMAlE6FlIwFZmxvYXSUhZSMEnByb2dyZXNzX3JlbWFpbmluZ5SFlIxJL3Vzci9sb2NhbC9saWIvcHl0aG9uMy4xMS9kaXN0LXBhY2thZ2VzL3N0YWJsZV9iYXNlbGluZXMzL2NvbW1vbi91dGlscy5weZSMCDxsYW1iZGE+lIwhZ2V0X3NjaGVkdWxlX2ZuLjxsb2NhbHM+LjxsYW1iZGE+lEtiQxr4gAClZahOqE7QO03RLE7ULE7RJk/UJk+AAJRDAJSMDnZhbHVlX3NjaGVkdWxllIWUKXSUUpR9lCiMC19fcGFja2FnZV9flIwYc3RhYmxlX2Jhc2VsaW5lczMuY29tbW9ulIwIX19uYW1lX1+UjB5zdGFibGVfYmFzZWxpbmVzMy5jb21tb24udXRpbHOUjAhfX2ZpbGVfX5SMSS91c3IvbG9jYWwvbGliL3B5dGhvbjMuMTEvZGlzdC1wYWNrYWdlcy9zdGFibGVfYmFzZWxpbmVzMy9jb21tb24vdXRpbHMucHmUdU5OaACMEF9tYWtlX2VtcHR5X2NlbGyUk5QpUpSFlHSUUpRoAIwSX2Z1bmN0aW9uX3NldHN0YXRllJOUaCN9lH2UKGgajAg8bGFtYmRhPpSMDF9fcXVhbG5hbWVfX5SMIWdldF9zY2hlZHVsZV9mbi48bG9jYWxzPi48bGFtYmRhPpSMD19fYW5ub3RhdGlvbnNfX5R9lIwOX19rd2RlZmF1bHRzX1+UTowMX19kZWZhdWx0c19flE6MCl9fbW9kdWxlX1+UaBuMB19fZG9jX1+UTowLX19jbG9zdXJlX1+UaACMCl9tYWtlX2NlbGyUk5RoAihoByhLAUsASwBLAUsBSxNDCJUBlwCJAVMAlGgJKYwBX5SFlGgOjARmdW5jlIwZY29uc3RhbnRfZm4uPGxvY2Fscz4uZnVuY5RLhkMI+IAA2A8SiAqUaBKMA3ZhbJSFlCl0lFKUaBdOTmgfKVKUhZR0lFKUaCVoQX2UfZQoaBqMBGZ1bmOUaCmMGWNvbnN0YW50X2ZuLjxsb2NhbHM+LmZ1bmOUaCt9lGgtTmguTmgvaBtoME5oMWgzRz9G8AaNuLrHhZRSlIWUjBdfY2xvdWRwaWNrbGVfc3VibW9kdWxlc5RdlIwLX19nbG9iYWxzX1+UfZR1hpSGUjCFlFKUhZRoSl2UaEx9lHWGlIZSMC4="}, "system_info": 
{"OS": "Linux-6.1.85+-x86_64-with-glibc2.35 # 1 SMP PREEMPT_DYNAMIC Thu Jun 27 21:05:47 UTC 2024", "Python": "3.11.11", "Stable-Baselines3": "2.5.0", "PyTorch": "2.5.1+cu124", "GPU Enabled": "False", "Numpy": "1.26.4", "Cloudpickle": "3.1.1", "Gymnasium": "1.0.0", "OpenAI Gym": "0.25.2"}}
results.json CHANGED
@@ -1 +1 @@
1
- {"mean_reward": 500.0, "std_reward": 0.0, "is_deterministic": true, "n_eval_episodes": 10, "eval_datetime": "2025-02-11T14:42:34.276987"}
 
1
+ {"mean_reward": 500.0, "std_reward": 0.0, "is_deterministic": true, "n_eval_episodes": 10, "eval_datetime": "2025-02-28T21:40:24.724927"}