ZTWHHH committed
Commit 086d568 · verified · 1 Parent(s): a25de18

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. deepseek/lib/python3.10/site-packages/ray/rllib/examples/connectors/__pycache__/prev_actions_prev_rewards.cpython-310.pyc +0 -0
  2. deepseek/lib/python3.10/site-packages/ray/rllib/examples/envs/classes/__init__.py +0 -0
  3. deepseek/lib/python3.10/site-packages/ray/rllib/examples/envs/classes/coin_game_non_vectorized_env.py +344 -0
  4. deepseek/lib/python3.10/site-packages/ray/rllib/examples/envs/classes/correlated_actions_env.py +74 -0
  5. deepseek/lib/python3.10/site-packages/ray/rllib/examples/envs/classes/deterministic_envs.py +13 -0
  6. deepseek/lib/python3.10/site-packages/ray/rllib/examples/envs/classes/matrix_sequential_social_dilemma.py +314 -0
  7. deepseek/lib/python3.10/site-packages/ray/rllib/examples/envs/classes/memory_leaking_env.py +35 -0
  8. deepseek/lib/python3.10/site-packages/ray/rllib/examples/envs/classes/mock_env.py +220 -0
  9. deepseek/lib/python3.10/site-packages/ray/rllib/examples/envs/classes/pendulum_mass.py +33 -0
  10. deepseek/lib/python3.10/site-packages/ray/rllib/examples/envs/classes/stateless_cartpole.py +39 -0
  11. deepseek/lib/python3.10/site-packages/ray/rllib/examples/fault_tolerance/__init__.py +0 -0
  12. deepseek/lib/python3.10/site-packages/ray/rllib/examples/fault_tolerance/__pycache__/__init__.cpython-310.pyc +0 -0
  13. deepseek/lib/python3.10/site-packages/ray/rllib/examples/fault_tolerance/__pycache__/crashing_and_stalling_env.cpython-310.pyc +0 -0
  14. deepseek/lib/python3.10/site-packages/ray/rllib/examples/fault_tolerance/crashing_and_stalling_env.py +177 -0
  15. deepseek/lib/python3.10/site-packages/ray/rllib/examples/inference/__init__.py +0 -0
  16. deepseek/lib/python3.10/site-packages/ray/rllib/examples/inference/__pycache__/__init__.cpython-310.pyc +0 -0
  17. deepseek/lib/python3.10/site-packages/ray/rllib/examples/inference/__pycache__/policy_inference_after_training_with_lstm.cpython-310.pyc +0 -0
  18. deepseek/lib/python3.10/site-packages/ray/rllib/examples/inference/policy_inference_after_training.py +188 -0
  19. deepseek/lib/python3.10/site-packages/ray/rllib/examples/inference/policy_inference_after_training_w_connector.py +274 -0
  20. deepseek/lib/python3.10/site-packages/ray/rllib/examples/inference/policy_inference_after_training_with_lstm.py +185 -0
  21. deepseek/lib/python3.10/site-packages/ray/rllib/examples/learners/classes/__pycache__/intrinsic_curiosity_learners.cpython-310.pyc +0 -0
  22. deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/__pycache__/__init__.cpython-310.pyc +0 -0
  23. deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/__pycache__/custom_heuristic_policy.cpython-310.pyc +0 -0
  24. deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/__pycache__/different_spaces_for_agents.cpython-310.pyc +0 -0
  25. deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/__pycache__/multi_agent_cartpole.cpython-310.pyc +0 -0
  26. deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/__pycache__/multi_agent_pendulum.cpython-310.pyc +0 -0
  27. deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/__pycache__/pettingzoo_independent_learning.cpython-310.pyc +0 -0
  28. deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/__pycache__/pettingzoo_shared_value_function.cpython-310.pyc +0 -0
  29. deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/__pycache__/rock_paper_scissors_heuristic_vs_learned.cpython-310.pyc +0 -0
  30. deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/__pycache__/rock_paper_scissors_learned_vs_learned.cpython-310.pyc +0 -0
  31. deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/__pycache__/self_play_league_based_with_open_spiel.cpython-310.pyc +0 -0
  32. deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/__pycache__/self_play_with_open_spiel.cpython-310.pyc +0 -0
  33. deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/__pycache__/two_step_game_with_grouped_agents.cpython-310.pyc +0 -0
  34. deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/custom_heuristic_policy.py +101 -0
  35. deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/different_spaces_for_agents.py +112 -0
  36. deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/pettingzoo_parameter_sharing.py +105 -0
  37. deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/rock_paper_scissors_learned_vs_learned.py +91 -0
  38. deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/self_play_with_open_spiel.py +236 -0
  39. deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/utils/__init__.py +43 -0
  40. deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/utils/__pycache__/__init__.cpython-310.pyc +0 -0
  41. deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/utils/__pycache__/self_play_callback.cpython-310.pyc +0 -0
  42. deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/utils/__pycache__/self_play_callback_old_api_stack.cpython-310.pyc +0 -0
  43. deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/utils/__pycache__/self_play_league_based_callback.cpython-310.pyc +0 -0
  44. deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/utils/__pycache__/self_play_league_based_callback_old_api_stack.cpython-310.pyc +0 -0
  45. deepseek/lib/python3.10/site-packages/ray/rllib/examples/ray_serve/classes/__pycache__/__init__.cpython-310.pyc +0 -0
  46. deepseek/lib/python3.10/site-packages/ray/rllib/examples/rl_modules/__init__.py +0 -0
  47. deepseek/lib/python3.10/site-packages/ray/rllib/examples/rl_modules/__pycache__/action_masking_rl_module.cpython-310.pyc +0 -0
  48. deepseek/lib/python3.10/site-packages/ray/rllib/examples/rl_modules/__pycache__/autoregressive_actions_rl_module.cpython-310.pyc +0 -0
  49. deepseek/lib/python3.10/site-packages/ray/rllib/examples/rl_modules/__pycache__/custom_cnn_rl_module.cpython-310.pyc +0 -0
  50. deepseek/lib/python3.10/site-packages/ray/rllib/examples/rl_modules/classes/__pycache__/__init__.cpython-310.pyc +0 -0
deepseek/lib/python3.10/site-packages/ray/rllib/examples/connectors/__pycache__/prev_actions_prev_rewards.cpython-310.pyc ADDED
Binary file (7.1 kB).
deepseek/lib/python3.10/site-packages/ray/rllib/examples/envs/classes/__init__.py ADDED
File without changes
deepseek/lib/python3.10/site-packages/ray/rllib/examples/envs/classes/coin_game_non_vectorized_env.py ADDED
@@ -0,0 +1,344 @@
+ ##########
+ # Contribution by the Center on Long-Term Risk:
+ # https://github.com/longtermrisk/marltoolbox
+ ##########
+
+ import copy
+
+ try:
+     # This works in Python<3.9
+     from collections import Iterable
+ except ImportError:
+     # This works in Python>=3.9
+     from collections.abc import Iterable
+
+ import gymnasium as gym
+ import logging
+ import numpy as np
+ from gymnasium.spaces import Discrete
+ from gymnasium.utils import seeding
+ from ray.rllib.env.multi_agent_env import MultiAgentEnv
+ from ray.rllib.utils import override
+ from typing import Dict, Optional
+
+ from ray.rllib.examples.envs.classes.utils.interfaces import InfoAccumulationInterface
+
+ logger = logging.getLogger(__name__)
+
+
+ class CoinGame(InfoAccumulationInterface, MultiAgentEnv, gym.Env):
+     """
+     Coin Game environment.
+     """
+
+     NAME = "CoinGame"
+     NUM_AGENTS = 2
+     NUM_ACTIONS = 4
+     action_space = Discrete(NUM_ACTIONS)
+     observation_space = None
+     MOVES = [
+         np.array([0, 1]),
+         np.array([0, -1]),
+         np.array([1, 0]),
+         np.array([-1, 0]),
+     ]
+
+     def __init__(self, config: Optional[Dict] = None):
+         if config is None:
+             config = {}
+
+         self._validate_config(config)
+
+         self._load_config(config)
+         self.player_red_id, self.player_blue_id = self.players_ids
+         self.n_features = self.grid_size**2 * (2 * self.NUM_AGENTS)
+         self.observation_space = gym.spaces.Box(
+             low=0, high=1, shape=(self.grid_size, self.grid_size, 4), dtype="uint8"
+         )
+
+         self.step_count_in_current_episode = None
+         if self.output_additional_info:
+             self._init_info()
+
+     def _validate_config(self, config):
+         if "players_ids" in config:
+             assert isinstance(config["players_ids"], Iterable)
+             assert len(config["players_ids"]) == self.NUM_AGENTS
+
+     def _load_config(self, config):
+         self.players_ids = config.get("players_ids", ["player_red", "player_blue"])
+         self.max_steps = config.get("max_steps", 20)
+         self.grid_size = config.get("grid_size", 3)
+         self.output_additional_info = config.get("output_additional_info", True)
+         self.asymmetric = config.get("asymmetric", False)
+         self.both_players_can_pick_the_same_coin = config.get(
+             "both_players_can_pick_the_same_coin", True
+         )
+
+     @override(gym.Env)
+     def reset(self, *, seed=None, options=None):
+         self.np_random, seed = seeding.np_random(seed)
+
+         self.step_count_in_current_episode = 0
+
+         if self.output_additional_info:
+             self._reset_info()
+
+         self._randomize_color_and_player_positions()
+         self._generate_coin()
+         obs = self._generate_observation()
+
+         return {self.player_red_id: obs[0], self.player_blue_id: obs[1]}, {}
+
+     def _randomize_color_and_player_positions(self):
+         # Reset coin color and the players and coin positions
+         self.red_coin = self.np_random.integers(low=0, high=2)
+         self.red_pos = self.np_random.integers(low=0, high=self.grid_size, size=(2,))
+         self.blue_pos = self.np_random.integers(low=0, high=self.grid_size, size=(2,))
+         self.coin_pos = np.zeros(shape=(2,), dtype=np.int8)
+
+         self._players_do_not_overlap_at_start()
+
+     def _players_do_not_overlap_at_start(self):
+         while self._same_pos(self.red_pos, self.blue_pos):
+             self.blue_pos = self.np_random.integers(self.grid_size, size=2)
+
+     def _generate_coin(self):
+         self._switch_between_coin_color_at_each_generation()
+         self._coin_position_different_from_players_positions()
+
+     def _switch_between_coin_color_at_each_generation(self):
+         self.red_coin = 1 - self.red_coin
+
+     def _coin_position_different_from_players_positions(self):
+         success = 0
+         while success < self.NUM_AGENTS:
+             self.coin_pos = self.np_random.integers(self.grid_size, size=2)
+             success = 1 - self._same_pos(self.red_pos, self.coin_pos)
+             success += 1 - self._same_pos(self.blue_pos, self.coin_pos)
+
+     def _generate_observation(self):
+         obs = np.zeros((self.grid_size, self.grid_size, 4))
+         obs[self.red_pos[0], self.red_pos[1], 0] = 1
+         obs[self.blue_pos[0], self.blue_pos[1], 1] = 1
+         if self.red_coin:
+             obs[self.coin_pos[0], self.coin_pos[1], 2] = 1
+         else:
+             obs[self.coin_pos[0], self.coin_pos[1], 3] = 1
+
+         obs = self._get_obs_invariant_to_the_player_trained(obs)
+
+         return obs
+
+     @override(gym.Env)
+     def step(self, actions: Dict):
+         """
+         :param actions: Dict containing both actions for player_1 and player_2
+         :return: observations, rewards, done, info
+         """
+         actions = self._from_RLlib_API_to_list(actions)
+
+         self.step_count_in_current_episode += 1
+         self._move_players(actions)
+         reward_list, generate_new_coin = self._compute_reward()
+         if generate_new_coin:
+             self._generate_coin()
+         observations = self._generate_observation()
+
+         return self._to_RLlib_API(observations, reward_list)
+
+     def _same_pos(self, x, y):
+         return (x == y).all()
+
+     def _move_players(self, actions):
+         self.red_pos = (self.red_pos + self.MOVES[actions[0]]) % self.grid_size
+         self.blue_pos = (self.blue_pos + self.MOVES[actions[1]]) % self.grid_size
+
+     def _compute_reward(self):
+
+         reward_red = 0.0
+         reward_blue = 0.0
+         generate_new_coin = False
+         red_pick_any, red_pick_red, blue_pick_any, blue_pick_blue = (
+             False,
+             False,
+             False,
+             False,
+         )
+
+         red_first_if_both = None
+         if not self.both_players_can_pick_the_same_coin:
+             if self._same_pos(self.red_pos, self.coin_pos) and self._same_pos(
+                 self.blue_pos, self.coin_pos
+             ):
+                 red_first_if_both = bool(self.np_random.integers(low=0, high=2))
+
+         if self.red_coin:
+             if self._same_pos(self.red_pos, self.coin_pos) and (
+                 red_first_if_both is None or red_first_if_both
+             ):
+                 generate_new_coin = True
+                 reward_red += 1
+                 if self.asymmetric:
+                     reward_red += 3
+                 red_pick_any = True
+                 red_pick_red = True
+             if self._same_pos(self.blue_pos, self.coin_pos) and (
+                 red_first_if_both is None or not red_first_if_both
+             ):
+                 generate_new_coin = True
+                 reward_red += -2
+                 reward_blue += 1
+                 blue_pick_any = True
+         else:
+             if self._same_pos(self.red_pos, self.coin_pos) and (
+                 red_first_if_both is None or red_first_if_both
+             ):
+                 generate_new_coin = True
+                 reward_red += 1
+                 reward_blue += -2
+                 if self.asymmetric:
+                     reward_red += 3
+                 red_pick_any = True
+             if self._same_pos(self.blue_pos, self.coin_pos) and (
+                 red_first_if_both is None or not red_first_if_both
+             ):
+                 generate_new_coin = True
+                 reward_blue += 1
+                 blue_pick_blue = True
+                 blue_pick_any = True
+
+         reward_list = [reward_red, reward_blue]
+
+         if self.output_additional_info:
+             self._accumulate_info(
+                 red_pick_any=red_pick_any,
+                 red_pick_red=red_pick_red,
+                 blue_pick_any=blue_pick_any,
+                 blue_pick_blue=blue_pick_blue,
+             )
+
+         return reward_list, generate_new_coin
+
+     def _from_RLlib_API_to_list(self, actions):
+         """
+         Format actions from dict of players to list of lists
+         """
+         actions = [actions[player_id] for player_id in self.players_ids]
+         return actions
+
+     def _get_obs_invariant_to_the_player_trained(self, observation):
+         """
+         We want to be able to use a policy trained as player 1,
+         for evaluation as player 2 and vice versa.
+         """
+
+         # player_red_observation contains
+         # [Red pos, Blue pos, Red coin pos, Blue coin pos]
+         player_red_observation = observation
+         # After modification, player_blue_observation will contain
+         # [Blue pos, Red pos, Blue coin pos, Red coin pos]
+         player_blue_observation = copy.deepcopy(observation)
+         player_blue_observation[..., 0] = observation[..., 1]
+         player_blue_observation[..., 1] = observation[..., 0]
+         player_blue_observation[..., 2] = observation[..., 3]
+         player_blue_observation[..., 3] = observation[..., 2]
+
+         return [player_red_observation, player_blue_observation]
+
+     def _to_RLlib_API(self, observations, rewards):
+         state = {
+             self.player_red_id: observations[0],
+             self.player_blue_id: observations[1],
+         }
+         rewards = {
+             self.player_red_id: rewards[0],
+             self.player_blue_id: rewards[1],
+         }
+
+         epi_is_done = self.step_count_in_current_episode >= self.max_steps
+         if self.step_count_in_current_episode > self.max_steps:
+             logger.warning(
+                 "step_count_in_current_episode > self.max_steps: "
+                 f"{self.step_count_in_current_episode} > {self.max_steps}"
+             )
+
+         done = {
+             self.player_red_id: epi_is_done,
+             self.player_blue_id: epi_is_done,
+             "__all__": epi_is_done,
+         }
+
+         if epi_is_done and self.output_additional_info:
+             player_red_info, player_blue_info = self._get_episode_info()
+             info = {
+                 self.player_red_id: player_red_info,
+                 self.player_blue_id: player_blue_info,
+             }
+         else:
+             info = {}
+
+         return state, rewards, done, done, info
+
+     @override(InfoAccumulationInterface)
+     def _get_episode_info(self):
+         """
+         Output the following information:
+         pick_speed is the fraction of steps during which the player picked a
+         coin.
+         pick_own_color is the fraction of coins picked by the player which have
+         the same color as the player.
+         """
+         player_red_info, player_blue_info = {}, {}
+
+         if len(self.red_pick) > 0:
+             red_pick = sum(self.red_pick)
+             player_red_info["pick_speed"] = red_pick / len(self.red_pick)
+             if red_pick > 0:
+                 player_red_info["pick_own_color"] = sum(self.red_pick_own) / red_pick
+
+         if len(self.blue_pick) > 0:
+             blue_pick = sum(self.blue_pick)
+             player_blue_info["pick_speed"] = blue_pick / len(self.blue_pick)
+             if blue_pick > 0:
+                 player_blue_info["pick_own_color"] = sum(self.blue_pick_own) / blue_pick
+
+         return player_red_info, player_blue_info
+
+     @override(InfoAccumulationInterface)
+     def _reset_info(self):
+         self.red_pick.clear()
+         self.red_pick_own.clear()
+         self.blue_pick.clear()
+         self.blue_pick_own.clear()
+
+     @override(InfoAccumulationInterface)
+     def _accumulate_info(
+         self, red_pick_any, red_pick_red, blue_pick_any, blue_pick_blue
+     ):
+
+         self.red_pick.append(red_pick_any)
+         self.red_pick_own.append(red_pick_red)
+         self.blue_pick.append(blue_pick_any)
+         self.blue_pick_own.append(blue_pick_blue)
+
+     @override(InfoAccumulationInterface)
+     def _init_info(self):
+         self.red_pick = []
+         self.red_pick_own = []
+         self.blue_pick = []
+         self.blue_pick_own = []
+
+
+ class AsymCoinGame(CoinGame):
+     NAME = "AsymCoinGame"
+
+     def __init__(self, config: Optional[dict] = None):
+         if config is None:
+             config = {}
+
+         if "asymmetric" in config:
+             assert config["asymmetric"]
+         else:
+             config["asymmetric"] = True
+         super().__init__(config)
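A minimal usage sketch for the CoinGame env above, assuming only the default config keys and agent ids defined in `_load_config()`:

from ray.rllib.examples.envs.classes.coin_game_non_vectorized_env import CoinGame

env = CoinGame(config={"grid_size": 3, "max_steps": 20})
obs, infos = env.reset(seed=0)
# Actions are keyed by the default agent ids; index 2 selects MOVES[2] = [1, 0].
obs, rewards, terminateds, truncateds, infos = env.step(
    {"player_red": 2, "player_blue": 2}
)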
deepseek/lib/python3.10/site-packages/ray/rllib/examples/envs/classes/correlated_actions_env.py ADDED
@@ -0,0 +1,74 @@
+ import gymnasium as gym
+ from gymnasium.spaces import Box, Discrete, Tuple
+ import numpy as np
+ from typing import Any, Dict, Optional
+
+
+ class AutoRegressiveActionEnv(gym.Env):
+     """Custom Environment with autoregressive continuous actions.
+
+     Simple env in which the policy has to emit a tuple of correlated actions.
+
+     In each step, the agent observes a random number (between -1 and 1) and has
+     to choose two actions a1 and a2.
+
+     It gets 0 reward for matching a2 to the random obs times action a1. In all
+     other cases the negative deviance between the desired action a2 and its
+     actual counterpart serves as reward. The reward is constructed in such a
+     way that actions need to be correlated to succeed. It is not possible
+     for the network to learn each action head separately.
+
+     One way to effectively learn this is through correlated action
+     distributions, e.g., in examples/rl_modules/autoregressive_action_rlm.py
+
+     The game ends after the first step.
+     """
+
+     def __init__(self, _=None):
+
+         # Define the action space (two continuous actions a1, a2)
+         self.action_space = Tuple([Discrete(2), Discrete(2)])
+
+         # Define the observation space (state is a single continuous value)
+         self.observation_space = Box(low=-1, high=1, shape=(1,), dtype=np.float32)
+
+         # Internal state for the environment (e.g., could represent a factor
+         # influencing the relationship)
+         self.state = None
+
+     def reset(
+         self, seed: Optional[int] = None, options: Optional[Dict[str, Any]] = None
+     ):
+         """Reset the environment to an initial state."""
+         super().reset(seed=seed, options=options)
+
+         # Randomly initialize the state between -1 and 1
+         self.state = np.random.uniform(-1, 1, size=(1,))
+
+         return self.state, {}
+
+     def step(self, action):
+         """Apply the autoregressive action and return step information."""
+
+         # Extract actions
+         a1, a2 = action
+
+         # The state determines the desired relationship between a1 and a2
+         desired_a2 = (
+             self.state[0] * a1
+         )  # Autoregressive relationship dependent on state
+
+         # Reward is based on how close a2 is to the state-dependent autoregressive
+         # relationship
+         reward = -np.abs(a2 - desired_a2)  # Negative absolute error as the reward
+
+         # Optionally: add some noise or complexity to the reward function
+         # reward += np.random.normal(0, 0.01)  # Small noise can be added
+
+         # Terminate after each step (no episode length in this simple example)
+         done = True
+
+         # Empty info dictionary
+         info = {}
+
+         return self.state, reward, done, False, info
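A short, illustrative sketch of driving AutoRegressiveActionEnv, assuming only the class defined above (the episode lasts exactly one step):

from ray.rllib.examples.envs.classes.correlated_actions_env import (
    AutoRegressiveActionEnv,
)

env = AutoRegressiveActionEnv()
obs, info = env.reset(seed=0)
# Reward is -|a2 - obs[0] * a1|, so a2 must be chosen jointly with obs and a1.
obs, reward, terminated, truncated, info = env.step((1, 0))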
deepseek/lib/python3.10/site-packages/ray/rllib/examples/envs/classes/deterministic_envs.py ADDED
@@ -0,0 +1,13 @@
+ import gymnasium as gym
+
+
+ def create_cartpole_deterministic(config):
+     env = gym.make("CartPole-v1")
+     env.reset(seed=config.get("seed", 0))
+     return env
+
+
+ def create_pendulum_deterministic(config):
+     env = gym.make("Pendulum-v1")
+     env.reset(seed=config.get("seed", 0))
+     return env
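A small sketch of how such a seeded factory is typically hooked up; the registration name "cartpole_det" is only an illustration, not something defined in this commit:

from ray import tune
from ray.rllib.examples.envs.classes.deterministic_envs import (
    create_cartpole_deterministic,
)

# Register the factory so an AlgorithmConfig can refer to it by name.
tune.register_env("cartpole_det", lambda cfg: create_cartpole_deterministic(cfg))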
deepseek/lib/python3.10/site-packages/ray/rllib/examples/envs/classes/matrix_sequential_social_dilemma.py ADDED
@@ -0,0 +1,314 @@
+ ##########
+ # Contribution by the Center on Long-Term Risk:
+ # https://github.com/longtermrisk/marltoolbox
+ # Some parts are originally from:
+ # https://github.com/alshedivat/lola/tree/master/lola
+ ##########
+
+ import logging
+ from abc import ABC
+ from collections import Iterable
+ from typing import Dict, Optional
+
+ import numpy as np
+ from gymnasium.spaces import Discrete
+ from gymnasium.utils import seeding
+ from ray.rllib.env.multi_agent_env import MultiAgentEnv
+ from ray.rllib.examples.envs.classes.utils.interfaces import InfoAccumulationInterface
+ from ray.rllib.examples.envs.classes.utils.mixins import (
+     TwoPlayersTwoActionsInfoMixin,
+     NPlayersNDiscreteActionsInfoMixin,
+ )
+
+ logger = logging.getLogger(__name__)
+
+
+ class MatrixSequentialSocialDilemma(InfoAccumulationInterface, MultiAgentEnv, ABC):
+     """
+     A multi-agent abstract class for two player matrix games.
+
+     PAYOUT_MATRIX: Numpy array. Along the dimension N, the action of the
+     Nth player change. The last dimension is used to select the player
+     whose reward you want to know.
+
+     max_steps: number of step in one episode
+
+     players_ids: list of the RLlib agent id of each player
+
+     output_additional_info: ask the environment to aggregate information
+     about the last episode and output them as info at the end of the
+     episode.
+     """
+
+     def __init__(self, config: Optional[Dict] = None):
+         if config is None:
+             config = {}
+
+         assert "reward_randomness" not in config.keys()
+         assert self.PAYOUT_MATRIX is not None
+         if "players_ids" in config:
+             assert (
+                 isinstance(config["players_ids"], Iterable)
+                 and len(config["players_ids"]) == self.NUM_AGENTS
+             )
+
+         self.players_ids = config.get("players_ids", ["player_row", "player_col"])
+         self.player_row_id, self.player_col_id = self.players_ids
+         self.max_steps = config.get("max_steps", 20)
+         self.output_additional_info = config.get("output_additional_info", True)
+
+         self.step_count_in_current_episode = None
+
+         # To store info about the fraction of each states
+         if self.output_additional_info:
+             self._init_info()
+
+     def reset(self, *, seed=None, options=None):
+         self.np_random, seed = seeding.np_random(seed)
+
+         self.step_count_in_current_episode = 0
+         if self.output_additional_info:
+             self._reset_info()
+         return {
+             self.player_row_id: self.NUM_STATES - 1,
+             self.player_col_id: self.NUM_STATES - 1,
+         }, {}
+
+     def step(self, actions: dict):
+         """
+         :param actions: Dict containing both actions for player_1 and player_2
+         :return: observations, rewards, done, info
+         """
+         self.step_count_in_current_episode += 1
+         action_player_row = actions[self.player_row_id]
+         action_player_col = actions[self.player_col_id]
+
+         if self.output_additional_info:
+             self._accumulate_info(action_player_row, action_player_col)
+
+         observations = self._produce_observations_invariant_to_the_player_trained(
+             action_player_row, action_player_col
+         )
+         rewards = self._get_players_rewards(action_player_row, action_player_col)
+         epi_is_done = self.step_count_in_current_episode >= self.max_steps
+         if self.step_count_in_current_episode > self.max_steps:
+             logger.warning("self.step_count_in_current_episode >= self.max_steps")
+         info = self._get_info_for_current_epi(epi_is_done)
+
+         return self._to_RLlib_API(observations, rewards, epi_is_done, info)
+
+     def _produce_observations_invariant_to_the_player_trained(
+         self, action_player_0: int, action_player_1: int
+     ):
+         """
+         We want to be able to use a policy trained as player 1
+         for evaluation as player 2 and vice versa.
+         """
+         return [
+             action_player_0 * self.NUM_ACTIONS + action_player_1,
+             action_player_1 * self.NUM_ACTIONS + action_player_0,
+         ]
+
+     def _get_players_rewards(self, action_player_0: int, action_player_1: int):
+         return [
+             self.PAYOUT_MATRIX[action_player_0][action_player_1][0],
+             self.PAYOUT_MATRIX[action_player_0][action_player_1][1],
+         ]
+
+     def _to_RLlib_API(
+         self, observations: list, rewards: list, epi_is_done: bool, info: dict
+     ):
+
+         observations = {
+             self.player_row_id: observations[0],
+             self.player_col_id: observations[1],
+         }
+
+         rewards = {self.player_row_id: rewards[0], self.player_col_id: rewards[1]}
+
+         if info is None:
+             info = {}
+         else:
+             info = {self.player_row_id: info, self.player_col_id: info}
+
+         done = {
+             self.player_row_id: epi_is_done,
+             self.player_col_id: epi_is_done,
+             "__all__": epi_is_done,
+         }
+
+         return observations, rewards, done, done, info
+
+     def _get_info_for_current_epi(self, epi_is_done):
+         if epi_is_done and self.output_additional_info:
+             info_for_current_epi = self._get_episode_info()
+         else:
+             info_for_current_epi = None
+         return info_for_current_epi
+
+     def __str__(self):
+         return self.NAME
+
+
+ class IteratedMatchingPennies(
+     TwoPlayersTwoActionsInfoMixin, MatrixSequentialSocialDilemma
+ ):
+     """
+     A two-agent environment for the Matching Pennies game.
+     """
+
+     NUM_AGENTS = 2
+     NUM_ACTIONS = 2
+     NUM_STATES = NUM_ACTIONS**NUM_AGENTS + 1
+     ACTION_SPACE = Discrete(NUM_ACTIONS)
+     OBSERVATION_SPACE = Discrete(NUM_STATES)
+     PAYOUT_MATRIX = np.array([[[+1, -1], [-1, +1]], [[-1, +1], [+1, -1]]])
+     NAME = "IMP"
+
+
+ class IteratedPrisonersDilemma(
+     TwoPlayersTwoActionsInfoMixin, MatrixSequentialSocialDilemma
+ ):
+     """
+     A two-agent environment for the Prisoner's Dilemma game.
+     """
+
+     NUM_AGENTS = 2
+     NUM_ACTIONS = 2
+     NUM_STATES = NUM_ACTIONS**NUM_AGENTS + 1
+     ACTION_SPACE = Discrete(NUM_ACTIONS)
+     OBSERVATION_SPACE = Discrete(NUM_STATES)
+     PAYOUT_MATRIX = np.array([[[-1, -1], [-3, +0]], [[+0, -3], [-2, -2]]])
+     NAME = "IPD"
+
+
+ class IteratedAsymPrisonersDilemma(
+     TwoPlayersTwoActionsInfoMixin, MatrixSequentialSocialDilemma
+ ):
+     """
+     A two-agent environment for the Asymmetric Prisoner's Dilemma game.
+     """
+
+     NUM_AGENTS = 2
+     NUM_ACTIONS = 2
+     NUM_STATES = NUM_ACTIONS**NUM_AGENTS + 1
+     ACTION_SPACE = Discrete(NUM_ACTIONS)
+     OBSERVATION_SPACE = Discrete(NUM_STATES)
+     PAYOUT_MATRIX = np.array([[[+0, -1], [-3, +0]], [[+0, -3], [-2, -2]]])
+     NAME = "IPD"
+
+
+ class IteratedStagHunt(TwoPlayersTwoActionsInfoMixin, MatrixSequentialSocialDilemma):
+     """
+     A two-agent environment for the Stag Hunt game.
+     """
+
+     NUM_AGENTS = 2
+     NUM_ACTIONS = 2
+     NUM_STATES = NUM_ACTIONS**NUM_AGENTS + 1
+     ACTION_SPACE = Discrete(NUM_ACTIONS)
+     OBSERVATION_SPACE = Discrete(NUM_STATES)
+     PAYOUT_MATRIX = np.array([[[3, 3], [0, 2]], [[2, 0], [1, 1]]])
+     NAME = "IteratedStagHunt"
+
+
+ class IteratedChicken(TwoPlayersTwoActionsInfoMixin, MatrixSequentialSocialDilemma):
+     """
+     A two-agent environment for the Chicken game.
+     """
+
+     NUM_AGENTS = 2
+     NUM_ACTIONS = 2
+     NUM_STATES = NUM_ACTIONS**NUM_AGENTS + 1
+     ACTION_SPACE = Discrete(NUM_ACTIONS)
+     OBSERVATION_SPACE = Discrete(NUM_STATES)
+     PAYOUT_MATRIX = np.array([[[+0, +0], [-1.0, +1.0]], [[+1, -1], [-10, -10]]])
+     NAME = "IteratedChicken"
+
+
+ class IteratedAsymChicken(TwoPlayersTwoActionsInfoMixin, MatrixSequentialSocialDilemma):
+     """
+     A two-agent environment for the Asymmetric Chicken game.
+     """
+
+     NUM_AGENTS = 2
+     NUM_ACTIONS = 2
+     NUM_STATES = NUM_ACTIONS**NUM_AGENTS + 1
+     ACTION_SPACE = Discrete(NUM_ACTIONS)
+     OBSERVATION_SPACE = Discrete(NUM_STATES)
+     PAYOUT_MATRIX = np.array([[[+2.0, +0], [-1.0, +1.0]], [[+2.5, -1], [-10, -10]]])
+     NAME = "AsymmetricIteratedChicken"
+
+
+ class IteratedBoS(TwoPlayersTwoActionsInfoMixin, MatrixSequentialSocialDilemma):
+     """
+     A two-agent environment for the BoS game.
+     """
+
+     NUM_AGENTS = 2
+     NUM_ACTIONS = 2
+     NUM_STATES = NUM_ACTIONS**NUM_AGENTS + 1
+     ACTION_SPACE = Discrete(NUM_ACTIONS)
+     OBSERVATION_SPACE = Discrete(NUM_STATES)
+     PAYOUT_MATRIX = np.array(
+         [[[+3.0, +2.0], [+0.0, +0.0]], [[+0.0, +0.0], [+2.0, +3.0]]]
+     )
+     NAME = "IteratedBoS"
+
+
+ class IteratedAsymBoS(TwoPlayersTwoActionsInfoMixin, MatrixSequentialSocialDilemma):
+     """
+     A two-agent environment for the BoS game.
+     """
+
+     NUM_AGENTS = 2
+     NUM_ACTIONS = 2
+     NUM_STATES = NUM_ACTIONS**NUM_AGENTS + 1
+     ACTION_SPACE = Discrete(NUM_ACTIONS)
+     OBSERVATION_SPACE = Discrete(NUM_STATES)
+     PAYOUT_MATRIX = np.array(
+         [[[+4.0, +1.0], [+0.0, +0.0]], [[+0.0, +0.0], [+2.0, +2.0]]]
+     )
+     NAME = "AsymmetricIteratedBoS"
+
+
+ def define_greed_fear_matrix_game(greed, fear):
+     class GreedFearGame(TwoPlayersTwoActionsInfoMixin, MatrixSequentialSocialDilemma):
+         NUM_AGENTS = 2
+         NUM_ACTIONS = 2
+         NUM_STATES = NUM_ACTIONS**NUM_AGENTS + 1
+         ACTION_SPACE = Discrete(NUM_ACTIONS)
+         OBSERVATION_SPACE = Discrete(NUM_STATES)
+         R = 3
+         P = 1
+         T = R + greed
+         S = P - fear
+         PAYOUT_MATRIX = np.array([[[R, R], [S, T]], [[T, S], [P, P]]])
+         NAME = "IteratedGreedFear"
+
+         def __str__(self):
+             return f"{self.NAME} with greed={greed} and fear={fear}"
+
+     return GreedFearGame
+
+
+ class IteratedBoSAndPD(
+     NPlayersNDiscreteActionsInfoMixin, MatrixSequentialSocialDilemma
+ ):
+     """
+     A two-agent environment for the BOTS + PD game.
+     """
+
+     NUM_AGENTS = 2
+     NUM_ACTIONS = 3
+     NUM_STATES = NUM_ACTIONS**NUM_AGENTS + 1
+     ACTION_SPACE = Discrete(NUM_ACTIONS)
+     OBSERVATION_SPACE = Discrete(NUM_STATES)
+     PAYOUT_MATRIX = np.array(
+         [
+             [[3.5, +1], [+0, +0], [-3, +2]],
+             [[+0.0, +0], [+1, +3], [-3, +2]],
+             [[+2.0, -3], [+2, -3], [-1, -1]],
+         ]
+     )
+     NAME = "IteratedBoSAndPD"
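A minimal sketch of stepping one of the concrete games defined above, assuming only the defaults from the base class (agent ids "player_row" and "player_col"):

from ray.rllib.examples.envs.classes.matrix_sequential_social_dilemma import (
    IteratedPrisonersDilemma,
)

env = IteratedPrisonersDilemma(config={"max_steps": 20})
obs, infos = env.reset(seed=0)
# Action 1 for both players selects PAYOUT_MATRIX[1][1] = (-2, -2).
obs, rewards, terminateds, truncateds, infos = env.step(
    {"player_row": 1, "player_col": 1}
)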
deepseek/lib/python3.10/site-packages/ray/rllib/examples/envs/classes/memory_leaking_env.py ADDED
@@ -0,0 +1,35 @@
+ import logging
+ import uuid
+
+ from ray.rllib.examples.envs.classes.random_env import RandomEnv
+ from ray.rllib.utils.annotations import override
+
+ logger = logging.getLogger(__name__)
+
+
+ class MemoryLeakingEnv(RandomEnv):
+     """An env that leaks very little memory.
+
+     Useful for proving that our memory-leak tests can catch the
+     slightest leaks.
+     """
+
+     def __init__(self, config=None):
+         super().__init__(config)
+         self._leak = {}
+         self._steps_after_reset = 0
+
+     @override(RandomEnv)
+     def reset(self, *, seed=None, options=None):
+         self._steps_after_reset = 0
+         return super().reset(seed=seed, options=options)
+
+     @override(RandomEnv)
+     def step(self, action):
+         self._steps_after_reset += 1
+
+         # Only leak once an episode.
+         if self._steps_after_reset == 2:
+             self._leak[uuid.uuid4().hex.upper()] = 1
+
+         return super().step(action)
deepseek/lib/python3.10/site-packages/ray/rllib/examples/envs/classes/mock_env.py ADDED
@@ -0,0 +1,220 @@
+ import gymnasium as gym
+ import numpy as np
+ from typing import Optional
+
+ from ray.rllib.env.vector_env import VectorEnv
+ from ray.rllib.utils.annotations import override
+
+
+ class MockEnv(gym.Env):
+     """Mock environment for testing purposes.
+
+     Observation=0, reward=1.0, episode-len is configurable.
+     Actions are ignored.
+     """
+
+     def __init__(self, episode_length, config=None):
+         self.episode_length = episode_length
+         self.config = config
+         self.i = 0
+         self.observation_space = gym.spaces.Discrete(1)
+         self.action_space = gym.spaces.Discrete(2)
+
+     def reset(self, *, seed=None, options=None):
+         self.i = 0
+         return 0, {}
+
+     def step(self, action):
+         self.i += 1
+         terminated = truncated = self.i >= self.episode_length
+         return 0, 1.0, terminated, truncated, {}
+
+
+ class MockEnv2(gym.Env):
+     """Mock environment for testing purposes.
+
+     Observation=ts (discrete space!), reward=100.0, episode-len is
+     configurable. Actions are ignored.
+     """
+
+     metadata = {
+         "render.modes": ["rgb_array"],
+     }
+     render_mode: Optional[str] = "rgb_array"
+
+     def __init__(self, episode_length):
+         self.episode_length = episode_length
+         self.i = 0
+         self.observation_space = gym.spaces.Discrete(self.episode_length + 1)
+         self.action_space = gym.spaces.Discrete(2)
+         self.rng_seed = None
+
+     def reset(self, *, seed=None, options=None):
+         self.i = 0
+         if seed is not None:
+             self.rng_seed = seed
+         return self.i, {}
+
+     def step(self, action):
+         self.i += 1
+         terminated = truncated = self.i >= self.episode_length
+         return self.i, 100.0, terminated, truncated, {}
+
+     def render(self):
+         # Just generate a random image here for demonstration purposes.
+         # Also see `gym/envs/classic_control/cartpole.py` for
+         # an example on how to use a Viewer object.
+         return np.random.randint(0, 256, size=(300, 400, 3), dtype=np.uint8)
+
+
+ class MockEnv3(gym.Env):
+     """Mock environment for testing purposes.
+
+     Observation=ts (discrete space!), reward=100.0, episode-len is
+     configurable. Actions are ignored.
+     """
+
+     def __init__(self, episode_length):
+         self.episode_length = episode_length
+         self.i = 0
+         self.observation_space = gym.spaces.Discrete(100)
+         self.action_space = gym.spaces.Discrete(2)
+
+     def reset(self, *, seed=None, options=None):
+         self.i = 0
+         return self.i, {"timestep": 0}
+
+     def step(self, action):
+         self.i += 1
+         terminated = truncated = self.i >= self.episode_length
+         return self.i, self.i, terminated, truncated, {"timestep": self.i}
+
+
+ class VectorizedMockEnv(VectorEnv):
+     """Vectorized version of the MockEnv.
+
+     Contains `num_envs` MockEnv instances, each one having its own
+     `episode_length` horizon.
+     """
+
+     def __init__(self, episode_length, num_envs):
+         super().__init__(
+             observation_space=gym.spaces.Discrete(1),
+             action_space=gym.spaces.Discrete(2),
+             num_envs=num_envs,
+         )
+         self.envs = [MockEnv(episode_length) for _ in range(num_envs)]
+
+     @override(VectorEnv)
+     def vector_reset(self, *, seeds=None, options=None):
+         seeds = seeds or [None] * self.num_envs
+         options = options or [None] * self.num_envs
+         obs_and_infos = [
+             e.reset(seed=seeds[i], options=options[i]) for i, e in enumerate(self.envs)
+         ]
+         return [oi[0] for oi in obs_and_infos], [oi[1] for oi in obs_and_infos]
+
+     @override(VectorEnv)
+     def reset_at(self, index, *, seed=None, options=None):
+         return self.envs[index].reset(seed=seed, options=options)
+
+     @override(VectorEnv)
+     def vector_step(self, actions):
+         obs_batch, rew_batch, terminated_batch, truncated_batch, info_batch = (
+             [],
+             [],
+             [],
+             [],
+             [],
+         )
+         for i in range(len(self.envs)):
+             obs, rew, terminated, truncated, info = self.envs[i].step(actions[i])
+             obs_batch.append(obs)
+             rew_batch.append(rew)
+             terminated_batch.append(terminated)
+             truncated_batch.append(truncated)
+             info_batch.append(info)
+         return obs_batch, rew_batch, terminated_batch, truncated_batch, info_batch
+
+     @override(VectorEnv)
+     def get_sub_environments(self):
+         return self.envs
+
+
+ class MockVectorEnv(VectorEnv):
+     """A custom vector env that uses a single(!) CartPole sub-env.
+
+     However, this env pretends to be a vectorized one to illustrate how one
+     could create custom VectorEnvs w/o the need for actual vectorizations of
+     sub-envs under the hood.
+     """
+
+     def __init__(self, episode_length, mocked_num_envs):
+         self.env = gym.make("CartPole-v1")
+         super().__init__(
+             observation_space=self.env.observation_space,
+             action_space=self.env.action_space,
+             num_envs=mocked_num_envs,
+         )
+         self.episode_len = episode_length
+         self.ts = 0
+
+     @override(VectorEnv)
+     def vector_reset(self, *, seeds=None, options=None):
+         # Since we only have one underlying sub-environment, just use the first seed
+         # and the first options dict (the user of this env thinks, there are
+         # `self.num_envs` sub-environments and sends that many seeds/options).
+         seeds = seeds or [None]
+         options = options or [None]
+         obs, infos = self.env.reset(seed=seeds[0], options=options[0])
+         # Simply repeat the single obs/infos to pretend we really have
+         # `self.num_envs` sub-environments.
+         return (
+             [obs for _ in range(self.num_envs)],
+             [infos for _ in range(self.num_envs)],
+         )
+
+     @override(VectorEnv)
+     def reset_at(self, index, *, seed=None, options=None):
+         self.ts = 0
+         return self.env.reset(seed=seed, options=options)
+
+     @override(VectorEnv)
+     def vector_step(self, actions):
+         self.ts += 1
+         # Apply all actions sequentially to the same env.
+         # Whether this would make a lot of sense is debatable.
+         obs_batch, rew_batch, terminated_batch, truncated_batch, info_batch = (
+             [],
+             [],
+             [],
+             [],
+             [],
+         )
+         for i in range(self.num_envs):
+             obs, rew, terminated, truncated, info = self.env.step(actions[i])
+             # Artificially truncate once time step limit has been reached.
+             # Note: Also terminate/truncate, when underlying CartPole is
+             # terminated/truncated.
+             if self.ts >= self.episode_len:
+                 truncated = True
+             obs_batch.append(obs)
+             rew_batch.append(rew)
+             terminated_batch.append(terminated)
+             truncated_batch.append(truncated)
+             info_batch.append(info)
+             if terminated or truncated:
+                 remaining = self.num_envs - (i + 1)
+                 obs_batch.extend([obs for _ in range(remaining)])
+                 rew_batch.extend([rew for _ in range(remaining)])
+                 terminated_batch.extend([terminated for _ in range(remaining)])
+                 truncated_batch.extend([truncated for _ in range(remaining)])
+                 info_batch.extend([info for _ in range(remaining)])
+                 break
+         return obs_batch, rew_batch, terminated_batch, truncated_batch, info_batch
+
+     @override(VectorEnv)
+     def get_sub_environments(self):
+         # You may also leave this method as-is, in which case, it would
+         # return an empty list.
+         return [self.env for _ in range(self.num_envs)]
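A short, purely illustrative sketch of exercising the mock classes above (no assumptions beyond the classes defined in this file):

from ray.rllib.examples.envs.classes.mock_env import MockEnv, VectorizedMockEnv

env = MockEnv(episode_length=5)
obs, info = env.reset()
obs, reward, terminated, truncated, info = env.step(0)  # obs is always 0, reward 1.0

vec_env = VectorizedMockEnv(episode_length=5, num_envs=3)
obs_batch, info_batch = vec_env.vector_reset()
step_results = vec_env.vector_step([0, 1, 0])  # five batched lists, one entry per sub-env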
deepseek/lib/python3.10/site-packages/ray/rllib/examples/envs/classes/pendulum_mass.py ADDED
@@ -0,0 +1,33 @@
+ from gymnasium.envs.classic_control.pendulum import PendulumEnv
+ from gymnasium.utils import EzPickle
+ import numpy as np
+
+ from ray.rllib.env.apis.task_settable_env import TaskSettableEnv
+
+
+ class PendulumMassEnv(PendulumEnv, EzPickle, TaskSettableEnv):
+     """PendulumMassEnv varies the weight of the pendulum
+
+     Tasks are defined to be weight uniformly sampled between [0.5,2]
+     """
+
+     def sample_tasks(self, n_tasks):
+         # Sample new pendulum masses (random floats between 0.5 and 2).
+         return np.random.uniform(low=0.5, high=2.0, size=(n_tasks,))
+
+     def set_task(self, task):
+         """
+         Args:
+             task: Task of the meta-learning environment (here: mass of
+                 the pendulum).
+         """
+         # self.m is the mass property of the pendulum.
+         self.m = task
+
+     def get_task(self):
+         """
+         Returns:
+             float: The current mass of the pendulum (self.m in the PendulumEnv
+                 object).
+         """
+         return self.m
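A hedged usage sketch for the task-settable env above; the sampled mass values are whatever `sample_tasks()` draws from [0.5, 2.0]:

from ray.rllib.examples.envs.classes.pendulum_mass import PendulumMassEnv

env = PendulumMassEnv()
masses = env.sample_tasks(n_tasks=3)   # three masses drawn uniformly from [0.5, 2.0]
env.set_task(masses[0])                # overrides self.m on the underlying PendulumEnv
current_mass = env.get_task()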
deepseek/lib/python3.10/site-packages/ray/rllib/examples/envs/classes/stateless_cartpole.py ADDED
@@ -0,0 +1,39 @@
+ from gymnasium.spaces import Box
+ import numpy as np
+
+ from gymnasium.envs.classic_control import CartPoleEnv
+
+
+ class StatelessCartPole(CartPoleEnv):
+     """Partially observable variant of the CartPole gym environment.
+
+     https://github.com/openai/gym/blob/master/gym/envs/classic_control/
+     cartpole.py
+
+     We delete the x- and angular velocity components of the state, so that it
+     can only be solved by a memory enhanced model (policy).
+     """
+
+     def __init__(self, config=None):
+         super().__init__()
+
+         # Fix our observation-space (remove 2 velocity components).
+         high = np.array(
+             [
+                 self.x_threshold * 2,
+                 self.theta_threshold_radians * 2,
+             ],
+             dtype=np.float32,
+         )
+
+         self.observation_space = Box(low=-high, high=high, dtype=np.float32)
+
+     def step(self, action):
+         next_obs, reward, done, truncated, info = super().step(action)
+         # next_obs is [x-pos, x-veloc, angle, angle-veloc]
+         return np.array([next_obs[0], next_obs[2]]), reward, done, truncated, info
+
+     def reset(self, *, seed=None, options=None):
+         init_obs, init_info = super().reset(seed=seed, options=options)
+         # init_obs is [x-pos, x-veloc, angle, angle-veloc]
+         return np.array([init_obs[0], init_obs[2]]), init_info
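An illustrative sketch, assuming nothing beyond the class above, showing that only position and angle remain observable:

from ray.rllib.examples.envs.classes.stateless_cartpole import StatelessCartPole

env = StatelessCartPole()
obs, info = env.reset(seed=0)
# obs has shape (2,): [x-pos, angle]; both velocity components are hidden.
obs, reward, terminated, truncated, info = env.step(env.action_space.sample())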
deepseek/lib/python3.10/site-packages/ray/rllib/examples/fault_tolerance/__init__.py ADDED
File without changes
deepseek/lib/python3.10/site-packages/ray/rllib/examples/fault_tolerance/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (188 Bytes).
deepseek/lib/python3.10/site-packages/ray/rllib/examples/fault_tolerance/__pycache__/crashing_and_stalling_env.cpython-310.pyc ADDED
Binary file (6.98 kB).
deepseek/lib/python3.10/site-packages/ray/rllib/examples/fault_tolerance/crashing_and_stalling_env.py ADDED
@@ -0,0 +1,177 @@
+ """Example demonstrating that RLlib can learn (at scale) in unstable environments.
+
+ This script uses the `CartPoleCrashing` environment, an adapted cartpole env whose
+ instability is configurable through setting the probability of a crash and/or stall
+ (sleep for a configurable amount of time) during `reset()` and/or `step()`.
+
+ RLlib has two major flags for EnvRunner fault tolerance, which can be independently
+ set to True:
+ 1) `config.fault_tolerance(restart_failed_sub_environments=True)` causes only the
+ (gymnasium) environment object on an EnvRunner to be closed (try calling `close()` on
+ the faulty object), garbage collected, and finally recreated from scratch. Note that
+ during this process, the containing EnvRunner remaing up and running and sampling
+ simply continues after the env recycling. This is the lightest and fastest form of
+ fault tolerance and should be attempted first.
+ 2) `config.fault_tolerance(restart_failed_env_runners=True)` causes the entire
+ EnvRunner (a Ray remote actor) to be restarted. This restart logically includes the
+ gymnasium environment, the RLModule, and all connector pipelines on the EnvRunner.
+ Use this option only if you face problems with the first option
+ (restart_failed_sub_environments=True), such as incomplete cleanups and memory leaks.
+
+
+ How to run this script
+ ----------------------
+ `python [script file name].py --enable-new-api-stack
+
+ You can switch on the fault tolerant behavior (1) (restart_failed_sub_environments)
+ through the `--restart-failed-envs` flag. If this flag is not set, the script will
+ recreate the entire (faulty) EnvRunner.
+
+ You can switch on stalling (besides crashing) through the `--stall` command line flag.
+ If set, besides crashing on `reset()` and/or `step()`, there is also a chance of
+ stalling for a few seconds on each of these events.
+
+ For debugging, use the following additional command line options
+ `--no-tune --num-env-runners=0`
+ which should allow you to set breakpoints anywhere in the RLlib code and
+ have the execution stop there for inspection and debugging.
+
+ For logging to your WandB account, use:
+ `--wandb-key=[your WandB API key] --wandb-project=[some project name]
+ --wandb-run-name=[optional: WandB run name (within the defined project)]`
+
+
+ Results to expect
+ -----------------
+ You should see the following (or very similar) console output when running this script
+ with:
+ `--algo=PPO --stall --restart-failed-envs --stop-reward=450.0`
+ +---------------------+------------+----------------+--------+------------------+
+ | Trial name          | status     | loc            |   iter |   total time (s) |
+ |---------------------+------------+----------------+--------+------------------+
+ | PPO_env_ba39b_00000 | TERMINATED | 127.0.0.1:1401 |     22 |          133.497 |
+ +---------------------+------------+----------------+--------+------------------+
+ +------------------------+------------------------+------------------------+
+ |   episode_return_mean  |   num_episodes_lifetime|   num_env_steps_trained|
+ |                        |                        |              _lifetime |
+ |------------------------+------------------------+------------------------|
+ |                 450.24 |                    542 |                  88628 |
+ +------------------------+------------------------+------------------------+
+
+ For APPO and testing restarting the entire EnvRunners, you could run the script with:
+ `--algo=APPO --stall --stop-reward=450.0`
+ +----------------------+------------+----------------+--------+------------------+
+ | Trial name           | status     | loc            |   iter |   total time (s) |
+ |----------------------+------------+----------------+--------+------------------+
+ | APPO_env_ba39b_00000 | TERMINATED | 127.0.0.1:4653 |     10 |          101.531 |
+ +----------------------+------------+----------------+--------+------------------+
+ +------------------------+------------------------+------------------------+
+ |   episode_return_mean  |   num_episodes_lifetime|   num_env_steps_trained|
+ |                        |                        |              _lifetime |
+ |------------------------+------------------------+------------------------|
+ |                 478.85 |                   2546 |                 321500 |
+ +------------------------+------------------------+------------------------+
+ """
+ from gymnasium.wrappers import TimeLimit
+
+ from ray import tune
+ from ray.rllib.core.rl_module.default_model_config import DefaultModelConfig
+ from ray.rllib.examples.envs.classes.cartpole_crashing import (
+     CartPoleCrashing,
+     MultiAgentCartPoleCrashing,
+ )
+ from ray.rllib.utils.test_utils import (
+     add_rllib_example_script_args,
+     run_rllib_example_script_experiment,
+ )
+
+ parser = add_rllib_example_script_args(
+     default_reward=450.0,
+     default_timesteps=2000000,
+ )
+ parser.set_defaults(
+     enable_new_api_stack=True,
+     num_env_runners=4,
+     num_envs_per_env_runner=2,
+ )
+ # Use `parser` to add your own custom command line options to this script
+ # and (if needed) use their values to set up `config` below.
+ parser.add_argument(
+     "--stall",
+     action="store_true",
+     help="Whether to also stall the env from time to time",
+ )
+ parser.add_argument(
+     "--restart-failed-envs",
+     action="store_true",
+     help="Whether to restart a failed environment (vs restarting the entire "
+     "EnvRunner).",
+ )
+
+
+ if __name__ == "__main__":
+     args = parser.parse_args()
+
+     # Register our environment with tune.
+     if args.num_agents > 0:
+         tune.register_env("env", lambda cfg: MultiAgentCartPoleCrashing(cfg))
+     else:
+         tune.register_env(
+             "env",
+             lambda cfg: TimeLimit(CartPoleCrashing(cfg), max_episode_steps=500),
+         )
+
+     base_config = (
+         tune.registry.get_trainable_cls(args.algo)
+         .get_default_config()
+         .environment(
+             "env",
+             env_config={
+                 "num_agents": args.num_agents,
+                 # Probability to crash during step().
+                 "p_crash": 0.0001,
+                 # Probability to crash during reset().
+                 "p_crash_reset": 0.001,
+                 "crash_on_worker_indices": [1, 2],
+                 "init_time_s": 2.0,
+                 # Probability to stall during step().
+                 "p_stall": 0.0005,
+                 # Probability to stall during reset().
+                 "p_stall_reset": 0.001,
+                 # Stall from 2 to 5sec (or 0.0 if --stall not set).
+                 "stall_time_sec": (2, 5) if args.stall else 0.0,
+                 # EnvRunner indices to stall on.
+                 "stall_on_worker_indices": [2, 3],
+             },
+         )
+         # Switch on resiliency.
+         .fault_tolerance(
+             # Recreate any failed EnvRunners.
+             restart_failed_env_runners=True,
+             # Restart any failed environment (w/o recreating the EnvRunner). Note that
+             # this is the much faster option.
+             restart_failed_sub_environments=args.restart_failed_envs,
+         )
+     )
+
+     # Use more stabilizing hyperparams for APPO.
+     if args.algo == "APPO":
+         base_config.training(
+             grad_clip=40.0,
+             entropy_coeff=0.0,
+             vf_loss_coeff=0.05,
+         )
+         base_config.rl_module(
+             model_config=DefaultModelConfig(vf_share_layers=True),
+         )
+
+     # Add a simple multi-agent setup.
+     if args.num_agents > 0:
+         base_config.multi_agent(
+             policies={f"p{i}" for i in range(args.num_agents)},
+             policy_mapping_fn=lambda aid, *a, **kw: f"p{aid}",
+         )
+
+     run_rllib_example_script_experiment(base_config, args=args)
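The two fault-tolerance modes described in the script's docstring can also be set directly on any AlgorithmConfig; a minimal sketch, assuming PPO and an environment already registered under the name "env" as in the script above:

from ray.rllib.algorithms.ppo import PPOConfig

config = (
    PPOConfig()
    .environment("env")
    .fault_tolerance(
        # Option 2: restart the whole EnvRunner actor on failure.
        restart_failed_env_runners=True,
        # Option 1 (lighter): only recreate the failed sub-environment.
        restart_failed_sub_environments=True,
    )
)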
deepseek/lib/python3.10/site-packages/ray/rllib/examples/inference/__init__.py ADDED
File without changes
deepseek/lib/python3.10/site-packages/ray/rllib/examples/inference/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (182 Bytes).
deepseek/lib/python3.10/site-packages/ray/rllib/examples/inference/__pycache__/policy_inference_after_training_with_lstm.cpython-310.pyc ADDED
Binary file (4.08 kB).
deepseek/lib/python3.10/site-packages/ray/rllib/examples/inference/policy_inference_after_training.py ADDED
@@ -0,0 +1,188 @@
+ """Example on how to compute actions in production on an already trained policy.
+
+ This example uses the simplest setup possible: An RLModule (policy net) recovered
+ from a checkpoint and a manual env-loop (CartPole-v1). No ConnectorV2s or EnvRunners are
+ used in this example.
+
+ This example:
+     - shows how to use an already existing checkpoint to extract a single-agent RLModule
+       from (our policy network).
+     - shows how to setup this recovered policy net for action computations (with or
+       without using exploration).
+     - shows have the policy run through a very simple gymnasium based env-loop, w/o
+       using RLlib's ConnectorV2s or EnvRunners.
+
+
+ How to run this script
+ ----------------------
+ `python [script file name].py --enable-new-api-stack --stop-reward=200.0`
+
+ Use the `--explore-during-inference` option to switch on exploratory behavior
+ during inference. Normally, you should not explore during inference, though,
+ unless your environment has a stochastic optimal solution.
+ Use the `--num-episodes-during-inference=[int]` option to set the number of
+ episodes to run through during the inference phase using the restored RLModule.
+
+ For debugging, use the following additional command line options
+ `--no-tune --num-env-runners=0`
+ which should allow you to set breakpoints anywhere in the RLlib code and
+ have the execution stop there for inspection and debugging.
+
+ Note that the shown GPU settings in this script also work in case you are not
+ running via tune, but instead are using the `--no-tune` command line option.
+
+ For logging to your WandB account, use:
+ `--wandb-key=[your WandB API key] --wandb-project=[some project name]
+ --wandb-run-name=[optional: WandB run name (within the defined project)]`
+
+ You can visualize experiment results in ~/ray_results using TensorBoard.
+
+
+ Results to expect
+ -----------------
+
+ For the training step - depending on your `--stop-reward` setting, you should see
+ something similar to this:
+
+ Number of trials: 1/1 (1 TERMINATED)
+ +-----------------------------+------------+-----------------+--------+
+ | Trial name                  | status     | loc             |   iter |
+ |-----------------------------+------------+-----------------+--------+
+ | PPO_CartPole-v1_6660c_00000 | TERMINATED | 127.0.0.1:43566 |      8 |
+ +-----------------------------+------------+-----------------+--------+
+ +------------------+------------------------+------------------------+
+ |   total time (s) |   num_env_steps_sampled|   num_env_steps_trained|
+ |                  |              _lifetime |              _lifetime |
+ +------------------+------------------------+------------------------+
+ |          21.0283 |                  32000 |                  32000 |
+ +------------------+------------------------+------------------------+
+
+ Then, after restoring the RLModule for the inference phase, your output should
+ look similar to:
+
+ Training completed. Restoring new RLModule for action inference.
+ Episode done: Total reward = 500.0
+ Episode done: Total reward = 500.0
+ Episode done: Total reward = 500.0
+ Episode done: Total reward = 500.0
+ Episode done: Total reward = 500.0
+ Episode done: Total reward = 500.0
+ Episode done: Total reward = 500.0
+ Episode done: Total reward = 500.0
+ Episode done: Total reward = 500.0
+ Episode done: Total reward = 500.0
+ Done performing action inference through 10 Episodes
+ """
+ import gymnasium as gym
+ import numpy as np
+ import os
+
+ from ray.rllib.core import DEFAULT_MODULE_ID
+ from ray.rllib.core.columns import Columns
+ from ray.rllib.core.rl_module.rl_module import RLModule
+ from ray.rllib.utils.framework import try_import_torch
+ from ray.rllib.utils.numpy import convert_to_numpy, softmax
+ from ray.rllib.utils.metrics import (
+     ENV_RUNNER_RESULTS,
+     EPISODE_RETURN_MEAN,
+ )
+ from ray.rllib.utils.test_utils import (
+     add_rllib_example_script_args,
+     run_rllib_example_script_experiment,
+ )
+ from ray.tune.registry import get_trainable_cls
+
+ torch, _ = try_import_torch()
+
+ parser = add_rllib_example_script_args(default_reward=200.0)
+ parser.set_defaults(
+     # Make sure that - by default - we produce checkpoints during training.
+     checkpoint_freq=1,
+     checkpoint_at_end=True,
+     # Use CartPole-v1 by default.
+     env="CartPole-v1",
+     # Script only runs on new API stack.
+     enable_new_api_stack=True,
+ )
+ parser.add_argument(
+     "--explore-during-inference",
+     action="store_true",
+     help="Whether the trained policy should use exploration during action "
+     "inference.",
+ )
+ parser.add_argument(
+     "--num-episodes-during-inference",
+     type=int,
+     default=10,
+     help="Number of episodes to do inference over (after restoring from a checkpoint).",
+ )
+
+
+ if __name__ == "__main__":
+     args = parser.parse_args()
+
+     assert (
+         args.enable_new_api_stack
+     ), "Must set --enable-new-api-stack when running this script!"
+
+     base_config = get_trainable_cls(args.algo).get_default_config()
+
+     print("Training policy until desired reward/timesteps/iterations. ...")
+     results = run_rllib_example_script_experiment(base_config, args)
+
+     print("Training completed. Restoring new RLModule for action inference.")
+     # Get the last checkpoint from the above training run.
+     best_result = results.get_best_result(
+         metric=f"{ENV_RUNNER_RESULTS}/{EPISODE_RETURN_MEAN}", mode="max"
+     )
+     # Create new RLModule and restore its state from the last algo checkpoint.
+     # Note that the checkpoint for the RLModule can be found deeper inside the algo
141
+ # checkpoint's subdirectories ([algo dir] -> "learner_group/" -> "learner/" ->
142
+ # "rl_module/" -> "[module ID]"):
143
+ rl_module = RLModule.from_checkpoint(
144
+ os.path.join(
145
+ best_result.checkpoint.path,
146
+ "learner_group",
147
+ "learner",
148
+ "rl_module",
149
+ DEFAULT_MODULE_ID,
150
+ )
151
+ )
152
+
153
+ # Create an env to do inference in.
154
+ env = gym.make(args.env)
155
+ obs, info = env.reset()
156
+
157
+ num_episodes = 0
158
+ episode_return = 0.0
159
+
160
+ while num_episodes < args.num_episodes_during_inference:
161
+ # Compute an action using a B=1 observation "batch".
162
+ input_dict = {Columns.OBS: torch.from_numpy(obs).unsqueeze(0)}
163
+ # No exploration.
164
+ if not args.explore_during_inference:
165
+ rl_module_out = rl_module.forward_inference(input_dict)
166
+ # Using exploration.
167
+ else:
168
+ rl_module_out = rl_module.forward_exploration(input_dict)
169
+
170
+ # For discrete action spaces used here, normally, an RLModule "only"
171
+ # produces action logits, from which we then have to sample.
172
+ # However, you can also write custom RLModules that output actions
173
+ # directly, performing the sampling step already inside their
174
+ # `forward_...()` methods.
175
+ logits = convert_to_numpy(rl_module_out[Columns.ACTION_DIST_INPUTS])
176
+ # Perform the sampling step in numpy for simplicity.
177
+ action = np.random.choice(env.action_space.n, p=softmax(logits[0]))
178
+ # Send the computed action to the env.
179
+ obs, reward, terminated, truncated, _ = env.step(action)
180
+ episode_return += reward
181
+ # Is the episode `done`? -> Reset.
182
+ if terminated or truncated:
183
+ print(f"Episode done: Total reward = {episode_return}")
184
+ obs, info = env.reset()
185
+ num_episodes += 1
186
+ episode_return = 0.0
187
+
188
+ print(f"Done performing action inference through {num_episodes} Episodes")
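A side note on the sampling step in the script above: `forward_inference()` still returns action-distribution logits, and the script samples an action from their softmax. If a fully deterministic rollout is preferred, taking the argmax of the logits is a common alternative. A minimal NumPy sketch of both variants (the `logits` values are made up for illustration and are not taken from the script):

import numpy as np

def softmax(x):
    # Numerically stable softmax.
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

# Dummy logits for a Discrete(2) action space (illustrative only).
logits = np.array([1.2, -0.4], dtype=np.float32)

# Stochastic variant (what the script above does): sample from the softmax.
sampled_action = np.random.choice(len(logits), p=softmax(logits))

# Deterministic variant: greedily pick the highest-logit action.
greedy_action = int(np.argmax(logits))

print(sampled_action, greedy_action)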
deepseek/lib/python3.10/site-packages/ray/rllib/examples/inference/policy_inference_after_training_w_connector.py ADDED
@@ -0,0 +1,274 @@
1
+ """Example on how to compute actions in production on an already trained policy.
2
+
3
+ This example uses a more complex setup including a gymnasium environment, an
4
+ RLModule (one or more neural networks/policies), an env-to-module/module-to-env
5
+ ConnectorV2 pair, and an Episode object to store the ongoing episode in.
6
+ The RLModule contains an LSTM that requires its own previous STATE_OUT as new input
7
+ at every episode step to compute a new action.
8
+
9
+ This example:
10
+ - shows how to use an already existing checkpoint to extract a single-agent RLModule
11
+ (our policy network) from it.
12
+ - shows how to set up this recovered policy net for action computations (with or
13
+ without using exploration).
14
+ - shows how to create a more complex env-loop in which the action-computing RLModule
15
+ requires its own previous state outputs as new input and how to use RLlib's Episode
16
+ APIs to achieve this.
17
+
18
+
19
+ How to run this script
20
+ ----------------------
21
+ `python [script file name].py --enable-new-api-stack --stop-reward=200.0`
22
+
23
+ Use the `--explore-during-inference` option to switch on exploratory behavior
24
+ during inference. Normally, you should not explore during inference, though,
25
+ unless your environment has a stochastic optimal solution.
26
+ Use the `--num-episodes-during-inference=[int]` option to set the number of
27
+ episodes to run through during the inference phase using the restored RLModule.
28
+
29
+ For debugging, use the following additional command line options
30
+ `--no-tune --num-env-runners=0`
31
+ which should allow you to set breakpoints anywhere in the RLlib code and
32
+ have the execution stop there for inspection and debugging.
33
+
34
+ Note that the shown GPU settings in this script also work in case you are not
35
+ running via tune, but instead are using the `--no-tune` command line option.
36
+
37
+ For logging to your WandB account, use:
38
+ `--wandb-key=[your WandB API key] --wandb-project=[some project name]
39
+ --wandb-run-name=[optional: WandB run name (within the defined project)]`
40
+
41
+ You can visualize experiment results in ~/ray_results using TensorBoard.
42
+
43
+
44
+ Results to expect
45
+ -----------------
46
+
47
+ For the training step - depending on your `--stop-reward` setting, you should see
48
+ something similar to this:
49
+
50
+ Number of trials: 1/1 (1 TERMINATED)
51
+ +--------------------------------+------------+-----------------+--------+
52
+ | Trial name | status | loc | iter |
53
+ | | | | |
54
+ |--------------------------------+------------+-----------------+--------+
55
+ | PPO_stateless-cart_cc890_00000 | TERMINATED | 127.0.0.1:72238 | 7 |
56
+ +--------------------------------+------------+-----------------+--------+
57
+ +------------------+------------------------+------------------------+
58
+ | total time (s) | num_env_steps_sample | num_env_steps_traine |
59
+ | | d_lifetime | d_lifetime |
60
+ +------------------+------------------------+------------------------+
61
+ | 31.9655 | 28000 | 28000 |
62
+ +------------------+------------------------+------------------------+
63
+
64
+ Then, after restoring the RLModule for the inference phase, your output should
65
+ look similar to:
66
+
67
+ Training completed. Creating an env-loop for inference ...
68
+ Env ...
69
+ Env-to-module ConnectorV2 ...
70
+ RLModule restored ...
71
+ Module-to-env ConnectorV2 ...
72
+ Episode done: Total reward = 103.0
73
+ Episode done: Total reward = 90.0
74
+ Episode done: Total reward = 100.0
75
+ Episode done: Total reward = 111.0
76
+ Episode done: Total reward = 85.0
77
+ Episode done: Total reward = 90.0
78
+ Episode done: Total reward = 100.0
79
+ Episode done: Total reward = 102.0
80
+ Episode done: Total reward = 97.0
81
+ Episode done: Total reward = 81.0
82
+ Done performing action inference through 10 Episodes
83
+ """
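The core point of the docstring above is the recurrent-state bookkeeping: an LSTM module must receive its own previous state output at every step, and that state must be re-initialized whenever a new episode begins. In the script below, the env-to-module connector together with the `SingleAgentEpisode` object handles this automatically; the hand-rolled pattern they replace looks roughly like the following sketch (`policy_step` and `env_step` are made-up placeholder functions, not RLlib APIs):

import numpy as np

def policy_step(obs, state):
    # Placeholder recurrent policy: the new state depends on obs and the old state.
    new_state = 0.9 * state + 0.1 * float(np.sum(obs))
    action = int(new_state > 0.0)
    return action, new_state

def env_step(action):
    # Placeholder env: random next obs, a reward, and a 5% chance of ending.
    return np.random.randn(4), float(action), np.random.rand() < 0.05

state = 0.0                    # initial recurrent state
obs = np.random.randn(4)       # initial observation
for _ in range(100):
    action, state = policy_step(obs, state)  # feed the previous state output back in
    obs, reward, done = env_step(action)
    if done:
        state = 0.0            # reset the recurrent state at the episode boundary
        obs = np.random.randn(4)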
84
+ import os
85
+
86
+ from ray.rllib.connectors.env_to_module import EnvToModulePipeline
87
+ from ray.rllib.connectors.module_to_env import ModuleToEnvPipeline
88
+ from ray.rllib.core import (
89
+ COMPONENT_ENV_RUNNER,
90
+ COMPONENT_ENV_TO_MODULE_CONNECTOR,
91
+ COMPONENT_MODULE_TO_ENV_CONNECTOR,
92
+ COMPONENT_LEARNER_GROUP,
93
+ COMPONENT_LEARNER,
94
+ COMPONENT_RL_MODULE,
95
+ DEFAULT_MODULE_ID,
96
+ )
97
+ from ray.rllib.core.columns import Columns
98
+ from ray.rllib.core.rl_module.default_model_config import DefaultModelConfig
99
+ from ray.rllib.core.rl_module.rl_module import RLModule
100
+ from ray.rllib.env.single_agent_episode import SingleAgentEpisode
101
+ from ray.rllib.examples.envs.classes.stateless_cartpole import StatelessCartPole
102
+ from ray.rllib.utils.framework import try_import_torch
103
+ from ray.rllib.utils.metrics import (
104
+ ENV_RUNNER_RESULTS,
105
+ EPISODE_RETURN_MEAN,
106
+ )
107
+ from ray.rllib.utils.test_utils import (
108
+ add_rllib_example_script_args,
109
+ run_rllib_example_script_experiment,
110
+ )
111
+ from ray.tune.registry import get_trainable_cls, register_env
112
+
113
+ torch, _ = try_import_torch()
114
+
115
+
116
+ def _env_creator(cfg):
117
+ return StatelessCartPole(cfg)
118
+
119
+
120
+ register_env("stateless-cart", _env_creator)
121
+
122
+
123
+ parser = add_rllib_example_script_args(default_reward=200.0)
124
+ parser.set_defaults(
125
+ # Script only runs on new API stack.
126
+ enable_new_api_stack=True,
127
+ # Make sure that - by default - we produce checkpoints during training.
128
+ checkpoint_freq=1,
129
+ checkpoint_at_end=True,
130
+ # Use StatelessCartPole by default.
131
+ env="stateless-cart",
132
+ )
133
+ parser.add_argument(
134
+ "--explore-during-inference",
135
+ action="store_true",
136
+ help="Whether the trained policy should use exploration during action "
137
+ "inference.",
138
+ )
139
+ parser.add_argument(
140
+ "--num-episodes-during-inference",
141
+ type=int,
142
+ default=10,
143
+ help="Number of episodes to do inference over (after restoring from a checkpoint).",
144
+ )
145
+
146
+
147
+ if __name__ == "__main__":
148
+ args = parser.parse_args()
149
+
150
+ base_config = (
151
+ get_trainable_cls(args.algo)
152
+ .get_default_config()
153
+ .training(
154
+ num_epochs=6,
155
+ lr=0.0003,
156
+ vf_loss_coeff=0.01,
157
+ )
158
+ # Add an LSTM setup to the default RLModule used.
159
+ .rl_module(model_config=DefaultModelConfig(use_lstm=True))
160
+ )
161
+
162
+ print("Training LSTM-policy until desired reward/timesteps/iterations. ...")
163
+ results = run_rllib_example_script_experiment(base_config, args)
164
+
165
+ # Get the last checkpoint from the above training run.
166
+ metric_key = f"{ENV_RUNNER_RESULTS}/{EPISODE_RETURN_MEAN}"
167
+ best_result = results.get_best_result(metric=metric_key, mode="max")
168
+
169
+ print(
170
+ "Training completed (R="
171
+ f"{best_result.metrics[ENV_RUNNER_RESULTS][EPISODE_RETURN_MEAN]}). "
172
+ "Creating an env-loop for inference ..."
173
+ )
174
+
175
+ print("Env ...", end="")
176
+ env = _env_creator(base_config.env_config)
177
+ print(" ok")
178
+
179
+ # Create the env-to-module pipeline from the checkpoint.
180
+ print("Restore env-to-module connector from checkpoint ...", end="")
181
+ env_to_module = EnvToModulePipeline.from_checkpoint(
182
+ os.path.join(
183
+ best_result.checkpoint.path,
184
+ COMPONENT_ENV_RUNNER,
185
+ COMPONENT_ENV_TO_MODULE_CONNECTOR,
186
+ )
187
+ )
188
+ print(" ok")
189
+
190
+ print("Restore RLModule from checkpoint ...", end="")
191
+ # Create RLModule from a checkpoint.
192
+ rl_module = RLModule.from_checkpoint(
193
+ os.path.join(
194
+ best_result.checkpoint.path,
195
+ COMPONENT_LEARNER_GROUP,
196
+ COMPONENT_LEARNER,
197
+ COMPONENT_RL_MODULE,
198
+ DEFAULT_MODULE_ID,
199
+ )
200
+ )
201
+ print(" ok")
202
+
203
+ # For the module-to-env pipeline, we will use the convenient config utility.
204
+ print("Restore module-to-env connector from checkpoint ...", end="")
205
+ module_to_env = ModuleToEnvPipeline.from_checkpoint(
206
+ os.path.join(
207
+ best_result.checkpoint.path,
208
+ COMPONENT_ENV_RUNNER,
209
+ COMPONENT_MODULE_TO_ENV_CONNECTOR,
210
+ )
211
+ )
212
+ print(" ok")
213
+
214
+ # Now our setup is complete:
215
+ # [gym.Env] -> env-to-module -> [RLModule] -> module-to-env -> [gym.Env] ... repeat
216
+ num_episodes = 0
217
+
218
+ obs, _ = env.reset()
219
+ episode = SingleAgentEpisode(
220
+ observations=[obs],
221
+ observation_space=env.observation_space,
222
+ action_space=env.action_space,
223
+ )
224
+
225
+ while num_episodes < args.num_episodes_during_inference:
226
+ shared_data = {}
227
+ input_dict = env_to_module(
228
+ episodes=[episode], # ConnectorV2 pipelines operate on lists of episodes.
229
+ rl_module=rl_module,
230
+ explore=args.explore_during_inference,
231
+ shared_data=shared_data,
232
+ )
233
+ # No exploration.
234
+ if not args.explore_during_inference:
235
+ rl_module_out = rl_module.forward_inference(input_dict)
236
+ # Using exploration.
237
+ else:
238
+ rl_module_out = rl_module.forward_exploration(input_dict)
239
+
240
+ to_env = module_to_env(
241
+ batch=rl_module_out,
242
+ episodes=[episode], # ConnectorV2 pipelines operate on lists of episodes.
243
+ rl_module=rl_module,
244
+ explore=args.explore_during_inference,
245
+ shared_data=shared_data,
246
+ )
247
+ # Send the computed action to the env. Note that the RLModule and the
248
+ # connector pipelines work on batched data (B=1 in this case), whereas the Env
249
+ # is not vectorized here, so we need to use `action[0]`.
250
+ action = to_env.pop(Columns.ACTIONS)[0]
251
+ obs, reward, terminated, truncated, _ = env.step(action)
252
+ # Keep our `SingleAgentEpisode` instance updated at all times.
253
+ episode.add_env_step(
254
+ obs,
255
+ action,
256
+ reward,
257
+ terminated=terminated,
258
+ truncated=truncated,
259
+ # Same here: [0] b/c RLModule output is batched (w/ B=1).
260
+ extra_model_outputs={k: v[0] for k, v in to_env.items()},
261
+ )
262
+
263
+ # Is the episode `done`? -> Reset.
264
+ if episode.is_done:
265
+ print(f"Episode done: Total reward = {episode.get_return()}")
266
+ obs, info = env.reset()
267
+ episode = SingleAgentEpisode(
268
+ observations=[obs],
269
+ observation_space=env.observation_space,
270
+ action_space=env.action_space,
271
+ )
272
+ num_episodes += 1
273
+
274
+ print(f"Done performing action inference through {num_episodes} Episodes")
deepseek/lib/python3.10/site-packages/ray/rllib/examples/inference/policy_inference_after_training_with_lstm.py ADDED
@@ -0,0 +1,185 @@
1
+ # @OldAPIStack
2
+ """
3
+ Example showing how you can use your trained policy for inference
4
+ (computing actions) in an environment.
5
+
6
+ Includes options for LSTM-based models (--use-lstm), attention-net models
7
+ (--use-attention), and plain (non-recurrent) models.
8
+ """
9
+ import argparse
10
+ import gymnasium as gym
11
+ import numpy as np
12
+ import os
13
+
14
+ import ray
15
+ from ray import air, tune
16
+ from ray.air.constants import TRAINING_ITERATION
17
+ from ray.rllib.algorithms.algorithm import Algorithm
18
+ from ray.rllib.utils.metrics import (
19
+ ENV_RUNNER_RESULTS,
20
+ EPISODE_RETURN_MEAN,
21
+ NUM_ENV_STEPS_SAMPLED_LIFETIME,
22
+ )
23
+ from ray.tune.registry import get_trainable_cls
24
+
25
+ parser = argparse.ArgumentParser()
26
+ parser.add_argument(
27
+ "--run", type=str, default="PPO", help="The RLlib-registered algorithm to use."
28
+ )
29
+ parser.add_argument("--num-cpus", type=int, default=0)
30
+ parser.add_argument(
31
+ "--framework",
32
+ choices=["tf", "tf2", "torch"],
33
+ default="torch",
34
+ help="The DL framework specifier.",
35
+ )
36
+ parser.add_argument(
37
+ "--prev-action",
38
+ action="store_true",
39
+ help="Feed most recent action to the LSTM as part of its input.",
40
+ )
41
+ parser.add_argument(
42
+ "--prev-reward",
43
+ action="store_true",
44
+ help="Feed most recent reward to the LSTM as part of its input.",
45
+ )
46
+ parser.add_argument(
47
+ "--stop-iters",
48
+ type=int,
49
+ default=2,
50
+ help="Number of iterations to train before we do inference.",
51
+ )
52
+ parser.add_argument(
53
+ "--stop-timesteps",
54
+ type=int,
55
+ default=100000,
56
+ help="Number of timesteps to train before we do inference.",
57
+ )
58
+ parser.add_argument(
59
+ "--stop-reward",
60
+ type=float,
61
+ default=0.8,
62
+ help="Reward at which we stop training before we do inference.",
63
+ )
64
+ parser.add_argument(
65
+ "--explore-during-inference",
66
+ action="store_true",
67
+ help="Whether the trained policy should use exploration during action "
68
+ "inference.",
69
+ )
70
+ parser.add_argument(
71
+ "--num-episodes-during-inference",
72
+ type=int,
73
+ default=10,
74
+ help="Number of episodes to do inference over after training.",
75
+ )
76
+
77
+ if __name__ == "__main__":
78
+ args = parser.parse_args()
79
+
80
+ ray.init(num_cpus=args.num_cpus or None)
81
+
82
+ config = (
83
+ get_trainable_cls(args.run)
84
+ .get_default_config()
85
+ .api_stack(
86
+ enable_env_runner_and_connector_v2=False,
87
+ enable_rl_module_and_learner=False,
88
+ )
89
+ .environment("FrozenLake-v1")
90
+ # Run with tracing enabled for tf2?
91
+ .framework(args.framework)
92
+ .training(
93
+ model={
94
+ "use_lstm": True,
95
+ "lstm_cell_size": 256,
96
+ "lstm_use_prev_action": args.prev_action,
97
+ "lstm_use_prev_reward": args.prev_reward,
98
+ },
99
+ )
100
+ # Use GPUs iff `RLLIB_NUM_GPUS` env var set to > 0.
101
+ .resources(num_gpus=int(os.environ.get("RLLIB_NUM_GPUS", "0")))
102
+ )
103
+
104
+ stop = {
105
+ TRAINING_ITERATION: args.stop_iters,
106
+ NUM_ENV_STEPS_SAMPLED_LIFETIME: args.stop_timesteps,
107
+ f"{ENV_RUNNER_RESULTS}/{EPISODE_RETURN_MEAN}": args.stop_reward,
108
+ }
109
+
110
+ print("Training policy until desired reward/timesteps/iterations. ...")
111
+ tuner = tune.Tuner(
112
+ args.run,
113
+ param_space=config,
114
+ run_config=air.RunConfig(
115
+ stop=stop,
116
+ verbose=2,
117
+ checkpoint_config=air.CheckpointConfig(
118
+ checkpoint_frequency=1,
119
+ checkpoint_at_end=True,
120
+ ),
121
+ ),
122
+ )
123
+ results = tuner.fit()
124
+
125
+ print("Training completed. Restoring new Algorithm for action inference.")
126
+ # Get the last checkpoint from the above training run.
127
+ checkpoint = results.get_best_result().checkpoint
128
+ # Create new Algorithm from the last checkpoint.
129
+ algo = Algorithm.from_checkpoint(checkpoint)
130
+
131
+ # Create the env to do inference in.
132
+ env = gym.make("FrozenLake-v1")
133
+ obs, info = env.reset()
134
+
135
+ # In case the model needs previous-reward/action inputs, keep track of
136
+ # these via these variables here (we'll have to pass them into the
137
+ # compute_actions methods below).
138
+ init_prev_a = prev_a = None
139
+ init_prev_r = prev_r = None
140
+
141
+ # Set LSTM's initial internal state.
142
+ lstm_cell_size = config["model"]["lstm_cell_size"]
143
+ # range(2) b/c h- and c-states of the LSTM.
144
+ if algo.config.enable_rl_module_and_learner:
145
+ init_state = state = algo.get_policy().model.get_initial_state()
146
+ else:
147
+ init_state = state = [np.zeros([lstm_cell_size], np.float32) for _ in range(2)]
148
+ # Do we need prev-action/reward as part of the input?
149
+ if args.prev_action:
150
+ init_prev_a = prev_a = 0
151
+ if args.prev_reward:
152
+ init_prev_r = prev_r = 0.0
153
+
154
+ num_episodes = 0
155
+
156
+ while num_episodes < args.num_episodes_during_inference:
157
+ # Compute an action (`a`).
158
+ a, state_out, _ = algo.compute_single_action(
159
+ observation=obs,
160
+ state=state,
161
+ prev_action=prev_a,
162
+ prev_reward=prev_r,
163
+ explore=args.explore_during_inference,
164
+ policy_id="default_policy", # <- default value
165
+ )
166
+ # Send the computed action `a` to the env.
167
+ obs, reward, done, truncated, info = env.step(a)
168
+ # Is the episode `done`? -> Reset.
169
+ if done or truncated:
170
+ obs, info = env.reset()
171
+ num_episodes += 1
172
+ state = init_state
173
+ prev_a = init_prev_a
174
+ prev_r = init_prev_r
175
+ # Episode is still ongoing -> Continue.
176
+ else:
177
+ state = state_out
178
+ if init_prev_a is not None:
179
+ prev_a = a
180
+ if init_prev_r is not None:
181
+ prev_r = reward
182
+
183
+ algo.stop()
184
+
185
+ ray.shutdown()
deepseek/lib/python3.10/site-packages/ray/rllib/examples/learners/classes/__pycache__/intrinsic_curiosity_learners.cpython-310.pyc ADDED
Binary file (5.56 kB). View file
 
deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (184 Bytes). View file
 
deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/__pycache__/custom_heuristic_policy.cpython-310.pyc ADDED
Binary file (3.55 kB). View file
 
deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/__pycache__/different_spaces_for_agents.cpython-310.pyc ADDED
Binary file (3.96 kB). View file
 
deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/__pycache__/multi_agent_cartpole.cpython-310.pyc ADDED
Binary file (2.14 kB). View file
 
deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/__pycache__/multi_agent_pendulum.cpython-310.pyc ADDED
Binary file (2.41 kB). View file
 
deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/__pycache__/pettingzoo_independent_learning.cpython-310.pyc ADDED
Binary file (4.29 kB). View file
 
deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/__pycache__/pettingzoo_shared_value_function.cpython-310.pyc ADDED
Binary file (438 Bytes). View file
 
deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/__pycache__/rock_paper_scissors_heuristic_vs_learned.cpython-310.pyc ADDED
Binary file (4.31 kB). View file
 
deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/__pycache__/rock_paper_scissors_learned_vs_learned.cpython-310.pyc ADDED
Binary file (3.02 kB). View file
 
deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/__pycache__/self_play_league_based_with_open_spiel.cpython-310.pyc ADDED
Binary file (7.22 kB). View file
 
deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/__pycache__/self_play_with_open_spiel.cpython-310.pyc ADDED
Binary file (6.09 kB). View file
 
deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/__pycache__/two_step_game_with_grouped_agents.cpython-310.pyc ADDED
Binary file (3.58 kB). View file
 
deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/custom_heuristic_policy.py ADDED
@@ -0,0 +1,101 @@
1
+ """Example of running a custom heuristic (hand-coded) policy alongside trainable ones.
2
+
3
+ This example has two RLModules (as action computing policies):
4
+ (1) one trained by a PPOLearner
5
+ (2) one hand-coded policy that acts at random in the env (doesn't learn).
6
+
7
+ The environment is MultiAgentCartPole, in which the agents are mapped alternately to
8
+ the two policies (even agent indices -> "learnable_policy", odd -> "random").
+
9
+
10
+ How to run this script
11
+ ----------------------
12
+ `python [script file name].py --enable-new-api-stack --num-agents=2`
13
+
14
+ For debugging, use the following additional command line options
15
+ `--no-tune --num-env-runners=0`
16
+ which should allow you to set breakpoints anywhere in the RLlib code and
17
+ have the execution stop there for inspection and debugging.
18
+
19
+ For logging to your WandB account, use:
20
+ `--wandb-key=[your WandB API key] --wandb-project=[some project name]
21
+ --wandb-run-name=[optional: WandB run name (within the defined project)]`
22
+
23
+
24
+ Results to expect
25
+ -----------------
26
+ In the console output, you can see the PPO policy ("learnable_policy") does much
27
+ better than "random":
28
+
29
+ +-------------------+------------+----------+------+----------------+
30
+ | Trial name | status | loc | iter | total time (s) |
31
+ | | | | | |
32
+ |-------------------+------------+----------+------+----------------+
33
+ | PPO_multi_agen... | TERMINATED | 127. ... | 20 | 58.646 |
34
+ +-------------------+------------+----------+------+----------------+
35
+
36
+ +--------+-------------------+-----------------+--------------------+
37
+ | ts | combined reward | reward random | reward |
38
+ | | | | learnable_policy |
39
+ +--------+-------------------+-----------------+--------------------|
40
+ | 80000 | 481.26 | 78.41 | 464.41 |
41
+ +--------+-------------------+-----------------+--------------------+
42
+ """
43
+
44
+ from ray.rllib.algorithms.ppo import PPOConfig
45
+ from ray.rllib.core.rl_module.rl_module import RLModuleSpec
46
+ from ray.rllib.core.rl_module.multi_rl_module import MultiRLModuleSpec
47
+ from ray.rllib.examples.envs.classes.multi_agent import MultiAgentCartPole
48
+ from ray.rllib.examples.rl_modules.classes.random_rlm import RandomRLModule
49
+ from ray.rllib.utils.test_utils import (
50
+ add_rllib_example_script_args,
51
+ run_rllib_example_script_experiment,
52
+ )
53
+ from ray.tune.registry import register_env
54
+
55
+
56
+ parser = add_rllib_example_script_args(
57
+ default_iters=40, default_reward=500.0, default_timesteps=200000
58
+ )
59
+ parser.set_defaults(
60
+ enable_new_api_stack=True,
61
+ num_agents=2,
62
+ )
63
+
64
+
65
+ if __name__ == "__main__":
66
+ args = parser.parse_args()
67
+
68
+ assert args.num_agents == 2, "Must set --num-agents=2 when running this script!"
69
+
70
+ # Simple environment with n independent cartpole entities.
71
+ register_env(
72
+ "multi_agent_cartpole",
73
+ lambda _: MultiAgentCartPole({"num_agents": args.num_agents}),
74
+ )
75
+
76
+ base_config = (
77
+ PPOConfig()
78
+ .environment("multi_agent_cartpole")
79
+ .multi_agent(
80
+ policies={"learnable_policy", "random"},
81
+ # Map to either random behavior or PPO learning behavior based on
82
+ # the agent's ID.
83
+ policy_mapping_fn=lambda agent_id, *args, **kwargs: [
84
+ "learnable_policy",
85
+ "random",
86
+ ][agent_id % 2],
87
+ # We need to specify this here, b/c the `forward_train` method of
88
+ # `RandomRLModule` (ModuleID="random") throws a not-implemented error.
89
+ policies_to_train=["learnable_policy"],
90
+ )
91
+ .rl_module(
92
+ rl_module_spec=MultiRLModuleSpec(
93
+ rl_module_specs={
94
+ "learnable_policy": RLModuleSpec(),
95
+ "random": RLModuleSpec(module_class=RandomRLModule),
96
+ }
97
+ ),
98
+ )
99
+ )
100
+
101
+ run_rllib_example_script_experiment(base_config, args)
deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/different_spaces_for_agents.py ADDED
@@ -0,0 +1,112 @@
1
+ """
2
+ Example showing how to create a multi-agent env, in which the different agents
3
+ have different observation and action spaces.
4
+
5
+ These spaces do NOT necessarily have to be specified manually by the user. Instead,
6
+ RLlib tries to automatically infer them from the env provided spaces dicts
7
+ (agentID -> obs/act space) and the policy mapping fn (mapping agent IDs to policy IDs).
8
+
9
+ How to run this script
10
+ ----------------------
11
+ `python [script file name].py --enable-new-api-stack --num-agents=2`
12
+
13
+ For debugging, use the following additional command line options
14
+ `--no-tune --num-env-runners=0`
15
+ which should allow you to set breakpoints anywhere in the RLlib code and
16
+ have the execution stop there for inspection and debugging.
17
+
18
+ For logging to your WandB account, use:
19
+ `--wandb-key=[your WandB API key] --wandb-project=[some project name]
20
+ --wandb-run-name=[optional: WandB run name (within the defined project)]`
21
+ """
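The automatic inference mentioned above is essentially a reverse lookup: for each policy ID, find the agents that the mapping function sends to it and read their spaces off the env's per-agent space dicts. A minimal, self-contained illustration of that lookup (plain strings stand in for real gym.spaces objects; the helper names are illustrative only):

# Per-agent spaces, as a MultiAgentEnv exposes them.
observation_spaces = {"agent0": "Box(10,)", "agent1": "Box(20,)"}
action_spaces = {"agent0": "Discrete(2)", "agent1": "Discrete(3)"}

def policy_mapping_fn(agent_id):
    # Same convention as the script below: agent0 -> main0, agent1 -> main1.
    return f"main{agent_id[-1]}"

# Reverse lookup: policy ID -> (obs space, action space) of its agents.
policy_spaces = {}
for agent_id in observation_spaces:
    pid = policy_mapping_fn(agent_id)
    policy_spaces[pid] = (observation_spaces[agent_id], action_spaces[agent_id])

print(policy_spaces)
# {'main0': ('Box(10,)', 'Discrete(2)'), 'main1': ('Box(20,)', 'Discrete(3)')}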
22
+
23
+ import gymnasium as gym
24
+
25
+ from ray.rllib.env.multi_agent_env import MultiAgentEnv
26
+ from ray.rllib.utils.test_utils import (
27
+ add_rllib_example_script_args,
28
+ run_rllib_example_script_experiment,
29
+ )
30
+ from ray.tune.registry import get_trainable_cls
31
+
32
+
33
+ class BasicMultiAgentMultiSpaces(MultiAgentEnv):
34
+ """A simple multi-agent example environment where agents have different spaces.
35
+
36
+ agent0: obs=Box(10,), act=Discrete(2)
37
+ agent1: obs=Box(20,), act=Discrete(3)
38
+
39
+ The logic of the env doesn't really matter for this example. The point of this env
40
+ is to show how to use multi-agent envs, in which the different agents utilize
41
+ different obs- and action spaces.
42
+ """
43
+
44
+ def __init__(self, config=None):
45
+ self.agents = ["agent0", "agent1"]
46
+
47
+ self.terminateds = set()
48
+ self.truncateds = set()
49
+
50
+ # Provide full (preferred format) observation- and action-spaces as Dicts
51
+ # mapping agent IDs to the individual agents' spaces.
52
+ self.observation_spaces = {
53
+ "agent0": gym.spaces.Box(low=-1.0, high=1.0, shape=(10,)),
54
+ "agent1": gym.spaces.Box(low=-1.0, high=1.0, shape=(20,)),
55
+ }
56
+ self.action_spaces = {
57
+ "agent0": gym.spaces.Discrete(2),
58
+ "agent1": gym.spaces.Discrete(3),
59
+ }
60
+
61
+ super().__init__()
62
+
63
+ def reset(self, *, seed=None, options=None):
64
+ self.terminateds = set()
65
+ self.truncateds = set()
66
+ return {i: self.get_observation_space(i).sample() for i in self.agents}, {}
67
+
68
+ def step(self, action_dict):
69
+ obs, rew, terminated, truncated, info = {}, {}, {}, {}, {}
70
+ for i, action in action_dict.items():
71
+ obs[i] = self.get_observation_space(i).sample()
72
+ rew[i] = 0.0
73
+ terminated[i] = False
74
+ truncated[i] = False
75
+ info[i] = {}
76
+ terminated["__all__"] = len(self.terminateds) == len(self.agents)
77
+ truncated["__all__"] = len(self.truncateds) == len(self.agents)
78
+ return obs, rew, terminated, truncated, info
79
+
80
+
81
+ parser = add_rllib_example_script_args(
82
+ default_iters=10, default_reward=80.0, default_timesteps=10000
83
+ )
84
+
85
+
86
+ if __name__ == "__main__":
87
+ args = parser.parse_args()
88
+
89
+ assert (
90
+ args.enable_new_api_stack
91
+ ), "Must set --enable-new-api-stack when running this script!"
92
+
93
+ base_config = (
94
+ get_trainable_cls(args.algo)
95
+ .get_default_config()
96
+ .environment(env=BasicMultiAgentMultiSpaces)
97
+ .training(train_batch_size=1024)
98
+ .multi_agent(
99
+ # Use a simple set of policy IDs. Spaces for the individual policies
100
+ # are inferred automatically using reverse lookup via the
101
+ # `policy_mapping_fn` and the env provided spaces for the different
102
+ # agents. Alternatively, you could use:
103
+ # policies: {main0: PolicySpec(...), main1: PolicySpec}
104
+ policies={"main0", "main1"},
105
+ # Simple mapping fn, mapping agent0 to main0 and agent1 to main1.
106
+ policy_mapping_fn=(lambda aid, episode, **kw: f"main{aid[-1]}"),
107
+ # Only train main0.
108
+ policies_to_train=["main0"],
109
+ )
110
+ )
111
+
112
+ run_rllib_example_script_experiment(base_config, args)
deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/pettingzoo_parameter_sharing.py ADDED
@@ -0,0 +1,105 @@
1
+ """Runs the PettingZoo Waterworld multi-agent env in RLlib using single policy learning.
2
+
3
+ Unlike the `pettingzoo_independent_learning.py` example (in this same folder),
4
+ this example simply trains a single policy (shared by all agents).
5
+
6
+ See: https://pettingzoo.farama.org/environments/sisl/waterworld/
7
+ for more details on the environment.
8
+
9
+
10
+ How to run this script
11
+ ----------------------
12
+ `python [script file name].py --enable-new-api-stack --num-agents=2`
13
+
14
+ Control the number of agents and policies (RLModules) via --num-agents and
15
+ --num-policies.
16
+
17
+ This works with hundreds of agents and policies, but note that initializing
18
+ many policies might take some time.
19
+
20
+ For debugging, use the following additional command line options
21
+ `--no-tune --num-env-runners=0`
22
+ which should allow you to set breakpoints anywhere in the RLlib code and
23
+ have the execution stop there for inspection and debugging.
24
+
25
+ For logging to your WandB account, use:
26
+ `--wandb-key=[your WandB API key] --wandb-project=[some project name]
27
+ --wandb-run-name=[optional: WandB run name (within the defined project)]`
28
+
29
+
30
+ Results to expect
31
+ -----------------
32
+ The above options can reach a combined reward of roughly ~0.0 after about 500k-1M env
33
+ timesteps. Keep in mind, though, that in this setup the agents cannot exploit or
34
+ compensate for each other's mistakes (or behavior in general), because all of them
35
+ share the same policy. Hence, this example learns a more generic policy,
36
+ which might be less specialized to certain "niche exploitation opportunities" inside
37
+ the env:
38
+
39
+ +---------------------+----------+-----------------+--------+-----------------+
40
+ | Trial name | status | loc | iter | total time (s) |
41
+ |---------------------+----------+-----------------+--------+-----------------+
42
+ | PPO_env_91f49_00000 | RUNNING | 127.0.0.1:63676 | 200 | 605.176 |
43
+ +---------------------+----------+-----------------+--------+-----------------+
44
+
45
+ +--------+-------------------+-------------+
46
+ | ts | combined reward | reward p0 |
47
+ +--------+-------------------+-------------|
48
+ | 800000 | 0.323752 | 0.161876 |
49
+ +--------+-------------------+-------------+
50
+ """
51
+ from pettingzoo.sisl import waterworld_v4
52
+
53
+ from ray.rllib.core.rl_module.multi_rl_module import MultiRLModuleSpec
54
+ from ray.rllib.core.rl_module.rl_module import RLModuleSpec
55
+ from ray.rllib.env.wrappers.pettingzoo_env import PettingZooEnv
56
+ from ray.rllib.utils.test_utils import (
57
+ add_rllib_example_script_args,
58
+ run_rllib_example_script_experiment,
59
+ )
60
+ from ray.tune.registry import get_trainable_cls, register_env
61
+
62
+
63
+ parser = add_rllib_example_script_args(
64
+ default_iters=200,
65
+ default_timesteps=1000000,
66
+ default_reward=0.0,
67
+ )
68
+
69
+
70
+ if __name__ == "__main__":
71
+ args = parser.parse_args()
72
+
73
+ assert args.num_agents > 0, "Must set --num-agents > 0 when running this script!"
74
+ assert (
75
+ args.enable_new_api_stack
76
+ ), "Must set --enable-new-api-stack when running this script!"
77
+
78
+ # Here, we use the "Agent Environment Cycle" (AEC) PettingZoo environment type.
79
+ # For a "Parallel" environment example, see the rock paper scissors examples
80
+ # in this same repository folder.
81
+ register_env("env", lambda _: PettingZooEnv(waterworld_v4.env()))
82
+
83
+ base_config = (
84
+ get_trainable_cls(args.algo)
85
+ .get_default_config()
86
+ .environment("env")
87
+ .multi_agent(
88
+ policies={"p0"},
89
+ # All agents map to the exact same policy.
90
+ policy_mapping_fn=(lambda aid, *args, **kwargs: "p0"),
91
+ )
92
+ .training(
93
+ model={
94
+ "vf_share_layers": True,
95
+ },
96
+ vf_loss_coeff=0.005,
97
+ )
98
+ .rl_module(
99
+ rl_module_spec=MultiRLModuleSpec(
100
+ rl_module_specs={"p0": RLModuleSpec()},
101
+ ),
102
+ )
103
+ )
104
+
105
+ run_rllib_example_script_experiment(base_config, args)
deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/rock_paper_scissors_learned_vs_learned.py ADDED
@@ -0,0 +1,91 @@
1
+ """A simple multi-agent env in which two agents play rock-paper-scissors.
2
+
3
+ This demonstrates running two learning policies in competition, both using the same
4
+ RLlib algorithm (PPO by default).
5
+
6
+ The combined reward as well as individual rewards should roughly remain at 0.0 as no
7
+ policy should - in the long run - be able to learn a better strategy than choosing
8
+ actions at random. However, it is possible that - for some time - one or the other
9
+ policy exploits a "stochastic weakness" of its opponent. For example, policy
10
+ `A` learns that its opponent `B` has learnt to choose "paper" more often, which in
11
+ turn makes `A` choose "scissors" more often as a countermeasure.
12
+ """
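To make the "stochastic weakness" argument concrete: at the uniform (Nash) strategy every action has an expected reward of 0, but as soon as one player's action distribution drifts, the opponent's best response earns a positive expected reward. A small, self-contained check (the skewed probabilities are made up for illustration):

import numpy as np

# Row player's payoff matrix; rows = my action, columns = opponent action,
# order = [rock, paper, scissors].
payoff = np.array([
    [ 0, -1,  1],
    [ 1,  0, -1],
    [-1,  1,  0],
])

uniform = np.array([1 / 3, 1 / 3, 1 / 3])
skewed = np.array([0.2, 0.5, 0.3])  # opponent over-plays "paper" (illustrative)

print(payoff @ uniform)  # [0. 0. 0.]: nothing to exploit at the equilibrium
print(payoff @ skewed)   # [-0.2 -0.1  0.3]: "scissors" now wins on average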
13
+
14
+ import re
15
+
16
+ from pettingzoo.classic import rps_v2
17
+
18
+ from ray.rllib.connectors.env_to_module import FlattenObservations
19
+ from ray.rllib.core.rl_module.default_model_config import DefaultModelConfig
20
+ from ray.rllib.core.rl_module.multi_rl_module import MultiRLModuleSpec
21
+ from ray.rllib.core.rl_module.rl_module import RLModuleSpec
22
+ from ray.rllib.env.wrappers.pettingzoo_env import ParallelPettingZooEnv
23
+ from ray.rllib.utils.test_utils import (
24
+ add_rllib_example_script_args,
25
+ run_rllib_example_script_experiment,
26
+ )
27
+ from ray.tune.registry import get_trainable_cls, register_env
28
+
29
+
30
+ parser = add_rllib_example_script_args(
31
+ default_iters=50,
32
+ default_timesteps=200000,
33
+ default_reward=6.0,
34
+ )
35
+ parser.set_defaults(
36
+ enable_new_api_stack=True,
37
+ num_agents=2,
38
+ )
39
+ parser.add_argument(
40
+ "--use-lstm",
41
+ action="store_true",
42
+ help="Whether to use an LSTM wrapped module instead of a simple MLP one. With LSTM "
43
+ "the reward diff can reach 7.0, without only 5.0.",
44
+ )
45
+
46
+
47
+ register_env(
48
+ "RockPaperScissors",
49
+ lambda _: ParallelPettingZooEnv(rps_v2.parallel_env()),
50
+ )
51
+
52
+
53
+ if __name__ == "__main__":
54
+ args = parser.parse_args()
55
+
56
+ assert args.num_agents == 2, "Must set --num-agents=2 when running this script!"
57
+
58
+ base_config = (
59
+ get_trainable_cls(args.algo)
60
+ .get_default_config()
61
+ .environment("RockPaperScissors")
62
+ .env_runners(
63
+ env_to_module_connector=lambda env: FlattenObservations(multi_agent=True),
64
+ )
65
+ .multi_agent(
66
+ policies={"p0", "p1"},
67
+ # `player_0` uses `p0`, `player_1` uses `p1`.
68
+ policy_mapping_fn=lambda aid, episode: re.sub("^player_", "p", aid),
69
+ )
70
+ .training(
71
+ vf_loss_coeff=0.005,
72
+ )
73
+ .rl_module(
74
+ model_config=DefaultModelConfig(
75
+ use_lstm=args.use_lstm,
76
+ # Use a simpler FCNet when we also have an LSTM.
77
+ fcnet_hiddens=[32] if args.use_lstm else [256, 256],
78
+ lstm_cell_size=256,
79
+ max_seq_len=15,
80
+ vf_share_layers=True,
81
+ ),
82
+ rl_module_spec=MultiRLModuleSpec(
83
+ rl_module_specs={
84
+ "p0": RLModuleSpec(),
85
+ "p1": RLModuleSpec(),
86
+ }
87
+ ),
88
+ )
89
+ )
90
+
91
+ run_rllib_example_script_experiment(base_config, args)
deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/self_play_with_open_spiel.py ADDED
@@ -0,0 +1,236 @@
1
+ """Example showing how one can implement a simple self-play training workflow.
2
+
3
+ Uses the open spiel adapter of RLlib with the "connect_four" game and
4
+ a multi-agent setup with a "main" policy and n "main_v[x]" policies
5
+ (x=version number), which are all at-some-point-frozen copies of
6
+ "main". At the very beginning, "main" plays against RandomPolicy.
7
+
8
+ Checks for the training progress after each training update via a custom
9
+ callback. We simply measure the win rate of "main" vs the opponent
10
+ ("main_v[x]" or RandomPolicy at the beginning) by looking through the
11
+ achieved rewards in the episodes in the train batch. If this win rate
12
+ reaches some configurable threshold, we add a new policy to
13
+ the policy map (a frozen copy of the current "main" one) and change the
14
+ policy_mapping_fn to make new matches of "main" vs any of the previous
15
+ versions of "main" (including the just added one).
16
+
17
+ After training for n iterations, a configurable number of episodes can
18
+ be played by the user against the "main" agent on the command line.
19
+ """
20
+
21
+ import functools
22
+
23
+ import numpy as np
24
+
25
+ from ray.air.constants import TRAINING_ITERATION
26
+ from ray.rllib.core.rl_module.default_model_config import DefaultModelConfig
27
+ from ray.rllib.core.rl_module.multi_rl_module import MultiRLModuleSpec
28
+ from ray.rllib.core.rl_module.rl_module import RLModuleSpec
29
+ from ray.rllib.env.utils import try_import_pyspiel, try_import_open_spiel
30
+ from ray.rllib.env.wrappers.open_spiel import OpenSpielEnv
31
+ from ray.rllib.examples.rl_modules.classes.random_rlm import RandomRLModule
32
+ from ray.rllib.examples.multi_agent.utils import (
33
+ ask_user_for_action,
34
+ SelfPlayCallback,
35
+ SelfPlayCallbackOldAPIStack,
36
+ )
37
+ from ray.rllib.examples._old_api_stack.policy.random_policy import RandomPolicy
38
+ from ray.rllib.policy.policy import PolicySpec
39
+ from ray.rllib.utils.metrics import NUM_ENV_STEPS_SAMPLED_LIFETIME
40
+ from ray.rllib.utils.test_utils import (
41
+ add_rllib_example_script_args,
42
+ run_rllib_example_script_experiment,
43
+ )
44
+ from ray.tune.registry import get_trainable_cls, register_env
45
+
46
+ open_spiel = try_import_open_spiel(error=True)
47
+ pyspiel = try_import_pyspiel(error=True)
48
+
49
+ # Import after try_import_open_spiel, so we can error out with hints.
50
+ from open_spiel.python.rl_environment import Environment # noqa: E402
51
+
52
+
53
+ parser = add_rllib_example_script_args(default_timesteps=2000000)
54
+ parser.set_defaults(env="connect_four")
55
+ parser.add_argument(
56
+ "--win-rate-threshold",
57
+ type=float,
58
+ default=0.95,
59
+ help="Win-rate at which we setup another opponent by freezing the "
60
+ "current main policy and playing against a uniform distribution "
61
+ "of previously frozen 'main's from here on.",
62
+ )
63
+ parser.add_argument(
64
+ "--min-league-size",
65
+ type=float,
66
+ default=3,
67
+ help="Minimum number of policies/RLModules to consider the test passed. "
68
+ "The initial league size is 2: `main` and `random`. "
69
+ "`--min-league-size=3` thus means that one new policy/RLModule has been "
70
+ "added so far (b/c the `main` one has reached the `--win-rate-threshold "
71
+ "against the `random` Policy/RLModule).",
72
+ )
73
+ parser.add_argument(
74
+ "--num-episodes-human-play",
75
+ type=int,
76
+ default=10,
77
+ help="How many episodes to play against the user on the command "
78
+ "line after training has finished.",
79
+ )
80
+ parser.add_argument(
81
+ "--from-checkpoint",
82
+ type=str,
83
+ default=None,
84
+ help="Full path to a checkpoint file for restoring a previously saved "
85
+ "Algorithm state.",
86
+ )
87
+
88
+
89
+ if __name__ == "__main__":
90
+ args = parser.parse_args()
91
+
92
+ register_env("open_spiel_env", lambda _: OpenSpielEnv(pyspiel.load_game(args.env)))
93
+
94
+ def agent_to_module_mapping_fn(agent_id, episode, **kwargs):
95
+ # agent_id = [0|1] -> module depends on episode ID
96
+ # This way, we make sure that both modules sometimes play agent0
97
+ # (start player) and sometimes agent1 (player to move 2nd).
98
+ return "main" if hash(episode.id_) % 2 == agent_id else "random"
99
+
100
+ def policy_mapping_fn(agent_id, episode, worker, **kwargs):
101
+ return "main" if episode.episode_id % 2 == agent_id else "random"
102
+
103
+ config = (
104
+ get_trainable_cls(args.algo)
105
+ .get_default_config()
106
+ .api_stack(
107
+ enable_rl_module_and_learner=args.enable_new_api_stack,
108
+ enable_env_runner_and_connector_v2=args.enable_new_api_stack,
109
+ )
110
+ .environment("open_spiel_env")
111
+ .framework(args.framework)
112
+ # Set up the main piece in this experiment: The league-bases self-play
113
+ # callback, which controls adding new policies/Modules to the league and
114
+ # properly matching the different policies in the league with each other.
115
+ .callbacks(
116
+ functools.partial(
117
+ (
118
+ SelfPlayCallback
119
+ if args.enable_new_api_stack
120
+ else SelfPlayCallbackOldAPIStack
121
+ ),
122
+ win_rate_threshold=args.win_rate_threshold,
123
+ )
124
+ )
125
+ .env_runners(
126
+ num_env_runners=(args.num_env_runners or 2),
127
+ num_envs_per_env_runner=1 if args.enable_new_api_stack else 5,
128
+ )
129
+ .resources(
130
+ num_cpus_for_main_process=1,
131
+ )
132
+ .multi_agent(
133
+ # Initial policy map: Random and default algo one. This will be expanded
134
+ # to more policy snapshots taken from "main" against which "main"
135
+ # will then play (instead of "random"). This is done in the
136
+ # custom callback defined above (`SelfPlayCallback`).
137
+ policies=(
138
+ {
139
+ # Our main policy, we'd like to optimize.
140
+ "main": PolicySpec(),
141
+ # An initial random opponent to play against.
142
+ "random": PolicySpec(policy_class=RandomPolicy),
143
+ }
144
+ if not args.enable_new_api_stack
145
+ else {"main", "random"}
146
+ ),
147
+ # Assign agent 0 and 1 randomly to the "main" policy or
148
+ # to the opponent ("random" at first). Make sure (via episode_id)
149
+ # that "main" always plays against "random" (and not against
150
+ # another "main").
151
+ policy_mapping_fn=(
152
+ agent_to_module_mapping_fn
153
+ if args.enable_new_api_stack
154
+ else policy_mapping_fn
155
+ ),
156
+ # Always just train the "main" policy.
157
+ policies_to_train=["main"],
158
+ )
159
+ .rl_module(
160
+ model_config=DefaultModelConfig(fcnet_hiddens=[512, 512]),
161
+ rl_module_spec=MultiRLModuleSpec(
162
+ rl_module_specs={
163
+ "main": RLModuleSpec(),
164
+ "random": RLModuleSpec(module_class=RandomRLModule),
165
+ }
166
+ ),
167
+ )
168
+ )
169
+
170
+ # Only for PPO, change the `num_epochs` setting.
171
+ if args.algo == "PPO":
172
+ config.training(num_epochs=20)
173
+
174
+ stop = {
175
+ NUM_ENV_STEPS_SAMPLED_LIFETIME: args.stop_timesteps,
176
+ TRAINING_ITERATION: args.stop_iters,
177
+ "league_size": args.min_league_size,
178
+ }
179
+
180
+ # Train the "main" policy to play really well using self-play.
181
+ results = None
182
+ if not args.from_checkpoint:
183
+ results = run_rllib_example_script_experiment(config, args, stop=stop)
184
+
185
+ # Restore trained Algorithm (set to non-explore behavior) and play against
186
+ # human on command line.
187
+ if args.num_episodes_human_play > 0:
188
+ num_episodes = 0
189
+ config.explore = False
190
+ algo = config.build()
191
+ if args.from_checkpoint:
192
+ algo.restore(args.from_checkpoint)
193
+ else:
194
+ checkpoint = results.get_best_result().checkpoint
195
+ if not checkpoint:
196
+ raise ValueError("No last checkpoint found in results!")
197
+ algo.restore(checkpoint)
198
+
199
+ # Play from the command line against the trained agent
200
+ # in an actual (non-RLlib-wrapped) open-spiel env.
201
+ human_player = 1
202
+ env = Environment(args.env)
203
+
204
+ while num_episodes < args.num_episodes_human_play:
205
+ print("You play as {}".format("o" if human_player else "x"))
206
+ time_step = env.reset()
207
+ while not time_step.last():
208
+ player_id = time_step.observations["current_player"]
209
+ if player_id == human_player:
210
+ action = ask_user_for_action(time_step)
211
+ else:
212
+ obs = np.array(time_step.observations["info_state"][player_id])
213
+ action = algo.compute_single_action(obs, policy_id="main")
214
+ # In case computer chooses an invalid action, pick a
215
+ # random one.
216
+ legal = time_step.observations["legal_actions"][player_id]
217
+ if action not in legal:
218
+ action = np.random.choice(legal)
219
+ time_step = env.step([action])
220
+ print(f"\n{env.get_state}")
221
+
222
+ print(f"\n{env.get_state}")
223
+
224
+ print("End of game!")
225
+ if time_step.rewards[human_player] > 0:
226
+ print("You win")
227
+ elif time_step.rewards[human_player] < 0:
228
+ print("You lose")
229
+ else:
230
+ print("Draw")
231
+ # Switch order of players.
232
+ human_player = 1 - human_player
233
+
234
+ num_episodes += 1
235
+
236
+ algo.stop()
deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/utils/__init__.py ADDED
@@ -0,0 +1,43 @@
1
+ import sys
2
+
3
+ from ray.rllib.examples.multi_agent.utils.self_play_callback import SelfPlayCallback
4
+ from ray.rllib.examples.multi_agent.utils.self_play_league_based_callback import (
5
+ SelfPlayLeagueBasedCallback,
6
+ )
7
+ from ray.rllib.examples.multi_agent.utils.self_play_callback_old_api_stack import (
8
+ SelfPlayCallbackOldAPIStack,
9
+ )
10
+ from ray.rllib.examples.multi_agent.utils.self_play_league_based_callback_old_api_stack import ( # noqa
11
+ SelfPlayLeagueBasedCallbackOldAPIStack,
12
+ )
13
+
14
+
15
+ def ask_user_for_action(time_step):
16
+ """Asks the user for a valid action on the command line and returns it.
17
+
18
+ Re-queries the user until she picks a valid one.
19
+
20
+ Args:
21
+ time_step: The open spiel Environment time-step object.
22
+ """
23
+ pid = time_step.observations["current_player"]
24
+ legal_moves = time_step.observations["legal_actions"][pid]
25
+ choice = -1
26
+ while choice not in legal_moves:
27
+ print("Choose an action from {}:".format(legal_moves))
28
+ sys.stdout.flush()
29
+ choice_str = input()
30
+ try:
31
+ choice = int(choice_str)
32
+ except ValueError:
33
+ continue
34
+ return choice
35
+
36
+
37
+ __all__ = [
38
+ "ask_user_for_action",
39
+ "SelfPlayCallback",
40
+ "SelfPlayLeagueBasedCallback",
41
+ "SelfPlayCallbackOldAPIStack",
42
+ "SelfPlayLeagueBasedCallbackOldAPIStack",
43
+ ]
deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/utils/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (1.36 kB). View file
 
deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/utils/__pycache__/self_play_callback.cpython-310.pyc ADDED
Binary file (2.32 kB). View file
 
deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/utils/__pycache__/self_play_callback_old_api_stack.cpython-310.pyc ADDED
Binary file (2.28 kB). View file
 
deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/utils/__pycache__/self_play_league_based_callback.cpython-310.pyc ADDED
Binary file (4.74 kB). View file
 
deepseek/lib/python3.10/site-packages/ray/rllib/examples/multi_agent/utils/__pycache__/self_play_league_based_callback_old_api_stack.cpython-310.pyc ADDED
Binary file (4.34 kB). View file
 
deepseek/lib/python3.10/site-packages/ray/rllib/examples/ray_serve/classes/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (190 Bytes). View file
 
deepseek/lib/python3.10/site-packages/ray/rllib/examples/rl_modules/__init__.py ADDED
File without changes
deepseek/lib/python3.10/site-packages/ray/rllib/examples/rl_modules/__pycache__/action_masking_rl_module.cpython-310.pyc ADDED
Binary file (4.67 kB). View file
 
deepseek/lib/python3.10/site-packages/ray/rllib/examples/rl_modules/__pycache__/autoregressive_actions_rl_module.cpython-310.pyc ADDED
Binary file (3.61 kB). View file
 
deepseek/lib/python3.10/site-packages/ray/rllib/examples/rl_modules/__pycache__/custom_cnn_rl_module.cpython-310.pyc ADDED
Binary file (4.42 kB). View file
 
deepseek/lib/python3.10/site-packages/ray/rllib/examples/rl_modules/classes/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (373 Bytes). View file