ZTWHHH committed
Commit 49b65a0 · verified · 1 parent: fcbff79

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.

Files changed (50):
  1. minigpt2/lib/python3.10/site-packages/ray/rllib/models/__pycache__/__init__.cpython-310.pyc +0 -0
  2. minigpt2/lib/python3.10/site-packages/ray/rllib/models/__pycache__/action_dist.cpython-310.pyc +0 -0
  3. minigpt2/lib/python3.10/site-packages/ray/rllib/models/__pycache__/catalog.cpython-310.pyc +0 -0
  4. minigpt2/lib/python3.10/site-packages/ray/rllib/models/__pycache__/distributions.cpython-310.pyc +0 -0
  5. minigpt2/lib/python3.10/site-packages/ray/rllib/models/__pycache__/modelv2.cpython-310.pyc +0 -0
  6. minigpt2/lib/python3.10/site-packages/ray/rllib/models/__pycache__/preprocessors.cpython-310.pyc +0 -0
  7. minigpt2/lib/python3.10/site-packages/ray/rllib/models/__pycache__/repeated_values.cpython-310.pyc +0 -0
  8. minigpt2/lib/python3.10/site-packages/ray/rllib/models/__pycache__/utils.cpython-310.pyc +0 -0
  9. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/__init__.py +11 -0
  10. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/__pycache__/__init__.cpython-310.pyc +0 -0
  11. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/__pycache__/attention_net.cpython-310.pyc +0 -0
  12. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/__pycache__/complex_input_net.cpython-310.pyc +0 -0
  13. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/__pycache__/fcnet.cpython-310.pyc +0 -0
  14. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/__pycache__/misc.cpython-310.pyc +0 -0
  15. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/__pycache__/noop.cpython-310.pyc +0 -0
  16. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/__pycache__/recurrent_net.cpython-310.pyc +0 -0
  17. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/__pycache__/tf_action_dist.cpython-310.pyc +0 -0
  18. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/__pycache__/tf_distributions.cpython-310.pyc +0 -0
  19. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/__pycache__/tf_modelv2.cpython-310.pyc +0 -0
  20. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/__pycache__/visionnet.cpython-310.pyc +0 -0
  21. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/attention_net.py +573 -0
  22. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/complex_input_net.py +214 -0
  23. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/fcnet.py +148 -0
  24. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/layers/__init__.py +17 -0
  25. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/layers/__pycache__/__init__.cpython-310.pyc +0 -0
  26. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/layers/__pycache__/gru_gate.cpython-310.pyc +0 -0
  27. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/layers/__pycache__/multi_head_attention.cpython-310.pyc +0 -0
  28. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/layers/__pycache__/noisy_layer.cpython-310.pyc +0 -0
  29. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/layers/__pycache__/relative_multi_head_attention.cpython-310.pyc +0 -0
  30. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/layers/__pycache__/skip_connection.cpython-310.pyc +0 -0
  31. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/layers/gru_gate.py +58 -0
  32. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/layers/multi_head_attention.py +61 -0
  33. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/layers/noisy_layer.py +118 -0
  34. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/layers/relative_multi_head_attention.py +147 -0
  35. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/layers/skip_connection.py +46 -0
  36. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/misc.py +90 -0
  37. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/noop.py +17 -0
  38. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/recurrent_net.py +292 -0
  39. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/tf_action_dist.py +735 -0
  40. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/tf_distributions.py +552 -0
  41. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/tf_modelv2.py +142 -0
  42. minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/visionnet.py +264 -0
  43. minigpt2/lib/python3.10/site-packages/ray/rllib/models/torch/__pycache__/__init__.cpython-310.pyc +0 -0
  44. minigpt2/lib/python3.10/site-packages/ray/rllib/models/torch/__pycache__/torch_modelv2.cpython-310.pyc +0 -0
  45. minigpt2/lib/python3.10/site-packages/ray/rllib/models/torch/modules/__pycache__/__init__.cpython-310.pyc +0 -0
  46. minigpt2/lib/python3.10/site-packages/ray/rllib/models/torch/modules/__pycache__/convtranspose2d_stack.cpython-310.pyc +0 -0
  47. minigpt2/lib/python3.10/site-packages/ray/rllib/models/torch/modules/__pycache__/gru_gate.cpython-310.pyc +0 -0
  48. minigpt2/lib/python3.10/site-packages/ray/rllib/models/torch/modules/__pycache__/multi_head_attention.cpython-310.pyc +0 -0
  49. minigpt2/lib/python3.10/site-packages/ray/rllib/models/torch/modules/__pycache__/noisy_layer.cpython-310.pyc +0 -0
  50. minigpt2/lib/python3.10/site-packages/ray/rllib/models/torch/modules/__pycache__/relative_multi_head_attention.cpython-310.pyc +0 -0
minigpt2/lib/python3.10/site-packages/ray/rllib/models/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (496 Bytes)

minigpt2/lib/python3.10/site-packages/ray/rllib/models/__pycache__/action_dist.cpython-310.pyc ADDED
Binary file (4.33 kB)

minigpt2/lib/python3.10/site-packages/ray/rllib/models/__pycache__/catalog.cpython-310.pyc ADDED
Binary file (21.1 kB)

minigpt2/lib/python3.10/site-packages/ray/rllib/models/__pycache__/distributions.cpython-310.pyc ADDED
Binary file (8.96 kB)

minigpt2/lib/python3.10/site-packages/ray/rllib/models/__pycache__/modelv2.cpython-310.pyc ADDED
Binary file (14.9 kB)

minigpt2/lib/python3.10/site-packages/ray/rllib/models/__pycache__/preprocessors.cpython-310.pyc ADDED
Binary file (14.4 kB)

minigpt2/lib/python3.10/site-packages/ray/rllib/models/__pycache__/repeated_values.cpython-310.pyc ADDED
Binary file (7.16 kB)

minigpt2/lib/python3.10/site-packages/ray/rllib/models/__pycache__/utils.cpython-310.pyc ADDED
Binary file (6.51 kB)
minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/__init__.py ADDED
@@ -0,0 +1,11 @@
+ from ray.rllib.models.tf.tf_modelv2 import TFModelV2
+ from ray.rllib.models.tf.fcnet import FullyConnectedNetwork
+ from ray.rllib.models.tf.recurrent_net import RecurrentNetwork
+ from ray.rllib.models.tf.visionnet import VisionNetwork
+
+ __all__ = [
+     "FullyConnectedNetwork",
+     "RecurrentNetwork",
+     "TFModelV2",
+     "VisionNetwork",
+ ]
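The diff above turns `tf/__init__.py` into a facade that re-exports four model classes from their submodules under one namespace. A toy sketch of the same pattern (standalone Python — the stand-in classes are hypothetical, not Ray's, since importing the real module requires Ray and TensorFlow):

```python
import types

# Hypothetical stand-ins for the re-exported classes.
class TFModelV2: ...
class FullyConnectedNetwork: ...

# Build a package-like module object that, like tf/__init__.py above,
# gathers its submodules' public classes under one namespace and
# declares them via __all__.
pkg = types.ModuleType("models_tf")
pkg.TFModelV2 = TFModelV2
pkg.FullyConnectedNetwork = FullyConnectedNetwork
pkg.__all__ = ["FullyConnectedNetwork", "TFModelV2"]
```

Callers can then import from the package root without knowing which submodule defines each class.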
minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (490 Bytes)

minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/__pycache__/attention_net.cpython-310.pyc ADDED
Binary file (15.8 kB)

minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/__pycache__/complex_input_net.cpython-310.pyc ADDED
Binary file (4.71 kB)

minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/__pycache__/fcnet.cpython-310.pyc ADDED
Binary file (3.49 kB)

minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/__pycache__/misc.cpython-310.pyc ADDED
Binary file (2.55 kB)

minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/__pycache__/noop.cpython-310.pyc ADDED
Binary file (960 Bytes)

minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/__pycache__/recurrent_net.cpython-310.pyc ADDED
Binary file (8.72 kB)

minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/__pycache__/tf_action_dist.cpython-310.pyc ADDED
Binary file (28.2 kB)

minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/__pycache__/tf_distributions.cpython-310.pyc ADDED
Binary file (20.9 kB)

minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/__pycache__/tf_modelv2.cpython-310.pyc ADDED
Binary file (5.05 kB)

minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/__pycache__/visionnet.cpython-310.pyc ADDED
Binary file (5.03 kB)
minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/attention_net.py ADDED
@@ -0,0 +1,573 @@
+ """
+ [1] - Attention Is All You Need - Vaswani, Jones, Shazeer, Parmar,
+ Uszkoreit, Gomez, Kaiser - Google Brain/Research, U Toronto - 2017.
+ https://arxiv.org/pdf/1706.03762.pdf
+ [2] - Stabilizing Transformers for Reinforcement Learning - E. Parisotto
+ et al. - DeepMind - 2019. https://arxiv.org/pdf/1910.06764.pdf
+ [3] - Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context.
+ Z. Dai, Z. Yang, et al. - Carnegie Mellon U - 2019.
+ https://www.aclweb.org/anthology/P19-1285.pdf
+ """
+ import gymnasium as gym
+ from gymnasium.spaces import Box, Discrete, MultiDiscrete
+ import numpy as np
+ import tree  # pip install dm_tree
+ from typing import Any, Dict, Optional, Union
+
+ from ray.rllib.models.modelv2 import ModelV2
+ from ray.rllib.models.tf.layers import (
+     GRUGate,
+     RelativeMultiHeadAttention,
+     SkipConnection,
+ )
+ from ray.rllib.models.tf.tf_modelv2 import TFModelV2
+ from ray.rllib.models.tf.recurrent_net import RecurrentNetwork
+ from ray.rllib.policy.sample_batch import SampleBatch
+ from ray.rllib.policy.view_requirement import ViewRequirement
+ from ray.rllib.utils.annotations import OldAPIStack, override
+ from ray.rllib.utils.framework import try_import_tf
+ from ray.rllib.utils.spaces.space_utils import get_base_struct_from_space
+ from ray.rllib.utils.tf_utils import flatten_inputs_to_1d_tensor, one_hot
+ from ray.rllib.utils.typing import ModelConfigDict, TensorType, List
+ from ray.rllib.utils.deprecation import deprecation_warning
+ from ray.util import log_once
+
+ tf1, tf, tfv = try_import_tf()
+
+
+ @OldAPIStack
+ class PositionwiseFeedforward(tf.keras.layers.Layer if tf else object):
+     """A 2x linear layer with ReLU activation in between described in [1].
+
+     Each timestep coming from the attention head will be passed through this
+     layer separately.
+     """
+
+     def __init__(
+         self,
+         out_dim: int,
+         hidden_dim: int,
+         output_activation: Optional[Any] = None,
+         **kwargs,
+     ):
+         super().__init__(**kwargs)
+
+         self._hidden_layer = tf.keras.layers.Dense(
+             hidden_dim,
+             activation=tf.nn.relu,
+         )
+
+         self._output_layer = tf.keras.layers.Dense(
+             out_dim, activation=output_activation
+         )
+         if log_once("positionwise_feedforward_tf"):
+             deprecation_warning(
+                 old="rllib.models.tf.attention_net.PositionwiseFeedforward",
+             )
+
+     def call(self, inputs: TensorType, **kwargs) -> TensorType:
+         del kwargs
+         output = self._hidden_layer(inputs)
+         return self._output_layer(output)
+
+
+ @OldAPIStack
+ class TrXLNet(RecurrentNetwork):
+     """A TrXL net Model described in [1]."""
+
+     def __init__(
+         self,
+         observation_space: gym.spaces.Space,
+         action_space: gym.spaces.Space,
+         num_outputs: int,
+         model_config: ModelConfigDict,
+         name: str,
+         num_transformer_units: int,
+         attention_dim: int,
+         num_heads: int,
+         head_dim: int,
+         position_wise_mlp_dim: int,
+     ):
+         """Initializes a TrXLNet object.
+
+         Args:
+             num_transformer_units: The number of Transformer repeats to
+                 use (denoted L in [2]).
+             attention_dim: The input and output dimensions of one
+                 Transformer unit.
+             num_heads: The number of attention heads to use in parallel.
+                 Denoted as `H` in [3].
+             head_dim: The dimension of a single(!) attention head within
+                 a multi-head attention unit. Denoted as `d` in [3].
+             position_wise_mlp_dim: The dimension of the hidden layer
+                 within the position-wise MLP (after the multi-head attention
+                 block within one Transformer unit). This is the size of the
+                 first of the two layers within the PositionwiseFeedforward. The
+                 second layer always has size=`attention_dim`.
+         """
+         if log_once("trxl_net_tf"):
+             deprecation_warning(
+                 old="rllib.models.tf.attention_net.TrXLNet",
+             )
+         super().__init__(
+             observation_space, action_space, num_outputs, model_config, name
+         )
+
+         self.num_transformer_units = num_transformer_units
+         self.attention_dim = attention_dim
+         self.num_heads = num_heads
+         self.head_dim = head_dim
+         self.max_seq_len = model_config["max_seq_len"]
+         self.obs_dim = observation_space.shape[0]
+
+         inputs = tf.keras.layers.Input(
+             shape=(self.max_seq_len, self.obs_dim), name="inputs"
+         )
+         E_out = tf.keras.layers.Dense(attention_dim)(inputs)
+
+         for _ in range(self.num_transformer_units):
+             MHA_out = SkipConnection(
+                 RelativeMultiHeadAttention(
+                     out_dim=attention_dim,
+                     num_heads=num_heads,
+                     head_dim=head_dim,
+                     input_layernorm=False,
+                     output_activation=None,
+                 ),
+                 fan_in_layer=None,
+             )(E_out)
+             E_out = SkipConnection(
+                 PositionwiseFeedforward(attention_dim, position_wise_mlp_dim)
+             )(MHA_out)
+             E_out = tf.keras.layers.LayerNormalization(axis=-1)(E_out)
+
+         # Postprocess TrXL output with another hidden layer and compute values.
+         logits = tf.keras.layers.Dense(
+             self.num_outputs, activation=tf.keras.activations.linear, name="logits"
+         )(E_out)
+
+         self.base_model = tf.keras.models.Model([inputs], [logits])
+
+     @override(RecurrentNetwork)
+     def forward_rnn(
+         self, inputs: TensorType, state: List[TensorType], seq_lens: TensorType
+     ) -> (TensorType, List[TensorType]):
+         # To make Attention work with current RLlib's ModelV2 API:
+         # We assume `state` is the history of L recent observations (all
+         # concatenated into one tensor) and append the current inputs to the
+         # end and only keep the most recent (up to `max_seq_len`). This allows
+         # us to deal with timestep-wise inference and full sequence training
+         # within the same logic.
+         observations = state[0]
+         observations = tf.concat((observations, inputs), axis=1)[:, -self.max_seq_len :]
+         logits = self.base_model([observations])
+         T = tf.shape(inputs)[1]  # Length of input segment (time).
+         logits = logits[:, -T:]
+
+         return logits, [observations]
+
+     @override(RecurrentNetwork)
+     def get_initial_state(self) -> List[np.ndarray]:
+         # State is the T last observations concat'd together into one Tensor.
+         # Plus all Transformer blocks' E(l) outputs concat'd together (up to
+         # tau timesteps).
+         return [np.zeros((self.max_seq_len, self.obs_dim), np.float32)]
+
+
+ class GTrXLNet(RecurrentNetwork):
+     """A GTrXL net Model described in [2].
+
+     This is still in an experimental phase.
+     Can be used as a drop-in replacement for LSTMs in PPO and IMPALA.
+
+     To use this network as a replacement for an RNN, configure your Algorithm
+     as follows:
+
+     Examples:
+         >> config["model"]["custom_model"] = GTrXLNet
+         >> config["model"]["max_seq_len"] = 10
+         >> config["model"]["custom_model_config"] = {
+         >>     num_transformer_units=1,
+         >>     attention_dim=32,
+         >>     num_heads=2,
+         >>     memory_inference=100,
+         >>     memory_training=50,
+         >>     etc..
+         >> }
+     """
+
+     def __init__(
+         self,
+         observation_space: gym.spaces.Space,
+         action_space: gym.spaces.Space,
+         num_outputs: Optional[int],
+         model_config: ModelConfigDict,
+         name: str,
+         *,
+         num_transformer_units: int = 1,
+         attention_dim: int = 64,
+         num_heads: int = 2,
+         memory_inference: int = 50,
+         memory_training: int = 50,
+         head_dim: int = 32,
+         position_wise_mlp_dim: int = 32,
+         init_gru_gate_bias: float = 2.0,
+     ):
+         """Initializes a GTrXLNet instance.
+
+         Args:
+             num_transformer_units: The number of Transformer repeats to
+                 use (denoted L in [2]).
+             attention_dim: The input and output dimensions of one
+                 Transformer unit.
+             num_heads: The number of attention heads to use in parallel.
+                 Denoted as `H` in [3].
+             memory_inference: The number of timesteps to concat (time
+                 axis) and feed into the next transformer unit as inference
+                 input. The first transformer unit will receive this number of
+                 past observations (plus the current one), instead.
+             memory_training: The number of timesteps to concat (time
+                 axis) and feed into the next transformer unit as training
+                 input (plus the actual input sequence of len=max_seq_len).
+                 The first transformer unit will receive this number of
+                 past observations (plus the input sequence), instead.
+             head_dim: The dimension of a single(!) attention head within
+                 a multi-head attention unit. Denoted as `d` in [3].
+             position_wise_mlp_dim: The dimension of the hidden layer
+                 within the position-wise MLP (after the multi-head attention
+                 block within one Transformer unit). This is the size of the
+                 first of the two layers within the PositionwiseFeedforward. The
+                 second layer always has size=`attention_dim`.
+             init_gru_gate_bias: Initial bias values for the GRU gates
+                 (two GRUs per Transformer unit, one after the MHA, one after
+                 the position-wise MLP).
+         """
+         super().__init__(
+             observation_space, action_space, num_outputs, model_config, name
+         )
+
+         self.num_transformer_units = num_transformer_units
+         self.attention_dim = attention_dim
+         self.num_heads = num_heads
+         self.memory_inference = memory_inference
+         self.memory_training = memory_training
+         self.head_dim = head_dim
+         self.max_seq_len = model_config["max_seq_len"]
+         self.obs_dim = observation_space.shape[0]
+
+         # Raw observation input (plus (None) time axis).
+         input_layer = tf.keras.layers.Input(shape=(None, self.obs_dim), name="inputs")
+         memory_ins = [
+             tf.keras.layers.Input(
+                 shape=(None, self.attention_dim),
+                 dtype=tf.float32,
+                 name="memory_in_{}".format(i),
+             )
+             for i in range(self.num_transformer_units)
+         ]
+
+         # Map observation dim to input/output transformer (attention) dim.
+         E_out = tf.keras.layers.Dense(self.attention_dim)(input_layer)
+         # Output, collected and concat'd to build the internal, tau-len
+         # Memory units used for additional contextual information.
+         memory_outs = [E_out]
+
+         # 2) Create L Transformer blocks according to [2].
+         for i in range(self.num_transformer_units):
+             # RelativeMultiHeadAttention part.
+             MHA_out = SkipConnection(
+                 RelativeMultiHeadAttention(
+                     out_dim=self.attention_dim,
+                     num_heads=num_heads,
+                     head_dim=head_dim,
+                     input_layernorm=True,
+                     output_activation=tf.nn.relu,
+                 ),
+                 fan_in_layer=GRUGate(init_gru_gate_bias),
+                 name="mha_{}".format(i + 1),
+             )(E_out, memory=memory_ins[i])
+             # Position-wise MLP part.
+             E_out = SkipConnection(
+                 tf.keras.Sequential(
+                     (
+                         tf.keras.layers.LayerNormalization(axis=-1),
+                         PositionwiseFeedforward(
+                             out_dim=self.attention_dim,
+                             hidden_dim=position_wise_mlp_dim,
+                             output_activation=tf.nn.relu,
+                         ),
+                     )
+                 ),
+                 fan_in_layer=GRUGate(init_gru_gate_bias),
+                 name="pos_wise_mlp_{}".format(i + 1),
+             )(MHA_out)
+             # Output of position-wise MLP == E(l-1), which is concat'd
+             # to the current Mem block (M(l-1)) to yield E~(l-1), which is then
+             # used by the next transformer block.
+             memory_outs.append(E_out)
+
+         self._logits = None
+         self._value_out = None
+
+         # Postprocess TrXL output with another hidden layer and compute values.
+         if num_outputs is not None:
+             self._logits = tf.keras.layers.Dense(
+                 self.num_outputs, activation=None, name="logits"
+             )(E_out)
+             values_out = tf.keras.layers.Dense(1, activation=None, name="values")(E_out)
+             outs = [self._logits, values_out]
+         else:
+             outs = [E_out]
+             self.num_outputs = self.attention_dim
+
+         self.trxl_model = tf.keras.Model(
+             inputs=[input_layer] + memory_ins, outputs=outs + memory_outs[:-1]
+         )
+
+         self.trxl_model.summary()
+
+         # __sphinx_doc_begin__
+         # Setup trajectory views (`memory-inference` x past memory outs).
+         for i in range(self.num_transformer_units):
+             space = Box(-1.0, 1.0, shape=(self.attention_dim,))
+             self.view_requirements["state_in_{}".format(i)] = ViewRequirement(
+                 "state_out_{}".format(i),
+                 shift="-{}:-1".format(self.memory_inference),
+                 # Repeat the incoming state every max-seq-len times.
+                 batch_repeat_value=self.max_seq_len,
+                 space=space,
+             )
+             self.view_requirements["state_out_{}".format(i)] = ViewRequirement(
+                 space=space, used_for_training=False
+             )
+         # __sphinx_doc_end__
+
+     @override(ModelV2)
+     def forward(
+         self, input_dict, state: List[TensorType], seq_lens: TensorType
+     ) -> (TensorType, List[TensorType]):
+         assert seq_lens is not None
+
+         # Add the time dim to observations.
+         B = tf.shape(seq_lens)[0]
+         observations = input_dict[SampleBatch.OBS]
+
+         shape = tf.shape(observations)
+         T = shape[0] // B
+         observations = tf.reshape(observations, tf.concat([[-1, T], shape[1:]], axis=0))
+
+         all_out = self.trxl_model([observations] + state)
+
+         if self._logits is not None:
+             out = tf.reshape(all_out[0], [-1, self.num_outputs])
+             self._value_out = all_out[1]
+             memory_outs = all_out[2:]
+         else:
+             out = tf.reshape(all_out[0], [-1, self.attention_dim])
+             memory_outs = all_out[1:]
+
+         return out, [tf.reshape(m, [-1, self.attention_dim]) for m in memory_outs]
+
+     @override(RecurrentNetwork)
+     def get_initial_state(self) -> List[np.ndarray]:
+         return [
+             tf.zeros(self.view_requirements["state_in_{}".format(i)].space.shape)
+             for i in range(self.num_transformer_units)
+         ]
+
+     @override(ModelV2)
+     def value_function(self) -> TensorType:
+         return tf.reshape(self._value_out, [-1])
+
+
+ class AttentionWrapper(TFModelV2):
+     """GTrXL wrapper serving as interface for ModelV2s that set use_attention."""
+
+     def __init__(
+         self,
+         obs_space: gym.spaces.Space,
+         action_space: gym.spaces.Space,
+         num_outputs: int,
+         model_config: ModelConfigDict,
+         name: str,
+     ):
+         if log_once("attention_wrapper_tf_deprecation"):
+             deprecation_warning(
+                 old="ray.rllib.models.tf.attention_net.AttentionWrapper"
+             )
+         super().__init__(obs_space, action_space, None, model_config, name)
+
+         self.use_n_prev_actions = model_config["attention_use_n_prev_actions"]
+         self.use_n_prev_rewards = model_config["attention_use_n_prev_rewards"]
+
+         self.action_space_struct = get_base_struct_from_space(self.action_space)
+         self.action_dim = 0
+
+         for space in tree.flatten(self.action_space_struct):
+             if isinstance(space, Discrete):
+                 self.action_dim += space.n
+             elif isinstance(space, MultiDiscrete):
+                 self.action_dim += np.sum(space.nvec)
+             elif space.shape is not None:
+                 self.action_dim += int(np.prod(space.shape))
+             else:
+                 self.action_dim += int(len(space))
+
+         # Add prev-action/reward nodes to input to LSTM.
+         if self.use_n_prev_actions:
+             self.num_outputs += self.use_n_prev_actions * self.action_dim
+         if self.use_n_prev_rewards:
+             self.num_outputs += self.use_n_prev_rewards
+
+         cfg = model_config
+
+         self.attention_dim = cfg["attention_dim"]
+
+         if self.num_outputs is not None:
+             in_space = gym.spaces.Box(
+                 float("-inf"), float("inf"), shape=(self.num_outputs,), dtype=np.float32
+             )
+         else:
+             in_space = obs_space
+
+         # Construct GTrXL sub-module w/ num_outputs=None (so it does not
+         # create a logits/value output; we'll do this ourselves in this wrapper
+         # here).
+         self.gtrxl = GTrXLNet(
+             in_space,
+             action_space,
+             None,
+             model_config,
+             "gtrxl",
+             num_transformer_units=cfg["attention_num_transformer_units"],
+             attention_dim=self.attention_dim,
+             num_heads=cfg["attention_num_heads"],
+             head_dim=cfg["attention_head_dim"],
+             memory_inference=cfg["attention_memory_inference"],
+             memory_training=cfg["attention_memory_training"],
+             position_wise_mlp_dim=cfg["attention_position_wise_mlp_dim"],
+             init_gru_gate_bias=cfg["attention_init_gru_gate_bias"],
+         )
+
+         # `self.num_outputs` right now is the number of nodes coming from the
+         # attention net.
+         input_ = tf.keras.layers.Input(shape=(self.gtrxl.num_outputs,))
+
+         # Set final num_outputs to correct value (depending on action space).
+         self.num_outputs = num_outputs
+
+         # Postprocess GTrXL output with another hidden layer and compute
+         # values.
+         out = tf.keras.layers.Dense(self.num_outputs, activation=None)(input_)
+         self._logits_branch = tf.keras.models.Model([input_], [out])
+
+         out = tf.keras.layers.Dense(1, activation=None)(input_)
+         self._value_branch = tf.keras.models.Model([input_], [out])
+
+         self.view_requirements = self.gtrxl.view_requirements
+         self.view_requirements["obs"].space = self.obs_space
+
+         # Add prev-a/r to this model's view, if required.
+         if self.use_n_prev_actions:
+             self.view_requirements[SampleBatch.PREV_ACTIONS] = ViewRequirement(
+                 SampleBatch.ACTIONS,
+                 space=self.action_space,
+                 shift="-{}:-1".format(self.use_n_prev_actions),
+             )
+         if self.use_n_prev_rewards:
+             self.view_requirements[SampleBatch.PREV_REWARDS] = ViewRequirement(
+                 SampleBatch.REWARDS, shift="-{}:-1".format(self.use_n_prev_rewards)
+             )
+
+     @override(RecurrentNetwork)
+     def forward(
+         self,
+         input_dict: Dict[str, TensorType],
+         state: List[TensorType],
+         seq_lens: TensorType,
+     ) -> (TensorType, List[TensorType]):
+         assert seq_lens is not None
+         # Push obs through "unwrapped" net's `forward()` first.
+         wrapped_out, _ = self._wrapped_forward(input_dict, [], None)
+
+         # Concat. prev-action/reward if required.
+         prev_a_r = []
+
+         # Prev actions.
+         if self.use_n_prev_actions:
+             prev_n_actions = input_dict[SampleBatch.PREV_ACTIONS]
+             # If actions are not processed yet (in their original form as
+             # have been sent to environment):
+             # Flatten/one-hot into 1D array.
+             if self.model_config["_disable_action_flattening"]:
+                 # Merge prev n actions into flat tensor.
+                 flat = flatten_inputs_to_1d_tensor(
+                     prev_n_actions,
+                     spaces_struct=self.action_space_struct,
+                     time_axis=True,
+                 )
+                 # Fold time-axis into flattened data.
+                 flat = tf.reshape(flat, [tf.shape(flat)[0], -1])
+                 prev_a_r.append(flat)
+             # If actions are already flattened (but not one-hot'd yet!),
+             # one-hot discrete/multi-discrete actions here and concatenate the
+             # n most recent actions together.
+             else:
+                 if isinstance(self.action_space, Discrete):
+                     for i in range(self.use_n_prev_actions):
+                         prev_a_r.append(
+                             one_hot(prev_n_actions[:, i], self.action_space)
+                         )
+                 elif isinstance(self.action_space, MultiDiscrete):
+                     for i in range(
+                         0, self.use_n_prev_actions, self.action_space.shape[0]
+                     ):
+                         prev_a_r.append(
+                             one_hot(
+                                 tf.cast(
+                                     prev_n_actions[
+                                         :, i : i + self.action_space.shape[0]
+                                     ],
+                                     tf.float32,
+                                 ),
+                                 space=self.action_space,
+                             )
+                         )
+                 else:
+                     prev_a_r.append(
+                         tf.reshape(
+                             tf.cast(prev_n_actions, tf.float32),
+                             [-1, self.use_n_prev_actions * self.action_dim],
+                         )
+                     )
+         # Prev rewards.
+         if self.use_n_prev_rewards:
+             prev_a_r.append(
+                 tf.reshape(
+                     tf.cast(input_dict[SampleBatch.PREV_REWARDS], tf.float32),
+                     [-1, self.use_n_prev_rewards],
+                 )
+             )
+
+         # Concat prev. actions + rewards to the "main" input.
+         if prev_a_r:
+             wrapped_out = tf.concat([wrapped_out] + prev_a_r, axis=1)
+
+         # Then through our GTrXL.
+         input_dict["obs_flat"] = input_dict["obs"] = wrapped_out
+
+         self._features, memory_outs = self.gtrxl(input_dict, state, seq_lens)
+         model_out = self._logits_branch(self._features)
+         return model_out, memory_outs
+
+     @override(ModelV2)
+     def value_function(self) -> TensorType:
+         assert self._features is not None, "Must call forward() first!"
+         return tf.reshape(self._value_branch(self._features), [-1])
+
+     @override(ModelV2)
+     def get_initial_state(self) -> Union[List[np.ndarray], List[TensorType]]:
+         return [
+             np.zeros(self.gtrxl.view_requirements["state_in_{}".format(i)].space.shape)
+             for i in range(self.gtrxl.num_transformer_units)
+         ]
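The `GTrXLNet` docstring in the diff above shows the old-API-stack configuration pattern. Its `>>` example uses `key=value` shorthand inside the dict, which is pseudocode; a minimal runnable sketch of the same config as plain Python (the class itself requires Ray and TensorFlow, so a string placeholder stands in for `GTrXLNet` here):

```python
# Sketch only: mirrors the docstring's custom_model_config example.
# "GTrXLNet" is a placeholder for the class normally imported from
# ray.rllib.models.tf.attention_net.
config = {"model": {}}
config["model"]["custom_model"] = "GTrXLNet"
config["model"]["max_seq_len"] = 10
config["model"]["custom_model_config"] = {
    "num_transformer_units": 1,  # L in [2]: number of Transformer repeats
    "attention_dim": 32,         # in/out dim of one Transformer unit
    "num_heads": 2,              # H in [3]: parallel attention heads
    "memory_inference": 100,     # past timesteps fed as memory at inference
    "memory_training": 50,       # past timesteps fed as memory at training
}
```

Unlisted parameters (`head_dim`, `position_wise_mlp_dim`, `init_gru_gate_bias`) fall back to the constructor defaults shown in the diff.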
minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/complex_input_net.py ADDED
@@ -0,0 +1,214 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+ from gymnasium.spaces import Box, Discrete, MultiDiscrete
+ import numpy as np
+ import tree  # pip install dm_tree
+
+ from ray.rllib.models.catalog import ModelCatalog
+ from ray.rllib.models.modelv2 import ModelV2, restore_original_dimensions
+ from ray.rllib.models.tf.misc import normc_initializer
+ from ray.rllib.models.tf.tf_modelv2 import TFModelV2
+ from ray.rllib.models.utils import get_filter_config
+ from ray.rllib.policy.sample_batch import SampleBatch
+ from ray.rllib.utils.annotations import OldAPIStack, override
+ from ray.rllib.utils.framework import try_import_tf
+ from ray.rllib.utils.spaces.space_utils import flatten_space
+ from ray.rllib.utils.tf_utils import one_hot
+
+ tf1, tf, tfv = try_import_tf()
+
+
+ # __sphinx_doc_begin__
+ @OldAPIStack
+ class ComplexInputNetwork(TFModelV2):
+     """TFModelV2 concat'ing CNN outputs to flat input(s), followed by FC(s).
+
+     Note: This model should be used for complex (Dict or Tuple) observation
+     spaces that have one or more image components.
+
+     The data flow is as follows:
+
+     `obs` (e.g. Tuple[img0, img1, discrete0]) -> `CNN0 + CNN1 + ONE-HOT`
+     `CNN0 + CNN1 + ONE-HOT` -> concat all flat outputs -> `out`
+     `out` -> (optional) FC-stack -> `out2`
+     `out2` -> action (logits) and value heads.
+     """
+
+     def __init__(self, obs_space, action_space, num_outputs, model_config, name):
+
+         self.original_space = (
+             obs_space.original_space
+             if hasattr(obs_space, "original_space")
+             else obs_space
+         )
+
+         self.processed_obs_space = (
+             self.original_space
+             if model_config.get("_disable_preprocessor_api")
+             else obs_space
+         )
+         super().__init__(
+             self.original_space, action_space, num_outputs, model_config, name
+         )
+
+         self.flattened_input_space = flatten_space(self.original_space)
+
+         # Build the CNN(s) given obs_space's image components.
+         self.cnns = {}
+         self.one_hot = {}
+         self.flatten_dims = {}
+         self.flatten = {}
+         concat_size = 0
+         for i, component in enumerate(self.flattened_input_space):
+             # Image space.
+             if len(component.shape) == 3 and isinstance(component, Box):
+                 config = {
+                     "conv_filters": model_config["conv_filters"]
+                     if "conv_filters" in model_config
+                     else get_filter_config(component.shape),
+                     "conv_activation": model_config.get("conv_activation"),
+                     "post_fcnet_hiddens": [],
+                 }
+                 self.cnns[i] = ModelCatalog.get_model_v2(
+                     component,
+                     action_space,
+                     num_outputs=None,
+                     model_config=config,
+                     framework="tf",
+                     name="cnn_{}".format(i),
+                 )
+                 concat_size += int(self.cnns[i].num_outputs)
+             # Discrete|MultiDiscrete inputs -> One-hot encode.
+             elif isinstance(component, (Discrete, MultiDiscrete)):
+                 if isinstance(component, Discrete):
+                     size = component.n
+                 else:
+                     size = np.sum(component.nvec)
+                 config = {
+                     "fcnet_hiddens": model_config["fcnet_hiddens"],
+                     "fcnet_activation": model_config.get("fcnet_activation"),
+                     "post_fcnet_hiddens": [],
+                 }
+                 self.one_hot[i] = ModelCatalog.get_model_v2(
+                     Box(-1.0, 1.0, (size,), np.float32),
+                     action_space,
+                     num_outputs=None,
+                     model_config=config,
+                     framework="tf",
+                     name="one_hot_{}".format(i),
+                 )
+                 concat_size += int(self.one_hot[i].num_outputs)
+             # Everything else (1D Box).
+             else:
+                 size = int(np.prod(component.shape))
+                 config = {
+                     "fcnet_hiddens": model_config["fcnet_hiddens"],
+                     "fcnet_activation": model_config.get("fcnet_activation"),
+                     "post_fcnet_hiddens": [],
+                 }
+                 self.flatten[i] = ModelCatalog.get_model_v2(
+                     Box(-1.0, 1.0, (size,), np.float32),
+                     action_space,
+                     num_outputs=None,
+                     model_config=config,
+                     framework="tf",
+                     name="flatten_{}".format(i),
+                 )
+                 self.flatten_dims[i] = size
+                 concat_size += int(self.flatten[i].num_outputs)
+
+         # Optional post-concat FC-stack.
+         post_fc_stack_config = {
+             "fcnet_hiddens": model_config.get("post_fcnet_hiddens", []),
+             "fcnet_activation": model_config.get("post_fcnet_activation", "relu"),
+         }
+         self.post_fc_stack = ModelCatalog.get_model_v2(
+             Box(float("-inf"), float("inf"), shape=(concat_size,), dtype=np.float32),
+             self.action_space,
+             None,
+             post_fc_stack_config,
+             framework="tf",
+             name="post_fc_stack",
+         )
+
+         # Actions and value heads.
+         self.logits_and_value_model = None
+         self._value_out = None
+         if num_outputs:
+             # Action-distribution head.
+             concat_layer = tf.keras.layers.Input((self.post_fc_stack.num_outputs,))
+             logits_layer = tf.keras.layers.Dense(
+                 num_outputs,
+                 activation=None,
+                 kernel_initializer=normc_initializer(0.01),
+                 name="logits",
+             )(concat_layer)
+
+             # Create the value branch model.
+             value_layer = tf.keras.layers.Dense(
+                 1,
+                 activation=None,
+                 kernel_initializer=normc_initializer(0.01),
+                 name="value_out",
+             )(concat_layer)
+             self.logits_and_value_model = tf.keras.models.Model(
+                 concat_layer, [logits_layer, value_layer]
+             )
+         else:
+             self.num_outputs = self.post_fc_stack.num_outputs
+
+     @override(ModelV2)
+     def forward(self, input_dict, state, seq_lens):
+         if SampleBatch.OBS in input_dict and "obs_flat" in input_dict:
+             orig_obs = input_dict[SampleBatch.OBS]
+         else:
+             orig_obs = restore_original_dimensions(
+                 input_dict[SampleBatch.OBS], self.processed_obs_space, tensorlib="tf"
+             )
+         # Push image observations through our CNNs.
+         outs = []
+         for i, component in enumerate(tree.flatten(orig_obs)):
+             if i in self.cnns:
+                 cnn_out, _ = self.cnns[i](SampleBatch({SampleBatch.OBS: component}))
+                 outs.append(cnn_out)
+             elif i in self.one_hot:
+                 if "int" in component.dtype.name:
+                     one_hot_in = {
+                         SampleBatch.OBS: one_hot(
+                             component, self.flattened_input_space[i]
+                         )
+                     }
+                 else:
+                     one_hot_in = {SampleBatch.OBS: component}
+                 one_hot_out, _ = self.one_hot[i](SampleBatch(one_hot_in))
+                 outs.append(one_hot_out)
+             else:
+                 nn_out, _ = self.flatten[i](
+                     SampleBatch(
+                         {
+                             SampleBatch.OBS: tf.cast(
+                                 tf.reshape(component, [-1, self.flatten_dims[i]]),
+                                 tf.float32,
+                             )
+                         }
+                     )
+                 )
+                 outs.append(nn_out)
+         # Concat all outputs and the non-image inputs.
+         out = tf.concat(outs, axis=1)
+         # Push through (optional) FC-stack (this may be an empty stack).
+         out, _ = self.post_fc_stack(SampleBatch({SampleBatch.OBS: out}))
+
+         # No logits/value branches.
+         if not self.logits_and_value_model:
+             return out, []
+
+         # Logits- and value branches.
+         logits, values = self.logits_and_value_model(out)
+         self._value_out = tf.reshape(values, [-1])
+         return logits, []
+
+     @override(ModelV2)
+     def value_function(self):
+         return self._value_out
+
+
+ # __sphinx_doc_end__
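`ComplexInputNetwork.__init__` above sizes each non-image component before concatenation: `Discrete(n)` contributes `n` one-hot slots, `MultiDiscrete(nvec)` contributes `sum(nvec)`, and a flat `Box` contributes `prod(shape)`. A standalone numpy sketch of that bookkeeping (the example spaces below are made up for illustration):

```python
import numpy as np

# Pre-network input sizes as ComplexInputNetwork computes them:
# Discrete(n) -> n, MultiDiscrete(nvec) -> sum(nvec), flat Box -> prod(shape).
def component_size(kind, arg):
    if kind == "discrete":
        return int(arg)            # one-hot width
    if kind == "multidiscrete":
        return int(np.sum(arg))    # concatenated one-hots
    return int(np.prod(arg))       # flattened Box shape

# Hypothetical observation components: Discrete(4), MultiDiscrete([3, 2]),
# and a Box with shape (5,).
sizes = [
    component_size("discrete", 4),
    component_size("multidiscrete", [3, 2]),
    component_size("box", (5,)),
]
concat_size = sum(sizes)
```

With these example spaces the concat input width is 4 + 5 + 5 = 14.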
minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/fcnet.py ADDED
@@ -0,0 +1,148 @@
+ import numpy as np
+ import gymnasium as gym
+ from typing import Dict
+
+ from ray.rllib.models.tf.misc import normc_initializer
+ from ray.rllib.models.tf.tf_modelv2 import TFModelV2
+ from ray.rllib.models.utils import get_activation_fn
+ from ray.rllib.utils.annotations import OldAPIStack
+ from ray.rllib.utils.framework import try_import_tf
+ from ray.rllib.utils.typing import TensorType, List, ModelConfigDict
+
+ tf1, tf, tfv = try_import_tf()
+
+
+ @OldAPIStack
+ class FullyConnectedNetwork(TFModelV2):
+     """Generic fully connected network implemented in ModelV2 API."""
+
+     def __init__(
+         self,
+         obs_space: gym.spaces.Space,
+         action_space: gym.spaces.Space,
+         num_outputs: int,
+         model_config: ModelConfigDict,
+         name: str,
+     ):
+         super(FullyConnectedNetwork, self).__init__(
+             obs_space, action_space, num_outputs, model_config, name
+         )
+
+         hiddens = list(model_config.get("fcnet_hiddens", [])) + list(
+             model_config.get("post_fcnet_hiddens", [])
+         )
+         activation = model_config.get("fcnet_activation")
+         if not model_config.get("fcnet_hiddens", []):
+             activation = model_config.get("post_fcnet_activation")
+         activation = get_activation_fn(activation)
+         no_final_linear = model_config.get("no_final_linear")
+         vf_share_layers = model_config.get("vf_share_layers")
+         free_log_std = model_config.get("free_log_std")
+
+         # Generate free-floating bias variables for the second half of
+         # the outputs.
+         if free_log_std:
+             assert num_outputs % 2 == 0, (
+                 "num_outputs must be divisible by two",
+                 num_outputs,
+             )
+             num_outputs = num_outputs // 2
+             self.log_std_var = tf.Variable(
+                 [0.0] * num_outputs, dtype=tf.float32, name="log_std"
+             )
+
+         # We are using obs_flat, so take the flattened shape as input.
+         inputs = tf.keras.layers.Input(
+             shape=(int(np.prod(obs_space.shape)),), name="observations"
+         )
+         # Last hidden layer output (before logits outputs).
+         last_layer = inputs
+         # The action distribution outputs.
+         logits_out = None
+         i = 1
+
+         # Create layers 0 to second-last.
+         for size in hiddens[:-1]:
+             last_layer = tf.keras.layers.Dense(
+                 size,
+                 name="fc_{}".format(i),
+                 activation=activation,
+                 kernel_initializer=normc_initializer(1.0),
+             )(last_layer)
+             i += 1
+
+         # The last layer is adjusted to be of size num_outputs, but it's a
+         # layer with activation.
+         if no_final_linear and num_outputs:
+             logits_out = tf.keras.layers.Dense(
+                 num_outputs,
+                 name="fc_out",
+                 activation=activation,
+                 kernel_initializer=normc_initializer(1.0),
+             )(last_layer)
+         # Finish the layers with the provided sizes (`hiddens`), plus -
+         # iff num_outputs > 0 - a last linear layer of size num_outputs.
+         else:
+             if len(hiddens) > 0:
+                 last_layer = tf.keras.layers.Dense(
+                     hiddens[-1],
+                     name="fc_{}".format(i),
+                     activation=activation,
+                     kernel_initializer=normc_initializer(1.0),
+                 )(last_layer)
+             if num_outputs:
+                 logits_out = tf.keras.layers.Dense(
+                     num_outputs,
+                     name="fc_out",
+                     activation=None,
+                     kernel_initializer=normc_initializer(0.01),
+                 )(last_layer)
+             # Adjust num_outputs to be the number of nodes in the last layer.
+             else:
+                 self.num_outputs = ([int(np.prod(obs_space.shape))] + hiddens[-1:])[-1]
+
+         # Concat the log std vars to the end of the state-dependent means.
+         if free_log_std and logits_out is not None:
+
+             def tiled_log_std(x):
+                 return tf.tile(tf.expand_dims(self.log_std_var, 0), [tf.shape(x)[0], 1])
+
+             log_std_out = tf.keras.layers.Lambda(tiled_log_std)(inputs)
+             logits_out = tf.keras.layers.Concatenate(axis=1)([logits_out, log_std_out])
+
+         last_vf_layer = None
+         if not vf_share_layers:
+             # Build a parallel set of hidden layers for the value net.
+             last_vf_layer = inputs
+             i = 1
+             for size in hiddens:
+                 last_vf_layer = tf.keras.layers.Dense(
+                     size,
+                     name="fc_value_{}".format(i),
+                     activation=activation,
+                     kernel_initializer=normc_initializer(1.0),
+                 )(last_vf_layer)
+                 i += 1
+
+         value_out = tf.keras.layers.Dense(
+             1,
+             name="value_out",
+             activation=None,
+             kernel_initializer=normc_initializer(0.01),
+         )(last_vf_layer if last_vf_layer is not None else last_layer)
+
+         self.base_model = tf.keras.Model(
+             inputs, [(logits_out if logits_out is not None else last_layer), value_out]
+         )
+
+     def forward(
+         self,
+         input_dict: Dict[str, TensorType],
+         state: List[TensorType],
+         seq_lens: TensorType,
+     ) -> (TensorType, List[TensorType]):
+         model_out, self._value_out = self.base_model(input_dict["obs_flat"])
+         return model_out, state
+
+     def value_function(self) -> TensorType:
+         return tf.reshape(self._value_out, [-1])
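The `free_log_std` branch above halves `num_outputs`: the network emits only the distribution means, while a single trainable log-std vector is tiled across the batch and concatenated onto the logits. A numpy sketch of that tiling (all numbers illustrative):

```python
import numpy as np

# Sketch of the free_log_std path: the net outputs num_outputs // 2 means;
# one state-independent log-std vector is tiled per batch row and appended.
num_outputs = 6                              # means + log-stds, must be even
means = np.zeros((3, num_outputs // 2))      # hypothetical network output, batch of 3
log_std_var = np.full(num_outputs // 2, -0.5)  # the free-floating variable

tiled = np.tile(log_std_var[None, :], (means.shape[0], 1))
logits = np.concatenate([means, tiled], axis=1)
```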
minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/layers/__init__.py ADDED
@@ -0,0 +1,17 @@
+ from ray.rllib.models.tf.layers.gru_gate import GRUGate
+ from ray.rllib.models.tf.layers.noisy_layer import NoisyLayer
+ from ray.rllib.models.tf.layers.relative_multi_head_attention import (
+     PositionalEmbedding,
+     RelativeMultiHeadAttention,
+ )
+ from ray.rllib.models.tf.layers.skip_connection import SkipConnection
+ from ray.rllib.models.tf.layers.multi_head_attention import MultiHeadAttention
+
+ __all__ = [
+     "GRUGate",
+     "MultiHeadAttention",
+     "NoisyLayer",
+     "PositionalEmbedding",
+     "RelativeMultiHeadAttention",
+     "SkipConnection",
+ ]
minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/layers/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (679 Bytes).

minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/layers/__pycache__/gru_gate.cpython-310.pyc ADDED
Binary file (2.2 kB).

minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/layers/__pycache__/multi_head_attention.cpython-310.pyc ADDED
Binary file (2.3 kB).

minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/layers/__pycache__/noisy_layer.cpython-310.pyc ADDED
Binary file (3.43 kB).

minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/layers/__pycache__/relative_multi_head_attention.cpython-310.pyc ADDED
Binary file (4.71 kB).

minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/layers/__pycache__/skip_connection.cpython-310.pyc ADDED
Binary file (1.85 kB).
minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/layers/gru_gate.py ADDED
@@ -0,0 +1,58 @@
+ from ray.rllib.utils.framework import try_import_tf
+ from ray.rllib.utils.typing import TensorType, TensorShape
+ from ray.rllib.utils.deprecation import deprecation_warning
+ from ray.util import log_once
+
+ tf1, tf, tfv = try_import_tf()
+
+
+ class GRUGate(tf.keras.layers.Layer if tf else object):
+     def __init__(self, init_bias: float = 0.0, **kwargs):
+         super().__init__(**kwargs)
+         self._init_bias = init_bias
+         if log_once("gru_gate"):
+             deprecation_warning(
+                 old="rllib.models.tf.layers.GRUGate",
+             )
+
+     def build(self, input_shape: TensorShape):
+         h_shape, x_shape = input_shape
+         if x_shape[-1] != h_shape[-1]:
+             raise ValueError(
+                 "Both inputs to GRUGate must have equal size in last axis!"
+             )
+
+         dim = int(h_shape[-1])
+         self._w_r = self.add_weight(shape=(dim, dim))
+         self._w_z = self.add_weight(shape=(dim, dim))
+         self._w_h = self.add_weight(shape=(dim, dim))
+
+         self._u_r = self.add_weight(shape=(dim, dim))
+         self._u_z = self.add_weight(shape=(dim, dim))
+         self._u_h = self.add_weight(shape=(dim, dim))
+
+         def bias_initializer(shape, dtype):
+             return tf.fill(shape, tf.cast(self._init_bias, dtype=dtype))
+
+         self._bias_z = self.add_weight(shape=(dim,), initializer=bias_initializer)
+
+     def call(self, inputs: TensorType, **kwargs) -> TensorType:
+         # Pass in internal state first.
+         h, X = inputs
+
+         r = tf.tensordot(X, self._w_r, axes=1) + tf.tensordot(h, self._u_r, axes=1)
+         r = tf.nn.sigmoid(r)
+
+         z = (
+             tf.tensordot(X, self._w_z, axes=1)
+             + tf.tensordot(h, self._u_z, axes=1)
+             - self._bias_z
+         )
+         z = tf.nn.sigmoid(z)
+
+         h_next = tf.tensordot(X, self._w_h, axes=1) + tf.tensordot(
+             (h * r), self._u_h, axes=1
+         )
+         h_next = tf.nn.tanh(h_next)
+
+         return (1 - z) * h + z * h_next
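The gating math in `GRUGate.call` above, written out in plain numpy with random stand-in weights (shapes and the batch are illustrative). The output interpolates between the old state `h` and the candidate `h_next` via the gate `z`:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Random stand-ins for the trained weights above.
rng = np.random.default_rng(0)
dim = 4
h = rng.normal(size=(2, dim))   # internal state (passed in first)
X = rng.normal(size=(2, dim))   # wrapped layer's output
w_r, w_z, w_h, u_r, u_z, u_h = (rng.normal(size=(dim, dim)) for _ in range(6))
bias_z = np.full(dim, 2.0)      # init_bias; a large value keeps z small early on

r = sigmoid(X @ w_r + h @ u_r)                 # reset gate
z = sigmoid(X @ w_z + h @ u_z - bias_z)        # update gate (bias subtracted)
h_next = np.tanh(X @ w_h + (h * r) @ u_h)      # candidate state
out = (1 - z) * h + z * h_next                 # gated interpolation
```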
minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/layers/multi_head_attention.py ADDED
@@ -0,0 +1,61 @@
+ """
+ [1] - Attention Is All You Need - Vaswani, Jones, Shazeer, Parmar,
+       Uszkoreit, Gomez, Kaiser - Google Brain/Research, U Toronto - 2017.
+       https://arxiv.org/pdf/1706.03762.pdf
+ """
+ from ray.rllib.utils.framework import try_import_tf
+ from ray.rllib.utils.typing import TensorType
+ from ray.rllib.utils.deprecation import deprecation_warning
+ from ray.util import log_once
+
+ tf1, tf, tfv = try_import_tf()
+
+
+ class MultiHeadAttention(tf.keras.layers.Layer if tf else object):
+     """A multi-head attention layer described in [1]."""
+
+     def __init__(self, out_dim: int, num_heads: int, head_dim: int, **kwargs):
+         super().__init__(**kwargs)
+
+         # No bias or non-linearity.
+         self._num_heads = num_heads
+         self._head_dim = head_dim
+         self._qkv_layer = tf.keras.layers.Dense(
+             3 * num_heads * head_dim, use_bias=False
+         )
+         self._linear_layer = tf.keras.layers.TimeDistributed(
+             tf.keras.layers.Dense(out_dim, use_bias=False)
+         )
+         if log_once("multi_head_attention"):
+             deprecation_warning(
+                 old="rllib.models.tf.layers.MultiHeadAttention",
+             )
+
+     def call(self, inputs: TensorType) -> TensorType:
+         L = tf.shape(inputs)[1]  # length of segment
+         H = self._num_heads  # number of attention heads
+         D = self._head_dim  # attention head dimension
+
+         qkv = self._qkv_layer(inputs)
+
+         queries, keys, values = tf.split(qkv, 3, -1)
+         queries = queries[:, -L:]  # only query based on the segment
+
+         queries = tf.reshape(queries, [-1, L, H, D])
+         keys = tf.reshape(keys, [-1, L, H, D])
+         values = tf.reshape(values, [-1, L, H, D])
+
+         score = tf.einsum("bihd,bjhd->bijh", queries, keys)
+         score = score / D**0.5
+
+         # causal mask of the same length as the sequence
+         mask = tf.sequence_mask(tf.range(1, L + 1), dtype=score.dtype)
+         mask = mask[None, :, :, None]
+
+         masked_score = score * mask + 1e30 * (mask - 1.0)
+         wmat = tf.nn.softmax(masked_score, axis=2)
+
+         out = tf.einsum("bijh,bjhd->bihd", wmat, values)
+         shape = tf.concat([tf.shape(out)[:2], [H * D]], axis=0)
+         out = tf.reshape(out, shape)
+         return self._linear_layer(out)
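The causal mask built above with `tf.sequence_mask(tf.range(1, L + 1))` is just a lower-triangular ones matrix: row `i` may attend to columns `0..i`, and everything else is pushed to a huge negative score before the softmax. A numpy check of that equivalence:

```python
import numpy as np

# Equivalent of tf.sequence_mask(tf.range(1, L + 1)): row i has (i + 1)
# leading ones, i.e. a lower-triangular matrix including the diagonal.
L = 4
mask = (np.arange(L)[None, :] < np.arange(1, L + 1)[:, None]).astype(float)

# Same masking arithmetic as in call(): allowed scores pass through,
# future positions get -1e30 and vanish under softmax.
score = np.ones((L, L))
masked_score = score * mask + 1e30 * (mask - 1.0)
```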
minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/layers/noisy_layer.py ADDED
@@ -0,0 +1,118 @@
+ import numpy as np
+
+ from ray.rllib.models.utils import get_activation_fn
+ from ray.rllib.utils.framework import (
+     get_variable,
+     try_import_tf,
+     TensorType,
+     TensorShape,
+ )
+ from ray.rllib.utils.deprecation import deprecation_warning
+ from ray.util import log_once
+
+ tf1, tf, tfv = try_import_tf()
+
+
+ class NoisyLayer(tf.keras.layers.Layer if tf else object):
+     r"""A Layer that adds learnable Noise to some previous layer's outputs.
+
+     Consists of:
+     - a common dense layer: y = w^{T}x + b
+     - a noisy layer: y = (w + \epsilon_w*\sigma_w)^{T}x +
+       (b+\epsilon_b*\sigma_b)
+     , where \epsilon are random variables sampled from factorized normal
+     distributions and \sigma are trainable variables which are expected to
+     vanish along the training procedure.
+     """
+
+     def __init__(
+         self, prefix: str, out_size: int, sigma0: float, activation: str = "relu"
+     ):
+         """Initializes a NoisyLayer object.
+
+         Args:
+             prefix: Name prefix for this layer's variables.
+             out_size: Output size for Noisy Layer
+             sigma0: Initialization value for sigma_b (bias noise)
+             activation: Non-linear activation for Noisy Layer
+         """
+         super().__init__()
+         self.prefix = prefix
+         self.out_size = out_size
+         # TF noise generation can be unreliable on GPU
+         # If generating the noise on the CPU,
+         # lowering sigma0 to 0.1 may be helpful
+         self.sigma0 = sigma0  # 0.5~GPU, 0.1~CPU
+         self.activation = activation
+         # Variables.
+         self.w = None  # Weight matrix.
+         self.b = None  # Biases.
+         self.sigma_w = None  # Noise for weight matrix
+         self.sigma_b = None  # Noise for biases.
+         if log_once("noisy_layer"):
+             deprecation_warning(
+                 old="rllib.models.tf.layers.NoisyLayer",
+             )
+
+     def build(self, input_shape: TensorShape):
+         in_size = int(input_shape[1])
+
+         self.sigma_w = get_variable(
+             value=tf.keras.initializers.RandomUniform(
+                 minval=-1.0 / np.sqrt(float(in_size)),
+                 maxval=1.0 / np.sqrt(float(in_size)),
+             ),
+             trainable=True,
+             tf_name=self.prefix + "_sigma_w",
+             shape=[in_size, self.out_size],
+             dtype=tf.float32,
+         )
+
+         self.sigma_b = get_variable(
+             value=tf.keras.initializers.Constant(self.sigma0 / np.sqrt(float(in_size))),
+             trainable=True,
+             tf_name=self.prefix + "_sigma_b",
+             shape=[self.out_size],
+             dtype=tf.float32,
+         )
+
+         self.w = get_variable(
+             value=tf.keras.initializers.GlorotUniform(),
+             tf_name=self.prefix + "_fc_w",
+             trainable=True,
+             shape=[in_size, self.out_size],
+             dtype=tf.float32,
+         )
+
+         self.b = get_variable(
+             value=tf.keras.initializers.Zeros(),
+             tf_name=self.prefix + "_fc_b",
+             trainable=True,
+             shape=[self.out_size],
+             dtype=tf.float32,
+         )
+
+     def call(self, inputs: TensorType) -> TensorType:
+         in_size = int(inputs.shape[1])
+         epsilon_in = tf.random.normal(shape=[in_size])
+         epsilon_out = tf.random.normal(shape=[self.out_size])
+         epsilon_in = self._f_epsilon(epsilon_in)
+         epsilon_out = self._f_epsilon(epsilon_out)
+         epsilon_w = tf.matmul(
+             a=tf.expand_dims(epsilon_in, -1), b=tf.expand_dims(epsilon_out, 0)
+         )
+         epsilon_b = epsilon_out
+
+         action_activation = (
+             tf.matmul(inputs, self.w + self.sigma_w * epsilon_w)
+             + self.b
+             + self.sigma_b * epsilon_b
+         )
+
+         fn = get_activation_fn(self.activation, framework="tf")
+         if fn is not None:
+             action_activation = fn(action_activation)
+         return action_activation
+
+     def _f_epsilon(self, x: TensorType) -> TensorType:
+         return tf.math.sign(x) * tf.math.sqrt(tf.math.abs(x))
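The factorized noise scheme in `call` above samples only one vector per side and forms the full weight-noise matrix as their outer product, with each vector first passed through `f(x) = sign(x) * sqrt(|x|)`. A numpy sketch (sizes illustrative):

```python
import numpy as np

def f_epsilon(x):
    # Same scaling as NoisyLayer._f_epsilon above.
    return np.sign(x) * np.sqrt(np.abs(x))

# Factorized Gaussian noise: in_size + out_size samples instead of
# in_size * out_size; the weight-noise matrix is an outer product.
rng = np.random.default_rng(0)
in_size, out_size = 3, 2
eps_in = f_epsilon(rng.normal(size=in_size))
eps_out = f_epsilon(rng.normal(size=out_size))
epsilon_w = np.outer(eps_in, eps_out)   # shape (in_size, out_size)
epsilon_b = eps_out                     # bias noise reuses the output vector
```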
minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/layers/relative_multi_head_attention.py ADDED
@@ -0,0 +1,147 @@
+ from typing import Optional
+
+ from ray.rllib.utils.framework import try_import_tf
+ from ray.rllib.utils.typing import TensorType
+ from ray.rllib.utils.deprecation import deprecation_warning
+ from ray.util import log_once
+
+ tf1, tf, tfv = try_import_tf()
+
+
+ class RelativeMultiHeadAttention(tf.keras.layers.Layer if tf else object):
+     """A RelativeMultiHeadAttention layer as described in [3].
+
+     Uses segment level recurrence with state reuse.
+     """
+
+     def __init__(
+         self,
+         out_dim: int,
+         num_heads: int,
+         head_dim: int,
+         input_layernorm: bool = False,
+         output_activation: Optional["tf.nn.activation"] = None,
+         **kwargs
+     ):
+         """Initializes a RelativeMultiHeadAttention keras Layer object.
+
+         Args:
+             out_dim: The output dimensions of the multi-head attention
+                 unit.
+             num_heads: The number of attention heads to use.
+                 Denoted `H` in [2].
+             head_dim: The dimension of a single(!) attention head within
+                 a multi-head attention unit. Denoted as `d` in [3].
+             input_layernorm: Whether to prepend a LayerNorm before
+                 everything else. Should be True for building a GTrXL.
+             output_activation (Optional[tf.nn.activation]): Optional tf.nn
+                 activation function. Should be relu for GTrXL.
+             **kwargs:
+         """
+         if log_once("relative_multi_head_attention"):
+             deprecation_warning(
+                 old="rllib.models.tf.layers.RelativeMultiHeadAttention",
+             )
+         super().__init__(**kwargs)
+
+         # No bias or non-linearity.
+         self._num_heads = num_heads
+         self._head_dim = head_dim
+         # 3=Query, key, and value inputs.
+         self._qkv_layer = tf.keras.layers.Dense(
+             3 * num_heads * head_dim, use_bias=False
+         )
+         self._linear_layer = tf.keras.layers.TimeDistributed(
+             tf.keras.layers.Dense(out_dim, use_bias=False, activation=output_activation)
+         )
+
+         self._uvar = self.add_weight(shape=(num_heads, head_dim))
+         self._vvar = self.add_weight(shape=(num_heads, head_dim))
+
+         # Constant (non-trainable) sinusoid rel pos encoding matrix, which
+         # depends on this incoming time dimension.
+         # For inference, we prepend the memory to the current timestep's
+         # input: Tau + 1. For training, we prepend the memory to the input
+         # sequence: Tau + T.
+         self._pos_embedding = PositionalEmbedding(out_dim)
+         self._pos_proj = tf.keras.layers.Dense(num_heads * head_dim, use_bias=False)
+
+         self._input_layernorm = None
+         if input_layernorm:
+             self._input_layernorm = tf.keras.layers.LayerNormalization(axis=-1)
+
+     def call(
+         self, inputs: TensorType, memory: Optional[TensorType] = None
+     ) -> TensorType:
+         T = tf.shape(inputs)[1]  # length of segment (time)
+         H = self._num_heads  # number of attention heads
+         d = self._head_dim  # attention head dimension
+
+         # Add previous memory chunk (as const, w/o gradient) to input.
+         # Tau (number of (prev) time slices in each memory chunk).
+         Tau = tf.shape(memory)[1]
+         inputs = tf.concat([tf.stop_gradient(memory), inputs], axis=1)
+
+         # Apply the Layer-Norm.
+         if self._input_layernorm is not None:
+             inputs = self._input_layernorm(inputs)
+
+         qkv = self._qkv_layer(inputs)
+
+         queries, keys, values = tf.split(qkv, 3, -1)
+         # Cut out memory timesteps from query.
+         queries = queries[:, -T:]
+
+         # Splitting up queries into per-head dims (d).
+         queries = tf.reshape(queries, [-1, T, H, d])
+         keys = tf.reshape(keys, [-1, Tau + T, H, d])
+         values = tf.reshape(values, [-1, Tau + T, H, d])
+
+         R = self._pos_embedding(Tau + T)
+         R = self._pos_proj(R)
+         R = tf.reshape(R, [Tau + T, H, d])
+
+         # b=batch
+         # i and j=time indices (i=max-timesteps (inputs); j=Tau memory space)
+         # h=head
+         # d=head-dim (over which we will reduce-sum)
+         score = tf.einsum("bihd,bjhd->bijh", queries + self._uvar, keys)
+         pos_score = tf.einsum("bihd,jhd->bijh", queries + self._vvar, R)
+         score = score + self.rel_shift(pos_score)
+         score = score / d**0.5
+
+         # Causal mask of the same length as the sequence.
+         mask = tf.sequence_mask(tf.range(Tau + 1, Tau + T + 1), dtype=score.dtype)
+         mask = mask[None, :, :, None]
+
+         masked_score = score * mask + 1e30 * (mask - 1.0)
+         wmat = tf.nn.softmax(masked_score, axis=2)
+
+         out = tf.einsum("bijh,bjhd->bihd", wmat, values)
+         out = tf.reshape(out, tf.concat((tf.shape(out)[:2], [H * d]), axis=0))
+         return self._linear_layer(out)
+
+     @staticmethod
+     def rel_shift(x: TensorType) -> TensorType:
+         # Transposed version of the shift approach described in [3].
+         # https://github.com/kimiyoung/transformer-xl/blob/
+         # 44781ed21dbaec88b280f74d9ae2877f52b492a5/tf/model.py#L31
+         x_size = tf.shape(x)
+
+         x = tf.pad(x, [[0, 0], [0, 0], [1, 0], [0, 0]])
+         x = tf.reshape(x, [x_size[0], x_size[2] + 1, x_size[1], x_size[3]])
+         x = x[:, 1:, :, :]
+         x = tf.reshape(x, x_size)
+
+         return x
+
+
+ class PositionalEmbedding(tf.keras.layers.Layer if tf else object):
+     def __init__(self, out_dim, **kwargs):
+         super().__init__(**kwargs)
+         self.inverse_freq = 1 / (10000 ** (tf.range(0, out_dim, 2.0) / out_dim))
+
+     def call(self, seq_length):
+         pos_offsets = tf.cast(tf.range(seq_length - 1, -1, -1), tf.float32)
+         inputs = pos_offsets[:, None] * self.inverse_freq[None, :]
+         return tf.concat((tf.sin(inputs), tf.cos(inputs)), axis=-1)
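The `rel_shift` trick above (pad one zero column, fold it through a reshape, drop the first row, reshape back) realigns the relative-position scores without an explicit gather. A shape-preserving numpy translation of the same steps, on a small made-up tensor:

```python
import numpy as np

def rel_shift(x):
    # Numpy version of RelativeMultiHeadAttention.rel_shift above:
    # pad a zero column on the left of axis 2, reshape so the pad "rotates"
    # the entries, drop the first row, then restore the original shape.
    b, i, j, h = x.shape
    x = np.pad(x, [(0, 0), (0, 0), (1, 0), (0, 0)])
    x = x.reshape(b, j + 1, i, h)
    x = x[:, 1:, :, :]
    return x.reshape(b, i, j, h)

x = np.arange(2 * 3 * 4 * 1, dtype=float).reshape(2, 3, 4, 1)
y = rel_shift(x)
```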
minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/layers/skip_connection.py ADDED
@@ -0,0 +1,46 @@
+ from typing import Optional, Any
+
+ from ray.rllib.utils.framework import try_import_tf
+ from ray.rllib.utils.typing import TensorType
+ from ray.rllib.utils.deprecation import deprecation_warning
+ from ray.util import log_once
+
+ tf1, tf, tfv = try_import_tf()
+
+
+ class SkipConnection(tf.keras.layers.Layer if tf else object):
+     """Skip connection layer.
+
+     Adds the original input to the output (regular residual layer) OR uses
+     input as hidden state input to a given fan_in_layer.
+     """
+
+     def __init__(self, layer: Any, fan_in_layer: Optional[Any] = None, **kwargs):
+         """Initializes a SkipConnection keras layer object.
+
+         Args:
+             layer (tf.keras.layers.Layer): Any layer processing inputs.
+             fan_in_layer (Optional[tf.keras.layers.Layer]): An optional
+                 layer taking two inputs: The original input and the output
+                 of `layer`.
+         """
+         if log_once("skip_connection"):
+             deprecation_warning(
+                 old="rllib.models.tf.layers.SkipConnection",
+             )
+         super().__init__(**kwargs)
+         self._layer = layer
+         self._fan_in_layer = fan_in_layer
+
+     def call(self, inputs: TensorType, **kwargs) -> TensorType:
+         # del kwargs
+         outputs = self._layer(inputs, **kwargs)
+         # Residual case, just add inputs to outputs.
+         if self._fan_in_layer is None:
+             outputs = outputs + inputs
+         # Fan-in e.g. RNN: Call fan-in with `inputs` and `outputs`.
+         else:
+             # NOTE: In the GRU case, `inputs` is the state input.
+             outputs = self._fan_in_layer((inputs, outputs))
+
+         return outputs
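The two `SkipConnection` modes above, with plain functions standing in for Keras layers (purely illustrative): without a fan-in layer the input is added back to the wrapped layer's output; with one, both are handed to the fan-in function instead.

```python
# Stand-in for the wrapped layer: doubles each element.
def layer(x):
    return [v * 2.0 for v in x]

# Stand-in for an explicit fan-in layer: averages input and output.
def fan_in(pair):
    inputs, outputs = pair
    return [(a + b) / 2.0 for a, b in zip(inputs, outputs)]

x = [1.0, 2.0]
residual = [a + b for a, b in zip(layer(x), x)]  # fan_in_layer is None
fanned = fan_in((x, layer(x)))                   # fan_in_layer provided
```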
minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/misc.py ADDED
@@ -0,0 +1,90 @@
+ import numpy as np
+ from typing import Tuple, Any, Optional
+
+ from ray.rllib.utils.annotations import DeveloperAPI
+ from ray.rllib.utils.framework import try_import_tf
+ from ray.rllib.utils.typing import TensorType
+
+ tf1, tf, tfv = try_import_tf()
+
+
+ # TODO: (sven) obsolete this class.
+ @DeveloperAPI
+ def normc_initializer(std: float = 1.0) -> Any:
+     def _initializer(shape, dtype=None, partition_info=None):
+         out = np.random.randn(*shape).astype(
+             dtype.name if hasattr(dtype, "name") else dtype or np.float32
+         )
+         out *= std / np.sqrt(np.square(out).sum(axis=0, keepdims=True))
+         return tf.constant(out)
+
+     return _initializer
+
+
+ @DeveloperAPI
+ def conv2d(
+     x: TensorType,
+     num_filters: int,
+     name: str,
+     filter_size: Tuple[int, int] = (3, 3),
+     stride: Tuple[int, int] = (1, 1),
+     pad: str = "SAME",
+     dtype: Optional[Any] = None,
+     collections: Optional[Any] = None,
+ ) -> TensorType:
+
+     if dtype is None:
+         dtype = tf.float32
+
+     with tf1.variable_scope(name):
+         stride_shape = [1, stride[0], stride[1], 1]
+         filter_shape = [
+             filter_size[0],
+             filter_size[1],
+             int(x.get_shape()[3]),
+             num_filters,
+         ]
+
+         # There are "num input feature maps * filter height * filter width"
+         # inputs to each hidden unit.
+         fan_in = np.prod(filter_shape[:3])
+         # Each unit in the lower layer receives a gradient from: "num output
+         # feature maps * filter height * filter width" / pooling size.
+         fan_out = np.prod(filter_shape[:2]) * num_filters
+         # Initialize weights with random weights.
+         w_bound = np.sqrt(6 / (fan_in + fan_out))
+
+         w = tf1.get_variable(
+             "W",
+             filter_shape,
+             dtype,
+             tf1.random_uniform_initializer(-w_bound, w_bound),
+             collections=collections,
+         )
+         b = tf1.get_variable(
+             "b",
+             [1, 1, 1, num_filters],
+             initializer=tf1.constant_initializer(0.0),
+             collections=collections,
+         )
+         return tf1.nn.conv2d(x, w, stride_shape, pad) + b
+
+
+ @DeveloperAPI
+ def linear(
+     x: TensorType,
+     size: int,
+     name: str,
+     initializer: Optional[Any] = None,
+     bias_init: float = 0.0,
+ ) -> TensorType:
+     w = tf1.get_variable(name + "/w", [x.get_shape()[1], size], initializer=initializer)
+     b = tf1.get_variable(
+         name + "/b", [size], initializer=tf1.constant_initializer(bias_init)
+     )
+     return tf.matmul(x, w) + b
+
+
+ @DeveloperAPI
+ def flatten(x: TensorType) -> TensorType:
+     return tf.reshape(x, [-1, np.prod(x.get_shape().as_list()[1:])])
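The column-normalization trick in `normc_initializer` above is easy to verify outside TensorFlow. The sketch below is an illustrative NumPy transcription (function name and shapes are my own, and the `tf.constant` wrapper is dropped); it shows that every column of the result ends up with L2 norm equal to `std`:

```python
import numpy as np

def normc_weights(shape, std=1.0, rng=None):
    # Same math as normc_initializer above: draw standard normals, then
    # rescale each column (axis 0) so its L2 norm equals `std`.
    if rng is None:
        rng = np.random.default_rng(0)
    out = rng.standard_normal(shape).astype(np.float32)
    out *= std / np.sqrt(np.square(out).sum(axis=0, keepdims=True))
    return out

w = normc_weights((64, 8), std=0.5)
# Every column now has L2 norm 0.5 (up to float32 rounding).
print(np.linalg.norm(w, axis=0))
```

Normalizing columns rather than using a plain Gaussian keeps the scale of each output unit's weight vector independent of fan-in, which is why this initializer was popular for small policy networks.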
minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/noop.py ADDED
@@ -0,0 +1,17 @@
+ from ray.rllib.models.modelv2 import ModelV2
+ from ray.rllib.models.tf.tf_modelv2 import TFModelV2
+ from ray.rllib.utils.annotations import OldAPIStack, override
+ from ray.rllib.utils.framework import try_import_tf
+
+ _, tf, _ = try_import_tf()
+
+
+ @OldAPIStack
+ class NoopModel(TFModelV2):
+     """Trivial model that just returns the obs flattened.
+
+     This is the model used if use_state_preprocessor=False."""
+
+     @override(ModelV2)
+     def forward(self, input_dict, state, seq_lens):
+         return tf.cast(input_dict["obs_flat"], tf.float32), state
minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/recurrent_net.py ADDED
@@ -0,0 +1,292 @@
+ import numpy as np
+ import gymnasium as gym
+ from gymnasium.spaces import Discrete, MultiDiscrete
+ import logging
+ import tree  # pip install dm_tree
+ from typing import Dict, List, Tuple
+
+ from ray.rllib.models.modelv2 import ModelV2
+ from ray.rllib.models.tf.tf_modelv2 import TFModelV2
+ from ray.rllib.policy.rnn_sequencing import add_time_dimension
+ from ray.rllib.policy.sample_batch import SampleBatch
+ from ray.rllib.policy.view_requirement import ViewRequirement
+ from ray.rllib.utils.annotations import OldAPIStack, override
+ from ray.rllib.utils.framework import try_import_tf
+ from ray.rllib.utils.spaces.space_utils import get_base_struct_from_space
+ from ray.rllib.utils.tf_utils import flatten_inputs_to_1d_tensor, one_hot
+ from ray.rllib.utils.typing import ModelConfigDict, TensorType
+ from ray.rllib.utils.deprecation import deprecation_warning
+ from ray.util.debug import log_once
+
+ tf1, tf, tfv = try_import_tf()
+ logger = logging.getLogger(__name__)
+
+
+ @OldAPIStack
+ class RecurrentNetwork(TFModelV2):
+     """Helper class to simplify implementing RNN models with TFModelV2.
+
+     Instead of implementing forward(), you can implement forward_rnn() which
+     takes batches with the time dimension added already.
+
+     Here is an example implementation for a subclass
+     ``MyRNNClass(RecurrentNetwork)``::
+
+         def __init__(self, *args, **kwargs):
+             super(MyModelClass, self).__init__(*args, **kwargs)
+             cell_size = 256
+
+             # Define input layers
+             input_layer = tf.keras.layers.Input(
+                 shape=(None, obs_space.shape[0]))
+             state_in_h = tf.keras.layers.Input(shape=(256, ))
+             state_in_c = tf.keras.layers.Input(shape=(256, ))
+             seq_in = tf.keras.layers.Input(shape=(), dtype=tf.int32)
+
+             # Send to LSTM cell
+             lstm_out, state_h, state_c = tf.keras.layers.LSTM(
+                 cell_size, return_sequences=True, return_state=True,
+                 name="lstm")(
+                     inputs=input_layer,
+                     mask=tf.sequence_mask(seq_in),
+                     initial_state=[state_in_h, state_in_c])
+             output_layer = tf.keras.layers.Dense(...)(lstm_out)
+
+             # Create the RNN model
+             self.rnn_model = tf.keras.Model(
+                 inputs=[input_layer, seq_in, state_in_h, state_in_c],
+                 outputs=[output_layer, state_h, state_c])
+             self.rnn_model.summary()
+     """
+
+     @override(ModelV2)
+     def forward(
+         self,
+         input_dict: Dict[str, TensorType],
+         state: List[TensorType],
+         seq_lens: TensorType,
+     ) -> Tuple[TensorType, List[TensorType]]:
+         """Adds time dimension to batch before sending inputs to forward_rnn().
+
+         You should implement forward_rnn() in your subclass."""
+         # Creating a __init__ function that acts as a passthrough and adding the
+         # warning there led to errors, probably due to the multiple inheritance.
+         # We encountered the same error if we add the Deprecated decorator. We
+         # therefore add the deprecation warning here.
+         if log_once("recurrent_network_tf"):
+             deprecation_warning(
+                 old="ray.rllib.models.tf.recurrent_net.RecurrentNetwork"
+             )
+         assert seq_lens is not None
+         flat_inputs = input_dict["obs_flat"]
+         inputs = add_time_dimension(
+             padded_inputs=flat_inputs, seq_lens=seq_lens, framework="tf"
+         )
+         output, new_state = self.forward_rnn(
+             inputs,
+             state,
+             seq_lens,
+         )
+         return tf.reshape(output, [-1, self.num_outputs]), new_state
+
+     def forward_rnn(
+         self, inputs: TensorType, state: List[TensorType], seq_lens: TensorType
+     ) -> Tuple[TensorType, List[TensorType]]:
+         """Call the model with the given input tensors and state.
+
+         Args:
+             inputs: observation tensor with shape [B, T, obs_size].
+             state: list of state tensors, each with shape [B, T, size].
+             seq_lens: 1d tensor holding input sequence lengths.
+
+         Returns:
+             (outputs, new_state): The model output tensor of shape
+                 [B, T, num_outputs] and the list of new state tensors each with
+                 shape [B, size].
+
+         Sample implementation for the ``MyRNNClass`` example::
+
+             def forward_rnn(self, inputs, state, seq_lens):
+                 model_out, h, c = self.rnn_model([inputs, seq_lens] + state)
+                 return model_out, [h, c]
+         """
+         raise NotImplementedError("You must implement this for an RNN model")
+
+     def get_initial_state(self) -> List[TensorType]:
+         """Get the initial recurrent state values for the model.
+
+         Returns:
+             list of np.array objects, if any
+
+         Sample implementation for the ``MyRNNClass`` example::
+
+             def get_initial_state(self):
+                 return [
+                     np.zeros(self.cell_size, np.float32),
+                     np.zeros(self.cell_size, np.float32),
+                 ]
+         """
+         raise NotImplementedError("You must implement this for an RNN model")
+
+
+ @OldAPIStack
+ class LSTMWrapper(RecurrentNetwork):
+     """An LSTM wrapper serving as an interface for ModelV2s that set use_lstm."""
+
+     def __init__(
+         self,
+         obs_space: gym.spaces.Space,
+         action_space: gym.spaces.Space,
+         num_outputs: int,
+         model_config: ModelConfigDict,
+         name: str,
+     ):
+         super(LSTMWrapper, self).__init__(
+             obs_space, action_space, None, model_config, name
+         )
+         # At this point, self.num_outputs is the number of nodes coming
+         # from the wrapped (underlying) model. In other words, self.num_outputs
+         # is the input size for the LSTM layer.
+         # If None, set it to the observation space.
+         if self.num_outputs is None:
+             self.num_outputs = int(np.prod(self.obs_space.shape))
+
+         self.cell_size = model_config["lstm_cell_size"]
+         self.use_prev_action = model_config["lstm_use_prev_action"]
+         self.use_prev_reward = model_config["lstm_use_prev_reward"]
+
+         self.action_space_struct = get_base_struct_from_space(self.action_space)
+         self.action_dim = 0
+
+         for space in tree.flatten(self.action_space_struct):
+             if isinstance(space, Discrete):
+                 self.action_dim += space.n
+             elif isinstance(space, MultiDiscrete):
+                 self.action_dim += np.sum(space.nvec)
+             elif space.shape is not None:
+                 self.action_dim += int(np.prod(space.shape))
+             else:
+                 self.action_dim += int(len(space))
+
+         # Add prev-action/reward nodes to input to LSTM.
+         if self.use_prev_action:
+             self.num_outputs += self.action_dim
+         if self.use_prev_reward:
+             self.num_outputs += 1
+
+         # Define input layers.
+         input_layer = tf.keras.layers.Input(
+             shape=(None, self.num_outputs), name="inputs"
+         )
+
+         # Set self.num_outputs to the number of output nodes desired by the
+         # caller of this constructor.
+         self.num_outputs = num_outputs
+
+         state_in_h = tf.keras.layers.Input(shape=(self.cell_size,), name="h")
+         state_in_c = tf.keras.layers.Input(shape=(self.cell_size,), name="c")
+         seq_in = tf.keras.layers.Input(shape=(), name="seq_in", dtype=tf.int32)
+
+         # Preprocess observation with a hidden layer and send to LSTM cell
+         lstm_out, state_h, state_c = tf.keras.layers.LSTM(
+             self.cell_size, return_sequences=True, return_state=True, name="lstm"
+         )(
+             inputs=input_layer,
+             mask=tf.sequence_mask(seq_in),
+             initial_state=[state_in_h, state_in_c],
+         )
+
+         # Postprocess LSTM output with another hidden layer and compute values
+         logits = tf.keras.layers.Dense(
+             self.num_outputs, activation=tf.keras.activations.linear, name="logits"
+         )(lstm_out)
+         values = tf.keras.layers.Dense(1, activation=None, name="values")(lstm_out)
+
+         # Create the RNN model
+         self._rnn_model = tf.keras.Model(
+             inputs=[input_layer, seq_in, state_in_h, state_in_c],
+             outputs=[logits, values, state_h, state_c],
+         )
+         # Print out model summary in INFO logging mode.
+         if logger.isEnabledFor(logging.INFO):
+             self._rnn_model.summary()
+
+         # Add prev-a/r to this model's view, if required.
+         if model_config["lstm_use_prev_action"]:
+             self.view_requirements[SampleBatch.PREV_ACTIONS] = ViewRequirement(
+                 SampleBatch.ACTIONS, space=self.action_space, shift=-1
+             )
+         if model_config["lstm_use_prev_reward"]:
+             self.view_requirements[SampleBatch.PREV_REWARDS] = ViewRequirement(
+                 SampleBatch.REWARDS, shift=-1
+             )
+
+     @override(RecurrentNetwork)
+     def forward(
+         self,
+         input_dict: Dict[str, TensorType],
+         state: List[TensorType],
+         seq_lens: TensorType,
+     ) -> Tuple[TensorType, List[TensorType]]:
+         assert seq_lens is not None
+         # Push obs through "unwrapped" net's `forward()` first.
+         wrapped_out, _ = self._wrapped_forward(input_dict, [], None)
+
+         # Concat. prev-action/reward if required.
+         prev_a_r = []
+
+         # Prev actions.
+         if self.model_config["lstm_use_prev_action"]:
+             prev_a = input_dict[SampleBatch.PREV_ACTIONS]
+             # If actions are not processed yet (in their original form as
+             # they have been sent to the environment):
+             # Flatten/one-hot into 1D array.
+             if self.model_config["_disable_action_flattening"]:
+                 prev_a_r.append(
+                     flatten_inputs_to_1d_tensor(
+                         prev_a,
+                         spaces_struct=self.action_space_struct,
+                         time_axis=False,
+                     )
+                 )
+             # If actions are already flattened (but not one-hot'd yet!),
+             # one-hot discrete/multi-discrete actions here.
+             else:
+                 if isinstance(self.action_space, (Discrete, MultiDiscrete)):
+                     prev_a = one_hot(prev_a, self.action_space)
+                 prev_a_r.append(
+                     tf.reshape(tf.cast(prev_a, tf.float32), [-1, self.action_dim])
+                 )
+         # Prev rewards.
+         if self.model_config["lstm_use_prev_reward"]:
+             prev_a_r.append(
+                 tf.reshape(
+                     tf.cast(input_dict[SampleBatch.PREV_REWARDS], tf.float32), [-1, 1]
+                 )
+             )
+
+         # Concat prev. actions + rewards to the "main" input.
+         if prev_a_r:
+             wrapped_out = tf.concat([wrapped_out] + prev_a_r, axis=1)
+
+         # Push everything through our LSTM.
+         input_dict["obs_flat"] = wrapped_out
+         return super().forward(input_dict, state, seq_lens)
+
+     @override(RecurrentNetwork)
+     def forward_rnn(
+         self, inputs: TensorType, state: List[TensorType], seq_lens: TensorType
+     ) -> Tuple[TensorType, List[TensorType]]:
+         model_out, self._value_out, h, c = self._rnn_model([inputs, seq_lens] + state)
+         return model_out, [h, c]
+
+     @override(ModelV2)
+     def get_initial_state(self) -> List[np.ndarray]:
+         return [
+             np.zeros(self.cell_size, np.float32),
+             np.zeros(self.cell_size, np.float32),
+         ]
+
+     @override(ModelV2)
+     def value_function(self) -> TensorType:
+         return tf.reshape(self._value_out, [-1])
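For readers unfamiliar with what `RecurrentNetwork.forward()` relies on, here is a hedged NumPy sketch (toy shapes and variable names are my own): the padded `[B*T, F]` batch is folded into `[B, T, F]`, and `seq_lens` yields the boolean mask that `tf.sequence_mask(seq_in)` would produce for the LSTM.

```python
import numpy as np

# Hypothetical padded batch: 2 sequences of max length 3, feature size 2,
# flattened to [B*T, F] the way RLlib's sequencing code delivers it.
B, T, F = 2, 3, 2
padded = np.arange(B * T * F, dtype=np.float32).reshape(B * T, F)
seq_lens = np.array([3, 2])  # second sequence has one padding step

# What add_time_dimension does conceptually: fold the batch into [B, T, F].
inputs = padded.reshape(B, T, F)

# What mask=tf.sequence_mask(seq_in) does conceptually: mark valid timesteps.
mask = np.arange(T)[None, :] < seq_lens[:, None]  # shape [B, T], bool

print(inputs.shape)
print(mask)
```

The `tf.reshape(output, [-1, self.num_outputs])` at the end of `forward()` is the inverse fold, back to `[B*T, num_outputs]`.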
minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/tf_action_dist.py ADDED
@@ -0,0 +1,735 @@
+ import functools
+ import gymnasium as gym
+ from math import log
+ import numpy as np
+ import tree  # pip install dm_tree
+ from typing import Optional
+
+ from ray.rllib.models.action_dist import ActionDistribution
+ from ray.rllib.models.modelv2 import ModelV2
+ from ray.rllib.utils import MIN_LOG_NN_OUTPUT, MAX_LOG_NN_OUTPUT, SMALL_NUMBER
+ from ray.rllib.utils.annotations import OldAPIStack, override
+ from ray.rllib.utils.framework import try_import_tf, try_import_tfp
+ from ray.rllib.utils.spaces.space_utils import get_base_struct_from_space
+ from ray.rllib.utils.typing import TensorType, List, Union, Tuple, ModelConfigDict
+
+ tf1, tf, tfv = try_import_tf()
+ tfp = try_import_tfp()
+
+
+ @OldAPIStack
+ class TFActionDistribution(ActionDistribution):
+     """TF-specific extensions for building action distributions."""
+
+     @override(ActionDistribution)
+     def __init__(self, inputs: List[TensorType], model: ModelV2):
+         super().__init__(inputs, model)
+         self.sample_op = self._build_sample_op()
+         self.sampled_action_logp_op = self.logp(self.sample_op)
+
+     def _build_sample_op(self) -> TensorType:
+         """Implement this instead of sample(), to enable op reuse.
+
+         This is needed since the sample op is non-deterministic and is shared
+         between sample() and sampled_action_logp().
+         """
+         raise NotImplementedError
+
+     @override(ActionDistribution)
+     def sample(self) -> TensorType:
+         """Draw a sample from the action distribution."""
+         return self.sample_op
+
+     @override(ActionDistribution)
+     def sampled_action_logp(self) -> TensorType:
+         """Returns the log probability of the sampled action."""
+         return self.sampled_action_logp_op
+
+
+ @OldAPIStack
+ class Categorical(TFActionDistribution):
+     """Categorical distribution for discrete action spaces."""
+
+     def __init__(
+         self, inputs: List[TensorType], model: ModelV2 = None, temperature: float = 1.0
+     ):
+         assert temperature > 0.0, "Categorical `temperature` must be > 0.0!"
+         # Allow softmax formula w/ temperature != 1.0:
+         # Divide inputs by temperature.
+         super().__init__(inputs / temperature, model)
+
+     @override(ActionDistribution)
+     def deterministic_sample(self) -> TensorType:
+         return tf.math.argmax(self.inputs, axis=1)
+
+     @override(ActionDistribution)
+     def logp(self, x: TensorType) -> TensorType:
+         return -tf.nn.sparse_softmax_cross_entropy_with_logits(
+             logits=self.inputs, labels=tf.cast(x, tf.int32)
+         )
+
+     @override(ActionDistribution)
+     def entropy(self) -> TensorType:
+         a0 = self.inputs - tf.reduce_max(self.inputs, axis=1, keepdims=True)
+         ea0 = tf.exp(a0)
+         z0 = tf.reduce_sum(ea0, axis=1, keepdims=True)
+         p0 = ea0 / z0
+         return tf.reduce_sum(p0 * (tf.math.log(z0) - a0), axis=1)
+
+     @override(ActionDistribution)
+     def kl(self, other: ActionDistribution) -> TensorType:
+         a0 = self.inputs - tf.reduce_max(self.inputs, axis=1, keepdims=True)
+         a1 = other.inputs - tf.reduce_max(other.inputs, axis=1, keepdims=True)
+         ea0 = tf.exp(a0)
+         ea1 = tf.exp(a1)
+         z0 = tf.reduce_sum(ea0, axis=1, keepdims=True)
+         z1 = tf.reduce_sum(ea1, axis=1, keepdims=True)
+         p0 = ea0 / z0
+         return tf.reduce_sum(p0 * (a0 - tf.math.log(z0) - a1 + tf.math.log(z1)), axis=1)
+
+     @override(TFActionDistribution)
+     def _build_sample_op(self) -> TensorType:
+         return tf.squeeze(tf.random.categorical(self.inputs, 1), axis=1)
+
+     @staticmethod
+     @override(ActionDistribution)
+     def required_model_output_shape(action_space, model_config):
+         return action_space.n
+
+
+ @OldAPIStack
+ def get_categorical_class_with_temperature(t: float):
+     """Categorical distribution class that has customized default temperature."""
+
+     class CategoricalWithTemperature(Categorical):
+         def __init__(self, inputs, model=None, temperature=t):
+             super().__init__(inputs, model, temperature)
+
+     return CategoricalWithTemperature
+
+
+ @OldAPIStack
+ class MultiCategorical(TFActionDistribution):
+     """MultiCategorical distribution for MultiDiscrete action spaces."""
+
+     def __init__(
+         self,
+         inputs: List[TensorType],
+         model: ModelV2,
+         input_lens: Union[List[int], np.ndarray, Tuple[int, ...]],
+         action_space=None,
+     ):
+         # skip TFActionDistribution init
+         ActionDistribution.__init__(self, inputs, model)
+         self.cats = [
+             Categorical(input_, model)
+             for input_ in tf.split(inputs, input_lens, axis=1)
+         ]
+         self.action_space = action_space
+         if self.action_space is None:
+             self.action_space = gym.spaces.MultiDiscrete(
+                 [c.inputs.shape[1] for c in self.cats]
+             )
+         self.sample_op = self._build_sample_op()
+         self.sampled_action_logp_op = self.logp(self.sample_op)
+
+     @override(ActionDistribution)
+     def deterministic_sample(self) -> TensorType:
+         sample_ = tf.stack([cat.deterministic_sample() for cat in self.cats], axis=1)
+         if isinstance(self.action_space, gym.spaces.Box):
+             return tf.cast(
+                 tf.reshape(sample_, [-1] + list(self.action_space.shape)),
+                 self.action_space.dtype,
+             )
+         return sample_
+
+     @override(ActionDistribution)
+     def logp(self, actions: TensorType) -> TensorType:
+         # If tensor is provided, unstack it into list.
+         if isinstance(actions, tf.Tensor):
+             if isinstance(self.action_space, gym.spaces.Box):
+                 actions = tf.reshape(
+                     actions, [-1, int(np.prod(self.action_space.shape))]
+                 )
+             elif isinstance(self.action_space, gym.spaces.MultiDiscrete):
+                 actions.set_shape((None, len(self.cats)))
+             actions = tf.unstack(tf.cast(actions, tf.int32), axis=1)
+         logps = tf.stack([cat.logp(act) for cat, act in zip(self.cats, actions)])
+         return tf.reduce_sum(logps, axis=0)
+
+     @override(ActionDistribution)
+     def multi_entropy(self) -> TensorType:
+         return tf.stack([cat.entropy() for cat in self.cats], axis=1)
+
+     @override(ActionDistribution)
+     def entropy(self) -> TensorType:
+         return tf.reduce_sum(self.multi_entropy(), axis=1)
+
+     @override(ActionDistribution)
+     def multi_kl(self, other: ActionDistribution) -> TensorType:
+         return tf.stack(
+             [cat.kl(oth_cat) for cat, oth_cat in zip(self.cats, other.cats)], axis=1
+         )
+
+     @override(ActionDistribution)
+     def kl(self, other: ActionDistribution) -> TensorType:
+         return tf.reduce_sum(self.multi_kl(other), axis=1)
+
+     @override(TFActionDistribution)
+     def _build_sample_op(self) -> TensorType:
+         sample_op = tf.stack([cat.sample() for cat in self.cats], axis=1)
+         if isinstance(self.action_space, gym.spaces.Box):
+             return tf.cast(
+                 tf.reshape(sample_op, [-1] + list(self.action_space.shape)),
+                 dtype=self.action_space.dtype,
+             )
+         return sample_op
+
+     @staticmethod
+     @override(ActionDistribution)
+     def required_model_output_shape(
+         action_space: gym.Space, model_config: ModelConfigDict
+     ) -> Union[int, np.ndarray]:
+         # Int Box.
+         if isinstance(action_space, gym.spaces.Box):
+             assert action_space.dtype.name.startswith("int")
+             low_ = np.min(action_space.low)
+             high_ = np.max(action_space.high)
+             assert np.all(action_space.low == low_)
+             assert np.all(action_space.high == high_)
+             return np.prod(action_space.shape, dtype=np.int32) * (high_ - low_ + 1)
+         # MultiDiscrete space.
+         else:
+             # nvec is already integer, so no casting needed.
+             return np.sum(action_space.nvec)
+
+
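The max-subtraction in `Categorical.entropy()` and `kl()` above is the standard log-sum-exp stabilization. The algebra can be sanity-checked in plain NumPy (an illustrative transcription with my own function name, not RLlib code):

```python
import numpy as np

def categorical_entropy(logits):
    # NumPy transcription of Categorical.entropy(): shift by the row max
    # for numerical stability, then use the rearranged form
    # H = sum_i p_i * (log z - a_i), which equals -sum_i p_i * log p_i.
    a0 = logits - logits.max(axis=1, keepdims=True)
    ea0 = np.exp(a0)
    z0 = ea0.sum(axis=1, keepdims=True)
    p0 = ea0 / z0
    return (p0 * (np.log(z0) - a0)).sum(axis=1)

logits = np.array([[2.0, 1.0, 0.1], [0.0, 0.0, 0.0]])
ent = categorical_entropy(logits)
print(ent[1])  # uniform logits -> entropy log(3) ~= 1.0986
```

The rearrangement matters because `log p` computed directly from huge logits would overflow in `exp` before the division; subtracting the row max keeps every intermediate finite.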
207
+ @OldAPIStack
208
+ class SlateMultiCategorical(Categorical):
209
+ """MultiCategorical distribution for MultiDiscrete action spaces.
210
+
211
+ The action space must be uniform, meaning all nvec items have the same size, e.g.
212
+ MultiDiscrete([10, 10, 10]), where 10 is the number of candidates to pick from
213
+ and 3 is the slate size (pick 3 out of 10). When picking candidates, no candidate
214
+ must be picked more than once.
215
+ """
216
+
217
+ def __init__(
218
+ self,
219
+ inputs: List[TensorType],
220
+ model: ModelV2 = None,
221
+ temperature: float = 1.0,
222
+ action_space: Optional[gym.spaces.MultiDiscrete] = None,
223
+ all_slates=None,
224
+ ):
225
+ assert temperature > 0.0, "Categorical `temperature` must be > 0.0!"
226
+ # Allow softmax formula w/ temperature != 1.0:
227
+ # Divide inputs by temperature.
228
+ super().__init__(inputs / temperature, model)
229
+ self.action_space = action_space
230
+ # Assert uniformness of the action space (all discrete buckets have the same
231
+ # size).
232
+ assert isinstance(self.action_space, gym.spaces.MultiDiscrete) and all(
233
+ n == self.action_space.nvec[0] for n in self.action_space.nvec
234
+ )
235
+ self.all_slates = all_slates
236
+
237
+ @override(ActionDistribution)
238
+ def deterministic_sample(self) -> TensorType:
239
+ # Get a sample from the underlying Categorical (batch of ints).
240
+ sample = super().deterministic_sample()
241
+ # Use the sampled ints to pick the actual slates.
242
+ return tf.gather(self.all_slates, sample)
243
+
244
+ @override(ActionDistribution)
245
+ def logp(self, x: TensorType) -> TensorType:
246
+ # TODO: Implement.
247
+ return tf.ones_like(self.inputs[:, 0])
248
+
249
+
250
+ @OldAPIStack
251
+ class GumbelSoftmax(TFActionDistribution):
252
+ """GumbelSoftmax distr. (for differentiable sampling in discr. actions
253
+
254
+ The Gumbel Softmax distribution [1] (also known as the Concrete [2]
255
+ distribution) is a close cousin of the relaxed one-hot categorical
256
+ distribution, whose tfp implementation we will use here plus
257
+ adjusted `sample_...` and `log_prob` methods. See discussion at [0].
258
+
259
+ [0] https://stackoverflow.com/questions/56226133/
260
+ soft-actor-critic-with-discrete-action-space
261
+
262
+ [1] Categorical Reparametrization with Gumbel-Softmax (Jang et al, 2017):
263
+ https://arxiv.org/abs/1611.01144
264
+ [2] The Concrete Distribution: A Continuous Relaxation of Discrete Random
265
+ Variables (Maddison et al, 2017) https://arxiv.org/abs/1611.00712
266
+ """
267
+
268
+ def __init__(
269
+ self, inputs: List[TensorType], model: ModelV2 = None, temperature: float = 1.0
270
+ ):
271
+ """Initializes a GumbelSoftmax distribution.
272
+
273
+ Args:
274
+ temperature: Temperature parameter. For low temperatures,
275
+ the expected value approaches a categorical random variable.
276
+ For high temperatures, the expected value approaches a uniform
277
+ distribution.
278
+ """
279
+ assert temperature >= 0.0
280
+ self.dist = tfp.distributions.RelaxedOneHotCategorical(
281
+ temperature=temperature, logits=inputs
282
+ )
283
+ self.probs = tf.nn.softmax(self.dist._distribution.logits)
284
+ super().__init__(inputs, model)
285
+
286
+ @override(ActionDistribution)
287
+ def deterministic_sample(self) -> TensorType:
288
+ # Return the dist object's prob values.
289
+ return self.probs
290
+
291
+ @override(ActionDistribution)
292
+ def logp(self, x: TensorType) -> TensorType:
293
+ # Override since the implementation of tfp.RelaxedOneHotCategorical
294
+ # yields positive values.
295
+ if x.shape != self.dist.logits.shape:
296
+ values = tf.one_hot(
297
+ x, self.dist.logits.shape.as_list()[-1], dtype=tf.float32
298
+ )
299
+ assert values.shape == self.dist.logits.shape, (
300
+ values.shape,
301
+ self.dist.logits.shape,
302
+ )
303
+
304
+ # [0]'s implementation (see line below) seems to be an approximation
305
+ # to the actual Gumbel Softmax density.
306
+ return -tf.reduce_sum(
307
+ -x * tf.nn.log_softmax(self.dist.logits, axis=-1), axis=-1
308
+ )
309
+
310
+ @override(TFActionDistribution)
311
+ def _build_sample_op(self) -> TensorType:
312
+ return self.dist.sample()
313
+
314
+ @staticmethod
315
+ @override(ActionDistribution)
316
+ def required_model_output_shape(
317
+ action_space: gym.Space, model_config: ModelConfigDict
318
+ ) -> Union[int, np.ndarray]:
319
+ return action_space.n
320
+
321
+
322
+ @OldAPIStack
323
+ class DiagGaussian(TFActionDistribution):
324
+ """Action distribution where each vector element is a gaussian.
325
+
326
+ The first half of the input vector defines the gaussian means, and the
327
+ second half the gaussian standard deviations.
328
+ """
329
+
330
+ def __init__(
331
+ self,
332
+ inputs: List[TensorType],
333
+ model: ModelV2,
334
+ *,
335
+ action_space: Optional[gym.spaces.Space] = None
336
+ ):
337
+ mean, log_std = tf.split(inputs, 2, axis=1)
338
+ self.mean = mean
339
+ self.log_std = log_std
340
+ self.std = tf.exp(log_std)
341
+ # Remember to squeeze action samples in case action space is Box(shape)
342
+ self.zero_action_dim = action_space and action_space.shape == ()
343
+ super().__init__(inputs, model)
344
+
345
+ @override(ActionDistribution)
346
+ def deterministic_sample(self) -> TensorType:
347
+ return self.mean
348
+
349
+ @override(ActionDistribution)
350
+ def logp(self, x: TensorType) -> TensorType:
351
+ # Cover case where action space is Box(shape=()).
352
+ if int(tf.shape(x).shape[0]) == 1:
353
+ x = tf.expand_dims(x, axis=1)
354
+ return (
355
+ -0.5
356
+ * tf.reduce_sum(
357
+ tf.math.square((tf.cast(x, tf.float32) - self.mean) / self.std), axis=1
358
+ )
359
+ - 0.5 * np.log(2.0 * np.pi) * tf.cast(tf.shape(x)[1], tf.float32)
360
+ - tf.reduce_sum(self.log_std, axis=1)
361
+ )
362
+
363
+ @override(ActionDistribution)
364
+ def kl(self, other: ActionDistribution) -> TensorType:
365
+ assert isinstance(other, DiagGaussian)
366
+ return tf.reduce_sum(
367
+ other.log_std
368
+ - self.log_std
369
+ + (tf.math.square(self.std) + tf.math.square(self.mean - other.mean))
370
+ / (2.0 * tf.math.square(other.std))
371
+ - 0.5,
372
+ axis=1,
373
+ )
374
+
375
+ @override(ActionDistribution)
376
+ def entropy(self) -> TensorType:
377
+ return tf.reduce_sum(self.log_std + 0.5 * np.log(2.0 * np.pi * np.e), axis=1)
378
+
379
+ @override(TFActionDistribution)
380
+ def _build_sample_op(self) -> TensorType:
381
+ sample = self.mean + self.std * tf.random.normal(tf.shape(self.mean))
382
+ if self.zero_action_dim:
383
+ return tf.squeeze(sample, axis=-1)
384
+ return sample
385
+
386
+ @staticmethod
387
+ @override(ActionDistribution)
388
+ def required_model_output_shape(
389
+ action_space: gym.Space, model_config: ModelConfigDict
390
+ ) -> Union[int, np.ndarray]:
391
+ return np.prod(action_space.shape, dtype=np.int32) * 2
392
+
393
+
394
+ @OldAPIStack
395
+ class SquashedGaussian(TFActionDistribution):
396
+ """A tanh-squashed Gaussian distribution defined by: mean, std, low, high.
397
+
398
+ The distribution will never return low or high exactly, but
399
+ `low`+SMALL_NUMBER or `high`-SMALL_NUMBER respectively.
400
+ """
401
+
402
+ def __init__(
403
+ self,
404
+ inputs: List[TensorType],
405
+ model: ModelV2,
406
+ low: float = -1.0,
407
+ high: float = 1.0,
408
+ ):
409
+ """Parameterizes the distribution via `inputs`.
410
+
411
+ Args:
412
+ low: The lowest possible sampling value
413
+ (excluding this value).
414
+ high: The highest possible sampling value
415
+ (excluding this value).
416
+ """
417
+ assert tfp is not None
418
+ mean, log_std = tf.split(inputs, 2, axis=-1)
419
+ # Clip `scale` values (coming from NN) to reasonable values.
420
+ log_std = tf.clip_by_value(log_std, MIN_LOG_NN_OUTPUT, MAX_LOG_NN_OUTPUT)
421
+        std = tf.exp(log_std)
+        self.distr = tfp.distributions.Normal(loc=mean, scale=std)
+        assert np.all(np.less(low, high))
+        self.low = low
+        self.high = high
+        super().__init__(inputs, model)
+
+    @override(ActionDistribution)
+    def deterministic_sample(self) -> TensorType:
+        mean = self.distr.mean()
+        return self._squash(mean)
+
+    @override(TFActionDistribution)
+    def _build_sample_op(self) -> TensorType:
+        return self._squash(self.distr.sample())
+
+    @override(ActionDistribution)
+    def logp(self, x: TensorType) -> TensorType:
+        # Unsquash values (from [low, high] to (-inf, inf)).
+        unsquashed_values = tf.cast(self._unsquash(x), self.inputs.dtype)
+        # Get log prob of unsquashed values from our Normal.
+        log_prob_gaussian = self.distr.log_prob(unsquashed_values)
+        # Clamp for numerical safety, only then sum up.
+        log_prob_gaussian = tf.clip_by_value(log_prob_gaussian, -100, 100)
+        log_prob_gaussian = tf.reduce_sum(log_prob_gaussian, axis=-1)
+        # Get log-prob for squashed Gaussian.
+        unsquashed_values_tanhd = tf.math.tanh(unsquashed_values)
+        log_prob = log_prob_gaussian - tf.reduce_sum(
+            tf.math.log(1 - unsquashed_values_tanhd**2 + SMALL_NUMBER), axis=-1
+        )
+        return log_prob
+
+    def sample_logp(self):
+        z = self.distr.sample()
+        actions = self._squash(z)
+        return actions, tf.reduce_sum(
+            self.distr.log_prob(z) - tf.math.log(1 - actions * actions + SMALL_NUMBER),
+            axis=-1,
+        )
+
+    @override(ActionDistribution)
+    def entropy(self) -> TensorType:
+        raise ValueError("Entropy not defined for SquashedGaussian!")
+
+    @override(ActionDistribution)
+    def kl(self, other: ActionDistribution) -> TensorType:
+        raise ValueError("KL not defined for SquashedGaussian!")
+
+    def _squash(self, raw_values: TensorType) -> TensorType:
+        # Returned values are within [low, high] (including `low` and `high`).
+        squashed = ((tf.math.tanh(raw_values) + 1.0) / 2.0) * (
+            self.high - self.low
+        ) + self.low
+        return tf.clip_by_value(squashed, self.low, self.high)
+
+    def _unsquash(self, values: TensorType) -> TensorType:
+        normed_values = (values - self.low) / (self.high - self.low) * 2.0 - 1.0
+        # Stabilize input to atanh.
+        save_normed_values = tf.clip_by_value(
+            normed_values, -1.0 + SMALL_NUMBER, 1.0 - SMALL_NUMBER
+        )
+        unsquashed = tf.math.atanh(save_normed_values)
+        return unsquashed
+
+    @staticmethod
+    @override(ActionDistribution)
+    def required_model_output_shape(
+        action_space: gym.Space, model_config: ModelConfigDict
+    ) -> Union[int, np.ndarray]:
+        return np.prod(action_space.shape, dtype=np.int32) * 2
+
+
+@OldAPIStack
+class Beta(TFActionDistribution):
+    """
+    A Beta distribution is defined on the interval [0, 1] and parameterized by
+    shape parameters alpha and beta (also called concentration parameters).
+
+    PDF(x; alpha, beta) = x**(alpha - 1) (1 - x)**(beta - 1) / Z
+        with Z = Gamma(alpha) Gamma(beta) / Gamma(alpha + beta)
+        and Gamma(n) = (n - 1)!
+    """
+
+    def __init__(
+        self,
+        inputs: List[TensorType],
+        model: ModelV2,
+        low: float = 0.0,
+        high: float = 1.0,
+    ):
+        # Stabilize input parameters (possibly coming from a linear layer).
+        inputs = tf.clip_by_value(inputs, log(SMALL_NUMBER), -log(SMALL_NUMBER))
+        inputs = tf.math.log(tf.math.exp(inputs) + 1.0) + 1.0
+        self.low = low
+        self.high = high
+        alpha, beta = tf.split(inputs, 2, axis=-1)
+        # Note: concentration0==beta, concentration1==alpha (!)
+        self.dist = tfp.distributions.Beta(concentration1=alpha, concentration0=beta)
+        super().__init__(inputs, model)
+
+    @override(ActionDistribution)
+    def deterministic_sample(self) -> TensorType:
+        mean = self.dist.mean()
+        return self._squash(mean)
+
+    @override(TFActionDistribution)
+    def _build_sample_op(self) -> TensorType:
+        return self._squash(self.dist.sample())
+
+    @override(ActionDistribution)
+    def logp(self, x: TensorType) -> TensorType:
+        unsquashed_values = self._unsquash(x)
+        return tf.math.reduce_sum(self.dist.log_prob(unsquashed_values), axis=-1)
+
+    def _squash(self, raw_values: TensorType) -> TensorType:
+        return raw_values * (self.high - self.low) + self.low
+
+    def _unsquash(self, values: TensorType) -> TensorType:
+        return (values - self.low) / (self.high - self.low)
+
+    @staticmethod
+    @override(ActionDistribution)
+    def required_model_output_shape(
+        action_space: gym.Space, model_config: ModelConfigDict
+    ) -> Union[int, np.ndarray]:
+        return np.prod(action_space.shape, dtype=np.int32) * 2
+
+
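The PDF stated in the Beta docstring above can be sanity-checked numerically. A minimal stdlib-only sketch (the helper name `beta_log_prob` is illustrative, not RLlib API; `math.lgamma` gives the log of the Gamma function used for the normalizer Z):

```python
import math

def beta_log_prob(x, alpha, beta):
    # log PDF(x; alpha, beta) = (alpha-1)*log(x) + (beta-1)*log(1-x) - log(Z)
    # with Z = Gamma(alpha) * Gamma(beta) / Gamma(alpha + beta).
    log_z = math.lgamma(alpha) + math.lgamma(beta) - math.lgamma(alpha + beta)
    return (alpha - 1.0) * math.log(x) + (beta - 1.0) * math.log(1.0 - x) - log_z

# Beta(1, 1) is the uniform distribution on [0, 1]: density 1 everywhere.
print(math.exp(beta_log_prob(0.5, 1.0, 1.0)))
```

Beta(1, 1) reduces to the uniform density, which makes it a convenient spot-check for the normalizer.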
+@OldAPIStack
+class Deterministic(TFActionDistribution):
+    """Action distribution that returns the input values directly.
+
+    This is similar to DiagGaussian with standard deviation zero (thus only
+    requiring the "mean" values as NN output).
+    """
+
+    @override(ActionDistribution)
+    def deterministic_sample(self) -> TensorType:
+        return self.inputs
+
+    @override(TFActionDistribution)
+    def logp(self, x: TensorType) -> TensorType:
+        return tf.zeros_like(self.inputs)
+
+    @override(TFActionDistribution)
+    def _build_sample_op(self) -> TensorType:
+        return self.inputs
+
+    @staticmethod
+    @override(ActionDistribution)
+    def required_model_output_shape(
+        action_space: gym.Space, model_config: ModelConfigDict
+    ) -> Union[int, np.ndarray]:
+        return np.prod(action_space.shape, dtype=np.int32)
+
+
+@OldAPIStack
+class MultiActionDistribution(TFActionDistribution):
+    """Action distribution that operates on a set of actions.
+
+    Args:
+        inputs (Tensor list): A list of tensors from which to compute samples.
+    """
+
+    def __init__(
+        self, inputs, model, *, child_distributions, input_lens, action_space, **kwargs
+    ):
+        ActionDistribution.__init__(self, inputs, model)
+
+        self.action_space_struct = get_base_struct_from_space(action_space)
+
+        self.input_lens = np.array(input_lens, dtype=np.int32)
+        split_inputs = tf.split(inputs, self.input_lens, axis=1)
+        self.flat_child_distributions = tree.map_structure(
+            lambda dist, input_: dist(input_, model, **kwargs),
+            child_distributions,
+            split_inputs,
+        )
+
+    @override(ActionDistribution)
+    def logp(self, x):
+        # Single tensor input (all merged).
+        if isinstance(x, (tf.Tensor, np.ndarray)):
+            split_indices = []
+            for dist in self.flat_child_distributions:
+                if isinstance(dist, Categorical):
+                    split_indices.append(1)
+                elif (
+                    isinstance(dist, MultiCategorical) and dist.action_space is not None
+                ):
+                    split_indices.append(np.prod(dist.action_space.shape))
+                else:
+                    sample = dist.sample()
+                    # Cover Box(shape=()) case.
+                    if len(sample.shape) == 1:
+                        split_indices.append(1)
+                    else:
+                        split_indices.append(tf.shape(sample)[1])
+            split_x = tf.split(x, split_indices, axis=1)
+        # Structured or flattened (by single action component) input.
+        else:
+            split_x = tree.flatten(x)
+
+        def map_(val, dist):
+            # Remove extra categorical dimension.
+            if isinstance(dist, Categorical):
+                val = tf.cast(
+                    tf.squeeze(val, axis=-1) if len(val.shape) > 1 else val, tf.int32
+                )
+            return dist.logp(val)
+
+        # Remove extra categorical dimension and take the logp of each
+        # component.
+        flat_logps = tree.map_structure(map_, split_x, self.flat_child_distributions)
+
+        return functools.reduce(lambda a, b: a + b, flat_logps)
+
+    @override(ActionDistribution)
+    def kl(self, other):
+        kl_list = [
+            d.kl(o)
+            for d, o in zip(
+                self.flat_child_distributions, other.flat_child_distributions
+            )
+        ]
+        return functools.reduce(lambda a, b: a + b, kl_list)
+
+    @override(ActionDistribution)
+    def entropy(self):
+        entropy_list = [d.entropy() for d in self.flat_child_distributions]
+        return functools.reduce(lambda a, b: a + b, entropy_list)
+
+    @override(ActionDistribution)
+    def sample(self):
+        child_distributions = tree.unflatten_as(
+            self.action_space_struct, self.flat_child_distributions
+        )
+        return tree.map_structure(lambda s: s.sample(), child_distributions)
+
+    @override(ActionDistribution)
+    def deterministic_sample(self):
+        child_distributions = tree.unflatten_as(
+            self.action_space_struct, self.flat_child_distributions
+        )
+        return tree.map_structure(
+            lambda s: s.deterministic_sample(), child_distributions
+        )
+
+    @override(TFActionDistribution)
+    def sampled_action_logp(self):
+        p = self.flat_child_distributions[0].sampled_action_logp()
+        for c in self.flat_child_distributions[1:]:
+            p += c.sampled_action_logp()
+        return p
+
+    @override(ActionDistribution)
+    def required_model_output_shape(self, action_space, model_config):
+        return np.sum(self.input_lens, dtype=np.int32)
+
+
+@OldAPIStack
+class Dirichlet(TFActionDistribution):
+    """Dirichlet distribution for continuous actions that are between
+    [0,1] and sum to 1.
+
+    e.g. actions that represent resource allocation."""
+
+    def __init__(self, inputs: List[TensorType], model: ModelV2):
+        """Input is a tensor of logits. The exponential of the logits is used
+        to parameterize the Dirichlet distribution, as all concentration
+        parameters need to be positive. An arbitrarily small epsilon is added
+        to the concentration parameters to avoid them being zero due to
+        numerical error.
+
+        See issue #4440 for more details.
+        """
+        self.epsilon = 1e-7
+        concentration = tf.exp(inputs) + self.epsilon
+        self.dist = tf1.distributions.Dirichlet(
+            concentration=concentration,
+            validate_args=True,
+            allow_nan_stats=False,
+        )
+        super().__init__(concentration, model)
+
+    @override(ActionDistribution)
+    def deterministic_sample(self) -> TensorType:
+        return tf.nn.softmax(self.dist.concentration)
+
+    @override(ActionDistribution)
+    def logp(self, x: TensorType) -> TensorType:
+        # The support of the Dirichlet are positive real numbers. x is already
+        # an array of positive numbers, but we clip to avoid zeros due to
+        # numerical errors.
+        x = tf.maximum(x, self.epsilon)
+        x = x / tf.reduce_sum(x, axis=-1, keepdims=True)
+        return self.dist.log_prob(x)
+
+    @override(ActionDistribution)
+    def entropy(self) -> TensorType:
+        return self.dist.entropy()
+
+    @override(ActionDistribution)
+    def kl(self, other: ActionDistribution) -> TensorType:
+        return self.dist.kl_divergence(other.dist)
+
+    @override(TFActionDistribution)
+    def _build_sample_op(self) -> TensorType:
+        return self.dist.sample()
+
+    @staticmethod
+    @override(ActionDistribution)
+    def required_model_output_shape(
+        action_space: gym.Space, model_config: ModelConfigDict
+    ) -> Union[int, np.ndarray]:
+        return np.prod(action_space.shape, dtype=np.int32)
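The `_squash`/`_unsquash` pair in SquashedGaussian above is the heart of that class: tanh maps unbounded Gaussian samples into `[low, high]`, and atanh (on boundary-clipped inputs) inverts the map. A framework-free sketch of that round trip, assuming a scalar action and a stand-in for RLlib's `SMALL_NUMBER` stability constant:

```python
import math

SMALL_NUMBER = 1e-6  # stand-in for RLlib's stability constant

def squash(raw, low, high):
    # tanh maps (-inf, inf) -> (-1, 1); rescale into [low, high].
    return ((math.tanh(raw) + 1.0) / 2.0) * (high - low) + low

def unsquash(value, low, high):
    # Normalize back to (-1, 1), clip away from the boundaries, then atanh.
    normed = (value - low) / (high - low) * 2.0 - 1.0
    normed = max(-1.0 + SMALL_NUMBER, min(1.0 - SMALL_NUMBER, normed))
    return math.atanh(normed)

x = 0.7
print(abs(unsquash(squash(x, -2.0, 2.0), -2.0, 2.0) - x) < 1e-6)
```

The boundary clipping mirrors the `tf.clip_by_value` in `_unsquash`: without it, atanh diverges at exactly ±1.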
minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/tf_distributions.py ADDED
@@ -0,0 +1,552 @@
+ """The main difference between this and the old ActionDistribution is that this one
2
+ has more explicit input args. So that the input format does not have to be guessed from
3
+ the code. This matches the design pattern of torch distribution which developers may
4
+ already be familiar with.
5
+ """
6
+ import gymnasium as gym
7
+ import tree
8
+ import numpy as np
9
+ from typing import Dict, Iterable, List, Optional
10
+ import abc
11
+
12
+
13
+ from ray.rllib.models.distributions import Distribution
14
+ from ray.rllib.utils.annotations import override, DeveloperAPI
15
+ from ray.rllib.utils.framework import try_import_tf, try_import_tfp
16
+ from ray.rllib.utils.typing import TensorType, Union, Tuple
17
+
18
+
19
+ _, tf, _ = try_import_tf()
20
+ tfp = try_import_tfp()
21
+
22
+ # TODO (Kourosh) Write unittest for this class similar to torch distributions.
23
+
24
+
25
+ @DeveloperAPI
26
+ class TfDistribution(Distribution, abc.ABC):
27
+ """Wrapper class for tfp.distributions."""
28
+
29
+ def __init__(self, *args, **kwargs):
30
+ super().__init__()
31
+ self._dist = self._get_tf_distribution(*args, **kwargs)
32
+
33
+ @abc.abstractmethod
34
+ def _get_tf_distribution(self, *args, **kwargs) -> "tfp.distributions.Distribution":
35
+ """Returns the tfp.distributions.Distribution object to use."""
36
+
37
+ @override(Distribution)
38
+ def logp(self, value: TensorType, **kwargs) -> TensorType:
39
+ return self._dist.log_prob(value, **kwargs)
40
+
41
+ @override(Distribution)
42
+ def entropy(self) -> TensorType:
43
+ return self._dist.entropy()
44
+
45
+ @override(Distribution)
46
+ def kl(self, other: "Distribution") -> TensorType:
47
+ return self._dist.kl_divergence(other._dist)
48
+
49
+ @override(Distribution)
50
+ def sample(
51
+ self, *, sample_shape=()
52
+ ) -> Union[TensorType, Tuple[TensorType, TensorType]]:
53
+ sample = self._dist.sample(sample_shape)
54
+ return sample
55
+
56
+ @override(Distribution)
57
+ def rsample(
58
+ self, *, sample_shape=()
59
+ ) -> Union[TensorType, Tuple[TensorType, TensorType]]:
60
+ raise NotImplementedError
61
+
62
+
63
+@DeveloperAPI
+class TfCategorical(TfDistribution):
+    """Wrapper class for Categorical distribution.
+
+    Creates a categorical distribution parameterized by either :attr:`probs` or
+    :attr:`logits` (but not both).
+
+    Samples are integers from :math:`\{0, \ldots, K-1\}` where `K` is
+    ``probs.size(-1)``.
+
+    If `probs` is 1-dimensional with length-`K`, each element is the relative
+    probability of sampling the class at that index.
+
+    If `probs` is N-dimensional, the first N-1 dimensions are treated as a batch of
+    relative probability vectors.
+
+    .. testcode::
+        :skipif: True
+
+        m = TfCategorical([ 0.25, 0.25, 0.25, 0.25 ])
+        m.sample(sample_shape=(2,))  # equal probability of 0, 1, 2, 3
+
+    .. testoutput::
+
+        tf.Tensor([2 3], shape=(2,), dtype=int32)
+
+    Args:
+        probs: The probabilities of each event.
+        logits: Event log probabilities (unnormalized).
+        temperature: In case of using logits, this parameter can be used to determine
+            the sharpness of the distribution, i.e.
+            ``probs = softmax(logits / temperature)``. The temperature must be strictly
+            positive. A low value (e.g. 1e-10) will result in argmax sampling while a
+            larger value will result in uniform sampling.
+    """
+
+    @override(TfDistribution)
+    def __init__(
+        self,
+        probs: "tf.Tensor" = None,
+        logits: "tf.Tensor" = None,
+    ) -> None:
+        # We assert this here because to_deterministic makes this assumption.
+        assert (probs is None) != (
+            logits is None
+        ), "Exactly one out of `probs` and `logits` must be set!"
+
+        self.probs = probs
+        self.logits = logits
+        self.one_hot = tfp.distributions.OneHotCategorical(logits=logits, probs=probs)
+        super().__init__(logits=logits, probs=probs)
+
+    @override(Distribution)
+    def logp(self, value: TensorType, **kwargs) -> TensorType:
+        # This prevents an error in which float values at the boundaries of the range
+        # of the distribution are passed to this function.
+        return -tf.nn.sparse_softmax_cross_entropy_with_logits(
+            logits=self.logits if self.logits is not None else tf.math.log(self.probs),
+            labels=tf.cast(value, tf.int32),
+        )
+
+    @override(TfDistribution)
+    def _get_tf_distribution(
+        self,
+        probs: "tf.Tensor" = None,
+        logits: "tf.Tensor" = None,
+    ) -> "tfp.distributions.Distribution":
+        return tfp.distributions.Categorical(probs=probs, logits=logits)
+
+    @staticmethod
+    @override(Distribution)
+    def required_input_dim(space: gym.Space, **kwargs) -> int:
+        assert isinstance(space, gym.spaces.Discrete)
+        return int(space.n)
+
+    @override(Distribution)
+    def rsample(self, sample_shape=()):
+        # Straight-through gradients: sample a one-hot, but let gradients flow
+        # through the (differentiable) probs.
+        one_hot_sample = self.one_hot.sample(sample_shape)
+        return tf.stop_gradient(one_hot_sample - self.probs) + self.probs
+
+    @classmethod
+    @override(Distribution)
+    def from_logits(cls, logits: TensorType, **kwargs) -> "TfCategorical":
+        return TfCategorical(logits=logits, **kwargs)
+
+    def to_deterministic(self) -> "TfDeterministic":
+        if self.probs is not None:
+            probs_or_logits = self.probs
+        else:
+            probs_or_logits = self.logits
+
+        return TfDeterministic(loc=tf.math.argmax(probs_or_logits, axis=-1))
+
+
+@DeveloperAPI
+class TfDiagGaussian(TfDistribution):
+    """Wrapper class for Normal distribution.
+
+    Creates a normal distribution parameterized by :attr:`loc` and :attr:`scale`. In
+    case of a multi-dimensional distribution, the variance is assumed to be diagonal.
+
+    .. testcode::
+        :skipif: True
+
+        m = TfDiagGaussian(loc=[0.0, 0.0], scale=[1.0, 1.0])
+        m.sample(sample_shape=(2,))  # 2d normal dist with loc=0 and scale=1
+
+    .. testoutput::
+
+        tensor([[ 0.1046, -0.6120], [ 0.234, 0.556]])
+
+    .. testcode::
+        :skipif: True
+
+        # scale is None
+        m = TfDiagGaussian(loc=[0.0, 1.0])
+        m.sample(sample_shape=(2,))  # normally distributed with loc=0 and scale=1
+
+    .. testoutput::
+
+        tensor([0.1046, 0.6120])
+
+
+    Args:
+        loc: mean of the distribution (often referred to as mu). If scale is None, the
+            second half of `loc` will be used as the log of scale.
+        scale: standard deviation of the distribution (often referred to as sigma).
+            Has to be positive.
+    """
+
+    @override(TfDistribution)
+    def __init__(
+        self,
+        loc: Union[float, TensorType],
+        scale: Optional[Union[float, TensorType]] = None,
+    ):
+        self.loc = loc
+        super().__init__(loc=loc, scale=scale)
+
+    @override(TfDistribution)
+    def _get_tf_distribution(self, loc, scale) -> "tfp.distributions.Distribution":
+        return tfp.distributions.Normal(loc=loc, scale=scale)
+
+    @override(TfDistribution)
+    def logp(self, value: TensorType) -> TensorType:
+        return tf.math.reduce_sum(super().logp(value), axis=-1)
+
+    @override(TfDistribution)
+    def entropy(self) -> TensorType:
+        return tf.math.reduce_sum(super().entropy(), axis=-1)
+
+    @override(TfDistribution)
+    def kl(self, other: "TfDistribution") -> TensorType:
+        return tf.math.reduce_sum(super().kl(other), axis=-1)
+
+    @staticmethod
+    @override(Distribution)
+    def required_input_dim(space: gym.Space, **kwargs) -> int:
+        assert isinstance(space, gym.spaces.Box)
+        return int(np.prod(space.shape, dtype=np.int32) * 2)
+
+    @override(Distribution)
+    def rsample(self, sample_shape=()):
+        # Reparameterization trick: loc + eps * scale with eps ~ N(0, 1).
+        eps = tf.random.normal(sample_shape)
+        return self._dist.loc + eps * self._dist.scale
+
+    @classmethod
+    @override(Distribution)
+    def from_logits(cls, logits: TensorType, **kwargs) -> "TfDiagGaussian":
+        loc, log_std = tf.split(logits, num_or_size_splits=2, axis=-1)
+        scale = tf.math.exp(log_std)
+        return TfDiagGaussian(loc=loc, scale=scale)
+
+    def to_deterministic(self) -> "TfDeterministic":
+        return TfDeterministic(loc=self.loc)
+
+
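TfDiagGaussian's `rsample` above uses the reparameterization trick, and its `logp` sums per-dimension Normal log-densities (a diagonal covariance). A stdlib-only sketch of both ideas (helper names are illustrative, not RLlib API):

```python
import math
import random

def diag_gaussian_rsample(loc, scale):
    # Reparameterization trick: sample = loc + eps * scale with eps ~ N(0, 1),
    # so a gradient could flow through loc and scale.
    return [m + random.gauss(0.0, 1.0) * s for m, s in zip(loc, scale)]

def diag_gaussian_logp(value, loc, scale):
    # Sum of independent per-dimension Normal log-densities.
    return sum(
        -0.5 * ((v - m) / s) ** 2 - math.log(s) - 0.5 * math.log(2.0 * math.pi)
        for v, m, s in zip(value, loc, scale)
    )

print(diag_gaussian_logp([0.0], [0.0], [1.0]))  # log density of N(0,1) at 0
```

Summing per-dimension log-densities is exactly what the `tf.math.reduce_sum(..., axis=-1)` calls in `logp`, `entropy`, and `kl` do for the diagonal case.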
+@DeveloperAPI
+class TfDeterministic(Distribution):
+    """The distribution that returns the input values directly.
+
+    This is similar to DiagGaussian with standard deviation zero (thus only
+    requiring the "mean" values as NN output).
+
+    Note: entropy and kl are not supported (they raise a RuntimeError), and
+    logp always returns zeros.
+
+    .. testcode::
+        :skipif: True
+
+        m = TfDeterministic(loc=tf.constant([0.0, 0.0]))
+        m.sample(sample_shape=(2,))
+
+    .. testoutput::
+
+        Tensor([[ 0.0, 0.0], [ 0.0, 0.0]])
+
+    Args:
+        loc: the deterministic value to return
+    """
+
+    @override(Distribution)
+    def __init__(self, loc: "tf.Tensor") -> None:
+        super().__init__()
+        self.loc = loc
+
+    @override(Distribution)
+    def sample(
+        self,
+        *,
+        sample_shape: Tuple[int, ...] = (),
+        **kwargs,
+    ) -> Union[TensorType, Tuple[TensorType, TensorType]]:
+        shape = sample_shape + self.loc.shape
+        return tf.ones(shape, dtype=self.loc.dtype) * self.loc
+
+    @override(Distribution)
+    def rsample(
+        self,
+        *,
+        sample_shape: Tuple[int, ...] = None,
+        **kwargs,
+    ) -> Union[TensorType, Tuple[TensorType, TensorType]]:
+        raise NotImplementedError
+
+    @override(Distribution)
+    def logp(self, value: TensorType, **kwargs) -> TensorType:
+        return tf.zeros_like(self.loc)
+
+    @override(Distribution)
+    def entropy(self, **kwargs) -> TensorType:
+        raise RuntimeError(f"`entropy()` not supported for {self.__class__.__name__}.")
+
+    @override(Distribution)
+    def kl(self, other: "Distribution", **kwargs) -> TensorType:
+        raise RuntimeError(f"`kl()` not supported for {self.__class__.__name__}.")
+
+    @staticmethod
+    @override(Distribution)
+    def required_input_dim(space: gym.Space, **kwargs) -> int:
+        assert isinstance(space, gym.spaces.Box)
+        return int(np.prod(space.shape, dtype=np.int32))
+
+    @classmethod
+    @override(Distribution)
+    def from_logits(cls, logits: TensorType, **kwargs) -> "TfDeterministic":
+        return TfDeterministic(loc=logits)
+
+    def to_deterministic(self) -> "TfDeterministic":
+        return self
+
+
+@DeveloperAPI
+class TfMultiCategorical(Distribution):
+    """MultiCategorical distribution for MultiDiscrete action spaces."""
+
+    @override(Distribution)
+    def __init__(
+        self,
+        categoricals: List[TfCategorical],
+    ):
+        super().__init__()
+        self._cats = categoricals
+
+    @override(Distribution)
+    def sample(self) -> TensorType:
+        arr = [cat.sample() for cat in self._cats]
+        sample_ = tf.stack(arr, axis=-1)
+        return sample_
+
+    @override(Distribution)
+    def rsample(self, sample_shape=()):
+        arr = [cat.rsample() for cat in self._cats]
+        sample_ = tf.stack(arr, axis=-1)
+        return sample_
+
+    @override(Distribution)
+    def logp(self, value: tf.Tensor) -> TensorType:
+        actions = tf.unstack(tf.cast(value, tf.int32), axis=-1)
+        logps = tf.stack([cat.logp(act) for cat, act in zip(self._cats, actions)])
+        return tf.reduce_sum(logps, axis=0)
+
+    @override(Distribution)
+    def entropy(self) -> TensorType:
+        return tf.reduce_sum(
+            tf.stack([cat.entropy() for cat in self._cats], axis=-1), axis=-1
+        )
+
+    @override(Distribution)
+    def kl(self, other: Distribution) -> TensorType:
+        kls = tf.stack(
+            [cat.kl(oth_cat) for cat, oth_cat in zip(self._cats, other._cats)], axis=-1
+        )
+        return tf.reduce_sum(kls, axis=-1)
+
+    @staticmethod
+    @override(Distribution)
+    def required_input_dim(space: gym.Space, **kwargs) -> int:
+        assert isinstance(space, gym.spaces.MultiDiscrete)
+        return int(np.sum(space.nvec))
+
+    @classmethod
+    @override(Distribution)
+    def from_logits(
+        cls,
+        logits: tf.Tensor,
+        input_lens: List[int],
+        **kwargs,
+    ) -> "TfMultiCategorical":
+        """Creates this Distribution from logits (and additional arguments).
+
+        If you wish to create this distribution from logits only, please refer to
+        `Distribution.get_partial_dist_cls()`.
+
+        Args:
+            logits: The tensor containing logits to be separated by `input_lens`.
+            input_lens: A list of integers that indicate the length of the logits
+                vectors to be passed into each child distribution.
+            **kwargs: Forward compatibility kwargs.
+        """
+        categoricals = [
+            TfCategorical(logits=logits)
+            for logits in tf.split(logits, input_lens, axis=-1)
+        ]
+
+        return TfMultiCategorical(categoricals=categoricals)
+
+    def to_deterministic(self) -> "TfMultiDistribution":
+        return TfMultiDistribution([cat.to_deterministic() for cat in self._cats])
+
+
+@DeveloperAPI
+class TfMultiDistribution(Distribution):
+    """Action distribution that operates on multiple, possibly nested actions."""
+
+    def __init__(
+        self,
+        child_distribution_struct: Union[Tuple, List, Dict],
+    ):
+        """Initializes a TfMultiDistribution object.
+
+        Args:
+            child_distribution_struct: Any struct that contains the child
+                distribution instances to use.
+        """
+        super().__init__()
+        self._original_struct = child_distribution_struct
+        self._flat_child_distributions = tree.flatten(child_distribution_struct)
+
+    @override(Distribution)
+    def rsample(
+        self,
+        *,
+        sample_shape: Tuple[int, ...] = None,
+        **kwargs,
+    ) -> Union[TensorType, Tuple[TensorType, TensorType]]:
+        rsamples = []
+        for dist in self._flat_child_distributions:
+            rsample = dist.rsample(sample_shape=sample_shape, **kwargs)
+            rsamples.append(rsample)
+
+        rsamples = tree.unflatten_as(self._original_struct, rsamples)
+        return rsamples
+
+    @override(Distribution)
+    def logp(self, value):
+        # Single tensor input (all merged).
+        if isinstance(value, (tf.Tensor, np.ndarray)):
+            split_indices = []
+            for dist in self._flat_child_distributions:
+                if isinstance(dist, TfCategorical):
+                    split_indices.append(1)
+                elif isinstance(dist, TfMultiCategorical):
+                    split_indices.append(len(dist._cats))
+                else:
+                    sample = dist.sample()
+                    # Cover Box(shape=()) case.
+                    if len(sample.shape) == 1:
+                        split_indices.append(1)
+                    else:
+                        split_indices.append(tf.shape(sample)[1])
+            split_value = tf.split(value, split_indices, axis=1)
+        # Structured or flattened (by single action component) input.
+        else:
+            split_value = tree.flatten(value)
+
+        def map_(val, dist):
+            # Remove extra dimension if present.
+            if (
+                isinstance(dist, TfCategorical)
+                and len(val.shape) > 1
+                and val.shape[-1] == 1
+            ):
+                val = tf.squeeze(val, axis=-1)
+
+            return dist.logp(val)
+
+        # Remove extra categorical dimension and take the logp of each
+        # component.
+        flat_logps = tree.map_structure(
+            map_, split_value, self._flat_child_distributions
+        )
+
+        return sum(flat_logps)
+
+    @override(Distribution)
+    def kl(self, other):
+        kl_list = [
+            d.kl(o)
+            for d, o in zip(
+                self._flat_child_distributions, other._flat_child_distributions
+            )
+        ]
+        return sum(kl_list)
+
+    @override(Distribution)
+    def entropy(self):
+        entropy_list = [d.entropy() for d in self._flat_child_distributions]
+        return sum(entropy_list)
+
+    @override(Distribution)
+    def sample(self):
+        child_distributions_struct = tree.unflatten_as(
+            self._original_struct, self._flat_child_distributions
+        )
+        return tree.map_structure(lambda s: s.sample(), child_distributions_struct)
+
+    @staticmethod
+    @override(Distribution)
+    def required_input_dim(space: gym.Space, input_lens: List[int], **kwargs) -> int:
+        return sum(input_lens)
+
+    @classmethod
+    @override(Distribution)
+    def from_logits(
+        cls,
+        logits: tf.Tensor,
+        child_distribution_cls_struct: Union[Dict, Iterable],
+        input_lens: Union[Dict, List[int]],
+        space: gym.Space,
+        **kwargs,
+    ) -> "TfMultiDistribution":
+        """Creates this Distribution from logits (and additional arguments).
+
+        If you wish to create this distribution from logits only, please refer to
+        `Distribution.get_partial_dist_cls()`.
+
+        Args:
+            logits: The tensor containing logits to be separated by `input_lens`.
+            child_distribution_cls_struct: A struct of Distribution classes that can
+                be instantiated from the given logits.
+            input_lens: A list or dict of integers that indicate the length of each
+                logit. If this is given as a dict, the structure should match the
+                structure of child_distribution_cls_struct.
+            space: The possibly nested output space.
+            **kwargs: Forward compatibility kwargs.
+
+        Returns:
+            A TfMultiDistribution object.
+        """
+        logit_lens = tree.flatten(input_lens)
+        child_distribution_cls_list = tree.flatten(child_distribution_cls_struct)
+        split_logits = tf.split(logits, logit_lens, axis=1)
+
+        child_distribution_list = tree.map_structure(
+            lambda dist, input_: dist.from_logits(input_),
+            child_distribution_cls_list,
+            list(split_logits),
+        )
+
+        child_distribution_struct = tree.unflatten_as(
+            child_distribution_cls_struct, child_distribution_list
+        )
+
+        return TfMultiDistribution(
+            child_distribution_struct=child_distribution_struct,
+        )
+
+    def to_deterministic(self) -> "TfMultiDistribution":
+        flat_deterministic_dists = [
+            dist.to_deterministic() for dist in self._flat_child_distributions
+        ]
+        deterministic_dists = tree.unflatten_as(
+            self._original_struct, flat_deterministic_dists
+        )
+        return TfMultiDistribution(deterministic_dists)
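TfCategorical.logp earlier in this file returns the negative sparse softmax cross-entropy, which is exactly the log of the softmax probability at the given index. A stdlib-only sketch of that identity, using the log-sum-exp trick for numerical stability (`categorical_logp` is a hypothetical helper, not RLlib API):

```python
import math

def categorical_logp(logits, label):
    # logp(label) = log softmax(logits)[label]
    #             = logits[label] - log(sum(exp(logits)))
    # Subtracting the max before exponentiating keeps the sum from overflowing.
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return logits[label] - log_z

logits = [2.0, 1.0, 0.1]
print(categorical_logp(logits, 0))
```

Exponentiating the log-probs of all classes should recover a distribution that sums to one, which is a quick way to check the normalizer.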
minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/tf_modelv2.py ADDED
@@ -0,0 +1,142 @@
+import contextlib
+import gymnasium as gym
+import re
+from typing import Dict, List, Union
+
+from ray.util import log_once
+from ray.rllib.models.modelv2 import ModelV2
+from ray.rllib.utils.annotations import OldAPIStack, override
+from ray.rllib.utils.deprecation import deprecation_warning
+from ray.rllib.utils.framework import try_import_tf
+from ray.rllib.utils.typing import ModelConfigDict, TensorType
+
+tf1, tf, tfv = try_import_tf()
+
+
+@OldAPIStack
+class TFModelV2(ModelV2):
+    """TF version of ModelV2, which should contain a tf keras Model.
+
+    Note that this class by itself is not a valid model unless you
+    implement forward() in a subclass."""
+
+    def __init__(
+        self,
+        obs_space: gym.spaces.Space,
+        action_space: gym.spaces.Space,
+        num_outputs: int,
+        model_config: ModelConfigDict,
+        name: str,
+    ):
+        """Initializes a TFModelV2 instance.
+
+        Here is an example implementation for a subclass
+        ``MyModelClass(TFModelV2)``::
+
+            def __init__(self, *args, **kwargs):
+                super(MyModelClass, self).__init__(*args, **kwargs)
+                input_layer = tf.keras.layers.Input(...)
+                hidden_layer = tf.keras.layers.Dense(...)(input_layer)
+                output_layer = tf.keras.layers.Dense(...)(hidden_layer)
+                value_layer = tf.keras.layers.Dense(...)(hidden_layer)
+                self.base_model = tf.keras.Model(
+                    input_layer, [output_layer, value_layer])
+        """
+        super().__init__(
+            obs_space, action_space, num_outputs, model_config, name, framework="tf"
+        )
+
+        # Deprecated: TFModelV2 models now automatically track their variables.
+        self.var_list = []
+
+        if tf1.executing_eagerly():
+            self.graph = None
+        else:
+            self.graph = tf1.get_default_graph()
+
+    def context(self) -> contextlib.AbstractContextManager:
+        """Returns a contextmanager for the current TF graph."""
+        if self.graph:
+            return self.graph.as_default()
+        else:
+            return ModelV2.context(self)
+
+    def update_ops(self) -> List[TensorType]:
+        """Return the list of update ops for this model.
+
+        For example, this should include any BatchNorm update ops."""
+        return []
+
+    def register_variables(self, variables: List[TensorType]) -> None:
+        """Register the given list of variables with this model."""
+        if log_once("deprecated_tfmodelv2_register_variables"):
+            deprecation_warning(old="TFModelV2.register_variables", error=False)
+        self.var_list.extend(variables)
+
+    @override(ModelV2)
+    def variables(
+        self, as_dict: bool = False
+    ) -> Union[List[TensorType], Dict[str, TensorType]]:
+        if as_dict:
+            # Old way using `register_variables`.
+            if self.var_list:
+                return {v.name: v for v in self.var_list}
+            # New way: Automatically determine the var tree.
+            else:
+                return self._find_sub_modules("", self.__dict__)
+
+        # Old way using `register_variables`.
+        if self.var_list:
+            return list(self.var_list)
+        # New way: Automatically determine the var tree.
+        else:
+            return list(self.variables(as_dict=True).values())
+
+    @override(ModelV2)
+    def trainable_variables(
+        self, as_dict: bool = False
+    ) -> Union[List[TensorType], Dict[str, TensorType]]:
+        if as_dict:
+            return {
+                k: v for k, v in self.variables(as_dict=True).items() if v.trainable
+            }
+        return [v for v in self.variables() if v.trainable]
+
+    @staticmethod
+    def _find_sub_modules(current_key, struct):
+        # Keras Model: key=k + "." + var-name (replace '/' by '.').
+        if isinstance(struct, tf.keras.models.Model) or isinstance(struct, tf.Module):
+            ret = {}
+            for var in struct.variables:
+                name = re.sub("/", ".", var.name)
+                key = current_key + "." + name
+                ret[key] = var
+            return ret
+        # Other TFModelV2: Include its vars into ours.
+        elif isinstance(struct, TFModelV2):
+            return {
+                current_key + "." + key: var
+                for key, var in struct.variables(as_dict=True).items()
+            }
+        # tf.Variable
+        elif isinstance(struct, tf.Variable):
+            return {current_key: struct}
+        # List/Tuple.
+        elif isinstance(struct, (tuple, list)):
+            ret = {}
+            for i, value in enumerate(struct):
128
+ sub_vars = TFModelV2._find_sub_modules(
129
+ current_key + "_{}".format(i), value
130
+ )
131
+ ret.update(sub_vars)
132
+ return ret
133
+ # Dict.
134
+ elif isinstance(struct, dict):
135
+ if current_key:
136
+ current_key += "_"
137
+ ret = {}
138
+ for key, value in struct.items():
139
+ sub_vars = TFModelV2._find_sub_modules(current_key + str(key), value)
140
+ ret.update(sub_vars)
141
+ return ret
142
+ return {}
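The recursion in `_find_sub_modules` flattens the model's attribute tree into a single dict, encoding the path in the key: list/tuple indices append `_{i}`, dict keys join with `_`, and variables become leaves. A minimal, TF-free sketch of that key-naming scheme (using a hypothetical `Var` class as a stand-in for `tf.Variable`; the Keras-Model and nested-TFModelV2 branches are omitted):

```python
class Var:
    """Hypothetical stand-in for a tf.Variable leaf."""

    def __init__(self, name):
        self.name = name


def find_vars(current_key, struct):
    # Leaf: return it under the accumulated key.
    if isinstance(struct, Var):
        return {current_key: struct}
    # List/tuple: append "_{index}" to the key.
    if isinstance(struct, (list, tuple)):
        ret = {}
        for i, value in enumerate(struct):
            ret.update(find_vars("{}_{}".format(current_key, i), value))
        return ret
    # Dict: join keys with "_" (no leading "_" at the root).
    if isinstance(struct, dict):
        if current_key:
            current_key += "_"
        ret = {}
        for key, value in struct.items():
            ret.update(find_vars(current_key + str(key), value))
        return ret
    # Anything else holds no variables.
    return {}


tree = {"policy": [Var("w"), Var("b")], "vf": {"out": Var("w")}}
flat = find_vars("", tree)
# Keys: "policy_0", "policy_1", "vf_out"
```

This is why `variables(as_dict=True)` can return a stable name-to-variable mapping even when no variables were ever registered explicitly.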
minigpt2/lib/python3.10/site-packages/ray/rllib/models/tf/visionnet.py ADDED
@@ -0,0 +1,264 @@
+ import gymnasium as gym
+ from typing import Dict, List, Tuple
+
+ from ray.rllib.models.tf.tf_modelv2 import TFModelV2
+ from ray.rllib.models.tf.misc import normc_initializer
+ from ray.rllib.models.utils import get_activation_fn, get_filter_config
+ from ray.rllib.utils.annotations import OldAPIStack
+ from ray.rllib.utils.framework import try_import_tf
+ from ray.rllib.utils.typing import ModelConfigDict, TensorType
+
+ tf1, tf, tfv = try_import_tf()
+
+
+ @OldAPIStack
+ class VisionNetwork(TFModelV2):
+     """Generic vision network implemented in ModelV2 API.
+
+     An additional post-conv fully connected stack can be added and configured
+     via the config keys:
+     `post_fcnet_hiddens`: Dense layer sizes after the Conv2D stack.
+     `post_fcnet_activation`: Activation function to use for this FC stack.
+     """
+
+     def __init__(
+         self,
+         obs_space: gym.spaces.Space,
+         action_space: gym.spaces.Space,
+         num_outputs: int,
+         model_config: ModelConfigDict,
+         name: str,
+     ):
+         if not model_config.get("conv_filters"):
+             model_config["conv_filters"] = get_filter_config(obs_space.shape)
+
+         super(VisionNetwork, self).__init__(
+             obs_space, action_space, num_outputs, model_config, name
+         )
+
+         activation = get_activation_fn(
+             self.model_config.get("conv_activation"), framework="tf"
+         )
+         filters = self.model_config["conv_filters"]
+         assert len(filters) > 0, "Must provide at least 1 entry in `conv_filters`!"
+
+         # Post FC net config.
+         post_fcnet_hiddens = model_config.get("post_fcnet_hiddens", [])
+         post_fcnet_activation = get_activation_fn(
+             model_config.get("post_fcnet_activation"), framework="tf"
+         )
+
+         no_final_linear = self.model_config.get("no_final_linear")
+         vf_share_layers = self.model_config.get("vf_share_layers")
+
+         input_shape = obs_space.shape
+         self.data_format = "channels_last"
+
+         inputs = tf.keras.layers.Input(shape=input_shape, name="observations")
+         last_layer = inputs
+         # Whether the last layer is the output of a Flatten layer (rather
+         # than an n x (1,1) Conv2D).
+         self.last_layer_is_flattened = False
+
+         # Build the action layers.
+         for i, (out_size, kernel, stride) in enumerate(filters[:-1], 1):
+             last_layer = tf.keras.layers.Conv2D(
+                 out_size,
+                 kernel,
+                 strides=stride
+                 if isinstance(stride, (list, tuple))
+                 else (stride, stride),
+                 activation=activation,
+                 padding="same",
+                 data_format="channels_last",
+                 name="conv{}".format(i),
+             )(last_layer)
+
+         out_size, kernel, stride = filters[-1]
+
+         # No final linear: Last layer has activation function and exits with
+         # num_outputs nodes (this could be a 1x1 conv or a FC layer, depending
+         # on `post_fcnet_...` settings).
+         if no_final_linear and num_outputs:
+             last_layer = tf.keras.layers.Conv2D(
+                 out_size if post_fcnet_hiddens else num_outputs,
+                 kernel,
+                 strides=stride
+                 if isinstance(stride, (list, tuple))
+                 else (stride, stride),
+                 activation=activation,
+                 padding="valid",
+                 data_format="channels_last",
+                 name="conv_out",
+             )(last_layer)
+             # Add (optional) post-fc-stack after last Conv2D layer.
+             layer_sizes = post_fcnet_hiddens[:-1] + (
+                 [num_outputs] if post_fcnet_hiddens else []
+             )
+             feature_out = last_layer
+
+             for i, out_size in enumerate(layer_sizes):
+                 feature_out = last_layer
+                 last_layer = tf.keras.layers.Dense(
+                     out_size,
+                     name="post_fcnet_{}".format(i),
+                     activation=post_fcnet_activation,
+                     kernel_initializer=normc_initializer(1.0),
+                 )(last_layer)
+
+         # Finish network normally (w/o overriding last layer size with
+         # `num_outputs`), then add another linear one of size `num_outputs`.
+         else:
+             last_layer = tf.keras.layers.Conv2D(
+                 out_size,
+                 kernel,
+                 strides=stride
+                 if isinstance(stride, (list, tuple))
+                 else (stride, stride),
+                 activation=activation,
+                 padding="valid",
+                 data_format="channels_last",
+                 name="conv{}".format(len(filters)),
+             )(last_layer)
+
+             # num_outputs defined. Use that to create an exact
+             # `num_output`-sized (1,1)-Conv2D.
+             if num_outputs:
+                 if post_fcnet_hiddens:
+                     last_cnn = last_layer = tf.keras.layers.Conv2D(
+                         post_fcnet_hiddens[0],
+                         [1, 1],
+                         activation=post_fcnet_activation,
+                         padding="same",
+                         data_format="channels_last",
+                         name="conv_out",
+                     )(last_layer)
+                     # Add (optional) post-fc-stack after last Conv2D layer.
+                     for i, out_size in enumerate(
+                         post_fcnet_hiddens[1:] + [num_outputs]
+                     ):
+                         feature_out = last_layer
+                         last_layer = tf.keras.layers.Dense(
+                             out_size,
+                             name="post_fcnet_{}".format(i + 1),
+                             activation=post_fcnet_activation
+                             if i < len(post_fcnet_hiddens) - 1
+                             else None,
+                             kernel_initializer=normc_initializer(1.0),
+                         )(last_layer)
+                 else:
+                     feature_out = last_layer
+                     last_cnn = last_layer = tf.keras.layers.Conv2D(
+                         num_outputs,
+                         [1, 1],
+                         activation=None,
+                         padding="same",
+                         data_format="channels_last",
+                         name="conv_out",
+                     )(last_layer)
+
+                 if last_cnn.shape[1] != 1 or last_cnn.shape[2] != 1:
+                     raise ValueError(
+                         "Given `conv_filters` ({}) do not result in a [B, 1, "
+                         "1, {} (`num_outputs`)] shape (but in {})! Please "
+                         "adjust your Conv2D stack such that the dims 1 and 2 "
+                         "are both 1.".format(
+                             self.model_config["conv_filters"],
+                             self.num_outputs,
+                             list(last_cnn.shape),
+                         )
+                     )
+
+             # num_outputs not known -> Flatten, then set self.num_outputs
+             # to the resulting number of nodes.
+             else:
+                 self.last_layer_is_flattened = True
+                 last_layer = tf.keras.layers.Flatten(data_format="channels_last")(
+                     last_layer
+                 )
+
+                 # Add (optional) post-fc-stack after last Conv2D layer.
+                 for i, out_size in enumerate(post_fcnet_hiddens):
+                     last_layer = tf.keras.layers.Dense(
+                         out_size,
+                         name="post_fcnet_{}".format(i),
+                         activation=post_fcnet_activation,
+                         kernel_initializer=normc_initializer(1.0),
+                     )(last_layer)
+                 feature_out = last_layer
+                 self.num_outputs = last_layer.shape[1]
+         logits_out = last_layer
+
+         # Build the value layers.
+         if vf_share_layers:
+             if not self.last_layer_is_flattened:
+                 feature_out = tf.keras.layers.Lambda(
+                     lambda x: tf.squeeze(x, axis=[1, 2])
+                 )(feature_out)
+             value_out = tf.keras.layers.Dense(
+                 1,
+                 name="value_out",
+                 activation=None,
+                 kernel_initializer=normc_initializer(0.01),
+             )(feature_out)
+         else:
+             # Build a parallel set of hidden layers for the value net.
+             last_layer = inputs
+             for i, (out_size, kernel, stride) in enumerate(filters[:-1], 1):
+                 last_layer = tf.keras.layers.Conv2D(
+                     out_size,
+                     kernel,
+                     strides=stride
+                     if isinstance(stride, (list, tuple))
+                     else (stride, stride),
+                     activation=activation,
+                     padding="same",
+                     data_format="channels_last",
+                     name="conv_value_{}".format(i),
+                 )(last_layer)
+             out_size, kernel, stride = filters[-1]
+             last_layer = tf.keras.layers.Conv2D(
+                 out_size,
+                 kernel,
+                 strides=stride
+                 if isinstance(stride, (list, tuple))
+                 else (stride, stride),
+                 activation=activation,
+                 padding="valid",
+                 data_format="channels_last",
+                 name="conv_value_{}".format(len(filters)),
+             )(last_layer)
+             last_layer = tf.keras.layers.Conv2D(
+                 1,
+                 [1, 1],
+                 activation=None,
+                 padding="same",
+                 data_format="channels_last",
+                 name="conv_value_out",
+             )(last_layer)
+             value_out = tf.keras.layers.Lambda(lambda x: tf.squeeze(x, axis=[1, 2]))(
+                 last_layer
+             )
+
+         self.base_model = tf.keras.Model(inputs, [logits_out, value_out])
+
+     def forward(
+         self,
+         input_dict: Dict[str, TensorType],
+         state: List[TensorType],
+         seq_lens: TensorType,
+     ) -> Tuple[TensorType, List[TensorType]]:
+         obs = input_dict["obs"]
+         if self.data_format == "channels_first":
+             obs = tf.transpose(obs, [0, 2, 3, 1])
+         # Explicit cast to float32 needed in eager.
+         model_out, self._value_out = self.base_model(tf.cast(obs, tf.float32))
+         # Our last layer is already flat.
+         if self.last_layer_is_flattened:
+             return model_out, state
+         # Last layer is an n x [1,1] Conv2D -> Flatten.
+         else:
+             return tf.squeeze(model_out, axis=[1, 2]), state
+
+     def value_function(self) -> TensorType:
+         return tf.reshape(self._value_out, [-1])
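The `ValueError` raised in `VisionNetwork.__init__` fires when the Conv2D stack fails to reduce the spatial dims to 1x1: all but the last layer use "same" padding, the last one "valid". A small standalone helper (hypothetical, not part of RLlib) can trace a `conv_filters` spec through those Keras padding rules before building the model:

```python
import math


def conv_out_size(in_size, kernel, stride, padding):
    """Spatial output size of a Keras Conv2D along one dimension."""
    if padding == "same":
        return math.ceil(in_size / stride)
    # "valid" padding.
    return math.ceil((in_size - kernel + 1) / stride)


def check_conv_filters(shape_hw, filters):
    """Trace (H, W) through a `conv_filters` spec the way VisionNetwork
    builds it: "same" padding everywhere except the last layer, which
    uses "valid". A (1, 1) result means the [B, 1, 1, num_outputs]
    shape check in VisionNetwork would pass."""
    h, w = shape_hw
    for i, (_out, kernel, stride) in enumerate(filters):
        kh, kw = (kernel, kernel) if isinstance(kernel, int) else kernel
        sh, sw = (stride, stride) if isinstance(stride, int) else stride
        pad = "valid" if i == len(filters) - 1 else "same"
        h = conv_out_size(h, kh, sh, pad)
        w = conv_out_size(w, kw, sw, pad)
    return h, w


# The 84x84 stack RLlib's default filter configs use:
# 84 -> 21 ("same", stride 4) -> 11 ("same", stride 2) -> 1 ("valid", 11x11).
print(check_conv_filters((84, 84), [(16, (8, 8), 4), (32, (4, 4), 2), (256, (11, 11), 1)]))  # (1, 1)
```

If the result is not (1, 1), either adjust the last kernel to match the incoming spatial size, or leave `num_outputs` as None so the network flattens instead.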
minigpt2/lib/python3.10/site-packages/ray/rllib/models/torch/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (176 Bytes).
minigpt2/lib/python3.10/site-packages/ray/rllib/models/torch/__pycache__/torch_modelv2.cpython-310.pyc ADDED
Binary file (3.02 kB).
minigpt2/lib/python3.10/site-packages/ray/rllib/models/torch/modules/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (586 Bytes).
minigpt2/lib/python3.10/site-packages/ray/rllib/models/torch/modules/__pycache__/convtranspose2d_stack.cpython-310.pyc ADDED
Binary file (2.77 kB).
minigpt2/lib/python3.10/site-packages/ray/rllib/models/torch/modules/__pycache__/gru_gate.cpython-310.pyc ADDED
Binary file (2.02 kB).
minigpt2/lib/python3.10/site-packages/ray/rllib/models/torch/modules/__pycache__/multi_head_attention.cpython-310.pyc ADDED
Binary file (2.43 kB).
minigpt2/lib/python3.10/site-packages/ray/rllib/models/torch/modules/__pycache__/noisy_layer.cpython-310.pyc ADDED
Binary file (2.98 kB).
minigpt2/lib/python3.10/site-packages/ray/rllib/models/torch/modules/__pycache__/relative_multi_head_attention.cpython-310.pyc ADDED
Binary file (5.39 kB).