| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def __call__(self, value: typing.Any) -> trajectory.Transition:
"""Converts `value` to a Transition. Performs data validation and pruning.
- If `value` is already a `Transition`, only validation is performed.
- If `value` is a `Trajectory` and `squeeze_time_dim = True` then
    it must have tenso... | Converts `value` to a Transition. Performs data validation and pruning.
- If `value` is already a `Transition`, only validation is performed.
- If `value` is a `Trajectory` and `squeeze_time_dim = True` then
    it must have tensors with shape `[B, T=2]` outer dims.
This is converted to a `Tran... | __call__ | python | tensorflow/agents | tf_agents/agents/data_converter.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/data_converter.py | Apache-2.0 |
def __call__(self, value: typing.Any) -> trajectory.Transition:
"""Convert `value` to an N-step Transition; validate data & prune.
- If `value` is already a `Transition`, only validation is performed.
- If `value` is a `Trajectory` with tensors containing a time dimension
having `T != n + 1`, a `Valu... | Convert `value` to an N-step Transition; validate data & prune.
- If `value` is already a `Transition`, only validation is performed.
- If `value` is a `Trajectory` with tensors containing a time dimension
having `T != n + 1`, a `ValueError` is raised.
Args:
value: A `Trajectory` or `Transitio... | __call__ | python | tensorflow/agents | tf_agents/agents/data_converter.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/data_converter.py | Apache-2.0 |
def __init__(
self,
data_context: DataContext,
gamma: types.Float,
n: typing.Optional[int] = None,
):
"""Create the AsNStepTransition converter.
For more details on how `Trajectory` objects are converted to N-step
`Transition` objects, see
`tf_agents.trajectories.trajectory.to... | Create the AsNStepTransition converter.
For more details on how `Trajectory` objects are converted to N-step
`Transition` objects, see
`tf_agents.trajectories.trajectory.to_n_step_transition`.
Args:
data_context: An instance of `DataContext`, typically accessed from the
`TFAgent.data_con... | __init__ | python | tensorflow/agents | tf_agents/agents/data_converter.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/data_converter.py | Apache-2.0 |
def __call__(self, value: typing.Any) -> trajectory.Transition:
"""Convert `value` to an N-step Transition; validate data & prune.
- If `value` is already a `Transition`, only validation is performed.
- If `value` is a `Trajectory` with tensors containing a time dimension
having `T != n + 1`, a `Valu... | Convert `value` to an N-step Transition; validate data & prune.
- If `value` is already a `Transition`, only validation is performed.
- If `value` is a `Trajectory` with tensors containing a time dimension
having `T != n + 1`, a `ValueError` is raised.
Args:
value: A `Trajectory` or `Transitio... | __call__ | python | tensorflow/agents | tf_agents/agents/data_converter.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/data_converter.py | Apache-2.0 |
def test_loss_and_train_output(
test: test_utils.TestCase,
expect_equal_loss_values: bool,
agent: tf_agent.TFAgent,
experience: types.NestedTensor,
weights: Optional[types.Tensor] = None,
**kwargs
):
"""Tests that loss() and train() outputs are equivalent.
Checks that the outputs have the s... | Tests that loss() and train() outputs are equivalent.
Checks that the outputs have the same structures and shapes, and compares
loss values based on `expect_equal_loss_values`.
Args:
test: An instance of `test_utils.TestCase`.
expect_equal_loss_values: Whether to expect `LossInfo.loss` to have the same
... | test_loss_and_train_output | python | tensorflow/agents | tf_agents/agents/test_util.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/test_util.py | Apache-2.0 |
def __init__(
self,
time_step_spec: ts.TimeStep,
action_spec: types.NestedTensorSpec,
policy: tf_policy.TFPolicy,
collect_policy: tf_policy.TFPolicy,
train_sequence_length: Optional[int],
num_outer_dims: int = 2,
training_data_spec: Optional[types.NestedTensorSpec] = None... | Meant to be called by subclass constructors.
Args:
time_step_spec: A nest of tf.TypeSpec representing the time_steps.
Provided by the user.
action_spec: A nest of BoundedTensorSpec representing the actions.
Provided by the user.
policy: An instance of `tf_policy.TFPolicy` represen... | __init__ | python | tensorflow/agents | tf_agents/agents/tf_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/tf_agent.py | Apache-2.0 |
def initialize(self) -> Optional[tf.Operation]:
"""Initializes the agent.
Returns:
An operation that can be used to initialize the agent.
Raises:
RuntimeError: If the class was not initialized properly (`super.__init__`
was not called).
"""
if self._enable_functions and getattr... | Initializes the agent.
Returns:
An operation that can be used to initialize the agent.
Raises:
RuntimeError: If the class was not initialized properly (`super.__init__`
was not called).
| initialize | python | tensorflow/agents | tf_agents/agents/tf_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/tf_agent.py | Apache-2.0 |
def preprocess_sequence(
self, experience: types.NestedTensor
) -> types.NestedTensor:
"""Defines preprocess_sequence function to be fed into replay buffers.
This defines how we preprocess the collected data before training.
Defaults to pass through for most agents.
Structure of `experience` mu... | Defines preprocess_sequence function to be fed into replay buffers.
This defines how we preprocess the collected data before training.
Defaults to pass through for most agents.
Structure of `experience` must match that of `self.collect_data_spec`.
Args:
experience: a `Trajectory` shaped [batch, ... | preprocess_sequence | python | tensorflow/agents | tf_agents/agents/tf_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/tf_agent.py | Apache-2.0 |
def train(
self,
experience: types.NestedTensor,
weights: Optional[types.Tensor] = None,
**kwargs
) -> LossInfo:
"""Trains the agent.
Args:
experience: A batch of experience data in the form of a `Trajectory`. The
structure of `experience` must match that of `self.traini... | Trains the agent.
Args:
experience: A batch of experience data in the form of a `Trajectory`. The
structure of `experience` must match that of `self.training_data_spec`.
All tensors in `experience` must be shaped `[batch, time, ...]` where
`time` must be equal to `self.train_step_leng... | train | python | tensorflow/agents | tf_agents/agents/tf_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/tf_agent.py | Apache-2.0 |
def loss(
self,
experience: types.NestedTensor,
weights: Optional[types.Tensor] = None,
training: bool = False,
**kwargs
) -> LossInfo:
"""Gets loss from the agent.
If the user calls this from _train, it must be in a `tf.GradientTape` scope
in order to apply gradients to tra... | Gets loss from the agent.
If the user calls this from _train, it must be in a `tf.GradientTape` scope
in order to apply gradients to trainable variables.
If intermediate gradient steps are needed, _loss and _train will return
different values since _loss only supports updating all gradients at once
... | loss | python | tensorflow/agents | tf_agents/agents/tf_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/tf_agent.py | Apache-2.0 |
def training_data_spec(self) -> types.NestedTensorSpec:
"""Returns a trajectory spec, as expected by the train() function."""
if self._training_data_spec is not None:
return self._training_data_spec
else:
return self.collect_data_spec | Returns a trajectory spec, as expected by the train() function. | training_data_spec | python | tensorflow/agents | tf_agents/agents/tf_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/tf_agent.py | Apache-2.0 |
def _preprocess_sequence(
self, experience: types.NestedTensor
) -> types.NestedTensor:
"""Defines preprocess_sequence function to be fed into replay buffers.
This defines how we preprocess the collected data before training.
Defaults to pass through for most agents. Subclasses may override this.
... | Defines preprocess_sequence function to be fed into replay buffers.
This defines how we preprocess the collected data before training.
Defaults to pass through for most agents. Subclasses may override this.
Args:
experience: a `Trajectory` shaped [batch, time, ...] or [time, ...] which
repre... | _preprocess_sequence | python | tensorflow/agents | tf_agents/agents/tf_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/tf_agent.py | Apache-2.0 |
def _loss(
self,
experience: types.NestedTensor,
weights: types.Tensor,
training: bool,
**kwargs
) -> Optional[LossInfo]:
"""Computes loss.
This method does not increment self.train_step_counter or apply gradients.
By default, any networks are called with `training=False`.... | Computes loss.
This method does not increment self.train_step_counter or apply gradients.
By default, any networks are called with `training=False`.
Args:
experience: A batch of experience data in the form of a `Trajectory`. The
structure of `experience` must match that of `self.training_d... | _loss | python | tensorflow/agents | tf_agents/agents/tf_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/tf_agent.py | Apache-2.0 |
def _train(
self, experience: types.NestedTensor, weights: types.Tensor
) -> LossInfo:
"""Returns an op to train the agent.
This method *must* increment self.train_step_counter exactly once.
TODO(b/126271669): Consider automatically incrementing this.
Args:
experience: A batch of experie... | Returns an op to train the agent.
This method *must* increment self.train_step_counter exactly once.
TODO(b/126271669): Consider automatically incrementing this.
Args:
experience: A batch of experience data in the form of a `Trajectory`. The
structure of `experience` must match that of `self... | _train | python | tensorflow/agents | tf_agents/agents/tf_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/tf_agent.py | Apache-2.0 |
def __init__(
self,
time_step_spec: ts.TimeStep,
action_spec: types.NestedTensorSpec,
cloning_network: network.Network,
optimizer: types.Optimizer,
num_outer_dims: Literal[1, 2] = 1, # pylint: disable=bad-whitespace
epsilon_greedy: types.Float = 0.1,
loss_fn: Optional[
... | Creates an instance of a Behavioral Cloning agent.
Args:
time_step_spec: A `TimeStep` spec of the expected time_steps.
action_spec: A nest of BoundedTensorSpec representing the actions.
cloning_network: A `tf_agents.networks.Network` to be used by the agent.
The network will be called as ... | __init__ | python | tensorflow/agents | tf_agents/agents/behavioral_cloning/behavioral_cloning_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/behavioral_cloning/behavioral_cloning_agent.py | Apache-2.0 |
def create_arbitrary_trajectory():
"""Creates an arbitrary trajectory for unit testing BehavioralCloningAgent.
This trajectory contains Tensors shaped `[6, 1, ...]` where `6` is the number
of time steps and `1` is the batch size.
Observations are unbounded but actions are bounded to take values within
`[1, 2]`. ... | Creates an arbitrary trajectory for unit testing BehavioralCloningAgent.
This trajectory contains Tensors shaped `[6, 1, ...]` where `6` is the number
of time steps and `1` is the batch.
Observations are unbounded but actions are bounded to take values within
`[1, 2]`. The action space is discrete.
Policy ... | create_arbitrary_trajectory | python | tensorflow/agents | tf_agents/agents/behavioral_cloning/behavioral_cloning_agent_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/behavioral_cloning/behavioral_cloning_agent_test.py | Apache-2.0 |
def verifyTrainAndRestore(
self, observation_spec, action_spec, actor_net, loss_fn=None
):
"""Helper function for testing correct variable updating and restoring."""
batch_size = 2
observations = tensor_spec.sample_spec_nest(
observation_spec, outer_dims=(batch_size,)
)
actions = ten... | Helper function for testing correct variable updating and restoring. | verifyTrainAndRestore | python | tensorflow/agents | tf_agents/agents/behavioral_cloning/behavioral_cloning_agent_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/behavioral_cloning/behavioral_cloning_agent_test.py | Apache-2.0 |
def __init__(
self,
time_step_spec: ts.TimeStep,
action_spec: types.NestedTensorSpec,
categorical_q_network: network.Network,
optimizer: types.Optimizer,
observation_and_action_constraint_splitter: Optional[
types.Splitter
] = None,
min_q_value: types.Float = -1... | Creates a Categorical DQN Agent.
Args:
time_step_spec: A `TimeStep` spec of the expected time_steps.
action_spec: A `BoundedTensorSpec` representing the actions.
categorical_q_network: A categorical_q_network.CategoricalQNetwork that
returns the q_distribution for each action.
optim... | __init__ | python | tensorflow/agents | tf_agents/agents/categorical_dqn/categorical_dqn_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/categorical_dqn/categorical_dqn_agent.py | Apache-2.0 |
def _loss(
self,
experience,
td_errors_loss_fn=tf.compat.v1.losses.huber_loss,
gamma=1.0,
reward_scale_factor=1.0,
weights=None,
training=False,
):
"""Computes critic loss for CategoricalDQN training.
See Algorithm 1 and the discussion immediately preceding it in pag... | Computes critic loss for CategoricalDQN training.
See Algorithm 1 and the discussion immediately preceding it in page 6 of
"A Distributional Perspective on Reinforcement Learning"
Bellemare et al., 2017
https://arxiv.org/abs/1707.06887
Args:
experience: A batch of experience data in the ... | _loss | python | tensorflow/agents | tf_agents/agents/categorical_dqn/categorical_dqn_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/categorical_dqn/categorical_dqn_agent.py | Apache-2.0 |
def _next_q_distribution(self, next_time_steps):
"""Compute the q distribution of the next state for TD error computation.
Args:
next_time_steps: A batch of next timesteps
Returns:
A [batch_size, num_atoms] tensor representing the Q-distribution for the
next state.
"""
network_ob... | Compute the q distribution of the next state for TD error computation.
Args:
next_time_steps: A batch of next timesteps
Returns:
A [batch_size, num_atoms] tensor representing the Q-distribution for the
next state.
| _next_q_distribution | python | tensorflow/agents | tf_agents/agents/categorical_dqn/categorical_dqn_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/categorical_dqn/categorical_dqn_agent.py | Apache-2.0 |
def project_distribution(
supports: types.Tensor,
weights: types.Tensor,
target_support: types.Tensor,
validate_args: bool = False,
) -> types.Tensor:
"""Projects a batch of (support, weights) onto target_support.
Based on equation (7) in (Bellemare et al., 2017):
https://arxiv.org/abs/1707.068... | Projects a batch of (support, weights) onto target_support.
Based on equation (7) in (Bellemare et al., 2017):
https://arxiv.org/abs/1707.06887
In the rest of the comments we will refer to this equation simply as Eq7.
This code is not easy to digest, so we will use a running example to clarify
what is goi... | project_distribution | python | tensorflow/agents | tf_agents/agents/categorical_dqn/categorical_dqn_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/categorical_dqn/categorical_dqn_agent.py | Apache-2.0 |
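The row above projects a batch of (support, weights) distributions onto a target support, per equation (7) of Bellemare et al. (2017). A minimal pure-Python sketch of that projection for a single unbatched distribution (`project_distribution` here is a hypothetical stand-in for the vectorized TF-Agents version, and `target_support` is assumed to be evenly spaced):

```python
def project_distribution(supports, weights, target_support):
    """Project probability mass at `supports` onto `target_support` (Eq. 7).

    Each source atom is clipped into the target range, then its mass is
    split linearly between the two nearest target atoms.
    """
    vmin, vmax = target_support[0], target_support[-1]
    delta = target_support[1] - target_support[0]  # assumes even spacing
    projected = [0.0] * len(target_support)
    for z, p in zip(supports, weights):
        z = min(max(z, vmin), vmax)   # clip the atom into [vmin, vmax]
        b = (z - vmin) / delta        # fractional target index
        lo = int(b)
        hi = min(lo + 1, len(target_support) - 1)
        projected[lo] += p * (1.0 - (b - lo))
        if hi != lo:
            projected[hi] += p * (b - lo)
    return projected
```

Mass is conserved under the projection: projecting an atom at `0.25` with weight `1.0` onto `[0.0, 0.5, 1.0]` splits it evenly between the first two target atoms.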
def __init__(
self,
root_dir,
env_name,
num_iterations=200,
max_episode_frames=108000, # ALE frames
terminal_on_life_loss=False,
conv_layer_params=((32, (8, 8), 4), (64, (4, 4), 2), (64, (3, 3), 1)),
fc_layer_params=(512,),
# Params for collect
initial_collec... | A simple Atari train and eval for DQN.
Args:
root_dir: Directory to write log files to.
      env_name: Fully-qualified name of the Atari environment (e.g. Pong-v0).
num_iterations: Number of train/eval iterations to run.
max_episode_frames: Maximum length of a single episode, in ALE frames.
... | __init__ | python | tensorflow/agents | tf_agents/agents/categorical_dqn/examples/train_eval_atari.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/categorical_dqn/examples/train_eval_atari.py | Apache-2.0 |
def _initial_collect(self):
"""Collect initial experience before training begins."""
logging.info('Collecting initial experience...')
time_step_spec = ts.time_step_spec(self._env.observation_spec())
random_policy = random_py_policy.RandomPyPolicy(
time_step_spec, self._env.action_spec()
)
... | Collect initial experience before training begins. | _initial_collect | python | tensorflow/agents | tf_agents/agents/categorical_dqn/examples/train_eval_atari.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/categorical_dqn/examples/train_eval_atari.py | Apache-2.0 |
def _collect_step(self, time_step, metric_observers, train=False):
"""Run a single step (or 2 steps on life loss) in the environment."""
if train:
policy = self._collect_policy
else:
policy = self._eval_policy
with self._action_timer:
action_step = policy.action(time_step)
with se... | Run a single step (or 2 steps on life loss) in the environment. | _collect_step | python | tensorflow/agents | tf_agents/agents/categorical_dqn/examples/train_eval_atari.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/categorical_dqn/examples/train_eval_atari.py | Apache-2.0 |
def _maybe_log(self, sess, global_step_val, total_loss):
"""Log some stats if global_step_val is a multiple of log_interval."""
if global_step_val % self._log_interval == 0:
logging.info('step = %d, loss = %f', global_step_val, total_loss.loss)
logging.info('%s', 'action_time = {}'.format(self._acti... | Log some stats if global_step_val is a multiple of log_interval. | _maybe_log | python | tensorflow/agents | tf_agents/agents/categorical_dqn/examples/train_eval_atari.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/categorical_dqn/examples/train_eval_atari.py | Apache-2.0 |
def get_run_args():
"""Builds a dict of run arguments from flags."""
run_args = {}
if FLAGS.num_iterations:
run_args['num_iterations'] = FLAGS.num_iterations
if FLAGS.initial_collect_steps:
run_args['initial_collect_steps'] = FLAGS.initial_collect_steps
if FLAGS.replay_buffer_capacity:
run_args['r... | Builds a dict of run arguments from flags. | get_run_args | python | tensorflow/agents | tf_agents/agents/categorical_dqn/examples/train_eval_atari.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/categorical_dqn/examples/train_eval_atari.py | Apache-2.0 |
def __init__(
self,
time_step_spec: ts.TimeStep,
action_spec: types.NestedTensorSpec,
critic_network: network.Network,
actor_network: network.Network,
actor_optimizer: types.Optimizer,
critic_optimizer: types.Optimizer,
alpha_optimizer: types.Optimizer,
cql_alpha: U... | Creates a CQL-SAC Agent.
Args:
time_step_spec: A `TimeStep` spec of the expected time_steps.
action_spec: A nest of BoundedTensorSpec representing the actions.
critic_network: A function critic_network((observations, actions)) that
returns the q_values for each observation and action.
... | __init__ | python | tensorflow/agents | tf_agents/agents/cql/cql_sac_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/cql/cql_sac_agent.py | Apache-2.0 |
def _train(self, experience, weights):
"""Returns a train op to update the agent's networks.
This method trains with the provided batched experience.
Args:
experience: A time-stacked trajectory object.
weights: Optional scalar or elementwise (per-batch-entry) importance
weights.
R... | Returns a train op to update the agent's networks.
This method trains with the provided batched experience.
Args:
experience: A time-stacked trajectory object.
weights: Optional scalar or elementwise (per-batch-entry) importance
weights.
Returns:
A train_op.
Raises:
V... | _train | python | tensorflow/agents | tf_agents/agents/cql/cql_sac_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/cql/cql_sac_agent.py | Apache-2.0 |
def _transpose_tile_and_batch_dims(
self, original_tensor: types.Tensor
) -> types.Tensor:
"""Transposes [tile, batch, ...] to [batch, tile, ...]."""
return tf.transpose(
original_tensor, [1, 0] + list(range(2, len(original_tensor.shape)))
) | Transposes [tile, batch, ...] to [batch, tile, ...]. | _transpose_tile_and_batch_dims | python | tensorflow/agents | tf_agents/agents/cql/cql_sac_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/cql/cql_sac_agent.py | Apache-2.0 |
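The tile/batch swap in the row above amounts to exchanging the two outer axes. A pure-Python sketch over nested lists (the real method uses `tf.transpose` on tensors; `transpose_tile_and_batch` is a hypothetical name):

```python
def transpose_tile_and_batch(nested):
    # [tile, batch, ...] -> [batch, tile, ...]: zip regroups the rows so
    # that entry [b][t] of the result is entry [t][b] of the input.
    return [list(group) for group in zip(*nested)]
```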
def _actions_and_log_probs(
self, time_steps: ts.TimeStep, training: Optional[bool] = False
) -> Tuple[types.Tensor, types.Tensor]:
"""Get actions and corresponding log probabilities from policy."""
# Get raw action distribution from policy, and initialize bijectors list.
batch_size = nest_utils.get... | Get actions and corresponding log probabilities from policy. | _actions_and_log_probs | python | tensorflow/agents | tf_agents/agents/cql/cql_sac_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/cql/cql_sac_agent.py | Apache-2.0 |
def _sample_and_transpose_actions_and_log_probs(
self,
time_steps: ts.TimeStep,
num_action_samples: int,
training: Optional[bool] = False,
) -> Tuple[types.Tensor, types.Tensor]:
"""Samples actions and corresponding log probabilities from policy."""
# Get raw action distribution from p... | Samples actions and corresponding log probabilities from policy. | _sample_and_transpose_actions_and_log_probs | python | tensorflow/agents | tf_agents/agents/cql/cql_sac_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/cql/cql_sac_agent.py | Apache-2.0 |
def _flattened_multibatch_tensor(
self, original_tensor: types.Tensor
) -> types.Tensor:
"""Flattens the batch and tile dimensions into a single dimension.
Args:
original_tensor: Input tensor of shape [batch_size, tile, dim].
Returns:
Flattened tensor with the outer dimension (batch_si... | Flattens the batch and tile dimensions into a single dimension.
Args:
original_tensor: Input tensor of shape [batch_size, tile, dim].
Returns:
Flattened tensor with the outer dimension (batch_size * tile).
| _flattened_multibatch_tensor | python | tensorflow/agents | tf_agents/agents/cql/cql_sac_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/cql/cql_sac_agent.py | Apache-2.0 |
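Flattening `[batch_size, tile, dim]` into `[batch_size * tile, dim]`, as the row above describes, just concatenates each batch entry's tile group in order. A list-based sketch (the agent does this with a tensor reshape; `flatten_batch_and_tile` is a hypothetical name):

```python
def flatten_batch_and_tile(nested):
    # [batch, tile, dim] -> [batch * tile, dim]; rows from the same
    # batch entry stay adjacent in the output.
    return [row for tile_group in nested for row in tile_group]
```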
def _get_q_values(
self,
target_input: Tuple[types.Tensor, types.Tensor],
step_type: types.Tensor,
reshape_batch_size: Optional[int],
training: Optional[bool] = False,
) -> Tuple[types.Tensor, types.Tensor]:
"""Gets the Q-values of target_input.
Uses the smaller of the critic ne... | Gets the Q-values of target_input.
Uses the smaller of the critic network outputs since learned Q functions
can overestimate Q-values.
Args:
target_input: Tuple of (observation, sampled actions) tensors.
step_type: `Tensor` of `StepType` enum values.
reshape_batch_size: Batch size to res... | _get_q_values | python | tensorflow/agents | tf_agents/agents/cql/cql_sac_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/cql/cql_sac_agent.py | Apache-2.0 |
def _cql_loss(
self,
time_steps: ts.TimeStep,
actions: types.Tensor,
training: Optional[bool] = False,
) -> types.Tensor:
"""Computes CQL loss for SAC training in continuous action spaces.
Extends the standard critic loss to minimize Q-values sampled from a policy
and maximize val... | Computes CQL loss for SAC training in continuous action spaces.
Extends the standard critic loss to minimize Q-values sampled from a policy
and maximize values of the dataset actions.
Based on the `CQL(H)` equation (4) in (Kumar et al., 2020):
```
log_sum_exp(Q(s, a')) - Q(s, a)
```
Othe... | _cql_loss | python | tensorflow/agents | tf_agents/agents/cql/cql_sac_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/cql/cql_sac_agent.py | Apache-2.0 |
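The `CQL(H)` penalty quoted in the row above, `log_sum_exp(Q(s, a')) - Q(s, a)`, can be sketched per state with a numerically stable log-sum-exp (`cql_h_penalty` is a hypothetical helper, not part of the agent's API):

```python
import math

def cql_h_penalty(sampled_q_values, dataset_q_value):
    """log-sum-exp over Q-values of policy-sampled actions, minus the
    Q-value of the dataset action (Eq. 4 of Kumar et al., 2020)."""
    m = max(sampled_q_values)  # subtract the max for numerical stability
    lse = m + math.log(sum(math.exp(q - m) for q in sampled_q_values))
    return lse - dataset_q_value
```

Minimizing this term pushes down Q-values of actions the policy samples while pushing up the Q-value of the action actually observed in the dataset.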
def actor_loss(
self,
time_steps: ts.TimeStep,
actions: types.Tensor,
weights: Optional[types.Tensor] = None,
training: Optional[bool] = True,
) -> types.Tensor:
"""Computes actor_loss equivalent to the SAC actor_loss.
Uses behavioral cloning for the first `self._num_bc_steps` o... | Computes actor_loss equivalent to the SAC actor_loss.
Uses behavioral cloning for the first `self._num_bc_steps` of training.
Args:
time_steps: A batch of timesteps.
actions: A batch of actions.
weights: Optional scalar or elementwise (per-batch-entry) importance
weights.
train... | actor_loss | python | tensorflow/agents | tf_agents/agents/cql/cql_sac_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/cql/cql_sac_agent.py | Apache-2.0 |
def _critic_loss_with_optional_entropy_term(
self,
time_steps: ts.TimeStep,
actions: types.Tensor,
next_time_steps: ts.TimeStep,
td_errors_loss_fn: types.LossFn,
gamma: types.Float = 1.0,
reward_scale_factor: types.Float = 1.0,
weights: Optional[types.Tensor] = None,
... | Computes the critic loss for CQL-SAC training.
The original SAC critic loss is:
```
(q(s, a) - (r(s, a) + \gamma q(s', a') - \gamma \alpha \log \pi(a'|s')))^2
```
The CQL-SAC critic loss makes the entropy term optional.
CQL may value unseen actions higher since it lower-bounds the value of
... | _critic_loss_with_optional_entropy_term | python | tensorflow/agents | tf_agents/agents/cql/cql_sac_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/cql/cql_sac_agent.py | Apache-2.0 |
def __init__(
self,
input_tensor_spec,
output_tensor_spec,
fc_layer_params=None,
dropout_layer_params=None,
conv_layer_params=None,
activation_fn=tf.keras.activations.relu,
kernel_initializer=None,
last_kernel_initializer=None,
name='ActorNetwork',
):
""... | Creates an instance of `ActorNetwork`.
Args:
input_tensor_spec: A nest of `tensor_spec.TensorSpec` representing the
inputs.
output_tensor_spec: A nest of `tensor_spec.BoundedTensorSpec` representing
the outputs.
fc_layer_params: Optional list of fully_connected parameters, where e... | __init__ | python | tensorflow/agents | tf_agents/agents/ddpg/actor_network.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ddpg/actor_network.py | Apache-2.0 |
def __init__(
self,
input_tensor_spec,
output_tensor_spec,
conv_layer_params=None,
input_fc_layer_params=(200, 100),
lstm_size=(40,),
output_fc_layer_params=(200, 100),
activation_fn=tf.keras.activations.relu,
name='ActorRnnNetwork',
):
"""Creates an instance ... | Creates an instance of `ActorRnnNetwork`.
Args:
input_tensor_spec: A nest of `tensor_spec.TensorSpec` representing the
input observations.
output_tensor_spec: A nest of `tensor_spec.BoundedTensorSpec` representing
the actions.
conv_layer_params: Optional list of convolution layers... | __init__ | python | tensorflow/agents | tf_agents/agents/ddpg/actor_rnn_network.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ddpg/actor_rnn_network.py | Apache-2.0 |
def __init__(
self,
input_tensor_spec,
observation_conv_layer_params=None,
observation_fc_layer_params=None,
observation_dropout_layer_params=None,
action_fc_layer_params=None,
action_dropout_layer_params=None,
joint_fc_layer_params=None,
joint_dropout_layer_params=... | Creates an instance of `CriticNetwork`.
Args:
input_tensor_spec: A tuple of (observation, action) each a nest of
`tensor_spec.TensorSpec` representing the inputs.
observation_conv_layer_params: Optional list of convolution layer
parameters for observations, where each item is a length-t... | __init__ | python | tensorflow/agents | tf_agents/agents/ddpg/critic_network.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ddpg/critic_network.py | Apache-2.0 |
def __init__(
self,
input_tensor_spec,
observation_conv_layer_params=None,
observation_fc_layer_params=(200,),
action_fc_layer_params=(200,),
joint_fc_layer_params=(100,),
lstm_size=None,
output_fc_layer_params=(200, 100),
activation_fn=tf.keras.activations.relu,
... | Creates an instance of `CriticRnnNetwork`.
Args:
input_tensor_spec: A tuple of (observation, action) each of type
`tensor_spec.TensorSpec` representing the inputs.
observation_conv_layer_params: Optional list of convolution layers
parameters to apply to the observations, where each item... | __init__ | python | tensorflow/agents | tf_agents/agents/ddpg/critic_rnn_network.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ddpg/critic_rnn_network.py | Apache-2.0 |
def __init__(
self,
time_step_spec: ts.TimeStep,
action_spec: types.NestedTensorSpec,
actor_network: network.Network,
critic_network: network.Network,
actor_optimizer: Optional[types.Optimizer] = None,
critic_optimizer: Optional[types.Optimizer] = None,
ou_stddev: types.F... | Creates a DDPG Agent.
Args:
time_step_spec: A `TimeStep` spec of the expected time_steps.
action_spec: A nest of BoundedTensorSpec representing the actions.
actor_network: A tf_agents.network.Network to be used by the agent. The
network will be called with call(observation, step_type[, po... | __init__ | python | tensorflow/agents | tf_agents/agents/ddpg/ddpg_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ddpg/ddpg_agent.py | Apache-2.0 |
def _get_target_updater(self, tau=1.0, period=1):
"""Performs a soft update of the target network parameters.
For each weight w_s in the original network, and its corresponding
weight w_t in the target network, a soft update is:
      w_t = (1 - tau) x w_t + tau x w_s
Args:
tau: A float scalar in [0... | Performs a soft update of the target network parameters.
For each weight w_s in the original network, and its corresponding
weight w_t in the target network, a soft update is:
    w_t = (1 - tau) x w_t + tau x w_s
Args:
tau: A float scalar in [0, 1]. Default `tau=1.0` means hard update.
period: ... | _get_target_updater | python | tensorflow/agents | tf_agents/agents/ddpg/ddpg_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ddpg/ddpg_agent.py | Apache-2.0 |
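The soft-update rule in the row above, `w_t = (1 - tau) x w_t + tau x w_s`, can be sketched over flat lists of weights (the agent applies it to TF variables; `soft_update` is a hypothetical name):

```python
def soft_update(target_weights, source_weights, tau=1.0):
    # Blend each target weight toward its source counterpart;
    # tau=1.0 reproduces a hard copy of the source weights.
    return [(1.0 - tau) * w_t + tau * w_s
            for w_t, w_s in zip(target_weights, source_weights)]
```

With a small `tau` (e.g. 0.005) the target network trails the online network slowly, which stabilizes the TD targets used by the critic loss.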
def critic_loss(
self,
time_steps: ts.TimeStep,
actions: types.NestedTensor,
next_time_steps: ts.TimeStep,
weights: Optional[types.Tensor] = None,
training: bool = False,
) -> types.Tensor:
"""Computes the critic loss for DDPG training.
Args:
time_steps: A batch of t... | Computes the critic loss for DDPG training.
Args:
time_steps: A batch of timesteps.
actions: A batch of actions.
next_time_steps: A batch of next timesteps.
weights: Optional scalar or element-wise (per-batch-entry) importance
weights.
training: Whether this loss is being used... | critic_loss | python | tensorflow/agents | tf_agents/agents/ddpg/ddpg_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ddpg/ddpg_agent.py | Apache-2.0 |
def actor_loss(
self,
time_steps: ts.TimeStep,
weights: Optional[types.Tensor] = None,
training: bool = False,
) -> types.Tensor:
"""Computes the actor_loss for DDPG training.
Args:
time_steps: A batch of timesteps.
weights: Optional scalar or element-wise (per-batch-entry... | Computes the actor_loss for DDPG training.
Args:
time_steps: A batch of timesteps.
weights: Optional scalar or element-wise (per-batch-entry) importance
weights.
training: Whether this loss is being used for training.
Returns:
actor_loss: A scalar actor loss.
| actor_loss | python | tensorflow/agents | tf_agents/agents/ddpg/ddpg_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ddpg/ddpg_agent.py | Apache-2.0 |
def train_eval(
root_dir,
env_name='HalfCheetah-v2',
eval_env_name=None,
env_load_fn=suite_mujoco.load,
num_iterations=2000000,
actor_fc_layers=(400, 300),
critic_obs_fc_layers=(400,),
critic_action_fc_layers=None,
critic_joint_fc_layers=(300,),
# Params for collect
initial_c... | A simple train and eval for DDPG. | train_eval | python | tensorflow/agents | tf_agents/agents/ddpg/examples/v2/train_eval.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ddpg/examples/v2/train_eval.py | Apache-2.0 |
def create_actor_network(fc_layer_units, action_spec):
"""Create an actor network for DDPG."""
flat_action_spec = tf.nest.flatten(action_spec)
if len(flat_action_spec) > 1:
raise ValueError('Only a single action tensor is supported by this network')
flat_action_spec = flat_action_spec[0]
fc_layers = [den... | Create an actor network for DDPG. | create_actor_network | python | tensorflow/agents | tf_agents/agents/ddpg/examples/v2/train_eval.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ddpg/examples/v2/train_eval.py | Apache-2.0 |
def train_eval(
root_dir,
env_name='cartpole',
task_name='balance',
observations_allowlist='position',
num_iterations=100000,
actor_fc_layers=(400, 300),
actor_output_fc_layers=(100,),
actor_lstm_size=(40,),
critic_obs_fc_layers=(400,),
critic_action_fc_layers=None,
critic_jo... | A simple train and eval for DDPG. | train_eval | python | tensorflow/agents | tf_agents/agents/ddpg/examples/v2/train_eval_rnn.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ddpg/examples/v2/train_eval_rnn.py | Apache-2.0 |
def _check_network_output(self, net, label):
"""Check outputs of q_net and target_q_net against expected shape.
Subclasses that require different q_network outputs should override
this function.
Args:
net: A `Network`.
label: A label to print in case of a mismatch.
"""
outputs = ne... | Check outputs of q_net and target_q_net against expected shape.
Subclasses that require different q_network outputs should override
this function.
Args:
net: A `Network`.
label: A label to print in case of a mismatch.
| _check_network_output | python | tensorflow/agents | tf_agents/agents/dqn/dqn_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/dqn/dqn_agent.py | Apache-2.0 |
def _get_target_updater(self, tau=1.0, period=1):
"""Performs a soft update of the target network parameters.
For each weight w_s in the q network, and its corresponding
weight w_t in the target_q_network, a soft update is:
w_t = (1 - tau) * w_t + tau * w_s
Args:
tau: A float scalar in [0, 1... | Performs a soft update of the target network parameters.
For each weight w_s in the q network, and its corresponding
weight w_t in the target_q_network, a soft update is:
w_t = (1 - tau) * w_t + tau * w_s
Args:
tau: A float scalar in [0, 1]. Default `tau=1.0` means hard update.
period: Ste... | _get_target_updater | python | tensorflow/agents | tf_agents/agents/dqn/dqn_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/dqn/dqn_agent.py | Apache-2.0 |
def _loss(
self,
experience,
td_errors_loss_fn=None,
gamma=1.0,
reward_scale_factor=1.0,
weights=None,
training=False,
):
"""Computes loss for DQN training.
Args:
experience: A batch of experience data in the form of a `Trajectory` or
`Transition`. The ... | Computes loss for DQN training.
Args:
experience: A batch of experience data in the form of a `Trajectory` or
`Transition`. The structure of `experience` must match that of
`self.collect_policy.step_spec`. If a `Trajectory`, all tensors in
`experience` must be shaped `[B, T, ...]` wh... | _loss | python | tensorflow/agents | tf_agents/agents/dqn/dqn_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/dqn/dqn_agent.py | Apache-2.0 |
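The TD target underlying this loss can be sketched with NumPy. This is a simplified one-step version with made-up inputs, not the agent's actual `_loss` implementation:

```python
import numpy as np

def td_targets(rewards, discounts, next_q_max, gamma=0.99):
    """One-step TD targets: r + gamma * d * max_a' Q_target(s', a')."""
    return rewards + gamma * discounts * next_q_max

rewards = np.array([1.0, 0.0])
discounts = np.array([1.0, 0.0])   # discount 0.0 at episode boundaries
next_q_max = np.array([2.0, 5.0])  # max over actions of the target network

targets = td_targets(rewards, discounts, next_q_max, gamma=0.5)
# First entry: 1 + 0.5*1*2 = 2.0; second episode ended, so the target is 0.0.
```

The loss itself is then `td_errors_loss_fn` (e.g. Huber) applied to `targets - q(s, a)`.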
def _compute_next_q_values(self, next_time_steps, info):
"""Compute the q value of the next state for TD error computation.
Args:
next_time_steps: A batch of next timesteps
info: PolicyStep.info that may be used by other agents inherited from
dqn_agent.
Returns:
A tensor of Q val... | Compute the q value of the next state for TD error computation.
Args:
next_time_steps: A batch of next timesteps
info: PolicyStep.info that may be used by other agents inherited from
dqn_agent.
Returns:
A tensor of Q values for the given next state.
| _compute_next_q_values | python | tensorflow/agents | tf_agents/agents/dqn/dqn_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/dqn/dqn_agent.py | Apache-2.0 |
def _compute_next_q_values(self, next_time_steps, info):
"""Compute the q value of the next state for TD error computation.
Args:
next_time_steps: A batch of next timesteps
info: PolicyStep.info that may be used by other agents inherited from
dqn_agent.
Returns:
A tensor of Q val... | Compute the q value of the next state for TD error computation.
Args:
next_time_steps: A batch of next timesteps
info: PolicyStep.info that may be used by other agents inherited from
dqn_agent.
Returns:
A tensor of Q values for the given next state.
| _compute_next_q_values | python | tensorflow/agents | tf_agents/agents/dqn/dqn_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/dqn/dqn_agent.py | Apache-2.0 |
def _compute_next_q_values(self, next_time_steps, info):
"""Compute the q value of the next state for TD error computation.
Args:
next_time_steps: A batch of next timesteps
info: PolicyStep.info that may be used by other agents inherited from
dqn_agent.
Returns:
A tensor of Q val... | Compute the q value of the next state for TD error computation.
Args:
next_time_steps: A batch of next timesteps
info: PolicyStep.info that may be used by other agents inherited from
dqn_agent.
Returns:
A tensor of Q values for the given next state.
| _compute_next_q_values | python | tensorflow/agents | tf_agents/agents/dqn/dqn_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/dqn/dqn_agent.py | Apache-2.0 |
def testLossNStepMidMidLastFirst(self, agent_class):
"""Tests that n-step loss handles LAST time steps properly."""
q_net = DummyNet(self._observation_spec, self._action_spec)
agent = agent_class(
self._time_step_spec,
self._action_spec,
q_network=q_net,
optimizer=None,
... | Tests that n-step loss handles LAST time steps properly. | testLossNStepMidMidLastFirst | python | tensorflow/agents | tf_agents/agents/dqn/dqn_agent_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/dqn/dqn_agent_test.py | Apache-2.0 |
def train_eval(
root_dir,
env_name='CartPole-v0',
num_iterations=100000,
train_sequence_length=1,
# Params for QNetwork
fc_layer_params=(100,),
# Params for QRnnNetwork
input_fc_layer_params=(50,),
lstm_size=(20,),
output_fc_layer_params=(20,),
# Params for collect
initia... | A simple train and eval for DQN. | train_eval | python | tensorflow/agents | tf_agents/agents/dqn/examples/v2/train_eval.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/dqn/examples/v2/train_eval.py | Apache-2.0 |
def tanh_and_scale_to_spec(inputs, spec):
"""Maps inputs with arbitrary range to range defined by spec using `tanh`."""
means = (spec.maximum + spec.minimum) / 2.0
magnitudes = (spec.maximum - spec.minimum) / 2.0
return means + magnitudes * tf.tanh(inputs) | Maps inputs with arbitrary range to range defined by spec using `tanh`. | tanh_and_scale_to_spec | python | tensorflow/agents | tf_agents/agents/ppo/ppo_actor_network.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ppo/ppo_actor_network.py | Apache-2.0 |
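A NumPy equivalent of `tanh_and_scale_to_spec` shows the squashing behavior; the explicit `minimum`/`maximum` arguments here stand in for the spec object:

```python
import numpy as np

def tanh_and_scale(inputs, minimum, maximum):
    """Map unbounded inputs into [minimum, maximum] via tanh."""
    means = (maximum + minimum) / 2.0
    magnitudes = (maximum - minimum) / 2.0
    return means + magnitudes * np.tanh(inputs)

# Large-magnitude inputs saturate at the bounds; 0 maps to the midpoint.
out = tanh_and_scale(np.array([-100.0, 0.0, 100.0]), minimum=-2.0, maximum=2.0)
```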
def create_sequential_actor_net(
self, fc_layer_units, action_tensor_spec, seed=None
):
"""Helper method for creating the actor network."""
self._seed_stream = self.seed_stream_class(
seed=seed, salt='tf_agents_sequential_layers'
)
def _get_seed():
seed = self._seed_stream()
... | Helper method for creating the actor network. | create_sequential_actor_net | python | tensorflow/agents | tf_agents/agents/ppo/ppo_actor_network.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ppo/ppo_actor_network.py | Apache-2.0 |
def __init__(
self,
time_step_spec: ts.TimeStep,
action_spec: types.NestedTensorSpec,
optimizer: Optional[types.Optimizer] = None,
actor_net: Optional[network.Network] = None,
value_net: Optional[network.Network] = None,
greedy_eval: bool = True,
importance_ratio_clipping... | Creates a PPO Agent.
Args:
time_step_spec: A `TimeStep` spec of the expected time_steps.
action_spec: A nest of `BoundedTensorSpec` representing the actions.
optimizer: Optimizer to use for the agent, default to using
`tf.compat.v1.train.AdamOptimizer`.
actor_net: A `network.Distrib... | __init__ | python | tensorflow/agents | tf_agents/agents/ppo/ppo_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ppo/ppo_agent.py | Apache-2.0 |
def compute_advantages(
self,
rewards: types.NestedTensor,
returns: types.Tensor,
discounts: types.Tensor,
value_preds: types.Tensor,
) -> types.Tensor:
"""Compute advantages, optionally using GAE.
Based on baselines ppo1 implementation. Removes final timestep, as it needs
t... | Compute advantages, optionally using GAE.
Based on baselines ppo1 implementation. Removes final timestep, as it needs
to use this timestep for next-step value prediction for TD error
computation.
Args:
rewards: Tensor of per-timestep rewards.
returns: Tensor of per-timestep returns.
... | compute_advantages | python | tensorflow/agents | tf_agents/agents/ppo/ppo_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ppo/ppo_agent.py | Apache-2.0 |
def get_loss(
self,
time_steps: ts.TimeStep,
actions: types.NestedTensorSpec,
act_log_probs: types.Tensor,
returns: types.Tensor,
normalized_advantages: types.Tensor,
action_distribution_parameters: types.NestedTensor,
weights: types.Tensor,
train_step: tf.Variable,... | Compute the loss and create optimization op for one training epoch.
All tensors should have a single batch dimension.
Args:
time_steps: A minibatch of TimeStep tuples.
actions: A minibatch of actions.
act_log_probs: A minibatch of action probabilities (probability under the
sampling ... | get_loss | python | tensorflow/agents | tf_agents/agents/ppo/ppo_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ppo/ppo_agent.py | Apache-2.0 |
def compute_return_and_advantage(
self, next_time_steps: ts.TimeStep, value_preds: types.Tensor
) -> Tuple[types.Tensor, types.Tensor]:
"""Compute the Monte Carlo return and advantage.
Args:
next_time_steps: batched tensor of TimeStep tuples after action is taken.
value_preds: Batched value... | Compute the Monte Carlo return and advantage.
Args:
next_time_steps: batched tensor of TimeStep tuples after action is taken.
value_preds: Batched value prediction tensor. Should have one more entry
in time index than time_steps, with the final value corresponding to the
value predictio... | compute_return_and_advantage | python | tensorflow/agents | tf_agents/agents/ppo/ppo_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ppo/ppo_agent.py | Apache-2.0 |
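The GAE-style advantage computation described here can be sketched for a single episode in NumPy. This is a simplified scalar-loop version under assumed inputs; the real implementation is batched and operates on tensors:

```python
import numpy as np

def gae_advantages(rewards, discounts, values, lambda_=0.95):
    """Generalized Advantage Estimation over one episode.

    `values` has one more entry than `rewards`: its last entry is the
    value prediction for the final (next) state, as the docstring above
    requires.
    """
    # One-step TD errors: delta_t = r_t + gamma_t * V(s_{t+1}) - V(s_t)
    deltas = rewards + discounts * values[1:] - values[:-1]
    advantages = np.zeros_like(rewards)
    gae = 0.0
    for t in reversed(range(len(rewards))):
        gae = deltas[t] + discounts[t] * lambda_ * gae
        advantages[t] = gae
    return advantages

rewards = np.array([1.0, 1.0])
discounts = np.array([0.9, 0.9])
values = np.array([0.5, 0.5, 0.5])
adv = gae_advantages(rewards, discounts, values)
```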
def _preprocess(self, experience):
"""Performs advantage calculation for the collected experience.
Args:
experience: A (batch of) experience in the form of a `Trajectory`. The
structure of `experience` must match that of `self.collect_data_spec`.
All tensors in `experience` must be shaped... | Performs advantage calculation for the collected experience.
Args:
experience: A (batch of) experience in the form of a `Trajectory`. The
structure of `experience` must match that of `self.collect_data_spec`.
All tensors in `experience` must be shaped `[batch, time + 1, ...]` or
[time... | _preprocess | python | tensorflow/agents | tf_agents/agents/ppo/ppo_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ppo/ppo_agent.py | Apache-2.0 |
def _preprocess_sequence(self, experience):
"""Performs advantage calculation for the collected experience.
This function is a no-op if self._compute_value_and_advantage_in_train is
True, which means advantage calculation happens as part of agent.train().
Args:
experience: A (batch of) experienc... | Performs advantage calculation for the collected experience.
This function is a no-op if self._compute_value_and_advantage_in_train is
True, which means advantage calculation happens as part of agent.train().
Args:
experience: A (batch of) experience in the form of a `Trajectory`. The
struct... | _preprocess_sequence | python | tensorflow/agents | tf_agents/agents/ppo/ppo_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ppo/ppo_agent.py | Apache-2.0 |
def entropy_regularization_loss(
self,
time_steps: ts.TimeStep,
entropy: types.Tensor,
weights: types.Tensor,
debug_summaries: bool = False,
) -> types.Tensor:
"""Create regularization loss tensor based on agent parameters."""
if self._entropy_regularization > 0:
nest_utils... | Create regularization loss tensor based on agent parameters. | entropy_regularization_loss | python | tensorflow/agents | tf_agents/agents/ppo/ppo_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ppo/ppo_agent.py | Apache-2.0 |
def value_estimation_loss(
self,
time_steps: ts.TimeStep,
returns: types.Tensor,
weights: types.Tensor,
old_value_predictions: Optional[types.Tensor] = None,
debug_summaries: bool = False,
training: bool = False,
) -> types.Tensor:
"""Computes the value estimation loss fo... | Computes the value estimation loss for actor-critic training.
All tensors should have a single batch dimension.
Args:
time_steps: A batch of timesteps.
returns: Per-timestep returns for value function to predict. (Should come
from TD-lambda computation.)
weights: Optional scalar or e... | value_estimation_loss | python | tensorflow/agents | tf_agents/agents/ppo/ppo_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ppo/ppo_agent.py | Apache-2.0 |
def policy_gradient_loss(
self,
time_steps: ts.TimeStep,
actions: types.NestedTensor,
sample_action_log_probs: types.Tensor,
advantages: types.Tensor,
current_policy_distribution: types.NestedDistribution,
weights: types.Tensor,
debug_summaries: bool = False,
) -> types... | Create tensor for policy gradient loss.
All tensors should have a single batch dimension.
Args:
time_steps: TimeSteps with observations for each timestep.
actions: Tensor of actions for timesteps, aligned on index.
sample_action_log_probs: Tensor of sample probability of each action.
a... | policy_gradient_loss | python | tensorflow/agents | tf_agents/agents/ppo/ppo_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ppo/ppo_agent.py | Apache-2.0 |
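For the clipped variant of this loss (as used by the PPO-Clip agent), the surrogate objective can be sketched in NumPy. This is a textbook sketch of the clipped ratio objective, not the TF-Agents code:

```python
import numpy as np

def clipped_surrogate_loss(ratios, advantages, epsilon=0.2):
    """PPO-Clip objective: -mean(min(r*A, clip(r, 1-eps, 1+eps)*A))."""
    clipped = np.clip(ratios, 1.0 - epsilon, 1.0 + epsilon)
    return -np.mean(np.minimum(ratios * advantages, clipped * advantages))

ratios = np.array([0.5, 1.5])      # new_prob / old_prob per timestep
advantages = np.array([1.0, 1.0])
loss = clipped_surrogate_loss(ratios, advantages)
# min(0.5, 0.8) = 0.5 and min(1.5, 1.2) = 1.2; mean 0.85, negated.
```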
def kl_penalty_loss(
self,
time_steps: ts.TimeStep,
action_distribution_parameters: types.NestedTensor,
current_policy_distribution: types.NestedDistribution,
weights: types.Tensor,
debug_summaries: bool = False,
) -> types.Tensor:
"""Compute a loss that penalizes policy steps ... | Compute a loss that penalizes policy steps with high KL.
Based on KL divergence from old (data-collection) policy to new (updated)
policy.
All tensors should have a single batch dimension.
Args:
time_steps: TimeStep tuples with observations for each timestep. Used for
computing new acti... | kl_penalty_loss | python | tensorflow/agents | tf_agents/agents/ppo/ppo_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ppo/ppo_agent.py | Apache-2.0 |
def update_adaptive_kl_beta(
self, kl_divergence: types.Tensor
) -> Optional[tf.Operation]:
"""Create update op for adaptive KL penalty coefficient.
Args:
kl_divergence: KL divergence of old policy to new policy for all
timesteps.
Returns:
update_op: An op which runs the update... | Create update op for adaptive KL penalty coefficient.
Args:
kl_divergence: KL divergence of old policy to new policy for all
timesteps.
Returns:
update_op: An op which runs the update for the adaptive kl penalty term.
| update_adaptive_kl_beta | python | tensorflow/agents | tf_agents/agents/ppo/ppo_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ppo/ppo_agent.py | Apache-2.0 |
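The adaptive-KL heuristic behind this update can be sketched in plain Python. The tolerance and scaling factor below are illustrative defaults, not necessarily the values the agent uses:

```python
def update_kl_beta(beta, kl, target, tolerance=0.5, factor=2.0):
    """Adaptive KL penalty coefficient (sketch of the PPO heuristic)."""
    if kl > target * (1.0 + tolerance):
        return beta * factor        # policy moved too far: penalize more
    if kl < target * (1.0 - tolerance):
        return beta / factor        # policy barely moved: penalize less
    return beta                     # within tolerance: leave beta alone

b = update_kl_beta(1.0, kl=0.2, target=0.01)   # KL far above target
```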
def _get_discount(experience) -> types.Tensor:
"""Try to get the discount entry from `experience`.
Typically experience is either a Trajectory or a Transition.
Args:
experience: Data collected from e.g. a replay buffer.
Returns:
discount: The discount tensor stored in `experience`.
"""
if isinsta... | Try to get the discount entry from `experience`.
Typically experience is either a Trajectory or a Transition.
Args:
experience: Data collected from e.g. a replay buffer.
Returns:
discount: The discount tensor stored in `experience`.
| _get_discount | python | tensorflow/agents | tf_agents/agents/ppo/ppo_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ppo/ppo_agent.py | Apache-2.0 |
def _compute_returns_fn(rewards, discounts, next_state_return=0.0):
"""Python implementation of computing discounted returns."""
returns = np.zeros_like(rewards)
for t in range(len(returns) - 1, -1, -1):
returns[t] = rewards[t] + discounts[t] * next_state_return
next_state_return = returns[t]
return ret... | Python implementation of computing discounted returns. | _compute_returns_fn | python | tensorflow/agents | tf_agents/agents/ppo/ppo_agent_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ppo/ppo_agent_test.py | Apache-2.0 |
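A NumPy transcription of the reference implementation above, with made-up inputs to show the backward recursion:

```python
import numpy as np

def compute_returns(rewards, discounts, next_state_return=0.0):
    """Discounted returns, computed backward through time."""
    returns = np.zeros_like(rewards)
    for t in range(len(returns) - 1, -1, -1):
        returns[t] = rewards[t] + discounts[t] * next_state_return
        next_state_return = returns[t]
    return returns

rewards = np.array([1.0, 1.0, 1.0])
discounts = np.array([0.5, 0.5, 0.5])
rets = compute_returns(rewards, discounts)
# Backward pass: G_2 = 1, G_1 = 1 + 0.5*1 = 1.5, G_0 = 1 + 0.5*1.5 = 1.75.
```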
def __init__(
self,
time_step_spec: ts.TimeStep,
action_spec: types.NestedTensorSpec,
optimizer: Optional[types.Optimizer] = None,
actor_net: Optional[network.Network] = None,
value_net: Optional[network.Network] = None,
greedy_eval: bool = True,
importance_ratio_clipping... | Creates a PPO Agent implementing the clipped probability ratios.
Args:
time_step_spec: A `TimeStep` spec of the expected time_steps.
action_spec: A nest of BoundedTensorSpec representing the actions.
optimizer: Optimizer to use for the agent.
actor_net: A function actor_net(observations, ac... | __init__ | python | tensorflow/agents | tf_agents/agents/ppo/ppo_clip_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ppo/ppo_clip_agent.py | Apache-2.0 |
def __init__(
self,
time_step_spec: ts.TimeStep,
action_spec: types.NestedTensorSpec,
actor_net: network.Network,
value_net: network.Network,
num_epochs: int,
initial_adaptive_kl_beta: types.Float,
adaptive_kl_target: types.Float,
adaptive_kl_tolerance: types.Float,... | Creates a PPO Agent implementing the KL penalty loss.
Args:
time_step_spec: A `TimeStep` spec of the expected time_steps.
action_spec: A nest of `BoundedTensorSpec` representing the actions.
actor_net: A `network.DistributionNetwork` which maps observations to
action distributions. Common... | __init__ | python | tensorflow/agents | tf_agents/agents/ppo/ppo_kl_penalty_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ppo/ppo_kl_penalty_agent.py | Apache-2.0 |
def get_initial_value_state(
self, batch_size: types.Int
) -> types.NestedTensor:
"""Returns the initial state of the value network.
Args:
batch_size: A constant or Tensor holding the batch size. Can be None, in
which case the state will not have a batch dimension added.
Returns:
... | Returns the initial state of the value network.
Args:
batch_size: A constant or Tensor holding the batch size. Can be None, in
which case the state will not have a batch dimension added.
Returns:
A nest of zero tensors matching the spec of the value network state.
| get_initial_value_state | python | tensorflow/agents | tf_agents/agents/ppo/ppo_policy.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ppo/ppo_policy.py | Apache-2.0 |
def apply_value_network(
self,
observations: types.NestedTensor,
step_types: types.Tensor,
value_state: Optional[types.NestedTensor] = None,
training: bool = False,
) -> types.NestedTensor:
"""Apply value network to time_step, potentially a sequence.
If observation_normalizer is... | Apply value network to time_step, potentially a sequence.
If observation_normalizer is not None, applies observation normalization.
Args:
observations: A (possibly nested) observation tensor with outer_dims
either (batch_size,) or (batch_size, time_index). If observations is a
time serie... | apply_value_network | python | tensorflow/agents | tf_agents/agents/ppo/ppo_policy.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ppo/ppo_policy.py | Apache-2.0 |
def make_trajectory_mask(batched_traj: trajectory.Trajectory) -> types.Tensor:
"""Mask boundary trajectories and those with invalid returns and advantages.
Args:
batched_traj: Trajectory, doubly-batched [batch_dim, time_dim,...]. It must
be preprocessed already.
Returns:
A mask, type tf.float32, t... | Mask boundary trajectories and those with invalid returns and advantages.
Args:
batched_traj: Trajectory, doubly-batched [batch_dim, time_dim,...]. It must
be preprocessed already.
Returns:
A mask, type tf.float32, that is 0.0 for all between-episode Trajectory
(batched_traj.step_type is LAST)... | make_trajectory_mask | python | tensorflow/agents | tf_agents/agents/ppo/ppo_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ppo/ppo_utils.py | Apache-2.0 |
def make_timestep_mask(
batched_next_time_step: ts.TimeStep, allow_partial_episodes: bool = False
) -> types.Tensor:
"""Create a mask for transitions and optionally final incomplete episodes.
Args:
batched_next_time_step: Next timestep, doubly-batched [batch_dim, time_dim,
...].
allow_partial_epi... | Create a mask for transitions and optionally final incomplete episodes.
Args:
batched_next_time_step: Next timestep, doubly-batched [batch_dim, time_dim,
...].
allow_partial_episodes: If true, then steps on incomplete episodes are
allowed.
Returns:
A mask, type tf.float32, that is 0.0 for ... | make_timestep_mask | python | tensorflow/agents | tf_agents/agents/ppo/ppo_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ppo/ppo_utils.py | Apache-2.0 |
def nested_kl_divergence(
nested_from_distribution: types.NestedDistribution,
nested_to_distribution: types.NestedDistribution,
outer_dims: Sequence[int] = (),
) -> types.Tensor:
"""Given two nested distributions, sum the KL divergences of the leaves."""
nest_utils.assert_same_structure(
nested_fr... | Given two nested distributions, sum the KL divergences of the leaves. | nested_kl_divergence | python | tensorflow/agents | tf_agents/agents/ppo/ppo_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ppo/ppo_utils.py | Apache-2.0 |
def get_metric_observers(metrics):
"""Returns a list of observers, one for each metric."""
def get_metric_observer(metric):
def metric_observer(time_step, action, next_time_step, policy_state):
action_step = policy_step.PolicyStep(action, policy_state, ())
traj = trajectory.from_transition(time_st... | Returns a list of observers, one for each metric. | get_metric_observers | python | tensorflow/agents | tf_agents/agents/ppo/ppo_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ppo/ppo_utils.py | Apache-2.0 |
def get_learning_rate(optimizer):
"""Gets the current learning rate from an optimizer to be graphed."""
# Keras optimizers uses `learning_rate`.
if hasattr(optimizer, 'learning_rate'):
learning_rate = optimizer.learning_rate # pylint: disable=protected-access
# Adam optimizers store their learning rate in ... | Gets the current learning rate from an optimizer to be graphed. | get_learning_rate | python | tensorflow/agents | tf_agents/agents/ppo/ppo_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ppo/ppo_utils.py | Apache-2.0 |
def train_eval(
root_dir,
env_name='HalfCheetah-v2',
env_load_fn=suite_mujoco.load,
random_seed=None,
# TODO(b/127576522): rename to policy_fc_layers.
actor_fc_layers=(200, 100),
value_fc_layers=(200, 100),
use_rnns=False,
lstm_size=(20,),
# Params for collect
num_environment... | A simple train and eval for PPO. | train_eval | python | tensorflow/agents | tf_agents/agents/ppo/examples/v2/train_eval_clip_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/ppo/examples/v2/train_eval_clip_agent.py | Apache-2.0 |
def _get_target_updater(self, tau=1.0, period=1):
"""Performs a soft update of the target network.
For each weight w_s in the q network, and its corresponding
weight w_t in the target_q_network, a soft update is:
w_t = (1 - tau) * w_t + tau * w_s
Args:
tau: A float scalar in [0, 1]. Default ... | Performs a soft update of the target network.
For each weight w_s in the q network, and its corresponding
weight w_t in the target_q_network, a soft update is:
w_t = (1 - tau) * w_t + tau * w_s
Args:
tau: A float scalar in [0, 1]. Default `tau=1.0` means hard update. Used
for target netw... | _get_target_updater | python | tensorflow/agents | tf_agents/agents/qtopt/qtopt_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/qtopt/qtopt_agent.py | Apache-2.0 |
def _get_target_updater_delayed(self, tau_delayed=1.0, period_delayed=1):
"""Performs a soft update of the delayed target network.
For each weight w_s in the q network, and its corresponding
weight w_t in the target_q_network, a soft update is:
w_t = (1 - tau) * w_t + tau * w_s
Args:
tau_del... | Performs a soft update of the delayed target network.
For each weight w_s in the q network, and its corresponding
weight w_t in the target_q_network, a soft update is:
w_t = (1 - tau) * w_t + tau * w_s
Args:
tau_delayed: A float scalar in [0, 1]. Default `tau=1.0` means hard
update. Used... | _get_target_updater_delayed | python | tensorflow/agents | tf_agents/agents/qtopt/qtopt_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/qtopt/qtopt_agent.py | Apache-2.0 |
def _get_target_updater_delayed_2(self, tau_delayed=1.0, period_delayed=1):
"""Performs a soft update of the delayed target network.
For each weight w_s in the q network, and its corresponding
weight w_t in the target_q_network, a soft update is:
w_t = (1 - tau) * w_t + tau * w_s
Args:
tau_d... | Performs a soft update of the delayed target network.
For each weight w_s in the q network, and its corresponding
weight w_t in the target_q_network, a soft update is:
w_t = (1 - tau) * w_t + tau * w_s
Args:
tau_delayed: A float scalar in [0, 1]. Default `tau=1.0` means hard
update. Used... | _get_target_updater_delayed_2 | python | tensorflow/agents | tf_agents/agents/qtopt/qtopt_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/qtopt/qtopt_agent.py | Apache-2.0 |
def _add_auxiliary_losses(self, transition, weights, losses_dict):
"""Computes auxiliary losses, updating losses_dict in place."""
total_auxiliary_loss = 0
if self._auxiliary_loss_fns is not None:
for auxiliary_loss_fn in self._auxiliary_loss_fns:
auxiliary_loss, auxiliary_reg_loss = auxiliary... | Computes auxiliary losses, updating losses_dict in place. | _add_auxiliary_losses | python | tensorflow/agents | tf_agents/agents/qtopt/qtopt_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/qtopt/qtopt_agent.py | Apache-2.0 |
def _loss(self, experience, weights=None, training=False):
"""Computes loss for QtOpt training.
Args:
experience: A batch of experience data in the form of a `Trajectory` or
`Transition`. The structure of `experience` must match that of
`self.collect_policy.step_spec`. If a `Trajectory`,... | Computes loss for QtOpt training.
Args:
experience: A batch of experience data in the form of a `Trajectory` or
`Transition`. The structure of `experience` must match that of
`self.collect_policy.step_spec`. If a `Trajectory`, all tensors in
`experience` must be shaped `[B, T, ...]` ... | _loss | python | tensorflow/agents | tf_agents/agents/qtopt/qtopt_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/qtopt/qtopt_agent.py | Apache-2.0 |
def VerifyTrainAndRestore(self):
"""Helper function for testing correct variable updating and restoring."""
batch_size = 2
seq_len = 2
observations = tensor_spec.sample_spec_nest(
self._observation_spec, outer_dims=(batch_size, seq_len)
)
actions = tensor_spec.sample_spec_nest(
s... | Helper function for testing correct variable updating and restoring. | VerifyTrainAndRestore | python | tensorflow/agents | tf_agents/agents/qtopt/qtopt_agent_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/qtopt/qtopt_agent_test.py | Apache-2.0 |
def __init__(
self,
time_step_spec: ts.TimeStep,
action_spec: types.NestedTensorSpec,
policy_class: PolicyClassType,
debug_summaries: bool = False,
summarize_grads_and_vars: bool = False,
train_step_counter: Optional[tf.Variable] = None,
num_outer_dims: int = 1,
nam... | Creates a fixed-policy agent with no-op for training.
Args:
time_step_spec: A `TimeStep` spec of the expected time_steps.
action_spec: A nest of BoundedTensorSpec representing the actions.
policy_class: a tf_policy.TFPolicy or py_policy.PyPolicy class to use as a
policy.
debug_summa... | __init__ | python | tensorflow/agents | tf_agents/agents/random/fixed_policy_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/random/fixed_policy_agent.py | Apache-2.0 |
def _train(self, experience, weights):
"""Do nothing. Arguments are ignored and loss is always 0."""
del experience # Unused
del weights # Unused
# Incrementing the step counter.
self.train_step_counter.assign_add(1)
# Returning 0 loss.
return tf_agent.LossInfo(0.0, None) | Do nothing. Arguments are ignored and loss is always 0. | _train | python | tensorflow/agents | tf_agents/agents/random/fixed_policy_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/random/fixed_policy_agent.py | Apache-2.0 |
def __init__(
self,
time_step_spec: ts.TimeStep,
action_spec: types.NestedTensorSpec,
debug_summaries: bool = False,
summarize_grads_and_vars: bool = False,
train_step_counter: Optional[tf.Variable] = None,
num_outer_dims: int = 1,
name: Optional[Text] = None,
):
""... | Creates a random agent.
Args:
time_step_spec: A `TimeStep` spec of the expected time_steps.
action_spec: A nest of BoundedTensorSpec representing the actions.
debug_summaries: A bool to gather debug summaries.
summarize_grads_and_vars: If true, gradient summaries will be written.
trai... | __init__ | python | tensorflow/agents | tf_agents/agents/random/random_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/random/random_agent.py | Apache-2.0 |
def _standard_normalize(values, axes=(0,)):
"""Standard normalizes values `values`.
Args:
values: Tensor with values to be standardized.
axes: Axes used to compute mean and variances.
Returns:
Standardized values (values - mean(values[axes])) / std(values[axes]).
"""
values_mean, values_var = tf... | Standard normalizes values `values`.
Args:
values: Tensor with values to be standardized.
axes: Axes used to compute mean and variances.
Returns:
Standardized values (values - mean(values[axes])) / std(values[axes]).
| _standard_normalize | python | tensorflow/agents | tf_agents/agents/reinforce/reinforce_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/reinforce/reinforce_agent.py | Apache-2.0 |
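A NumPy sketch of the same standardization; the truncated TensorFlow version computes the moments with TF ops, and this version is only for illustration:

```python
import numpy as np

def standard_normalize(values, axes=(0,)):
    """(values - mean(values[axes])) / std(values[axes])."""
    mean = np.mean(values, axis=axes, keepdims=True)
    std = np.std(values, axis=axes, keepdims=True)
    return (values - mean) / std

x = np.array([1.0, 2.0, 3.0, 4.0])
z = standard_normalize(x)  # zero mean, unit standard deviation
```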
def _entropy_loss(distributions, spec, weights=None):
"""Computes entropy loss.
Args:
distributions: A possibly batched tuple of distributions.
spec: A nested tuple representing the action spec.
weights: Optional scalar or element-wise (per-batch-entry) importance
weights. Includes a mask for in... | Computes entropy loss.
Args:
distributions: A possibly batched tuple of distributions.
spec: A nested tuple representing the action spec.
weights: Optional scalar or element-wise (per-batch-entry) importance
weights. Includes a mask for invalid timesteps.
Returns:
A Tensor representing the ... | _entropy_loss | python | tensorflow/agents | tf_agents/agents/reinforce/reinforce_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/reinforce/reinforce_agent.py | Apache-2.0 |
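Entropy regularization rewards spread-out action distributions. A minimal categorical-entropy sketch (the real loss works on nested `tfp` distributions; this is only the underlying quantity):

```python
import numpy as np

def categorical_entropy(probs):
    """Entropy of a categorical distribution: -sum(p * log p)."""
    probs = np.asarray(probs)
    return -np.sum(probs * np.log(probs), axis=-1)

# A uniform distribution has maximal entropy, log(n); a peaked one is lower.
h_uniform = categorical_entropy([0.25, 0.25, 0.25, 0.25])
h_peaked = categorical_entropy([0.97, 0.01, 0.01, 0.01])
```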
def _get_initial_policy_state(policy, time_steps):
"""Gets the initial state of a policy."""
batch_size = (
tf.compat.dimension_at_index(time_steps.discount.shape, 0)
or tf.shape(time_steps.discount)[0]
)
return policy.get_initial_state(batch_size=batch_size) | Gets the initial state of a policy. | _get_initial_policy_state | python | tensorflow/agents | tf_agents/agents/reinforce/reinforce_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/reinforce/reinforce_agent.py | Apache-2.0 |
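The `or`-fallback in `_get_initial_policy_state` is a common TF idiom: prefer the statically known outer dimension, and only fall back to the runtime shape when the static one is unknown. A pure-Python analogue (names are my own; `None` stands in for an unknown static dimension):

```python
def outer_batch_size(static_shape, runtime_shape):
    """Illustrative analogue of the pattern above: use the statically
    known outer dimension if present, else fall back to the shape
    computed at runtime (tf.shape in the original)."""
    return static_shape[0] if static_shape[0] is not None else runtime_shape[0]
```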
def __init__(
self,
time_step_spec: ts.TimeStep,
action_spec: types.TensorSpec,
actor_network: network.Network,
optimizer: types.Optimizer,
value_network: Optional[network.Network] = None,
value_estimation_loss_coef: types.Float = 0.2,
advantage_fn: Optional[AdvantageFnTy... | Creates a REINFORCE Agent.
Args:
time_step_spec: A `TimeStep` spec of the expected time_steps.
action_spec: A nest of BoundedTensorSpec representing the actions.
actor_network: A tf_agents.network.Network to be used by the agent. The
network will be called with call(observation, step_type... | __init__ | python | tensorflow/agents | tf_agents/agents/reinforce/reinforce_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/reinforce/reinforce_agent.py | Apache-2.0 |
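The constructor above accepts an optional `advantage_fn`. A common (though here assumed, since the row is truncated) choice is to subtract the value-network baseline from the empirical returns — a hypothetical `advantage_fn` sketched over flat lists:

```python
def baseline_advantage(returns, value_preds):
    """A hypothetical advantage_fn: subtract the value-network baseline
    from empirical returns. This lowers gradient variance while keeping
    the policy-gradient estimate unbiased."""
    return [r - v for r, v in zip(returns, value_preds)]
```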
def policy_gradient_loss(
self,
actions_distribution: types.NestedDistribution,
actions: types.NestedTensor,
is_boundary: types.Tensor,
returns: types.Tensor,
num_episodes: types.Int,
weights: Optional[types.Tensor] = None,
) -> types.Tensor:
"""Computes the policy gradie... | Computes the policy gradient loss.
Args:
actions_distribution: A possibly batched tuple of action distributions.
actions: Tensor with a batch of actions.
is_boundary: Tensor of booleans that indicate if the corresponding action
was in a boundary trajectory and should be ignored.
ret... | policy_gradient_loss | python | tensorflow/agents | tf_agents/agents/reinforce/reinforce_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/reinforce/reinforce_agent.py | Apache-2.0 |
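The REINFORCE surrogate loss computed by `policy_gradient_loss` is, in essence, `-sum(log pi(a_t|s_t) * G_t)` over non-boundary steps, averaged over episodes. A flat-list sketch of that computation (assumed simplification of the batched, distribution-based TF version above):

```python
import math

def policy_gradient_loss(log_probs, returns, is_boundary, num_episodes):
    """REINFORCE surrogate: -sum(log pi(a_t|s_t) * G_t) over
    non-boundary timesteps, averaged over episodes."""
    total = sum(lp * g for lp, g, b in zip(log_probs, returns, is_boundary)
                if not b)
    return -total / num_episodes
```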
def entropy_regularization_loss(
self,
actions_distribution: types.NestedDistribution,
weights: Optional[types.Tensor] = None,
) -> types.Tensor:
"""Computes the optional entropy regularization loss.
Extending REINFORCE by entropy regularization was originally proposed in
"Function opti... | Computes the optional entropy regularization loss.
Extending REINFORCE by entropy regularization was originally proposed in
"Function optimization using connectionist reinforcement learning
algorithms." (Williams and Peng, 1991).
Args:
actions_distribution: A possibly batched tuple of action dis... | entropy_regularization_loss | python | tensorflow/agents | tf_agents/agents/reinforce/reinforce_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/reinforce/reinforce_agent.py | Apache-2.0 |
def value_estimation_loss(
self,
value_preds: types.Tensor,
returns: types.Tensor,
num_episodes: types.Int,
weights: Optional[types.Tensor] = None,
) -> types.Tensor:
"""Computes the value estimation loss.
Args:
value_preds: Per-timestep estimated values.
returns: Pe... | Computes the value estimation loss.
Args:
value_preds: Per-timestep estimated values.
returns: Per-timestep returns for value function to predict.
num_episodes: Number of episodes contained in the training data.
weights: Optional scalar or element-wise (per-batch-entry) importance
w... | value_estimation_loss | python | tensorflow/agents | tf_agents/agents/reinforce/reinforce_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/reinforce/reinforce_agent.py | Apache-2.0 |
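`value_estimation_loss` regresses the value network's per-timestep predictions onto the empirical returns. A minimal sketch of that squared-error loss over flat lists (the per-episode normalization follows the docstring; the exact weighting in the real code may differ):

```python
def value_estimation_loss(value_preds, returns, num_episodes, weights=None):
    """Squared-error regression of predicted values onto empirical
    returns, normalized by episode count as in the docstring above."""
    if weights is None:
        weights = [1.0] * len(value_preds)
    return sum(w * (v - g) ** 2
               for w, v, g in zip(weights, value_preds, returns)) / num_episodes
```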
def train_eval(
root_dir,
env_name='CartPole-v0',
num_iterations=1000,
actor_fc_layers=(100,),
value_net_fc_layers=(100,),
use_value_network=False,
use_tf_functions=True,
# Params for collect
collect_episodes_per_iteration=2,
replay_buffer_capacity=2000,
# Params for train
... | A simple train and eval for Reinforce. | train_eval | python | tensorflow/agents | tf_agents/agents/reinforce/examples/v2/train_eval.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/reinforce/examples/v2/train_eval.py | Apache-2.0 |
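The train/eval loop collects whole episodes, and the returns fed into the losses above are discounted cumulative rewards. A stdlib sketch of that backward recursion (gamma as a stand-in for the agent's discount factor):

```python
def discounted_returns(rewards, gamma=0.99):
    """Backward recursion G_t = r_t + gamma * G_{t+1} over one
    episode's rewards; these are the per-timestep returns REINFORCE
    regresses on."""
    returns, running = [0.0] * len(rewards), 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns
```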
def __init__(
self,
time_step_spec: ts.TimeStep,
action_spec: types.NestedTensorSpec,
critic_network: network.Network,
actor_network: network.Network,
actor_optimizer: types.Optimizer,
critic_optimizer: types.Optimizer,
alpha_optimizer: types.Optimizer,
actor_loss_w... | Creates a SAC Agent.
Args:
time_step_spec: A `TimeStep` spec of the expected time_steps.
action_spec: A nest of BoundedTensorSpec representing the actions.
critic_network: A function critic_network((observations, actions)) that
returns the q_values for each observation and action.
a... | __init__ | python | tensorflow/agents | tf_agents/agents/sac/sac_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/sac/sac_agent.py | Apache-2.0 |
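SAC's twin critics and temperature, taken by the constructor above, come together in the soft Bellman target. A scalar sketch of the standard SAC target (my own simplification — terminal-state handling and batching are omitted, and this is not the agent's literal code):

```python
def soft_q_target(reward, discount, next_q1, next_q2, next_log_pi, alpha):
    """Illustrative SAC Bellman target:
    r + gamma * (min(Q1', Q2') - alpha * log pi(a'|s')).
    The min over twin critics curbs overestimation; the entropy term
    is weighted by the temperature alpha."""
    return reward + discount * (min(next_q1, next_q2) - alpha * next_log_pi)
```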
def _initialize(self):
"""Returns an op to initialize the agent.
Copies weights from the Q networks to the target Q network.
"""
common.soft_variables_update(
self._critic_network_1.variables,
self._target_critic_network_1.variables,
tau=1.0,
)
common.soft_variables_upda... | Returns an op to initialize the agent.
Copies weights from the Q networks to the target Q network.
| _initialize | python | tensorflow/agents | tf_agents/agents/sac/sac_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/sac/sac_agent.py | Apache-2.0 |
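`_initialize` calls `common.soft_variables_update` with `tau=1.0`, i.e. a hard copy from the critics into their targets; during training a small `tau` gives Polyak averaging. A list-based sketch of that update rule (assumed simplification of the variable-assignment version in TF-Agents):

```python
def soft_variables_update(source, target, tau):
    """Polyak averaging: target <- tau * source + (1 - tau) * target.
    With tau=1.0, as in _initialize above, this is a hard copy from
    the critic networks into their targets."""
    return [tau * s + (1.0 - tau) * t for s, t in zip(source, target)]
```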
def _train(self, experience, weights):
"""Returns a train op to update the agent's networks.
This method trains with the provided batched experience.
Args:
experience: A time-stacked trajectory object.
weights: Optional scalar or elementwise (per-batch-entry) importance
weights.
R... | Returns a train op to update the agent's networks.
This method trains with the provided batched experience.
Args:
experience: A time-stacked trajectory object.
weights: Optional scalar or elementwise (per-batch-entry) importance
weights.
Returns:
A train_op.
Raises:
V... | _train | python | tensorflow/agents | tf_agents/agents/sac/sac_agent.py | https://github.com/tensorflow/agents/blob/master/tf_agents/agents/sac/sac_agent.py | Apache-2.0 |