| code (stringlengths 66-870k) | docstring (stringlengths 19-26.7k) | func_name (stringlengths 1-138) | language (stringclasses 1 value) | repo (stringlengths 7-68) | path (stringlengths 5-324) | url (stringlengths 46-389) | license (stringclasses 7 values) |
|---|---|---|---|---|---|---|---|
def spec_means_and_magnitudes(action_spec):
"""Get the center and magnitude of the ranges in action spec."""
action_means = tf.nest.map_structure(
lambda spec: (spec.maximum + spec.minimum) / 2.0, action_spec
)
action_magnitudes = tf.nest.map_structure(
lambda spec: (spec.maximum - spec.minimum) / 2... | Get the center and magnitude of the ranges in action spec. | spec_means_and_magnitudes | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
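The `spec_means_and_magnitudes` row above is truncated, but the arithmetic it describes is just the center and half-range of the spec bounds. A minimal NumPy sketch (the `Spec` class here is an illustrative stand-in, not TF-Agents' `BoundedTensorSpec`):

```python
import numpy as np

class Spec:
    """Illustrative stand-in for a bounded spec: just minimum/maximum arrays."""
    def __init__(self, minimum, maximum):
        self.minimum = np.asarray(minimum, dtype=np.float64)
        self.maximum = np.asarray(maximum, dtype=np.float64)

def spec_means_and_magnitudes(spec):
    """Center and half-range of the spec bounds."""
    means = (spec.maximum + spec.minimum) / 2.0
    magnitudes = (spec.maximum - spec.minimum) / 2.0
    return means, magnitudes

spec = Spec(minimum=[-2.0, 0.0], maximum=[2.0, 1.0])
means, mags = spec_means_and_magnitudes(spec)
# means -> [0.0, 0.5], mags -> [2.0, 0.5]
```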
def scale_to_spec(tensor, spec):
"""Shapes and scales a batch into the given spec bounds.
Args:
tensor: A [batch x n] tensor with values in the range of [-1, 1].
spec: (BoundedTensorSpec) to use for scaling the action.
Returns:
    A batch scaled to the given spec bounds.
"""
tensor = tf.reshape(tensor... | Shapes and scales a batch into the given spec bounds.
Args:
tensor: A [batch x n] tensor with values in the range of [-1, 1].
spec: (BoundedTensorSpec) to use for scaling the action.
Returns:
    A batch scaled to the given spec bounds.
| scale_to_spec | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
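The truncated `scale_to_spec` row composes with the previous one: given means and magnitudes of the bounds, a value in `[-1, 1]` maps affinely onto `[minimum, maximum]`. A NumPy sketch of that mapping (scalar bounds for brevity; not the TF-Agents signature):

```python
import numpy as np

def scale_to_spec(tensor, minimum, maximum):
    """Maps values in [-1, 1] onto [minimum, maximum] via center + half-range."""
    means = (maximum + minimum) / 2.0
    magnitudes = (maximum - minimum) / 2.0
    return means + magnitudes * np.asarray(tensor, dtype=np.float64)

batch = np.array([[-1.0, 0.0, 1.0]])
out = scale_to_spec(batch, minimum=0.0, maximum=4.0)
# -> [[0.0, 2.0, 4.0]]
```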
def ornstein_uhlenbeck_process(
initial_value,
damping=0.15,
stddev=0.2,
seed=None,
scope='ornstein_uhlenbeck_noise',
):
"""An op for generating noise from a zero-mean Ornstein-Uhlenbeck process.
The Ornstein-Uhlenbeck process is a process that generates temporally
correlated noise via a rand... | An op for generating noise from a zero-mean Ornstein-Uhlenbeck process.
The Ornstein-Uhlenbeck process is a process that generates temporally
correlated noise via a random walk with damping. This process describes
  the velocity of a particle undergoing Brownian motion in the presence of
friction. This can be us... | ornstein_uhlenbeck_process | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
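The Ornstein-Uhlenbeck row is cut off, but the recurrence it describes is a damped random walk: each step decays the state toward zero and adds Gaussian noise. A NumPy sketch under that reading (parameter names mirror the row; the `steps`/`rng` arguments are illustrative additions):

```python
import numpy as np

def ou_noise(initial_value, damping=0.15, stddev=0.2, steps=5, rng=None):
    """Damped random walk: x <- (1 - damping) * x + N(0, stddev) each step."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = float(initial_value)
    samples = []
    for _ in range(steps):
        x = (1.0 - damping) * x + rng.normal(0.0, stddev)
        samples.append(x)
    return samples

samples = ou_noise(0.0, steps=1000)
# temporally correlated, but zero-mean over long horizons
```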
def __init__(
self,
initial_value,
damping=0.15,
stddev=0.2,
seed=None,
scope='ornstein_uhlenbeck_noise',
):
"""A Class for generating noise from a zero-mean Ornstein-Uhlenbeck process.
The Ornstein-Uhlenbeck process is a process that generates temporally
correlated no... | A Class for generating noise from a zero-mean Ornstein-Uhlenbeck process.
The Ornstein-Uhlenbeck process is a process that generates temporally
correlated noise via a random walk with damping. This process describes
  the velocity of a particle undergoing Brownian motion in the presence of
friction. This... | __init__ | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
def log_probability(distributions, actions, action_spec):
"""Computes log probability of actions given distribution.
Args:
distributions: A possibly batched tuple of distributions.
actions: A possibly batched action tuple.
action_spec: A nested tuple representing the action spec.
Returns:
A Tens... | Computes log probability of actions given distribution.
Args:
distributions: A possibly batched tuple of distributions.
actions: A possibly batched action tuple.
action_spec: A nested tuple representing the action spec.
Returns:
A Tensor representing the log probability of each action in the batch... | log_probability | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
def entropy(distributions, action_spec, outer_rank=None):
"""Computes total entropy of distribution.
Args:
distributions: A possibly batched tuple of distributions.
action_spec: A nested tuple representing the action spec.
outer_rank: Optional outer rank of the distributions. If not provided use
... | Computes total entropy of distribution.
Args:
distributions: A possibly batched tuple of distributions.
action_spec: A nested tuple representing the action spec.
outer_rank: Optional outer rank of the distributions. If not provided use
distribution.mode() to compute it.
Returns:
A Tensor rep... | entropy | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
def discounted_future_sum(values, gamma, num_steps):
"""Discounted future sum of batch-major values.
Args:
values: A Tensor of shape [batch_size, total_steps] and dtype float32.
gamma: A float discount value.
num_steps: A positive integer number of future steps to sum.
Returns:
A Tensor of shape... | Discounted future sum of batch-major values.
Args:
values: A Tensor of shape [batch_size, total_steps] and dtype float32.
gamma: A float discount value.
num_steps: A positive integer number of future steps to sum.
Returns:
A Tensor of shape [batch_size, total_steps], where each entry `(i, j)` is
... | discounted_future_sum | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
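The `discounted_future_sum` docstring is truncated at the defining formula; each entry `(i, j)` is the sum of the next `num_steps` values starting at `j`, discounted by powers of `gamma` and zero-padded past the episode end. A NumPy sketch of that definition:

```python
import numpy as np

def discounted_future_sum(values, gamma, num_steps):
    """out[i, j] = sum_{k < num_steps} gamma**k * values[i, j + k], zero-padded."""
    batch_size, total_steps = values.shape
    out = np.zeros((batch_size, total_steps))
    for k in range(min(num_steps, total_steps)):
        out[:, :total_steps - k] += (gamma ** k) * values[:, k:]
    return out

values = np.ones((1, 4))
out = discounted_future_sum(values, gamma=0.5, num_steps=2)
# each entry sums the current value plus 0.5 * the next one, when it exists
```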
def discounted_future_sum_masked(values, gamma, num_steps, episode_lengths):
"""Discounted future sum of batch-major values.
Args:
values: A Tensor of shape [batch_size, total_steps] and dtype float32.
gamma: A float discount value.
num_steps: A positive integer number of future steps to sum.
episo... | Discounted future sum of batch-major values.
Args:
values: A Tensor of shape [batch_size, total_steps] and dtype float32.
gamma: A float discount value.
num_steps: A positive integer number of future steps to sum.
    episode_lengths: A vector of shape [batch_size] with num_steps per episode.
Returns:
... | discounted_future_sum_masked | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
def shift_values(values, gamma, num_steps, final_values=None):
"""Shifts batch-major values in time by some amount.
Args:
values: A Tensor of shape [batch_size, total_steps] and dtype float32.
gamma: A float discount value.
num_steps: A nonnegative integer amount to shift values by.
final_values: A... | Shifts batch-major values in time by some amount.
Args:
values: A Tensor of shape [batch_size, total_steps] and dtype float32.
gamma: A float discount value.
num_steps: A nonnegative integer amount to shift values by.
final_values: A float32 Tensor of shape [batch_size] corresponding to the
val... | shift_values | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
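The `shift_values` row truncates just as it introduces `final_values`. One plausible reading, sketched in NumPy: values are shifted left by `num_steps` and discounted by `gamma**num_steps`, and positions shifted past the end bootstrap from `final_values` discounted by the number of remaining steps. This is a hedged interpretation, not a transcription of the TF-Agents implementation:

```python
import numpy as np

def shift_values(values, gamma, num_steps, final_values=None):
    """Shift batch-major values left by num_steps, discounting by gamma."""
    batch_size, total_steps = values.shape
    if final_values is None:
        final_values = np.zeros(batch_size)
    out = np.empty((batch_size, total_steps))
    for j in range(total_steps):
        if j + num_steps < total_steps:
            out[:, j] = (gamma ** num_steps) * values[:, j + num_steps]
        else:
            out[:, j] = (gamma ** (total_steps - j)) * final_values
    return out

out = shift_values(np.array([[1.0, 2.0, 3.0]]), gamma=0.9,
                   num_steps=1, final_values=np.array([10.0]))
# -> [[0.9 * 2, 0.9 * 3, 0.9 * 10]] = [[1.8, 2.7, 9.0]]
```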
def get_episode_mask(time_steps):
"""Create a mask that is 0.0 for all final steps, 1.0 elsewhere.
Args:
time_steps: A TimeStep namedtuple representing a batch of steps.
Returns:
A float32 Tensor with 0s where step_type == LAST and 1s otherwise.
"""
episode_mask = tf.cast(
tf.not_equal(time_st... | Create a mask that is 0.0 for all final steps, 1.0 elsewhere.
Args:
time_steps: A TimeStep namedtuple representing a batch of steps.
Returns:
A float32 Tensor with 0s where step_type == LAST and 1s otherwise.
| get_episode_mask | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
def get_contiguous_sub_episodes(next_time_steps_discount):
"""Computes mask on sub-episodes which includes only contiguous components.
Args:
next_time_steps_discount: Tensor of shape [batch_size, total_steps]
corresponding to environment discounts on next time steps (i.e.
      next_time_steps.discount).... | Computes a mask on sub-episodes that includes only contiguous components.
Args:
next_time_steps_discount: Tensor of shape [batch_size, total_steps]
corresponding to environment discounts on next time steps (i.e.
next_time_steps.discount).
Returns:
A float Tensor of shape [batch_size, total_step... | get_contiguous_sub_episodes | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
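The contiguous-sub-episode row never shows its implementation. One plausible reading, sketched in NumPy: a step is kept while no zero discount has occurred strictly before it, so the terminal step (discount `0`) is still included and everything after it is masked out. An exclusive cumulative product expresses this; treat it as an assumption, not the TF-Agents code:

```python
import numpy as np

def get_contiguous_sub_episodes(discounts):
    """Mask keeping each row's first contiguous sub-episode (NumPy sketch)."""
    nonterminal = (discounts != 0.0).astype(np.float64)
    # Exclusive cumulative product: stays 1 until after the first zero discount.
    shifted = np.concatenate(
        [np.ones((discounts.shape[0], 1)), nonterminal[:, :-1]], axis=1)
    return np.cumprod(shifted, axis=1)

mask = get_contiguous_sub_episodes(np.array([[1.0, 1.0, 0.0, 1.0]]))
# the terminal step is kept; the step after it is masked
```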
def convert_q_logits_to_values(logits, support):
"""Converts a set of Q-value logits into Q-values using the provided support.
Args:
logits: A Tensor representing the Q-value logits.
support: The support of the underlying distribution.
Returns:
A Tensor containing the expected Q-values.
"""
prob... | Converts a set of Q-value logits into Q-values using the provided support.
Args:
logits: A Tensor representing the Q-value logits.
support: The support of the underlying distribution.
Returns:
A Tensor containing the expected Q-values.
| convert_q_logits_to_values | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
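The `convert_q_logits_to_values` row cuts off after computing probabilities; the expected Q-value is the softmax of the logits dotted with the support. A NumPy sketch:

```python
import numpy as np

def convert_q_logits_to_values(logits, support):
    """Expected Q-values: softmax over the last axis, dotted with the support."""
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return probs @ np.asarray(support, dtype=np.float64)

logits = np.array([[0.0, 0.0]])  # uniform distribution over the support
q = convert_q_logits_to_values(logits, support=[-1.0, 1.0])
# uniform over {-1, 1} -> expected value 0
```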
def generate_tensor_summaries(tag, tensor, step):
"""Generates various summaries of `tensor` such as histogram, max, min, etc.
Args:
tag: A namescope tag for the summaries.
tensor: The tensor to generate summaries of.
step: Variable to use for summaries.
"""
with tf.name_scope(tag):
tf.compat.v... | Generates various summaries of `tensor` such as histogram, max, min, etc.
Args:
tag: A namescope tag for the summaries.
tensor: The tensor to generate summaries of.
step: Variable to use for summaries.
| generate_tensor_summaries | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
def summarize_tensor_dict(
tensor_dict: Dict[Text, types.Tensor], step: Optional[types.Tensor]
):
"""Generates summaries of all tensors in `tensor_dict`.
Args:
tensor_dict: A dictionary {name, tensor} to summarize.
step: The global step
"""
for tag in tensor_dict:
generate_tensor_summaries(tag,... | Generates summaries of all tensors in `tensor_dict`.
Args:
tensor_dict: A dictionary {name, tensor} to summarize.
step: The global step
| summarize_tensor_dict | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
def compute_returns(
rewards: types.Tensor, discounts: types.Tensor, time_major: bool = False
):
"""Compute the return from each index in an episode.
Args:
rewards: Tensor `[T]`, `[B, T]`, `[T, B]` of per-timestep reward.
discounts: Tensor `[T]`, `[B, T]`, `[T, B]` of per-timestep discount factor.
... | Compute the return from each index in an episode.
Args:
rewards: Tensor `[T]`, `[B, T]`, `[T, B]` of per-timestep reward.
discounts: Tensor `[T]`, `[B, T]`, `[T, B]` of per-timestep discount factor.
    Should be `0.` for the final step of each episode.
time_major: Bool, when batched inputs setting it to `... | compute_returns | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
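`compute_returns` applies the standard backward recursion `R_t = r_t + d_t * R_{t+1}` (a pure-Python version also appears later in this table as `_compute_returns_fn`). A NumPy sketch for the single-episode `[T]` case:

```python
import numpy as np

def compute_returns(rewards, discounts):
    """Return from each index via the backward recursion R_t = r_t + d_t * R_{t+1}."""
    returns = np.zeros(len(rewards))
    next_return = 0.0
    for t in range(len(rewards) - 1, -1, -1):
        next_return = rewards[t] + discounts[t] * next_return
        returns[t] = next_return
    return returns

r = compute_returns([1.0, 1.0, 1.0], [0.5, 0.5, 0.0])
# -> [1 + 0.5 * 1.5, 1 + 0.5 * 1, 1] = [1.75, 1.5, 1.0]
```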
def initialize_uninitialized_variables(session, var_list=None):
"""Initialize any pending variables that are uninitialized."""
if var_list is None:
var_list = tf.compat.v1.global_variables() + tf.compat.v1.local_variables()
is_initialized = session.run(
[tf.compat.v1.is_variable_initialized(v) for v in ... | Initialize any pending variables that are uninitialized. | initialize_uninitialized_variables | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
def __init__(self, ckpt_dir, max_to_keep=20, **kwargs):
"""A class for making checkpoints.
    If ckpt_dir doesn't exist, it is created.
Args:
ckpt_dir: The directory to save checkpoints.
max_to_keep: Maximum number of checkpoints to keep (if greater than the
max are saved, the oldest chec... | A class for making checkpoints.
  If ckpt_dir doesn't exist, it is created.
Args:
ckpt_dir: The directory to save checkpoints.
max_to_keep: Maximum number of checkpoints to keep (if greater than the
max are saved, the oldest checkpoints are deleted).
**kwargs: Items to include in the c... | __init__ | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
def replicate(tensor, outer_shape):
"""Replicates a tensor so as to match the given outer shape.
Example:
- t = [[1, 2, 3], [4, 5, 6]] (shape = [2, 3])
- outer_shape = [2, 1]
The shape of the resulting tensor is: [2, 1, 2, 3]
and its content is: [[t], [t]]
Args:
tensor: A tf.Tensor.
outer_shape:... | Replicates a tensor so as to match the given outer shape.
Example:
- t = [[1, 2, 3], [4, 5, 6]] (shape = [2, 3])
- outer_shape = [2, 1]
The shape of the resulting tensor is: [2, 1, 2, 3]
and its content is: [[t], [t]]
Args:
tensor: A tf.Tensor.
outer_shape: Outer shape given as a 1D tensor of type... | replicate | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
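The `replicate` example above (`t` of shape `[2, 3]` with `outer_shape = [2, 1]` yielding shape `[2, 1, 2, 3]`) is exactly `np.tile` with ones appended for the original dimensions. A NumPy sketch:

```python
import numpy as np

def replicate(tensor, outer_shape):
    """Tile `tensor` so the result has shape outer_shape + tensor.shape."""
    t = np.asarray(tensor)
    # np.tile prepends axes when reps is longer than t.ndim.
    return np.tile(t, tuple(outer_shape) + (1,) * t.ndim)

t = [[1, 2, 3], [4, 5, 6]]
out = replicate(t, [2, 1])
# out.shape == (2, 1, 2, 3); content is [[t], [t]]
```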
def assert_members_are_not_overridden(
base_cls, instance, allowlist=(), denylist=()
):
"""Asserts public members of `base_cls` are not overridden in `instance`.
If both `allowlist` and `denylist` are empty, no public member of
  `base_cls` can be overridden. If an `allowlist` is provided, only public
members... | Asserts public members of `base_cls` are not overridden in `instance`.
If both `allowlist` and `denylist` are empty, no public member of
  `base_cls` can be overridden. If an `allowlist` is provided, only public
members in `allowlist` can be overridden. If a `denylist` is provided,
all public members except those... | assert_members_are_not_overridden | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
def transpose_batch_time(x):
"""Transposes the batch and time dimensions of a Tensor.
If the input tensor has rank < 2 it returns the original tensor. Retains as
much of the static shape information as possible.
Args:
x: A Tensor.
Returns:
x transposed along the first two dimensions.
"""
x_stat... | Transposes the batch and time dimensions of a Tensor.
If the input tensor has rank < 2 it returns the original tensor. Retains as
much of the static shape information as possible.
Args:
x: A Tensor.
Returns:
x transposed along the first two dimensions.
| transpose_batch_time | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
def save_spec(spec, file_path):
"""Saves the given spec nest as a StructProto.
**Note**: Currently this will convert BoundedTensorSpecs into regular
TensorSpecs.
Args:
spec: A nested structure of TensorSpecs.
file_path: Path to save the encoded spec to.
"""
spec = tensor_spec.from_spec(spec)
s... | Saves the given spec nest as a StructProto.
**Note**: Currently this will convert BoundedTensorSpecs into regular
TensorSpecs.
Args:
spec: A nested structure of TensorSpecs.
file_path: Path to save the encoded spec to.
| save_spec | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
def load_spec(file_path):
"""Loads a data spec from a file.
**Note**: Types for Named tuple classes will not match. Users need to convert
to these manually:
# Convert from:
# 'tensorflow.python.saved_model.nested_structure_coder.Trajectory'
# to proper TrajectorySpec.
# trajectory_spec = traje... | Loads a data spec from a file.
**Note**: Types for Named tuple classes will not match. Users need to convert
to these manually:
# Convert from:
# 'tensorflow.python.saved_model.nested_structure_coder.Trajectory'
# to proper TrajectorySpec.
# trajectory_spec = trajectory.Trajectory(*spec)
Args... | load_spec | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
def extract_shared_variables(variables_1, variables_2):
"""Separates shared variables from the given collections.
Args:
variables_1: An iterable of Variables
variables_2: An iterable of Variables
Returns:
A Tuple of ObjectIdentitySets described by the set operations
```
(variables_1 - varia... | Separates shared variables from the given collections.
Args:
variables_1: An iterable of Variables
variables_2: An iterable of Variables
Returns:
A Tuple of ObjectIdentitySets described by the set operations
```
(variables_1 - variables_2,
variables_2 - variables_1,
variables_1 & va... | extract_shared_variables | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
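The three set operations returned by `extract_shared_variables` can be sketched with plain Python sets (the real code uses `ObjectIdentitySet`s keyed by object identity, since variables are not hashable by value):

```python
def extract_shared(vars_1, vars_2):
    """Difference, reverse difference, and intersection of two collections."""
    v1, v2 = set(vars_1), set(vars_2)
    return v1 - v2, v2 - v1, v1 & v2

only_1, only_2, shared = extract_shared({'a', 'b'}, {'b', 'c'})
# -> {'a'}, {'c'}, {'b'}
```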
def check_no_shared_variables(network_1, network_2):
"""Checks that there are no shared trainable variables in the two networks.
Args:
network_1: A network.Network.
network_2: A network.Network.
Raises:
ValueError: if there are any common trainable variables.
ValueError: if one of the networks h... | Checks that there are no shared trainable variables in the two networks.
Args:
network_1: A network.Network.
network_2: A network.Network.
Raises:
ValueError: if there are any common trainable variables.
ValueError: if one of the networks has not yet been built
(e.g. user must call `create_v... | check_no_shared_variables | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
def check_matching_networks(network_1, network_2):
"""Check that two networks have matching input specs and variables.
Args:
network_1: A network.Network.
network_2: A network.Network.
Raises:
ValueError: if the networks differ in input_spec, variables (number, dtype,
or shape).
ValueError... | Check that two networks have matching input specs and variables.
Args:
network_1: A network.Network.
network_2: A network.Network.
Raises:
ValueError: if the networks differ in input_spec, variables (number, dtype,
or shape).
ValueError: if either of the networks has not been built yet
... | check_matching_networks | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
def maybe_copy_target_network_with_checks(
network, target_network=None, name=None, input_spec=None
):
"""Copies the network into target if None and checks for shared variables."""
if target_network is None:
target_network = network.copy(name=name)
target_network.create_variables(input_spec)
# Copy ma... | Copies the network into target if None and checks for shared variables. | maybe_copy_target_network_with_checks | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
def aggregate_losses(
per_example_loss=None,
sample_weight=None,
global_batch_size=None,
regularization_loss=None,
):
"""Aggregates and scales per example loss and regularization losses.
If `global_batch_size` is given it would be used for scaling, otherwise it
would use the batch_dim of per_exam... | Aggregates and scales per example loss and regularization losses.
  If `global_batch_size` is given, it is used for scaling; otherwise the
  batch_dim of per_example_loss and the number of replicas are used.
Args:
per_example_loss: Per-example loss [B] or [B, T, ...].
sample_weight: Optional weighting ... | aggregate_losses | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
def soft_device_placement():
"""Context manager for soft device placement, allowing summaries on CPU.
Eager and graph contexts have different default device placements. See
b/148408921 for details. This context manager should be used whenever using
summary writers contexts to make sure summaries work when exec... | Context manager for soft device placement, allowing summaries on CPU.
Eager and graph contexts have different default device placements. See
b/148408921 for details. This context manager should be used whenever using
summary writers contexts to make sure summaries work when executing on TPUs.
Yields:
Sets... | soft_device_placement | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
def deduped_network_variables(network, *args):
"""Returns a list of variables in net1 that are not in any other nets.
Args:
network: A Keras network.
*args: other networks to check for duplicate variables.
"""
other_vars = object_identity.ObjectIdentitySet(
[v for n in args for v in n.variables]
... | Returns a list of variables in net1 that are not in any other nets.
Args:
network: A Keras network.
*args: other networks to check for duplicate variables.
| deduped_network_variables | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
def safe_has_state(state):
"""Safely checks `state not in (None, (), [])`."""
# TODO(b/158804957): tf.function changes "s in ((),)" to a tensor bool expr.
# pylint: disable=literal-comparison
return state is not None and state is not () and state is not []
# pylint: enable=literal-comparison | Safely checks `state not in (None, (), [])`. | safe_has_state | python | tensorflow/agents | tf_agents/utils/common.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common.py | Apache-2.0 |
def testPeriodically(self):
"""Tests that a function is called exactly every `period` steps."""
target = tf.compat.v2.Variable(0)
period = 3
periodic_update = common.periodically(
body=lambda: tf.group(target.assign_add(1)), period=period
)
self.evaluate(tf.compat.v1.global_variables_i... | Tests that a function is called exactly every `period` steps. | testPeriodically | python | tensorflow/agents | tf_agents/utils/common_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common_test.py | Apache-2.0 |
def testPeriodOne(self):
"""Tests that the function is called every time if period == 1."""
target = tf.compat.v2.Variable(0)
periodic_update = common.periodically(
lambda: tf.group(target.assign_add(1)), period=1
)
self.evaluate(tf.compat.v1.global_variables_initializer())
for desired... | Tests that the function is called every time if period == 1. | testPeriodOne | python | tensorflow/agents | tf_agents/utils/common_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common_test.py | Apache-2.0 |
def testPeriodNone(self):
"""Tests that the function is never called if period == None."""
target = tf.compat.v2.Variable(0)
periodic_update = common.periodically(
body=lambda: target.assign_add(1), period=None
)
self.evaluate(tf.compat.v1.global_variables_initializer())
desired_value ... | Tests that the function is never called if period == None. | testPeriodNone | python | tensorflow/agents | tf_agents/utils/common_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common_test.py | Apache-2.0 |
def testPeriodVariable(self):
"""Tests that a function is called exactly every `period` steps."""
target = tf.compat.v2.Variable(0)
period = tf.compat.v2.Variable(1)
periodic_update = common.periodically(
body=lambda: tf.group(target.assign_add(1)), period=period
)
self.evaluate(tf.com... | Tests that a function is called exactly every `period` steps. | testPeriodVariable | python | tensorflow/agents | tf_agents/utils/common_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common_test.py | Apache-2.0 |
def testMultiplePeriodically(self):
"""Tests that 2 periodically ops run independently."""
target1 = tf.compat.v2.Variable(0)
periodic_update1 = common.periodically(
body=lambda: tf.group(target1.assign_add(1)), period=1
)
target2 = tf.compat.v2.Variable(0)
periodic_update2 = common.per... | Tests that 2 periodically ops run independently. | testMultiplePeriodically | python | tensorflow/agents | tf_agents/utils/common_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common_test.py | Apache-2.0 |
def testSamples(self):
"""Tests that samples follow Ornstein-Uhlenbeck process.
This is done by checking that the successive differences
`x_next - (1-theta) * x` have the expected mean and variance.
"""
# Increasing the number of samples can help reduce the variance and make the
# sample mean c... | Tests that samples follow Ornstein-Uhlenbeck process.
This is done by checking that the successive differences
`x_next - (1-theta) * x` have the expected mean and variance.
| testSamples | python | tensorflow/agents | tf_agents/utils/common_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common_test.py | Apache-2.0 |
def _compute_returns_fn(rewards, discounts):
"""Python implementation of computing discounted returns."""
returns = np.zeros(len(rewards))
next_state_return = 0.0
for t in range(len(returns) - 1, -1, -1):
returns[t] = rewards[t] + discounts[t] * next_state_return
next_state_retur... | Python implementation of computing discounted returns. | _compute_returns_fn | python | tensorflow/agents | tf_agents/utils/common_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/common_test.py | Apache-2.0 |
def reshape(t, shape): # pylint: disable=redefined-outer-name
"""Reshape composite tensor `t` to `shape`.
Args:
t: A `Tensor` or `SparseTensor`.
shape: `1D` tensor, array, or list. The new shape.
Returns:
The reshaped tensor.
"""
return (
tf.sparse.reshape(t, shape)
if isinstance(t... | Reshape composite tensor `t` to `shape`.
Args:
t: A `Tensor` or `SparseTensor`.
shape: `1D` tensor, array, or list. The new shape.
Returns:
The reshaped tensor.
| reshape | python | tensorflow/agents | tf_agents/utils/composite.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/composite.py | Apache-2.0 |
def squeeze(t, axis):
"""Squeeze composite tensor along axis `axis`.
Args:
t: A `Tensor` or `SparseTensor`.
axis: A python integer.
Returns:
The tensor with dimension `axis` removed.
Raises:
InvalidArgumentError: If `t` is a `SparseTensor` and has more than one index
stored along `axis`.
... | Squeeze composite tensor along axis `axis`.
Args:
t: A `Tensor` or `SparseTensor`.
axis: A python integer.
Returns:
The tensor with dimension `axis` removed.
Raises:
InvalidArgumentError: If `t` is a `SparseTensor` and has more than one index
stored along `axis`.
| squeeze | python | tensorflow/agents | tf_agents/utils/composite.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/composite.py | Apache-2.0 |
def expand_dims(t, axis):
"""Add a new dimension to tensor `t` along `axis`.
Args:
t: A `tf.Tensor` or `tf.SparseTensor`.
axis: A `0D` integer scalar.
Returns:
An expanded tensor.
Raises:
NotImplementedError: If `t` is a `SparseTensor` and `axis != 0`.
"""
if isinstance(t, tf.SparseTensor... | Add a new dimension to tensor `t` along `axis`.
Args:
t: A `tf.Tensor` or `tf.SparseTensor`.
axis: A `0D` integer scalar.
Returns:
An expanded tensor.
Raises:
NotImplementedError: If `t` is a `SparseTensor` and `axis != 0`.
| expand_dims | python | tensorflow/agents | tf_agents/utils/composite.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/composite.py | Apache-2.0 |
def slice_from(tensor, axis, start):
"""Slice a composite tensor along `axis` from `start`.
Examples:
```python
slice_from(tensor, 2, 1) === tensor[:, :, 1:]
sparse_to_dense(slice_from(sparse_tensor, 2, 1))
=== sparse_to_dense(sparse_tensor)[:, :, 1:]
```
Args:
tensor: A `Tensor` or `SparseTens... | Slice a composite tensor along `axis` from `start`.
Examples:
```python
slice_from(tensor, 2, 1) === tensor[:, :, 1:]
sparse_to_dense(slice_from(sparse_tensor, 2, 1))
=== sparse_to_dense(sparse_tensor)[:, :, 1:]
```
Args:
tensor: A `Tensor` or `SparseTensor`.
axis: A python integer.
start... | slice_from | python | tensorflow/agents | tf_agents/utils/composite.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/composite.py | Apache-2.0 |
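For dense tensors, the `slice_from(tensor, 2, 1) === tensor[:, :, 1:]` equivalence in the docstring reduces to building a slice tuple along the given axis. A NumPy sketch of the dense case (the sparse branch is omitted):

```python
import numpy as np

def slice_from(tensor, axis, start):
    """Dense equivalent of slice_from: tensor[..., start:] along `axis`."""
    index = [slice(None)] * tensor.ndim
    index[axis] = slice(start, None)
    return tensor[tuple(index)]

x = np.arange(24).reshape(2, 3, 4)
out = slice_from(x, 2, 1)
# same as x[:, :, 1:]
```

`slice_to` from the next row is the mirror image: `index[axis] = slice(None, end)`.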
def slice_to(tensor, axis, end):
"""Slice a composite tensor along `axis` from 0 to `end`.
Examples:
```python
slice_to(tensor, 2, -1) === tensor[:, :, :-1]
sparse_to_dense(slice_to(sparse_tensor, 2, -1))
=== sparse_to_dense(sparse_tensor)[:, :, :-1]
```
Args:
tensor: A `Tensor` or `SparseTenso... | Slice a composite tensor along `axis` from 0 to `end`.
Examples:
```python
slice_to(tensor, 2, -1) === tensor[:, :, :-1]
sparse_to_dense(slice_to(sparse_tensor, 2, -1))
=== sparse_to_dense(sparse_tensor)[:, :, :-1]
```
Args:
tensor: A `Tensor` or `SparseTensor`.
axis: A python integer.
en... | slice_to | python | tensorflow/agents | tf_agents/utils/composite.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/composite.py | Apache-2.0 |
def has_self_cls_arg(func_or_method):
"""Checks if it is method which takes self/cls as the first argument."""
if isinstance(func_or_method, staticmethod):
return False
if inspect.ismethod(func_or_method):
return True
if isinstance(func_or_method, classmethod):
return True
if six.PY2:
arg_name... | Checks if it is method which takes self/cls as the first argument. | has_self_cls_arg | python | tensorflow/agents | tf_agents/utils/eager_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/eager_utils.py | Apache-2.0 |
def __call__(self, *args, **kwargs):
"""If *args/**kwargs are given they would replace those given at init.
Args:
*args: List of extra arguments.
**kwargs: Dict of extra keyword arguments.
Returns:
The result of func_or_method(*args, **kwargs).
"""
# By default use the init args.... | If *args/**kwargs are given they would replace those given at init.
Args:
*args: List of extra arguments.
**kwargs: Dict of extra keyword arguments.
Returns:
The result of func_or_method(*args, **kwargs).
| __call__ | python | tensorflow/agents | tf_agents/utils/eager_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/eager_utils.py | Apache-2.0 |
def future_in_eager_mode(func_or_method):
"""Decorator that allow a function/method to run in graph and in eager modes.
When applied in graph mode it calls the function and return its outputs.
When applied in eager mode it returns a lambda function that when called
returns the outputs.
```python
  @eager_ut... | Decorator that allows a function/method to run in graph and in eager modes.
  When applied in graph mode it calls the function and returns its outputs.
When applied in eager mode it returns a lambda function that when called
returns the outputs.
```python
@eager_utils.future_in_eager_mode
def loss_fn(x):
... | future_in_eager_mode | python | tensorflow/agents | tf_agents/utils/eager_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/eager_utils.py | Apache-2.0 |
def add_variables_summaries(grads_and_vars, step):
"""Add summaries for variables.
Args:
grads_and_vars: A list of (gradient, variable) pairs.
step: Variable to use for summaries.
"""
with tf.name_scope('summarize_vars'):
for _, var in grads_and_vars:
if isinstance(var, tf.IndexedSlices):
... | Add summaries for variables.
Args:
grads_and_vars: A list of (gradient, variable) pairs.
step: Variable to use for summaries.
| add_variables_summaries | python | tensorflow/agents | tf_agents/utils/eager_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/eager_utils.py | Apache-2.0 |
def add_gradients_summaries(grads_and_vars, step):
"""Add summaries to gradients.
Args:
grads_and_vars: A list of gradient to variable pairs (tuples).
step: Variable to use for summaries.
"""
with tf.name_scope('summarize_grads'):
for grad, var in grads_and_vars:
if grad is not None:
... | Add summaries to gradients.
Args:
grads_and_vars: A list of gradient to variable pairs (tuples).
step: Variable to use for summaries.
| add_gradients_summaries | python | tensorflow/agents | tf_agents/utils/eager_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/eager_utils.py | Apache-2.0 |
def clip_gradient_norms(gradients_to_variables, max_norm):
"""Clips the gradients by the given value.
Args:
gradients_to_variables: A list of gradient to variable pairs (tuples).
max_norm: the maximum norm value.
Returns:
A list of clipped gradient to variable pairs.
"""
clipped_grads_and_vars =... | Clips the gradients by the given value.
Args:
gradients_to_variables: A list of gradient to variable pairs (tuples).
max_norm: the maximum norm value.
Returns:
A list of clipped gradient to variable pairs.
| clip_gradient_norms | python | tensorflow/agents | tf_agents/utils/eager_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/eager_utils.py | Apache-2.0 |
def create_train_step(
loss,
optimizer,
global_step=_USE_GLOBAL_STEP,
total_loss_fn=None,
update_ops=None,
variables_to_train=None,
transform_grads_fn=None,
summarize_gradients=False,
gate_gradients=tf.compat.v1.train.Optimizer.GATE_OP,
aggregation_method=None,
check_numerics... | Creates a train_step that evaluates the gradients and returns the loss.
Args:
loss: A (possibly nested tuple of) `Tensor` or function representing the
loss.
optimizer: A tf.Optimizer to use for computing the gradients.
global_step: A `Tensor` representing the global step variable. If left as
... | create_train_step | python | tensorflow/agents | tf_agents/utils/eager_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/eager_utils.py | Apache-2.0 |
def create_train_op(
total_loss,
optimizer,
global_step=_USE_GLOBAL_STEP,
update_ops=None,
variables_to_train=None,
transform_grads_fn=None,
summarize_gradients=False,
gate_gradients=tf.compat.v1.train.Optimizer.GATE_OP,
aggregation_method=None,
check_numerics=True,
):
"""Creat... | Creates an `Operation` that evaluates the gradients and returns the loss.
Args:
total_loss: A `Tensor` representing the total loss.
optimizer: A tf.Optimizer to use for computing the gradients.
global_step: A `Tensor` representing the global step variable. If left as
`_USE_GLOBAL_STEP`, then tf.tra... | create_train_op | python | tensorflow/agents | tf_agents/utils/eager_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/eager_utils.py | Apache-2.0 |
def np_function(func=None, output_dtypes=None):
"""Decorator that allow a numpy function to be used in Eager and Graph modes.
Similar to `tf.py_func` and `tf.py_function` but it doesn't require defining
the inputs or the dtypes of the outputs a priori.
In Eager mode it would convert the tf.Tensors to np.array... | Decorator that allows a numpy function to be used in Eager and Graph modes.
Similar to `tf.py_func` and `tf.py_function` but it doesn't require defining
the inputs or the dtypes of the outputs a priori.
In Eager mode it would convert the tf.Tensors to np.arrays before passing to
`func` and then convert back th... | np_function | python | tensorflow/agents | tf_agents/utils/eager_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/eager_utils.py | Apache-2.0 |
def wrapper(*args, **kwargs):
"""Wrapper to add nested input and outputs support."""
func_with_kwargs = functools.partial(func, **kwargs)
def func_flat_outputs(*args):
return tf.nest.flatten(func_with_kwargs(*args))
def compute_output_dtypes(*args):
"""Calls the func to compute... | Wrapper to add nested input and outputs support. | wrapper | python | tensorflow/agents | tf_agents/utils/eager_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/eager_utils.py | Apache-2.0 |
def compute_output_dtypes(*args):
"""Calls the func to compute output dtypes."""
result = func(*args, **kwargs)
return tf.nest.map_structure(lambda x: x.dtype, result) | Calls the func to compute output dtypes. | compute_output_dtypes | python | tensorflow/agents | tf_agents/utils/eager_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/eager_utils.py | Apache-2.0 |
def dataset_iterator(dataset):
"""Constructs a `Dataset` iterator.
The method used to construct the iterator is conditioned on whether Graph mode
is enabled. `dataset_iterator` and `get_next` are useful when we need to
construct an iterator and iterate through it inside a `tensorflow.function`.
Args:
da... | Constructs a `Dataset` iterator.
The method used to construct the iterator is conditioned on whether Graph mode
is enabled. `dataset_iterator` and `get_next` are useful when we need to
construct an iterator and iterate through it inside a `tensorflow.function`.
Args:
dataset: a `tf.data.Dataset`.
Retur... | dataset_iterator | python | tensorflow/agents | tf_agents/utils/eager_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/eager_utils.py | Apache-2.0 |
def get_next(iterator):
"""Returns the next element in a `Dataset` iterator.
The syntax used to retrieve the next item is conditioned on whether Graph mode
is enabled. `dataset_iterator` and `get_next` are useful when we need to
construct an iterator and iterate through it inside a `tensorflow.function`.
Ar... | Returns the next element in a `Dataset` iterator.
The syntax used to retrieve the next item is conditioned on whether Graph mode
is enabled. `dataset_iterator` and `get_next` are useful when we need to
construct an iterator and iterate through it inside a `tensorflow.function`.
Args:
iterator: a `tf.data.... | get_next | python | tensorflow/agents | tf_agents/utils/eager_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/eager_utils.py | Apache-2.0 |
def get_example_encoder(spec, compress_image=False, image_quality=95):
"""Get example encoder function for the given spec.
Given a spec, returns an example encoder function. The example encoder
function takes a nest of np.array feature values as input and returns a
TF Example proto.
Example:
spec = {
... | Get example encoder function for the given spec.
Given a spec, returns an example encoder function. The example encoder
function takes a nest of np.array feature values as input and returns a
TF Example proto.
Example:
spec = {
'lidar': array_spec.ArraySpec((900,), np.float32),
'joint_posi... | get_example_encoder | python | tensorflow/agents | tf_agents/utils/example_encoding.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/example_encoding.py | Apache-2.0 |
def get_example_serializer(spec, compress_image=False, image_quality=95):
"""Returns string serializer of example protos."""
encoder = get_example_encoder(
spec, compress_image=compress_image, image_quality=image_quality
)
return lambda features_nest: encoder(features_nest).SerializeToString() | Returns string serializer of example protos. | get_example_serializer | python | tensorflow/agents | tf_agents/utils/example_encoding.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/example_encoding.py | Apache-2.0 |
def get_example_decoder(example_spec, batched=False, compress_image=False):
"""Get an example decoder function for a nested spec.
Given a spec, returns an example decoder function. The decoder function parses
string serialized example protos into tensors according to the given spec.
Args:
example_spec: li... | Get an example decoder function for a nested spec.
Given a spec, returns an example decoder function. The decoder function parses
string serialized example protos into tensors according to the given spec.
Args:
example_spec: list/tuple/nest of ArraySpecs describing a single example.
batched: Boolean ind... | get_example_decoder | python | tensorflow/agents | tf_agents/utils/example_encoding.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/example_encoding.py | Apache-2.0 |
def _example_decoder(serialized):
"""Parses string serialized example protos into tensors."""
if batched:
raw_features = tf.io.parse_example(
serialized=serialized, features=features_dict
)
decoded_features = []
dtypes = [s.dtype for s in tf.nest.flatten(example_spec)]
... | Parses string serialized example protos into tensors. | _example_decoder | python | tensorflow/agents | tf_agents/utils/example_encoding.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/example_encoding.py | Apache-2.0 |
def _validate_shape(shape):
"""Check that shape is a valid array shape."""
if not isinstance(shape, abc.Iterable):
raise TypeError(
'shape must be a tuple or other iterable object, not %s'
% type(shape).__name__
)
validated_shape = []
for i, dim in enumerate(shape):
if not dim or di... | Check that shape is a valid array shape. | _validate_shape | python | tensorflow/agents | tf_agents/utils/example_encoding.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/example_encoding.py | Apache-2.0 |
def _validate_dtype(dtype):
"""Check that dtype is supported by tf.decode_raw."""
dtype = tf.as_dtype(dtype)
supported_dtypes = (
tf.half,
tf.float32,
tf.float64,
tf.uint8,
tf.int8,
tf.uint16,
tf.int16,
tf.int32,
tf.int64,
)
if dtype not in supported_dtype... | Check that dtype is supported by tf.decode_raw. | _validate_dtype | python | tensorflow/agents | tf_agents/utils/example_encoding.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/example_encoding.py | Apache-2.0 |
def _check_shape_and_dtype(value, shape, dtype):
"""Check that `value` has expected shape and dtype."""
value_dtype = tf.as_dtype(value.dtype.newbyteorder('N'))
if shape != value.shape or dtype != value_dtype:
raise ValueError(
'Expected shape %s of %s, got: shape %s of %s'
% (shape, dtype.nam... | Check that `value` has expected shape and dtype. | _check_shape_and_dtype | python | tensorflow/agents | tf_agents/utils/example_encoding.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/example_encoding.py | Apache-2.0 |
def _get_feature_encoder(shape, dtype, compress_image=False, image_quality=95):
"""Get feature encoder function for shape and dtype.
Args:
shape: An array shape
dtype: A list of dtypes.
compress_image: Whether to compress image. It is assumed that any uint8
tensor of rank 3 with shape (w,h,c) is ... | Get feature encoder function for shape and dtype.
Args:
shape: An array shape
dtype: A list of dtypes.
compress_image: Whether to compress image. It is assumed that any uint8
tensor of rank 3 with shape (w,h,c) is an image.
image_quality: An optional int. Defaults to 95. Quality of the compress... | _get_feature_encoder | python | tensorflow/agents | tf_agents/utils/example_encoding.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/example_encoding.py | Apache-2.0 |
def _get_feature_parser(shape, dtype, compress_image=False):
"""Get tf.train.Features entry and decoder function for parsing feature.
Args:
shape: An array shape
dtype: A list of dtypes.
compress_image: Whether to decompress image. It is assumed that any uint8
tensor of rank 3 with shape (w,h,c) ... | Get tf.train.Features entry and decoder function for parsing feature.
Args:
shape: An array shape
dtype: A list of dtypes.
compress_image: Whether to decompress image. It is assumed that any uint8
tensor of rank 3 with shape (w,h,c) is an image. If the tensor was
compressed in the encoder, it... | _get_feature_parser | python | tensorflow/agents | tf_agents/utils/example_encoding.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/example_encoding.py | Apache-2.0 |
def encode_spec_to_file(output_path, tensor_data_spec):
"""Save a tensor data spec to a tfrecord file.
Args:
output_path: The path to the TFRecord file which will contain the spec.
tensor_data_spec: Nested list/tuple or dict of TensorSpecs, describing the
shape of the non-batched Tensors.
"""
spe... | Save a tensor data spec to a tfrecord file.
Args:
output_path: The path to the TFRecord file which will contain the spec.
tensor_data_spec: Nested list/tuple or dict of TensorSpecs, describing the
shape of the non-batched Tensors.
| encode_spec_to_file | python | tensorflow/agents | tf_agents/utils/example_encoding_dataset.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/example_encoding_dataset.py | Apache-2.0 |
def parse_encoded_spec_from_file(input_path):
"""Returns the tensor data spec stored at a path.
Args:
input_path: The path to the TFRecord file which contains the spec.
Returns:
`TensorSpec` nested structure parsed from the TFRecord file.
Raises:
IOError: File at input path does not exist.
"""
... | Returns the tensor data spec stored at a path.
Args:
input_path: The path to the TFRecord file which contains the spec.
Returns:
`TensorSpec` nested structure parsed from the TFRecord file.
Raises:
IOError: File at input path does not exist.
| parse_encoded_spec_from_file | python | tensorflow/agents | tf_agents/utils/example_encoding_dataset.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/example_encoding_dataset.py | Apache-2.0 |
def __init__(
self,
output_path,
tensor_data_spec,
py_mode=False,
compress_image=False,
image_quality=95,
):
"""Creates observer object.
Args:
output_path: The path to the TFRecords file.
tensor_data_spec: Nested list/tuple or dict of TensorSpecs, describing th... | Creates observer object.
Args:
output_path: The path to the TFRecords file.
tensor_data_spec: Nested list/tuple or dict of TensorSpecs, describing the
shape of the non-batched Tensors.
py_mode: Whether the observer is being used in a py_driver.
compress_image: Whether to compress im... | __init__ | python | tensorflow/agents | tf_agents/utils/example_encoding_dataset.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/example_encoding_dataset.py | Apache-2.0 |
def write(self, *data):
"""Encodes and writes (to file) a batch of data.
Args:
*data: (unpacked) list/tuple of batched np.arrays.
"""
if self._py_mode:
structured_data = data
else:
data = nest_utils.unbatch_nested_array(data)
structured_data = tf.nest.pack_sequence_as(self._... | Encodes and writes (to file) a batch of data.
Args:
*data: (unpacked) list/tuple of batched np.arrays.
| write | python | tensorflow/agents | tf_agents/utils/example_encoding_dataset.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/example_encoding_dataset.py | Apache-2.0 |
def __call__(self, data):
"""If not in py_mode Wraps write() into a TF op for eager execution."""
if self._py_mode:
self.write(data)
else:
flat_data = tf.nest.flatten(data)
tf.numpy_function(self.write, flat_data, [], name='encoder_observer') | If not in py_mode, wraps write() into a TF op for eager execution. | __call__ | python | tensorflow/agents | tf_agents/utils/example_encoding_dataset.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/example_encoding_dataset.py | Apache-2.0 |
def load_tfrecord_dataset(
dataset_files,
buffer_size=1000,
as_experience=False,
as_trajectories=False,
add_batch_dim=True,
decoder=None,
num_parallel_reads=None,
compress_image=False,
spec=None,
):
"""Loads a TFRecord dataset from file, sequencing samples as Trajectories.
Args:... | Loads a TFRecord dataset from file, sequencing samples as Trajectories.
Args:
dataset_files: List of paths to one or more datasets
buffer_size: (int) number of bytes in the read buffer. 0 means no buffering.
as_experience: (bool) Returns dataset as a pair of Trajectories. Samples
will be shaped as ... | load_tfrecord_dataset | python | tensorflow/agents | tf_agents/utils/example_encoding_dataset.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/example_encoding_dataset.py | Apache-2.0 |
def example_nested_spec(dtype):
"""Return an example nested array spec."""
low = -10
high = 10
if dtype in (np.uint8, np.uint16):
low += -low
return {
"array_spec_1": array_spec.ArraySpec((2, 3), dtype),
"bounded_spec_1": array_spec.BoundedArraySpec((2, 3), dtype, low, high),
"empty_shap... | Return an example nested array spec. | example_nested_spec | python | tensorflow/agents | tf_agents/utils/example_encoding_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/example_encoding_test.py | Apache-2.0 |
def _load(self):
"""Load the module and insert it into the parent's globals."""
# Import the target module and insert it into the parent's namespace
module = importlib.import_module(self.__name__)
self._parent_module_globals[self._local_name] = module
# Emit a warning if one was specified
if se... | Load the module and insert it into the parent's globals. | _load | python | tensorflow/agents | tf_agents/utils/lazy_loader.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/lazy_loader.py | Apache-2.0 |
def assert_same_structure(
nest1,
nest2,
check_types: bool = True,
expand_composites: bool = False,
allow_shallow_nest1: bool = False,
message: Optional[Text] = None,
) -> None:
"""Same as tf.nest.assert_same_structure but with cleaner error messages.
Args:
nest1: an arbitrarily nested ... | Same as tf.nest.assert_same_structure but with cleaner error messages.
Args:
nest1: an arbitrarily nested structure.
nest2: an arbitrarily nested structure.
check_types: if `True` (default) types of sequences are checked as well,
including the keys of dictionaries. If set to `False`, for example a ... | assert_same_structure | python | tensorflow/agents | tf_agents/utils/nest_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/nest_utils.py | Apache-2.0 |
def assert_tensors_matching_dtypes_and_shapes(
tensors_1, tensors_2, caller, tensors_1_name, tensors_2_name
):
"""Checks if tensors have matching dtypes and shapes.
Args:
tensors_1: A nest of tensor objects.
tensors_2: A nest of tensor objects.
caller: The object calling `assert...`.
tensors_1_... | Checks if tensors have matching dtypes and shapes.
Args:
tensors_1: A nest of tensor objects.
tensors_2: A nest of tensor objects.
caller: The object calling `assert...`.
tensors_1_name: (str) Name to use for tensors_1 in case of an error.
tensors_2_name: (str) Name to use for tensors_2 in case o... | assert_tensors_matching_dtypes_and_shapes | python | tensorflow/agents | tf_agents/utils/nest_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/nest_utils.py | Apache-2.0 |
def assert_matching_dtypes_and_inner_shapes(
tensors_or_specs,
specs,
caller,
tensors_name,
specs_name,
allow_extra_fields=False,
):
"""Returns `True` if tensors and specs have matching dtypes and inner shapes.
Args:
tensors_or_specs: A nest of `Tensor` like or `tf.TypeSpec` objects.
... | Returns `True` if tensors and specs have matching dtypes and inner shapes.
Args:
tensors_or_specs: A nest of `Tensor` like or `tf.TypeSpec` objects.
specs: A nest of `tf.TypeSpec` objects.
caller: The object calling `assert...`.
tensors_name: (str) Name to use for the tensors in case of an error.
... | assert_matching_dtypes_and_inner_shapes | python | tensorflow/agents | tf_agents/utils/nest_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/nest_utils.py | Apache-2.0 |
def is_batched_nested_tensors(
tensors,
specs,
num_outer_dims=1,
allow_extra_fields=False,
check_dtypes=True,
):
"""Compares tensors to specs to determine if all tensors are batched or not.
For each tensor, it checks the dimensions and dtypes with respect to specs.
Returns `True` if all tens... | Compares tensors to specs to determine if all tensors are batched or not.
For each tensor, it checks the dimensions and dtypes with respect to specs.
Returns `True` if all tensors are batched and `False` if all tensors are
unbatched.
Raises a `ValueError` if the shapes are incompatible or a mix of batched an... | is_batched_nested_tensors | python | tensorflow/agents | tf_agents/utils/nest_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/nest_utils.py | Apache-2.0 |
def batch_nested_tensors(tensors, specs=None):
"""Add batch dimension if needed to nested tensors while checking their specs.
If specs is None, a batch dimension is added to each tensor.
If specs are provided, each tensor is compared to the corresponding spec,
and a batch dimension is added only if the tensor ... | Add batch dimension if needed to nested tensors while checking their specs.
If specs is None, a batch dimension is added to each tensor.
If specs are provided, each tensor is compared to the corresponding spec,
and a batch dimension is added only if the tensor doesn't already have it.
For each tensor, it chec... | batch_nested_tensors | python | tensorflow/agents | tf_agents/utils/nest_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/nest_utils.py | Apache-2.0 |
def _flatten_and_check_shape_nested_tensors(tensors, specs, num_outer_dims=1):
"""Flatten nested tensors and check their shape for use in other functions."""
assert_same_structure(
tensors,
specs,
message='Tensors and specs do not have matching structures',
)
flat_tensors = tf.nest.flatten(ten... | Flatten nested tensors and check their shape for use in other functions. | _flatten_and_check_shape_nested_tensors | python | tensorflow/agents | tf_agents/utils/nest_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/nest_utils.py | Apache-2.0 |
def flatten_and_check_shape_nested_specs(specs, reference_specs):
"""Flatten nested specs and check their shape for use in other functions."""
try:
flat_specs, flat_shapes = _flatten_and_check_shape_nested_tensors(
specs, reference_specs, num_outer_dims=0
)
except ValueError as exc:
raise Valu... | Flatten nested specs and check their shape for use in other functions. | flatten_and_check_shape_nested_specs | python | tensorflow/agents | tf_agents/utils/nest_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/nest_utils.py | Apache-2.0 |
def unbatch_nested_tensors(tensors, specs=None):
"""Remove the batch dimension if needed from nested tensors using their specs.
If specs is None, the first dimension of each tensor will be removed.
If specs are provided, each tensor is compared to the corresponding spec,
and the first dimension is removed only... | Remove the batch dimension if needed from nested tensors using their specs.
If specs is None, the first dimension of each tensor will be removed.
If specs are provided, each tensor is compared to the corresponding spec,
and the first dimension is removed only if the tensor was batched.
Args:
tensors: Nest... | unbatch_nested_tensors | python | tensorflow/agents | tf_agents/utils/nest_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/nest_utils.py | Apache-2.0 |
def split_nested_tensors(tensors, specs, num_or_size_splits):
"""Split batched nested tensors, on batch dim (outer dim), into a list.
Args:
tensors: Nested list/tuple or dict of batched Tensors.
specs: Nested list/tuple or dict of TensorSpecs, describing the shape of the
non-batched Tensors.
num_... | Split batched nested tensors, on batch dim (outer dim), into a list.
Args:
tensors: Nested list/tuple or dict of batched Tensors.
specs: Nested list/tuple or dict of TensorSpecs, describing the shape of the
non-batched Tensors.
num_or_size_splits: Same as argument for tf.split. Either a python inte... | split_nested_tensors | python | tensorflow/agents | tf_agents/utils/nest_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/nest_utils.py | Apache-2.0 |
def unstack_nested_tensors(tensors, specs):
"""Make list of unstacked nested tensors.
Args:
tensors: Nested tensors whose first dimension is to be unstacked.
specs: Tensor specs for tensors.
Returns:
A list of the unstacked nested tensors.
Raises:
ValueError: if the tensors and specs have inco... | Make list of unstacked nested tensors.
Args:
tensors: Nested tensors whose first dimension is to be unstacked.
specs: Tensor specs for tensors.
Returns:
A list of the unstacked nested tensors.
Raises:
ValueError: if the tensors and specs have incompatible dimensions or shapes.
| unstack_nested_tensors | python | tensorflow/agents | tf_agents/utils/nest_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/nest_utils.py | Apache-2.0 |
def stack_nested_tensors(tensors, axis=0):
"""Stacks a list of nested tensors along the dimension specified.
Args:
tensors: A list of nested tensors to be stacked.
axis: the axis along which the stack operation is applied.
Returns:
A stacked nested tensor.
"""
return tf.nest.map_structure(
... | Stacks a list of nested tensors along the dimension specified.
Args:
tensors: A list of nested tensors to be stacked.
axis: the axis along which the stack operation is applied.
Returns:
A stacked nested tensor.
| stack_nested_tensors | python | tensorflow/agents | tf_agents/utils/nest_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/nest_utils.py | Apache-2.0 |
def flatten_multi_batched_nested_tensors(tensors, specs):
"""Reshape tensors to contain only one batch dimension.
For each tensor, it checks the number of extra dimensions beyond those in
the spec, and reshapes tensor to have only one batch dimension.
NOTE: Each tensor's batch dimensions must be the same.
A... | Reshape tensors to contain only one batch dimension.
For each tensor, it checks the number of extra dimensions beyond those in
the spec, and reshapes tensor to have only one batch dimension.
NOTE: Each tensor's batch dimensions must be the same.
Args:
tensors: Nested list/tuple or dict of batched Tensors ... | flatten_multi_batched_nested_tensors | python | tensorflow/agents | tf_agents/utils/nest_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/nest_utils.py | Apache-2.0 |
def get_outer_shape(nested_tensor, spec):
"""Runtime batch dims of tensor's batch dimension `dim`.
Args:
nested_tensor: Nest of tensors.
spec: The nested spec.
Returns:
A `Tensor` containing the outer shape.
Raises:
ValueError: If `nested_tensor` and `spec` have different structures.
Type... | Runtime batch dims of tensor's batch dimension `dim`.
Args:
nested_tensor: Nest of tensors.
spec: The nested spec.
Returns:
A `Tensor` containing the outer shape.
Raises:
ValueError: If `nested_tensor` and `spec` have different structures.
TypeError: If `nested_tensor` and `spec` structures... | get_outer_shape | python | tensorflow/agents | tf_agents/utils/nest_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/nest_utils.py | Apache-2.0 |
def get_outer_rank(tensors, specs):
"""Compares tensors to specs to determine the number of batch dimensions.
For each tensor, it checks the dimensions with respect to specs and
returns the number of batch dimensions if all nested tensors and
specs agree with each other.
Args:
tensors: Nested list/tuple... | Compares tensors to specs to determine the number of batch dimensions.
For each tensor, it checks the dimensions with respect to specs and
returns the number of batch dimensions if all nested tensors and
specs agree with each other.
Args:
tensors: Nested list/tuple/dict of Tensors or SparseTensors.
sp... | get_outer_rank | python | tensorflow/agents | tf_agents/utils/nest_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/nest_utils.py | Apache-2.0 |
def unstack_nested_arrays(nested_array):
"""Unstack/unbatch a nest of numpy arrays.
Args:
nested_array: Nest of numpy arrays where each array has shape [batch_size,
...].
Returns:
A list of length batch_size where each item in the list is a nest
having the same structure as `nested_array`.
... | Unstack/unbatch a nest of numpy arrays.
Args:
nested_array: Nest of numpy arrays where each array has shape [batch_size,
...].
Returns:
A list of length batch_size where each item in the list is a nest
having the same structure as `nested_array`.
| unstack_nested_arrays | python | tensorflow/agents | tf_agents/utils/nest_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/nest_utils.py | Apache-2.0 |
def stack_nested_arrays(nested_arrays):
"""Stack/batch a list of nested numpy arrays.
Args:
nested_arrays: A list of nested numpy arrays of the same shape/structure.
Returns:
A nested array containing batched items, where each batched item is obtained
by stacking corresponding items from the list ... | Stack/batch a list of nested numpy arrays.
Args:
nested_arrays: A list of nested numpy arrays of the same shape/structure.
Returns:
A nested array containing batched items, where each batched item is obtained
by stacking corresponding items from the list of nested_arrays.
| stack_nested_arrays | python | tensorflow/agents | tf_agents/utils/nest_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/nest_utils.py | Apache-2.0 |
def get_outer_array_shape(nested_array, spec):
"""Batch dims of array's batch dimension `dim`."""
first_array = tf.nest.flatten(nested_array)[0]
first_spec = tf.nest.flatten(spec)[0]
num_outer_dims = len(first_array.shape) - len(first_spec.shape)
return first_array.shape[:num_outer_dims] | Batch dims of array's batch dimension `dim`. | get_outer_array_shape | python | tensorflow/agents | tf_agents/utils/nest_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/nest_utils.py | Apache-2.0 |
def where(condition, true_outputs, false_outputs):
"""Generalization of tf.where for nested structures.
This generalization handles applying where across nested structures and the
special case where the rank of the condition is smaller than the rank of the
true and false cases.
Args:
condition: A boolea... | Generalization of tf.where for nested structures.
This generalization handles applying where across nested structures and the
special case where the rank of the condition is smaller than the rank of the
true and false cases.
Args:
condition: A boolean Tensor of shape [B, ...]. The shape of condition must
... | where | python | tensorflow/agents | tf_agents/utils/nest_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/nest_utils.py | Apache-2.0 |
def tile_batch(tensors: types.NestedTensor, multiplier: types.Int):
"""Tile the batch dimension of a (possibly nested structure of) tensor(s).
Copied from tensorflow/contrib/seq2seq/python/ops/beam_search_decoder.py
For each tensor t in a (possibly nested structure) of tensors,
this function takes a tensor t ... | Tile the batch dimension of a (possibly nested structure of) tensor(s).
Copied from tensorflow/contrib/seq2seq/python/ops/beam_search_decoder.py
For each tensor t in a (possibly nested structure) of tensors,
this function takes a tensor t shaped `[batch_size, s0, s1, ...]` composed of
minibatch entries `t[0],... | tile_batch | python | tensorflow/agents | tf_agents/utils/nest_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/nest_utils.py | Apache-2.0 |
def assert_value_spec(output_spec: types.NestedTensorSpec, network_name: str):
"""Checks that `output_spec` is a nest of "value" type values.
"value" type values correspond to floating point tensors with spec shape
`()` or `(1,)`.
Args:
output_spec: The output spec returned by `network.create_variables`.
... | Checks that `output_spec` is a nest of "value" type values.
"value" type values correspond to floating point tensors with spec shape
`()` or `(1,)`.
Args:
output_spec: The output spec returned by `network.create_variables`.
network_name: The string name of the network for error messages.
Raises:
... | assert_value_spec | python | tensorflow/agents | tf_agents/utils/nest_utils.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/nest_utils.py | Apache-2.0 |
zeros_from_spec | python | tensorflow/agents | tf_agents/utils/nest_utils_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/nest_utils_test.py | Apache-2.0
Docstring:
  Return tensors matching spec with desired additional dimensions.
  Args:
    spec: A `tf.TypeSpec`, e.g. `tf.TensorSpec` or `tf.SparseTensorSpec`.
    batch_size: The desired batch size; the size of the first dimension of all
      tensors.
    extra_sizes: An optional list of additional dimension sizes beyo...
Code:
  def zeros_from_spec(self, spec, batch_size=None, extra_sizes=None):
    ...
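The shape logic the docstring implies, prepending `batch_size` and any `extra_sizes` as leading dimensions, can be sketched in NumPy. This stand-in takes `(shape, dtype)` pairs rather than real `tf.TypeSpec` objects, so the signature here is an assumption.

```python
import numpy as np

def zeros_from_spec(specs, batch_size=None, extra_sizes=None):
    """Build zero arrays matching each (shape, dtype) spec, prepending
    batch_size and any extra_sizes as leading dimensions."""
    outer = ()
    if batch_size is not None:
        outer = outer + (batch_size,)
    if extra_sizes:
        outer = outer + tuple(extra_sizes)
    return [np.zeros(outer + shape, dtype=dtype) for shape, dtype in specs]

arrays = zeros_from_spec([((3,), np.float32), ((), np.int64)],
                         batch_size=4, extra_sizes=[2])
# arrays have shapes (4, 2, 3) and (4, 2)
```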
placeholders_from_spec | python | tensorflow/agents | tf_agents/utils/nest_utils_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/nest_utils_test.py | Apache-2.0
Docstring:
  Return tensors matching spec with an added unknown batch dimension.
  Args:
    spec: A `tf.TypeSpec`, e.g. `tf.TensorSpec` or `tf.SparseTensorSpec`.
  Returns:
    A possibly nested tuple of Tensors matching the spec.
Code:
  def placeholders_from_spec(self, spec):
    tensors = []
    for s in tf.nes...
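The spec transformation behind this helper, adding an unknown leading batch dimension, can be sketched on plain `(shape, dtype)` pairs; the real code builds TF placeholders, so both the function name and spec representation here are hypothetical.

```python
def add_unknown_batch_dim(specs):
    """Prepend an unknown (None) batch dimension to each (shape, dtype) spec,
    mirroring the shape a batched placeholder would have."""
    return [((None,) + shape, dtype) for shape, dtype in specs]

batched = add_unknown_batch_dim([((3,), "float32"), ((), "int64")])
# batched == [((None, 3), "float32"), ((None,), "int64")]
```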
zeros_from_spec | python | tensorflow/agents | tf_agents/utils/nest_utils_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/nest_utils_test.py | Apache-2.0
Docstring:
  Return arrays matching spec with desired additional dimensions.
  Args:
    specs: A nested array spec.
    outer_dims: An optional list of outer dimensions, e.g. batch size.
  Returns:
    A nested tuple of arrays matching the spec.
Code:
  def zeros_from_spec(self, specs, outer_dims=None):
    oute...
_lookup_dependency | python | tensorflow/agents | tf_agents/utils/numpy_storage.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/numpy_storage.py | Apache-2.0
Docstring:
  Create placeholder NumPy arrays for to-be-restored attributes.
  Typically `_lookup_dependency` is used to check by name whether a dependency
  exists. We cheat slightly by creating a checkpointable object for `name` if
  we don't already have one, giving us attribute re-creation behavior when
  loading a chec...
Code:
  def _lookup_dependency(self, name, cached_dependencies=None):
    ...
__getattribute__ | python | tensorflow/agents | tf_agents/utils/numpy_storage.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/numpy_storage.py | Apache-2.0
Docstring: Un-wrap `_NumpyWrapper` objects when accessing attributes.
Code:
  def __getattribute__(self, name):
    """Un-wrap `_NumpyWrapper` objects when accessing attributes."""
    value = super(NumpyState, self).__getattribute__(name)
    if isinstance(value, _NumpyWrapper):
      return value.array
    return value
__setattr__ | python | tensorflow/agents | tf_agents/utils/numpy_storage.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/numpy_storage.py | Apache-2.0
Docstring: Automatically wrap NumPy arrays assigned to attributes.
Code:
  def __setattr__(self, name, value):
    """Automatically wrap NumPy arrays assigned to attributes."""
    # TODO(b/126429928): Consider supporting lists/tuples.
    if isinstance(value, (np.ndarray, np.generic)):
      try:
        existing = super(NumpyState, self).__getattribute__(name)
        existing.array = value...
__init__ | python | tensorflow/agents | tf_agents/utils/numpy_storage.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/numpy_storage.py | Apache-2.0
Docstring:
  Creates a NumpyStorage object.
  Args:
    data_spec: An ArraySpec or a list/tuple/nest of ArraySpecs describing a
      single item that can be stored in this table.
    capacity: The maximum number of items that can be stored in the buffer.
  Raises:
    ValueError: If data_spec is not an instance or ne...
Code:
  def __init__(self, data_spec, capacity):
    ...
_array | python | tensorflow/agents | tf_agents/utils/numpy_storage.py | https://github.com/tensorflow/agents/blob/master/tf_agents/utils/numpy_storage.py | Apache-2.0
Docstring: Creates or retrieves one of the numpy arrays backing the storage.
Code:
  def _array(self, index):
    """Creates or retrieves one of the numpy arrays backing the storage."""
    array = getattr(self._np_state, self._buf_names[index])
    if np.isscalar(array) or array.ndim == 0:
      spec = self._flat_specs[index]
      shape = (self._capacity,) + spec.shape
      array = np.zeros(shape=sh...
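The `_array` fragment shows the core idea of `NumpyStorage`: each field is backed by a preallocated array of shape `(capacity,) + spec.shape`, created lazily on first access. A simplified sketch under assumed representations (specs as `(name, shape, dtype)` triples, a plain dict instead of the checkpointable `NumpyState`, and hypothetical `set`/`get` helpers):

```python
import numpy as np

class NumpyStorage:
    """Per-field storage for `capacity` items; arrays are allocated lazily."""

    def __init__(self, specs, capacity):
        # specs: list of (name, shape, dtype) describing one stored item.
        self._specs = specs
        self._capacity = capacity
        self._arrays = {}

    def _array(self, index):
        """Create or retrieve the backing array for field `index`."""
        name, shape, dtype = self._specs[index]
        if name not in self._arrays:
            self._arrays[name] = np.zeros((self._capacity,) + shape, dtype=dtype)
        return self._arrays[name]

    def set(self, table_index, item):
        for i, value in enumerate(item):
            self._array(i)[table_index] = value

    def get(self, table_index):
        return [self._array(i)[table_index] for i in range(len(self._specs))]

store = NumpyStorage([("obs", (2,), np.float32), ("reward", (), np.float32)],
                     capacity=8)
store.set(0, (np.array([1.0, 2.0]), 0.5))
```

Lazy allocation means an unused field costs nothing until the first write, which is why the real `_array` checks for the scalar placeholder before building the full `(capacity,) + spec.shape` array.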