| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def load_movielens_data(data_file, delimiter=','):
"""Loads the movielens data and returns the ratings matrix."""
ratings_matrix = np.zeros([MOVIELENS_NUM_USERS, MOVIELENS_NUM_MOVIES])
with tf.io.gfile.GFile(data_file, 'r') as infile:
# The file is a csv with rows containing:
# user id | item id | rating ... | Loads the movielens data and returns the ratings matrix. | load_movielens_data | python | tensorflow/agents | tf_agents/bandits/environments/dataset_utilities.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/dataset_utilities.py | Apache-2.0 |
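The truncated `load_movielens_data` row parses delimited `user id, item id, rating` triples into a dense ratings matrix. A minimal NumPy sketch under stated assumptions: plain file iteration stands in for `tf.io.gfile.GFile`, the dimension constants are hypothetical stand-ins for `MOVIELENS_NUM_USERS`/`MOVIELENS_NUM_MOVIES`, and 1-based MovieLens ids are assumed.

```python
import io
import numpy as np

# Hypothetical dimensions; the real module uses module-level constants.
NUM_USERS, NUM_MOVIES = 5, 4

def load_ratings(infile, delimiter=','):
    """Fills a dense [users, movies] matrix from `user,item,rating` rows.

    MovieLens ids are 1-based, hence the `- 1` when indexing.
    """
    ratings = np.zeros([NUM_USERS, NUM_MOVIES])
    for line in infile:
        user, item, rating = line.strip().split(delimiter)[:3]
        ratings[int(user) - 1, int(item) - 1] = float(rating)
    return ratings

data = io.StringIO('1,1,4.0\n2,3,5.0\n5,4,1.0\n')
matrix = load_ratings(data)
```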
def _update_row(input_x, updates, row_index):
"""Updates the i-th row of tensor `x` with the values given in `updates`.
Args:
input_x: the input tensor.
updates: the values to place on the i-th row of `x`.
row_index: which row to update.
Returns:
The updated tensor (same shape as `x`).
"""
n... | Updates the i-th row of tensor `x` with the values given in `updates`.
Args:
input_x: the input tensor.
updates: the values to place on the i-th row of `x`.
row_index: which row to update.
Returns:
The updated tensor (same shape as `x`).
| _update_row | python | tensorflow/agents | tf_agents/bandits/environments/drifting_linear_environment.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/drifting_linear_environment.py | Apache-2.0 |
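The `_update_row` docstring describes a purely functional row replacement: the input tensor is not mutated, a new tensor of the same shape comes back. A copy-based NumPy sketch of that contract (the real TF code presumably builds the result with one-hot masks or scatter updates rather than in-place assignment):

```python
import numpy as np

def update_row(input_x, updates, row_index):
    """Returns a copy of `input_x` with row `row_index` replaced by `updates`.

    Copy-based to mirror TF's immutable-tensor semantics: the caller's
    tensor is left untouched and the result has the same shape.
    """
    result = np.array(input_x, copy=True)
    result[row_index] = updates
    return result

x = np.zeros((3, 2))
y = update_row(x, [1.0, 2.0], 1)
```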
def _apply_givens_rotation(cosa, sina, axis_i, axis_j, input_x):
"""Applies a Givens rotation on tensor `x`.
Reference on Givens rotations:
https://en.wikipedia.org/wiki/Givens_rotation
Args:
cosa: the cosine of the angle.
sina: the sine of the angle.
axis_i: the first axis of rotation.
axis_j... | Applies a Givens rotation on tensor `x`.
Reference on Givens rotations:
https://en.wikipedia.org/wiki/Givens_rotation
Args:
cosa: the cosine of the angle.
sina: the sine of the angle.
axis_i: the first axis of rotation.
axis_j: the second axis of rotation.
input_x: the input tensor.
Retur... | _apply_givens_rotation | python | tensorflow/agents | tf_agents/bandits/environments/drifting_linear_environment.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/drifting_linear_environment.py | Apache-2.0 |
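A Givens rotation (per the Wikipedia reference in the row above) is an identity matrix with a 2x2 rotation embedded in the plane of two chosen axes; it rotates a vector in that plane while preserving its norm. A NumPy sketch of applying one, assuming the explicit-matrix formulation rather than whatever masked construction the tf-agents code uses:

```python
import numpy as np

def givens_matrix(cosa, sina, axis_i, axis_j, n):
    """Identity with a 2x2 rotation embedded in axes (axis_i, axis_j)."""
    g = np.eye(n)
    g[axis_i, axis_i] = cosa
    g[axis_j, axis_j] = cosa
    g[axis_i, axis_j] = -sina
    g[axis_j, axis_i] = sina
    return g

def apply_givens_rotation(cosa, sina, axis_i, axis_j, input_x):
    # Rotates `input_x` in the plane spanned by the two axes; all other
    # coordinates pass through unchanged.
    n = input_x.shape[0]
    return givens_matrix(cosa, sina, axis_i, axis_j, n) @ input_x

x = np.array([1.0, 0.0, 0.0])
y = apply_givens_rotation(0.0, 1.0, 0, 1, x)  # 90-degree turn in axes (0, 1)
z = apply_givens_rotation(np.cos(0.3), np.sin(0.3), 0, 2,
                          np.array([1.0, 2.0, 3.0]))
```

Because the matrix is orthogonal whenever `cosa**2 + sina**2 == 1`, repeated applications drift a parameter vector without changing its length, which is what the drifting-linear environment relies on.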
def __init__(
self,
observation_distribution: types.Distribution,
observation_to_reward_distribution: types.Distribution,
drift_distribution: types.Distribution,
additive_reward_distribution: types.Distribution,
):
"""Initialize the parameters of the drifting linear dynamics.
Ar... | Initialize the parameters of the drifting linear dynamics.
Args:
observation_distribution: A distribution from tfp.distributions with shape
`[batch_size, observation_dim]` Note that the values of `batch_size` and
`observation_dim` are deduced from the distribution.
observation_to_reward... | __init__ | python | tensorflow/agents | tf_agents/bandits/environments/drifting_linear_environment.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/drifting_linear_environment.py | Apache-2.0 |
def __init__(
self,
observation_distribution: types.Distribution,
observation_to_reward_distribution: types.Distribution,
drift_distribution: types.Distribution,
additive_reward_distribution: types.Distribution,
):
"""Initialize the environment with the dynamics parameters.
Args... | Initialize the environment with the dynamics parameters.
Args:
observation_distribution: A distribution from `tfp.distributions` with
shape `[batch_size, observation_dim]`. Note that the values of
`batch_size` and `observation_dim` are deduced from the distribution.
observation_to_rewar... | __init__ | python | tensorflow/agents | tf_agents/bandits/environments/drifting_linear_environment.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/drifting_linear_environment.py | Apache-2.0 |
def testObservationToRewardsDoesNotVary(
self, observation_shape, action_shape, batch_size, seed
):
"""Ensure that `observation_to_reward` does not change with zero drift."""
tf.compat.v1.set_random_seed(seed)
env = get_deterministic_gaussian_non_stationary_environment(
observation_shape,
... | Ensure that `observation_to_reward` does not change with zero drift. | testObservationToRewardsDoesNotVary | python | tensorflow/agents | tf_agents/bandits/environments/drifting_linear_environment_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/drifting_linear_environment_test.py | Apache-2.0 |
def testObservationToRewardsVaries(
self, observation_shape, action_shape, batch_size, seed
):
"""Ensure that `observation_to_reward` changes with non-zero drift."""
tf.compat.v1.set_random_seed(seed)
env = get_deterministic_gaussian_non_stationary_environment(
observation_shape,
act... | Ensure that `observation_to_reward` changes with non-zero drift. | testObservationToRewardsVaries | python | tensorflow/agents | tf_agents/bandits/environments/drifting_linear_environment_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/drifting_linear_environment_test.py | Apache-2.0 |
def __call__(self, x, enable_noise=True):
"""Outputs reward given observation.
Args:
x: Observation vector.
enable_noise: Whether to add normal noise to the reward or not.
Returns:
A scalar value: the reward.
"""
mu = np.dot(x, self.theta)
if enable_noise:
return np.ran... | Outputs reward given observation.
Args:
x: Observation vector.
enable_noise: Whether to add normal noise to the reward or not.
Returns:
A scalar value: the reward.
| __call__ | python | tensorflow/agents | tf_agents/bandits/environments/environment_utilities.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/environment_utilities.py | Apache-2.0 |
def sliding_linear_reward_fn_generator(context_dim, num_actions, variance):
"""A function that returns `num_actions` noisy linear functions.
Every linear function has an underlying parameter consisting of `context_dim`
consecutive integers. For example, with `context_dim = 3` and
`num_actions = 2`, the paramet... | A function that returns `num_actions` noisy linear functions.
Every linear function has an underlying parameter consisting of `context_dim`
consecutive integers. For example, with `context_dim = 3` and
`num_actions = 2`, the parameter of the linear function associated with
action 1 is `[1.0, 2.0, 3.0]`.
Arg... | sliding_linear_reward_fn_generator | python | tensorflow/agents | tf_agents/bandits/environments/environment_utilities.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/environment_utilities.py | Apache-2.0 |
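The `sliding_linear_reward_fn_generator` docstring gives one concrete data point: with `context_dim = 3`, action 1 gets parameter `[1.0, 2.0, 3.0]`. The sliding-window layout below (action `a` uses `[a, a+1, ..., a+context_dim-1]`) is inferred from that example and is an assumption, as is the use of `np.random.normal` for the noise:

```python
import numpy as np

def sliding_linear_reward_fn_generator(context_dim, num_actions, variance):
    """One noisy linear reward function per action.

    Inferred layout: action `a` uses the consecutive-integer parameter
    window [a, a+1, ..., a+context_dim-1], matching the docstring example.
    """
    def make_fn(action):
        theta = np.arange(action, action + context_dim, dtype=float)
        def reward_fn(x):
            noise = np.random.normal(scale=np.sqrt(variance))
            return float(np.dot(x, theta) + noise)
        return reward_fn
    # A factory per action so each closure captures its own `theta`.
    return [make_fn(a) for a in range(num_actions)]

fns = sliding_linear_reward_fn_generator(3, 2, 0.0)  # zero variance: exact
```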
def normalized_sliding_linear_reward_fn_generator(
context_dim, num_actions, variance
):
"""Similar to the function above, but returns smaller-range functions.
Every linear function has an underlying parameter consisting of `context_dim`
floats of equal distance from each other. For example, with `context_di... | Similar to the function above, but returns smaller-range functions.
Every linear function has an underlying parameter consisting of `context_dim`
floats of equal distance from each other. For example, with `context_dim = 3`,
`num_actions = 2`, the parameter of the linear function associated with
action 1 is `[... | normalized_sliding_linear_reward_fn_generator | python | tensorflow/agents | tf_agents/bandits/environments/environment_utilities.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/environment_utilities.py | Apache-2.0 |
def structured_linear_reward_fn_generator(
context_dim, num_actions, variance, drift_coefficient=0.1
):
"""A function that returns `num_actions` noisy linear functions.
Every linear function is related to its previous one:
```
theta_new = theta_previous + a * drift
```
Args:
context_dim: Number of... | A function that returns `num_actions` noisy linear functions.
Every linear function is related to its previous one:
```
theta_new = theta_previous + a * drift
```
Args:
context_dim: Number of parameters per function.
num_actions: Number of functions returned.
variance: Variance of the noisy line... | structured_linear_reward_fn_generator | python | tensorflow/agents | tf_agents/bandits/environments/environment_utilities.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/environment_utilities.py | Apache-2.0 |
def random_linear_multiple_reward_fn_generator(
context_dim, num_actions, num_rewards, squeeze_dims=True
):
"""A function that returns `num_actions` linear functions.
For each action, the corresponding linear function has underlying parameters
of shape [`context_dim`, 'num_rewards']. Optionally, squeeze can ... | A function that returns `num_actions` linear functions.
For each action, the corresponding linear function has underlying parameters
of shape [`context_dim`, 'num_rewards']. Optionally, squeeze can be applied.
Args:
context_dim: Number of parameters per function.
num_actions: Number of functions returne... | random_linear_multiple_reward_fn_generator | python | tensorflow/agents | tf_agents/bandits/environments/environment_utilities.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/environment_utilities.py | Apache-2.0 |
def compute_optimal_reward(
observation, per_action_reward_fns, enable_noise=False
):
"""Computes the optimal reward.
Args:
observation: a (possibly batched) observation.
per_action_reward_fns: a list of reward functions; one per action. Each
reward function generates a reward when called with an... | Computes the optimal reward.
Args:
observation: a (possibly batched) observation.
per_action_reward_fns: a list of reward functions; one per action. Each
reward function generates a reward when called with an observation.
enable_noise: (bool) whether to add noise to the rewards.
Returns:
The... | compute_optimal_reward | python | tensorflow/agents | tf_agents/bandits/environments/environment_utilities.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/environment_utilities.py | Apache-2.0 |
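`compute_optimal_reward` and its companion `compute_optimal_action` reduce to evaluating every arm's reward function on the observation and taking the max (respectively argmax). A noise-free sketch of that core logic, omitting the `enable_noise` flag from the real signatures:

```python
import numpy as np

def compute_optimal_reward(observation, per_action_reward_fns):
    """Best achievable reward: evaluate every arm and take the maximum."""
    rewards = [fn(observation) for fn in per_action_reward_fns]
    return np.max(rewards, axis=0)

def compute_optimal_action(observation, per_action_reward_fns):
    """Index of the arm achieving that maximum."""
    rewards = [fn(observation) for fn in per_action_reward_fns]
    return int(np.argmax(rewards, axis=0))

# Two hypothetical linear arms.
fns = [lambda x: np.dot(x, [1.0, 0.0]),
       lambda x: np.dot(x, [0.0, 2.0])]
obs = np.array([3.0, 1.0])
```

Metrics such as regret are then the gap between this oracle value and the reward of the action the policy actually chose.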
def tf_compute_optimal_reward(
observation, per_action_reward_fns, enable_noise=False
):
"""TF wrapper around `compute_optimal_reward` to be used in `tf_metrics`."""
compute_optimal_reward_fn = functools.partial(
compute_optimal_reward,
per_action_reward_fns=per_action_reward_fns,
enable_noise... | TF wrapper around `compute_optimal_reward` to be used in `tf_metrics`. | tf_compute_optimal_reward | python | tensorflow/agents | tf_agents/bandits/environments/environment_utilities.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/environment_utilities.py | Apache-2.0 |
def compute_optimal_action(
observation, per_action_reward_fns, enable_noise=False
):
"""Computes the optimal action.
Args:
observation: a (possibly batched) observation.
per_action_reward_fns: a list of reward functions; one per action. Each
reward function generates a reward when called with an... | Computes the optimal action.
Args:
observation: a (possibly batched) observation.
per_action_reward_fns: a list of reward functions; one per action. Each
reward function generates a reward when called with an observation.
enable_noise: (bool) whether to add noise to the rewards.
Returns:
The... | compute_optimal_action | python | tensorflow/agents | tf_agents/bandits/environments/environment_utilities.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/environment_utilities.py | Apache-2.0 |
def tf_compute_optimal_action(
observation,
per_action_reward_fns,
enable_noise=False,
action_dtype=tf.int32,
):
"""TF wrapper around `compute_optimal_action` to be used in `tf_metrics`."""
compute_optimal_action_fn = functools.partial(
compute_optimal_action,
per_action_reward_fns=per_a... | TF wrapper around `compute_optimal_action` to be used in `tf_metrics`. | tf_compute_optimal_action | python | tensorflow/agents | tf_agents/bandits/environments/environment_utilities.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/environment_utilities.py | Apache-2.0 |
def compute_optimal_reward_with_environment_dynamics(
observation, environment_dynamics
):
"""Computes the optimal reward using the environment dynamics.
Args:
observation: a (possibly batched) observation.
environment_dynamics: environment dynamics object (an instance of
`non_stationary_stochast... | Computes the optimal reward using the environment dynamics.
Args:
observation: a (possibly batched) observation.
environment_dynamics: environment dynamics object (an instance of
`non_stationary_stochastic_environment.EnvironmentDynamics`)
Returns:
The optimal reward.
| compute_optimal_reward_with_environment_dynamics | python | tensorflow/agents | tf_agents/bandits/environments/environment_utilities.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/environment_utilities.py | Apache-2.0 |
def compute_optimal_action_with_environment_dynamics(
observation, environment_dynamics
):
"""Computes the optimal action using the environment dynamics.
Args:
observation: a (possibly batched) observation.
environment_dynamics: environment dynamics object (an instance of
`non_stationary_stochast... | Computes the optimal action using the environment dynamics.
Args:
observation: a (possibly batched) observation.
environment_dynamics: environment dynamics object (an instance of
`non_stationary_stochastic_environment.EnvironmentDynamics`)
Returns:
The optimal action.
| compute_optimal_action_with_environment_dynamics | python | tensorflow/agents | tf_agents/bandits/environments/environment_utilities.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/environment_utilities.py | Apache-2.0 |
def compute_optimal_action_with_classification_environment(
observation, environment
):
"""Helper function for gin configurable SuboptimalArms metric."""
del observation
return environment.compute_optimal_action() | Helper function for gin configurable SuboptimalArms metric. | compute_optimal_action_with_classification_environment | python | tensorflow/agents | tf_agents/bandits/environments/environment_utilities.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/environment_utilities.py | Apache-2.0 |
def compute_optimal_reward_with_classification_environment(
observation, environment
):
"""Helper function for gin configurable Regret metric."""
del observation
return environment.compute_optimal_reward() | Helper function for gin configurable Regret metric. | compute_optimal_reward_with_classification_environment | python | tensorflow/agents | tf_agents/bandits/environments/environment_utilities.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/environment_utilities.py | Apache-2.0 |
def tf_wheel_bandit_compute_optimal_action(
observation, delta, action_dtype=tf.int32
):
"""TF wrapper around `compute_optimal_action` to be used in `tf_metrics`."""
return tf.py_function(
wheel_py_environment.compute_optimal_action,
[observation, delta],
action_dtype,
) | TF wrapper around `compute_optimal_action` to be used in `tf_metrics`. | tf_wheel_bandit_compute_optimal_action | python | tensorflow/agents | tf_agents/bandits/environments/environment_utilities.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/environment_utilities.py | Apache-2.0 |
def tf_wheel_bandit_compute_optimal_reward(
observation, delta, mu_inside, mu_high
):
"""TF wrapper around `compute_optimal_reward` to be used in `tf_metrics`."""
return tf.py_function(
wheel_py_environment.compute_optimal_reward,
[observation, delta, mu_inside, mu_high],
tf.float32,
) | TF wrapper around `compute_optimal_reward` to be used in `tf_metrics`. | tf_wheel_bandit_compute_optimal_reward | python | tensorflow/agents | tf_agents/bandits/environments/environment_utilities.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/environment_utilities.py | Apache-2.0 |
def compute_optimal_reward_with_movielens_environment(observation, environment):
"""Helper function for gin configurable Regret metric."""
del observation
return tf.py_function(environment.compute_optimal_reward, [], tf.float32) | Helper function for gin configurable Regret metric. | compute_optimal_reward_with_movielens_environment | python | tensorflow/agents | tf_agents/bandits/environments/environment_utilities.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/environment_utilities.py | Apache-2.0 |
def compute_optimal_action_with_movielens_environment(
observation, environment, action_dtype=tf.int32
):
"""Helper function for gin configurable SuboptimalArms metric."""
del observation
return tf.py_function(environment.compute_optimal_action, [], action_dtype) | Helper function for gin configurable SuboptimalArms metric. | compute_optimal_action_with_movielens_environment | python | tensorflow/agents | tf_agents/bandits/environments/environment_utilities.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/environment_utilities.py | Apache-2.0 |
def _observe(self):
"""Returns the u vectors of a random sample of users."""
sampled_users = random.sample(
range(self._effective_num_users), self._batch_size
)
self._previous_users = self._current_users
self._current_users = sampled_users
batched_observations = self._u_hat[sampled_users... | Returns the u vectors of a random sample of users. | _observe | python | tensorflow/agents | tf_agents/bandits/environments/movielens_py_environment.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/movielens_py_environment.py | Apache-2.0 |
def _apply_action(self, action):
"""Computes the reward for the input actions."""
rewards = []
for i, j in zip(self._current_users, action):
rewards.append(self._approx_ratings_matrix[i, j])
return np.array(rewards) | Computes the reward for the input actions. | _apply_action | python | tensorflow/agents | tf_agents/bandits/environments/movielens_py_environment.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/movielens_py_environment.py | Apache-2.0 |
def reward(
self, observation: types.NestedTensor, env_time: types.Int
) -> types.NestedTensor:
"""Reward for the given observation and time step.
Args:
observation: A batch of observations with spec according to
`observation_spec.`
env_time: The scalar int64 tensor of the environme... | Reward for the given observation and time step.
Args:
observation: A batch of observations with spec according to
`observation_spec.`
env_time: The scalar int64 tensor of the environment time step. This is
incremented by the environment after the reward is computed.
Returns:
... | reward | python | tensorflow/agents | tf_agents/bandits/environments/non_stationary_stochastic_environment.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/non_stationary_stochastic_environment.py | Apache-2.0 |
def __init__(self, environment_dynamics: EnvironmentDynamics):
"""Initializes a non-stationary environment with the given dynamics.
Args:
environment_dynamics: An instance of `EnvironmentDynamics` defining how
the environment evolves over time.
"""
self._env_time = tf.compat.v2.Variable(
... | Initializes a non-stationary environment with the given dynamics.
Args:
environment_dynamics: An instance of `EnvironmentDynamics` defining how
the environment evolves over time.
| __init__ | python | tensorflow/agents | tf_agents/bandits/environments/non_stationary_stochastic_environment.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/non_stationary_stochastic_environment.py | Apache-2.0 |
def testObservationAndRewardsVary(self):
"""Ensure that observations and rewards change in consecutive calls."""
dynamics = DummyDynamics()
env = nsse.NonStationaryStochasticEnvironment(dynamics)
self.evaluate(tf.compat.v1.global_variables_initializer())
env_time = env._env_time
observation_sam... | Ensure that observations and rewards change in consecutive calls. | testObservationAndRewardsVary | python | tensorflow/agents | tf_agents/bandits/environments/non_stationary_stochastic_environment_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/non_stationary_stochastic_environment_test.py | Apache-2.0 |
def __init__(
self,
piece_means: np.ndarray,
change_duration_generator: Callable[[], int],
batch_size: Optional[int] = 1,
):
"""Initializes a piecewise stationary Bernoulli Bandit environment.
Args:
piece_means: a matrix (list of lists) with shape (num_pieces, num_arms)
... | Initializes a piecewise stationary Bernoulli Bandit environment.
Args:
piece_means: a matrix (list of lists) with shape (num_pieces, num_arms)
containing floats in [0, 1]. Each list contains the mean rewards for the
num_arms actions of the num_pieces pieces. The list is wrapped around
... | __init__ | python | tensorflow/agents | tf_agents/bandits/environments/piecewise_bernoulli_py_environment.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/piecewise_bernoulli_py_environment.py | Apache-2.0 |
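The piecewise Bernoulli docstring describes mean-reward vectors that switch from piece to piece, wrapping around `piece_means` when the pieces run out. A deterministic-duration sketch of those dynamics; the real environment takes a `change_duration_generator` callable, which this replaces with a fixed list of durations for reproducibility:

```python
import numpy as np

def piecewise_bernoulli_rewards(piece_means, change_durations, num_steps, rng):
    """Samples Bernoulli rewards per arm; the active mean vector advances to
    the next piece (wrapping around) after each piece's duration elapses."""
    piece_means = np.asarray(piece_means)
    rewards, piece, remaining = [], 0, change_durations[0]
    for _ in range(num_steps):
        if remaining == 0:
            piece = (piece + 1) % len(piece_means)
            remaining = change_durations[piece % len(change_durations)]
        rewards.append(rng.binomial(1, piece_means[piece]))
        remaining -= 1
    return np.array(rewards)

# Degenerate means (0 or 1) make the sample path deterministic.
path = piecewise_bernoulli_rewards(
    [[0.0, 1.0], [1.0, 0.0]], [2], 4, np.random.default_rng(0))
```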
def __init__(
self,
observation_distribution: types.Distribution,
interval_distribution: types.Distribution,
observation_to_reward_distribution: types.Distribution,
additive_reward_distribution: types.Distribution,
):
"""Initialize the parameters of the piecewise dynamics.
Args:... | Initialize the parameters of the piecewise dynamics.
Args:
observation_distribution: A distribution from tfp.distributions with shape
`[batch_size, observation_dim]` Note that the values of `batch_size` and
`observation_dim` are deduced from the distribution.
interval_distribution: A sc... | __init__ | python | tensorflow/agents | tf_agents/bandits/environments/piecewise_stochastic_environment.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/piecewise_stochastic_environment.py | Apache-2.0 |
def same_interval_parameters():
"""Returns the parameters of the current piece.
Returns:
The pair of `tf.Tensor` `(observation_to_reward, additive_reward)`.
"""
return [
self._current_observation_to_reward,
self._current_additive_reward,
] | Returns the parameters of the current piece.
Returns:
The pair of `tf.Tensor` `(observation_to_reward, additive_reward)`.
| same_interval_parameters | python | tensorflow/agents | tf_agents/bandits/environments/piecewise_stochastic_environment.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/piecewise_stochastic_environment.py | Apache-2.0 |
def new_interval_parameters():
"""Update and returns the piece parameters.
Returns:
The pair of `tf.Tensor` `(observation_to_reward, additive_reward)`.
"""
tf.compat.v1.assign_add(
self._current_interval,
tf.cast(self._interval_distribution.sample(), dtype=tf.int64),... | Update and returns the piece parameters.
Returns:
The pair of `tf.Tensor` `(observation_to_reward, additive_reward)`.
| new_interval_parameters | python | tensorflow/agents | tf_agents/bandits/environments/piecewise_stochastic_environment.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/piecewise_stochastic_environment.py | Apache-2.0 |
def __init__(
self,
observation_distribution: types.Distribution,
interval_distribution: types.Distribution,
observation_to_reward_distribution: types.Distribution,
additive_reward_distribution: types.Distribution,
):
"""Initialize the environment with the dynamics parameters.
A... | Initialize the environment with the dynamics parameters.
Args:
observation_distribution: A distribution from `tfp.distributions` with
shape `[batch_size, observation_dim]`. Note that the values of
`batch_size` and `observation_dim` are deduced from the distribution.
interval_distributio... | __init__ | python | tensorflow/agents | tf_agents/bandits/environments/piecewise_stochastic_environment.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/piecewise_stochastic_environment.py | Apache-2.0 |
def testObservationAndRewardsVary(
self, observation_shape, action_shape, batch_size, seed
):
"""Ensure that observations and rewards change in consecutive calls."""
interval = 4
env = get_deterministic_gaussian_non_stationary_environment(
observation_shape, action_shape, batch_size, interv... | Ensure that observations and rewards change in consecutive calls. | testObservationAndRewardsVary | python | tensorflow/agents | tf_agents/bandits/environments/piecewise_stochastic_environment_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/piecewise_stochastic_environment_test.py | Apache-2.0 |
def testActionSpec(self, observation_shape, action_shape, batch_size):
"""Ensure that the action spec is set correctly."""
interval = 3
env = get_deterministic_gaussian_non_stationary_environment(
observation_shape, action_shape, batch_size, interval
)
self.evaluate(tf.compat.v1.global_vari... | Ensure that the action spec is set correctly. | testActionSpec | python | tensorflow/agents | tf_agents/bandits/environments/piecewise_stochastic_environment_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/piecewise_stochastic_environment_test.py | Apache-2.0 |
def __init__(
self,
observation_distribution: types.Distribution,
reward_distribution: types.Distribution,
action_spec: Optional[types.TensorSpec] = None,
):
"""Initializes an environment that returns random observations and rewards.
Note that `observation_distribution` and `reward_di... | Initializes an environment that returns random observations and rewards.
Note that `observation_distribution` and `reward_distribution` are expected
to have batch rank 1. That is, `observation_distribution.batch_shape` should
have length exactly 1. `tensorflow_probability.distributions.Independent` is
... | __init__ | python | tensorflow/agents | tf_agents/bandits/environments/random_bandit_environment.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/random_bandit_environment.py | Apache-2.0 |
def get_gaussian_random_environment(
observation_shape, action_shape, batch_size
):
"""Returns a RandomBanditEnvironment with Gaussian observation and reward."""
overall_shape = [batch_size] + observation_shape
observation_distribution = tfd.Independent(
tfd.Normal(loc=tf.zeros(overall_shape), scale=tf.... | Returns a RandomBanditEnvironment with Gaussian observation and reward. | get_gaussian_random_environment | python | tensorflow/agents | tf_agents/bandits/environments/random_bandit_environment_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/random_bandit_environment_test.py | Apache-2.0 |
def testObservationAndRewardShapes(
self, observation_shape, action_shape, batch_size
):
"""Exercise `reset` and `step`. Ensure correct shapes are returned."""
env = get_gaussian_random_environment(
observation_shape, action_shape, batch_size
)
observation = env.reset().observation
r... | Exercise `reset` and `step`. Ensure correct shapes are returned. | testObservationAndRewardShapes | python | tensorflow/agents | tf_agents/bandits/environments/random_bandit_environment_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/random_bandit_environment_test.py | Apache-2.0 |
def testObservationAndRewardsVary(
self, observation_shape, action_shape, batch_size, seed
):
"""Ensure that observations and rewards change in consecutive calls."""
tf.compat.v1.set_random_seed(seed)
env = get_gaussian_random_environment(
observation_shape, action_shape, batch_size
)
... | Ensure that observations and rewards change in consecutive calls. | testObservationAndRewardsVary | python | tensorflow/agents | tf_agents/bandits/environments/random_bandit_environment_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/random_bandit_environment_test.py | Apache-2.0 |
def __init__(
self,
global_sampling_fn: Callable[[], types.Array],
item_sampling_fn: Callable[[], types.Array],
num_items: int,
num_slots: int,
scores_weight_matrix: types.Float,
feedback_model: int = FeedbackModel.CASCADING,
click_model: int = ClickModel.GHOST_ACTIONS,
... | Initializes the environment.
In each round, global context is generated by global_sampling_fn, item
contexts are generated by item_sampling_fn. The score matrix is of shape
`[item_dim, global_dim]`, and plays the role of the weight matrix in the
inner product of item and global features. This inner pro... | __init__ | python | tensorflow/agents | tf_agents/bandits/environments/ranking_environment.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/ranking_environment.py | Apache-2.0 |
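The ranking environment's docstring says the `[item_dim, global_dim]` score matrix acts as the weight in an inner product of item and global features, i.e. a bilinear score per item. A sketch of that scoring step under that reading; the function name is hypothetical:

```python
import numpy as np

def item_scores(global_features, item_features, scores_weight_matrix):
    """Bilinear relevance per item: s_k = v_k^T W g, with W of shape
    [item_dim, global_dim] as stated in the docstring."""
    return item_features @ (scores_weight_matrix @ global_features)

W = np.eye(2)                       # identity weight: score = <item, global>
g = np.array([1.0, 0.0])            # global (user) features
items = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.5, 0.5]])      # candidate item features
scores = item_scores(g, items, W)
```

Higher-scoring items are the ones the simulated user is likelier to click, subject to the configured feedback and click models.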
def _step(self, action):
"""We need to override this function because the reward dtype can be int."""
# TODO(b/199824775): The trajectory module assumes all reward is float32.
# Sort this out with TF-Agents.
output = super(RankingPyEnvironment, self)._step(action)
reward = output.reward
new_rewa... | We need to override this function because the reward dtype can be int. | _step | python | tensorflow/agents | tf_agents/bandits/environments/ranking_environment.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/ranking_environment.py | Apache-2.0 |
def __init__(
self,
global_sampling_fn: Callable[[], types.Array],
item_sampling_fn: Callable[[], types.Array],
relevance_fn: Callable[[types.Array, types.Array], float],
num_items: int,
observation_probs: Sequence[float],
batch_size: int = 1,
name: Optional[Text] = None,... | Initializes an instance of `ExplicitPositionalBiasRankingEnvironment`.
Args:
global_sampling_fn: A function that outputs a random 1d array or list of
ints or floats. This output is the global context. Its shape and type
must be consistent across calls.
item_sampling_fn: A function that ... | __init__ | python | tensorflow/agents | tf_agents/bandits/environments/ranking_environment.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/ranking_environment.py | Apache-2.0 |
def _get_relevances(self, global_obs, slotted_items):
"""Returns the relevance of each item in a batched action."""
s_range = range(self._num_slots)
b_range = range(self._batch_size)
relevances = np.array(
[
[
self._relevance_fn(global_obs[i], slotted_items[i, j])
... | Returns the relevance of each item in a batched action. | _get_relevances | python | tensorflow/agents | tf_agents/bandits/environments/ranking_environment.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/ranking_environment.py | Apache-2.0 |
def check_unbatched_time_step_spec(time_step, time_step_spec, batch_size):
"""Checks if time step conforms array spec, even if batched."""
if batch_size is None:
return array_spec.check_arrays_nest(time_step, time_step_spec)
return array_spec.check_arrays_nest(
time_step, array_spec.add_outer_dims_nest... | Checks if time step conforms array spec, even if batched. | check_unbatched_time_step_spec | python | tensorflow/agents | tf_agents/bandits/environments/ranking_environment_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/ranking_environment_test.py | Apache-2.0 |
def __init__(
self,
global_context_sampling_fn: Callable[[], types.Array],
arm_context_sampling_fn: Callable[[], types.Array],
max_num_actions: int,
reward_fn: Callable[[types.Array], Sequence[float]],
num_actions_fn: Optional[Callable[[], int]] = None,
batch_size: Optional[int... | Initializes the environment.
In each round, global context is generated by global_context_sampling_fn,
per-arm contexts are generated by arm_context_sampling_fn. The reward_fn
function takes the concatenation of a global and a per-arm feature, and
outputs a possibly random reward.
In case `num_acti... | __init__ | python | tensorflow/agents | tf_agents/bandits/environments/stationary_stochastic_per_arm_py_environment.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/stationary_stochastic_per_arm_py_environment.py | Apache-2.0 |
def check_unbatched_time_step_spec(time_step, time_step_spec, batch_size):
"""Checks if time step conforms array spec, even if batched."""
if batch_size is None:
return array_spec.check_arrays_nest(time_step, time_step_spec)
return array_spec.check_arrays_nest(
time_step, array_spec.add_outer_dims_nest... | Checks if time step conforms array spec, even if batched. | check_unbatched_time_step_spec | python | tensorflow/agents | tf_agents/bandits/environments/stationary_stochastic_per_arm_py_environment_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/stationary_stochastic_per_arm_py_environment_test.py | Apache-2.0 |
def __init__(
self,
context_sampling_fn: Callable[[], np.ndarray],
reward_fns: Sequence[Callable[[np.ndarray], Sequence[float]]],
constraint_fns: Optional[
Sequence[Callable[[np.ndarray], Sequence[float]]]
] = None,
batch_size: Optional[int] = 1,
name: Optional[Text] ... | Initializes a Stationary Stochastic Bandit environment.
In each round, context is generated by context_sampling_fn, this context is
passed through a reward_function for each arm.
Example:
def context_sampling_fn():
return np.random.randint(0, 10, [1, 2]) # 2-dim ints between 0 and 10
... | __init__ | python | tensorflow/agents | tf_agents/bandits/environments/stationary_stochastic_py_environment.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/stationary_stochastic_py_environment.py | Apache-2.0 |
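Assuming the contract sketched in the docstring (one context sampler plus one reward function per arm; every name below is illustrative), a single round of the environment reduces to:

```python
import random

def context_sampling_fn():
    # 2-dim integer context, mirroring the docstring example.
    return [random.randint(0, 9), random.randint(0, 9)]

# One reward function per arm; in this toy construction arm a shifts the
# mean reward by a, so arm 2 is always best.
reward_fns = [lambda ctx, a=a: float(sum(ctx)) + a for a in range(3)]

context = context_sampling_fn()
rewards = [fn(context) for fn in reward_fns]
# rewards has one entry per arm; a bandit agent only observes the entry
# for the arm it actually pulled
```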
def check_unbatched_time_step_spec(time_step, time_step_spec, batch_size):
"""Checks if time step conforms array spec, even if batched."""
if batch_size is None:
return array_spec.check_arrays_nest(time_step, time_step_spec)
if not all([spec.shape[0] == batch_size for spec in time_step]):
return False
... | Checks if time step conforms array spec, even if batched. | check_unbatched_time_step_spec | python | tensorflow/agents | tf_agents/bandits/environments/stationary_stochastic_py_environment_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/stationary_stochastic_py_environment_test.py | Apache-2.0 |
def __init__(
self,
global_context_sampling_fn: Callable[[], types.Array],
arm_context_sampling_fn: Callable[[], types.Array],
num_actions: int,
reward_fn: Callable[[types.Array], Sequence[float]],
batch_size: Optional[int] = 1,
name: Optional[Text] = 'stationary_stochastic_str... | Initializes the environment.
In each round, global context is generated by global_context_sampling_fn,
per-arm contexts are generated by arm_context_sampling_fn.
The two feature generating functions should output a single observation, not
including either the batch_size or the number of actions.
... | __init__ | python | tensorflow/agents | tf_agents/bandits/environments/stationary_stochastic_structured_py_environment.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/stationary_stochastic_structured_py_environment.py | Apache-2.0 |
def check_unbatched_time_step_spec(time_step, time_step_spec, batch_size):
"""Checks if time step conforms array spec, even if batched."""
if batch_size is None:
return array_spec.check_arrays_nest(time_step, time_step_spec)
return array_spec.check_arrays_nest(
time_step, array_spec.add_outer_dims_nest... | Checks if time step conforms array spec, even if batched. | check_unbatched_time_step_spec | python | tensorflow/agents | tf_agents/bandits/environments/stationary_stochastic_structured_py_environment_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/stationary_stochastic_structured_py_environment_test.py | Apache-2.0 |
def __init__(
self,
delta: float,
mu_base: Sequence[float],
std_base: Sequence[float],
mu_high: float,
std_high: float,
batch_size: Optional[int] = None,
name: Optional[Text] = 'wheel',
):
"""Initializes the Wheel Bandit environment.
Args:
delta: float in... | Initializes the Wheel Bandit environment.
Args:
delta: float in `(0, 1)`. Exploration parameter.
mu_base: (vector of float) Mean reward for each action, if the context
norm is below delta. The size of the vector is expected to be 5 (i.e.,
equal to the number of actions).
std_base:... | __init__ | python | tensorflow/agents | tf_agents/bandits/environments/wheel_py_environment.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/wheel_py_environment.py | Apache-2.0 |
def _observe(self) -> types.NestedArray:
"""Returns 2-dim samples falling in the unit circle."""
theta = np.random.uniform(0.0, 2.0 * np.pi, (self._batch_size))
r = np.sqrt(np.random.uniform(size=self._batch_size))
batched_observations = np.stack(
[r * np.cos(theta), r * np.sin(theta)], axis=1
... | Returns 2-dim samples falling in the unit circle. | _observe | python | tensorflow/agents | tf_agents/bandits/environments/wheel_py_environment.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/wheel_py_environment.py | Apache-2.0 |
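The `sqrt` in `_observe` is what makes the samples uniform over the disk: the area inside radius `r` grows like `r**2`, so drawing `r = sqrt(U)` with `U` uniform compensates exactly. A stdlib-only sketch of the same sampler (helper name is illustrative):

```python
import math
import random

def sample_unit_disk(batch_size):
    """Uniform samples on the unit disk via theta ~ U(0, 2*pi), r = sqrt(U)."""
    points = []
    for _ in range(batch_size):
        theta = random.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(random.random())  # sqrt corrects for area ~ r**2
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

pts = sample_unit_disk(1000)
# every sampled point has Euclidean norm at most 1
```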
def test_observation_validity(self, batch_size):
"""Tests that the observations fall into the unit circle."""
env = wheel_py_environment.WheelPyEnvironment(
delta=0.5,
mu_base=[1.2, 1.0, 1.0, 1.0, 1.0],
std_base=0.01 * np.ones(5),
mu_high=50.0,
std_high=0.01,
batc... | Tests that the observations fall into the unit circle. | test_observation_validity | python | tensorflow/agents | tf_agents/bandits/environments/wheel_py_environment_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/wheel_py_environment_test.py | Apache-2.0 |
def test_rewards_validity(self, batch_size):
"""Tests that the rewards are valid."""
env = wheel_py_environment.WheelPyEnvironment(
delta=0.5,
mu_base=[1.2, 1.0, 1.0, 1.0, 1.0],
std_base=0.01 * np.ones(5),
mu_high=50.0,
std_high=0.01,
batch_size=batch_size,
)
... | Tests that the rewards are valid. | test_rewards_validity | python | tensorflow/agents | tf_agents/bandits/environments/wheel_py_environment_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/wheel_py_environment_test.py | Apache-2.0 |
def __init__(
self,
baseline_reward_fn: Callable[[types.Tensor], types.Tensor],
name: Optional[Text] = 'RegretMetric',
dtype: float = tf.float32,
):
"""Computes the regret with respect to a baseline.
The regret is computed as the difference of the current reward
from the... | Computes the regret with respect to a baseline.
The regret is computed as the difference of the current reward
from the baseline action reward. The latter is computed by calling the input
`baseline_reward_fn` function that given a (batched) observation computes
the baseline action reward.
... | __init__ | python | tensorflow/agents | tf_agents/bandits/metrics/tf_metrics.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/metrics/tf_metrics.py | Apache-2.0 |
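Stripped of batching and TensorFlow, the metric's update is a subtraction per step; the `regret` helper and the `max` baseline below are illustrative, not library code:

```python
def regret(baseline_reward_fn, observations, rewards):
    # Per-step regret: the baseline action's reward minus the observed reward.
    return [baseline_reward_fn(o) - r for o, r in zip(observations, rewards)]

# Toy baseline: the best attainable reward is the max over the observation,
# as if each observation listed the mean reward of every arm.
per_step = regret(max, [[0.1, 0.9], [0.5, 0.2]], [0.9, 0.2])
# first step: optimal choice, zero regret; second step: regret of about 0.3
```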
def call(self, trajectory):
"""Update the regret value.
Args:
trajectory: A tf_agents.trajectory.Trajectory
Returns:
The arguments, for easy chaining.
"""
baseline_reward = self._baseline_reward_fn(trajectory.observation)
trajectory_reward = trajectory.reward
if isinstance(traj... | Update the regret value.
Args:
trajectory: A tf_agents.trajectory.Trajectory
Returns:
The arguments, for easy chaining.
| call | python | tensorflow/agents | tf_agents/bandits/metrics/tf_metrics.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/metrics/tf_metrics.py | Apache-2.0 |
def __init__(
self,
baseline_action_fn: Callable[[types.Tensor], types.Tensor],
name: Optional[Text] = 'SuboptimalArmsMetric',
dtype: float = tf.float32,
):
"""Computes the number of suboptimal arms with respect to a baseline.
Args:
baseline_action_fn: function that computes the... | Computes the number of suboptimal arms with respect to a baseline.
Args:
baseline_action_fn: function that computes the action used as a baseline
for computing the metric.
name: (str) name of the metric
dtype: dtype of the metric value.
| __init__ | python | tensorflow/agents | tf_agents/bandits/metrics/tf_metrics.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/metrics/tf_metrics.py | Apache-2.0 |
def call(self, trajectory):
"""Update the metric value.
Args:
trajectory: A tf_agents.trajectory.Trajectory
Returns:
The arguments, for easy chaining.
"""
baseline_action = self._baseline_action_fn(trajectory.observation)
disagreement = tf.cast(
tf.not_equal(baseline_action... | Update the metric value.
Args:
trajectory: A tf_agents.trajectory.Trajectory
Returns:
The arguments, for easy chaining.
| call | python | tensorflow/agents | tf_agents/bandits/metrics/tf_metrics.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/metrics/tf_metrics.py | Apache-2.0 |
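The `tf.not_equal` above averages an indicator of disagreement with the baseline action over the batch. A plain-Python sketch (the helper and the argmax baseline policy are illustrative):

```python
def suboptimal_arms_fraction(baseline_action_fn, observations, actions):
    # 1.0 where the chosen action differs from the baseline action, else 0.0.
    flags = [
        1.0 if baseline_action_fn(o) != a else 0.0
        for o, a in zip(observations, actions)
    ]
    return sum(flags) / len(flags)

# Toy baseline policy: argmax of the observation vector.
argmax = lambda o: max(range(len(o)), key=o.__getitem__)
frac = suboptimal_arms_fraction(argmax, [[0.1, 0.9], [0.8, 0.2]], [1, 1])
# frac == 0.5: only the second action disagrees with the baseline (arm 0)
```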
def __init__(
self,
constraint: constraints.BaseConstraint,
name: Optional[Text] = 'ConstraintViolationMetric',
dtype: float = tf.float32,
):
"""Computes the constraint violations given an input constraint.
Given a certain constraint, this metric computes how often the selected
ac... | Computes the constraint violations given an input constraint.
Given a certain constraint, this metric computes how often the selected
actions in the trajectory violate the constraint.
Args:
constraint: an instance of `tf_agents.bandits.policies.BaseConstraint`.
name: (str) name of the metric
... | __init__ | python | tensorflow/agents | tf_agents/bandits/metrics/tf_metrics.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/metrics/tf_metrics.py | Apache-2.0 |
def call(self, trajectory):
"""Update the constraint violations metric.
Args:
trajectory: A tf_agents.trajectory.Trajectory
Returns:
The arguments, for easy chaining.
"""
feasibility_prob_all_actions = self._constraint(trajectory.observation)
feasibility_prob_selected_actions = com... | Update the constraint violations metric.
Args:
trajectory: A tf_agents.trajectory.Trajectory
Returns:
The arguments, for easy chaining.
| call | python | tensorflow/agents | tf_agents/bandits/metrics/tf_metrics.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/metrics/tf_metrics.py | Apache-2.0 |
def __init__(
self,
estimated_reward_fn: Callable[[types.Tensor], types.Tensor],
name: Optional[Text] = 'DistanceFromGreedyMetric',
dtype: float = tf.float32,
):
"""Init function for the metric.
Args:
estimated_reward_fn: A function that takes the observation as input and
... | Init function for the metric.
Args:
estimated_reward_fn: A function that takes the observation as input and
computes the estimated rewards that the greedy policy uses.
name: (str) name of the metric
dtype: dtype of the metric value.
| __init__ | python | tensorflow/agents | tf_agents/bandits/metrics/tf_metrics.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/metrics/tf_metrics.py | Apache-2.0 |
def call(self, trajectory):
"""Update the metric value.
Args:
trajectory: A tf_agents.trajectory.Trajectory
Returns:
The arguments, for easy chaining.
"""
all_estimated_rewards = self._estimated_reward_fn(trajectory.observation)
max_estimated_rewards = tf.reduce_max(all_estimated_r... | Update the metric value.
Args:
trajectory: A tf_agents.trajectory.Trajectory
Returns:
The arguments, for easy chaining.
| call | python | tensorflow/agents | tf_agents/bandits/metrics/tf_metrics.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/metrics/tf_metrics.py | Apache-2.0 |
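The distance-from-greedy computation reduces to a per-row gap between the greedy (max) estimate and the chosen action's estimate; the helper below is an illustrative sketch of that reduction:

```python
def distance_from_greedy(estimated_rewards, chosen_actions):
    # Gap between the greedy (max) estimate and the chosen action's estimate.
    return [
        max(row) - row[a] for row, a in zip(estimated_rewards, chosen_actions)
    ]

gaps = distance_from_greedy([[1.0, 3.0], [2.0, 2.0]], [0, 1])
# gaps == [2.0, 0.0]; an all-greedy policy would produce all zeros
```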
def __call__(self, observation, actions=None):
"""Returns the probability of input actions being feasible."""
if actions is None:
actions = tf.range(
self._action_spec.minimum, self._action_spec.maximum + 1
)
actions = tf.reshape(actions, [1, -1])
actions = tf.tile(actions, [se... | Returns the probability of input actions being feasible. | __call__ | python | tensorflow/agents | tf_agents/bandits/metrics/tf_metrics_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/metrics/tf_metrics_test.py | Apache-2.0 |
def _validate_scalarization_parameter_shape(
multi_objectives: tf.Tensor,
params: Dict[str, Union[Sequence[ScalarFloat], tf.Tensor]],
):
"""A private helper that validates the shapes of scalarization parameters.
Every scalarization parameter in the input dictionary is either a 1-D tensor
or `Sequence`, o... | A private helper that validates the shapes of scalarization parameters.
Every scalarization parameter in the input dictionary is either a 1-D tensor
or `Sequence`, or a tensor whose shape matches the shape of the input
`multi_objectives` tensor. This is invoked by the `Scalarizer.call` method.
Args:
multi... | _validate_scalarization_parameter_shape | python | tensorflow/agents | tf_agents/bandits/multi_objective/multi_objective_scalarizer.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/multi_objective/multi_objective_scalarizer.py | Apache-2.0 |
def __init__(self, num_of_objectives: int):
"""Initialize the Scalarizer.
Args:
num_of_objectives: An integer no less than 2 indicating the number of
objectives to scalarize.
Raises:
ValueError: if `not isinstance(num_of_objectives, int)`.
ValueError: if `num_of_objectives < 2`.
... | Initialize the Scalarizer.
Args:
num_of_objectives: An integer no less than 2 indicating the number of
objectives to scalarize.
Raises:
ValueError: if `not isinstance(num_of_objectives, int)`.
ValueError: if `num_of_objectives < 2`.
| __init__ | python | tensorflow/agents | tf_agents/bandits/multi_objective/multi_objective_scalarizer.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/multi_objective/multi_objective_scalarizer.py | Apache-2.0 |
def __call__(self, multi_objectives: tf.Tensor) -> tf.Tensor:
"""Returns a single reward by scalarizing multiple objectives.
Args:
multi_objectives: A `Tensor` of shape [batch_size, number_of_objectives],
where each column represents an objective.
Returns: A `Tensor` of shape [batch_size] re... | Returns a single reward by scalarizing multiple objectives.
Args:
multi_objectives: A `Tensor` of shape [batch_size, number_of_objectives],
where each column represents an objective.
Returns: A `Tensor` of shape [batch_size] representing scalarized rewards.
Raises:
ValueError: if `mul... | __call__ | python | tensorflow/agents | tf_agents/bandits/multi_objective/multi_objective_scalarizer.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/multi_objective/multi_objective_scalarizer.py | Apache-2.0 |
def _validate_scalarization_parameters(self, params: Dict[str, tf.Tensor]):
"""Validates the scalarization parameters.
Each scalarization parameter in the input dictionary should be a rank-2
tensor, and the last dimension size should match `self._num_of_objectives`.
Args:
params: A dictionary fr... | Validates the scalarization parameters.
Each scalarization parameter in the input dictionary should be a rank-2
tensor, and the last dimension size should match `self._num_of_objectives`.
Args:
params: A dictionary from parameter names to parameter tensors.
Raises:
ValueError: if any inpu... | _validate_scalarization_parameters | python | tensorflow/agents | tf_agents/bandits/multi_objective/multi_objective_scalarizer.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/multi_objective/multi_objective_scalarizer.py | Apache-2.0 |
def __init__(
self,
weights: Sequence[ScalarFloat],
multi_objective_transform: Optional[
Callable[[tf.Tensor], tf.Tensor]
] = None,
):
"""Initialize the LinearScalarizer.
Args:
weights: A `Sequence` of weights for linearly combining the objectives.
multi_objectiv... | Initialize the LinearScalarizer.
Args:
weights: A `Sequence` of weights for linearly combining the objectives.
multi_objective_transform: An `Optional` `Callable` that takes in a
`tf.Tensor` of multiple objective values and applies an arbitrary
transform that returns a `tf.Tensor` of tra... | __init__ | python | tensorflow/agents | tf_agents/bandits/multi_objective/multi_objective_scalarizer.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/multi_objective/multi_objective_scalarizer.py | Apache-2.0 |
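Linear scalarization is a weighted sum of the objectives, with the optional transform applied elementwise first. A stdlib sketch under those assumptions (function names are illustrative):

```python
def linear_scalarize(objectives, weights, transform=None):
    # Optionally transform each objective, then take the weighted sum.
    if transform is not None:
        objectives = [transform(o) for o in objectives]
    return sum(w * o for w, o in zip(weights, objectives))

s = linear_scalarize([2.0, 4.0], [0.5, 0.25])
# s == 2.0  (0.5 * 2 + 0.25 * 4)
s2 = linear_scalarize([2.0, 4.0], [0.5, 0.25], transform=lambda o: 2.0 * o)
# doubling every objective doubles the scalarized value
```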
def set_parameters(self, weights: tf.Tensor): # pytype: disable=signature-mismatch # overriding-parameter-count-checks
"""Set the scalarization parameter of the LinearScalarizer.
Args:
weights: A rank-2 `tf.Tensor` of weights shaped as [batch_size,
self._num_of_objectives], where `batch_size`... | Set the scalarization parameter of the LinearScalarizer.
Args:
weights: A rank-2 `tf.Tensor` of weights shaped as [batch_size,
self._num_of_objectives], where `batch_size` should match the batch size
of the `multi_objectives` passed to the scalarizer call.
Raises:
ValueError: if ... | set_parameters | python | tensorflow/agents | tf_agents/bandits/multi_objective/multi_objective_scalarizer.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/multi_objective/multi_objective_scalarizer.py | Apache-2.0 |
def __init__(
self,
weights: Sequence[ScalarFloat],
reference_point: Sequence[ScalarFloat],
):
"""Initialize the ChebyshevScalarizer.
Args:
weights: A `Sequence` of weights.
reference_point: A `Sequence` of coordinates for the reference point.
Raises:
ValueError: if `... | Initialize the ChebyshevScalarizer.
Args:
weights: A `Sequence` of weights.
reference_point: A `Sequence` of coordinates for the reference point.
Raises:
ValueError: if `len(weights) != len(reference_point)`.
| __init__ | python | tensorflow/agents | tf_agents/bandits/multi_objective/multi_objective_scalarizer.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/multi_objective/multi_objective_scalarizer.py | Apache-2.0 |
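A common form of Chebyshev scalarization, which this class appears to implement, takes the worst-case weighted gain over the reference point, `min_i w_i * (o_i - z_i)`. A sketch under that assumption (helper name is illustrative):

```python
def chebyshev_scalarize(objectives, weights, reference_point):
    # Worst-case weighted improvement over the reference point.
    return min(
        w * (o - z) for w, o, z in zip(weights, objectives, reference_point)
    )

s = chebyshev_scalarize([3.0, 5.0], [1.0, 1.0], [0.0, 4.0])
# s == 1.0: the second objective is the binding one (5 - 4 = 1)
```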
def set_parameters(self, weights: tf.Tensor, reference_point: tf.Tensor): # pytype: disable=signature-mismatch # overriding-parameter-count-checks
"""Set the scalarization parameters for the ChebyshevScalarizer.
Args:
weights: A rank-2 `tf.Tensor` of weights shaped as [batch_size,
self._num_of_... | Set the scalarization parameters for the ChebyshevScalarizer.
Args:
weights: A rank-2 `tf.Tensor` of weights shaped as [batch_size,
self._num_of_objectives], where `batch_size` should match the batch size
of the `multi_objectives` passed to the scalarizer call.
reference_point: A `tf.Te... | set_parameters | python | tensorflow/agents | tf_agents/bandits/multi_objective/multi_objective_scalarizer.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/multi_objective/multi_objective_scalarizer.py | Apache-2.0 |
def __init__(
self,
direction: Sequence[ScalarFloat],
transform_params: Sequence[PARAMS],
multi_objective_transform: Optional[
Callable[
[tf.Tensor, Sequence[ScalarFloat], Sequence[ScalarFloat]],
tf.Tensor,
]
] = None,
):
"""Initialize ... | Initialize the HyperVolumeScalarizer.
Args:
direction: A `Sequence` representing a directional vector, which will be
normalized to have unit length. Coordinates of the normalized direction
whose absolute values are less than `HyperVolumeScalarizer.ALMOST_ZERO`
will be considered zeros... | __init__ | python | tensorflow/agents | tf_agents/bandits/multi_objective/multi_objective_scalarizer.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/multi_objective/multi_objective_scalarizer.py | Apache-2.0 |
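One way to read the hypervolume-style scalarization described above: normalize the direction to unit length, then take the smallest per-coordinate ratio of objective to direction, skipping near-zero coordinates. The sketch below assumes that reading; the constant and helper names are illustrative, not the library's:

```python
import math

ALMOST_ZERO = 1e-8  # illustrative stand-in for the class constant

def hypervolume_scalarize(objectives, direction):
    # Normalize the direction to unit length.
    norm = math.sqrt(sum(d * d for d in direction))
    unit = [d / norm for d in direction]
    # Smallest ratio along coordinates whose direction is not (almost) zero;
    # negative objectives are clipped to zero in this sketch.
    ratios = [
        max(o, 0.0) / d
        for o, d in zip(objectives, unit)
        if abs(d) > ALMOST_ZERO
    ]
    return min(ratios)

s = hypervolume_scalarize([3.0, 4.0], [1.0, 0.0])
# the zero direction coordinate is ignored, so s == 3.0 / 1.0
```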
def set_parameters(
self,
direction: tf.Tensor, # pytype: disable=signature-mismatch # overriding-parameter-count-checks
transform_params: Dict[str, tf.Tensor],
):
"""Set the scalarization parameters for the HyperVolumeScalarizer.
Args:
direction: A `tf.Tensor` representing a direct... | Set the scalarization parameters for the HyperVolumeScalarizer.
Args:
direction: A `tf.Tensor` representing a directional vector, which will be
normalized to have unit length. Coordinates of the normalized direction
whose absolute values are less than `HyperVolumeScalarizer.ALMOST_ZERO`
... | set_parameters | python | tensorflow/agents | tf_agents/bandits/multi_objective/multi_objective_scalarizer.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/multi_objective/multi_objective_scalarizer.py | Apache-2.0 |
def _remove_num_actions_dim_from_spec(
observation_spec: types.NestedTensorSpec,
) -> types.NestedTensorSpec:
"""Removes the extra `num_actions` dimension from the observation spec."""
obs_spec_no_num_actions = {
bandit_spec_utils.GLOBAL_FEATURE_KEY: observation_spec[
bandit_spec_utils.GLOBAL_FE... | Removes the extra `num_actions` dimension from the observation spec. | _remove_num_actions_dim_from_spec | python | tensorflow/agents | tf_agents/bandits/networks/global_and_arm_feature_network.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/networks/global_and_arm_feature_network.py | Apache-2.0 |
def create_feed_forward_common_tower_network(
observation_spec: types.NestedTensorSpec,
global_layers: Sequence[int],
arm_layers: Sequence[int],
common_layers: Sequence[int],
output_dim: int = 1,
global_preprocessing_combiner: Optional[Callable[..., types.Tensor]] = None,
arm_preprocessing_c... | Creates a common tower network with feedforward towers.
The network produced by this function can be used either in
`GreedyRewardPredictionPolicy`, or `NeuralLinUCBPolicy`.
In the former case, the network must have `output_dim=1`; it is going to be an
instance of `QNetwork`, and used in the policy as a reward ... | create_feed_forward_common_tower_network | python | tensorflow/agents | tf_agents/bandits/networks/global_and_arm_feature_network.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/networks/global_and_arm_feature_network.py | Apache-2.0 |
def create_feed_forward_dot_product_network(
observation_spec: types.NestedTensorSpec,
global_layers: Sequence[int],
arm_layers: Sequence[int],
activation_fn: Callable[
[types.Tensor], types.Tensor
] = tf.keras.activations.relu,
) -> types.Network:
"""Creates a dot product network with fee... | Creates a dot product network with feedforward towers.
Args:
observation_spec: A nested tensor spec containing the specs for global as
well as per-arm observations.
global_layers: Iterable of ints. Specifies the layers of the global tower.
arm_layers: Iterable of ints. Specifies the layers of the a... | create_feed_forward_dot_product_network | python | tensorflow/agents | tf_agents/bandits/networks/global_and_arm_feature_network.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/networks/global_and_arm_feature_network.py | Apache-2.0 |
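In the dot-product variant, each tower maps its features to a common embedding size and the per-arm reward estimate is the inner product of the global embedding with that arm's embedding. A toy sketch with precomputed embeddings (all names illustrative):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def dot_product_rewards(global_embedding, arm_embeddings):
    # One reward estimate per arm: <global tower output, arm tower output>.
    return [dot(global_embedding, arm) for arm in arm_embeddings]

estimates = dot_product_rewards([1.0, 2.0], [[1.0, 0.0], [0.0, 3.0]])
# estimates == [1.0, 6.0]; a greedy policy would pick the second arm
```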
def __init__(
self,
observation_spec: types.NestedTensorSpec,
global_network: types.Network,
arm_network: types.Network,
common_network: types.Network,
name='GlobalAndArmCommonTowerNetwork',
) -> types.Network:
"""Initializes an instance of `GlobalAndArmCommonTowerNetwork`.
... | Initializes an instance of `GlobalAndArmCommonTowerNetwork`.
The network architecture contains networks for both the global and the arm
features. The outputs of these networks are concatenated and led through a
third (common) network which in turn outputs reward estimates.
Args:
observation_spec... | __init__ | python | tensorflow/agents | tf_agents/bandits/networks/global_and_arm_feature_network.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/networks/global_and_arm_feature_network.py | Apache-2.0 |
def call(self, observation, step_type=None, network_state=()):
"""Runs the observation through the network."""
global_obs = observation[bandit_spec_utils.GLOBAL_FEATURE_KEY]
arm_obs = observation[bandit_spec_utils.PER_ARM_FEATURE_KEY]
arm_output, arm_state = self._arm_network(
arm_obs, step_typ... | Runs the observation through the network. | call | python | tensorflow/agents | tf_agents/bandits/networks/global_and_arm_feature_network.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/networks/global_and_arm_feature_network.py | Apache-2.0 |
def __init__(
self,
observation_spec: types.NestedTensorSpec,
global_network: types.Network,
arm_network: types.Network,
name: Optional[Text] = 'GlobalAndArmDotProductNetwork',
):
"""Initializes an instance of `GlobalAndArmDotProductNetwork`.
The network architecture contains ne... | Initializes an instance of `GlobalAndArmDotProductNetwork`.
The network architecture contains networks for both the global and the arm
features. The reward estimates will be the dot product of the global and per
arm outputs.
Args:
observation_spec: The observation spec for the policy that uses t... | __init__ | python | tensorflow/agents | tf_agents/bandits/networks/global_and_arm_feature_network.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/networks/global_and_arm_feature_network.py | Apache-2.0 |
def call(self, observation, step_type=None, network_state=()):
"""Runs the observation through the network."""
global_obs = observation[bandit_spec_utils.GLOBAL_FEATURE_KEY]
arm_obs = observation[bandit_spec_utils.PER_ARM_FEATURE_KEY]
global_output, global_state = self._global_network(
global_... | Runs the observation through the network. | call | python | tensorflow/agents | tf_agents/bandits/networks/global_and_arm_feature_network.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/networks/global_and_arm_feature_network.py | Apache-2.0 |
def __init__(
self,
input_tensor_spec: types.NestedTensorSpec,
action_spec: types.NestedTensorSpec,
preprocessing_layers: Optional[Callable[..., types.Tensor]] = None,
preprocessing_combiner: Optional[Callable[..., types.Tensor]] = None,
conv_layer_params: Optional[Sequence[Any]] = N... | Creates an instance of `HeteroscedasticQNetwork`.
Args:
input_tensor_spec: A nest of `tensor_spec.TensorSpec` representing the
input observations.
action_spec: A nest of `tensor_spec.BoundedTensorSpec` representing the
actions.
preprocessing_layers: (Optional.) A nest of `tf.keras... | __init__ | python | tensorflow/agents | tf_agents/bandits/networks/heteroscedastic_q_network.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/networks/heteroscedastic_q_network.py | Apache-2.0 |
def call(self, observation, step_type=None, network_state=()):
"""Runs the given observation through the network.
Args:
observation: The observation to provide to the network.
step_type: The step type for the given observation. See `StepType` in
time_step.py.
network_state: A state tu... | Runs the given observation through the network.
Args:
observation: The observation to provide to the network.
step_type: The step type for the given observation. See `StepType` in
time_step.py.
network_state: A state tuple to pass to the network, mainly used by RNNs.
Returns:
A... | call | python | tensorflow/agents | tf_agents/bandits/networks/heteroscedastic_q_network.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/networks/heteroscedastic_q_network.py | Apache-2.0 |
def testVarianceBoundaryConditions(self):
"""Tests that min/max variance conditions are satisfied."""
batch_size = 3
num_state_dims = 5
min_variance = 1.0
max_variance = 2.0
eps = 0.0001
states = tf.random.uniform([batch_size, num_state_dims])
network = heteroscedastic_q_network.Heterosc... | Tests that min/max variance conditions are satisfied. | testVarianceBoundaryConditions | python | tensorflow/agents | tf_agents/bandits/networks/heteroscedastic_q_network_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/networks/heteroscedastic_q_network_test.py | Apache-2.0 |
def _distribution(self, time_step, policy_state):
"""Implementation of `distribution`. Returns a `Categorical` distribution.
The returned `Categorical` distribution has (unnormalized) probabilities
`exp(inverse_temperature * weights)`.
Args:
time_step: A `TimeStep` tuple corresponding to `time_s... | Implementation of `distribution`. Returns a `Categorical` distribution.
The returned `Categorical` distribution has (unnormalized) probabilities
`exp(inverse_temperature * weights)`.
Args:
time_step: A `TimeStep` tuple corresponding to `time_step_spec()`.
policy_state: Unused in `CategoricalPo... | _distribution | python | tensorflow/agents | tf_agents/bandits/policies/categorical_policy.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/policies/categorical_policy.py | Apache-2.0 |
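The unnormalized probabilities `exp(inverse_temperature * weights)` define a Boltzmann distribution over arms. A stdlib sketch of the normalized version (the max-subtraction is a standard stability trick, not taken from the source):

```python
import math

def categorical_probs(weights, inverse_temperature):
    logits = [inverse_temperature * w for w in weights]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

cold = categorical_probs([1.0, 2.0], inverse_temperature=0.0)
hot = categorical_probs([1.0, 2.0], inverse_temperature=10.0)
# cold is uniform; hot puts almost all mass on the larger weight
```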
def testInverseTempUpdate(self, observation_shape, weights, seed):
"""Test that temperature updates perform as expected as it is increased."""
observation_spec = tensor_spec.TensorSpec(
shape=observation_shape, dtype=tf.float32, name='observation_spec'
)
time_step_spec = time_step.time_step_spec... | Test that temperature updates perform as expected as it is increased. | testInverseTempUpdate | python | tensorflow/agents | tf_agents/bandits/policies/categorical_policy_test.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/policies/categorical_policy_test.py | Apache-2.0 |
def __init__(
self,
time_step_spec: types.TimeStep,
action_spec: types.BoundedTensorSpec,
name: Optional[Text] = None,
):
"""Initialization of the BaseConstraint class.
Args:
time_step_spec: A `TimeStep` spec of the expected time_steps.
action_spec: A nest of `BoundedTenso... | Initialization of the BaseConstraint class.
Args:
time_step_spec: A `TimeStep` spec of the expected time_steps.
action_spec: A nest of `BoundedTensorSpec` representing the actions.
name: Python str name of this constraint.
| __init__ | python | tensorflow/agents | tf_agents/bandits/policies/constraints.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/policies/constraints.py | Apache-2.0 |
def __call__(
self,
observation: types.NestedTensor,
actions: Optional[types.Tensor] = None,
) -> types.Tensor:
"""Returns the probability of input actions being feasible.""" | Returns the probability of input actions being feasible. | __call__ | python | tensorflow/agents | tf_agents/bandits/policies/constraints.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/policies/constraints.py | Apache-2.0 |
def __init__(
self,
time_step_spec: types.TimeStep,
action_spec: types.BoundedTensorSpec,
constraint_network: Optional[types.Network],
error_loss_fn: types.LossFn = tf.compat.v1.losses.mean_squared_error,
name: Optional[Text] = 'NeuralConstraint',
):
"""Creates a trainable cons... | Creates a trainable constraint using a neural network.
Args:
time_step_spec: A `TimeStep` spec of the expected time_steps.
action_spec: A nest of `BoundedTensorSpec` representing the actions.
constraint_network: An instance of `tf_agents.network.Network` used to
provide estimates of actio... | __init__ | python | tensorflow/agents | tf_agents/bandits/policies/constraints.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/policies/constraints.py | Apache-2.0 |
def compute_loss(
self,
observations: types.NestedTensor,
actions: types.NestedTensor,
rewards: types.Tensor,
weights: Optional[types.Float] = None,
training: bool = False,
) -> types.Tensor:
"""Computes loss for training the constraint network.
Args:
observations: A... | Computes loss for training the constraint network.
Args:
observations: A batch of observations.
actions: A batch of actions.
rewards: A batch of rewards.
weights: Optional scalar or elementwise (per-batch-entry) importance
weights. The output batch loss will be scaled by these weig... | compute_loss | python | tensorflow/agents | tf_agents/bandits/policies/constraints.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/policies/constraints.py | Apache-2.0 |
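Conceptually, the loss gathers the constraint network's prediction for each action actually taken and compares it to the observed signal with a (weighted) squared error; the helper below is an illustrative reduction of that idea, not the library implementation:

```python
def constraint_loss(predicted_values, actions, targets, weights=None):
    # Prediction for the action taken in each batch entry.
    chosen = [row[a] for row, a in zip(predicted_values, actions)]
    if weights is None:
        weights = [1.0] * len(chosen)
    # Weighted squared error, averaged over the batch.
    errs = [w * (p - t) ** 2 for w, p, t in zip(weights, chosen, targets)]
    return sum(errs) / len(errs)

loss = constraint_loss([[0.0, 1.0], [2.0, 0.0]], [1, 0], [1.0, 0.0])
# loss == 2.0: the first prediction is exact, the second misses by 2
```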
def __call__(self, observation, actions=None):
"""Returns the probability of input actions being feasible."""
batch_dims = nest_utils.get_outer_shape(
observation, self._time_step_spec.observation
)
shape = tf.concat(
[
batch_dims,
tf.constant(self._num_actions, s... | Returns the probability of input actions being feasible. | __call__ | python | tensorflow/agents | tf_agents/bandits/policies/constraints.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/policies/constraints.py | Apache-2.0 |
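The base class's `__call__` above only builds a `[batch_size, num_actions]` shape; with no comparator applied, a reasonable reading is that every action is reported fully feasible. A NumPy stand-in for that default mask (the all-ones behavior is an assumption, since only the shape construction is visible in the snippet):

```python
import numpy as np

def default_feasibility(batch_size, num_actions):
    # Sketch: absent a learned thresholding rule, every action in the
    # batch is reported as feasible with probability 1.
    return np.ones([batch_size, num_actions], dtype=np.float32)

mask = default_feasibility(batch_size=2, num_actions=3)
# mask.shape == (2, 3) and every entry is 1.0
```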
def __init__(
self,
time_step_spec: types.TimeStep,
action_spec: types.BoundedTensorSpec,
constraint_network: types.Network,
error_loss_fn: types.LossFn = tf.compat.v1.losses.mean_squared_error,
comparator_fn: types.ComparatorFn = tf.greater,
margin: float = 0.0,
baseline... | Creates a trainable relative constraint using a neural network.
Args:
time_step_spec: A `TimeStep` spec of the expected time_steps.
action_spec: A nest of `BoundedTensorSpec` representing the actions.
constraint_network: An instance of `tf_agents.network.Network` used to
provide estimates... | __init__ | python | tensorflow/agents | tf_agents/bandits/policies/constraints.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/policies/constraints.py | Apache-2.0 |
def __call__(self, observation, actions=None):
"""Returns the probability of input actions being feasible."""
predicted_values, _ = self._constraint_network(observation, training=False)
batch_dims = nest_utils.get_outer_shape(
observation, self._time_step_spec.observation
)
if self._baselin... | Returns the probability of input actions being feasible. | __call__ | python | tensorflow/agents | tf_agents/bandits/policies/constraints.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/policies/constraints.py | Apache-2.0 |
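A hedged NumPy sketch of the relative test: each action's estimate is compared against `margin` times the estimate of a baseline action. Using the per-row best action as the baseline when no `baseline_action_fn` is supplied is an assumption for illustration.

```python
import numpy as np

def relative_feasibility(predicted_values, margin=0.0, comparator=np.greater):
    """Feasible iff comparator(value[action], margin * value[baseline]) holds,
    with the per-row argmax used as the baseline (assumed default)."""
    baseline = predicted_values.max(axis=-1, keepdims=True)
    return comparator(predicted_values, margin * baseline).astype(np.float32)

vals = np.array([[1.0, 0.4, 0.8]])
mask = relative_feasibility(vals, margin=0.5)
# threshold = 0.5 * 1.0 = 0.5 -> [[1., 0., 1.]]
```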
def __init__(
self,
time_step_spec: types.TimeStep,
action_spec: types.BoundedTensorSpec,
constraint_network: types.Network,
error_loss_fn: types.LossFn = tf.compat.v1.losses.mean_squared_error,
comparator_fn: types.ComparatorFn = tf.greater,
absolute_value: float = 0.0,
... | Creates a trainable absolute constraint using a neural network.
Args:
time_step_spec: A `TimeStep` spec of the expected time_steps.
action_spec: A nest of `BoundedTensorSpec` representing the actions.
constraint_network: An instance of `tf_agents.network.Network` used to
provide estimates... | __init__ | python | tensorflow/agents | tf_agents/bandits/policies/constraints.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/policies/constraints.py | Apache-2.0 |
def __call__(self, observation, actions=None):
"""Returns the probability of input actions being feasible."""
predicted_values, _ = self._constraint_network(observation, training=False)
is_satisfied = self._comparator_fn(predicted_values, self._absolute_value)
return tf.cast(is_satisfied, tf.float32) | Returns the probability of input actions being feasible. | __call__ | python | tensorflow/agents | tf_agents/bandits/policies/constraints.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/policies/constraints.py | Apache-2.0 |
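The absolute test above is a straight threshold on the network's per-action estimates; in NumPy terms:

```python
import numpy as np

def absolute_feasibility(predicted_values, absolute_value, comparator=np.greater):
    # Mirrors the __call__ above: compare each per-action estimate to a
    # fixed threshold and cast the boolean result to a 0/1 float mask.
    return comparator(predicted_values, absolute_value).astype(np.float32)

vals = np.array([[0.2, 0.7], [0.9, 0.1]])
mask = absolute_feasibility(vals, 0.5)
# mask == [[0., 1.], [1., 0.]]
```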
def __init__(
self,
time_step_spec: types.TimeStep,
action_spec: types.BoundedTensorSpec,
constraint_network: types.Network,
quantile: float = 0.5,
comparator_fn: types.ComparatorFn = tf.greater,
quantile_value: float = 0.0,
name: Text = 'QuantileConstraint',
):
"""... | Creates a trainable quantile constraint using a neural network.
Args:
time_step_spec: A `TimeStep` spec of the expected time_steps.
action_spec: A nest of `BoundedTensorSpec` representing the actions.
constraint_network: An instance of `tf_agents.network.Network` used to
provide estimates... | __init__ | python | tensorflow/agents | tf_agents/bandits/policies/constraints.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/policies/constraints.py | Apache-2.0 |
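Training a network to predict a quantile of the reward is conventionally done with the pinball (quantile) loss. The sketch below shows that standard objective, not necessarily the library's exact implementation:

```python
import numpy as np

def pinball_loss(predicted_quantile, rewards, quantile=0.5):
    """Pinball loss for the `quantile`-th reward quantile: under-predictions
    are weighted by `quantile`, over-predictions by `1 - quantile`."""
    diff = rewards - predicted_quantile
    return float(np.mean(np.maximum(quantile * diff, (quantile - 1.0) * diff)))
```

At `quantile=0.5` this reduces to half the mean absolute error, recovering the median.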
def __call__(self, observation, actions=None):
"""Returns the probability of input actions being feasible."""
predicted_quantiles, _ = self._constraint_network(
observation, training=False
)
is_satisfied = self._comparator_fn(
predicted_quantiles, self._quantile_value
)
return tf... | Returns the probability of input actions being feasible. | __call__ | python | tensorflow/agents | tf_agents/bandits/policies/constraints.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/policies/constraints.py | Apache-2.0 |
def __init__(
self,
time_step_spec: types.TimeStep,
action_spec: types.BoundedTensorSpec,
constraint_network: types.Network,
quantile: float = 0.5,
comparator_fn: types.ComparatorFn = tf.greater,
baseline_action_fn: Optional[
Callable[[types.Tensor], types.Tensor]
... | Creates a trainable relative quantile constraint using a neural network.
Args:
time_step_spec: A `TimeStep` spec of the expected time_steps.
action_spec: A nest of `BoundedTensorSpec` representing the actions.
constraint_network: An instance of `tf_agents.network.Network` used to
provide ... | __init__ | python | tensorflow/agents | tf_agents/bandits/policies/constraints.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/policies/constraints.py | Apache-2.0 |
def __call__(self, observation, actions=None):
"""Returns the probability of input actions being feasible."""
predicted_quantiles, _ = self._constraint_network(
observation, training=False
)
batch_dims = nest_utils.get_outer_shape(
observation, self._time_step_spec.observation
)
... | Returns the probability of input actions being feasible. | __call__ | python | tensorflow/agents | tf_agents/bandits/policies/constraints.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/policies/constraints.py | Apache-2.0 |
def __init__(
self,
time_step_spec: types.TimeStep,
action_spec: types.BoundedTensorSpec,
input_network: Optional[types.Network] = None,
name: Optional[Text] = 'InputNetworkConstraint',
):
"""Creates a constraint using an input network.
Args:
time_step_spec: A `TimeStep` s... | Creates a constraint using an input network.
Args:
time_step_spec: A `TimeStep` spec of the expected time_steps.
action_spec: A nest of `BoundedTensorSpec` representing the actions.
input_network: An instance of `tf_agents.network.Network` used to provide
estimates of action feasibility.
... | __init__ | python | tensorflow/agents | tf_agents/bandits/policies/constraints.py | https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/policies/constraints.py | Apache-2.0 |
Retrieves code examples with their docstrings from the Quart repository, providing basic code samples but offering limited analytical value for understanding broader patterns or relationships in the dataset.