| code (stringlengths 66–870k) | docstring (stringlengths 19–26.7k) | func_name (stringlengths 1–138) | language (stringclasses 1 value) | repo (stringlengths 7–68) | path (stringlengths 5–324) | url (stringlengths 46–389) | license (stringclasses 7 values) |
|---|---|---|---|---|---|---|---|
def _compute_kl_constraint(self, all_samples, all_params, set_grad=True):
"""Compute KL divergence.
For each task, compute the KL divergence between the old policy
distribution and current policy distribution.
Args:
all_samples (list[list[_MAMLEpisodeBatch]]): Two
dimensional list of _MAMLEpisodeBatch of size
[meta_batch_size * (num_grad_updates + 1)]
all_params (list[dict]): A list of named parameter dictionaries.
Each dictionary contains key value pair of names (str) and
parameters (torch.Tensor).
set_grad (bool): Whether to enable gradient calculation or not.
Returns:
torch.Tensor: Calculated mean value of KL divergence.
"""
theta = dict(self._policy.named_parameters())
old_theta = dict(self._old_policy.named_parameters())
kls = []
for task_samples, task_params in zip(all_samples, all_params):
for i in range(self._num_grad_updates):
require_grad = i < self._num_grad_updates - 1 or set_grad
self._adapt(task_samples[i], set_grad=require_grad)
update_module_params(self._old_policy, task_params)
with torch.set_grad_enabled(set_grad):
# pylint: disable=protected-access
kl = self._inner_algo._compute_kl_constraint(
task_samples[-1].observations)
kls.append(kl)
update_module_params(self._policy, theta)
update_module_params(self._old_policy, old_theta)
return torch.stack(kls).mean() | Compute KL divergence.
For each task, compute the KL divergence between the old policy
distribution and current policy distribution.
Args:
all_samples (list[list[_MAMLEpisodeBatch]]): Two
dimensional list of _MAMLEpisodeBatch of size
[meta_batch_size * (num_grad_updates + 1)]
all_params (list[dict]): A list of named parameter dictionaries.
Each dictionary contains key value pair of names (str) and
parameters (torch.Tensor).
set_grad (bool): Whether to enable gradient calculation or not.
Returns:
torch.Tensor: Calculated mean value of KL divergence.
| _compute_kl_constraint | python | rlworkgroup/garage | src/garage/torch/algos/maml.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/maml.py | MIT |
def _compute_policy_entropy(self, task_samples):
"""Compute policy entropy.
Args:
task_samples (list[_MAMLEpisodeBatch]): Samples data for
one task.
Returns:
torch.Tensor: Computed entropy value.
"""
obs = torch.cat([samples.observations for samples in task_samples])
# pylint: disable=protected-access
entropies = self._inner_algo._compute_policy_entropy(obs)
return entropies.mean() | Compute policy entropy.
Args:
task_samples (list[_MAMLEpisodeBatch]): Samples data for
one task.
Returns:
torch.Tensor: Computed entropy value.
| _compute_policy_entropy | python | rlworkgroup/garage | src/garage/torch/algos/maml.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/maml.py | MIT |
def _process_samples(self, episodes):
"""Process sample data based on the collected paths.
Args:
episodes (EpisodeBatch): Collected batch of episodes.
Returns:
_MAMLEpisodeBatch: Processed samples data.
"""
paths = episodes.to_list()
for path in paths:
path['returns'] = discount_cumsum(
path['rewards'], self._inner_algo.discount).copy()
self._train_value_function(paths)
obs = torch.Tensor(episodes.padded_observations)
actions = torch.Tensor(episodes.padded_actions)
rewards = torch.Tensor(episodes.padded_rewards)
valids = torch.Tensor(episodes.lengths).int()
with torch.no_grad():
# pylint: disable=protected-access
baselines = self._inner_algo._value_function(obs)
return _MAMLEpisodeBatch(paths, obs, actions, rewards, valids,
baselines) | Process sample data based on the collected paths.
Args:
episodes (EpisodeBatch): Collected batch of episodes.
Returns:
_MAMLEpisodeBatch: Processed samples data.
| _process_samples | python | rlworkgroup/garage | src/garage/torch/algos/maml.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/maml.py | MIT |
def _log_performance(self, itr, all_samples, loss_before, loss_after,
kl_before, kl, policy_entropy):
"""Evaluate performance of this batch.
Args:
itr (int): Iteration number.
all_samples (list[list[_MAMLEpisodeBatch]]): Two
dimensional list of _MAMLEpisodeBatch of size
[meta_batch_size * (num_grad_updates + 1)]
loss_before (float): Loss before optimization step.
loss_after (float): Loss after optimization step.
kl_before (float): KL divergence before optimization step.
kl (float): KL divergence after optimization step.
policy_entropy (float): Policy entropy.
Returns:
float: The average return in last epoch cycle.
"""
tabular.record('Iteration', itr)
name_map = None
if hasattr(self._env, 'all_task_names'):
names = self._env.all_task_names
name_map = dict(zip(names, names))
rtns = log_multitask_performance(
itr,
EpisodeBatch.from_list(
env_spec=self._env.spec,
paths=[
path for task_paths in all_samples
for path in task_paths[self._num_grad_updates].paths
]),
discount=self._inner_algo.discount,
name_map=name_map)
with tabular.prefix(self._policy.name + '/'):
tabular.record('LossBefore', loss_before)
tabular.record('LossAfter', loss_after)
tabular.record('dLoss', loss_before - loss_after)
tabular.record('KLBefore', kl_before)
tabular.record('KLAfter', kl)
tabular.record('Entropy', policy_entropy)
return np.mean(rtns) | Evaluate performance of this batch.
Args:
itr (int): Iteration number.
all_samples (list[list[_MAMLEpisodeBatch]]): Two
dimensional list of _MAMLEpisodeBatch of size
[meta_batch_size * (num_grad_updates + 1)]
loss_before (float): Loss before optimization step.
loss_after (float): Loss after optimization step.
kl_before (float): KL divergence before optimization step.
kl (float): KL divergence after optimization step.
policy_entropy (float): Policy entropy.
Returns:
float: The average return in last epoch cycle.
| _log_performance | python | rlworkgroup/garage | src/garage/torch/algos/maml.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/maml.py | MIT |
def adapt_policy(self, exploration_policy, exploration_episodes):
"""Adapt the policy by one gradient steps for a task.
Args:
exploration_policy (Policy): A policy which was returned from
get_exploration_policy(), and which generated
exploration_episodes by interacting with an environment.
The caller may not use this object after passing it into this
method.
exploration_episodes (EpisodeBatch): Episodes with which to adapt,
generated by exploration_policy exploring the environment.
Returns:
Policy: A policy adapted to the task represented by the
exploration_episodes.
"""
old_policy, self._policy = self._policy, exploration_policy
self._inner_algo.policy = exploration_policy
self._inner_optimizer.module = exploration_policy
batch_samples = self._process_samples(exploration_episodes)
self._adapt(batch_samples, set_grad=False)
self._policy = old_policy
self._inner_algo.policy = self._inner_optimizer.module = old_policy
return exploration_policy | Adapt the policy by one gradient steps for a task.
Args:
exploration_policy (Policy): A policy which was returned from
get_exploration_policy(), and which generated
exploration_episodes by interacting with an environment.
The caller may not use this object after passing it into this
method.
exploration_episodes (EpisodeBatch): Episodes with which to adapt,
generated by exploration_policy exploring the environment.
Returns:
Policy: A policy adapted to the task represented by the
exploration_episodes.
| adapt_policy | python | rlworkgroup/garage | src/garage/torch/algos/maml.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/maml.py | MIT |
def _get_log_alpha(self, samples_data):
"""Return the value of log_alpha.
Args:
samples_data (dict): Transitions(S,A,R,S') that are sampled from
the replay buffer. It should have the keys 'observation',
'action', 'reward', 'terminal', and 'next_observations'.
Note:
samples_data's entries should be torch.Tensors with the following
shapes:
observation: :math:`(N, O^*)`
action: :math:`(N, A^*)`
reward: :math:`(N, 1)`
terminal: :math:`(N, 1)`
next_observation: :math:`(N, O^*)`
Raises:
ValueError: If the number of tasks, num_tasks passed to
this algorithm doesn't match the length of the task
one-hot id in the observation vector.
Returns:
torch.Tensor: log_alpha. shape is (1, self.buffer_batch_size)
"""
obs = samples_data['observation']
log_alpha = self._log_alpha
one_hots = obs[:, -self._num_tasks:]
if (log_alpha.shape[0] != one_hots.shape[1]
or one_hots.shape[1] != self._num_tasks
or log_alpha.shape[0] != self._num_tasks):
raise ValueError(
'The number of tasks in the environment does '
'not match self._num_tasks. Are you sure that you passed '
'the correct number of tasks?')
ret = torch.mm(one_hots, log_alpha.unsqueeze(0).t()).squeeze()
return ret | Return the value of log_alpha.
Args:
samples_data (dict): Transitions(S,A,R,S') that are sampled from
the replay buffer. It should have the keys 'observation',
'action', 'reward', 'terminal', and 'next_observations'.
Note:
samples_data's entries should be torch.Tensors with the following
shapes:
observation: :math:`(N, O^*)`
action: :math:`(N, A^*)`
reward: :math:`(N, 1)`
terminal: :math:`(N, 1)`
next_observation: :math:`(N, O^*)`
Raises:
ValueError: If the number of tasks, num_tasks passed to
this algorithm doesn't match the length of the task
one-hot id in the observation vector.
Returns:
torch.Tensor: log_alpha. shape is (1, self.buffer_batch_size)
| _get_log_alpha | python | rlworkgroup/garage | src/garage/torch/algos/mtsac.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/mtsac.py | MIT |
def _evaluate_policy(self, epoch):
"""Evaluate the performance of the policy via deterministic sampling.
Statistics such as (average) discounted return and success rate are
recorded.
Args:
epoch (int): The current training epoch.
Returns:
float: The average return across self._num_evaluation_episodes
episodes
"""
eval_eps = []
for eval_env in self._eval_env:
eval_eps.append(
obtain_evaluation_episodes(
self.policy,
eval_env,
self._max_episode_length_eval,
num_eps=self._num_evaluation_episodes,
deterministic=self._use_deterministic_evaluation))
eval_eps = EpisodeBatch.concatenate(*eval_eps)
last_return = log_multitask_performance(epoch, eval_eps,
self._discount)
return last_return | Evaluate the performance of the policy via deterministic sampling.
Statistics such as (average) discounted return and success rate are
recorded.
Args:
epoch (int): The current training epoch.
Returns:
float: The average return across self._num_evaluation_episodes
episodes
| _evaluate_policy | python | rlworkgroup/garage | src/garage/torch/algos/mtsac.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/mtsac.py | MIT |
def to(self, device=None):
"""Put all the networks within the model on device.
Args:
device (str): ID of GPU or CPU.
"""
super().to(device)
if device is None:
device = global_device()
if not self._use_automatic_entropy_tuning:
self._log_alpha = torch.Tensor([self._fixed_alpha] *
self._num_tasks).log().to(device)
else:
self._log_alpha = torch.Tensor(
[self._initial_log_entropy] *
self._num_tasks).to(device).requires_grad_()
self._alpha_optimizer = self._optimizer([self._log_alpha],
lr=self._policy_lr) | Put all the networks within the model on device.
Args:
device (str): ID of GPU or CPU.
| to | python | rlworkgroup/garage | src/garage/torch/algos/mtsac.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/mtsac.py | MIT |
def __getstate__(self):
"""Object.__getstate__.
Returns:
dict: the state to be pickled for the instance.
"""
data = self.__dict__.copy()
del data['_replay_buffers']
del data['_context_replay_buffers']
return data | Object.__getstate__.
Returns:
dict: the state to be pickled for the instance.
| __getstate__ | python | rlworkgroup/garage | src/garage/torch/algos/pearl.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/pearl.py | MIT |
def __setstate__(self, state):
"""Object.__setstate__.
Args:
state (dict): unpickled state.
"""
self.__dict__.update(state)
self._replay_buffers = {
i: PathBuffer(self._replay_buffer_size)
for i in range(self._num_train_tasks)
}
self._context_replay_buffers = {
i: PathBuffer(self._replay_buffer_size)
for i in range(self._num_train_tasks)
}
self._is_resuming = True | Object.__setstate__.
Args:
state (dict): unpickled state.
| __setstate__ | python | rlworkgroup/garage | src/garage/torch/algos/pearl.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/pearl.py | MIT |
def train(self, trainer):
"""Obtain samples, train, and evaluate for each epoch.
Args:
trainer (Trainer): Gives the algorithm access to
:method:`Trainer.step_epochs()`, which provides services
such as snapshotting and sampler control.
"""
for _ in trainer.step_epochs():
epoch = trainer.step_itr / self._num_steps_per_epoch
# obtain initial set of samples from all train tasks
if epoch == 0 or self._is_resuming:
for idx in range(self._num_train_tasks):
self._task_idx = idx
self._obtain_samples(trainer, epoch,
self._num_initial_steps, np.inf)
self._is_resuming = False
# obtain samples from random tasks
for _ in range(self._num_tasks_sample):
idx = np.random.randint(self._num_train_tasks)
self._task_idx = idx
self._context_replay_buffers[idx].clear()
# obtain samples with z ~ prior
if self._num_steps_prior > 0:
self._obtain_samples(trainer, epoch, self._num_steps_prior,
np.inf)
# obtain samples with z ~ posterior
if self._num_steps_posterior > 0:
self._obtain_samples(trainer, epoch,
self._num_steps_posterior,
self._update_post_train)
# obtain extra samples for RL training but not the encoder
if self._num_extra_rl_steps_posterior > 0:
self._obtain_samples(trainer,
epoch,
self._num_extra_rl_steps_posterior,
self._update_post_train,
add_to_enc_buffer=False)
logger.log('Training...')
# sample train tasks and optimize networks
self._train_once()
trainer.step_itr += 1
logger.log('Evaluating...')
# evaluate
self._policy.reset_belief()
self._evaluator.evaluate(self) | Obtain samples, train, and evaluate for each epoch.
Args:
trainer (Trainer): Gives the algorithm access to
:method:`Trainer.step_epochs()`, which provides services
such as snapshotting and sampler control.
| train | python | rlworkgroup/garage | src/garage/torch/algos/pearl.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/pearl.py | MIT |
def _optimize_policy(self, indices):
"""Perform algorithm optimizing.
Args:
indices (list): Tasks used for training.
"""
num_tasks = len(indices)
context = self._sample_context(indices)
# clear context and reset belief of policy
self._policy.reset_belief(num_tasks=num_tasks)
# data shape is (task, batch, feat)
obs, actions, rewards, next_obs, terms = self._sample_data(indices)
policy_outputs, task_z = self._policy(obs, context)
new_actions, policy_mean, policy_log_std, log_pi = policy_outputs[:4]
# flatten out the task dimension
t, b, _ = obs.size()
obs = obs.view(t * b, -1)
actions = actions.view(t * b, -1)
next_obs = next_obs.view(t * b, -1)
# optimize qf and encoder networks
q1_pred = self._qf1(torch.cat([obs, actions], dim=1), task_z)
q2_pred = self._qf2(torch.cat([obs, actions], dim=1), task_z)
v_pred = self._vf(obs, task_z.detach())
with torch.no_grad():
target_v_values = self.target_vf(next_obs, task_z)
# KL constraint on z if probabilistic
zero_optim_grads(self.context_optimizer)
if self._use_information_bottleneck:
kl_div = self._policy.compute_kl_div()
kl_loss = self._kl_lambda * kl_div
kl_loss.backward(retain_graph=True)
zero_optim_grads(self.qf1_optimizer)
zero_optim_grads(self.qf2_optimizer)
rewards_flat = rewards.view(self._batch_size * num_tasks, -1)
rewards_flat = rewards_flat * self._reward_scale
terms_flat = terms.view(self._batch_size * num_tasks, -1)
q_target = rewards_flat + (
1. - terms_flat) * self._discount * target_v_values
qf_loss = torch.mean((q1_pred - q_target)**2) + torch.mean(
(q2_pred - q_target)**2)
qf_loss.backward()
self.qf1_optimizer.step()
self.qf2_optimizer.step()
self.context_optimizer.step()
# compute min Q on the new actions
q1 = self._qf1(torch.cat([obs, new_actions], dim=1), task_z.detach())
q2 = self._qf2(torch.cat([obs, new_actions], dim=1), task_z.detach())
min_q = torch.min(q1, q2)
# optimize vf
v_target = min_q - log_pi
vf_loss = self.vf_criterion(v_pred, v_target.detach())
zero_optim_grads(self.vf_optimizer)
vf_loss.backward()
self.vf_optimizer.step()
self._update_target_network()
# optimize policy
log_policy_target = min_q
policy_loss = (log_pi - log_policy_target).mean()
mean_reg_loss = self._policy_mean_reg_coeff * (policy_mean**2).mean()
std_reg_loss = self._policy_std_reg_coeff * (policy_log_std**2).mean()
pre_tanh_value = policy_outputs[-1]
pre_activation_reg_loss = self._policy_pre_activation_coeff * (
(pre_tanh_value**2).sum(dim=1).mean())
policy_reg_loss = (mean_reg_loss + std_reg_loss +
pre_activation_reg_loss)
policy_loss = policy_loss + policy_reg_loss
zero_optim_grads(self._policy_optimizer)
policy_loss.backward()
self._policy_optimizer.step() | Perform algorithm optimization.
Args:
indices (list): Tasks used for training.
| _optimize_policy | python | rlworkgroup/garage | src/garage/torch/algos/pearl.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/pearl.py | MIT |
def _obtain_samples(self,
trainer,
itr,
num_samples,
update_posterior_rate,
add_to_enc_buffer=True):
"""Obtain samples.
Args:
trainer (Trainer): Trainer.
itr (int): Index of iteration (epoch).
num_samples (int): Number of samples to obtain.
update_posterior_rate (int): How often (in episodes) to infer
posterior of policy.
add_to_enc_buffer (bool): Whether or not to add samples to encoder
buffer.
"""
self._policy.reset_belief()
total_samples = 0
if update_posterior_rate != np.inf:
num_samples_per_batch = (update_posterior_rate *
self.max_episode_length)
else:
num_samples_per_batch = num_samples
while total_samples < num_samples:
paths = trainer.obtain_samples(itr, num_samples_per_batch,
self._policy,
self._env[self._task_idx])
total_samples += sum([len(path['rewards']) for path in paths])
for path in paths:
p = {
'observations':
path['observations'],
'actions':
path['actions'],
'rewards':
path['rewards'].reshape(-1, 1),
'next_observations':
path['next_observations'],
'dones':
np.array([
step_type == StepType.TERMINAL
for step_type in path['step_types']
]).reshape(-1, 1)
}
self._replay_buffers[self._task_idx].add_path(p)
if add_to_enc_buffer:
self._context_replay_buffers[self._task_idx].add_path(p)
if update_posterior_rate != np.inf:
context = self._sample_context(self._task_idx)
self._policy.infer_posterior(context) | Obtain samples.
Args:
trainer (Trainer): Trainer.
itr (int): Index of iteration (epoch).
num_samples (int): Number of samples to obtain.
update_posterior_rate (int): How often (in episodes) to infer
posterior of policy.
add_to_enc_buffer (bool): Whether or not to add samples to encoder
buffer.
| _obtain_samples | python | rlworkgroup/garage | src/garage/torch/algos/pearl.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/pearl.py | MIT |
def _sample_data(self, indices):
"""Sample batch of training data from a list of tasks.
Args:
indices (list): List of task indices to sample from.
Returns:
torch.Tensor: Observations, with shape :math:`(X, N, O^*)` where X
is the number of tasks. N is batch size.
torch.Tensor: Actions, with shape :math:`(X, N, A^*)`.
torch.Tensor: Rewards, with shape :math:`(X, N, 1)`.
torch.Tensor: Next observations, with shape :math:`(X, N, O^*)`.
torch.Tensor: Dones, with shape :math:`(X, N, 1)`.
"""
# transitions sampled randomly from replay buffer
initialized = False
for idx in indices:
batch = self._replay_buffers[idx].sample_transitions(
self._batch_size)
if not initialized:
o = batch['observations'][np.newaxis]
a = batch['actions'][np.newaxis]
r = batch['rewards'][np.newaxis]
no = batch['next_observations'][np.newaxis]
d = batch['dones'][np.newaxis]
initialized = True
else:
o = np.vstack((o, batch['observations'][np.newaxis]))
a = np.vstack((a, batch['actions'][np.newaxis]))
r = np.vstack((r, batch['rewards'][np.newaxis]))
no = np.vstack((no, batch['next_observations'][np.newaxis]))
d = np.vstack((d, batch['dones'][np.newaxis]))
o = np_to_torch(o)
a = np_to_torch(a)
r = np_to_torch(r)
no = np_to_torch(no)
d = np_to_torch(d)
return o, a, r, no, d | Sample batch of training data from a list of tasks.
Args:
indices (list): List of task indices to sample from.
Returns:
torch.Tensor: Observations, with shape :math:`(X, N, O^*)` where X
is the number of tasks. N is batch size.
torch.Tensor: Actions, with shape :math:`(X, N, A^*)`.
torch.Tensor: Rewards, with shape :math:`(X, N, 1)`.
torch.Tensor: Next observations, with shape :math:`(X, N, O^*)`.
torch.Tensor: Dones, with shape :math:`(X, N, 1)`.
| _sample_data | python | rlworkgroup/garage | src/garage/torch/algos/pearl.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/pearl.py | MIT |
def _sample_context(self, indices):
"""Sample batch of context from a list of tasks.
Args:
indices (list): List of task indices to sample from.
Returns:
torch.Tensor: Context data, with shape :math:`(X, N, C)`. X is the
number of tasks. N is batch size. C is the combined size of
observation, action, reward, and next observation if next
observation is used in context. Otherwise, C is the combined
size of observation, action, and reward.
"""
# make method work given a single task index
if not hasattr(indices, '__iter__'):
indices = [indices]
initialized = False
for idx in indices:
batch = self._context_replay_buffers[idx].sample_transitions(
self._embedding_batch_size)
o = batch['observations']
a = batch['actions']
r = batch['rewards']
context = np.hstack((o, a, r))
if self._use_next_obs_in_context:
context = np.hstack((context, batch['next_observations']))
if not initialized:
final_context = context[np.newaxis]
initialized = True
else:
final_context = np.vstack((final_context, context[np.newaxis]))
final_context = np_to_torch(final_context)
if len(indices) == 1:
final_context = final_context.unsqueeze(0)
return final_context | Sample batch of context from a list of tasks.
Args:
indices (list): List of task indices to sample from.
Returns:
torch.Tensor: Context data, with shape :math:`(X, N, C)`. X is the
number of tasks. N is batch size. C is the combined size of
observation, action, reward, and next observation if next
observation is used in context. Otherwise, C is the combined
size of observation, action, and reward.
| _sample_context | python | rlworkgroup/garage | src/garage/torch/algos/pearl.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/pearl.py | MIT |
def _update_target_network(self):
"""Update parameters in the target vf network."""
for target_param, param in zip(self.target_vf.parameters(),
self._vf.parameters()):
target_param.data.copy_(target_param.data *
(1.0 - self._soft_target_tau) +
param.data * self._soft_target_tau) | Update parameters in the target vf network. | _update_target_network | python | rlworkgroup/garage | src/garage/torch/algos/pearl.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/pearl.py | MIT |
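This is the usual Polyak (soft target) update, `theta_target <- (1 - tau) * theta_target + tau * theta`. A generic in-place sketch of the same rule:

```python
import torch

def soft_update(target_net, source_net, tau):
    """Polyak-average target parameters toward the source network, in place."""
    with torch.no_grad():
        for t_param, param in zip(target_net.parameters(),
                                  source_net.parameters()):
            t_param.data.mul_(1.0 - tau).add_(param.data, alpha=tau)
```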
def networks(self):
"""Return all the networks within the model.
Returns:
list: A list of networks.
"""
return self._policy.networks + [self._policy] + [
self._qf1, self._qf2, self._vf, self.target_vf
] | Return all the networks within the model.
Returns:
list: A list of networks.
| networks | python | rlworkgroup/garage | src/garage/torch/algos/pearl.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/pearl.py | MIT |
def adapt_policy(self, exploration_policy, exploration_episodes):
"""Produce a policy adapted for a task.
Args:
exploration_policy (Policy): A policy which was returned from
get_exploration_policy(), and which generated
exploration_episodes by interacting with an environment.
The caller may not use this object after passing it into this
method.
exploration_episodes (EpisodeBatch): Episodes to which to adapt,
generated by exploration_policy exploring the
environment.
Returns:
Policy: A policy adapted to the task represented by the
exploration_episodes.
"""
total_steps = sum(exploration_episodes.lengths)
o = exploration_episodes.observations
a = exploration_episodes.actions
r = exploration_episodes.rewards.reshape(total_steps, 1)
ctxt = np.hstack((o, a, r)).reshape(1, total_steps, -1)
context = np_to_torch(ctxt)
self._policy.infer_posterior(context)
return self._policy | Produce a policy adapted for a task.
Args:
exploration_policy (Policy): A policy which was returned from
get_exploration_policy(), and which generated
exploration_episodes by interacting with an environment.
The caller may not use this object after passing it into this
method.
exploration_episodes (EpisodeBatch): Episodes to which to adapt,
generated by exploration_policy exploring the
environment.
Returns:
Policy: A policy adapted to the task represented by the
exploration_episodes.
| adapt_policy | python | rlworkgroup/garage | src/garage/torch/algos/pearl.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/pearl.py | MIT |
def to(self, device=None):
"""Put all the networks within the model on device.
Args:
device (str): ID of GPU or CPU.
"""
device = device or global_device()
for net in self.networks:
net.to(device) | Put all the networks within the model on device.
Args:
device (str): ID of GPU or CPU.
| to | python | rlworkgroup/garage | src/garage/torch/algos/pearl.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/pearl.py | MIT |
def augment_env_spec(cls, env_spec, latent_dim):
"""Augment environment by a size of latent dimension.
Args:
env_spec (EnvSpec): Environment specs to be augmented.
latent_dim (int): Latent dimension.
Returns:
EnvSpec: Augmented environment specs.
"""
obs_dim = int(np.prod(env_spec.observation_space.shape))
action_dim = int(np.prod(env_spec.action_space.shape))
aug_obs = akro.Box(low=-1,
high=1,
shape=(obs_dim + latent_dim, ),
dtype=np.float32)
aug_act = akro.Box(low=-1,
high=1,
shape=(action_dim, ),
dtype=np.float32)
return EnvSpec(aug_obs, aug_act) | Augment environment by a size of latent dimension.
Args:
env_spec (EnvSpec): Environment specs to be augmented.
latent_dim (int): Latent dimension.
Returns:
EnvSpec: Augmented environment specs.
| augment_env_spec | python | rlworkgroup/garage | src/garage/torch/algos/pearl.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/pearl.py | MIT |
def get_env_spec(cls, env_spec, latent_dim, module):
"""Get environment specs of encoder with latent dimension.
Args:
env_spec (EnvSpec): Environment specification.
latent_dim (int): Latent dimension.
module (str): Module to get environment specs for.
Returns:
InOutSpec or EnvSpec: Module environment specs with latent dimension.
"""
obs_dim = int(np.prod(env_spec.observation_space.shape))
action_dim = int(np.prod(env_spec.action_space.shape))
if module == 'encoder':
in_dim = obs_dim + action_dim + 1
out_dim = latent_dim * 2
elif module == 'vf':
in_dim = obs_dim
out_dim = latent_dim
in_space = akro.Box(low=-1, high=1, shape=(in_dim, ), dtype=np.float32)
out_space = akro.Box(low=-1,
high=1,
shape=(out_dim, ),
dtype=np.float32)
if module == 'encoder':
spec = InOutSpec(in_space, out_space)
elif module == 'vf':
spec = EnvSpec(in_space, out_space)
return spec | Get environment specs of encoder with latent dimension.
Args:
env_spec (EnvSpec): Environment specification.
latent_dim (int): Latent dimension.
module (str): Module to get environment specs for.
Returns:
InOutSpec or EnvSpec: Module environment specs with latent dimension.
| get_env_spec | python | rlworkgroup/garage | src/garage/torch/algos/pearl.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/pearl.py | MIT |
def step_episode(self):
"""Take a single time-step in the current episode.
Returns:
bool: True iff the episode is done, either due to the environment
indicating termination or due to reaching `max_episode_length`.
"""
if self._eps_length < self._max_episode_length:
a, agent_info = self.agent.get_action(self._prev_obs)
if self._deterministic:
a = agent_info['mean']
es = self.env.step(a)
self._observations.append(self._prev_obs)
self._env_steps.append(es)
for k, v in agent_info.items():
self._agent_infos[k].append(v)
self._eps_length += 1
if self._accum_context:
s = TimeStep.from_env_step(env_step=es,
last_observation=self._prev_obs,
agent_info=agent_info,
episode_info=self._episode_info)
self.agent.update_context(s)
if not es.last:
self._prev_obs = es.observation
return False
self._lengths.append(self._eps_length)
self._last_observations.append(self._prev_obs)
return True | Take a single time-step in the current episode.
Returns:
bool: True iff the episode is done, either due to the environment
indicating termination or due to reaching `max_episode_length`.
| step_episode | python | rlworkgroup/garage | src/garage/torch/algos/pearl.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/pearl.py | MIT |
def rollout(self):
"""Sample a single episode of the agent in the environment.
Returns:
EpisodeBatch: The collected episode.
"""
self.agent.sample_from_belief()
self.start_episode()
while not self.step_episode():
pass
return self.collect_episode() | Sample a single episode of the agent in the environment.
Returns:
EpisodeBatch: The collected episode.
| rollout | python | rlworkgroup/garage | src/garage/torch/algos/pearl.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/pearl.py | MIT |
def _compute_objective(self, advantages, obs, actions, rewards):
r"""Compute objective value.
Args:
advantages (torch.Tensor): Advantage value at each step
with shape :math:`(N \dot [T], )`.
obs (torch.Tensor): Observation from the environment
with shape :math:`(N \dot [T], O*)`.
actions (torch.Tensor): Actions fed to the environment
with shape :math:`(N \dot [T], A*)`.
rewards (torch.Tensor): Acquired rewards
with shape :math:`(N \dot [T], )`.
Returns:
torch.Tensor: Calculated objective values
with shape :math:`(N \dot [T], )`.
"""
# Compute likelihood ratio of the new policy to the old policy
with torch.no_grad():
old_ll = self._old_policy(obs)[0].log_prob(actions)
new_ll = self.policy(obs)[0].log_prob(actions)
likelihood_ratio = (new_ll - old_ll).exp()
# Calculate surrogate
surrogate = likelihood_ratio * advantages
# Clip the likelihood ratio
likelihood_ratio_clip = torch.clamp(likelihood_ratio,
min=1 - self._lr_clip_range,
max=1 + self._lr_clip_range)
# Calculate the clipped surrogate
surrogate_clip = likelihood_ratio_clip * advantages
return torch.min(surrogate, surrogate_clip) | Compute objective value.
Args:
advantages (torch.Tensor): Advantage value at each step
with shape :math:`(N \dot [T], )`.
obs (torch.Tensor): Observation from the environment
with shape :math:`(N \dot [T], O*)`.
actions (torch.Tensor): Actions fed to the environment
with shape :math:`(N \dot [T], A*)`.
rewards (torch.Tensor): Acquired rewards
with shape :math:`(N \dot [T], )`.
Returns:
torch.Tensor: Calculated objective values
with shape :math:`(N \dot [T], )`.
| _compute_objective | python | rlworkgroup/garage | src/garage/torch/algos/ppo.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/ppo.py | MIT |
def train(self, trainer):
"""Obtain samplers and start actual training for each epoch.
Args:
trainer (Trainer): Gives the algorithm access to
:method:`~Trainer.step_epochs()`, which provides services
such as snapshotting and sampler control.
Returns:
float: The average return in last epoch cycle.
"""
if not self._eval_env:
self._eval_env = trainer.get_env_copy()
last_return = None
for _ in trainer.step_epochs():
for _ in range(self._steps_per_epoch):
if not (self.replay_buffer.n_transitions_stored >=
self._min_buffer_size):
batch_size = int(self._min_buffer_size)
else:
batch_size = None
trainer.step_episode = trainer.obtain_samples(
trainer.step_itr, batch_size)
path_returns = []
for path in trainer.step_episode:
self.replay_buffer.add_path(
dict(observation=path['observations'],
action=path['actions'],
reward=path['rewards'].reshape(-1, 1),
next_observation=path['next_observations'],
terminal=np.array([
step_type == StepType.TERMINAL
for step_type in path['step_types']
]).reshape(-1, 1)))
path_returns.append(sum(path['rewards']))
assert len(path_returns) == len(trainer.step_episode)
self.episode_rewards.append(np.mean(path_returns))
for _ in range(self._gradient_steps):
policy_loss, qf1_loss, qf2_loss = self.train_once()
last_return = self._evaluate_policy(trainer.step_itr)
self._log_statistics(policy_loss, qf1_loss, qf2_loss)
tabular.record('TotalEnvSteps', trainer.total_env_steps)
trainer.step_itr += 1
return np.mean(last_return) | Obtain samplers and start actual training for each epoch.
Args:
trainer (Trainer): Gives the algorithm access to
:method:`~Trainer.step_epochs()`, which provides services
such as snapshotting and sampler control.
Returns:
float: The average return in last epoch cycle.
| train | python | rlworkgroup/garage | src/garage/torch/algos/sac.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/sac.py | MIT |
def train_once(self, itr=None, paths=None):
"""Complete 1 training iteration of SAC.
Args:
itr (int): Iteration number. This argument is deprecated.
paths (list[dict]): A list of collected paths.
This argument is deprecated.
Returns:
torch.Tensor: loss from actor/policy network after optimization.
torch.Tensor: loss from 1st q-function after optimization.
torch.Tensor: loss from 2nd q-function after optimization.
"""
del itr
del paths
if self.replay_buffer.n_transitions_stored >= self._min_buffer_size:
samples = self.replay_buffer.sample_transitions(
self._buffer_batch_size)
samples = as_torch_dict(samples)
policy_loss, qf1_loss, qf2_loss = self.optimize_policy(samples)
self._update_targets()
return policy_loss, qf1_loss, qf2_loss | Complete 1 training iteration of SAC.
Args:
itr (int): Iteration number. This argument is deprecated.
paths (list[dict]): A list of collected paths.
This argument is deprecated.
Returns:
torch.Tensor: loss from actor/policy network after optimization.
torch.Tensor: loss from 1st q-function after optimization.
torch.Tensor: loss from 2nd q-function after optimization.
| train_once | python | rlworkgroup/garage | src/garage/torch/algos/sac.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/sac.py | MIT |
def _get_log_alpha(self, samples_data):
"""Return the value of log_alpha.
Args:
samples_data (dict): Transitions(S,A,R,S') that are sampled from
the replay buffer. It should have the keys 'observation',
'action', 'reward', 'terminal', and 'next_observations'.
This function exists in case there are versions of sac that need
access to a modified log_alpha, such as multi_task sac.
Note:
samples_data's entries should be torch.Tensors with the following
shapes:
observation: :math:`(N, O^*)`
action: :math:`(N, A^*)`
reward: :math:`(N, 1)`
terminal: :math:`(N, 1)`
next_observation: :math:`(N, O^*)`
Returns:
torch.Tensor: log_alpha
"""
del samples_data
log_alpha = self._log_alpha
return log_alpha | Return the value of log_alpha.
Args:
samples_data (dict): Transitions(S,A,R,S') that are sampled from
the replay buffer. It should have the keys 'observation',
'action', 'reward', 'terminal', and 'next_observations'.
This function exists in case there are versions of sac that need
access to a modified log_alpha, such as multi_task sac.
Note:
samples_data's entries should be torch.Tensors with the following
shapes:
observation: :math:`(N, O^*)`
action: :math:`(N, A^*)`
reward: :math:`(N, 1)`
terminal: :math:`(N, 1)`
next_observation: :math:`(N, O^*)`
Returns:
torch.Tensor: log_alpha
| _get_log_alpha | python | rlworkgroup/garage | src/garage/torch/algos/sac.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/sac.py | MIT |
def _temperature_objective(self, log_pi, samples_data):
"""Compute the temperature/alpha coefficient loss.
Args:
log_pi (torch.Tensor): Log probability of actions that are sampled
from the replay buffer. Shape is (1, buffer_batch_size).
samples_data (dict): Transitions(S,A,R,S') that are sampled from
the replay buffer. It should have the keys 'observation',
'action', 'reward', 'terminal', and 'next_observations'.
Note:
samples_data's entries should be torch.Tensors with the following
shapes:
observation: :math:`(N, O^*)`
action: :math:`(N, A^*)`
reward: :math:`(N, 1)`
terminal: :math:`(N, 1)`
next_observation: :math:`(N, O^*)`
Returns:
torch.Tensor: the temperature/alpha coefficient loss.
"""
alpha_loss = 0
if self._use_automatic_entropy_tuning:
alpha_loss = (-(self._get_log_alpha(samples_data)) *
(log_pi.detach() + self._target_entropy)).mean()
return alpha_loss | Compute the temperature/alpha coefficient loss.
Args:
log_pi (torch.Tensor): Log probability of actions that are sampled
from the replay buffer. Shape is (1, buffer_batch_size).
samples_data (dict): Transitions(S,A,R,S') that are sampled from
the replay buffer. It should have the keys 'observation',
'action', 'reward', 'terminal', and 'next_observations'.
Note:
samples_data's entries should be torch.Tensors with the following
shapes:
observation: :math:`(N, O^*)`
action: :math:`(N, A^*)`
reward: :math:`(N, 1)`
terminal: :math:`(N, 1)`
next_observation: :math:`(N, O^*)`
Returns:
torch.Tensor: the temperature/alpha coefficient loss.
| _temperature_objective | python | rlworkgroup/garage | src/garage/torch/algos/sac.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/sac.py | MIT |
def _actor_objective(self, samples_data, new_actions, log_pi_new_actions):
"""Compute the Policy/Actor loss.
Args:
samples_data (dict): Transitions(S,A,R,S') that are sampled from
the replay buffer. It should have the keys 'observation',
'action', 'reward', 'terminal', and 'next_observations'.
new_actions (torch.Tensor): Actions resampled from the policy
based on the observations, obs, which were sampled from the
replay buffer. Shape is (action_dim, buffer_batch_size).
log_pi_new_actions (torch.Tensor): Log probability of the new
actions on the TanhNormal distributions that they were sampled
from. Shape is (1, buffer_batch_size).
Note:
samples_data's entries should be torch.Tensors with the following
shapes:
observation: :math:`(N, O^*)`
action: :math:`(N, A^*)`
reward: :math:`(N, 1)`
terminal: :math:`(N, 1)`
next_observation: :math:`(N, O^*)`
Returns:
torch.Tensor: loss from the Policy/Actor.
"""
obs = samples_data['observation']
with torch.no_grad():
alpha = self._get_log_alpha(samples_data).exp()
min_q_new_actions = torch.min(self._qf1(obs, new_actions),
self._qf2(obs, new_actions))
policy_objective = ((alpha * log_pi_new_actions) -
min_q_new_actions.flatten()).mean()
return policy_objective | Compute the Policy/Actor loss.
Args:
samples_data (dict): Transitions(S,A,R,S') that are sampled from
the replay buffer. It should have the keys 'observation',
'action', 'reward', 'terminal', and 'next_observations'.
new_actions (torch.Tensor): Actions resampled from the policy
based on the observations, obs, which were sampled from the
replay buffer. Shape is (action_dim, buffer_batch_size).
log_pi_new_actions (torch.Tensor): Log probability of the new
actions on the TanhNormal distributions that they were sampled
from. Shape is (1, buffer_batch_size).
Note:
samples_data's entries should be torch.Tensors with the following
shapes:
observation: :math:`(N, O^*)`
action: :math:`(N, A^*)`
reward: :math:`(N, 1)`
terminal: :math:`(N, 1)`
next_observation: :math:`(N, O^*)`
Returns:
torch.Tensor: loss from the Policy/Actor.
| _actor_objective | python | rlworkgroup/garage | src/garage/torch/algos/sac.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/sac.py | MIT |
def _critic_objective(self, samples_data):
"""Compute the Q-function/critic loss.
Args:
samples_data (dict): Transitions(S,A,R,S') that are sampled from
the replay buffer. It should have the keys 'observation',
'action', 'reward', 'terminal', and 'next_observations'.
Note:
samples_data's entries should be torch.Tensors with the following
shapes:
observation: :math:`(N, O^*)`
action: :math:`(N, A^*)`
reward: :math:`(N, 1)`
terminal: :math:`(N, 1)`
next_observation: :math:`(N, O^*)`
Returns:
torch.Tensor: loss from 1st q-function after optimization.
torch.Tensor: loss from 2nd q-function after optimization.
"""
obs = samples_data['observation']
actions = samples_data['action']
rewards = samples_data['reward'].flatten()
terminals = samples_data['terminal'].flatten()
next_obs = samples_data['next_observation']
with torch.no_grad():
alpha = self._get_log_alpha(samples_data).exp()
q1_pred = self._qf1(obs, actions)
q2_pred = self._qf2(obs, actions)
new_next_actions_dist = self.policy(next_obs)[0]
new_next_actions_pre_tanh, new_next_actions = (
new_next_actions_dist.rsample_with_pre_tanh_value())
new_log_pi = new_next_actions_dist.log_prob(
value=new_next_actions, pre_tanh_value=new_next_actions_pre_tanh)
target_q_values = torch.min(
self._target_qf1(next_obs, new_next_actions),
self._target_qf2(
next_obs, new_next_actions)).flatten() - (alpha * new_log_pi)
with torch.no_grad():
q_target = rewards * self._reward_scale + (
1. - terminals) * self._discount * target_q_values
qf1_loss = F.mse_loss(q1_pred.flatten(), q_target)
qf2_loss = F.mse_loss(q2_pred.flatten(), q_target)
return qf1_loss, qf2_loss | Compute the Q-function/critic loss.
Args:
samples_data (dict): Transitions(S,A,R,S') that are sampled from
the replay buffer. It should have the keys 'observation',
'action', 'reward', 'terminal', and 'next_observations'.
Note:
samples_data's entries should be torch.Tensors with the following
shapes:
observation: :math:`(N, O^*)`
action: :math:`(N, A^*)`
reward: :math:`(N, 1)`
terminal: :math:`(N, 1)`
next_observation: :math:`(N, O^*)`
Returns:
torch.Tensor: loss from 1st q-function after optimization.
torch.Tensor: loss from 2nd q-function after optimization.
| _critic_objective | python | rlworkgroup/garage | src/garage/torch/algos/sac.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/sac.py | MIT |
def _caps_regularization_objective(self, action_dists, samples_data):
"""Compute the spatial and temporal regularization loss as in CAPS.
Args:
samples_data (dict): Transitions(S,A,R,S') that are sampled from
the replay buffer. It should have the keys 'observation',
'action', 'reward', 'terminal', and 'next_observations'.
action_dists (torch.distribution.Distribution): Distributions
returned from the policy after feeding through observations.
Returns:
torch.Tensor: combined regularization loss
"""
# torch.tensor is callable and the recommended way to create a scalar
# tensor
# pylint: disable=not-callable
if self._temporal_regularization_factor:
next_action_dists = self.policy(
samples_data['next_observation'])[0]
temporal_loss = self._temporal_regularization_factor * torch.mean(
torch.cdist(action_dists.mean, next_action_dists.mean, p=2))
else:
temporal_loss = torch.tensor(0.)
if self._spatial_regularization_factor:
obs = samples_data['observation']
noisy_action_dists = self.policy(
obs + self._spatial_regularization_dist.sample(obs.shape))[0]
spatial_loss = self._spatial_regularization_factor * torch.mean(
torch.cdist(action_dists.mean, noisy_action_dists.mean, p=2))
else:
spatial_loss = torch.tensor(0.)
return temporal_loss + spatial_loss | Compute the spatial and temporal regularization loss as in CAPS.
Args:
samples_data (dict): Transitions(S,A,R,S') that are sampled from
the replay buffer. It should have the keys 'observation',
'action', 'reward', 'terminal', and 'next_observations'.
action_dists (torch.distribution.Distribution): Distributions
returned from the policy after feeding through observations.
Returns:
torch.Tensor: combined regularization loss
| _caps_regularization_objective | python | rlworkgroup/garage | src/garage/torch/algos/sac.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/sac.py | MIT |
def _update_targets(self):
"""Update parameters in the target q-functions."""
target_qfs = [self._target_qf1, self._target_qf2]
qfs = [self._qf1, self._qf2]
for target_qf, qf in zip(target_qfs, qfs):
for t_param, param in zip(target_qf.parameters(), qf.parameters()):
t_param.data.copy_(t_param.data * (1.0 - self._tau) +
param.data * self._tau) | Update parameters in the target q-functions. | _update_targets | python | rlworkgroup/garage | src/garage/torch/algos/sac.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/sac.py | MIT |
def optimize_policy(self, samples_data):
"""Optimize the policy q_functions, and temperature coefficient.
Args:
samples_data (dict): Transitions(S,A,R,S') that are sampled from
the replay buffer. It should have the keys 'observation',
'action', 'reward', 'terminal', and 'next_observations'.
Note:
samples_data's entries should be torch.Tensors with the following
shapes:
observation: :math:`(N, O^*)`
action: :math:`(N, A^*)`
reward: :math:`(N, 1)`
terminal: :math:`(N, 1)`
next_observation: :math:`(N, O^*)`
Returns:
torch.Tensor: loss from actor/policy network after optimization.
torch.Tensor: loss from 1st q-function after optimization.
torch.Tensor: loss from 2nd q-function after optimization.
"""
obs = samples_data['observation']
qf1_loss, qf2_loss = self._critic_objective(samples_data)
zero_optim_grads(self._qf1_optimizer)
qf1_loss.backward()
self._qf1_optimizer.step()
zero_optim_grads(self._qf2_optimizer)
qf2_loss.backward()
self._qf2_optimizer.step()
action_dists = self.policy(obs)[0]
new_actions_pre_tanh, new_actions = (
action_dists.rsample_with_pre_tanh_value())
log_pi_new_actions = action_dists.log_prob(
value=new_actions, pre_tanh_value=new_actions_pre_tanh)
policy_loss = self._actor_objective(samples_data, new_actions,
log_pi_new_actions)
policy_loss += self._caps_regularization_objective(
action_dists, samples_data)
zero_optim_grads(self._policy_optimizer)
policy_loss.backward()
self._policy_optimizer.step()
if self._use_automatic_entropy_tuning:
alpha_loss = self._temperature_objective(log_pi_new_actions,
samples_data)
zero_optim_grads(self._alpha_optimizer)
alpha_loss.backward()
self._alpha_optimizer.step()
return policy_loss, qf1_loss, qf2_loss | Optimize the policy, q-functions, and temperature coefficient.
Args:
samples_data (dict): Transitions(S,A,R,S') that are sampled from
the replay buffer. It should have the keys 'observation',
'action', 'reward', 'terminal', and 'next_observations'.
Note:
samples_data's entries should be torch.Tensors with the following
shapes:
observation: :math:`(N, O^*)`
action: :math:`(N, A^*)`
reward: :math:`(N, 1)`
terminal: :math:`(N, 1)`
next_observation: :math:`(N, O^*)`
Returns:
torch.Tensor: loss from actor/policy network after optimization.
torch.Tensor: loss from 1st q-function after optimization.
torch.Tensor: loss from 2nd q-function after optimization.
| optimize_policy | python | rlworkgroup/garage | src/garage/torch/algos/sac.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/sac.py | MIT |
def _evaluate_policy(self, epoch):
"""Evaluate the performance of the policy via deterministic sampling.
Statistics such as (average) discounted return and success rate are
recorded.
Args:
epoch (int): The current training epoch.
Returns:
float: The average return across self._num_evaluation_episodes
episodes
"""
eval_episodes = obtain_evaluation_episodes(
self.policy,
self._eval_env,
self._max_episode_length_eval,
num_eps=self._num_evaluation_episodes,
deterministic=self._use_deterministic_evaluation)
last_return = log_performance(epoch,
eval_episodes,
discount=self._discount)
return last_return | Evaluate the performance of the policy via deterministic sampling.
Statistics such as (average) discounted return and success rate are
recorded.
Args:
epoch (int): The current training epoch.
Returns:
float: The average return across self._num_evaluation_episodes
episodes
| _evaluate_policy | python | rlworkgroup/garage | src/garage/torch/algos/sac.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/sac.py | MIT |
def _log_statistics(self, policy_loss, qf1_loss, qf2_loss):
"""Record training statistics to dowel such as losses and returns.
Args:
policy_loss (torch.Tensor): loss from actor/policy network.
qf1_loss (torch.Tensor): loss from 1st qf/critic network.
qf2_loss (torch.Tensor): loss from 2nd qf/critic network.
"""
with torch.no_grad():
tabular.record('AlphaTemperature/mean',
self._log_alpha.exp().mean().item())
tabular.record('Policy/Loss', policy_loss.item())
tabular.record('QF/Qf1Loss', float(qf1_loss))
tabular.record('QF/Qf2Loss', float(qf2_loss))
tabular.record('ReplayBuffer/buffer_size',
self.replay_buffer.n_transitions_stored)
tabular.record('Average/TrainAverageReturn',
np.mean(self.episode_rewards)) | Record training statistics to dowel such as losses and returns.
Args:
policy_loss (torch.Tensor): loss from actor/policy network.
qf1_loss (torch.Tensor): loss from 1st qf/critic network.
qf2_loss (torch.Tensor): loss from 2nd qf/critic network.
| _log_statistics | python | rlworkgroup/garage | src/garage/torch/algos/sac.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/sac.py | MIT |
def networks(self):
"""Return all the networks within the model.
Returns:
list: A list of networks.
"""
return [
self.policy, self._qf1, self._qf2, self._target_qf1,
self._target_qf2
] | Return all the networks within the model.
Returns:
list: A list of networks.
| networks | python | rlworkgroup/garage | src/garage/torch/algos/sac.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/sac.py | MIT |
def to(self, device=None):
"""Put all the networks within the model on device.
Args:
device (str): ID of GPU or CPU.
"""
if device is None:
device = global_device()
for net in self.networks:
net.to(device)
if not self._use_automatic_entropy_tuning:
self._log_alpha = list_to_tensor([self._fixed_alpha
]).log().to(device)
else:
self._log_alpha = self._log_alpha.detach().to(
device).requires_grad_()
self._alpha_optimizer = self._optimizer([self._log_alpha],
lr=self._policy_lr)
self._alpha_optimizer.load_state_dict(
state_dict_to(self._alpha_optimizer.state_dict(), device))
self._qf1_optimizer.load_state_dict(
state_dict_to(self._qf1_optimizer.state_dict(), device))
self._qf2_optimizer.load_state_dict(
state_dict_to(self._qf2_optimizer.state_dict(), device))
self._policy_optimizer.load_state_dict(
state_dict_to(self._policy_optimizer.state_dict(), device)) | Put all the networks within the model on device.
Args:
device (str): ID of GPU or CPU.
| to | python | rlworkgroup/garage | src/garage/torch/algos/sac.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/sac.py | MIT |
def _get_action(self, action, noise_scale):
"""Select action based on policy.
Noise can be added to the action.
Args:
action (float): Action.
noise_scale (float): Noise scale added to action.
Returns:
float: Action selected by the policy.
"""
action += noise_scale * np.random.randn(self._action_dim)
# pylint: disable=invalid-unary-operand-type
return np.clip(action, -self._max_action, self._max_action) | Select action based on policy.
Noise can be added to the action.
Args:
action (float): Action.
noise_scale (float): Noise scale added to action.
Returns:
float: Action selected by the policy.
| _get_action | python | rlworkgroup/garage | src/garage/torch/algos/td3.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/td3.py | MIT |
def train(self, trainer):
"""Obtain samplers and start actual training for each epoch.
Args:
trainer (Trainer): Experiment trainer, which provides services
such as snapshotting and sampler control.
"""
if not self._eval_env:
self._eval_env = trainer.get_env_copy()
trainer.enable_logging = False
for _ in trainer.step_epochs():
for cycle in range(self._steps_per_epoch):
# Obtain transition batch and store it in the replay buffer.
# Get action randomly from environment within warm-up steps.
# Afterwards, get action from policy.
if self._uniform_random_policy and \
trainer.step_itr < self._start_steps:
trainer.step_episode = trainer.obtain_episodes(
trainer.step_itr,
agent_update=self._uniform_random_policy)
else:
trainer.step_episode = trainer.obtain_episodes(
trainer.step_itr, agent_update=self.exploration_policy)
self._replay_buffer.add_episode_batch(trainer.step_episode)
# Update after warm-up steps.
if trainer.total_env_steps >= self._update_after:
self._train_once(trainer.step_itr)
# Evaluate and log the results.
if (cycle == 0 and self._replay_buffer.n_transitions_stored >=
self._min_buffer_size):
trainer.enable_logging = True
eval_eps = self._evaluate_policy()
log_performance(trainer.step_itr,
trainer.step_episode,
eval_eps,
discount=self._discount,
prefix='Training')
log_performance(trainer.step_itr,
eval_eps,
discount=self._discount,
prefix='Evaluation')
trainer.step_itr += 1 | Obtain samplers and start actual training for each epoch.
Args:
trainer (Trainer): Experiment trainer, which provides services
such as snapshotting and sampler control.
| train | python | rlworkgroup/garage | src/garage/torch/algos/td3.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/td3.py | MIT |
def _train_once(self, itr):
"""Perform one iteration of training.
Args:
itr (int): Iteration number.
"""
for grad_step_timer in range(self._grad_steps_per_env_step):
if (self._replay_buffer.n_transitions_stored >=
self._min_buffer_size):
# Sample from buffer
samples = self._replay_buffer.sample_transitions(
self._buffer_batch_size)
samples = as_torch_dict(samples)
# Optimize
qf_loss, y, q, policy_loss = torch_to_np(
self._optimize_policy(samples, grad_step_timer))
self._episode_policy_losses.append(policy_loss)
self._episode_qf_losses.append(qf_loss)
self._epoch_ys.append(y)
self._epoch_qs.append(q)
if itr % self._steps_per_epoch == 0:
logger.log('Training finished')
epoch = itr // self._steps_per_epoch
if (self._replay_buffer.n_transitions_stored >=
self._min_buffer_size):
tabular.record('Epoch', epoch)
self._log_statistics() | Perform one iteration of training.
Args:
itr (int): Iteration number.
| _train_once | python | rlworkgroup/garage | src/garage/torch/algos/td3.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/td3.py | MIT |
def _optimize_policy(self, samples_data, grad_step_timer):
"""Perform algorithm optimization.
Args:
samples_data (dict): Processed batch data.
grad_step_timer (int): Index of the current gradient step, used
to delay actor updates relative to critic updates.
Returns:
float: Loss predicted by the q networks
(critic networks).
float: Q value (min) predicted by one of the
target q networks.
float: Q value (min) predicted by one of the
current q networks.
float: Loss predicted by the policy
(action network).
"""
rewards = samples_data['rewards'].to(global_device()).reshape(-1, 1)
terminals = samples_data['terminals'].to(global_device()).reshape(
-1, 1)
actions = samples_data['actions'].to(global_device())
observations = samples_data['observations'].to(global_device())
next_observations = samples_data['next_observations'].to(
global_device())
next_inputs = next_observations
inputs = observations
with torch.no_grad():
# Select action according to policy and add clipped noise
noise = (torch.randn_like(actions) * self._policy_noise).clamp(
-self._policy_noise_clip, self._policy_noise_clip)
next_actions = (self._target_policy(next_inputs) + noise).clamp(
-self._max_action, self._max_action)
# Compute the target Q value
target_Q1 = self._target_qf_1(next_inputs, next_actions)
target_Q2 = self._target_qf_2(next_inputs, next_actions)
target_q = torch.min(target_Q1, target_Q2)
target_Q = rewards * self._reward_scaling + (
1. - terminals) * self._discount * target_q
# Get current Q values
current_Q1 = self._qf_1(inputs, actions)
current_Q2 = self._qf_2(inputs, actions)
current_Q = torch.min(current_Q1, current_Q2)
# Compute critic loss
critic_loss = F.mse_loss(current_Q1, target_Q) + F.mse_loss(
current_Q2, target_Q)
# Optimize critic
zero_optim_grads(self._qf_optimizer_1)
zero_optim_grads(self._qf_optimizer_2)
critic_loss.backward()
self._qf_optimizer_1.step()
self._qf_optimizer_2.step()
# Delay policy updates
if grad_step_timer % self._update_actor_interval == 0:
# Compute actor loss
actions = self.policy(inputs)
self._actor_loss = -self._qf_1(inputs, actions).mean()
# Optimize actor
zero_optim_grads(self._policy_optimizer)
self._actor_loss.backward()
self._policy_optimizer.step()
# update target networks
self._update_network_parameters()
return (critic_loss.detach(), target_Q, current_Q.detach(),
self._actor_loss.detach()) | Perform algorithm optimization.
Args:
samples_data (dict): Processed batch data.
grad_step_timer (int): Iteration number of the gradient time
taken in the env.
Returns:
float: Loss predicted by the q networks
(critic networks).
float: Q value (min) predicted by one of the
target q networks.
float: Q value (min) predicted by one of the
current q networks.
float: Loss predicted by the policy
(action network).
| _optimize_policy | python | rlworkgroup/garage | src/garage/torch/algos/td3.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/td3.py | MIT |
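The clipped-noise step above is TD3's target-policy smoothing. A minimal sketch of just that step, with illustrative hyperparameter values (0.2, 0.5, and 1.0 are common defaults, not values taken from this file):

import torch

policy_noise, noise_clip, max_action = 0.2, 0.5, 1.0
actions = torch.zeros(4, 2)  # stand-in for target-policy actions
noise = (torch.randn_like(actions) * policy_noise).clamp(
    -noise_clip, noise_clip)
smoothed = (actions + noise).clamp(-max_action, max_action)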
def _evaluate_policy(self):
"""Evaluate the performance of the policy via deterministic rollouts.
Statistics such as (average) discounted return and success rate are
recorded.
Returns:
EpisodeBatch: Evaluation episodes, representing the best
current performance of the algorithm.
"""
return obtain_evaluation_episodes(
self.exploration_policy,
self._eval_env,
self._max_episode_length_eval,
num_eps=self._num_evaluation_episodes,
deterministic=self._use_deterministic_evaluation) | Evaluate the performance of the policy via deterministic rollouts.
Statistics such as (average) discounted return and success rate are
recorded.
Returns:
EpisodeBatch: Evaluation episodes, representing the best
current performance of the algorithm.
| _evaluate_policy | python | rlworkgroup/garage | src/garage/torch/algos/td3.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/td3.py | MIT |
def _update_network_parameters(self):
"""Update parameters in actor network and critic networks."""
soft_update_model(self._target_qf_1, self._qf_1, self._tau)
soft_update_model(self._target_qf_2, self._qf_2, self._tau)
soft_update_model(self._target_policy, self.policy, self._tau) | Update parameters in actor network and critic networks. | _update_network_parameters | python | rlworkgroup/garage | src/garage/torch/algos/td3.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/td3.py | MIT |
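soft_update_model is not defined in this file; a minimal sketch of the Polyak averaging rule it is assumed to implement, target <- (1 - tau) * target + tau * source, written by hand:

import torch

tau = 0.005
target, source = torch.nn.Linear(3, 3), torch.nn.Linear(3, 3)
with torch.no_grad():
    for t_param, s_param in zip(target.parameters(), source.parameters()):
        t_param.mul_(1.0 - tau).add_(tau * s_param)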
def _log_statistics(self):
"""Output training statistics to dowel such as losses and returns."""
tabular.record('Policy/AveragePolicyLoss',
np.mean(self._episode_policy_losses))
tabular.record('QFunction/AverageQFunctionLoss',
np.mean(self._episode_qf_losses))
tabular.record('QFunction/AverageQ', np.mean(self._epoch_qs))
tabular.record('QFunction/MaxQ', np.max(self._epoch_qs))
tabular.record('QFunction/AverageAbsQ',
np.mean(np.abs(self._epoch_qs)))
tabular.record('QFunction/AverageY', np.mean(self._epoch_ys))
tabular.record('QFunction/MaxY', np.max(self._epoch_ys))
tabular.record('QFunction/AverageAbsY',
np.mean(np.abs(self._epoch_ys))) | Output training statistics to dowel such as losses and returns. | _log_statistics | python | rlworkgroup/garage | src/garage/torch/algos/td3.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/td3.py | MIT |
def networks(self):
"""Return all the networks within the model.
Returns:
list: A list of networks.
"""
return [
self.policy, self._qf_1, self._qf_2, self._target_policy,
self._target_qf_1, self._target_qf_2
] | Return all the networks within the model.
Returns:
list: A list of networks.
| networks | python | rlworkgroup/garage | src/garage/torch/algos/td3.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/td3.py | MIT |
def to(self, device=None):
"""Put all the networks within the model on device.
Args:
device (str): ID of GPU or CPU.
"""
device = device or global_device()
for net in self.networks:
net.to(device) | Put all the networks within the model on device.
Args:
device (str): ID of GPU or CPU.
| to | python | rlworkgroup/garage | src/garage/torch/algos/td3.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/td3.py | MIT |
def _compute_objective(self, advantages, obs, actions, rewards):
r"""Compute objective value.
Args:
advantages (torch.Tensor): Advantage value at each step
with shape :math:`(N \dot [T], )`.
obs (torch.Tensor): Observation from the environment
with shape :math:`(N \dot [T], O*)`.
actions (torch.Tensor): Actions fed to the environment
with shape :math:`(N \dot [T], A*)`.
rewards (torch.Tensor): Acquired rewards
with shape :math:`(N \dot [T], )`.
Returns:
torch.Tensor: Calculated objective values
with shape :math:`(N \dot [T], )`.
"""
with torch.no_grad():
old_ll = self._old_policy(obs)[0].log_prob(actions)
new_ll = self.policy(obs)[0].log_prob(actions)
likelihood_ratio = (new_ll - old_ll).exp()
# Calculate surrogate
surrogate = likelihood_ratio * advantages
return surrogate | Compute objective value.
Args:
advantages (torch.Tensor): Advantage value at each step
with shape :math:`(N \dot [T], )`.
obs (torch.Tensor): Observation from the environment
with shape :math:`(N \dot [T], O*)`.
actions (torch.Tensor): Actions fed to the environment
with shape :math:`(N \dot [T], A*)`.
rewards (torch.Tensor): Acquired rewards
with shape :math:`(N \dot [T], )`.
Returns:
torch.Tensor: Calculated objective values
with shape :math:`(N \dot [T], )`.
| _compute_objective | python | rlworkgroup/garage | src/garage/torch/algos/trpo.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/trpo.py | MIT |
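The surrogate above in isolation: the likelihood ratio pi_new(a|s) / pi_old(a|s) is formed in log space for numerical stability, then exponentiated and weighted by the advantage. A self-contained sketch with made-up distributions and advantages:

import torch

old = torch.distributions.Normal(torch.zeros(3), torch.ones(3))
new = torch.distributions.Normal(0.1 * torch.ones(3), torch.ones(3))
actions = old.sample()
advantages = torch.tensor([1.0, -0.5, 2.0])
ratio = (new.log_prob(actions) - old.log_prob(actions)).exp()
surrogate = ratio * advantages  # maximized (negated to form a loss)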
def _train_policy(self, obs, actions, rewards, advantages):
r"""Train the policy.
Args:
obs (torch.Tensor): Observation from the environment
with shape :math:`(N, O*)`.
actions (torch.Tensor): Actions fed to the environment
with shape :math:`(N, A*)`.
rewards (torch.Tensor): Acquired rewards
with shape :math:`(N, )`.
advantages (torch.Tensor): Advantage value at each step
with shape :math:`(N, )`.
Returns:
torch.Tensor: Calculated mean scalar value of policy loss (float).
"""
# pylint: disable=protected-access
zero_optim_grads(self._policy_optimizer._optimizer)
loss = self._compute_loss_with_adv(obs, actions, rewards, advantages)
loss.backward()
self._policy_optimizer.step(
f_loss=lambda: self._compute_loss_with_adv(obs, actions, rewards,
advantages),
f_constraint=lambda: self._compute_kl_constraint(obs))
return loss | Train the policy.
Args:
obs (torch.Tensor): Observation from the environment
with shape :math:`(N, O*)`.
actions (torch.Tensor): Actions fed to the environment
with shape :math:`(N, A*)`.
rewards (torch.Tensor): Acquired rewards
with shape :math:`(N, )`.
advantages (torch.Tensor): Advantage value at each step
with shape :math:`(N, )`.
Returns:
torch.Tensor: Calculated mean scalar value of policy loss (float).
| _train_policy | python | rlworkgroup/garage | src/garage/torch/algos/trpo.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/trpo.py | MIT |
def _train_once(self, itr, eps):
"""Train the algorithm once.
Args:
itr (int): Iteration number.
eps (EpisodeBatch): A batch of collected paths.
Returns:
numpy.float64: Calculated mean value of undiscounted returns.
"""
obs = np_to_torch(eps.padded_observations)
rewards = np_to_torch(eps.padded_rewards)
returns = np_to_torch(
np.stack([
discount_cumsum(reward, self.discount)
for reward in eps.padded_rewards
]))
valids = eps.lengths
with torch.no_grad():
baselines = self._value_function(obs)
if self._maximum_entropy:
policy_entropies = self._compute_policy_entropy(obs)
rewards += self._policy_ent_coeff * policy_entropies
obs_flat = np_to_torch(eps.observations)
actions_flat = np_to_torch(eps.actions)
rewards_flat = np_to_torch(eps.rewards)
returns_flat = torch.cat(filter_valids(returns, valids))
advs_flat = self._compute_advantage(rewards, valids, baselines)
with torch.no_grad():
policy_loss_before = self._compute_loss_with_adv(
obs_flat, actions_flat, rewards_flat, advs_flat)
vf_loss_before = self._value_function.compute_loss(
obs_flat, returns_flat)
kl_before = self._compute_kl_constraint(obs)
self._train(obs_flat, actions_flat, rewards_flat, returns_flat,
advs_flat)
with torch.no_grad():
policy_loss_after = self._compute_loss_with_adv(
obs_flat, actions_flat, rewards_flat, advs_flat)
vf_loss_after = self._value_function.compute_loss(
obs_flat, returns_flat)
kl_after = self._compute_kl_constraint(obs)
policy_entropy = self._compute_policy_entropy(obs)
with tabular.prefix(self.policy.name):
tabular.record('/LossBefore', policy_loss_before.item())
tabular.record('/LossAfter', policy_loss_after.item())
tabular.record('/dLoss',
(policy_loss_before - policy_loss_after).item())
tabular.record('/KLBefore', kl_before.item())
tabular.record('/KL', kl_after.item())
tabular.record('/Entropy', policy_entropy.mean().item())
with tabular.prefix(self._value_function.name):
tabular.record('/LossBefore', vf_loss_before.item())
tabular.record('/LossAfter', vf_loss_after.item())
tabular.record('/dLoss',
vf_loss_before.item() - vf_loss_after.item())
self._old_policy.load_state_dict(self.policy.state_dict())
undiscounted_returns = log_performance(itr,
eps,
discount=self._discount)
return np.mean(undiscounted_returns) | Train the algorithm once.
Args:
itr (int): Iteration number.
eps (EpisodeBatch): A batch of collected paths.
Returns:
numpy.float64: Calculated mean value of undiscounted returns.
| _train_once | python | rlworkgroup/garage | src/garage/torch/algos/vpg.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/vpg.py | MIT |
def train(self, trainer):
"""Obtain samplers and start actual training for each epoch.
Args:
trainer (Trainer): Gives the algorithm access to
:meth:`~Trainer.step_epochs()`, which provides services
such as snapshotting and sampler control.
Returns:
float: The average return in last epoch cycle.
"""
last_return = None
for _ in trainer.step_epochs():
for _ in range(self._n_samples):
eps = trainer.obtain_episodes(trainer.step_itr)
last_return = self._train_once(trainer.step_itr, eps)
trainer.step_itr += 1
return last_return | Obtain samplers and start actual training for each epoch.
Args:
trainer (Trainer): Gives the algorithm access to
:meth:`~Trainer.step_epochs()`, which provides services
such as snapshotting and sampler control.
Returns:
float: The average return in last epoch cycle.
| train | python | rlworkgroup/garage | src/garage/torch/algos/vpg.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/vpg.py | MIT |
def _train(self, obs, actions, rewards, returns, advs):
r"""Train the policy and value function with minibatch.
Args:
obs (torch.Tensor): Observation from the environment with shape
:math:`(N, O*)`.
actions (torch.Tensor): Actions fed to the environment with shape
:math:`(N, A*)`.
rewards (torch.Tensor): Acquired rewards with shape :math:`(N, )`.
returns (torch.Tensor): Acquired returns with shape :math:`(N, )`.
advs (torch.Tensor): Advantage value at each step with shape
:math:`(N, )`.
"""
for dataset in self._policy_optimizer.get_minibatch(
obs, actions, rewards, advs):
self._train_policy(*dataset)
for dataset in self._vf_optimizer.get_minibatch(obs, returns):
self._train_value_function(*dataset) | Train the policy and value function with minibatch.
Args:
obs (torch.Tensor): Observation from the environment with shape
:math:`(N, O*)`.
actions (torch.Tensor): Actions fed to the environment with shape
:math:`(N, A*)`.
rewards (torch.Tensor): Acquired rewards with shape :math:`(N, )`.
returns (torch.Tensor): Acquired returns with shape :math:`(N, )`.
advs (torch.Tensor): Advantage value at each step with shape
:math:`(N, )`.
| _train | python | rlworkgroup/garage | src/garage/torch/algos/vpg.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/vpg.py | MIT |
def _train_policy(self, obs, actions, rewards, advantages):
r"""Train the policy.
Args:
obs (torch.Tensor): Observation from the environment
with shape :math:`(N, O*)`.
actions (torch.Tensor): Actions fed to the environment
with shape :math:`(N, A*)`.
rewards (torch.Tensor): Acquired rewards
with shape :math:`(N, )`.
advantages (torch.Tensor): Advantage value at each step
with shape :math:`(N, )`.
Returns:
torch.Tensor: Calculated mean scalar value of policy loss (float).
"""
# pylint: disable=protected-access
zero_optim_grads(self._policy_optimizer._optimizer)
loss = self._compute_loss_with_adv(obs, actions, rewards, advantages)
loss.backward()
self._policy_optimizer.step()
return loss | Train the policy.
Args:
obs (torch.Tensor): Observation from the environment
with shape :math:`(N, O*)`.
actions (torch.Tensor): Actions fed to the environment
with shape :math:`(N, A*)`.
rewards (torch.Tensor): Acquired rewards
with shape :math:`(N, )`.
advantages (torch.Tensor): Advantage value at each step
with shape :math:`(N, )`.
Returns:
torch.Tensor: Calculated mean scalar value of policy loss (float).
| _train_policy | python | rlworkgroup/garage | src/garage/torch/algos/vpg.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/vpg.py | MIT |
def _train_value_function(self, obs, returns):
r"""Train the value function.
Args:
obs (torch.Tensor): Observation from the environment
with shape :math:`(N, O*)`.
returns (torch.Tensor): Acquired returns
with shape :math:`(N, )`.
Returns:
torch.Tensor: Calculated mean scalar value of value function loss
(float).
"""
# pylint: disable=protected-access
zero_optim_grads(self._vf_optimizer._optimizer)
loss = self._value_function.compute_loss(obs, returns)
loss.backward()
self._vf_optimizer.step()
return loss | Train the value function.
Args:
obs (torch.Tensor): Observation from the environment
with shape :math:`(N, O*)`.
returns (torch.Tensor): Acquired returns
with shape :math:`(N, )`.
Returns:
torch.Tensor: Calculated mean scalar value of value function loss
(float).
| _train_value_function | python | rlworkgroup/garage | src/garage/torch/algos/vpg.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/vpg.py | MIT |
def _compute_loss(self, obs, actions, rewards, valids, baselines):
r"""Compute mean value of loss.
Notes: P is the maximum episode length (self.max_episode_length)
Args:
obs (torch.Tensor): Observation from the environment
with shape :math:`(N, P, O*)`.
actions (torch.Tensor): Actions fed to the environment
with shape :math:`(N, P, A*)`.
rewards (torch.Tensor): Acquired rewards
with shape :math:`(N, P)`.
valids (list[int]): Numbers of valid steps in each episode
baselines (torch.Tensor): Value function estimation at each step
with shape :math:`(N, P)`.
Returns:
torch.Tensor: Calculated negative mean scalar value of
objective (float).
"""
obs_flat = torch.cat(filter_valids(obs, valids))
actions_flat = torch.cat(filter_valids(actions, valids))
rewards_flat = torch.cat(filter_valids(rewards, valids))
advantages_flat = self._compute_advantage(rewards, valids, baselines)
return self._compute_loss_with_adv(obs_flat, actions_flat,
rewards_flat, advantages_flat) | Compute mean value of loss.
Notes: P is the maximum episode length (self.max_episode_length)
Args:
obs (torch.Tensor): Observation from the environment
with shape :math:`(N, P, O*)`.
actions (torch.Tensor): Actions fed to the environment
with shape :math:`(N, P, A*)`.
rewards (torch.Tensor): Acquired rewards
with shape :math:`(N, P)`.
valids (list[int]): Numbers of valid steps in each episode
baselines (torch.Tensor): Value function estimation at each step
with shape :math:`(N, P)`.
Returns:
torch.Tensor: Calculated negative mean scalar value of
objective (float).
| _compute_loss | python | rlworkgroup/garage | src/garage/torch/algos/vpg.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/vpg.py | MIT |
def _compute_loss_with_adv(self, obs, actions, rewards, advantages):
r"""Compute mean value of loss.
Args:
obs (torch.Tensor): Observation from the environment
with shape :math:`(N \dot [T], O*)`.
actions (torch.Tensor): Actions fed to the environment
with shape :math:`(N \dot [T], A*)`.
rewards (torch.Tensor): Acquired rewards
with shape :math:`(N \dot [T], )`.
advantages (torch.Tensor): Advantage value at each step
with shape :math:`(N \dot [T], )`.
Returns:
torch.Tensor: Calculated negative mean scalar value of objective.
"""
objectives = self._compute_objective(advantages, obs, actions, rewards)
if self._entropy_regularzied:
policy_entropies = self._compute_policy_entropy(obs)
objectives += self._policy_ent_coeff * policy_entropies
return -objectives.mean() | Compute mean value of loss.
Args:
obs (torch.Tensor): Observation from the environment
with shape :math:`(N \dot [T], O*)`.
actions (torch.Tensor): Actions fed to the environment
with shape :math:`(N \dot [T], A*)`.
rewards (torch.Tensor): Acquired rewards
with shape :math:`(N \dot [T], )`.
advantages (torch.Tensor): Advantage value at each step
with shape :math:`(N \dot [T], )`.
Returns:
torch.Tensor: Calculated negative mean scalar value of objective.
| _compute_loss_with_adv | python | rlworkgroup/garage | src/garage/torch/algos/vpg.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/vpg.py | MIT |
def _compute_advantage(self, rewards, valids, baselines):
r"""Compute mean value of loss.
Notes: P is the maximum episode length (self.max_episode_length)
Args:
rewards (torch.Tensor): Acquired rewards
with shape :math:`(N, P)`.
valids (list[int]): Numbers of valid steps in each episode
baselines (torch.Tensor): Value function estimation at each step
with shape :math:`(N, P)`.
Returns:
torch.Tensor: Calculated advantage values given rewards and
baselines with shape :math:`(N \dot [T], )`.
"""
advantages = compute_advantages(self._discount, self._gae_lambda,
self.max_episode_length, baselines,
rewards)
advantage_flat = torch.cat(filter_valids(advantages, valids))
if self._center_adv:
# Standardize advantages to zero mean and unit standard deviation.
means = advantage_flat.mean()
stds = advantage_flat.std()
advantage_flat = (advantage_flat - means) / (stds + 1e-8)
if self._positive_adv:
advantage_flat -= advantage_flat.min()
return advantage_flat | Compute mean value of loss.
Notes: P is the maximum episode length (self.max_episode_length)
Args:
rewards (torch.Tensor): Acquired rewards
with shape :math:`(N, P)`.
valids (list[int]): Numbers of valid steps in each episode
baselines (torch.Tensor): Value function estimation at each step
with shape :math:`(N, P)`.
Returns:
torch.Tensor: Calculated advantage values given rewards and
baselines with shape :math:`(N \dot [T], )`.
| _compute_advantage | python | rlworkgroup/garage | src/garage/torch/algos/vpg.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/vpg.py | MIT |
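The returns that feed these advantages come from discounted cumulative sums; garage's discount_cumsum computes them right to left. A quick check of its semantics (assuming it is importable from garage.np, as used elsewhere in this source):

import numpy as np
from garage.np import discount_cumsum

# discount_cumsum([r0, r1, r2], g) = [r0 + g*r1 + g^2*r2, r1 + g*r2, r2]
print(discount_cumsum(np.array([1., 1., 1.]), 0.9))  # [2.71 1.9  1.  ]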
def _compute_kl_constraint(self, obs):
r"""Compute KL divergence.
Compute the KL divergence between the old policy distribution and
current policy distribution.
Notes: P is the maximum episode length (self.max_episode_length)
Args:
obs (torch.Tensor): Observation from the environment
with shape :math:`(N, P, O*)`.
Returns:
torch.Tensor: Calculated mean scalar value of KL divergence
(float).
"""
with torch.no_grad():
old_dist = self._old_policy(obs)[0]
new_dist = self.policy(obs)[0]
kl_constraint = torch.distributions.kl.kl_divergence(
old_dist, new_dist)
return kl_constraint.mean() | Compute KL divergence.
Compute the KL divergence between the old policy distribution and
current policy distribution.
Notes: P is the maximum episode length (self.max_episode_length)
Args:
obs (torch.Tensor): Observation from the environment
with shape :math:`(N, P, O*)`.
Returns:
torch.Tensor: Calculated mean scalar value of KL divergence
(float).
| _compute_kl_constraint | python | rlworkgroup/garage | src/garage/torch/algos/vpg.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/vpg.py | MIT |
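The torch.distributions KL used above, in isolation. For diagonal Gaussians with equal unit variance the per-dimension KL is (mu1 - mu2)^2 / 2, so a 0.5 mean shift gives 0.125:

import torch

p = torch.distributions.Normal(torch.zeros(5), torch.ones(5))
q = torch.distributions.Normal(0.5 * torch.ones(5), torch.ones(5))
kl = torch.distributions.kl.kl_divergence(p, q)  # shape (5,)
print(kl.mean())  # tensor(0.1250)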
def _compute_policy_entropy(self, obs):
r"""Compute entropy value of probability distribution.
Notes: P is the maximum episode length (self.max_episode_length)
Args:
obs (torch.Tensor): Observation from the environment
with shape :math:`(N, P, O*)`.
Returns:
torch.Tensor: Calculated entropy values given observation
with shape :math:`(N, P)`.
"""
if self._stop_entropy_gradient:
with torch.no_grad():
policy_entropy = self.policy(obs)[0].entropy()
else:
policy_entropy = self.policy(obs)[0].entropy()
# This prevents entropy from becoming negative for small policy std
if self._use_softplus_entropy:
policy_entropy = F.softplus(policy_entropy)
return policy_entropy | Compute entropy value of probability distribution.
Notes: P is the maximum episode length (self.max_episode_length)
Args:
obs (torch.Tensor): Observation from the environment
with shape :math:`(N, P, O*)`.
Returns:
torch.Tensor: Calculated entropy values given observation
with shape :math:`(N, P)`.
| _compute_policy_entropy | python | rlworkgroup/garage | src/garage/torch/algos/vpg.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/vpg.py | MIT |
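On the softplus guard above: a Gaussian's differential entropy, 0.5 * log(2*pi*e*sigma^2), goes negative once sigma drops below 1/sqrt(2*pi*e) (about 0.242); softplus maps any such value back above zero while staying monotonic:

import torch
import torch.nn.functional as F

entropy = torch.tensor([-0.3, 0.0, 1.2])
print(F.softplus(entropy))  # tensor([0.5544, 0.6931, 1.4633])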
def _compute_objective(self, advantages, obs, actions, rewards):
r"""Compute objective value.
Args:
advantages (torch.Tensor): Advantage value at each step
with shape :math:`(N \dot [T], )`.
obs (torch.Tensor): Observation from the environment
with shape :math:`(N \dot [T], O*)`.
actions (torch.Tensor): Actions fed to the environment
with shape :math:`(N \dot [T], A*)`.
rewards (torch.Tensor): Acquired rewards
with shape :math:`(N \dot [T], )`.
Returns:
torch.Tensor: Calculated objective values
with shape :math:`(N \dot [T], )`.
"""
del rewards
log_likelihoods = self.policy(obs)[0].log_prob(actions)
return log_likelihoods * advantages | Compute objective value.
Args:
advantages (torch.Tensor): Advantage value at each step
with shape :math:`(N \dot [T], )`.
obs (torch.Tensor): Observation from the environment
with shape :math:`(N \dot [T], O*)`.
actions (torch.Tensor): Actions fed to the environment
with shape :math:`(N \dot [T], A*)`.
rewards (torch.Tensor): Acquired rewards
with shape :math:`(N \dot [T], )`.
Returns:
torch.Tensor: Calculated objective values
with shape :math:`(N \dot [T], )`.
| _compute_objective | python | rlworkgroup/garage | src/garage/torch/algos/vpg.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/algos/vpg.py | MIT |
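The score-function (REINFORCE) objective above, in isolation: log pi(a|s) weighted by the advantage, whose negated mean becomes the loss in _compute_loss_with_adv. A self-contained sketch with a stand-in policy distribution:

import torch

dist = torch.distributions.Normal(torch.zeros(3), torch.ones(3))
actions = dist.sample()
advantages = torch.tensor([1.0, -0.5, 2.0])
objective = dist.log_prob(actions) * advantages
loss = -objective.mean()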
def log_prob(self, value, pre_tanh_value=None, epsilon=1e-6):
"""The log likelihood of a sample on the this Tanh Distribution.
Args:
value (torch.Tensor): The sample whose loglikelihood is being
computed.
pre_tanh_value (torch.Tensor): The value prior to having the tanh
function applied to it but after it has been sampled from the
normal distribution.
epsilon (float): Regularization constant. Making this value larger
makes the computation more stable but less precise.
Note:
when pre_tanh_value is None, an estimate is made of what the
value is. This leads to a worse estimation of the log_prob.
If the value being used is collected from functions like
`sample` and `rsample`, one can instead use functions like
`sample_return_pre_tanh_value` or
`rsample_return_pre_tanh_value`
Returns:
torch.Tensor: The log likelihood of value on the distribution.
"""
# pylint: disable=arguments-differ
if pre_tanh_value is None:
pre_tanh_value = torch.log(
(1 + epsilon + value) / (1 + epsilon - value)) / 2
norm_lp = self._normal.log_prob(pre_tanh_value)
ret = (norm_lp - torch.sum(
torch.log(self._clip_but_pass_gradient((1. - value**2)) + epsilon),
axis=-1))
return ret | The log likelihood of a sample on this Tanh distribution.
Args:
value (torch.Tensor): The sample whose log likelihood is being
computed.
pre_tanh_value (torch.Tensor): The value prior to having the tanh
function applied to it but after it has been sampled from the
normal distribution.
epsilon (float): Regularization constant. Making this value larger
makes the computation more stable but less precise.
Note:
when pre_tanh_value is None, an estimate is made of what the
value is. This leads to a worse estimation of the log_prob.
If the value being used is collected from functions like
`sample` and `rsample`, one can instead use functions like
`sample_return_pre_tanh_value` or
`rsample_return_pre_tanh_value`
Returns:
torch.Tensor: The log likelihood of value on the distribution.
| log_prob | python | rlworkgroup/garage | src/garage/torch/distributions/tanh_normal.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/distributions/tanh_normal.py | MIT |
def rsample_with_pre_tanh_value(self, sample_shape=torch.Size()):
"""Return a sample, sampled from this TanhNormal distribution.
Returns the sampled value before the tanh transform is applied and the
sampled value with the tanh transform applied to it.
Args:
sample_shape (list): shape of the return.
Note:
Gradients pass through this operation.
Returns:
torch.Tensor: Samples from this distribution.
torch.Tensor: Samples from the underlying
:obj:`torch.distributions.Normal` distribution, prior to being
transformed with `tanh`.
"""
z = self._normal.rsample(sample_shape)
return z, torch.tanh(z) | Return a sample, sampled from this TanhNormal distribution.
Returns the sampled value before the tanh transform is applied and the
sampled value with the tanh transform applied to it.
Args:
sample_shape (list): shape of the return.
Note:
Gradients pass through this operation.
Returns:
torch.Tensor: Samples from this distribution.
torch.Tensor: Samples from the underlying
:obj:`torch.distributions.Normal` distribution, prior to being
transformed with `tanh`.
| rsample_with_pre_tanh_value | python | rlworkgroup/garage | src/garage/torch/distributions/tanh_normal.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/distributions/tanh_normal.py | MIT |
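Typical use of the two methods above together (the TanhNormal(loc, scale) constructor signature is assumed from the garage source): keep the pre-tanh sample around and hand it back to log_prob, since recovering it from the squashed value via atanh is less precise near the action bounds:

import torch
from garage.torch.distributions import TanhNormal

dist = TanhNormal(loc=torch.zeros(4), scale=torch.ones(4))
pre_tanh, action = dist.rsample_with_pre_tanh_value()
exact = dist.log_prob(action, pre_tanh_value=pre_tanh)
approx = dist.log_prob(action)  # pre-tanh value re-estimated internally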
def _from_distribution(cls, new_normal):
"""Construct a new TanhNormal distribution from a normal distribution.
Args:
new_normal (Independent(Normal)): underlying normal dist for
the new TanhNormal distribution.
Returns:
TanhNormal: A new distribution whose underlying normal dist
is new_normal.
"""
# pylint: disable=protected-access
new = cls(torch.zeros(1), torch.zeros(1))
new._normal = new_normal
return new | Construct a new TanhNormal distribution from a normal distribution.
Args:
new_normal (Independent(Normal)): underlying normal dist for
the new TanhNormal distribution.
Returns:
TanhNormal: A new distribution whose underlying normal dist
is new_normal.
| _from_distribution | python | rlworkgroup/garage | src/garage/torch/distributions/tanh_normal.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/distributions/tanh_normal.py | MIT |
def expand(self, batch_shape, _instance=None):
"""Returns a new TanhNormal distribution.
(or populates an existing instance provided by a derived class) with
batch dimensions expanded to `batch_shape`. This method calls
:class:`~torch.Tensor.expand` on the distribution's parameters. As
such, this does not allocate new memory for the expanded distribution
instance. Additionally, this does not repeat any args checking or
parameter broadcasting in `__init__.py`, when an instance is first
created.
Args:
batch_shape (torch.Size): the desired expanded size.
_instance(instance): new instance provided by subclasses that
need to override `.expand`.
Returns:
Instance: New distribution instance with batch dimensions expanded
to `batch_shape`.
"""
new_normal = self._normal.expand(batch_shape, _instance)
new = self._from_distribution(new_normal)
return new | Returns a new TanhNormal distribution.
(or populates an existing instance provided by a derived class) with
batch dimensions expanded to `batch_shape`. This method calls
:class:`~torch.Tensor.expand` on the distribution's parameters. As
such, this does not allocate new memory for the expanded distribution
instance. Additionally, this does not repeat any args checking or
parameter broadcasting in `__init__.py`, when an instance is first
created.
Args:
batch_shape (torch.Size): the desired expanded size.
_instance(instance): new instance provided by subclasses that
need to override `.expand`.
Returns:
Instance: New distribution instance with batch dimensions expanded
to `batch_shape`.
| expand | python | rlworkgroup/garage | src/garage/torch/distributions/tanh_normal.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/distributions/tanh_normal.py | MIT |
def _clip_but_pass_gradient(x, lower=0., upper=1.):
"""Clipping function that allows for gradients to flow through.
Args:
x (torch.Tensor): value to be clipped
lower (float): lower bound of clipping
upper (float): upper bound of clipping
Returns:
torch.Tensor: x clipped between lower and upper.
"""
clip_up = (x > upper).float()
clip_low = (x < lower).float()
with torch.no_grad():
clip = ((upper - x) * clip_up + (lower - x) * clip_low)
return x + clip | Clipping function that allows for gradients to flow through.
Args:
x (torch.Tensor): value to be clipped
lower (float): lower bound of clipping
upper (float): upper bound of clipping
Returns:
torch.Tensor: x clipped between lower and upper.
| _clip_but_pass_gradient | python | rlworkgroup/garage | src/garage/torch/distributions/tanh_normal.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/distributions/tanh_normal.py | MIT |
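A quick check of the straight-through behavior, with _clip_but_pass_gradient from the listing above in scope: the forward pass clips, but because the correction term is computed under no_grad, the backward pass is the identity:

import torch

x = torch.tensor([-0.5, 0.5, 1.5], requires_grad=True)
y = _clip_but_pass_gradient(x, lower=0., upper=1.)
y.sum().backward()
print(y)       # tensor([0.0000, 0.5000, 1.0000], ...)
print(x.grad)  # tensor([1., 1., 1.]) -- gradient flows through the clip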
def spec(self):
"""garage.InOutSpec: Input and output space."""
input_space = akro.Box(-np.inf, np.inf, self._input_dim)
output_space = akro.Box(-np.inf, np.inf, self._output_dim)
return InOutSpec(input_space, output_space) | garage.InOutSpec: Input and output space. | spec | python | rlworkgroup/garage | src/garage/torch/embeddings/mlp_encoder.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/embeddings/mlp_encoder.py | MIT |
def reset(self, do_resets=None):
"""Reset the encoder.
This is effective only for a recurrent encoder; do_resets is effective
only for a vectorized encoder.
For a vectorized encoder, do_resets is an array of booleans indicating
which internal states are to be reset. The length of do_resets should
be equal to the length of inputs.
Args:
do_resets (numpy.ndarray): Bool array indicating which states
are to be reset.
""" | Reset the encoder.
This is effective only for a recurrent encoder; do_resets is effective
only for a vectorized encoder.
For a vectorized encoder, do_resets is an array of booleans indicating
which internal states are to be reset. The length of do_resets should
be equal to the length of inputs.
Args:
do_resets (numpy.ndarray): Bool array indicating which states
are to be reset.
| reset | python | rlworkgroup/garage | src/garage/torch/embeddings/mlp_encoder.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/embeddings/mlp_encoder.py | MIT |
def forward(self, x):
"""Forward method.
Args:
x (torch.Tensor): Input values. Should match image_format
specified at construction (either NCHW or NHWC).
Returns:
List[torch.Tensor]: Output values
"""
# Transform single values into batch, if necessary.
if len(x.shape) == 3:
x = x.unsqueeze(0)
# This should be the single place in torch that image normalization
# happens
if isinstance(self.spec.input_space, akro.Image):
x = torch.div(x, 255.0)
assert len(x.shape) == 4
if self._format == 'NHWC':
# Convert to internal NCHW format
x = x.permute((0, 3, 1, 2))
for layer in self._cnn_layers:
x = layer(x)
if self._format == 'NHWC':
# Convert back to NHWC (just in case)
x = x.permute((0, 2, 3, 1))
# Remove non-batch dimensions
x = x.reshape(x.shape[0], -1)
# Apply final linearity, if it was requested.
if self._final_layer is not None:
x = self._final_layer(x)
return x | Forward method.
Args:
x (torch.Tensor): Input values. Should match image_format
specified at construction (either NCHW or NHWC).
Returns:
List[torch.Tensor]: Output values
| forward | python | rlworkgroup/garage | src/garage/torch/modules/cnn_module.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/modules/cnn_module.py | MIT |
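The NHWC-to-NCHW permutation used above, in isolation: gym-style images arrive as (batch, height, width, channel), while PyTorch convolutions expect (batch, channel, height, width):

import torch

nhwc = torch.zeros(8, 48, 64, 3)  # batch, height, width, channel
nchw = nhwc.permute(0, 3, 1, 2)   # batch, channel, height, width
print(nchw.shape)                 # torch.Size([8, 3, 48, 64])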
def _check_spec(spec, image_format):
"""Check that an InOutSpec is suitable for a CNNModule.
Args:
spec (garage.InOutSpec): Specification of inputs and outputs. The
input should be in 'NCHW' format: [batch_size, channel, height,
width]. Will print a warning if the channel size is not 1 or 3.
If output_space is specified, then a final linear layer will be
inserted to map to that dimensionality. If output_space is None,
it will be filled in with the computed output space.
image_format (str): Either 'NCHW' or 'NHWC'. Should match the input
specification. Gym uses NHWC by default, but PyTorch uses NCHW by
default.
Returns:
tuple[int, int, int]: The input channels, height, and width.
Raises:
ValueError: If spec isn't suitable for a CNNModule.
"""
# pylint: disable=no-else-raise
input_space = spec.input_space
output_space = spec.output_space
# Don't use isinstance, since akro.Space is guaranteed to inherit from
# gym.Space
if getattr(input_space, 'shape', None) is None:
raise ValueError(
f'input_space to CNNModule is {input_space}, but should be an '
f'akro.Box or akro.Image')
elif len(input_space.shape) != 3:
raise ValueError(
f'Input to CNNModule is {input_space}, but should have three '
f'dimensions.')
if (output_space is not None and not (hasattr(output_space, 'shape')
and len(output_space.shape) == 1)):
raise ValueError(
f'output_space to CNNModule is {output_space}, but should be '
f'an akro.Box with a single dimension or None')
if image_format == 'NCHW':
in_channels = spec.input_space.shape[0]
height = spec.input_space.shape[1]
width = spec.input_space.shape[2]
elif image_format == 'NHWC':
height = spec.input_space.shape[0]
width = spec.input_space.shape[1]
in_channels = spec.input_space.shape[2]
else:
raise ValueError(
f'image_format has value {image_format!r}, but must be either '
f"'NCHW' or 'NHWC'")
if in_channels not in (1, 3):
warnings.warn(
f'CNNModule input has {in_channels} channels, but '
f'1 or 3 channels are typical. Consider changing the CNN '
f'image_format.')
return in_channels, height, width | Check that an InOutSpec is suitable for a CNNModule.
Args:
spec (garage.InOutSpec): Specification of inputs and outputs. The
input should be in 'NCHW' format: [batch_size, channel, height,
width]. Will print a warning if the channel size is not 1 or 3.
If output_space is specified, then a final linear layer will be
inserted to map to that dimensionality. If output_space is None,
it will be filled in with the computed output space.
image_format (str): Either 'NCHW' or 'NHWC'. Should match the input
specification. Gym uses NHWC by default, but PyTorch uses NCHW by
default.
Returns:
tuple[int, int, int]: The input channels, height, and width.
Raises:
ValueError: If spec isn't suitable for a CNNModule.
| _check_spec | python | rlworkgroup/garage | src/garage/torch/modules/cnn_module.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/modules/cnn_module.py | MIT |
def to(self, *args, **kwargs):
"""Move the module to the specified device.
Args:
*args: args to pytorch to function.
**kwargs: keyword args to pytorch to function.
"""
super().to(*args, **kwargs)
buffers = dict(self.named_buffers())
if not isinstance(self._log_std, torch.nn.Parameter):
self._log_std = buffers['log_std']
self._min_std_param = buffers['min_std_param']
self._max_std_param = buffers['max_std_param'] | Move the module to the specified device.
Args:
*args: args to pytorch to function.
**kwargs: keyword args to pytorch to function.
| to | python | rlworkgroup/garage | src/garage/torch/modules/gaussian_mlp_module.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/modules/gaussian_mlp_module.py | MIT |
def forward(self, *inputs):
"""Forward method.
Args:
*inputs: Input to the module.
Returns:
torch.distributions.independent.Independent: Independent
distribution.
"""
mean, log_std_uncentered = self._get_mean_and_log_std(*inputs)
if self._min_std_param or self._max_std_param:
log_std_uncentered = log_std_uncentered.clamp(
min=(None if self._min_std_param is None else
self._min_std_param.item()),
max=(None if self._max_std_param is None else
self._max_std_param.item()))
if self._std_parameterization == 'exp':
std = log_std_uncentered.exp()
else:
std = log_std_uncentered.exp().exp().add(1.).log()
dist = self._norm_dist_class(mean, std)
# This control flow is needed because if a TanhNormal distribution is
# wrapped by torch.distributions.Independent, then custom functions
# such as rsample_with_pre_tanh_value of the TanhNormal distribution
# are not accessible.
if not isinstance(dist, TanhNormal):
# Makes it so that a sample from the distribution is treated as a
# single sample and not dist.batch_shape samples.
dist = Independent(dist, 1)
return dist | Forward method.
Args:
*inputs: Input to the module.
Returns:
torch.distributions.independent.Independent: Independent
distribution.
| forward | python | rlworkgroup/garage | src/garage/torch/modules/gaussian_mlp_module.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/modules/gaussian_mlp_module.py | MIT |
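Why the Independent wrapper at the end matters: it folds the last dimension into the event shape, so log_prob returns one value per batch entry instead of one per action dimension:

import torch
from torch.distributions import Independent, Normal

base = Normal(torch.zeros(8, 3), torch.ones(8, 3))
print(base.log_prob(torch.zeros(8, 3)).shape)                  # (8, 3)
print(Independent(base, 1).log_prob(torch.zeros(8, 3)).shape)  # (8,)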
def _get_mean_and_log_std(self, x):
"""Get mean and std of Gaussian distribution given inputs.
Args:
x: Input to the module.
Returns:
torch.Tensor: The mean of Gaussian distribution.
torch.Tensor: The variance of Gaussian distribution.
"""
mean = self._mean_module(x)
return mean, self._log_std | Get mean and std of Gaussian distribution given inputs.
Args:
x: Input to the module.
Returns:
torch.Tensor: The mean of Gaussian distribution.
torch.Tensor: The variance of Gaussian distribution.
| _get_mean_and_log_std | python | rlworkgroup/garage | src/garage/torch/modules/gaussian_mlp_module.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/modules/gaussian_mlp_module.py | MIT |
def _check_parameter_for_output_layer(cls, var_name, var, n_heads):
"""Check input parameters for output layer are valid.
Args:
var_name (str): variable name
var (any): variable to be checked
n_heads (int): number of head
Returns:
list: list of variables (length of n_heads)
Raises:
ValueError: if the variable is a list but length of the variable
is not equal to n_heads
"""
if isinstance(var, (list, tuple)):
if len(var) == 1:
return list(var) * n_heads
if len(var) == n_heads:
return var
msg = ('{} should be either an integer or a collection of length '
'n_heads ({}), but {} provided.')
raise ValueError(msg.format(var_name, n_heads, var))
return [copy.deepcopy(var) for _ in range(n_heads)] | Check input parameters for output layer are valid.
Args:
var_name (str): variable name
var (any): variable to be checked
n_heads (int): number of head
Returns:
list: list of variables (length of n_heads)
Raises:
ValueError: if the variable is a list but length of the variable
is not equal to n_heads
| _check_parameter_for_output_layer | python | rlworkgroup/garage | src/garage/torch/modules/multi_headed_mlp_module.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/modules/multi_headed_mlp_module.py | MIT |
def forward(self, input_val):
"""Forward method.
Args:
input_val (torch.Tensor): Input values with (N, *, input_dim)
shape.
Returns:
List[torch.Tensor]: Output values
"""
x = input_val
for layer in self._layers:
x = layer(x)
return [output_layer(x) for output_layer in self._output_layers] | Forward method.
Args:
input_val (torch.Tensor): Input values with (N, *, input_dim)
shape.
Returns:
List[torch.Tensor]: Output values
| forward | python | rlworkgroup/garage | src/garage/torch/modules/multi_headed_mlp_module.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/modules/multi_headed_mlp_module.py | MIT |
def _build_hessian_vector_product(func, params, reg_coeff=1e-5):
"""Computes Hessian-vector product using Pearlmutter's algorithm.
`Pearlmutter, Barak A. "Fast exact multiplication by the Hessian." Neural
computation 6.1 (1994): 147-160.`
Args:
func (callable): A function that returns a torch.Tensor. Hessian of
the return value will be computed.
params (list[torch.Tensor]): A list of function parameters.
reg_coeff (float): A small value so that A -> A + reg*I.
Returns:
function: It can be called to get the final result.
"""
param_shapes = [p.shape or torch.Size([1]) for p in params]
f = func()
f_grads = torch.autograd.grad(f, params, create_graph=True)
def _eval(vector):
"""The evaluation function.
Args:
vector (torch.Tensor): The vector to be multiplied with
Hessian.
Returns:
torch.Tensor: The product of Hessian of function f and v.
"""
unflatten_vector = unflatten_tensors(vector, param_shapes)
assert len(f_grads) == len(unflatten_vector)
grad_vector_product = torch.sum(
torch.stack(
[torch.sum(g * x) for g, x in zip(f_grads, unflatten_vector)]))
hvp = list(
torch.autograd.grad(grad_vector_product, params,
retain_graph=True))
for i, (hx, p) in enumerate(zip(hvp, params)):
if hx is None:
hvp[i] = torch.zeros_like(p)
flat_output = torch.cat([h.reshape(-1) for h in hvp])
return flat_output + reg_coeff * vector
return _eval | Computes Hessian-vector product using Pearlmutter's algorithm.
`Pearlmutter, Barak A. "Fast exact multiplication by the Hessian." Neural
computation 6.1 (1994): 147-160.`
Args:
func (callable): A function that returns a torch.Tensor. Hessian of
the return value will be computed.
params (list[torch.Tensor]): A list of function parameters.
reg_coeff (float): A small value so that A -> A + reg*I.
Returns:
function: It can be called to get the final result.
| _build_hessian_vector_product | python | rlworkgroup/garage | src/garage/torch/optimizers/conjugate_gradient_optimizer.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/optimizers/conjugate_gradient_optimizer.py | MIT |
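A sanity check of the builder above, with _build_hessian_vector_product from the listing in scope: for f(x) = 0.5 * x^T A x the Hessian is A, so the product with v should be A @ v plus the regularization term (set to zero here):

import torch

A = torch.tensor([[2.0, 0.0], [0.0, 3.0]])
x = torch.zeros(2, requires_grad=True)
f_Ax = _build_hessian_vector_product(lambda: 0.5 * x @ A @ x, [x],
                                     reg_coeff=0.0)
print(f_Ax(torch.tensor([1.0, 1.0])))  # tensor([2., 3.])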
def _eval(vector):
"""The evaluation function.
Args:
vector (torch.Tensor): The vector to be multiplied with
Hessian.
Returns:
torch.Tensor: The product of Hessian of function f and v.
"""
unflatten_vector = unflatten_tensors(vector, param_shapes)
assert len(f_grads) == len(unflatten_vector)
grad_vector_product = torch.sum(
torch.stack(
[torch.sum(g * x) for g, x in zip(f_grads, unflatten_vector)]))
hvp = list(
torch.autograd.grad(grad_vector_product, params,
retain_graph=True))
for i, (hx, p) in enumerate(zip(hvp, params)):
if hx is None:
hvp[i] = torch.zeros_like(p)
flat_output = torch.cat([h.reshape(-1) for h in hvp])
return flat_output + reg_coeff * vector | The evaluation function.
Args:
vector (torch.Tensor): The vector to be multiplied with
Hessian.
Returns:
torch.Tensor: The product of Hessian of function f and v.
| _eval | python | rlworkgroup/garage | src/garage/torch/optimizers/conjugate_gradient_optimizer.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/optimizers/conjugate_gradient_optimizer.py | MIT |
def _conjugate_gradient(f_Ax, b, cg_iters, residual_tol=1e-10):
"""Use Conjugate Gradient iteration to solve Ax = b. Demmel p 312.
Args:
f_Ax (callable): A function to compute Hessian vector product.
b (torch.Tensor): Right hand side of the equation to solve.
cg_iters (int): Number of iterations to run conjugate gradient
algorithm.
residual_tol (float): Tolerance for convergence.
Returns:
torch.Tensor: Solution x* for equation Ax = b.
"""
p = b.clone()
r = b.clone()
x = torch.zeros_like(b)
rdotr = torch.dot(r, r)
for _ in range(cg_iters):
z = f_Ax(p)
v = rdotr / torch.dot(p, z)
x += v * p
r -= v * z
newrdotr = torch.dot(r, r)
mu = newrdotr / rdotr
p = r + mu * p
rdotr = newrdotr
if rdotr < residual_tol:
break
return x | Use Conjugate Gradient iteration to solve Ax = b. Demmel p 312.
Args:
f_Ax (callable): A function to compute Hessian vector product.
b (torch.Tensor): Right hand side of the equation to solve.
cg_iters (int): Number of iterations to run conjugate gradient
algorithm.
residual_tol (float): Tolerance for convergence.
Returns:
torch.Tensor: Solution x* for equation Ax = b.
| _conjugate_gradient | python | rlworkgroup/garage | src/garage/torch/optimizers/conjugate_gradient_optimizer.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/optimizers/conjugate_gradient_optimizer.py | MIT |
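A quick correctness check of the routine above, with _conjugate_gradient from the listing in scope, on a small symmetric positive-definite system; for A = [[4, 1], [1, 3]] and b = [1, 2] the exact solution is [1/11, 7/11]:

import torch

A = torch.tensor([[4.0, 1.0], [1.0, 3.0]])
b = torch.tensor([1.0, 2.0])
x = _conjugate_gradient(lambda v: A @ v, b, cg_iters=10)
assert torch.allclose(x, torch.tensor([1 / 11, 7 / 11]), atol=1e-4)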
def step(self, f_loss, f_constraint): # pylint: disable=arguments-differ
"""Take an optimization step.
Args:
f_loss (callable): Function to compute the loss.
f_constraint (callable): Function to compute the constraint value.
"""
# Collect trainable parameters and gradients
params = []
grads = []
for group in self.param_groups:
for p in group['params']:
if p.grad is not None:
params.append(p)
grads.append(p.grad.reshape(-1))
flat_loss_grads = torch.cat(grads)
# Build Hessian-vector-product function
f_Ax = _build_hessian_vector_product(f_constraint, params,
self._hvp_reg_coeff)
# Compute step direction
step_dir = _conjugate_gradient(f_Ax, flat_loss_grads, self._cg_iters)
# Replace nan with 0.
step_dir[step_dir.ne(step_dir)] = 0.
# Compute step size
step_size = np.sqrt(2.0 * self._max_constraint_value *
(1. /
(torch.dot(step_dir, f_Ax(step_dir)) + 1e-8)))
if np.isnan(step_size):
step_size = 1.
descent_step = step_size * step_dir
# Update parameters using backtracking line search
self._backtracking_line_search(params, descent_step, f_loss,
f_constraint) | Take an optimization step.
Args:
f_loss (callable): Function to compute the loss.
f_constraint (callable): Function to compute the constraint value.
| step | python | rlworkgroup/garage | src/garage/torch/optimizers/conjugate_gradient_optimizer.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/optimizers/conjugate_gradient_optimizer.py | MIT |
def state(self):
"""dict: The hyper-parameters of the optimizer."""
return {
'max_constraint_value': self._max_constraint_value,
'cg_iters': self._cg_iters,
'max_backtracks': self._max_backtracks,
'backtrack_ratio': self._backtrack_ratio,
'hvp_reg_coeff': self._hvp_reg_coeff,
'accept_violation': self._accept_violation,
} | dict: The hyper-parameters of the optimizer. | state | python | rlworkgroup/garage | src/garage/torch/optimizers/conjugate_gradient_optimizer.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/optimizers/conjugate_gradient_optimizer.py | MIT |
def __setstate__(self, state):
"""Restore the optimizer state.
Args:
state (dict): State dictionary.
"""
if 'hvp_reg_coeff' not in state['state']:
warnings.warn(
'Resuming ConjugateGradientOptimizer with lost state. '
'This behavior is fixed if pickling from garage>=2020.02.0.')
self.defaults = state['defaults']
# Set the fields manually so that the setter gets called.
self.state = state['state']
self.param_groups = state['param_groups'] | Restore the optimizer state.
Args:
state (dict): State dictionary.
| __setstate__ | python | rlworkgroup/garage | src/garage/torch/optimizers/conjugate_gradient_optimizer.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/optimizers/conjugate_gradient_optimizer.py | MIT |
def zero_grad(self):
"""Sets gradients of all model parameters to zero."""
for param in self.module.parameters():
if param.grad is not None:
param.grad.detach_()
param.grad.zero_() | Sets gradients of all model parameters to zero. | zero_grad | python | rlworkgroup/garage | src/garage/torch/optimizers/differentiable_sgd.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/optimizers/differentiable_sgd.py | MIT |
def set_grads_none(self):
"""Sets gradients for all model parameters to None.
This is an alternative to `zero_grad` which sets
gradients to zero.
"""
for param in self.module.parameters():
if param.grad is not None:
param.grad = None | Sets gradients for all model parameters to None.
This is an alternative to `zero_grad` which sets
gradients to zero.
| set_grads_none | python | rlworkgroup/garage | src/garage/torch/optimizers/differentiable_sgd.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/optimizers/differentiable_sgd.py | MIT |
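A sketch of the intended inner-loop usage (as in MAML's adaptation step), assuming the constructor signature DifferentiableSGD(module, lr) from the garage source; backward is called with create_graph=True so meta-gradients can flow back through the parameter update itself:

import torch
from garage.torch.optimizers import DifferentiableSGD

net = torch.nn.Linear(2, 1)
inner_opt = DifferentiableSGD(net, lr=0.1)
inner_opt.zero_grad()
loss = net(torch.ones(1, 2)).sum()
loss.backward(create_graph=True)
inner_opt.step()  # differentiable update: new params keep the graph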
def get_minibatch(self, *inputs):
r"""Yields a batch of inputs.
Notes: P is the size of minibatch (self._minibatch_size)
Args:
*inputs (list[torch.Tensor]): A list of inputs. Each input has
shape :math:`(N \dot [T], *)`.
Yields:
list[torch.Tensor]: A list batch of inputs. Each batch has shape
:math:`(P, *)`.
"""
batch_dataset = BatchDataset(inputs, self._minibatch_size)
for _ in range(self._max_optimization_epochs):
for dataset in batch_dataset.iterate():
yield dataset | Yields a batch of inputs.
Notes: P is the size of minibatch (self._minibatch_size)
Args:
*inputs (list[torch.Tensor]): A list of inputs. Each input has
shape :math:`(N \dot [T], *)`.
Yields:
list[torch.Tensor]: A list batch of inputs. Each batch has shape
:math:`(P, *)`.
| get_minibatch | python | rlworkgroup/garage | src/garage/torch/optimizers/optimizer_wrapper.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/optimizers/optimizer_wrapper.py | MIT |
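Typical construction and consumption, mirroring garage's on-policy setups (the (optimizer class, kwargs) tuple form and the argument order are assumptions based on the garage source):

import torch
from garage.torch.optimizers import OptimizerWrapper

policy = torch.nn.Linear(4, 2)  # stand-in for a policy module
wrapper = OptimizerWrapper((torch.optim.Adam, dict(lr=2.5e-4)),
                           policy,
                           max_optimization_epochs=10,
                           minibatch_size=64)
obs, acts = torch.zeros(256, 4), torch.zeros(256, 2)
for obs_mb, acts_mb in wrapper.get_minibatch(obs, acts):
    pass  # 10 epochs x 4 shuffled minibatches of up to 64 rows each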
def forward(self, observations):
"""Compute the action distributions from the observations.
Args:
observations (torch.Tensor): Observations to act on.
Returns:
torch.distributions.Distribution: Batch distribution of actions.
dict[str, torch.Tensor]: Additional agent_info, as torch Tensors.
Do not need to be detached, and can be on any device.
"""
# We're given flattened observations.
observations = observations.reshape(
-1, *self._env_spec.observation_space.shape)
cnn_output = self._cnn_module(observations)
mlp_output = self._mlp_module(cnn_output)[0]
# Softmax yields probabilities, so pass them as probs; passing them
# as logits would apply a second softmax inside Categorical.
probs = torch.softmax(mlp_output, dim=1)
dist = torch.distributions.Categorical(probs=probs)
return dist, {} | Compute the action distributions from the observations.
Args:
observations (torch.Tensor): Observations to act on.
Returns:
torch.distributions.Distribution: Batch distribution of actions.
dict[str, torch.Tensor]: Additional agent_info, as torch Tensors.
Do not need to be detached, and can be on any device.
| forward | python | rlworkgroup/garage | src/garage/torch/policies/categorical_cnn_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/policies/categorical_cnn_policy.py | MIT |
def reset_belief(self, num_tasks=1):
r"""Reset :math:`q(z \| c)` to the prior and sample a new z from the prior.
Args:
num_tasks (int): Number of tasks.
"""
# reset distribution over z to the prior
mu = torch.zeros(num_tasks, self._latent_dim).to(global_device())
if self._use_information_bottleneck:
var = torch.ones(num_tasks, self._latent_dim).to(global_device())
else:
var = torch.zeros(num_tasks, self._latent_dim).to(global_device())
self.z_means = mu
self.z_vars = var
# sample a new z from the prior
self.sample_from_belief()
# reset the context collected so far
self._context = None
# reset any hidden state in the encoder network (relevant for RNN)
self._context_encoder.reset() | Reset :math:`q(z \| c)` to the prior and sample a new z from the prior.
Args:
num_tasks (int): Number of tasks.
| reset_belief | python | rlworkgroup/garage | src/garage/torch/policies/context_conditioned_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/policies/context_conditioned_policy.py | MIT |
def sample_from_belief(self):
"""Sample z using distributions from current means and variances."""
if self._use_information_bottleneck:
posteriors = [
torch.distributions.Normal(m, torch.sqrt(s)) for m, s in zip(
torch.unbind(self.z_means), torch.unbind(self.z_vars))
]
z = [d.rsample() for d in posteriors]
self.z = torch.stack(z)
else:
self.z = self.z_means | Sample z using distributions from current means and variances. | sample_from_belief | python | rlworkgroup/garage | src/garage/torch/policies/context_conditioned_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/policies/context_conditioned_policy.py | MIT |
def update_context(self, timestep):
"""Append single transition to the current context.
Args:
timestep (garage._dtypes.TimeStep): Timestep containing transition
information to be added to context.
"""
o = torch.as_tensor(timestep.observation[None, None, ...],
device=global_device()).float()
a = torch.as_tensor(timestep.action[None, None, ...],
device=global_device()).float()
r = torch.as_tensor(np.array([timestep.reward])[None, None, ...],
device=global_device()).float()
no = torch.as_tensor(timestep.next_observation[None, None, ...],
device=global_device()).float()
if self._use_next_obs:
data = torch.cat([o, a, r, no], dim=2)
else:
data = torch.cat([o, a, r], dim=2)
if self._context is None:
self._context = data
else:
self._context = torch.cat([self._context, data], dim=1) | Append single transition to the current context.
Args:
timestep (garage._dtypes.TimeStep): Timestep containing transition
information to be added to context.
| update_context | python | rlworkgroup/garage | src/garage/torch/policies/context_conditioned_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/policies/context_conditioned_policy.py | MIT |
def infer_posterior(self, context):
r"""Compute :math:`q(z \| c)` as a function of input context and sample new z.
Args:
context (torch.Tensor): Context values, with shape
:math:`(X, N, C)`. X is the number of tasks. N is batch size. C
is the combined size of observation, action, reward, and next
observation if next observation is used in context. Otherwise,
C is the combined size of observation, action, and reward.
"""
params = self._context_encoder.forward(context)
params = params.view(context.size(0), -1,
self._context_encoder.output_dim)
# with probabilistic z, predict mean and variance of q(z | c)
if self._use_information_bottleneck:
mu = params[..., :self._latent_dim]
sigma_squared = F.softplus(params[..., self._latent_dim:])
z_params = [
product_of_gaussians(m, s)
for m, s in zip(torch.unbind(mu), torch.unbind(sigma_squared))
]
self.z_means = torch.stack([p[0] for p in z_params])
self.z_vars = torch.stack([p[1] for p in z_params])
else:
self.z_means = torch.mean(params, dim=1)
self.sample_from_belief() | Compute :math:`q(z \| c)` as a function of input context and sample new z.
Args:
context (torch.Tensor): Context values, with shape
:math:`(X, N, C)`. X is the number of tasks. N is batch size. C
is the combined size of observation, action, reward, and next
observation if next observation is used in context. Otherwise,
C is the combined size of observation, action, and reward.
| infer_posterior | python | rlworkgroup/garage | src/garage/torch/policies/context_conditioned_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/policies/context_conditioned_policy.py | MIT |
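On the product_of_gaussians step above: fusing independent Gaussian factors adds precisions. Assuming the helper is importable from garage.torch as in the source, two unit-variance factors with means 0 and 1 fuse to mean 0.5 and variance 0.5:

import torch
from garage.torch import product_of_gaussians

mus = torch.tensor([[0.0], [1.0]])
sigmas_squared = torch.ones(2, 1)
mu, sigma_squared = product_of_gaussians(mus, sigmas_squared)
print(mu, sigma_squared)  # tensor([0.5000]) tensor([0.5000])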
def forward(self, obs, context):
"""Given observations and context, get actions and probs from policy.
Args:
obs (torch.Tensor): Observation values, with shape
:math:`(X, N, O)`. X is the number of tasks. N is batch size. O
is the size of the flattened observation space.
context (torch.Tensor): Context values, with shape
:math:`(X, N, C)`. X is the number of tasks. N is batch size. C
is the combined size of observation, action, reward, and next
observation if next observation is used in context. Otherwise,
C is the combined size of observation, action, and reward.
Returns:
tuple:
* torch.Tensor: Predicted action values.
* np.ndarray: Mean of distribution.
* np.ndarray: Log std of distribution.
* torch.Tensor: Log likelihood of distribution.
* torch.Tensor: Sampled values from distribution before
applying tanh transformation.
torch.Tensor: z values, with shape :math:`(N, L)`. N is batch size.
L is the latent dimension.
"""
self.infer_posterior(context)
self.sample_from_belief()
task_z = self.z
# task, batch
t, b, _ = obs.size()
obs = obs.view(t * b, -1)
task_z = [z.repeat(b, 1) for z in task_z]
task_z = torch.cat(task_z, dim=0)
# run policy, get log probs and new actions
obs_z = torch.cat([obs, task_z.detach()], dim=1)
dist = self._policy(obs_z)[0]
pre_tanh, actions = dist.rsample_with_pre_tanh_value()
log_pi = dist.log_prob(value=actions, pre_tanh_value=pre_tanh)
log_pi = log_pi.unsqueeze(1)
mean = dist.mean.to('cpu').detach().numpy()
log_std = (dist.variance**.5).log().to('cpu').detach().numpy()
return (actions, mean, log_std, log_pi, pre_tanh), task_z | Given observations and context, get actions and probs from policy.
Args:
obs (torch.Tensor): Observation values, with shape
:math:`(X, N, O)`. X is the number of tasks. N is batch size. O
is the size of the flattened observation space.
context (torch.Tensor): Context values, with shape
:math:`(X, N, C)`. X is the number of tasks. N is batch size. C
is the combined size of observation, action, reward, and next
observation if next observation is used in context. Otherwise,
C is the combined size of observation, action, and reward.
Returns:
tuple:
* torch.Tensor: Predicted action values.
* np.ndarray: Mean of distribution.
* np.ndarray: Log std of distribution.
* torch.Tensor: Log likelihood of distribution.
* torch.Tensor: Sampled values from distribution before
applying tanh transformation.
torch.Tensor: z values, with shape :math:`(N, L)`. N is batch size.
L is the latent dimension.
| forward | python | rlworkgroup/garage | src/garage/torch/policies/context_conditioned_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/policies/context_conditioned_policy.py | MIT |
def get_action(self, obs):
"""Sample action from the policy, conditioned on the task embedding.
Args:
obs (torch.Tensor): Observation values, with shape :math:`(1, O)`.
O is the size of the flattened observation space.
Returns:
torch.Tensor: Output action value, with shape :math:`(1, A)`.
A is the size of the flattened action space.
dict:
* np.ndarray[float]: Mean of the distribution.
                * np.ndarray[float]: Log of the standard deviation of
                    the distribution.
"""
z = self.z
obs = torch.as_tensor(obs[None], device=global_device()).float()
obs_in = torch.cat([obs, z], dim=1)
action, info = self._policy.get_action(obs_in)
return action, info | Sample action from the policy, conditioned on the task embedding.
Args:
obs (torch.Tensor): Observation values, with shape :math:`(1, O)`.
O is the size of the flattened observation space.
Returns:
torch.Tensor: Output action value, with shape :math:`(1, A)`.
A is the size of the flattened action space.
dict:
* np.ndarray[float]: Mean of the distribution.
* np.ndarray[float]: Log of the standard deviation of the distribution.
| get_action | python | rlworkgroup/garage | src/garage/torch/policies/context_conditioned_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/policies/context_conditioned_policy.py | MIT |
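A minimal usage sketch; the environment handle and the ``reset_belief`` call are assumptions for illustration, not shown in this entry:

policy.reset_belief()                  # assumed helper: reset q(z|c) to the prior
obs = env.reset()[0]                   # hypothetical environment handle
action, info = policy.get_action(obs)  # action conditioned on the current z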
def compute_kl_div(self):
r"""Compute :math:`KL(q(z|c) \| p(z))`.
Returns:
            torch.Tensor: Scalar (0-d tensor) value of
                :math:`KL(q(z|c) \| p(z))`.
"""
prior = torch.distributions.Normal(
torch.zeros(self._latent_dim).to(global_device()),
torch.ones(self._latent_dim).to(global_device()))
posteriors = [
torch.distributions.Normal(mu, torch.sqrt(var)) for mu, var in zip(
torch.unbind(self.z_means), torch.unbind(self.z_vars))
]
kl_divs = [
torch.distributions.kl.kl_divergence(post, prior)
for post in posteriors
]
kl_div_sum = torch.sum(torch.stack(kl_divs))
return kl_div_sum | Compute :math:`KL(q(z|c) \| p(z))`.
Returns:
torch.Tensor: Scalar (0-d tensor) value of :math:`KL(q(z|c) \| p(z))`.
| compute_kl_div | python | rlworkgroup/garage | src/garage/torch/policies/context_conditioned_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/policies/context_conditioned_policy.py | MIT |
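For a diagonal Gaussian posterior against a standard normal prior, this KL has the closed form :math:`KL = \frac{1}{2}\sum_i(\sigma_i^2 + \mu_i^2 - 1 - \log\sigma_i^2)`. A short sketch checking the library call against that formula, with illustrative values:

import torch

mu = torch.tensor([0.3, -0.1])
var = torch.tensor([0.5, 1.2])

post = torch.distributions.Normal(mu, torch.sqrt(var))
prior = torch.distributions.Normal(torch.zeros(2), torch.ones(2))
kl_torch = torch.distributions.kl.kl_divergence(post, prior).sum()

# Closed form for diagonal Gaussians against N(0, I).
kl_manual = 0.5 * torch.sum(var + mu**2 - 1. - torch.log(var))
assert torch.allclose(kl_torch, kl_manual)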
def __init__(self, env_spec, name='DeterministicMLPPolicy', **kwargs):
"""Initialize class with multiple attributes.
Args:
env_spec (EnvSpec): Environment specification.
name (str): Policy name.
**kwargs: Additional keyword arguments passed to the MLPModule.
"""
super().__init__(env_spec, name)
self._obs_dim = env_spec.observation_space.flat_dim
self._action_dim = env_spec.action_space.flat_dim
self._module = MLPModule(input_dim=self._obs_dim,
output_dim=self._action_dim,
**kwargs) | Initialize class with multiple attributes.
Args:
env_spec (EnvSpec): Environment specification.
name (str): Policy name.
**kwargs: Additional keyword arguments passed to the MLPModule.
| __init__ | python | rlworkgroup/garage | src/garage/torch/policies/deterministic_mlp_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/policies/deterministic_mlp_policy.py | MIT |
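A minimal construction sketch, assuming a garage environment wrapper; the task choice and hyperparameters are illustrative, and ``**kwargs`` are forwarded to the underlying MLPModule:

import torch
import torch.nn.functional as F
from garage.envs import GymEnv
from garage.torch.policies import DeterministicMLPPolicy

env = GymEnv('InvertedPendulum-v2')   # hypothetical choice of task
policy = DeterministicMLPPolicy(env.spec,
                                hidden_sizes=[64, 64],
                                hidden_nonlinearity=F.relu,
                                output_nonlinearity=torch.tanh)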
def get_action(self, observation):
"""Get a single action given an observation.
Args:
observation (np.ndarray): Observation from the environment.
Returns:
tuple:
* np.ndarray: Predicted action.
                * dict: Empty agent info; this policy is deterministic
                    and defines no action distribution.
"""
if not isinstance(observation, np.ndarray) and not isinstance(
observation, torch.Tensor):
observation = self._env_spec.observation_space.flatten(observation)
elif isinstance(observation,
np.ndarray) and len(observation.shape) > 1:
observation = self._env_spec.observation_space.flatten(observation)
elif isinstance(observation,
torch.Tensor) and len(observation.shape) > 1:
observation = torch.flatten(observation)
with torch.no_grad():
observation = torch.Tensor(observation).unsqueeze(0)
action, agent_infos = self.get_actions(observation)
return action[0], {k: v[0] for k, v in agent_infos.items()} | Get a single action given an observation.
Args:
observation (np.ndarray): Observation from the environment.
Returns:
tuple:
* np.ndarray: Predicted action.
* dict: Empty agent info; this policy is deterministic and defines no action distribution.
| get_action | python | rlworkgroup/garage | src/garage/torch/policies/deterministic_mlp_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/policies/deterministic_mlp_policy.py | MIT |
def get_actions(self, observations):
"""Get actions given observations.
Args:
observations (np.ndarray): Observations from the environment.
Returns:
tuple:
* np.ndarray: Predicted actions.
                * dict: Empty agent info; this policy is deterministic
                    and defines no action distribution.
"""
if not isinstance(observations[0], np.ndarray) and not isinstance(
observations[0], torch.Tensor):
observations = self._env_spec.observation_space.flatten_n(
observations)
# frequently users like to pass lists of torch tensors or lists of
# numpy arrays. This handles those conversions.
if isinstance(observations, list):
if isinstance(observations[0], np.ndarray):
observations = np.stack(observations)
elif isinstance(observations[0], torch.Tensor):
observations = torch.stack(observations)
if isinstance(observations[0],
np.ndarray) and len(observations[0].shape) > 1:
observations = self._env_spec.observation_space.flatten_n(
observations)
elif isinstance(observations[0],
torch.Tensor) and len(observations[0].shape) > 1:
observations = torch.flatten(observations, start_dim=1)
if isinstance(self._env_spec.observation_space, akro.Image) and \
len(observations.shape) < \
len(self._env_spec.observation_space.shape):
observations = self._env_spec.observation_space.unflatten_n(
observations)
with torch.no_grad():
x = self(torch.Tensor(observations).to(global_device()))
return x.cpu().numpy(), dict() | Get actions given observations.
Args:
observations (np.ndarray): Observations from the environment.
Returns:
tuple:
* np.ndarray: Predicted actions.
* dict: Empty agent info; this policy is deterministic and defines no action distribution.
| get_actions | python | rlworkgroup/garage | src/garage/torch/policies/deterministic_mlp_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/policies/deterministic_mlp_policy.py | MIT |
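The branching above normalizes several caller-friendly input forms before the forward pass. A small sketch of the equivalent shapes (values purely illustrative):

import numpy as np
import torch

obs_dim = 4
as_list = [np.zeros(obs_dim) for _ in range(3)]       # list of arrays
as_ndarray = np.stack(as_list)                        # (3, obs_dim) ndarray
as_tensor = torch.stack([torch.zeros(obs_dim)] * 3)   # (3, obs_dim) tensor
# All three forms are accepted: lists are stacked, and multi-dimensional
# per-observation inputs are flattened before being fed to the network.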
def forward(self, observations):
"""Compute the action distributions from the observations.
Args:
            observations (torch.Tensor): Batch of observations of shape
:math:`(N, O)`. Observations should be flattened even
if they are images as the underlying Q network handles
unflattening.
Returns:
torch.distributions.Distribution: Batch distribution of actions.
dict[str, torch.Tensor]: Additional agent_info, as torch Tensors.
Do not need to be detached, and can be on any device.
"""
# We're given flattened observations.
observations = observations.reshape(
-1, *self._env_spec.observation_space.shape)
output = self._cnn_module(observations)
        logits = torch.softmax(output, dim=1)
dist = torch.distributions.Bernoulli(logits=logits)
return dist, {} | Compute the action distributions from the observations.
Args:
observations (torch.Tensor): Batch of observations of shape
:math:`(N, O)`. Observations should be flattened even
if they are images as the underlying Q network handles
unflattening.
Returns:
torch.distributions.Distribution: Batch distribution of actions.
dict[str, torch.Tensor]: Additional agent_info, as torch Tensors.
Do not need to be detached, and can be on any device.
| forward | python | rlworkgroup/garage | src/garage/torch/policies/discrete_cnn_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/policies/discrete_cnn_policy.py | MIT |
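Note that the head above applies ``torch.softmax`` and then passes the resulting probabilities as ``logits`` to a Bernoulli, which applies a sigmoid on top of them. Whether that is intended cannot be determined from this entry alone; a more conventional discrete-action head, shown here only as a hedged alternative sketch rather than the library's implementation, feeds the raw network scores to a Categorical:

import torch

def categorical_head(scores):
    """Alternative sketch: treat network outputs as unnormalized logits."""
    return torch.distributions.Categorical(logits=scores)

scores = torch.randn(2, 5)        # (batch, n_actions), illustrative values
dist = categorical_head(scores)
actions = dist.sample()           # shape (2,), one action index per row
log_probs = dist.log_prob(actions)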
def forward(self, observations):
"""Get actions corresponding to a batch of observations.
Args:
            observations (torch.Tensor): Batch of observations of shape
:math:`(N, O)`. Observations should be flattened even
if they are images as the underlying Q network handles
unflattening.
Returns:
torch.Tensor: Batch of actions of shape :math:`(N, A)`
"""
qs = self._qf(observations)
return torch.argmax(qs, dim=1) | Get actions corresponding to a batch of observations.
Args:
observations (torch.Tensor): Batch of observations of shape
:math:`(N, O)`. Observations should be flattened even
if they are images as the underlying Q network handles
unflattening.
Returns:
torch.Tensor: Batch of actions of shape :math:`(N, A)`
| forward | python | rlworkgroup/garage | src/garage/torch/policies/discrete_qf_argmax_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/policies/discrete_qf_argmax_policy.py | MIT |
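A worked example of this greedy selection, with a stand-in batch of Q-values (numbers illustrative):

import torch

qs = torch.tensor([[0.1, 0.9, 0.3],
                   [0.7, 0.2, 0.4]])   # (batch, n_actions)
actions = torch.argmax(qs, dim=1)      # tensor([1, 0]): best action per row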
def get_action(self, observation):
"""Get a single action given an observation.
Args:
observation (np.ndarray): Observation with shape :math:`(O, )`.
Returns:
torch.Tensor: Predicted action with shape :math:`(A, )`.
dict: Empty since this policy does not produce a distribution.
"""
act, info = self.get_actions(np.expand_dims(observation, axis=0))
return act[0], info | Get a single action given an observation.
Args:
observation (np.ndarray): Observation with shape :math:`(O, )`.
Returns:
torch.Tensor: Predicted action with shape :math:`(A, )`.
dict: Empty since this policy does not produce a distribution.
| get_action | python | rlworkgroup/garage | src/garage/torch/policies/discrete_qf_argmax_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/policies/discrete_qf_argmax_policy.py | MIT |
def get_actions(self, observations):
"""Get actions given observations.
Args:
observations (np.ndarray): Batch of observations, should
have shape :math:`(N, O)`.
Returns:
torch.Tensor: Predicted actions. Tensor has shape :math:`(N, A)`.
dict: Empty since this policy does not produce a distribution.
"""
with torch.no_grad():
return self(np_to_torch(observations)).cpu().numpy(), dict() | Get actions given observations.
Args:
observations (np.ndarray): Batch of observations, should
have shape :math:`(N, O)`.
Returns:
torch.Tensor: Predicted actions. Tensor has shape :math:`(N, A)`.
dict: Empty since this policy does not produce a distribution.
| get_actions | python | rlworkgroup/garage | src/garage/torch/policies/discrete_qf_argmax_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/policies/discrete_qf_argmax_policy.py | MIT |
def forward(self, observations):
"""Compute the action distributions from the observations.
Args:
observations (torch.Tensor): Batch of observations on default
torch device.
Returns:
torch.distributions.Distribution: Batch distribution of actions.
dict[str, torch.Tensor]: Additional agent_info, as torch Tensors
"""
dist = self._module(observations)
return (dist, dict(mean=dist.mean, log_std=(dist.variance**.5).log())) | Compute the action distributions from the observations.
Args:
observations (torch.Tensor): Batch of observations on default
torch device.
Returns:
torch.distributions.Distribution: Batch distribution of actions.
dict[str, torch.Tensor]: Additional agent_info, as torch Tensors
| forward | python | rlworkgroup/garage | src/garage/torch/policies/gaussian_mlp_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/policies/gaussian_mlp_policy.py | MIT |
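The agent info above derives ``log_std`` from the distribution's variance; the identity being used is :math:`\log\sigma = \frac{1}{2}\log\sigma^2`. A short check with illustrative values:

import torch

var = torch.tensor([0.25, 4.0])
log_std = (var ** 0.5).log()            # as in the entry above
assert torch.allclose(log_std, 0.5 * var.log())
# For var = 0.25, sigma = 0.5 and log_std = log(0.5) ≈ -0.693.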
def get_action(self, observation):
"""Get action sampled from the policy.
Args:
observation (np.ndarray): Observation from the environment.
Returns:
Tuple[np.ndarray, dict[str,np.ndarray]]: Action and extra agent
info.
""" | Get action sampled from the policy.
Args:
observation (np.ndarray): Observation from the environment.
Returns:
Tuple[np.ndarray, dict[str,np.ndarray]]: Action and extra agent
info.
| get_action | python | rlworkgroup/garage | src/garage/torch/policies/policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/policies/policy.py | MIT |
def get_actions(self, observations):
"""Get actions given observations.
Args:
observations (np.ndarray): Observations from the environment.
Returns:
Tuple[np.ndarray, dict[str,np.ndarray]]: Actions and extra agent
infos.
""" | Get actions given observations.
Args:
observations (np.ndarray): Observations from the environment.
Returns:
Tuple[np.ndarray, dict[str,np.ndarray]]: Actions and extra agent
infos.
| get_actions | python | rlworkgroup/garage | src/garage/torch/policies/policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/torch/policies/policy.py | MIT |
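These two abstract methods define the minimal policy contract: map an observation (or batch of observations) to actions plus an agent-info dict. A toy sketch satisfying that contract, with a uniform-random policy (class name and structure are illustrative, not part of the library):

import numpy as np

class UniformRandomPolicy:
    """Toy sketch of the policy contract: returns random valid actions."""

    def __init__(self, env_spec):
        self._action_space = env_spec.action_space

    def get_action(self, observation):
        # Single observation in, single action and (empty) agent info out.
        return self._action_space.sample(), {}

    def get_actions(self, observations):
        # Batch of observations in, stacked actions and agent infos out.
        actions = np.stack(
            [self._action_space.sample() for _ in observations])
        return actions, {}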