| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def adapt_policy(self, exploration_policy, exploration_episodes):
"""Produce a policy adapted for a task.
Args:
exploration_policy (Policy): A policy which was returned from
get_exploration_policy(), and which generated
exploration_trajectories by interacting with an environment.
The caller may not use this object after passing it into this
method.
exploration_episodes (EpisodeBatch): Episodes with which to adapt.
These are generated by exploration_policy while exploring the
environment.
Returns:
Policy: A policy adapted to the task represented by the
exploration_episodes.
""" | Produce a policy adapted for a task.
Args:
exploration_policy (Policy): A policy which was returned from
get_exploration_policy(), and which generated
exploration_trajectories by interacting with an environment.
The caller may not use this object after passing it into this
method.
exploration_episodes (EpisodeBatch): Episodes with which to adapt.
These are generated by exploration_policy while exploring the
environment.
Returns:
Policy: A policy adapted to the task represented by the
exploration_episodes.
| adapt_policy | python | rlworkgroup/garage | src/garage/np/algos/meta_rl_algorithm.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/algos/meta_rl_algorithm.py | MIT |
def optimize_policy(self, paths):
"""Optimize the policy using the samples.
Args:
paths (list[dict]): A list of collected paths.
""" | Optimize the policy using the samples.
Args:
paths (list[dict]): A list of collected paths.
| optimize_policy | python | rlworkgroup/garage | src/garage/np/algos/nop.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/algos/nop.py | MIT |
def train(self, trainer):
"""Obtain samplers and start actual training for each epoch.
Args:
trainer (Trainer): Trainer is passed to give algorithm
the access to trainer.step_epochs(), which provides services
such as snapshotting and sampler control.
""" | Obtain samplers and start actual training for each epoch.
Args:
trainer (Trainer): Trainer is passed to give algorithm
the access to trainer.step_epochs(), which provides services
such as snapshotting and sampler control.
| train | python | rlworkgroup/garage | src/garage/np/algos/nop.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/algos/nop.py | MIT |
def train(self, trainer):
"""Obtain samplers and start actual training for each epoch.
Args:
trainer (Trainer): Trainer is passed to give algorithm
the access to trainer.step_epochs(), which provides services
such as snapshotting and sampler control.
""" | Obtain samplers and start actual training for each epoch.
Args:
trainer (Trainer): Trainer is passed to give algorithm
the access to trainer.step_epochs(), which provides services
such as snapshotting and sampler control.
| train | python | rlworkgroup/garage | src/garage/np/algos/rl_algorithm.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/algos/rl_algorithm.py | MIT |
def fit(self, paths):
"""Fit regressor based on paths.
Args:
paths (dict[numpy.ndarray]): Sample paths.
""" | Fit regressor based on paths.
Args:
paths (dict[numpy.ndarray]): Sample paths.
| fit | python | rlworkgroup/garage | src/garage/np/baselines/baseline.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/baselines/baseline.py | MIT |
def predict(self, paths):
"""Predict value based on paths.
Args:
paths (dict[numpy.ndarray]): Sample paths.
Returns:
numpy.ndarray: Predicted value.
""" | Predict value based on paths.
Args:
paths (dict[numpy.ndarray]): Sample paths.
Returns:
numpy.ndarray: Predicted value.
| predict | python | rlworkgroup/garage | src/garage/np/baselines/baseline.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/baselines/baseline.py | MIT |
def _features(self, path):
"""Extract features from path.
Args:
path (dict): A single sample path.
Returns:
numpy.ndarray: Extracted features.
"""
obs = np.clip(path['observations'], self.lower_bound, self.upper_bound)
length = len(path['observations'])
al = np.arange(length).reshape(-1, 1) / 100.0
return np.concatenate(
[obs, obs**2, al, al**2, al**3,
np.ones((length, 1))], axis=1) | Extract features from path.
Args:
path (dict): A single sample path.
Returns:
numpy.ndarray: Extracted features.
| _features | python | rlworkgroup/garage | src/garage/np/baselines/linear_feature_baseline.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/baselines/linear_feature_baseline.py | MIT |
def fit(self, paths):
"""Fit regressor based on paths.
Args:
paths (list[dict]): Sample paths.
"""
featmat = np.concatenate([self._features(path) for path in paths])
returns = np.concatenate([path['returns'] for path in paths])
reg_coeff = self._reg_coeff
for _ in range(5):
self._coeffs = np.linalg.lstsq(
featmat.T.dot(featmat) +
reg_coeff * np.identity(featmat.shape[1]),
featmat.T.dot(returns),
rcond=-1)[0]
if not np.any(np.isnan(self._coeffs)):
break
reg_coeff *= 10 | Fit regressor based on paths.
Args:
paths (list[dict]): Sample paths.
| fit | python | rlworkgroup/garage | src/garage/np/baselines/linear_feature_baseline.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/baselines/linear_feature_baseline.py | MIT |
def predict(self, paths):
"""Predict value based on paths.
Args:
paths (list[dict]): Sample paths.
Returns:
numpy.ndarray: Predicted value.
"""
if self._coeffs is None:
return np.zeros(len(paths['observations']))
return self._features(paths).dot(self._coeffs) | Predict value based on paths.
Args:
paths (list[dict]): Sample paths.
Returns:
numpy.ndarray: Predicted value.
| predict | python | rlworkgroup/garage | src/garage/np/baselines/linear_feature_baseline.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/baselines/linear_feature_baseline.py | MIT |
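Together, `_features`, `fit`, and `predict` above form a complete ridge-regression value baseline: time-aware polynomial features, a regularized least-squares fit against empirical returns, and a linear prediction. A minimal self-contained sketch of the same math (the toy data and helper names here are illustrative, not part of garage):

```python
import numpy as np

def features(obs):
    """Time-aware polynomial features, mirroring _features() above."""
    obs = np.clip(obs, -10, 10)
    t = np.arange(len(obs)).reshape(-1, 1) / 100.0
    return np.concatenate([obs, obs**2, t, t**2, t**3,
                           np.ones((len(obs), 1))], axis=1)

# Hypothetical toy path: 5 steps of 2-D observations and their returns.
toy_obs = np.random.randn(5, 2)
toy_returns = np.linspace(1.0, 0.2, 5)

X = features(toy_obs)
reg = 1e-5
# Ridge-regularized normal equations; fit() above retries with a 10x
# larger coefficient whenever the solve produces NaNs.
coeffs = np.linalg.lstsq(X.T @ X + reg * np.eye(X.shape[1]),
                         X.T @ toy_returns, rcond=-1)[0]
predicted_values = X @ coeffs  # analogue of predict()
```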
def _features(self, path):
"""Extract features from path.
Args:
path (dict): A single sample path.
Returns:
numpy.ndarray: Extracted features.
"""
features = [
np.clip(path[feature_name], -10, 10)
for feature_name in self._feature_names
]
n = len(path['observations'])
return np.concatenate(sum([[f, f**2]
for f in features], []) + [np.ones((n, 1))],
axis=1) | Extract features from path.
Args:
path (dict): A single sample path.
Returns:
numpy.ndarray: Extracted features.
| _features | python | rlworkgroup/garage | src/garage/np/baselines/linear_multi_feature_baseline.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/baselines/linear_multi_feature_baseline.py | MIT |
def reset(self, do_resets=None):
"""Reset the encoder.
This is effective only for a recurrent encoder. do_resets is effective
only for a vectorized encoder.
For a vectorized encoder, do_resets is an array of booleans indicating
which internal states should be reset. The length of do_resets should be
equal to the length of inputs.
Args:
do_resets (numpy.ndarray): Bool array indicating which states
to be reset.
""" | Reset the encoder.
This is effective only for a recurrent encoder. do_resets is effective
only for a vectorized encoder.
For a vectorized encoder, do_resets is an array of booleans indicating
which internal states should be reset. The length of do_resets should be
equal to the length of inputs.
Args:
do_resets (numpy.ndarray): Bool array indicating which states
to be reset.
| reset | python | rlworkgroup/garage | src/garage/np/embeddings/encoder.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/embeddings/encoder.py | MIT |
def get_action(self, observation):
"""Get action from this policy for the input observation.
Args:
observation(numpy.ndarray): Observation from the environment.
Returns:
np.ndarray: An action with noise.
dict: Arbitrary policy state information (agent_info).
"""
action, agent_info = self.policy.get_action(observation)
action = np.clip(
action + np.random.normal(size=action.shape) * self._sigma(),
self._action_space.low, self._action_space.high)
self._total_env_steps += 1
return action, agent_info | Get action from this policy for the input observation.
Args:
observation(numpy.ndarray): Observation from the environment.
Returns:
np.ndarray: An action with noise.
dict: Arbitrary policy state information (agent_info).
| get_action | python | rlworkgroup/garage | src/garage/np/exploration_policies/add_gaussian_noise.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/add_gaussian_noise.py | MIT |
def get_actions(self, observations):
"""Get actions from this policy for the input observation.
Args:
observations(list): Observations from the environment.
Returns:
np.ndarray: Actions with noise.
List[dict]: Arbitrary policy state information (agent_info).
"""
actions, agent_infos = self.policy.get_actions(observations)
for itr, _ in enumerate(actions):
actions[itr] = np.clip(
actions[itr] +
np.random.normal(size=actions[itr].shape) * self._sigma(),
self._action_space.low, self._action_space.high)
self._total_env_steps += 1
return actions, agent_infos | Get actions from this policy for the input observation.
Args:
observations(list): Observations from the environment.
Returns:
np.ndarray: Actions with noise.
List[dict]: Arbitrary policy state information (agent_info).
| get_actions | python | rlworkgroup/garage | src/garage/np/exploration_policies/add_gaussian_noise.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/add_gaussian_noise.py | MIT |
def _sigma(self):
"""Get the current sigma.
Returns:
double: Sigma.
"""
if self._total_env_steps >= self._decay_period:
return self._min_sigma
return self._max_sigma - self._decrement * self._total_env_steps | Get the current sigma.
Returns:
double: Sigma.
| _sigma | python | rlworkgroup/garage | src/garage/np/exploration_policies/add_gaussian_noise.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/add_gaussian_noise.py | MIT |
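`get_action`/`get_actions` and `_sigma` together add clipped Gaussian noise whose scale anneals linearly from `max_sigma` to `min_sigma` over `decay_period` environment steps. A minimal sketch of that schedule (all constants and action bounds below are made-up examples):

```python
import numpy as np

max_sigma, min_sigma, decay_period = 1.0, 0.1, 1000  # example schedule
decrement = (max_sigma - min_sigma) / decay_period

def sigma(total_env_steps):
    # Linear decay, clamped at min_sigma after decay_period steps.
    if total_env_steps >= decay_period:
        return min_sigma
    return max_sigma - decrement * total_env_steps

low, high = -1.0, 1.0  # hypothetical action bounds
action = np.array([0.5])
for step in (0, 500, 2000):
    noisy = np.clip(action + np.random.normal(size=action.shape) * sigma(step),
                    low, high)
    print(step, sigma(step), noisy)
```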
def update(self, episode_batch):
"""Update the exploration policy using a batch of trajectories.
Args:
episode_batch (EpisodeBatch): A batch of trajectories which
were sampled with this policy active.
"""
self._total_env_steps = (self._last_total_env_steps +
np.sum(episode_batch.lengths))
self._last_total_env_steps = self._total_env_steps
tabular.record('AddGaussianNoise/Sigma', self._sigma()) | Update the exploration policy using a batch of trajectories.
Args:
episode_batch (EpisodeBatch): A batch of trajectories which
were sampled with this policy active.
| update | python | rlworkgroup/garage | src/garage/np/exploration_policies/add_gaussian_noise.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/add_gaussian_noise.py | MIT |
def get_param_values(self):
"""Get parameter values.
Returns:
list or dict: Values of each parameter.
"""
return {
'total_env_steps': self._total_env_steps,
'inner_params': self.policy.get_param_values()
} | Get parameter values.
Returns:
list or dict: Values of each parameter.
| get_param_values | python | rlworkgroup/garage | src/garage/np/exploration_policies/add_gaussian_noise.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/add_gaussian_noise.py | MIT |
def set_param_values(self, params):
"""Set param values.
Args:
params (np.ndarray): A numpy array of parameter values.
"""
self._total_env_steps = params['total_env_steps']
self.policy.set_param_values(params['inner_params'])
self._last_total_env_steps = self._total_env_steps | Set param values.
Args:
params (np.ndarray): A numpy array of parameter values.
| set_param_values | python | rlworkgroup/garage | src/garage/np/exploration_policies/add_gaussian_noise.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/add_gaussian_noise.py | MIT |
def _simulate(self):
"""Advance the OU process.
Returns:
np.ndarray: Updated OU process state.
"""
x = self._state
dx = self._theta * (self._mu - x) * self._dt + self._sigma * np.sqrt(
self._dt) * np.random.normal(size=len(x))
self._state = x + dx
return self._state | Advance the OU process.
Returns:
np.ndarray: Updated OU process state.
| _simulate | python | rlworkgroup/garage | src/garage/np/exploration_policies/add_ornstein_uhlenbeck_noise.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/add_ornstein_uhlenbeck_noise.py | MIT |
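`_simulate` is one Euler-Maruyama step of the Ornstein-Uhlenbeck SDE dx = theta * (mu - x) * dt + sigma * sqrt(dt) * N(0, I), which produces temporally correlated exploration noise for continuous control. A standalone sketch with illustrative parameters:

```python
import numpy as np

theta, mu, sigma, dt = 0.15, 0.0, 0.3, 1e-2  # example OU parameters
x = np.zeros(2)  # 2-D action space, starting at the mean

trajectory = []
for _ in range(100):
    # Mean-reverting drift plus scaled Gaussian diffusion.
    dx = theta * (mu - x) * dt + sigma * np.sqrt(dt) * np.random.normal(size=len(x))
    x = x + dx
    trajectory.append(x.copy())
# Successive samples are correlated, unlike i.i.d. Gaussian noise.
```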
def get_action(self, observation):
"""Return an action with noise.
Args:
observation (np.ndarray): Observation from the environment.
Returns:
np.ndarray: An action with noise.
dict: Arbitrary policy state information (agent_info).
"""
action, agent_infos = self.policy.get_action(observation)
ou_state = self._simulate()
return np.clip(action + ou_state, self._action_space.low,
self._action_space.high), agent_infos | Return an action with noise.
Args:
observation (np.ndarray): Observation from the environment.
Returns:
np.ndarray: An action with noise.
dict: Arbitrary policy state information (agent_info).
| get_action | python | rlworkgroup/garage | src/garage/np/exploration_policies/add_ornstein_uhlenbeck_noise.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/add_ornstein_uhlenbeck_noise.py | MIT |
def get_actions(self, observations):
"""Return actions with noise.
Args:
observations (np.ndarray): Observation from the environment.
Returns:
np.ndarray: Actions with noise.
List[dict]: Arbitrary policy state information (agent_info).
"""
actions, agent_infos = self.policy.get_actions(observations)
ou_state = self._simulate()
return np.clip(actions + ou_state, self._action_space.low,
self._action_space.high), agent_infos | Return actions with noise.
Args:
observations (np.ndarray): Observation from the environment.
Returns:
np.ndarray: Actions with noise.
List[dict]: Arbitrary policy state information (agent_info).
| get_actions | python | rlworkgroup/garage | src/garage/np/exploration_policies/add_ornstein_uhlenbeck_noise.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/add_ornstein_uhlenbeck_noise.py | MIT |
def get_action(self, observation):
"""Get action from this policy for the input observation.
Args:
observation (numpy.ndarray): Observation from the environment.
Returns:
np.ndarray: An action with noise.
dict: Arbitrary policy state information (agent_info).
"""
opt_action, _ = self.policy.get_action(observation)
if np.random.random() < self._epsilon():
opt_action = self._action_space.sample()
self._total_env_steps += 1
return opt_action, dict() | Get action from this policy for the input observation.
Args:
observation (numpy.ndarray): Observation from the environment.
Returns:
np.ndarray: An action with noise.
dict: Arbitrary policy state information (agent_info).
| get_action | python | rlworkgroup/garage | src/garage/np/exploration_policies/epsilon_greedy_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/epsilon_greedy_policy.py | MIT |
def get_actions(self, observations):
"""Get actions from this policy for the input observations.
Args:
observations (numpy.ndarray): Observation from the environment.
Returns:
np.ndarray: Actions with noise.
List[dict]: Arbitrary policy state information (agent_info).
"""
opt_actions, _ = self.policy.get_actions(observations)
for itr, _ in enumerate(opt_actions):
if np.random.random() < self._epsilon():
opt_actions[itr] = self._action_space.sample()
self._total_env_steps += 1
return opt_actions, dict() | Get actions from this policy for the input observations.
Args:
observations (numpy.ndarray): Observation from the environment.
Returns:
np.ndarray: Actions with noise.
List[dict]: Arbitrary policy state information (agent_info).
| get_actions | python | rlworkgroup/garage | src/garage/np/exploration_policies/epsilon_greedy_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/epsilon_greedy_policy.py | MIT |
def _epsilon(self):
"""Get the current epsilon.
Returns:
double: Epsilon.
"""
if self._total_env_steps >= self._decay_period:
return self._min_epsilon
return self._max_epsilon - self._decrement * self._total_env_steps | Get the current epsilon.
Returns:
double: Epsilon.
| _epsilon | python | rlworkgroup/garage | src/garage/np/exploration_policies/epsilon_greedy_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/epsilon_greedy_policy.py | MIT |
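`get_action` and `_epsilon` implement the classic epsilon-greedy rule: act uniformly at random with probability epsilon, greedily otherwise, with epsilon annealed linearly. A compact sketch (the schedule constants and the `q_values` stand-in for a greedy policy are assumptions):

```python
import numpy as np

max_eps, min_eps, decay_period = 1.0, 0.02, 10000  # example schedule
decrement = (max_eps - min_eps) / decay_period

def epsilon(step):
    return min_eps if step >= decay_period else max_eps - decrement * step

def epsilon_greedy(q_values, step):
    """q_values: per-action value estimates from some greedy policy."""
    if np.random.random() < epsilon(step):
        return np.random.randint(len(q_values))  # explore
    return int(np.argmax(q_values))              # exploit
```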
def update(self, episode_batch):
"""Update the exploration policy using a batch of trajectories.
Args:
episode_batch (EpisodeBatch): A batch of trajectories which
were sampled with this policy active.
"""
self._total_env_steps = (self._last_total_env_steps +
np.sum(episode_batch.lengths))
self._last_total_env_steps = self._total_env_steps
tabular.record('EpsilonGreedyPolicy/Epsilon', self._epsilon()) | Update the exploration policy using a batch of trajectories.
Args:
episode_batch (EpisodeBatch): A batch of trajectories which
were sampled with this policy active.
| update | python | rlworkgroup/garage | src/garage/np/exploration_policies/epsilon_greedy_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/epsilon_greedy_policy.py | MIT |
def get_param_values(self):
"""Get parameter values.
Returns:
list or dict: Values of each parameter.
"""
return {
'total_env_steps': self._total_env_steps,
'inner_params': self.policy.get_param_values()
} | Get parameter values.
Returns:
list or dict: Values of each parameter.
| get_param_values | python | rlworkgroup/garage | src/garage/np/exploration_policies/epsilon_greedy_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/epsilon_greedy_policy.py | MIT |
def set_param_values(self, params):
"""Set param values.
Args:
params (np.ndarray): A numpy array of parameter values.
"""
self._total_env_steps = params['total_env_steps']
self.policy.set_param_values(params['inner_params'])
self._last_total_env_steps = self._total_env_steps | Set param values.
Args:
params (np.ndarray): A numpy array of parameter values.
| set_param_values | python | rlworkgroup/garage | src/garage/np/exploration_policies/epsilon_greedy_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/epsilon_greedy_policy.py | MIT |
def get_action(self, observation):
"""Return an action with noise.
Args:
observation (np.ndarray): Observation from the environment.
Returns:
np.ndarray: An action with noise.
dict: Arbitrary policy state information (agent_info).
""" | Return an action with noise.
Args:
observation (np.ndarray): Observation from the environment.
Returns:
np.ndarray: An action with noise.
dict: Arbitrary policy state information (agent_info).
| get_action | python | rlworkgroup/garage | src/garage/np/exploration_policies/exploration_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/exploration_policy.py | MIT |
def get_actions(self, observations):
"""Return actions with noise.
Args:
observations (np.ndarray): Observation from the environment.
Returns:
np.ndarray: Actions with noise.
List[dict]: Arbitrary policy state information (agent_info).
""" | Return actions with noise.
Args:
observations (np.ndarray): Observation from the environment.
Returns:
np.ndarray: Actions with noise.
List[dict]: Arbitrary policy state information (agent_info).
| get_actions | python | rlworkgroup/garage | src/garage/np/exploration_policies/exploration_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/exploration_policy.py | MIT |
def update(self, episode_batch):
"""Update the exploration policy using a batch of trajectories.
Args:
episode_batch (EpisodeBatch): A batch of trajectories which
were sampled with this policy active.
""" | Update the exploration policy using a batch of trajectories.
Args:
episode_batch (EpisodeBatch): A batch of trajectories which
were sampled with this policy active.
| update | python | rlworkgroup/garage | src/garage/np/exploration_policies/exploration_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/exploration_policy.py | MIT |
def reset(self, do_resets=None):
"""Reset policy.
Args:
do_resets (None or list[bool]): Vectorized policy states to reset.
Raises:
ValueError: If do_resets has length greater than 1.
"""
if do_resets is None:
do_resets = [True]
if len(do_resets) > 1:
raise ValueError('FixedPolicy does not support more than one '
'action at a time.')
self._indices[0] = 0 | Reset policy.
Args:
do_resets (None or list[bool]): Vectorized policy states to reset.
Raises:
ValueError: If do_resets has length greater than 1.
| reset | python | rlworkgroup/garage | src/garage/np/policies/fixed_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/policies/fixed_policy.py | MIT |
def get_action(self, observation):
"""Get next action.
Args:
observation (np.ndarray): Ignored.
Raises:
ValueError: If policy is currently vectorized (reset was called
with more than one done value).
Returns:
tuple[np.ndarray, dict[str, np.ndarray]]: The action and agent_info
for this time step.
"""
del observation
action = self._scripted_actions[self._indices[0]]
agent_info = self._agent_infos[self._indices[0]]
self._indices[0] += 1
return action, agent_info | Get next action.
Args:
observation (np.ndarray): Ignored.
Raises:
ValueError: If policy is currently vectorized (reset was called
with more than one done value).
Returns:
tuple[np.ndarray, dict[str, np.ndarray]]: The action and agent_info
for this time step.
| get_action | python | rlworkgroup/garage | src/garage/np/policies/fixed_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/policies/fixed_policy.py | MIT |
def get_actions(self, observations):
"""Get next action.
Args:
observations (np.ndarray): Ignored.
Raises:
ValueError: If observations has length other than 1.
Returns:
tuple[np.ndarray, dict[str, np.ndarray]]: The action and agent_info
for this time step.
"""
if len(observations) != 1:
raise ValueError('FixedPolicy does not support more than one '
'observation at a time.')
action, agent_info = self.get_action(observations[0])
return np.array(
[action]), {k: np.array([v])
for (k, v) in agent_info.items()} | Get next action.
Args:
observations (np.ndarray): Ignored.
Raises:
ValueError: If observations has length other than 1.
Returns:
tuple[np.ndarray, dict[str, np.ndarray]]: The action and agent_info
for this time step.
| get_actions | python | rlworkgroup/garage | src/garage/np/policies/fixed_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/policies/fixed_policy.py | MIT |
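`FixedPolicy` replays a pre-recorded action sequence while ignoring observations, which is useful for deterministic tests. A cut-down illustrative version of the same idea (this `ReplayActions` class is hypothetical, not garage's):

```python
import numpy as np

class ReplayActions:
    """Replays a fixed list of actions, ignoring observations."""

    def __init__(self, scripted_actions):
        self._actions = scripted_actions
        self._i = 0

    def reset(self):
        self._i = 0

    def get_action(self, observation):
        del observation  # intentionally unused
        action = self._actions[self._i]
        self._i += 1
        return action, {}

policy = ReplayActions([np.array([0.0]), np.array([1.0])])
policy.reset()
print(policy.get_action(None))  # (array([0.]), {})
```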
def get_action(self, observation):
"""Get action sampled from the policy.
Args:
observation (np.ndarray): Observation from the environment.
Returns:
Tuple[np.ndarray, dict[str,np.ndarray]]: Actions and extra agent
infos.
""" | Get action sampled from the policy.
Args:
observation (np.ndarray): Observation from the environment.
Returns:
Tuple[np.ndarray, dict[str,np.ndarray]]: Actions and extra agent
infos.
| get_action | python | rlworkgroup/garage | src/garage/np/policies/policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/policies/policy.py | MIT |
def get_actions(self, observations):
"""Get actions given observations.
Args:
observations (torch.Tensor): Observations from the environment.
Returns:
Tuple[np.ndarray, dict[str,np.ndarray]]: Actions and extra agent
infos.
""" | Get actions given observations.
Args:
observations (torch.Tensor): Observations from the environment.
Returns:
Tuple[np.ndarray, dict[str,np.ndarray]]: Actions and extra agent
infos.
| get_actions | python | rlworkgroup/garage | src/garage/np/policies/policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/policies/policy.py | MIT |
def reset(self, do_resets=None):
"""Reset the policy.
This is effective only for recurrent policies.
do_resets is an array of booleans indicating
which internal states should be reset. The length of do_resets should be
equal to the length of inputs, i.e. batch size.
Args:
do_resets (numpy.ndarray): Bool array indicating which states
to be reset.
""" | Reset the policy.
This is effective only for recurrent policies.
do_resets is an array of booleans indicating
which internal states should be reset. The length of do_resets should be
equal to the length of inputs, i.e. batch size.
Args:
do_resets (numpy.ndarray): Bool array indicating which states
to be reset.
| reset | python | rlworkgroup/garage | src/garage/np/policies/policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/policies/policy.py | MIT |
def name(self):
"""Name of policy.
Returns:
str: Name of policy
""" | Name of policy.
Returns:
str: Name of policy
| name | python | rlworkgroup/garage | src/garage/np/policies/policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/policies/policy.py | MIT |
def env_spec(self):
"""Policy environment specification.
Returns:
garage.EnvSpec: Environment specification.
""" | Policy environment specification.
Returns:
garage.EnvSpec: Environment specification.
| env_spec | python | rlworkgroup/garage | src/garage/np/policies/policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/policies/policy.py | MIT |
def set_param_values(self, params):
"""Set param values.
Args:
params (np.ndarray): A numpy array of parameter values.
""" | Set param values.
Args:
params (np.ndarray): A numpy array of parameter values.
| set_param_values | python | rlworkgroup/garage | src/garage/np/policies/scripted_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/policies/scripted_policy.py | MIT |
def get_action(self, observation):
"""Return a single action.
Args:
observation (numpy.ndarray): Observations.
Returns:
int: Action given input observation.
dict[dict]: Agent infos indexed by observation.
"""
if self._agent_env_infos:
a_info = self._agent_env_infos[observation]
else:
a_info = dict()
return self._scripted_actions[observation], a_info | Return a single action.
Args:
observation (numpy.ndarray): Observations.
Returns:
int: Action given input observation.
dict[dict]: Agent infos indexed by observation.
| get_action | python | rlworkgroup/garage | src/garage/np/policies/scripted_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/policies/scripted_policy.py | MIT |
def get_actions(self, observations):
"""Return multiple actions.
Args:
observations (numpy.ndarray): Observations.
Returns:
list[int]: Actions given input observations.
dict[dict]: Agent info indexed by observation.
"""
if self._agent_env_infos:
a_info = self._agent_env_infos[observations[0]]
else:
a_info = dict()
return [self._scripted_actions[obs] for obs in observations], a_info | Return multiple actions.
Args:
observations (numpy.ndarray): Observations.
Returns:
list[int]: Actions given input observations.
dict[dict]: Agent info indexed by observation.
| get_actions | python | rlworkgroup/garage | src/garage/np/policies/scripted_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/policies/scripted_policy.py | MIT |
def get_actions(self, observations):
"""Get actions from this policy for the input observation.
Args:
observations(list): Observations from the environment.
Returns:
list: Uniformly sampled random actions.
dict: Empty agent info (agent_info).
"""
return [self._env_spec.action_space.sample()
for obs in observations], dict() | Get actions from this policy for the input observation.
Args:
observations(list): Observations from the environment.
Returns:
list: Uniformly sampled random actions.
dict: Empty agent info (agent_info).
| get_actions | python | rlworkgroup/garage | src/garage/np/policies/uniform_random_policy.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/np/policies/uniform_random_policy.py | MIT |
def init_plot(self, env, policy):
"""Initialize the plotter.
Args:
env (GymEnv): Environment to visualize.
policy (Policy): Policy to roll out in the
visualization.
"""
if not Plotter.enable:
return
if not (self._process and self._queue):
self._init_worker()
# Needed in order to draw glfw window on the main thread
if 'Darwin' in platform.platform():
rollout(env, policy, max_episode_length=np.inf, animated=True)
self._queue.put(Message(op=Op.UPDATE, args=(env, policy), kwargs=None)) | Initialize the plotter.
Args:
env (GymEnv): Environment to visualize.
policy (Policy): Policy to roll out in the
visualization.
| init_plot | python | rlworkgroup/garage | src/garage/plotter/plotter.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/plotter/plotter.py | MIT |
def update_plot(self, policy, max_length=np.inf):
"""Update the plotter.
Args:
policy (garage.np.policies.Policy): New policy to roll out in the
visualization.
max_length (int): Maximum number of steps to roll out.
"""
if not Plotter.enable:
return
self._queue.put(
Message(op=Op.DEMO,
args=(policy.get_param_values(), max_length),
kwargs=None)) | Update the plotter.
Args:
policy (garage.np.policies.Policy): New policy to roll out in the
visualization.
max_length (int): Maximum number of steps to roll out.
| update_plot | python | rlworkgroup/garage | src/garage/plotter/plotter.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/plotter/plotter.py | MIT |
def _sample_her_goals(self, path, transition_idx):
"""Samples HER goals from the given path.
Goals are randomly sampled starting from the index after
transition_idx in the given path.
Args:
path (dict[str, np.ndarray]): A dict containing the transition
keys, where each key contains an ndarray of shape
:math:`(T, S^*)`.
transition_idx (int): Index of the current transition. Only
transitions after the current transition will be randomly
sampled for HER goals.
Returns:
list[np.ndarray]: A list of replay_k HER goals, each with shape
(goal_dim,).
"""
goal_indexes = np.random.randint(transition_idx + 1,
len(path['observations']),
size=self._replay_k)
return [
goal['achieved_goal']
for goal in np.asarray(path['observations'])[goal_indexes]
] | Samples HER goals from the given path.
Goals are randomly sampled starting from the index after
transition_idx in the given path.
Args:
path (dict[str, np.ndarray]): A dict containing the transition
keys, where each key contains an ndarray of shape
:math:`(T, S^*)`.
transition_idx (int): Index of the current transition. Only
transitions after the current transition will be randomly
sampled for HER goals.
Returns:
list[np.ndarray]: A list of replay_k HER goals, each with shape
(goal_dim,).
| _sample_her_goals | python | rlworkgroup/garage | src/garage/replay_buffer/her_replay_buffer.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/her_replay_buffer.py | MIT |
def add_path(self, path):
"""Adds a path to the replay buffer.
For each transition in the given path except the last one,
replay_k HER transitions will be added to the buffer in addition
to the one in the path. The last transition is added without
sampling additional HER goals.
Args:
path(dict[str, np.ndarray]): Each key in the dict must map
to a np.ndarray of shape :math:`(T, S^*)`.
"""
obs_space = self._env_spec.observation_space
if not isinstance(path['observations'][0], dict):
# unflatten dicts if they've been flattened
path['observations'] = obs_space.unflatten_n(path['observations'])
path['next_observations'] = (obs_space.unflatten_n(
path['next_observations']))
# create HER transitions and add them to the buffer
for idx in range(path['actions'].shape[0] - 1):
transition = {key: sample[idx] for key, sample in path.items()}
her_goals = self._sample_her_goals(path, idx)
# create replay_k transitions using the HER goals
for goal in her_goals:
t_new = copy.deepcopy(transition)
a_g = t_new['next_observations']['achieved_goal']
t_new['rewards'] = np.array(self._reward_fn(a_g, goal, None))
t_new['observations']['desired_goal'] = goal
t_new['next_observations']['desired_goal'] = copy.deepcopy(
goal)
t_new['terminals'] = np.array(False)
# flatten the observation dicts now that we're done with them
self._flatten_dicts(t_new)
for key in t_new.keys():
t_new[key] = t_new[key].reshape(1, -1)
# Since we're using a PathBuffer, add each new transition
# as its own path.
super().add_path(t_new)
self._flatten_dicts(path)
super().add_path(path) | Adds a path to the replay buffer.
For each transition in the given path except the last one,
replay_k HER transitions will be added to the buffer in addition
to the one in the path. The last transition is added without
sampling additional HER goals.
Args:
path(dict[str, np.ndarray]): Each key in the dict must map
to a np.ndarray of shape :math:`(T, S^*)`.
| add_path | python | rlworkgroup/garage | src/garage/replay_buffer/her_replay_buffer.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/her_replay_buffer.py | MIT |
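`_sample_her_goals` and `add_path` implement the "future" strategy of Hindsight Experience Replay: each stored transition is duplicated `replay_k` times with its desired goal replaced by an achieved goal drawn from later in the same path, and its reward recomputed. A toy sketch of the relabeling step (the reward function and path contents below are made-up):

```python
import copy
import numpy as np

replay_k = 2

def reward_fn(achieved, desired):  # hypothetical sparse goal reward
    return float(np.allclose(achieved, desired, atol=0.1)) - 1.0

# Toy path: each observation is a dict with achieved/desired goals.
path_obs = [{'achieved_goal': np.array([t / 4.0]),
             'desired_goal': np.array([1.0])} for t in range(5)]

relabeled = []
for idx in range(len(path_obs) - 1):
    # Sample replay_k goals from strictly later steps of the same path.
    future = np.random.randint(idx + 1, len(path_obs), size=replay_k)
    for j in future:
        new_goal = path_obs[j]['achieved_goal']
        t_new = copy.deepcopy(path_obs[idx])
        t_new['desired_goal'] = new_goal.copy()
        # Recompute the reward against the relabeled goal.
        reward = reward_fn(path_obs[idx + 1]['achieved_goal'], new_goal)
        relabeled.append((t_new, reward))
```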
def add_episode_batch(self, episodes):
"""Add a EpisodeBatch to the buffer.
Args:
episodes (EpisodeBatch): Episodes to add.
"""
if self._env_spec is None:
self._env_spec = episodes.env_spec
env_spec = episodes.env_spec
obs_space = env_spec.observation_space
for eps in episodes.split():
terminals = np.array([
step_type == StepType.TERMINAL for step_type in eps.step_types
],
dtype=bool)
path = {
'observations': obs_space.flatten_n(eps.observations),
'next_observations':
obs_space.flatten_n(eps.next_observations),
'actions': env_spec.action_space.flatten_n(eps.actions),
'rewards': eps.rewards.reshape(-1, 1),
'terminals': terminals.reshape(-1, 1),
}
self.add_path(path) | Add an EpisodeBatch to the buffer.
Args:
episodes (EpisodeBatch): Episodes to add.
| add_episode_batch | python | rlworkgroup/garage | src/garage/replay_buffer/path_buffer.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/path_buffer.py | MIT |
def add_path(self, path):
"""Add a path to the buffer.
Args:
path (dict): A dict of array of shape (path_len, flat_dim).
Raises:
ValueError: If a key is missing from path or path has wrong shape.
"""
for key, buf_arr in self._buffer.items():
path_array = path.get(key, None)
if path_array is None:
raise ValueError('Key {} missing from path.'.format(key))
if (len(path_array.shape) != 2
or path_array.shape[1] != buf_arr.shape[1]):
raise ValueError('Array {} has wrong shape.'.format(key))
path_len = self._get_path_length(path)
first_seg, second_seg = self._next_path_segments(path_len)
# Remove paths which will overlap with this one.
while (self._path_segments and self._segments_overlap(
first_seg, self._path_segments[0][0])):
self._path_segments.popleft()
while (self._path_segments and self._segments_overlap(
second_seg, self._path_segments[0][0])):
self._path_segments.popleft()
self._path_segments.append((first_seg, second_seg))
for key, array in path.items():
buf_arr = self._get_or_allocate_key(key, array)
# numpy doesn't special case range indexing, so it's very slow.
# Slice manually instead, which is faster than any other method.
buf_arr[first_seg.start:first_seg.stop] = array[:len(first_seg)]
buf_arr[second_seg.start:second_seg.stop] = array[len(first_seg):]
if second_seg.stop != 0:
self._first_idx_of_next_path = second_seg.stop
else:
self._first_idx_of_next_path = first_seg.stop
self._transitions_stored = min(self._capacity,
self._transitions_stored + path_len) | Add a path to the buffer.
Args:
path (dict): A dict of array of shape (path_len, flat_dim).
Raises:
ValueError: If a key is missing from path or path has wrong shape.
| add_path | python | rlworkgroup/garage | src/garage/replay_buffer/path_buffer.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/path_buffer.py | MIT |
def sample_path(self):
"""Sample a single path from the buffer.
Returns:
path: A dict of arrays of shape (path_len, flat_dim).
"""
path_idx = np.random.randint(len(self._path_segments))
first_seg, second_seg = self._path_segments[path_idx]
first_seg_indices = np.arange(first_seg.start, first_seg.stop)
second_seg_indices = np.arange(second_seg.start, second_seg.stop)
indices = np.concatenate([first_seg_indices, second_seg_indices])
path = {key: buf_arr[indices] for key, buf_arr in self._buffer.items()}
return path | Sample a single path from the buffer.
Returns:
path: A dict of arrays of shape (path_len, flat_dim).
| sample_path | python | rlworkgroup/garage | src/garage/replay_buffer/path_buffer.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/path_buffer.py | MIT |
def sample_transitions(self, batch_size):
"""Sample a batch of transitions from the buffer.
Args:
batch_size (int): Number of transitions to sample.
Returns:
dict: A dict of arrays of shape (batch_size, flat_dim).
"""
idx = np.random.randint(self._transitions_stored, size=batch_size)
return {key: buf_arr[idx] for key, buf_arr in self._buffer.items()} | Sample a batch of transitions from the buffer.
Args:
batch_size (int): Number of transitions to sample.
Returns:
dict: A dict of arrays of shape (batch_size, flat_dim).
| sample_transitions | python | rlworkgroup/garage | src/garage/replay_buffer/path_buffer.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/path_buffer.py | MIT |
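`sample_transitions` draws one shared set of uniform indices and applies it to every array in the buffer, so the sampled observations, actions, and rewards stay aligned. For example:

```python
import numpy as np

buffer = {'observations': np.arange(10).reshape(10, 1),
          'rewards': np.arange(10).reshape(10, 1) * 0.1}
idx = np.random.randint(10, size=4)             # one index set...
batch = {k: v[idx] for k, v in buffer.items()}  # ...applied to all keys
assert (batch['rewards'] == batch['observations'] * 0.1).all()
```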
def sample_timesteps(self, batch_size):
"""Sample a batch of timesteps from the buffer.
Args:
batch_size (int): Number of timesteps to sample.
Returns:
TimeStepBatch: The batch of timesteps.
"""
samples = self.sample_transitions(batch_size)
step_types = np.array([
StepType.TERMINAL if terminal else StepType.MID
for terminal in samples['terminals'].reshape(-1)
],
dtype=StepType)
return TimeStepBatch(env_spec=self._env_spec,
episode_infos={},
observations=samples['observations'],
actions=samples['actions'],
rewards=samples['rewards'].flatten(),
next_observations=samples['next_observations'],
step_types=step_types,
env_infos={},
agent_infos={}) | Sample a batch of timesteps from the buffer.
Args:
batch_size (int): Number of timesteps to sample.
Returns:
TimeStepBatch: The batch of timesteps.
| sample_timesteps | python | rlworkgroup/garage | src/garage/replay_buffer/path_buffer.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/path_buffer.py | MIT |
def _next_path_segments(self, n_indices):
"""Compute where the next path should be stored.
Args:
n_indices (int): Path length.
Returns:
tuple: Lists of indices where path should be stored.
Raises:
ValueError: If path length is greater than the size of buffer.
"""
if n_indices > self._capacity:
raise ValueError('Path is too long to store in buffer.')
start = self._first_idx_of_next_path
end = start + n_indices
if end > self._capacity:
second_end = end - self._capacity
return (range(start, self._capacity), range(0, second_end))
else:
return (range(start, end), range(0, 0)) | Compute where the next path should be stored.
Args:
n_indices (int): Path length.
Returns:
tuple: Lists of indices where path should be stored.
Raises:
ValueError: If path length is greater than the size of buffer.
| _next_path_segments | python | rlworkgroup/garage | src/garage/replay_buffer/path_buffer.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/path_buffer.py | MIT |
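`_next_path_segments` treats the buffer as a ring: a path that runs past `capacity` is split into a tail segment and a wrapped head segment. A quick illustration of the arithmetic (the capacity and start positions are example values):

```python
capacity = 10

def next_segments(start, n):
    end = start + n
    if end > capacity:
        return range(start, capacity), range(0, end - capacity)
    return range(start, end), range(0, 0)

print(next_segments(3, 5))  # (range(3, 8), range(0, 0)): fits entirely
print(next_segments(8, 5))  # (range(8, 10), range(0, 3)): wraps around
```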
def _get_or_allocate_key(self, key, array):
"""Get or allocate key in the buffer.
Args:
key (str): Key in buffer.
array (numpy.ndarray): Array corresponding to key.
Returns:
numpy.ndarray: A NumPy array corresponding to key in the buffer.
"""
buf_arr = self._buffer.get(key, None)
if buf_arr is None:
buf_arr = np.zeros((self._capacity, array.shape[1]), array.dtype)
self._buffer[key] = buf_arr
return buf_arr | Get or allocate key in the buffer.
Args:
key (str): Key in buffer.
array (numpy.ndarray): Array corresponding to key.
Returns:
numpy.ndarray: A NumPy array corresponding to key in the buffer.
| _get_or_allocate_key | python | rlworkgroup/garage | src/garage/replay_buffer/path_buffer.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/path_buffer.py | MIT |
def _get_path_length(path):
"""Get path length.
Args:
path (dict): Path.
Returns:
length: Path length.
Raises:
ValueError: If path is empty or has inconsistent lengths.
"""
length_key = None
length = None
for key, value in path.items():
if length is None:
length = len(value)
length_key = key
elif len(value) != length:
raise ValueError('path has inconsistent lengths between '
'{!r} and {!r}.'.format(length_key, key))
if not length:
raise ValueError('Nothing in path')
return length | Get path length.
Args:
path (dict): Path.
Returns:
length: Path length.
Raises:
ValueError: If path is empty or has inconsistent lengths.
| _get_path_length | python | rlworkgroup/garage | src/garage/replay_buffer/path_buffer.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/path_buffer.py | MIT |
def _segments_overlap(seg_a, seg_b):
"""Compute if two segments overlap.
Args:
seg_a (range): List of indices of the first segment.
seg_b (range): List of indices of the second segment.
Returns:
bool: True iff the input ranges overlap at at least one index.
"""
# Empty segments never overlap.
if not seg_a or not seg_b:
return False
first = seg_a
second = seg_b
if seg_b.start < seg_a.start:
first, second = seg_b, seg_a
assert first.start <= second.start
return first.stop > second.start | Compute if two segments overlap.
Args:
seg_a (range): List of indices of the first segment.
seg_b (range): List of indices of the second segment.
Returns:
bool: True iff the input ranges overlap at at least one index.
| _segments_overlap | python | rlworkgroup/garage | src/garage/replay_buffer/path_buffer.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/path_buffer.py | MIT |
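`_segments_overlap` orders the two ranges by start and reports overlap iff the earlier one ends after the later one begins; empty ranges never overlap. For instance:

```python
def segments_overlap(seg_a, seg_b):
    if not seg_a or not seg_b:  # empty ranges are falsy
        return False
    first, second = (seg_a, seg_b) if seg_a.start <= seg_b.start else (seg_b, seg_a)
    return first.stop > second.start

assert segments_overlap(range(0, 5), range(4, 8))      # share index 4
assert not segments_overlap(range(0, 5), range(5, 8))  # touch, no overlap
assert not segments_overlap(range(0, 0), range(0, 8))  # empty never overlaps
```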
def store_episode(self):
"""Add an episode to the buffer."""
episode_buffer = self._convert_episode_to_batch_major()
episode_batch_size = len(episode_buffer['observation'])
idx = self._get_storage_idx(episode_batch_size)
for key in self._buffer:
self._buffer[key][idx] = episode_buffer[key]
self._n_transitions_stored = min(
self._size_in_transitions, self._n_transitions_stored +
self._time_horizon * episode_batch_size) | Add an episode to the buffer. | store_episode | python | rlworkgroup/garage | src/garage/replay_buffer/replay_buffer.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/replay_buffer.py | MIT |
def add_transitions(self, **kwargs):
"""Add multiple transitions into the replay buffer.
A transition contains one or multiple entries, e.g.
observation, action, reward, terminal and next_observation.
The same entry of all the transitions are stacked, e.g.
{'observation': [obs1, obs2, obs3]} where obs1 is one
numpy.ndarray observation from the environment.
Args:
kwargs (dict(str, [numpy.ndarray])): Dictionary that holds
the transitions.
"""
if not self._initialized_buffer:
self._initialize_buffer(**kwargs)
for key, value in kwargs.items():
self._episode_buffer[key].append(value)
if len(self._episode_buffer['observation']) == self._time_horizon:
self.store_episode()
for key in self._episode_buffer:
self._episode_buffer[key].clear() | Add multiple transitions into the replay buffer.
A transition contains one or multiple entries, e.g.
observation, action, reward, terminal and next_observation.
The same entry of all the transitions are stacked, e.g.
{'observation': [obs1, obs2, obs3]} where obs1 is one
numpy.ndarray observation from the environment.
Args:
kwargs (dict(str, [numpy.ndarray])): Dictionary that holds
the transitions.
| add_transitions | python | rlworkgroup/garage | src/garage/replay_buffer/replay_buffer.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/replay_buffer.py | MIT |
def _get_storage_idx(self, size_increment=1):
"""Get the storage index for the episode to add into the buffer.
Args:
size_increment (int): The number of storage indices that new
transitions will be placed in.
Returns:
numpy.ndarray: The indices at which to store size_increment transitions.
"""
if self._current_size + size_increment <= self._size:
idx = np.arange(self._current_size,
self._current_size + size_increment)
elif self._current_size < self._size:
overflow = size_increment - (self._size - self._current_size)
idx_a = np.arange(self._current_size, self._size)
idx_b = np.arange(0, overflow)
idx = np.concatenate([idx_a, idx_b])
self._current_ptr = overflow
else:
if self._current_ptr + size_increment <= self._size:
idx = np.arange(self._current_ptr,
self._current_ptr + size_increment)
self._current_ptr += size_increment
else:
overflow = size_increment - (self._size - self._current_ptr)
idx_a = np.arange(self._current_ptr, self._size)
idx_b = np.arange(0, overflow)
idx = np.concatenate([idx_a, idx_b])
self._current_ptr = overflow
# Update replay size
self._current_size = min(self._size,
self._current_size + size_increment)
if size_increment == 1:
idx = idx[0]
return idx | Get the storage index for the episode to add into the buffer.
Args:
size_increment (int): The number of storage indices that new
transitions will be placed in.
Returns:
numpy.ndarray: The indices at which to store size_increment transitions.
| _get_storage_idx | python | rlworkgroup/garage | src/garage/replay_buffer/replay_buffer.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/replay_buffer.py | MIT |
def _convert_episode_to_batch_major(self):
"""Convert the shape of episode_buffer.
episode_buffer: {time_horizon, algo.episode_batch_size, flat_dim}.
buffer: {size, time_horizon, flat_dim}.
Returns:
dict: Transitions that have been formatted to fit properly in this
replay buffer.
"""
transitions = {}
for key in self._episode_buffer:
val = np.array(self._episode_buffer[key])
transitions[key] = val.swapaxes(0, 1)
return transitions | Convert the shape of episode_buffer.
episode_buffer: {time_horizon, algo.episode_batch_size, flat_dim}.
buffer: {size, time_horizon, flat_dim}.
Returns:
dict: Transitions that have been formatted to fit properly in this
replay buffer.
| _convert_episode_to_batch_major | python | rlworkgroup/garage | src/garage/replay_buffer/replay_buffer.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/replay_buffer.py | MIT |
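`_convert_episode_to_batch_major` is just an axis swap: the episode buffer stacks time first, while the replay buffer stores episodes first. Concretely:

```python
import numpy as np

time_horizon, batch_size, flat_dim = 4, 3, 2
episode_buffer = np.zeros((time_horizon, batch_size, flat_dim))
batch_major = episode_buffer.swapaxes(0, 1)
print(batch_major.shape)  # (3, 4, 2): (batch, time, flat_dim)
```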
def update_agent(self, agent_update):
"""Update an agent, assuming it implements :class:`~Policy`.
Args:
agent_update (np.ndarray or dict or Policy): If a tuple, dict, or
np.ndarray, these should be parameters to agent, which should
have been generated by calling `Policy.get_param_values`.
Alternatively, a policy itself. Note that other implementations
of `Worker` may take different types for this parameter.
"""
if isinstance(agent_update, (dict, tuple, np.ndarray)):
self.agent.set_param_values(agent_update)
elif agent_update is not None:
self.agent = agent_update | Update an agent, assuming it implements :class:`~Policy`.
Args:
agent_update (np.ndarray or dict or Policy): If a tuple, dict, or
np.ndarray, these should be parameters to agent, which should
have been generated by calling `Policy.get_param_values`.
Alternatively, a policy itself. Note that other implementations
of `Worker` may take different types for this parameter.
| update_agent | python | rlworkgroup/garage | src/garage/sampler/default_worker.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/default_worker.py | MIT |
def step_episode(self):
"""Take a single time-step in the current episode.
Returns:
bool: True iff the episode is done, either due to the environment
indicating termination or due to reaching `max_episode_length`.
"""
if self._eps_length < self._max_episode_length:
a, agent_info = self.agent.get_action(self._prev_obs)
es = self.env.step(a)
self._observations.append(self._prev_obs)
self._env_steps.append(es)
for k, v in agent_info.items():
self._agent_infos[k].append(v)
self._eps_length += 1
if not es.terminal:
self._prev_obs = es.observation
return False
self._lengths.append(self._eps_length)
self._last_observations.append(self._prev_obs)
return True | Take a single time-step in the current episode.
Returns:
bool: True iff the episode is done, either due to the environment
indicating termination or due to reaching `max_episode_length`.
| step_episode | python | rlworkgroup/garage | src/garage/sampler/default_worker.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/default_worker.py | MIT |
def collect_episode(self):
"""Collect the current episode, clearing the internal buffer.
Returns:
EpisodeBatch: A batch of the episodes completed since the last call
to collect_episode().
"""
observations = self._observations
self._observations = []
last_observations = self._last_observations
self._last_observations = []
actions = []
rewards = []
env_infos = defaultdict(list)
step_types = []
for es in self._env_steps:
actions.append(es.action)
rewards.append(es.reward)
step_types.append(es.step_type)
for k, v in es.env_info.items():
env_infos[k].append(v)
self._env_steps = []
agent_infos = self._agent_infos
self._agent_infos = defaultdict(list)
for k, v in agent_infos.items():
agent_infos[k] = np.asarray(v)
for k, v in env_infos.items():
env_infos[k] = np.asarray(v)
episode_infos = self._episode_infos
self._episode_infos = defaultdict(list)
for k, v in episode_infos.items():
episode_infos[k] = np.asarray(v)
lengths = self._lengths
self._lengths = []
return EpisodeBatch(env_spec=self.env.spec,
episode_infos=episode_infos,
observations=np.asarray(observations),
last_observations=np.asarray(last_observations),
actions=np.asarray(actions),
rewards=np.asarray(rewards),
step_types=np.asarray(step_types, dtype=StepType),
env_infos=dict(env_infos),
agent_infos=dict(agent_infos),
lengths=np.asarray(lengths, dtype='i')) | Collect the current episode, clearing the internal buffer.
Returns:
EpisodeBatch: A batch of the episodes completed since the last call
to collect_episode().
| collect_episode | python | rlworkgroup/garage | src/garage/sampler/default_worker.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/default_worker.py | MIT |
def rollout(self):
"""Sample a single episode of the agent in the environment.
Returns:
EpisodeBatch: The collected episode.
"""
self.start_episode()
while not self.step_episode():
pass
return self.collect_episode() | Sample a single episode of the agent in the environment.
Returns:
EpisodeBatch: The collected episode.
| rollout | python | rlworkgroup/garage | src/garage/sampler/default_worker.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/default_worker.py | MIT |
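`rollout` is the canonical sampling loop: reset, step until the environment terminates or `max_episode_length` is reached, then package the collected buffers into a batch. A schematic version of that loop against garage-style `env` and `policy` objects (the attribute names follow the snippets above, but this function itself is an illustrative sketch):

```python
def simple_rollout(env, policy, max_episode_length):
    observations, actions, rewards = [], [], []
    obs, _episode_info = env.reset()
    for _ in range(max_episode_length):
        action, _agent_info = policy.get_action(obs)
        step = env.step(action)
        observations.append(obs)
        actions.append(action)
        rewards.append(step.reward)
        if step.terminal:  # environment signaled the end of the episode
            break
        obs = step.observation
    return observations, actions, rewards
```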
def __call__(self, old_env=None):
"""Update an environment.
Args:
old_env (Environment or None): Previous environment. Should not be
used after being passed in, and should not be closed.
Returns:
Environment: The new, updated environment.
"""
if old_env:
old_env.close()
return self._env_constructor() | Update an environment.
Args:
old_env (Environment or None): Previous environment. Should not be
used after being passed in, and should not be closed.
Returns:
Environment: The new, updated environment.
| __call__ | python | rlworkgroup/garage | src/garage/sampler/env_update.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/env_update.py | MIT |
def _make_env(self):
"""Construct the environment, wrapping if necessary.
Returns:
garage.Env: The (possibly wrapped) environment.
"""
env = self._env_type()
env.set_task(self._task)
if self._wrapper_cons is not None:
env = self._wrapper_cons(env, self._task)
return env | Construct the environment, wrapping if necessary.
Returns:
garage.Env: The (possibly wrapped) environment.
| _make_env | python | rlworkgroup/garage | src/garage/sampler/env_update.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/env_update.py | MIT |
def __call__(self, old_env=None):
"""Update an environment.
Args:
old_env (Environment or None): Previous environment. Should not be
used after being passed in, and should not be closed.
Returns:
Environment: The new, updated environment.
"""
# We need exact type equality, not just a subtype
# pylint: disable=unidiomatic-typecheck
if old_env is None:
return self._make_env()
elif type(getattr(old_env, 'unwrapped', old_env)) != self._env_type:
warnings.warn('SetTaskEnvUpdate is closing an environment. This '
'may indicate a very slow TaskSampler setup.')
old_env.close()
return self._make_env()
else:
old_env.set_task(self._task)
return old_env | Update an environment.
Args:
old_env (Environment or None): Previous environment. Should not be
used after being passed in, and should not be closed.
Returns:
Environment: The new, updated environment.
| __call__ | python | rlworkgroup/garage | src/garage/sampler/env_update.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/env_update.py | MIT |
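Both `__call__` implementations above follow the `EnvUpdate` protocol: a picklable callable that receives the worker's previous environment and returns the environment to use next, recycling the old one when possible. A minimal sketch of the pattern (this toy class is illustrative, not garage's):

```python
class RebuildEnvUpdate:
    """Always closes the old environment and constructs a fresh one."""

    def __init__(self, env_constructor):
        self._ctor = env_constructor

    def __call__(self, old_env=None):
        # Close the worker's previous environment before replacing it.
        if old_env is not None:
            old_env.close()
        return self._ctor()
```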
def __getstate__(self):
"""Get the pickle state.
Returns:
dict: The pickled state.
"""
warnings.warn('ExistingEnvUpdate is generally not the most efficient '
'method of transmitting environments to other '
'processes.')
return self.__dict__ | Get the pickle state.
Returns:
dict: The pickled state.
| __getstate__ | python | rlworkgroup/garage | src/garage/sampler/env_update.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/env_update.py | MIT |
def update_env(self, env_update):
"""Update the environments.
If passed a list (*inside* this list passed to the Sampler itself),
distributes the environments across the "vectorization" dimension.
Args:
env_update(Environment or EnvUpdate or None): The environment to
replace the existing env with. Note that other implementations
of `Worker` may take different types for this parameter.
Raises:
TypeError: If env_update is not one of the documented types.
ValueError: If the wrong number of updates is passed.
"""
if isinstance(env_update, list):
if len(env_update) != self._n_envs:
raise ValueError('If separate environments are passed for '
'each worker, there must be exactly n_envs '
'({}) environments, but received {} '
'environments.'.format(
self._n_envs, len(env_update)))
elif env_update is not None:
env_update = [
copy.deepcopy(env_update) for _ in range(self._n_envs)
]
if env_update:
for env_index, env_up in enumerate(env_update):
self._envs[env_index], up = _apply_env_update(
self._envs[env_index], env_up)
self._needs_env_reset |= up | Update the environments.
If passed a list (*inside* this list passed to the Sampler itself),
distributes the environments across the "vectorization" dimension.
Args:
env_update(Environment or EnvUpdate or None): The environment to
replace the existing env with. Note that other implementations
of `Worker` may take different types for this parameter.
Raises:
TypeError: If env_update is not one of the documented types.
ValueError: If the wrong number of updates is passed.
| update_env | python | rlworkgroup/garage | src/garage/sampler/fragment_worker.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/fragment_worker.py | MIT |
def start_episode(self):
"""Resets all agents if the environment was updated."""
if self._needs_env_reset:
self._needs_env_reset = False
self.agent.reset([True] * len(self._envs))
self._episode_lengths = [0] * len(self._envs)
self._fragments = [InProgressEpisode(env) for env in self._envs] | Resets all agents if the environment was updated. | start_episode | python | rlworkgroup/garage | src/garage/sampler/fragment_worker.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/fragment_worker.py | MIT |
def step_episode(self):
"""Take a single time-step in the current episode.
Returns:
bool: True iff at least one of the episodes was completed.
"""
prev_obs = np.asarray([frag.last_obs for frag in self._fragments])
actions, agent_infos = self.agent.get_actions(prev_obs)
completes = [False] * len(self._envs)
for i, action in enumerate(actions):
frag = self._fragments[i]
if self._episode_lengths[i] < self._max_episode_length:
agent_info = {k: v[i] for (k, v) in agent_infos.items()}
frag.step(action, agent_info)
self._episode_lengths[i] += 1
if (self._episode_lengths[i] >= self._max_episode_length
or frag.step_types[-1] == StepType.TERMINAL):
self._episode_lengths[i] = 0
complete_frag = frag.to_batch()
self._complete_fragments.append(complete_frag)
self._fragments[i] = InProgressEpisode(self._envs[i])
completes[i] = True
if any(completes):
self.agent.reset(completes)
return any(completes) | Take a single time-step in the current episode.
Returns:
bool: True iff at least one of the episodes was completed.
| step_episode | python | rlworkgroup/garage | src/garage/sampler/fragment_worker.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/fragment_worker.py | MIT |
def collect_episode(self):
"""Gather fragments from all in-progress episodes.
Returns:
EpisodeBatch: A batch of the episode fragments.
"""
for i, frag in enumerate(self._fragments):
assert frag.env is self._envs[i]
if len(frag.rewards) > 0:
complete_frag = frag.to_batch()
self._complete_fragments.append(complete_frag)
self._fragments[i] = InProgressEpisode(frag.env, frag.last_obs,
frag.episode_info)
assert len(self._complete_fragments) > 0
result = EpisodeBatch.concatenate(*self._complete_fragments)
self._complete_fragments = []
return result | Gather fragments from all in-progress episodes.
Returns:
EpisodeBatch: A batch of the episode fragments.
| collect_episode | python | rlworkgroup/garage | src/garage/sampler/fragment_worker.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/fragment_worker.py | MIT |
def rollout(self):
"""Sample a single episode of the agent in the environment.
Returns:
EpisodeBatch: The collected episode.
"""
self.start_episode()
for _ in range(self._timesteps_per_call):
self.step_episode()
complete_frag = self.collect_episode()
return complete_frag | Sample a single episode of the agent in the environment.
Returns:
EpisodeBatch: The collected episode.
| rollout | python | rlworkgroup/garage | src/garage/sampler/fragment_worker.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/fragment_worker.py | MIT |
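Although named `rollout`, each call above advances every vectorized environment by `_timesteps_per_call` steps and returns the resulting fragments rather than a single complete episode. A hedged sketch of what that implies for batch sizes (assuming the worker's agent and envs were already set via the update methods; the attribute names are the private ones shown above):
batch = worker.rollout()
# Every env steps once per loop iteration (completed episodes restart on the
# next step), so one call yields n_envs * timesteps_per_call transitions.
assert sum(batch.lengths) == worker._n_envs * worker._timesteps_per_call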
def _update_workers(self, agent_update, env_update):
"""Apply updates to the workers.
Args:
agent_update (object): Value which will be passed into the
`agent_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
env_update (object): Value which will be passed into the
`env_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
"""
agent_updates = self._factory.prepare_worker_messages(agent_update)
env_updates = self._factory.prepare_worker_messages(
env_update, preprocess=copy.deepcopy)
for worker, agent_up, env_up in zip(self._workers, agent_updates,
env_updates):
worker.update_agent(agent_up)
worker.update_env(env_up) | Apply updates to the workers.
Args:
agent_update (object): Value which will be passed into the
`agent_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
env_update (object): Value which will be passed into the
`env_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
| _update_workers | python | rlworkgroup/garage | src/garage/sampler/local_sampler.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/local_sampler.py | MIT |
def obtain_samples(self, itr, num_samples, agent_update, env_update=None):
"""Collect at least a given number transitions (timesteps).
Args:
            itr (int): The current iteration number. Using this argument is
deprecated.
num_samples (int): Minimum number of transitions / timesteps to
sample.
agent_update (object): Value which will be passed into the
`agent_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
env_update (object): Value which will be passed into the
`env_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
Returns:
EpisodeBatch: The batch of collected episodes.
"""
self._update_workers(agent_update, env_update)
batches = []
completed_samples = 0
while True:
for worker in self._workers:
batch = worker.rollout()
completed_samples += len(batch.actions)
batches.append(batch)
if completed_samples >= num_samples:
samples = EpisodeBatch.concatenate(*batches)
self.total_env_steps += sum(samples.lengths)
                return samples | Collect at least a given number of transitions (timesteps).
Args:
    itr (int): The current iteration number. Using this argument is
deprecated.
num_samples (int): Minimum number of transitions / timesteps to
sample.
agent_update (object): Value which will be passed into the
`agent_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
env_update (object): Value which will be passed into the
`env_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
Returns:
EpisodeBatch: The batch of collected episodes.
| obtain_samples | python | rlworkgroup/garage | src/garage/sampler/local_sampler.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/local_sampler.py | MIT |
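A hedged usage sketch for the loop above. It assumes garage is installed and that `policy` and `env` exist; `WorkerFactory` appears elsewhere in this table, while `from_worker_factory` and the exact constructor arguments are assumptions based on recent garage versions and may differ:
from garage.sampler import LocalSampler, WorkerFactory

factory = WorkerFactory(seed=100, max_episode_length=200, n_workers=4)
sampler = LocalSampler.from_worker_factory(factory, agents=policy, envs=env)
episodes = sampler.obtain_samples(itr=0, num_samples=1000,
                                  agent_update=policy.get_param_values())
# Whole episodes only, so slightly more than num_samples may come back.
assert sum(episodes.lengths) >= 1000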
def obtain_exact_episodes(self,
n_eps_per_worker,
agent_update,
env_update=None):
"""Sample an exact number of episodes per worker.
Args:
n_eps_per_worker (int): Exact number of episodes to gather for
each worker.
agent_update (object): Value which will be passed into the
`agent_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
env_update (object): Value which will be passed into the
                `env_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
Returns:
EpisodeBatch: Batch of gathered episodes. Always in worker
order. In other words, first all episodes from worker 0,
then all episodes from worker 1, etc.
"""
self._update_workers(agent_update, env_update)
batches = []
for worker in self._workers:
for _ in range(n_eps_per_worker):
batch = worker.rollout()
batches.append(batch)
samples = EpisodeBatch.concatenate(*batches)
self.total_env_steps += sum(samples.lengths)
return samples | Sample an exact number of episodes per worker.
Args:
n_eps_per_worker (int): Exact number of episodes to gather for
each worker.
agent_update (object): Value which will be passed into the
`agent_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
env_update (object): Value which will be passed into the
        `env_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
Returns:
EpisodeBatch: Batch of gathered episodes. Always in worker
order. In other words, first all episodes from worker 0,
then all episodes from worker 1, etc.
| obtain_exact_episodes | python | rlworkgroup/garage | src/garage/sampler/local_sampler.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/local_sampler.py | MIT |
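Because the batch above is always in worker order, per-worker bookkeeping needs nothing more than counting. A short sketch continuing the previous example (same assumed `sampler` and `policy`, with 4 workers):
episodes = sampler.obtain_exact_episodes(
    n_eps_per_worker=2, agent_update=policy.get_param_values())
# Worker order is guaranteed: episodes 0-1 came from worker 0,
# episodes 2-3 from worker 1, and so on.
assert len(episodes.lengths) == 4 * 2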
def __getstate__(self):
"""Get the pickle state.
Returns:
dict: The pickled state.
"""
state = self.__dict__.copy()
# Workers aren't picklable (but WorkerFactory is).
state['_workers'] = None
return state | Get the pickle state.
Returns:
dict: The pickled state.
| __getstate__ | python | rlworkgroup/garage | src/garage/sampler/local_sampler.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/local_sampler.py | MIT |
def __setstate__(self, state):
"""Unpickle the state.
Args:
state (dict): Unpickled state.
"""
self.__dict__.update(state)
self._workers = [
self._factory(i) for i in range(self._factory.n_workers)
]
for worker, agent, env in zip(self._workers, self._agents, self._envs):
worker.update_agent(agent)
worker.update_env(env) | Unpickle the state.
Args:
state (dict): Unpickled state.
| __setstate__ | python | rlworkgroup/garage | src/garage/sampler/local_sampler.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/local_sampler.py | MIT |
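Taken together, the two methods above make the sampler picklable even though its workers are not: `__getstate__` drops the workers, and `__setstate__` rebuilds them from the (picklable) factory before re-applying the saved agents and envs. A minimal round-trip sketch, assuming `sampler` is a `LocalSampler` built elsewhere:
import pickle

restored = pickle.loads(pickle.dumps(sampler))
# The restored sampler has freshly constructed workers, not the originals.
assert restored._workers is not sampler._workers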
def _push_updates(self, updated_workers, agent_updates, env_updates):
"""Apply updates to the workers and (re)start them.
Args:
updated_workers (set[int]): Set of workers that don't need to be
updated. Successfully updated workers will be added to this
set.
agent_updates (object): Value which will be passed into the
`agent_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
env_updates (object): Value which will be passed into the
`env_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
"""
for worker_number, q in enumerate(self._to_worker):
if worker_number in updated_workers:
try:
q.put_nowait(('continue', ()))
except queue.Full:
pass
else:
try:
q.put_nowait(('start', (agent_updates[worker_number],
env_updates[worker_number],
self._agent_version)))
updated_workers.add(worker_number)
except queue.Full:
pass | Apply updates to the workers and (re)start them.
Args:
updated_workers (set[int]): Set of workers that don't need to be
updated. Successfully updated workers will be added to this
set.
agent_updates (object): Value which will be passed into the
`agent_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
env_updates (object): Value which will be passed into the
`env_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
| _push_updates | python | rlworkgroup/garage | src/garage/sampler/multiprocessing_sampler.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/multiprocessing_sampler.py | MIT |
def obtain_samples(self, itr, num_samples, agent_update, env_update=None):
"""Collect at least a given number transitions (timesteps).
Args:
            itr (int): The current iteration number. Using this argument is
deprecated.
num_samples (int): Minimum number of transitions / timesteps to
sample.
agent_update (object): Value which will be passed into the
`agent_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
env_update (object): Value which will be passed into the
`env_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
Returns:
EpisodeBatch: The batch of collected episodes.
Raises:
AssertionError: On internal errors.
"""
del itr
batches = []
completed_samples = 0
self._agent_version += 1
updated_workers = set()
agent_ups = self._factory.prepare_worker_messages(
agent_update, cloudpickle.dumps)
env_ups = self._factory.prepare_worker_messages(
env_update, cloudpickle.dumps)
with click.progressbar(length=num_samples, label='Sampling') as pbar:
while completed_samples < num_samples:
self._push_updates(updated_workers, agent_ups, env_ups)
for _ in range(self._factory.n_workers):
try:
tag, contents = self._to_sampler.get_nowait()
if tag == 'episode':
batch, version, worker_n = contents
del worker_n
if version == self._agent_version:
batches.append(batch)
num_returned_samples = batch.lengths.sum()
completed_samples += num_returned_samples
pbar.update(num_returned_samples)
else:
# Receiving episodes from previous iterations
# is normal. Potentially, we could gather them
# here, if an off-policy method wants them.
pass
else:
raise AssertionError(
'Unknown tag {} with contents {}'.format(
tag, contents))
except queue.Empty:
pass
for q in self._to_worker:
try:
q.put_nowait(('stop', ()))
except queue.Full:
pass
samples = EpisodeBatch.concatenate(*batches)
self.total_env_steps += sum(samples.lengths)
        return samples | Collect at least a given number of transitions (timesteps).
Args:
    itr (int): The current iteration number. Using this argument is
deprecated.
num_samples (int): Minimum number of transitions / timesteps to
sample.
agent_update (object): Value which will be passed into the
`agent_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
env_update (object): Value which will be passed into the
`env_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
Returns:
EpisodeBatch: The batch of collected episodes.
Raises:
AssertionError: On internal errors.
| obtain_samples | python | rlworkgroup/garage | src/garage/sampler/multiprocessing_sampler.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/multiprocessing_sampler.py | MIT |
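The version stamp checked above only makes sense alongside the worker side of the protocol, which this table does not show. A conceptual sketch of that counterpart (this is *not* garage's actual worker code; the names and busy-poll structure are illustrative):
import queue

def worker_loop(inbox, outbox, collect_one_episode, worker_number):
    # React to the sampler's 'start'/'continue'/'stop' tags and stamp each
    # episode with the agent version it was collected under, so the sampler
    # can discard episodes that straddle an agent update.
    version = None
    streaming = False
    while True:
        try:
            tag, contents = inbox.get_nowait()
            if tag == 'start':
                _agent_update, _env_update, version = contents
                streaming = True
            elif tag == 'stop':
                streaming = False
        except queue.Empty:
            pass
        if streaming:
            outbox.put(('episode',
                        (collect_one_episode(), version, worker_number)))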
def obtain_exact_episodes(self,
n_eps_per_worker,
agent_update,
env_update=None):
"""Sample an exact number of episodes per worker.
Args:
n_eps_per_worker (int): Exact number of episodes to gather for
each worker.
agent_update (object): Value which will be passed into the
`agent_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
env_update (object): Value which will be passed into the
`env_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
Returns:
EpisodeBatch: Batch of gathered episodes. Always in worker
order. In other words, first all episodes from worker 0,
then all episodes from worker 1, etc.
Raises:
AssertionError: On internal errors.
"""
self._agent_version += 1
updated_workers = set()
agent_ups = self._factory.prepare_worker_messages(
agent_update, cloudpickle.dumps)
env_ups = self._factory.prepare_worker_messages(
env_update, cloudpickle.dumps)
episodes = defaultdict(list)
with click.progressbar(length=self._factory.n_workers,
label='Sampling') as pbar:
while any(
len(episodes[i]) < n_eps_per_worker
for i in range(self._factory.n_workers)):
self._push_updates(updated_workers, agent_ups, env_ups)
tag, contents = self._to_sampler.get()
if tag == 'episode':
batch, version, worker_n = contents
if version == self._agent_version:
if len(episodes[worker_n]) < n_eps_per_worker:
episodes[worker_n].append(batch)
if len(episodes[worker_n]) == n_eps_per_worker:
pbar.update(1)
try:
self._to_worker[worker_n].put_nowait(
('stop', ()))
except queue.Full:
pass
else:
raise AssertionError(
'Unknown tag {} with contents {}'.format(
tag, contents))
for q in self._to_worker:
try:
q.put_nowait(('stop', ()))
except queue.Full:
pass
ordered_episodes = list(
itertools.chain(
*[episodes[i] for i in range(self._factory.n_workers)]))
samples = EpisodeBatch.concatenate(*ordered_episodes)
self.total_env_steps += sum(samples.lengths)
return samples | Sample an exact number of episodes per worker.
Args:
n_eps_per_worker (int): Exact number of episodes to gather for
each worker.
agent_update (object): Value which will be passed into the
`agent_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
env_update (object): Value which will be passed into the
`env_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
Returns:
EpisodeBatch: Batch of gathered episodes. Always in worker
order. In other words, first all episodes from worker 0,
then all episodes from worker 1, etc.
Raises:
AssertionError: On internal errors.
| obtain_exact_episodes | python | rlworkgroup/garage | src/garage/sampler/multiprocessing_sampler.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/multiprocessing_sampler.py | MIT |
def __getstate__(self):
"""Get the pickle state.
Returns:
dict: The pickled state.
"""
return dict(
factory=self._factory,
agents=[cloudpickle.loads(agent) for agent in self._agents],
envs=[cloudpickle.loads(env) for env in self._envs]) | Get the pickle state.
Returns:
dict: The pickled state.
| __getstate__ | python | rlworkgroup/garage | src/garage/sampler/multiprocessing_sampler.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/multiprocessing_sampler.py | MIT |
def _update_workers(self, agent_update, env_update):
"""Update all of the workers.
Args:
agent_update (object): Value which will be passed into the
`agent_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
env_update (object): Value which will be passed into the
                `env_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
Returns:
list[ray._raylet.ObjectID]: Remote values of worker ids.
"""
updating_workers = []
param_ids = self._worker_factory.prepare_worker_messages(
agent_update, ray.put)
env_ids = self._worker_factory.prepare_worker_messages(
env_update, ray.put)
for worker_id in range(self._worker_factory.n_workers):
worker = self._all_workers[worker_id]
updating_workers.append(
worker.update.remote(param_ids[worker_id], env_ids[worker_id]))
return updating_workers | Update all of the workers.
Args:
agent_update (object): Value which will be passed into the
`agent_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
env_update (object): Value which will be passed into the
        `env_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
Returns:
list[ray._raylet.ObjectID]: Remote values of worker ids.
| _update_workers | python | rlworkgroup/garage | src/garage/sampler/ray_sampler.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/ray_sampler.py | MIT |
def obtain_samples(self, itr, num_samples, agent_update, env_update=None):
"""Sample the policy for new episodes.
Args:
itr (int): Iteration number.
            num_samples (int): Number of steps the sampler should collect.
agent_update (object): Value which will be passed into the
`agent_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
env_update (object): Value which will be passed into the
`env_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
Returns:
EpisodeBatch: Batch of gathered episodes.
"""
active_workers = []
completed_samples = 0
batches = []
# update the policy params of each worker before sampling
# for the current iteration
idle_worker_ids = []
updating_workers = self._update_workers(agent_update, env_update)
with click.progressbar(length=num_samples, label='Sampling') as pbar:
while completed_samples < num_samples:
# if there are workers still being updated, check
# which ones are still updating and take the workers that
# are done updating, and start collecting episodes on those
# workers.
if updating_workers:
updated, updating_workers = ray.wait(updating_workers,
num_returns=1,
timeout=0.1)
upd = [ray.get(up) for up in updated]
idle_worker_ids.extend(upd)
# if there are idle workers, use them to collect episodes and
# mark the newly busy workers as active
while idle_worker_ids:
idle_worker_id = idle_worker_ids.pop()
worker = self._all_workers[idle_worker_id]
active_workers.append(worker.rollout.remote())
# check which workers are done/not done collecting a sample
# if any are done, send them to process the collected
# episode if they are not, keep checking if they are done
ready, not_ready = ray.wait(active_workers,
num_returns=1,
timeout=0.001)
active_workers = not_ready
for result in ready:
ready_worker_id, episode_batch = ray.get(result)
idle_worker_ids.append(ready_worker_id)
num_returned_samples = episode_batch.lengths.sum()
completed_samples += num_returned_samples
batches.append(episode_batch)
pbar.update(num_returned_samples)
samples = EpisodeBatch.concatenate(*batches)
self.total_env_steps += sum(samples.lengths)
return samples | Sample the policy for new episodes.
Args:
itr (int): Iteration number.
    num_samples (int): Number of steps the sampler should collect.
agent_update (object): Value which will be passed into the
`agent_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
env_update (object): Value which will be passed into the
`env_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
Returns:
EpisodeBatch: Batch of gathered episodes.
| obtain_samples | python | rlworkgroup/garage | src/garage/sampler/ray_sampler.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/ray_sampler.py | MIT |
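Both `ray.wait` calls above follow the same non-blocking poll: ask for at most one finished ref with a short timeout, so neither a slow update nor a slow rollout can stall the loop. A stripped-down sketch of that pattern (assumes `ray.init()` has been called and `tasks` holds in-flight object refs):
import ray

def drain_one(tasks, timeout=0.001):
    """Return (results, still_running) without blocking on slow tasks."""
    ready, not_ready = ray.wait(tasks, num_returns=1, timeout=timeout)
    return [ray.get(ref) for ref in ready], not_ready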
def obtain_exact_episodes(self,
n_eps_per_worker,
agent_update,
env_update=None):
"""Sample an exact number of episodes per worker.
Args:
n_eps_per_worker (int): Exact number of episodes to gather for
each worker.
agent_update (object): Value which will be passed into the
`agent_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
env_update (object): Value which will be passed into the
`env_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
Returns:
EpisodeBatch: Batch of gathered episodes. Always in worker
order. In other words, first all episodes from worker 0, then
all episodes from worker 1, etc.
"""
active_workers = []
episodes = defaultdict(list)
# update the policy params of each worker before sampling
# for the current iteration
idle_worker_ids = []
updating_workers = self._update_workers(agent_update, env_update)
with click.progressbar(length=self._worker_factory.n_workers,
label='Sampling') as pbar:
while any(
len(episodes[i]) < n_eps_per_worker
for i in range(self._worker_factory.n_workers)):
# if there are workers still being updated, check
# which ones are still updating and take the workers that
# are done updating, and start collecting episodes on
# those workers.
if updating_workers:
updated, updating_workers = ray.wait(updating_workers,
num_returns=1,
timeout=0.1)
upd = [ray.get(up) for up in updated]
idle_worker_ids.extend(upd)
                # if there are idle workers, use them to collect episodes and
                # mark the newly busy workers as active
while idle_worker_ids:
idle_worker_id = idle_worker_ids.pop()
worker = self._all_workers[idle_worker_id]
active_workers.append(worker.rollout.remote())
# check which workers are done/not done collecting a sample
# if any are done, send them to process the collected episode
# if they are not, keep checking if they are done
ready, not_ready = ray.wait(active_workers,
num_returns=1,
timeout=0.001)
active_workers = not_ready
for result in ready:
ready_worker_id, episode_batch = ray.get(result)
episodes[ready_worker_id].append(episode_batch)
if len(episodes[ready_worker_id]) < n_eps_per_worker:
idle_worker_ids.append(ready_worker_id)
pbar.update(1)
ordered_episodes = list(
itertools.chain(
*[episodes[i] for i in range(self._worker_factory.n_workers)]))
samples = EpisodeBatch.concatenate(*ordered_episodes)
self.total_env_steps += sum(samples.lengths)
return samples | Sample an exact number of episodes per worker.
Args:
n_eps_per_worker (int): Exact number of episodes to gather for
each worker.
agent_update (object): Value which will be passed into the
`agent_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
env_update (object): Value which will be passed into the
`env_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
Returns:
EpisodeBatch: Batch of gathered episodes. Always in worker
order. In other words, first all episodes from worker 0, then
all episodes from worker 1, etc.
| obtain_exact_episodes | python | rlworkgroup/garage | src/garage/sampler/ray_sampler.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/ray_sampler.py | MIT |
def update(self, agent_update, env_update):
"""Update the agent and environment.
Args:
agent_update (object): Agent update.
env_update (object): Environment update.
Returns:
int: The worker id.
"""
self.inner_worker.update_agent(agent_update)
self.inner_worker.update_env(env_update)
return self.worker_id | Update the agent and environment.
Args:
agent_update (object): Agent update.
env_update (object): Environment update.
Returns:
int: The worker id.
| update | python | rlworkgroup/garage | src/garage/sampler/ray_sampler.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/ray_sampler.py | MIT |
def __init__(self, algo, env):
"""Construct a Sampler from an Algorithm.
Args:
algo (RLAlgorithm): The RL Algorithm controlling this
sampler.
env (Environment): The environment being sampled from.
Calling this method is deprecated.
"""
self.algo = algo
self.env = env | Construct a Sampler from an Algorithm.
Args:
algo (RLAlgorithm): The RL Algorithm controlling this
sampler.
env (Environment): The environment being sampled from.
Calling this method is deprecated.
| __init__ | python | rlworkgroup/garage | src/garage/sampler/sampler.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/sampler.py | MIT |
def start_worker(self):
"""Initialize the sampler.
i.e. launching parallel workers if necessary.
        This method is deprecated; please launch workers in the constructor
        instead.
""" | Initialize the sampler.
i.e. launching parallel workers if necessary.
This method is deprecated; please launch workers in the constructor instead.
| start_worker | python | rlworkgroup/garage | src/garage/sampler/sampler.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/sampler.py | MIT |
def obtain_samples(self, itr, num_samples, agent_update, env_update=None):
"""Collect at least a given number transitions :class:`TimeStep`s.
Args:
itr (int): The current iteration number. Using this argument is
deprecated.
num_samples (int): Minimum number of :class:`TimeStep`s to sample.
agent_update (object): Value which will be passed into the
`agent_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
env_update (object): Value which will be passed into the
`env_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
Returns:
EpisodeBatch: The batch of collected episodes.
""" | Collect at least a given number transitions :class:`TimeStep`s.
Args:
itr (int): The current iteration number. Using this argument is
deprecated.
num_samples (int): Minimum number of :class:`TimeStep`s to sample.
agent_update (object): Value which will be passed into the
`agent_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
env_update (object): Value which will be passed into the
`env_update_fn` before sampling episodes. If a list is passed
in, it must have length exactly `factory.n_workers`, and will
be spread across the workers.
Returns:
EpisodeBatch: The batch of collected episodes.
| obtain_samples | python | rlworkgroup/garage | src/garage/sampler/sampler.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/sampler.py | MIT |
def shutdown_worker(self):
"""Terminate workers if necessary.
Because Python object destruction can be somewhat unpredictable, this
method isn't deprecated.
""" | Terminate workers if necessary.
Because Python object destruction can be somewhat unpredictable, this
method isn't deprecated.
| shutdown_worker | python | rlworkgroup/garage | src/garage/sampler/sampler.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/sampler.py | MIT |
def rollout(env,
agent,
*,
max_episode_length=np.inf,
animated=False,
speedup=1,
deterministic=False):
"""Sample a single episode of the agent in the environment.
Args:
agent (Policy): Agent used to select actions.
env (Environment): Environment to perform actions in.
max_episode_length (int): If the episode reaches this many timesteps,
it is truncated.
animated (bool): If true, render the environment after each step.
speedup (float): Factor by which to decrease the wait time between
rendered steps. Only relevant, if animated == true.
deterministic (bool): If true, use the mean action returned by the
stochastic policy instead of sampling from the returned action
distribution.
Returns:
dict[str, np.ndarray or dict]: Dictionary, with keys:
* observations(np.array): Flattened array of observations.
There should be one more of these than actions. Note that
observations[i] (for i < len(observations) - 1) was used by the
agent to choose actions[i]. Should have shape (T + 1, S^*) (the
unflattened state space of the current environment).
* actions(np.array): Non-flattened array of actions. Should have
shape (T, S^*) (the unflattened action space of the current
environment).
* rewards(np.array): Array of rewards of shape (T,) (1D array of
length timesteps).
* agent_infos(Dict[str, np.array]): Dictionary of stacked,
non-flattened `agent_info` arrays.
* env_infos(Dict[str, np.array]): Dictionary of stacked,
non-flattened `env_info` arrays.
* episode_infos(Dict[str, np.array]): Dictionary of stacked,
non-flattened `episode_info` arrays.
* dones(np.array): Array of termination signals.
"""
del speedup
env_steps = []
agent_infos = []
observations = []
episode_infos = []
last_obs, episode_info = env.reset()
agent.reset()
episode_length = 0
if animated:
env.visualize()
while episode_length < (max_episode_length or np.inf):
a, agent_info = agent.get_action(last_obs)
if deterministic and 'mean' in agent_info:
a = agent_info['mean']
es = env.step(a)
env_steps.append(es)
observations.append(last_obs)
agent_infos.append(agent_info)
episode_infos.append(episode_info)
episode_length += 1
if es.last:
break
last_obs = es.observation
return dict(
observations=np.array(observations),
actions=np.array([es.action for es in env_steps]),
rewards=np.array([es.reward for es in env_steps]),
agent_infos=stack_tensor_dict_list(agent_infos),
env_infos=stack_tensor_dict_list([es.env_info for es in env_steps]),
episode_infos=stack_tensor_dict_list(episode_infos),
dones=np.array([es.terminal for es in env_steps]),
) | Sample a single episode of the agent in the environment.
Args:
agent (Policy): Agent used to select actions.
env (Environment): Environment to perform actions in.
max_episode_length (int): If the episode reaches this many timesteps,
it is truncated.
animated (bool): If true, render the environment after each step.
speedup (float): Factor by which to decrease the wait time between
rendered steps. Only relevant, if animated == true.
deterministic (bool): If true, use the mean action returned by the
stochastic policy instead of sampling from the returned action
distribution.
Returns:
dict[str, np.ndarray or dict]: Dictionary, with keys:
* observations(np.array): Flattened array of observations.
There should be one more of these than actions. Note that
observations[i] (for i < len(observations) - 1) was used by the
agent to choose actions[i]. Should have shape (T + 1, S^*) (the
unflattened state space of the current environment).
* actions(np.array): Non-flattened array of actions. Should have
shape (T, S^*) (the unflattened action space of the current
environment).
* rewards(np.array): Array of rewards of shape (T,) (1D array of
length timesteps).
* agent_infos(Dict[str, np.array]): Dictionary of stacked,
non-flattened `agent_info` arrays.
* env_infos(Dict[str, np.array]): Dictionary of stacked,
non-flattened `env_info` arrays.
* episode_infos(Dict[str, np.array]): Dictionary of stacked,
non-flattened `episode_info` arrays.
* dones(np.array): Array of termination signals.
| rollout | python | rlworkgroup/garage | src/garage/sampler/utils.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/utils.py | MIT |
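A hedged usage sketch of the helper above (assumes garage is installed and that `env` and `policy` are an Environment/Policy pair created elsewhere):
from garage.sampler.utils import rollout

# deterministic=True uses the policy's mean action when agent_info has one.
path = rollout(env, policy, max_episode_length=100, deterministic=True)
print(path['rewards'].shape)  # (T,) with T <= 100
print(path['dones'][-1])      # True if the episode terminated naturally
print(sorted(path.keys()))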
def truncate_paths(paths, max_samples):
"""Truncate the paths so that the total number of samples is max_samples.
This is done by removing extra paths at the end of
the list, and make the last path shorter if necessary
Args:
paths (list[dict[str, np.ndarray]]): Samples, items with keys:
            * observations (np.ndarray): Environment observations
* actions (np.ndarray): Agent actions
* rewards (np.ndarray): Environment rewards
* env_infos (dict): Environment state information
* agent_infos (dict): Agent state information
        max_samples (int): Maximum number of samples allowed.
Returns:
        list[dict[str, np.ndarray]]: A list of paths, truncated so that the
            number of samples adds up to max_samples.
Raises:
        ValueError: If a key other than 'observations', 'actions', 'rewards',
            'env_infos', or 'agent_infos' is found.
"""
# chop samples collected by extra paths
# make a copy
valid_keys = {
'observations', 'actions', 'rewards', 'env_infos', 'agent_infos'
}
paths = list(paths)
total_n_samples = sum(len(path['rewards']) for path in paths)
while paths and total_n_samples - len(paths[-1]['rewards']) >= max_samples:
total_n_samples -= len(paths.pop(-1)['rewards'])
if paths:
last_path = paths.pop(-1)
truncated_last_path = dict()
truncated_len = len(
last_path['rewards']) - (total_n_samples - max_samples)
for k, v in last_path.items():
if k in ['observations', 'actions', 'rewards']:
truncated_last_path[k] = v[:truncated_len]
elif k in ['env_infos', 'agent_infos']:
truncated_last_path[k] = truncate_tensor_dict(v, truncated_len)
else:
raise ValueError(
'Unexpected key {} found in path. Valid keys: {}'.format(
k, valid_keys))
paths.append(truncated_last_path)
return paths | Truncate the paths so that the total number of samples is max_samples.
This is done by removing extra paths at the end of
the list, and making the last path shorter if necessary.
Args:
paths (list[dict[str, np.ndarray]]): Samples, items with keys:
        * observations (np.ndarray): Environment observations
* actions (np.ndarray): Agent actions
* rewards (np.ndarray): Environment rewards
* env_infos (dict): Environment state information
* agent_infos (dict): Agent state information
    max_samples (int): Maximum number of samples allowed.
Returns:
    list[dict[str, np.ndarray]]: A list of paths, truncated so that the
        number of samples adds up to max_samples.
Raises:
    ValueError: If a key other than 'observations', 'actions', 'rewards',
        'env_infos', or 'agent_infos' is found.
| truncate_paths | python | rlworkgroup/garage | src/garage/sampler/utils.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/utils.py | MIT |
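The truncation rule above is easiest to check numerically. A runnable sketch (assumes garage is installed; the fabricated paths carry only the keys the function accepts):
import numpy as np

from garage.sampler.utils import truncate_paths

def make_path(n):
    return dict(observations=np.zeros((n, 2)), actions=np.zeros((n, 1)),
                rewards=np.ones(n), env_infos={}, agent_infos={})

truncated = truncate_paths([make_path(5), make_path(5), make_path(5)],
                           max_samples=7)
# The third path is dropped whole; the second is cut from 5 steps to 2.
assert [len(p['rewards']) for p in truncated] == [5, 2]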
def update_agent(self, agent_update):
"""Update an agent, assuming it implements :class:`~Policy`.
Args:
agent_update (np.ndarray or dict or Policy): If a
tuple, dict, or np.ndarray, these should be parameters to
agent, which should have been generated by calling
`Policy.get_param_values`. Alternatively, a policy itself. Note
that other implementations of `Worker` may take different types
for this parameter.
"""
super().update_agent(agent_update)
self._needs_agent_reset = True | Update an agent, assuming it implements :class:`~Policy`.
Args:
agent_update (np.ndarray or dict or Policy): If a
tuple, dict, or np.ndarray, these should be parameters to
agent, which should have been generated by calling
`Policy.get_param_values`. Alternatively, a policy itself. Note
that other implementations of `Worker` may take different types
for this parameter.
| update_agent | python | rlworkgroup/garage | src/garage/sampler/vec_worker.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/vec_worker.py | MIT |
def update_env(self, env_update):
"""Update the environments.
        If passed a list (nested *inside* the list passed to the Sampler
        itself), distributes the environments across the "vectorization"
        dimension.
Args:
            env_update (Environment or EnvUpdate or None): The environment to
replace the existing env with. Note that other implementations
of `Worker` may take different types for this parameter.
Raises:
TypeError: If env_update is not one of the documented types.
ValueError: If the wrong number of updates is passed.
"""
if isinstance(env_update, list):
if len(env_update) != self._n_envs:
raise ValueError('If separate environments are passed for '
'each worker, there must be exactly n_envs '
'({}) environments, but received {} '
'environments.'.format(
self._n_envs, len(env_update)))
elif env_update is not None:
env_update = [
copy.deepcopy(env_update) for _ in range(self._n_envs)
]
if env_update:
for env_index, env_up in enumerate(env_update):
self._envs[env_index], up = _apply_env_update(
self._envs[env_index], env_up)
self._needs_env_reset |= up | Update the environments.
If passed a list (nested *inside* the list passed to the Sampler itself),
distributes the environments across the "vectorization" dimension.
Args:
    env_update (Environment or EnvUpdate or None): The environment to
replace the existing env with. Note that other implementations
of `Worker` may take different types for this parameter.
Raises:
TypeError: If env_update is not one of the documented types.
ValueError: If the wrong number of updates is passed.
| update_env | python | rlworkgroup/garage | src/garage/sampler/vec_worker.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/vec_worker.py | MIT |
def step_episode(self):
"""Take a single time-step in the current episode.
Returns:
bool: True iff at least one of the episodes was completed.
"""
finished = False
actions, agent_info = self.agent.get_actions(self._prev_obs)
completes = [False] * len(self._envs)
for i, action in enumerate(actions):
if self._episode_lengths[i] < self._max_episode_length:
es = self._envs[i].step(action)
self._observations[i].append(self._prev_obs[i])
self._rewards[i].append(es.reward)
self._actions[i].append(es.action)
for k, v in agent_info.items():
self._agent_infos[i][k].append(v[i])
for k, v in es.env_info.items():
self._env_infos[i][k].append(v)
self._episode_lengths[i] += 1
self._step_types[i].append(es.step_type)
self._prev_obs[i] = es.observation
if self._episode_lengths[i] >= self._max_episode_length or es.last:
self._gather_episode(i, es.observation)
completes[i] = True
finished = True
if finished:
self.agent.reset(completes)
return finished | Take a single time-step in the current episode.
Returns:
bool: True iff at least one of the episodes was completed.
| step_episode | python | rlworkgroup/garage | src/garage/sampler/vec_worker.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/vec_worker.py | MIT |
def collect_episode(self):
"""Collect all completed episodes.
Returns:
EpisodeBatch: A batch of the episodes completed since the last call
to collect_episode().
"""
if len(self._completed_episodes) == 1:
result = self._completed_episodes[0]
else:
result = EpisodeBatch.concatenate(*self._completed_episodes)
self._completed_episodes = []
return result | Collect all completed episodes.
Returns:
EpisodeBatch: A batch of the episodes completed since the last call
to collect_episode().
| collect_episode | python | rlworkgroup/garage | src/garage/sampler/vec_worker.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/vec_worker.py | MIT |
def __init__(self, *, seed, max_episode_length, worker_number):
"""Initialize a worker.
Args:
            seed (int): The seed to use to initialize random number generators.
max_episode_length (int or float): The maximum length of episodes
which will be sampled. Can be (floating point) infinity.
worker_number (int): The number of the worker this update is
occurring in. This argument is used to set a different seed for
each worker.
        Should create the following fields:
agent (Policy or None): The worker's initial agent.
env (Environment or None): The worker's environment.
"""
self._seed = seed
self._max_episode_length = max_episode_length
self._worker_number = worker_number | Initialize a worker.
Args:
    seed (int): The seed to use to initialize random number generators.
max_episode_length (int or float): The maximum length of episodes
which will be sampled. Can be (floating point) infinity.
worker_number (int): The number of the worker this update is
occurring in. This argument is used to set a different seed for
each worker.
Should create the following fields:
agent (Policy or None): The worker's initial agent.
env (Environment or None): The worker's environment.
| __init__ | python | rlworkgroup/garage | src/garage/sampler/worker.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/worker.py | MIT |
def update_agent(self, agent_update):
"""Update the worker's agent, using agent_update.
Args:
agent_update (object): An agent update. The exact type of this
argument depends on the `Worker` implementation.
""" | Update the worker's agent, using agent_update.
Args:
agent_update (object): An agent update. The exact type of this
argument depends on the `Worker` implementation.
| update_agent | python | rlworkgroup/garage | src/garage/sampler/worker.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/worker.py | MIT |
def update_env(self, env_update):
"""Update the worker's env, using env_update.
Args:
env_update (object): An environment update. The exact type of this
argument depends on the `Worker` implementation.
""" | Update the worker's env, using env_update.
Args:
env_update (object): An environment update. The exact type of this
argument depends on the `Worker` implementation.
| update_env | python | rlworkgroup/garage | src/garage/sampler/worker.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/worker.py | MIT |
def rollout(self):
"""Sample a single episode of the agent in the environment.
Returns:
EpisodeBatch: Batch of sampled episodes. May be truncated if
max_episode_length is set.
""" | Sample a single episode of the agent in the environment.
Returns:
EpisodeBatch: Batch of sampled episodes. May be truncated if
max_episode_length is set.
| rollout | python | rlworkgroup/garage | src/garage/sampler/worker.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/worker.py | MIT |
def step_episode(self):
"""Take a single time-step in the current episode.
Returns:
            True iff the episode is done, either due to the environment
            indicating termination or due to reaching `max_episode_length`.
""" | Take a single time-step in the current episode.
Returns:
    True iff the episode is done, either due to the environment
    indicating termination or due to reaching `max_episode_length`.
| step_episode | python | rlworkgroup/garage | src/garage/sampler/worker.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/worker.py | MIT |
def collect_episode(self):
"""Collect the current episode, clearing the internal buffer.
Returns:
EpisodeBatch: Batch of sampled episodes. May be truncated if the
episodes haven't completed yet.
""" | Collect the current episode, clearing the internal buffer.
Returns:
EpisodeBatch: Batch of sampled episodes. May be truncated if the
episodes haven't completed yet.
| collect_episode | python | rlworkgroup/garage | src/garage/sampler/worker.py | https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/worker.py | MIT |
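The five abstract methods above define everything a Sampler needs from a Worker, and every sampler in this table drives them in the same order. A hedged sketch of that driver sequence (works with any concrete worker; the function name is illustrative):
def drive(worker, agent_update, env_update, min_timesteps):
    # Canonical sampler-side use of the Worker interface documented above.
    worker.update_agent(agent_update)
    worker.update_env(env_update)
    batches, collected = [], 0
    while collected < min_timesteps:
        batch = worker.rollout()  # start_episode/step_episode/collect_episode
        collected += sum(batch.lengths)
        batches.append(batch)
    return batches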