column     type           values
code       stringlengths  66 to 870k
docstring  stringlengths  19 to 26.7k
func_name  stringlengths  1 to 138
language   stringclasses  1 value
repo       stringlengths  7 to 68
path       stringlengths  5 to 324
url        stringlengths  46 to 389
license    stringclasses  7 values
def store_episode(self): """Add an episode to the buffer.""" episode_buffer = self._convert_episode_to_batch_major() episode_batch_size = len(episode_buffer['observation']) idx = self._get_storage_idx(episode_batch_size) for key in self._buffer: self._buffer[key][idx...
Add an episode to the buffer.
store_episode
python
rlworkgroup/garage
src/garage/replay_buffer/replay_buffer.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/replay_buffer.py
MIT
def add_transitions(self, **kwargs): """Add multiple transitions into the replay buffer. A transition contains one or multiple entries, e.g. observation, action, reward, terminal and next_observation. The same entry of all the transitions are stacked, e.g. {'observation': [obs1,...
Add multiple transitions into the replay buffer. A transition contains one or multiple entries, e.g. observation, action, reward, terminal and next_observation. The same entry of all the transitions are stacked, e.g. {'observation': [obs1, obs2, obs3]} where obs1 is one numpy.nd...
add_transitions
python
rlworkgroup/garage
src/garage/replay_buffer/replay_buffer.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/replay_buffer.py
MIT
def _get_storage_idx(self, size_increment=1): """Get the storage index for the episode to add into the buffer. Args: size_increment(int): The number of storage indices that new transitions will be placed in. Returns: numpy.ndarray: The indices to store s...
Get the storage index for the episode to add into the buffer. Args: size_increment(int): The number of storage indices that new transitions will be placed in. Returns: numpy.ndarray: The indices to store size_increment transitions at.
_get_storage_idx
python
rlworkgroup/garage
src/garage/replay_buffer/replay_buffer.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/replay_buffer.py
MIT
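The `_get_storage_idx` docstring above describes allocating slots for new episodes in a fixed-capacity buffer. A minimal sketch of one such allocation scheme (a simple wrap-around policy; `get_storage_idx` here is a hypothetical standalone helper, and the garage implementation may choose overwrite slots differently) could look like:

```python
import numpy as np


def get_storage_idx(current_size, capacity, size_increment=1):
    """Return the indices where size_increment new episodes go,
    wrapping around to overwrite the oldest slots once full."""
    if current_size + size_increment <= capacity:
        # Enough free space: append at the end.
        idx = np.arange(current_size, current_size + size_increment)
    else:
        # Wrap around: fill the remaining free slots, then
        # overwrite from the front of the buffer.
        overflow = size_increment - (capacity - current_size)
        idx = np.concatenate([np.arange(current_size, capacity),
                              np.arange(0, overflow)])
    new_size = min(capacity, current_size + size_increment)
    return idx, new_size
```

With a capacity of 5 and 3 episodes already stored, adding 4 more yields indices `[3, 4, 0, 1]`: two appended at the end and two overwriting the oldest entries.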
def _convert_episode_to_batch_major(self): """Convert the shape of episode_buffer. episode_buffer: {time_horizon, algo.episode_batch_size, flat_dim}. buffer: {size, time_horizon, flat_dim}. Returns: dict: Transitions that have been formatted to fit properly in this ...
Convert the shape of episode_buffer. episode_buffer: {time_horizon, algo.episode_batch_size, flat_dim}. buffer: {size, time_horizon, flat_dim}. Returns: dict: Transitions that have been formatted to fit properly in this replay buffer.
_convert_episode_to_batch_major
python
rlworkgroup/garage
src/garage/replay_buffer/replay_buffer.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/replay_buffer.py
MIT
def update_agent(self, agent_update): """Update an agent, assuming it implements :class:`~Policy`. Args: agent_update (np.ndarray or dict or Policy): If a tuple, dict, or np.ndarray, these should be parameters to agent, which should have been generated by cal...
Update an agent, assuming it implements :class:`~Policy`. Args: agent_update (np.ndarray or dict or Policy): If a tuple, dict, or np.ndarray, these should be parameters to agent, which should have been generated by calling `Policy.get_param_values`. A...
update_agent
python
rlworkgroup/garage
src/garage/sampler/default_worker.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/default_worker.py
MIT
def step_episode(self): """Take a single time-step in the current episode. Returns: bool: True iff the episode is done, either due to the environment indicating termination or due to reaching `max_episode_length`. """ if self._eps_length < self._max_episode_leng...
Take a single time-step in the current episode. Returns: bool: True iff the episode is done, either due to the environment indicating termination or due to reaching `max_episode_length`.
step_episode
python
rlworkgroup/garage
src/garage/sampler/default_worker.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/default_worker.py
MIT
def collect_episode(self): """Collect the current episode, clearing the internal buffer. Returns: EpisodeBatch: A batch of the episodes completed since the last call to collect_episode(). """ observations = self._observations self._observations = [] ...
Collect the current episode, clearing the internal buffer. Returns: EpisodeBatch: A batch of the episodes completed since the last call to collect_episode().
collect_episode
python
rlworkgroup/garage
src/garage/sampler/default_worker.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/default_worker.py
MIT
def rollout(self): """Sample a single episode of the agent in the environment. Returns: EpisodeBatch: The collected episode. """ self.start_episode() while not self.step_episode(): pass return self.collect_episode()
Sample a single episode of the agent in the environment. Returns: EpisodeBatch: The collected episode.
rollout
python
rlworkgroup/garage
src/garage/sampler/default_worker.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/default_worker.py
MIT
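The `rollout` record above shows the complete driver loop: it only chains `start_episode`, `step_episode`, and `collect_episode`. A toy stand-in (hypothetical, not the garage `DefaultWorker`) makes the protocol concrete:

```python
class ToyWorker:
    """Minimal stand-in showing the start/step/collect protocol
    that rollout() drives."""

    def __init__(self, length):
        self._length = length
        self._steps = None

    def start_episode(self):
        self._steps = []

    def step_episode(self):
        # Record one step; report True iff the episode is done.
        self._steps.append(len(self._steps))
        return len(self._steps) >= self._length

    def collect_episode(self):
        # Hand back the episode and clear the internal buffer.
        steps, self._steps = self._steps, []
        return steps

    def rollout(self):
        # Same loop as the rollout record above.
        self.start_episode()
        while not self.step_episode():
            pass
        return self.collect_episode()
```

`ToyWorker(3).rollout()` returns the three recorded steps `[0, 1, 2]` and leaves the internal buffer empty for the next episode.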
def __call__(self, old_env=None): """Update an environment. Args: old_env (Environment or None): Previous environment. Should not be used after being passed in, and should not be closed. Returns: Environment: The new, updated environment. """ ...
Update an environment. Args: old_env (Environment or None): Previous environment. Should not be used after being passed in, and should not be closed. Returns: Environment: The new, updated environment.
__call__
python
rlworkgroup/garage
src/garage/sampler/env_update.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/env_update.py
MIT
def _make_env(self): """Construct the environment, wrapping if necessary. Returns: garage.Env: The (possibly wrapped) environment. """ env = self._env_type() env.set_task(self._task) if self._wrapper_cons is not None: env = self._wrapper_cons(env...
Construct the environment, wrapping if necessary. Returns: garage.Env: The (possibly wrapped) environment.
_make_env
python
rlworkgroup/garage
src/garage/sampler/env_update.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/env_update.py
MIT
def __getstate__(self): """Get the pickle state. Returns: dict: The pickled state. """ warnings.warn('ExistingEnvUpdate is generally not the most efficient ' 'method of transmitting environments to other ' 'processes.') re...
Get the pickle state. Returns: dict: The pickled state.
__getstate__
python
rlworkgroup/garage
src/garage/sampler/env_update.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/env_update.py
MIT
def update_env(self, env_update): """Update the environments. If passed a list (*inside* this list passed to the Sampler itself), distributes the environments across the "vectorization" dimension. Args: env_update(Environment or EnvUpdate or None): The environment to ...
Update the environments. If passed a list (*inside* this list passed to the Sampler itself), distributes the environments across the "vectorization" dimension. Args: env_update(Environment or EnvUpdate or None): The environment to replace the existing env with. Note...
update_env
python
rlworkgroup/garage
src/garage/sampler/fragment_worker.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/fragment_worker.py
MIT
def start_episode(self): """Resets all agents if the environment was updated.""" if self._needs_env_reset: self._needs_env_reset = False self.agent.reset([True] * len(self._envs)) self._episode_lengths = [0] * len(self._envs) self._fragments = [InProgressE...
Resets all agents if the environment was updated.
start_episode
python
rlworkgroup/garage
src/garage/sampler/fragment_worker.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/fragment_worker.py
MIT
def step_episode(self): """Take a single time-step in the current episode. Returns: bool: True iff at least one of the episodes was completed. """ prev_obs = np.asarray([frag.last_obs for frag in self._fragments]) actions, agent_infos = self.agent.get_actions(prev_o...
Take a single time-step in the current episode. Returns: bool: True iff at least one of the episodes was completed.
step_episode
python
rlworkgroup/garage
src/garage/sampler/fragment_worker.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/fragment_worker.py
MIT
def collect_episode(self): """Gather fragments from all in-progress episodes. Returns: EpisodeBatch: A batch of the episode fragments. """ for i, frag in enumerate(self._fragments): assert frag.env is self._envs[i] if len(frag.rewards) > 0: ...
Gather fragments from all in-progress episodes. Returns: EpisodeBatch: A batch of the episode fragments.
collect_episode
python
rlworkgroup/garage
src/garage/sampler/fragment_worker.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/fragment_worker.py
MIT
def _update_workers(self, agent_update, env_update): """Apply updates to the workers. Args: agent_update (object): Value which will be passed into the `agent_update_fn` before sampling episodes. If a list is passed in, it must have length exactly `factory.n_w...
Apply updates to the workers. Args: agent_update (object): Value which will be passed into the `agent_update_fn` before sampling episodes. If a list is passed in, it must have length exactly `factory.n_workers`, and will be spread across the workers. ...
_update_workers
python
rlworkgroup/garage
src/garage/sampler/local_sampler.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/local_sampler.py
MIT
def obtain_samples(self, itr, num_samples, agent_update, env_update=None): """Collect at least a given number of transitions (timesteps). Args: itr(int): The current iteration number. Using this argument is deprecated. num_samples (int): Minimum number of transition...
Collect at least a given number of transitions (timesteps). Args: itr(int): The current iteration number. Using this argument is deprecated. num_samples (int): Minimum number of transitions / timesteps to sample. agent_update (object): Value whic...
obtain_samples
python
rlworkgroup/garage
src/garage/sampler/local_sampler.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/local_sampler.py
MIT
def obtain_exact_episodes(self, n_eps_per_worker, agent_update, env_update=None): """Sample an exact number of episodes per worker. Args: n_eps_per_worker (int): Exact number of episodes to gather for ...
Sample an exact number of episodes per worker. Args: n_eps_per_worker (int): Exact number of episodes to gather for each worker. agent_update (object): Value which will be passed into the `agent_update_fn` before sampling episodes. If a list is passed ...
obtain_exact_episodes
python
rlworkgroup/garage
src/garage/sampler/local_sampler.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/local_sampler.py
MIT
def __setstate__(self, state): """Unpickle the state. Args: state (dict): Unpickled state. """ self.__dict__.update(state) self._workers = [ self._factory(i) for i in range(self._factory.n_workers) ] for worker, agent, env in zip(self._wo...
Unpickle the state. Args: state (dict): Unpickled state.
__setstate__
python
rlworkgroup/garage
src/garage/sampler/local_sampler.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/local_sampler.py
MIT
def _push_updates(self, updated_workers, agent_updates, env_updates): """Apply updates to the workers and (re)start them. Args: updated_workers (set[int]): Set of workers that don't need to be updated. Successfully updated workers will be added to this set. ...
Apply updates to the workers and (re)start them. Args: updated_workers (set[int]): Set of workers that don't need to be updated. Successfully updated workers will be added to this set. agent_updates (object): Value which will be passed into the ...
_push_updates
python
rlworkgroup/garage
src/garage/sampler/multiprocessing_sampler.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/multiprocessing_sampler.py
MIT
def obtain_samples(self, itr, num_samples, agent_update, env_update=None): """Collect at least a given number of transitions (timesteps). Args: itr(int): The current iteration number. Using this argument is deprecated. num_samples (int): Minimum number of transition...
Collect at least a given number of transitions (timesteps). Args: itr(int): The current iteration number. Using this argument is deprecated. num_samples (int): Minimum number of transitions / timesteps to sample. agent_update (object): Value whic...
obtain_samples
python
rlworkgroup/garage
src/garage/sampler/multiprocessing_sampler.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/multiprocessing_sampler.py
MIT
def obtain_exact_episodes(self, n_eps_per_worker, agent_update, env_update=None): """Sample an exact number of episodes per worker. Args: n_eps_per_worker (int): Exact number of episodes to gather for ...
Sample an exact number of episodes per worker. Args: n_eps_per_worker (int): Exact number of episodes to gather for each worker. agent_update (object): Value which will be passed into the `agent_update_fn` before sampling episodes. If a list is passed ...
obtain_exact_episodes
python
rlworkgroup/garage
src/garage/sampler/multiprocessing_sampler.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/multiprocessing_sampler.py
MIT
def _update_workers(self, agent_update, env_update): """Update all of the workers. Args: agent_update (object): Value which will be passed into the `agent_update_fn` before sampling episodes. If a list is passed in, it must have length exactly `factory.n_work...
Update all of the workers. Args: agent_update (object): Value which will be passed into the `agent_update_fn` before sampling episodes. If a list is passed in, it must have length exactly `factory.n_workers`, and will be spread across the workers. ...
_update_workers
python
rlworkgroup/garage
src/garage/sampler/ray_sampler.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/ray_sampler.py
MIT
def obtain_samples(self, itr, num_samples, agent_update, env_update=None): """Sample the policy for new episodes. Args: itr (int): Iteration number. num_samples (int): Number of steps the sampler should collect. agent_update (object): Value which will be passed i...
Sample the policy for new episodes. Args: itr (int): Iteration number. num_samples (int): Number of steps the sampler should collect. agent_update (object): Value which will be passed into the `agent_update_fn` before sampling episodes. If a list is passe...
obtain_samples
python
rlworkgroup/garage
src/garage/sampler/ray_sampler.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/ray_sampler.py
MIT
def obtain_exact_episodes(self, n_eps_per_worker, agent_update, env_update=None): """Sample an exact number of episodes per worker. Args: n_eps_per_worker (int): Exact number of episodes to gather for ...
Sample an exact number of episodes per worker. Args: n_eps_per_worker (int): Exact number of episodes to gather for each worker. agent_update (object): Value which will be passed into the `agent_update_fn` before sampling episodes. If a list is passed ...
obtain_exact_episodes
python
rlworkgroup/garage
src/garage/sampler/ray_sampler.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/ray_sampler.py
MIT
def update(self, agent_update, env_update): """Update the agent and environment. Args: agent_update (object): Agent update. env_update (object): Environment update. Returns: int: The worker id. """ self.inner_worker.update_agent(agent_update...
Update the agent and environment. Args: agent_update (object): Agent update. env_update (object): Environment update. Returns: int: The worker id.
update
python
rlworkgroup/garage
src/garage/sampler/ray_sampler.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/ray_sampler.py
MIT
def __init__(self, algo, env): """Construct a Sampler from an Algorithm. Args: algo (RLAlgorithm): The RL Algorithm controlling this sampler. env (Environment): The environment being sampled from. Calling this method is deprecated. """ s...
Construct a Sampler from an Algorithm. Args: algo (RLAlgorithm): The RL Algorithm controlling this sampler. env (Environment): The environment being sampled from. Calling this method is deprecated.
__init__
python
rlworkgroup/garage
src/garage/sampler/sampler.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/sampler.py
MIT
def start_worker(self): """Initialize the sampler, e.g. by launching parallel workers if necessary. This method is deprecated; please launch workers in the constructor instead. """
Initialize the sampler, e.g. by launching parallel workers if necessary. This method is deprecated; please launch workers in the constructor instead.
start_worker
python
rlworkgroup/garage
src/garage/sampler/sampler.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/sampler.py
MIT
def obtain_samples(self, itr, num_samples, agent_update, env_update=None): """Collect at least a given number of transition :class:`TimeStep`s. Args: itr (int): The current iteration number. Using this argument is deprecated. num_samples (int): Minimum number of :c...
Collect at least a given number of transition :class:`TimeStep`s. Args: itr (int): The current iteration number. Using this argument is deprecated. num_samples (int): Minimum number of :class:`TimeStep`s to sample. agent_update (object): Value which will be pas...
obtain_samples
python
rlworkgroup/garage
src/garage/sampler/sampler.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/sampler.py
MIT
def shutdown_worker(self): """Terminate workers if necessary. Because Python object destruction can be somewhat unpredictable, this method isn't deprecated. """
Terminate workers if necessary. Because Python object destruction can be somewhat unpredictable, this method isn't deprecated.
shutdown_worker
python
rlworkgroup/garage
src/garage/sampler/sampler.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/sampler.py
MIT
def rollout(env, agent, *, max_episode_length=np.inf, animated=False, speedup=1, deterministic=False): """Sample a single episode of the agent in the environment. Args: agent (Policy): Agent used to select actions. env (Env...
Sample a single episode of the agent in the environment. Args: agent (Policy): Agent used to select actions. env (Environment): Environment to perform actions in. max_episode_length (int): If the episode reaches this many timesteps, it is truncated. animated (bool): If t...
rollout
python
rlworkgroup/garage
src/garage/sampler/utils.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/utils.py
MIT
def truncate_paths(paths, max_samples): """Truncate the paths so that the total number of samples is max_samples. This is done by removing extra paths at the end of the list, and making the last path shorter if necessary. Args: paths (list[dict[str, np.ndarray]]): Samples, items with keys: ...
Truncate the paths so that the total number of samples is max_samples. This is done by removing extra paths at the end of the list, and making the last path shorter if necessary. Args: paths (list[dict[str, np.ndarray]]): Samples, items with keys: * observations (np.ndarray): Environment o...
truncate_paths
python
rlworkgroup/garage
src/garage/sampler/utils.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/utils.py
MIT
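The `truncate_paths` docstring above fully specifies the truncation policy: drop whole paths from the end of the list, then shorten the last kept path. A self-contained sketch of that policy (not the garage implementation, which also handles nested info dicts) might be:

```python
import numpy as np


def truncate_paths(paths, max_samples):
    """Keep paths in order until max_samples total steps is reached,
    slicing the last kept path down to fit exactly."""
    truncated, total = [], 0
    for path in paths:
        n = len(path['rewards'])
        if total + n <= max_samples:
            # Whole path fits.
            truncated.append(path)
            total += n
        else:
            # Shorten this path so the total equals max_samples,
            # then drop all remaining paths.
            keep = max_samples - total
            if keep > 0:
                truncated.append({k: v[:keep] for k, v in path.items()})
            break
    return truncated
```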
def update_agent(self, agent_update): """Update an agent, assuming it implements :class:`~Policy`. Args: agent_update (np.ndarray or dict or Policy): If a tuple, dict, or np.ndarray, these should be parameters to agent, which should have been generated by cal...
Update an agent, assuming it implements :class:`~Policy`. Args: agent_update (np.ndarray or dict or Policy): If a tuple, dict, or np.ndarray, these should be parameters to agent, which should have been generated by calling `Policy.get_param_values`. A...
update_agent
python
rlworkgroup/garage
src/garage/sampler/vec_worker.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/vec_worker.py
MIT
def update_env(self, env_update): """Update the environments. If passed a list (*inside* this list passed to the Sampler itself), distributes the environments across the "vectorization" dimension. Args: env_update(Environment or EnvUpdate or None): The environment to ...
Update the environments. If passed a list (*inside* this list passed to the Sampler itself), distributes the environments across the "vectorization" dimension. Args: env_update(Environment or EnvUpdate or None): The environment to replace the existing env with. Note...
update_env
python
rlworkgroup/garage
src/garage/sampler/vec_worker.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/vec_worker.py
MIT
def step_episode(self): """Take a single time-step in the current episode. Returns: bool: True iff at least one of the episodes was completed. """ finished = False actions, agent_info = self.agent.get_actions(self._prev_obs) completes = [False] * len(self._en...
Take a single time-step in the current episode. Returns: bool: True iff at least one of the episodes was completed.
step_episode
python
rlworkgroup/garage
src/garage/sampler/vec_worker.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/vec_worker.py
MIT
def collect_episode(self): """Collect all completed episodes. Returns: EpisodeBatch: A batch of the episodes completed since the last call to collect_episode(). """ if len(self._completed_episodes) == 1: result = self._completed_episodes[0] ...
Collect all completed episodes. Returns: EpisodeBatch: A batch of the episodes completed since the last call to collect_episode().
collect_episode
python
rlworkgroup/garage
src/garage/sampler/vec_worker.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/vec_worker.py
MIT
def __init__(self, *, seed, max_episode_length, worker_number): """Initialize a worker. Args: seed (int): The seed to use to initialize random number generators. max_episode_length (int or float): The maximum length of episodes which will be sampled. Can be (float...
Initialize a worker. Args: seed (int): The seed to use to initialize random number generators. max_episode_length (int or float): The maximum length of episodes which will be sampled. Can be (floating point) infinity. worker_number (int): The number of the wor...
__init__
python
rlworkgroup/garage
src/garage/sampler/worker.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/worker.py
MIT
def update_agent(self, agent_update): """Update the worker's agent, using agent_update. Args: agent_update (object): An agent update. The exact type of this argument depends on the `Worker` implementation. """
Update the worker's agent, using agent_update. Args: agent_update (object): An agent update. The exact type of this argument depends on the `Worker` implementation.
update_agent
python
rlworkgroup/garage
src/garage/sampler/worker.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/worker.py
MIT
def update_env(self, env_update): """Update the worker's env, using env_update. Args: env_update (object): An environment update. The exact type of this argument depends on the `Worker` implementation. """
Update the worker's env, using env_update. Args: env_update (object): An environment update. The exact type of this argument depends on the `Worker` implementation.
update_env
python
rlworkgroup/garage
src/garage/sampler/worker.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/worker.py
MIT
def rollout(self): """Sample a single episode of the agent in the environment. Returns: EpisodeBatch: Batch of sampled episodes. May be truncated if max_episode_length is set. """
Sample a single episode of the agent in the environment. Returns: EpisodeBatch: Batch of sampled episodes. May be truncated if max_episode_length is set.
rollout
python
rlworkgroup/garage
src/garage/sampler/worker.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/worker.py
MIT
def step_episode(self): """Take a single time-step in the current episode. Returns: True iff the episode is done, either due to the environment indicating termination or due to reaching `max_episode_length`. """
Take a single time-step in the current episode. Returns: True iff the episode is done, either due to the environment indicating termination or due to reaching `max_episode_length`.
step_episode
python
rlworkgroup/garage
src/garage/sampler/worker.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/worker.py
MIT
def collect_episode(self): """Collect the current episode, clearing the internal buffer. Returns: EpisodeBatch: Batch of sampled episodes. May be truncated if the episodes haven't completed yet. """
Collect the current episode, clearing the internal buffer. Returns: EpisodeBatch: Batch of sampled episodes. May be truncated if the episodes haven't completed yet.
collect_episode
python
rlworkgroup/garage
src/garage/sampler/worker.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/worker.py
MIT
def __getstate__(self): """Refuse to be pickled. Raises: ValueError: Always raised, since pickling Workers is not supported. """ raise ValueError('Workers are not pickleable. ' 'Please pickle the WorkerFactory instead.')
Refuse to be pickled. Raises: ValueError: Always raised, since pickling Workers is not supported.
__getstate__
python
rlworkgroup/garage
src/garage/sampler/worker.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/worker.py
MIT
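The `__getstate__` record above shows a deliberate refuse-to-pickle pattern: an object holding unpicklable state (environments, open sessions) raises on any pickling attempt and points users at a picklable factory instead. A minimal sketch of the same pattern (class name here is illustrative):

```python
import pickle


class Unpicklable:
    """Raise on pickling, directing users to pickle a factory
    object instead of this stateful worker."""

    def __getstate__(self):
        raise ValueError('Workers are not pickleable. '
                         'Please pickle the WorkerFactory instead.')
```

Because `pickle` consults `__getstate__` for new-style classes, `pickle.dumps(Unpicklable())` raises `ValueError` immediately rather than failing later on some deeply nested unpicklable attribute.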
def prepare_worker_messages(self, objs, preprocess=identity_function): """Take an argument and canonicalize it into a list for all workers. This helper function is used to handle arguments in the sampler API which may (optionally) be lists. Specifically, these are agent, env, agent_upda...
Take an argument and canonicalize it into a list for all workers. This helper function is used to handle arguments in the sampler API which may (optionally) be lists. Specifically, these are agent, env, agent_update, and env_update. Checks that the number of parameters is correct. ...
prepare_worker_messages
python
rlworkgroup/garage
src/garage/sampler/worker_factory.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/worker_factory.py
MIT
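The `prepare_worker_messages` docstring above describes canonicalizing a possibly-list argument into one message per worker. A plausible sketch of that behavior (signature here is an assumption; the real method reads `n_workers` from the factory itself):

```python
def prepare_worker_messages(objs, n_workers, preprocess=lambda x: x):
    """Canonicalize objs into a list of length n_workers: a list
    must match n_workers exactly; a single object is broadcast."""
    if isinstance(objs, list):
        if len(objs) != n_workers:
            raise ValueError('Length of list does not match '
                             'the number of workers')
        return [preprocess(obj) for obj in objs]
    # A single object is sent to every worker.
    return [preprocess(objs)] * n_workers
```

Broadcasting a single agent update this way is what lets callers pass either one shared object or a per-worker list to `obtain_samples` without extra bookkeeping.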
def __call__(self, worker_number): """Construct a worker given its number. Args: worker_number(int): The worker number. Should be at least 0 and less than or equal to `n_workers`. Raises: ValueError: If the worker number is greater than `n_workers`. ...
Construct a worker given its number. Args: worker_number(int): The worker number. Should be at least 0 and less than or equal to `n_workers`. Raises: ValueError: If the worker number is greater than `n_workers`. Returns: garage.sampler.Worke...
__call__
python
rlworkgroup/garage
src/garage/sampler/worker_factory.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/worker_factory.py
MIT
def step(self, action, agent_info): """Step the episode using an action from an agent. Args: action (np.ndarray): The action taken by the agent. agent_info (dict[str, np.ndarray]): Extra agent information. Returns: np.ndarray: The new observation from the en...
Step the episode using an action from an agent. Args: action (np.ndarray): The action taken by the agent. agent_info (dict[str, np.ndarray]): Extra agent information. Returns: np.ndarray: The new observation from the environment.
step
python
rlworkgroup/garage
src/garage/sampler/_dtypes.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/_dtypes.py
MIT
def to_batch(self): """Convert this in-progress episode into an EpisodeBatch. Returns: EpisodeBatch: This episode as a batch. Raises: AssertionError: If this episode contains no time steps. """ assert len(self.rewards) > 0 env_infos = dict(self.e...
Convert this in-progress episode into an EpisodeBatch. Returns: EpisodeBatch: This episode as a batch. Raises: AssertionError: If this episode contains no time steps.
to_batch
python
rlworkgroup/garage
src/garage/sampler/_dtypes.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/_dtypes.py
MIT
def _apply_env_update(old_env, env_update): """Use any non-None env_update as a new environment. A simple env update function. If env_update is not None, it should be the complete new environment. This allows changing environments by passing the new environment as `env_update` into `obtain_samples...
Use any non-None env_update as a new environment. A simple env update function. If env_update is not None, it should be the complete new environment. This allows changing environments by passing the new environment as `env_update` into `obtain_samples`. Args: old_env (Environment): Enviro...
_apply_env_update
python
rlworkgroup/garage
src/garage/sampler/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/sampler/_functions.py
MIT
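The `_apply_env_update` docstring above reduces to a single rule: any non-None `env_update` is taken as the complete replacement environment. A sketch of just that core rule (the real function also validates types and handles `EnvUpdate` objects):

```python
def apply_env_update(old_env, env_update):
    """Return env_update as the new environment if given,
    otherwise keep the existing one."""
    if env_update is not None:
        return env_update
    return old_env
```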
def compile_function(inputs, outputs): """Compiles a tensorflow function using the current session. Args: inputs (list[tf.Tensor]): Inputs to the function. Can be a list of inputs or just one. outputs (list[tf.Tensor]): Outputs of the function. Can be a list of outputs o...
Compiles a tensorflow function using the current session. Args: inputs (list[tf.Tensor]): Inputs to the function. Can be a list of inputs or just one. outputs (list[tf.Tensor]): Outputs of the function. Can be a list of outputs or just one. Returns: Callable: Co...
compile_function
python
rlworkgroup/garage
src/garage/tf/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/_functions.py
MIT
def get_target_ops(variables, target_variables, tau=None): """Get target variables update operations. In RL algorithms we often update target network every n steps. This function returns the tf.Operation for updating target variables (denoted by target_var) from variables (denoted by var) with fract...
Get target variables update operations. In RL algorithms we often update target network every n steps. This function returns the tf.Operation for updating target variables (denoted by target_var) from variables (denoted by var) with fraction tau. In other words, each time we want to keep tau of the ...
get_target_ops
python
rlworkgroup/garage
src/garage/tf/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/_functions.py
MIT
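The `get_target_ops` docstring describes the standard soft (Polyak) target update: keep a fraction `tau` of the online variables each step. The record builds TensorFlow ops; the arithmetic itself can be sketched framework-free in NumPy (function name here is illustrative):

```python
import numpy as np


def soft_update(variables, target_variables, tau):
    """In-place soft target update:
    target <- tau * var + (1 - tau) * target."""
    for var, target in zip(variables, target_variables):
        target[...] = tau * var + (1.0 - tau) * target
```

With `tau=1.0` this degenerates to a hard copy of the online network, which is what the TF version presumably emits as its initialization op.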
def flatten_batch_dict(d, name='flatten_batch_dict'): """Flatten a batch of observations represented as a dict. Args: d (dict[tf.Tensor]): A dict of Tensors to flatten. name (string): The name of the operation (None by default). Returns: dict[tf.Tensor]: A dict with flattened tenso...
Flatten a batch of observations represented as a dict. Args: d (dict[tf.Tensor]): A dict of Tensors to flatten. name (string): The name of the operation (None by default). Returns: dict[tf.Tensor]: A dict with flattened tensors.
flatten_batch_dict
python
rlworkgroup/garage
src/garage/tf/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/_functions.py
MIT
def filter_valids(t, valid, name='filter_valids'): """Filter out tensor using valid array. Args: t (tf.Tensor): The tensor to filter. valid (list[float]): Array of length of the valid values (either 0 or 1). name (string): Name of the operation. Returns: tf.Tens...
Filter out tensor using valid array. Args: t (tf.Tensor): The tensor to filter. valid (list[float]): Array of length of the valid values (either 0 or 1). name (string): Name of the operation. Returns: tf.Tensor: Filtered Tensor.
filter_valids
python
rlworkgroup/garage
src/garage/tf/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/_functions.py
MIT
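The `filter_valids` docstring describes masking a tensor by a 0/1 validity array, which is how variable-length episodes packed into one fixed-shape batch are unpadded. A NumPy analogue of the TF op (an illustration, not the garage function):

```python
import numpy as np


def filter_valids(t, valid):
    """Keep only the rows of t whose valid flag is 1."""
    return t[np.asarray(valid, dtype=bool)]
```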
def filter_valids_dict(d, valid, name='filter_valids_dict'): """Filter valid values on a dict. Args: d (dict[tf.Tensor]): Dict of tensors to be filtered. valid (list[float]): Array of length of the valid values (elements can be either 0 or 1). name (string): Name of the operatio...
Filter valid values on a dict. Args: d (dict[tf.Tensor]): Dict of tensors to be filtered. valid (list[float]): Array of length of the valid values (elements can be either 0 or 1). name (string): Name of the operation ('filter_valids_dict' by default). Returns: dict[tf.Tensor]: Dict with...
filter_valids_dict
python
rlworkgroup/garage
src/garage/tf/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/_functions.py
MIT
def flatten_inputs(deep): """Flattens an :class:`collections.abc.Iterable` recursively. Args: deep (Iterable): An :class:`~collections.abc.Iterable` to flatten. Returns: list: The flattened result. """ def flatten(deep): # pylint: disable=missing-yield-doc,missing-yield-ty...
Flattens an :class:`collections.abc.Iterable` recursively. Args: deep (Iterable): An :class:`~collections.abc.Iterable` to flatten. Returns: list: The flattened result.
flatten_inputs
python
rlworkgroup/garage
src/garage/tf/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/_functions.py
MIT
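The recursive flatten above can be sketched without the generator machinery. One assumption in this version: only lists and tuples count as nested iterables, so strings and arrays are treated as leaves.

```python
def flatten_inputs_sketch(deep):
    """Recursively flatten a nested iterable into a flat list, per the
    flatten_inputs docstring above (lists/tuples only; other values
    are treated as leaves)."""
    flat = []
    for item in deep:
        if isinstance(item, (list, tuple)):
            flat.extend(flatten_inputs_sketch(item))
        else:
            flat.append(item)
    return flat

result = flatten_inputs_sketch([1, [2, (3, 4)], 5])
```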
def new_tensor(name, ndim, dtype): """Creates a placeholder :class:`tf.Tensor` with the specified arguments. Args: name (string): Name of the tf.Tensor. ndim (int): Number of dimensions of the tf.Tensor. dtype (type): Data type of the tf.Tensor's contents. Returns: tf.Tenso...
Creates a placeholder :class:`tf.Tensor` with the specified arguments. Args: name (string): Name of the tf.Tensor. ndim (int): Number of dimensions of the tf.Tensor. dtype (type): Data type of the tf.Tensor's contents. Returns: tf.Tensor: Placeholder tensor.
new_tensor
python
rlworkgroup/garage
src/garage/tf/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/_functions.py
MIT
def concat_tensor_dict_list(tensor_dict_list): """Concatenates a dict of tensor lists. Each list of tensors gets concatenated into one tensor. Args: tensor_dict_list (dict[list[ndarray]]): Dict with lists of tensors. Returns: dict[ndarray]: A dict with the concatenated tensors. "...
Concatenates a dict of tensor lists. Each list of tensors gets concatenated into one tensor. Args: tensor_dict_list (dict[list[ndarray]]): Dict with lists of tensors. Returns: dict[ndarray]: A dict with the concatenated tensors.
concat_tensor_dict_list
python
rlworkgroup/garage
src/garage/tf/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/_functions.py
MIT
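A flat-dict sketch of the concatenation above. The real helper also recurses into nested dicts of tensors, which this simplified version omits:

```python
import numpy as np

def concat_tensor_dict_list_sketch(tensor_dict_list):
    """Concatenate same-named arrays across a list of dicts
    (flat dicts only; nested dicts are not handled here)."""
    keys = tensor_dict_list[0].keys()
    return {k: np.concatenate([d[k] for d in tensor_dict_list])
            for k in keys}

batch = concat_tensor_dict_list_sketch([
    {'reward': np.array([1.0, 2.0])},
    {'reward': np.array([3.0])},
])
```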
def stack_tensor_dict_list(tensor_dict_list): """Stack a list of dictionaries of {tensors or dictionary of tensors}. Args: tensor_dict_list (dict): a list of dictionaries of {tensors or dictionary of tensors}. Returns: dict: a dictionary of {stacked tensors or dictionary of sta...
Stack a list of dictionaries of {tensors or dictionary of tensors}. Args: tensor_dict_list (dict): a list of dictionaries of {tensors or dictionary of tensors}. Returns: dict: a dictionary of {stacked tensors or dictionary of stacked tensors}.
stack_tensor_dict_list
python
rlworkgroup/garage
src/garage/tf/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/_functions.py
MIT
def split_tensor_dict_list(tensor_dict): """Split a dictionary of {tensors or dictionary of tensors} into a list of dictionaries. Args: tensor_dict (dict): a dictionary of {tensors or dictionary of tensors}. Returns: dict: a dictionary of {split tensors or dictionary of split tensors}. ...
Split a dictionary of {tensors or dictionary of tensors} into a list of dictionaries. Args: tensor_dict (dict): a dictionary of {tensors or dictionary of tensors}. Returns: dict: a dictionary of {split tensors or dictionary of split tensors}.
split_tensor_dict_list
python
rlworkgroup/garage
src/garage/tf/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/_functions.py
MIT
def pad_tensor(x, max_len): """Pad tensors with zeros. Args: x (numpy.ndarray): Tensors to be padded. max_len (int): Maximum length. Returns: numpy.ndarray: Padded tensor. """ return np.concatenate([ x, np.tile(np.zeros_like(x[0]), (max_len -...
Pad tensors with zeros. Args: x (numpy.ndarray): Tensors to be padded. max_len (int): Maximum length. Returns: numpy.ndarray: Padded tensor.
pad_tensor
python
rlworkgroup/garage
src/garage/tf/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/_functions.py
MIT
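The truncated np.concatenate/np.tile pattern above is equivalent to allocating a zero block of the right trailing shape and appending it along the leading (time) axis. An equivalent sketch:

```python
import numpy as np

def pad_tensor_sketch(x, max_len):
    """Zero-pad the leading (time) axis of x out to max_len,
    equivalent to the np.concatenate/np.tile pattern above."""
    pad = np.zeros((max_len - len(x),) + x.shape[1:], dtype=x.dtype)
    return np.concatenate([x, pad])

padded = pad_tensor_sketch(np.array([1.0, 2.0]), 4)
```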
def pad_tensor_n(xs, max_len): """Pad array of tensors. Args: xs (numpy.ndarray): Tensors to be padded. max_len (int): Maximum length. Returns: numpy.ndarray: Padded tensor. """ ret = np.zeros((len(xs), max_len) + xs[0].shape[1:], dtype=xs[0].dtype) for idx, x in enumer...
Pad array of tensors. Args: xs (numpy.ndarray): Tensors to be padded. max_len (int): Maximum length. Returns: numpy.ndarray: Padded tensor.
pad_tensor_n
python
rlworkgroup/garage
src/garage/tf/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/_functions.py
MIT
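Completing the loop that the truncated body above starts: pre-allocate a zero array of shape `(N, max_len, ...)`, then copy each episode's values into its leading slots.

```python
import numpy as np

def pad_tensor_n_sketch(xs, max_len):
    """Stack variable-length arrays into one zero-padded
    (N, max_len, ...) array, per the pad_tensor_n sketch above."""
    ret = np.zeros((len(xs), max_len) + xs[0].shape[1:], dtype=xs[0].dtype)
    for idx, x in enumerate(xs):
        ret[idx, :len(x)] = x
    return ret

stacked = pad_tensor_n_sketch([np.array([1, 2]), np.array([3])], 3)
```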
def pad_tensor_dict(tensor_dict, max_len): """Pad dictionary of tensors with zeros. Args: tensor_dict (dict[numpy.ndarray]): Tensors to be padded. max_len (int): Maximum length. Returns: dict[numpy.ndarray]: Padded tensor. """ keys = list(tensor_dict.keys()) ret = dict(...
Pad dictionary of tensors with zeros. Args: tensor_dict (dict[numpy.ndarray]): Tensors to be padded. max_len (int): Maximum length. Returns: dict[numpy.ndarray]: Padded tensor.
pad_tensor_dict
python
rlworkgroup/garage
src/garage/tf/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/_functions.py
MIT
def compute_advantages(discount, gae_lambda, max_len, baselines, rewards, name='compute_advantages'): """Calculate advantages. Advantages are a discounted cumulative sum. The discounted cumulat...
Calculate advantages. Advantages are a discounted cumulative sum. The discounted cumulative sum can be represented as an IIR filter on the reversed input vectors, i.e. y[t] - discount*y[t+1] = x[t], or rev(y)[t] - discount*rev(y)[t-1] = rev(x)[t] Given the time-domain IIR filter step response,...
compute_advantages
python
rlworkgroup/garage
src/garage/tf/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/_functions.py
MIT
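The reverse-time recursion y[t] = x[t] + discount * y[t+1] named in the docstring, plus a single-episode GAE sketch built on it. Assumptions: one unpadded episode, and a bootstrap value of 0 after the last step (garage's tf version works on padded batches instead).

```python
import numpy as np

def discount_cumsum(x, discount):
    """Reverse-time recursion y[t] = x[t] + discount * y[t+1],
    the IIR-filter view described in the docstring above."""
    out = np.zeros(len(x))
    running = 0.0
    for t in reversed(range(len(x))):
        running = x[t] + discount * running
        out[t] = running
    return out

def gae_advantages(rewards, baselines, discount, gae_lambda):
    """Single-episode GAE sketch: delta_t = r_t + gamma*V_{t+1} - V_t,
    then the (gamma*lambda)-discounted cumulative sum of the deltas.
    Assumes V = 0 past the episode end."""
    v_next = np.append(baselines[1:], 0.0)
    deltas = rewards + discount * v_next - baselines
    return discount_cumsum(deltas, discount * gae_lambda)

adv = gae_advantages(np.array([1.0, 1.0]), np.array([0.5, 0.5]),
                     discount=1.0, gae_lambda=1.0)
```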
def center_advs(advs, axes, eps, offset=0, scale=1, name='center_adv'): """Normalize the advs tensor. This calculates the mean and variance using the axes specified and normalizes the tensor using those values. Args: advs (tf.Tensor): Tensor to normalize. axes (array[int]): Axes along ...
Normalize the advs tensor. This calculates the mean and variance using the axes specified and normalizes the tensor using those values. Args: advs (tf.Tensor): Tensor to normalize. axes (array[int]): Axes along which to compute the mean and variance. eps (float): Small number to av...
center_advs
python
rlworkgroup/garage
src/garage/tf/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/_functions.py
MIT
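A numpy sketch of the normalization above (the tf version computes moments over the given axes with eps added under the square root; this simplified version standardizes over the whole array):

```python
import numpy as np

def center_advs_sketch(advs, eps=1e-8, offset=0.0, scale=1.0):
    """Standardize advantages to mean `offset` and std `scale`,
    mirroring center_advs above over the whole array."""
    return (advs - advs.mean()) / (advs.std() + eps) * scale + offset

normed = center_advs_sketch(np.array([1.0, 2.0, 3.0, 4.0]))
```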
def positive_advs(advs, eps, name='positive_adv'): """Make all the values in the advs tensor positive. Offsets all values in advs by the minimum value in the tensor, plus an epsilon value to avoid dividing by zero. Args: advs (tf.Tensor): The tensor to offset. eps (tf.float32): A small...
Make all the values in the advs tensor positive. Offsets all values in advs by the minimum value in the tensor, plus an epsilon value to avoid dividing by zero. Args: advs (tf.Tensor): The tensor to offset. eps (tf.float32): A small value to avoid division by zero. name (string): N...
positive_advs
python
rlworkgroup/garage
src/garage/tf/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/_functions.py
MIT
def discounted_returns(discount, max_len, rewards, name='discounted_returns'): """Calculate discounted returns. Args: discount (float): Discount factor. max_len (int): Maximum length of a single episode. rewards (tf.Tensor): A 2D tensor of per-step rewards with shape :math:`...
Calculate discounted returns. Args: discount (float): Discount factor. max_len (int): Maximum length of a single episode. rewards (tf.Tensor): A 2D tensor of per-step rewards with shape :math:`(N, T)`, where :math:`N` is the batch dimension (number of episodes) and :...
discounted_returns
python
rlworkgroup/garage
src/garage/tf/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/_functions.py
MIT
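The batched computation can be sketched as a reverse-time sweep over the `(N, T)` reward array. This is a numpy illustration of the semantics, not garage's tf graph (padding past episode ends is assumed to be zero-reward):

```python
import numpy as np

def discounted_returns_sketch(rewards, discount):
    """Per-step discounted return for an (N, T) array of padded
    per-episode rewards, per the discounted_returns docstring above."""
    n, t = rewards.shape
    returns = np.zeros_like(rewards, dtype=float)
    running = np.zeros(n)
    for step in reversed(range(t)):
        running = rewards[:, step] + discount * running
        returns[:, step] = running
    return returns

rets = discounted_returns_sketch(np.array([[1.0, 1.0, 1.0]]), discount=0.9)
```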
def _init_opt(self): """Build the loss function and init the optimizer.""" with tf.name_scope(self._name): # Create target policy and qf network with tf.name_scope('inputs'): obs_dim = self._env_spec.observation_space.flat_dim input_y = tf.compat.v...
Build the loss function and init the optimizer.
_init_opt
python
rlworkgroup/garage
src/garage/tf/algos/ddpg.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/ddpg.py
MIT
def __getstate__(self): """Object.__getstate__. Returns: dict: the state to be pickled for the instance. """ data = self.__dict__.copy() del data['_target_policy_f_prob_online'] del data['_target_qf_f_prob_online'] del data['_f_train_policy'] ...
Object.__getstate__. Returns: dict: the state to be pickled for the instance.
__getstate__
python
rlworkgroup/garage
src/garage/tf/algos/ddpg.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/ddpg.py
MIT
def train(self, trainer): """Obtain samplers and start actual training for each epoch. Args: trainer (Trainer): Experiment trainer, which provides services such as snapshotting and sampler control. Returns: float: The average return in last epoch cycle. ...
Obtain samplers and start actual training for each epoch. Args: trainer (Trainer): Experiment trainer, which provides services such as snapshotting and sampler control. Returns: float: The average return in last epoch cycle.
train
python
rlworkgroup/garage
src/garage/tf/algos/ddpg.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/ddpg.py
MIT
def _train_once(self, itr, episodes): """Perform one step of policy optimization given one batch of samples. Args: itr (int): Iteration number. episodes (EpisodeBatch): Batch of episodes. """ self._replay_buffer.add_episode_batch(episodes) epoch = itr / ...
Perform one step of policy optimization given one batch of samples. Args: itr (int): Iteration number. episodes (EpisodeBatch): Batch of episodes.
_train_once
python
rlworkgroup/garage
src/garage/tf/algos/ddpg.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/ddpg.py
MIT
def _optimize_policy(self): """Perform algorithm optimizing. Returns: float: Loss of action predicted by the policy network float: Loss of q value predicted by the q network. float: ys. float: Q value predicted by the q network. """ times...
Perform algorithm optimizing. Returns: float: Loss of action predicted by the policy network float: Loss of q value predicted by the q network. float: ys. float: Q value predicted by the q network.
_optimize_policy
python
rlworkgroup/garage
src/garage/tf/algos/ddpg.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/ddpg.py
MIT
def _init_opt(self): """Initialize the networks and Ops. Assume a discrete action space for DQN, so the action dimension will always be action_space.n """ action_dim = self._env_spec.action_space.n # build q networks with tf.name_scope(self._name): action_t_ph = ...
Initialize the networks and Ops. Assume a discrete action space for DQN, so the action dimension will always be action_space.n
_init_opt
python
rlworkgroup/garage
src/garage/tf/algos/dqn.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/dqn.py
MIT
def _train_once(self, itr, episodes): """Perform one step of policy optimization given one batch of samples. Args: itr (int): Iteration number. episodes (EpisodeBatch): Batch of episodes. Returns: list[float]: Q function losses """ self._rep...
Perform one step of policy optimization given one batch of samples. Args: itr (int): Iteration number. episodes (EpisodeBatch): Batch of episodes. Returns: list[float]: Q function losses
_train_once
python
rlworkgroup/garage
src/garage/tf/algos/dqn.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/dqn.py
MIT
def _optimize_policy(self): """Optimize network using experiences from replay buffer. Returns: numpy.float64: Loss of policy. """ timesteps = self._replay_buffer.sample_timesteps( self._buffer_batch_size) observations = timesteps.observations re...
Optimize network using experiences from replay buffer. Returns: numpy.float64: Loss of policy.
_optimize_policy
python
rlworkgroup/garage
src/garage/tf/algos/dqn.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/dqn.py
MIT
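The Q-network loss optimized above regresses toward the standard DQN target. A numpy sketch of that target computation (garage builds it inside the tf graph in `_init_opt`; names here are illustrative):

```python
import numpy as np

def td_targets(rewards, terminals, next_q_values, discount):
    """Standard DQN target: y = r + (1 - done) * gamma * max_a Q'(s', a),
    with terminal transitions bootstrapping nothing."""
    best_next_q = next_q_values.max(axis=1)
    return rewards + (1.0 - terminals) * discount * best_next_q

ys = td_targets(np.array([1.0, 1.0]),
                np.array([0.0, 1.0]),          # second transition is terminal
                np.array([[2.0, 3.0], [5.0, 4.0]]),
                discount=0.9)
```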
def train(self, trainer): """Obtain samplers and start actual training for each epoch. Args: trainer (Trainer): Experiment trainer, which provides services such as snapshotting and sampler control. Returns: float: The average return in last epoch cycle. ...
Obtain samplers and start actual training for each epoch. Args: trainer (Trainer): Experiment trainer, which provides services such as snapshotting and sampler control. Returns: float: The average return in last epoch cycle.
train
python
rlworkgroup/garage
src/garage/tf/algos/npo.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/npo.py
MIT
def _train_once(self, itr, episodes): """Perform one step of policy optimization given one batch of samples. Args: itr (int): Iteration number. episodes (EpisodeBatch): Batch of episodes. Returns: numpy.float64: Average return. """ # -- Stag...
Perform one step of policy optimization given one batch of samples. Args: itr (int): Iteration number. episodes (EpisodeBatch): Batch of episodes. Returns: numpy.float64: Average return.
_train_once
python
rlworkgroup/garage
src/garage/tf/algos/npo.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/npo.py
MIT
def _optimize_policy(self, episodes, baselines): """Optimize policy. Args: episodes (EpisodeBatch): Batch of episodes. baselines (np.ndarray): Baseline predictions. """ policy_opt_input_values = self._policy_opt_input_values( episodes, baselines) ...
Optimize policy. Args: episodes (EpisodeBatch): Batch of episodes. baselines (np.ndarray): Baseline predictions.
_optimize_policy
python
rlworkgroup/garage
src/garage/tf/algos/npo.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/npo.py
MIT
def _build_inputs(self): """Build input variables. Returns: namedtuple: Collection of variables to compute policy loss. namedtuple: Collection of variables to do policy optimization. """ observation_space = self.policy.observation_space action_space = se...
Build input variables. Returns: namedtuple: Collection of variables to compute policy loss. namedtuple: Collection of variables to do policy optimization.
_build_inputs
python
rlworkgroup/garage
src/garage/tf/algos/npo.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/npo.py
MIT
def _build_policy_loss(self, i): """Build policy loss and other output tensors. Args: i (namedtuple): Collection of variables to compute policy loss. Returns: tf.Tensor: Policy loss. tf.Tensor: Mean policy KL divergence. """ policy_entropy =...
Build policy loss and other output tensors. Args: i (namedtuple): Collection of variables to compute policy loss. Returns: tf.Tensor: Policy loss. tf.Tensor: Mean policy KL divergence.
_build_policy_loss
python
rlworkgroup/garage
src/garage/tf/algos/npo.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/npo.py
MIT
def _build_entropy_term(self, i): """Build policy entropy tensor. Args: i (namedtuple): Collection of variables to compute policy loss. Returns: tf.Tensor: Policy entropy. """ pol_dist = self._policy_network.dist with tf.name_scope('policy_entr...
Build policy entropy tensor. Args: i (namedtuple): Collection of variables to compute policy loss. Returns: tf.Tensor: Policy entropy.
_build_entropy_term
python
rlworkgroup/garage
src/garage/tf/algos/npo.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/npo.py
MIT
def _fit_baseline_with_data(self, episodes, baselines): """Update baselines from samples. Args: episodes (EpisodeBatch): Batch of episodes. baselines (np.ndarray): Baseline predictions. Returns: np.ndarray: Augmented returns. """ policy_opt_in...
Update baselines from samples. Args: episodes (EpisodeBatch): Batch of episodes. baselines (np.ndarray): Baseline predictions. Returns: np.ndarray: Augmented returns.
_fit_baseline_with_data
python
rlworkgroup/garage
src/garage/tf/algos/npo.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/npo.py
MIT
def _policy_opt_input_values(self, episodes, baselines): """Map episode samples to the policy optimizer inputs. Args: episodes (EpisodeBatch): Batch of episodes. baselines (np.ndarray): Baseline predictions. Returns: list(np.ndarray): Flatten policy optimiza...
Map episode samples to the policy optimizer inputs. Args: episodes (EpisodeBatch): Batch of episodes. baselines (np.ndarray): Baseline predictions. Returns: list(np.ndarray): Flatten policy optimization input values.
_policy_opt_input_values
python
rlworkgroup/garage
src/garage/tf/algos/npo.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/npo.py
MIT
def _check_entropy_configuration(self, entropy_method, center_adv, stop_entropy_gradient, use_neg_logli_entropy, policy_ent_coeff): """Check entropy configuration. Args: entropy_method (str): A string from: 'max', 're...
Check entropy configuration. Args: entropy_method (str): A string from: 'max', 'regularized', 'no_entropy'. The type of entropy method to use. 'max' adds the dense entropy to the reward for each time step. 'regularized' adds the mean entropy to the su...
_check_entropy_configuration
python
rlworkgroup/garage
src/garage/tf/algos/npo.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/npo.py
MIT
def __setstate__(self, state): """Parameters to restore from snapshot. Args: state (dict): Parameters to restore from. """ self.__dict__ = state self._name_scope = tf.name_scope(self._name) self._init_opt()
Parameters to restore from snapshot. Args: state (dict): Parameters to restore from.
__setstate__
python
rlworkgroup/garage
src/garage/tf/algos/npo.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/npo.py
MIT
def _optimize_policy(self, episodes): """Optimize the policy using the samples. Args: episodes (EpisodeBatch): Batch of episodes. """ # Initial BFGS parameter values. x0 = np.hstack([self._param_eta, self._param_v]) # Set parameter boundaries: \eta>=1e-12, v...
Optimize the policy using the samples. Args: episodes (EpisodeBatch): Batch of episodes.
_optimize_policy
python
rlworkgroup/garage
src/garage/tf/algos/reps.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/reps.py
MIT
def eval_dual(x): """Evaluate dual function loss. Args: x (numpy.ndarray): Input to dual function. Returns: numpy.float64: Dual function loss. """ self._param_eta = x[0] self._param_v = x[1:] dual_opt_...
Evaluate dual function loss. Args: x (numpy.ndarray): Input to dual function. Returns: numpy.float64: Dual function loss.
eval_dual
python
rlworkgroup/garage
src/garage/tf/algos/reps.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/reps.py
MIT
def eval_dual_grad(x): """Evaluate gradient of dual function loss. Args: x (numpy.ndarray): Input to dual function. Returns: numpy.ndarray: Gradient of dual function loss. """ self._param_eta = x[0] self._param_v ...
Evaluate gradient of dual function loss. Args: x (numpy.ndarray): Input to dual function. Returns: numpy.ndarray: Gradient of dual function loss.
eval_dual_grad
python
rlworkgroup/garage
src/garage/tf/algos/reps.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/reps.py
MIT
def _build_policy_loss(self, i): """Build policy loss and other output tensors. Args: i (namedtuple): Collection of variables to compute policy loss. Returns: tf.Tensor: Policy loss. tf.Tensor: Mean policy KL divergence. Raises: NotImple...
Build policy loss and other output tensors. Args: i (namedtuple): Collection of variables to compute policy loss. Returns: tf.Tensor: Policy loss. tf.Tensor: Mean policy KL divergence. Raises: NotImplementedError: If is_recurrent is True. ...
_build_policy_loss
python
rlworkgroup/garage
src/garage/tf/algos/reps.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/reps.py
MIT
def _dual_opt_input_values(self, episodes): """Update dual function optimization input values based on sample data. Args: episodes (EpisodeBatch): Batch of episodes. Returns: list(np.ndarray): Flatten dual function optimization input values. """ agent_infos = ...
Update dual function optimization input values based on sample data. Args: episodes (EpisodeBatch): Batch of episodes. Returns: list(np.ndarray): Flatten dual function optimization input values.
_dual_opt_input_values
python
rlworkgroup/garage
src/garage/tf/algos/reps.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/reps.py
MIT
def _policy_opt_input_values(self, episodes): """Update policy optimization input values based on sample data. Args: episodes (EpisodeBatch): Batch of episodes. Returns: list(np.ndarray): Flatten policy optimization input values. """ agent_infos = episodes...
Update policy optimization input values based on sample data. Args: episodes (EpisodeBatch): Batch of episodes. Returns: list(np.ndarray): Flatten policy optimization input values.
_policy_opt_input_values
python
rlworkgroup/garage
src/garage/tf/algos/reps.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/reps.py
MIT
def _features(self, episodes): """Get valid view features based on sample data. Args: episodes (EpisodeBatch): Batch of episodes. Returns: numpy.ndarray: Features for training. """ start = 0 feat_diff = [] for length in episodes.lengths...
Get valid view features based on sample data. Args: episodes (EpisodeBatch): Batch of episodes. Returns: numpy.ndarray: Features for training.
_features
python
rlworkgroup/garage
src/garage/tf/algos/reps.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/reps.py
MIT
def _create_rl2_obs_space(self): """Create observation space for RL2. Returns: akro.Box: Augmented observation space. """ obs_flat_dim = np.prod(self._env.observation_space.shape) action_flat_dim = np.prod(self._env.action_space.shape) return akro.Box(low=-n...
Create observation space for RL2. Returns: akro.Box: Augmented observation space.
_create_rl2_obs_space
python
rlworkgroup/garage
src/garage/tf/algos/rl2.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/rl2.py
MIT
def set_param_values(self, params): """Set param values. Args: params (Tuple[np.ndarray, np.ndarray]): Two numpy arrays of parameter values, one for the network parameters, one for the initial hidden state. """ inner_params, hiddens = params ...
Set param values. Args: params (Tuple[np.ndarray, np.ndarray]): Two numpy arrays of parameter values, one for the network parameters, one for the initial hidden state.
set_param_values
python
rlworkgroup/garage
src/garage/tf/algos/rl2.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/rl2.py
MIT
def train(self, trainer): """Obtain samplers and start actual training for each epoch. Args: trainer (Trainer): Experiment trainer, which provides services such as snapshotting and sampler control. Returns: float: The average return in last epoch. ...
Obtain samplers and start actual training for each epoch. Args: trainer (Trainer): Experiment trainer, which provides services such as snapshotting and sampler control. Returns: float: The average return in last epoch.
train
python
rlworkgroup/garage
src/garage/tf/algos/rl2.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/rl2.py
MIT
def _process_samples(self, itr, episodes): # pylint: disable=too-many-statements """Return processed sample data based on the collected paths. Args: itr (int): Iteration number. episodes (EpisodeBatch): Original collected episode batch for each task. For ...
Return processed sample data based on the collected paths. Args: itr (int): Iteration number. episodes (EpisodeBatch): Original collected episode batch for each task. For each episode, episode.agent_infos['batch_idx'] indicates which task this episode bel...
_process_samples
python
rlworkgroup/garage
src/garage/tf/algos/rl2.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/rl2.py
MIT
def _concatenate_episodes(self, episode_list): """Concatenate episodes. The input list contains samples from different episodes but the same task/environment. In RL^2, paths within each meta batch are all concatenated into a single path and fed to the policy. Args: episo...
Concatenate episodes. The input list contains samples from different episodes but the same task/environment. In RL^2, paths within each meta batch are all concatenated into a single path and fed to the policy. Args: episode_list (list[EpisodeBatch]): Input paths. All paths are f...
_concatenate_episodes
python
rlworkgroup/garage
src/garage/tf/algos/rl2.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/rl2.py
MIT
def _optimize_policy(self, itr): """Perform algorithm optimization. Args: itr (int): Iterations. Returns: float: Loss of action predicted by the policy network. float: Loss of q value predicted by the q network. float: y_s. float: Q valu...
Perform algorithm optimization. Args: itr (int): Iterations. Returns: float: Loss of action predicted by the policy network. float: Loss of q value predicted by the q network. float: y_s. float: Q value predicted by the q network.
_optimize_policy
python
rlworkgroup/garage
src/garage/tf/algos/td3.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/td3.py
MIT
def step_episode(self): """Take a single time-step in the current episode. Returns: bool: True iff the episode is done, either due to the environment indicating termination or due to reaching `max_episode_length`. """ if self._eps_length < self._max_episode_...
Take a single time-step in the current episode. Returns: bool: True iff the episode is done, either due to the environment indicating termination or due to reaching `max_episode_length`.
step_episode
python
rlworkgroup/garage
src/garage/tf/algos/te.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/te.py
MIT
def collect_episode(self): """Collect the current episode, clearing the internal buffer. One-hot task id is saved in env_infos['task_onehot']. Latent is saved in agent_infos['latent']. Latent infos are saved in agent_infos['latent_info_name'], where info_name is the original latent ...
Collect the current episode, clearing the internal buffer. One-hot task id is saved in env_infos['task_onehot']. Latent is saved in agent_infos['latent']. Latent infos are saved in agent_infos['latent_info_name'], where info_name is the original latent info name. Returns: ...
collect_episode
python
rlworkgroup/garage
src/garage/tf/algos/te.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/te.py
MIT
def _init_opt(self): """Initialize optimizer. Raises: NotImplementedError: Raised if the policy is recurrent. """ # Input variables (pol_loss_inputs, pol_opt_inputs, infer_loss_inputs, infer_opt_inputs) = self._build_inputs() self._policy_opt_inpu...
Initialize optimizer. Raises: NotImplementedError: Raised if the policy is recurrent.
_init_opt
python
rlworkgroup/garage
src/garage/tf/algos/te_npo.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/te_npo.py
MIT
def train(self, trainer): """Obtain samplers and start actual training for each epoch. Args: trainer (Trainer): Trainer is passed to give algorithm the access to trainer.step_epochs(), which provides services such as snapshotting and sampler control. ...
Obtain samplers and start actual training for each epoch. Args: trainer (Trainer): Trainer is passed to give algorithm the access to trainer.step_epochs(), which provides services such as snapshotting and sampler control. Returns: float: The aver...
train
python
rlworkgroup/garage
src/garage/tf/algos/te_npo.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/tf/algos/te_npo.py
MIT