Dataset columns (with observed value ranges):

code: string (lengths 66 to 870k)
docstring: string (lengths 19 to 26.7k)
func_name: string (lengths 1 to 138)
language: string (1 class)
repo: string (lengths 7 to 68)
path: string (lengths 5 to 324)
url: string (lengths 46 to 389)
license: string (7 classes)
def log_performance(itr, batch, discount, prefix='Evaluation'): """Evaluate the performance of an algorithm on a batch of episodes. Args: itr (int): Iteration number. batch (EpisodeBatch): The episodes to evaluate with. discount (float): Discount value, from algorithm's property. ...
Evaluate the performance of an algorithm on a batch of episodes. Args: itr (int): Iteration number. batch (EpisodeBatch): The episodes to evaluate with. discount (float): Discount value, from algorithm's property. prefix (str): Prefix to add to all logged keys. Returns: ...
log_performance
python
rlworkgroup/garage
src/garage/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/_functions.py
MIT
def __init__(self, desc='4x4', max_episode_length=None): """Initialize the environment. Args: desc (str): grid configuration key. max_episode_length (int): The maximum steps allowed for an episode. """ if isinstance(desc, str): desc = MAPS[desc] ...
Initialize the environment. Args: desc (str): grid configuration key. max_episode_length (int): The maximum steps allowed for an episode.
__init__
python
rlworkgroup/garage
src/garage/envs/grid_world_env.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/grid_world_env.py
MIT
def step(self, action): """Steps the environment. action map: 0: left 1: down 2: right 3: up Args: action (int): an int encoding the action Returns: EnvStep: The environment step resulting from the action. Raises: ...
Steps the environment. action map: 0: left 1: down 2: right 3: up Args: action (int): an int encoding the action Returns: EnvStep: The environment step resulting from the action. Raises: RuntimeError: if `step()` is ...
step
python
rlworkgroup/garage
src/garage/envs/grid_world_env.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/grid_world_env.py
MIT
def render(self, mode): """Renders the environment. Args: mode (str): the mode to render with. The string must be present in `Environment.render_modes`. """
Renders the environment. Args: mode (str): the mode to render with. The string must be present in `Environment.render_modes`.
render
python
rlworkgroup/garage
src/garage/envs/grid_world_env.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/grid_world_env.py
MIT
def _get_possible_next_states(self, state, action): """Return possible next states and their probabilities. Only next states with nonzero probabilities will be returned. Args: state (list): start state action (int): action Returns: list: a list of p...
Return possible next states and their probabilities. Only next states with nonzero probabilities will be returned. Args: state (list): start state action (int): action Returns: list: a list of pairs (s', p(s'|s,a))
_get_possible_next_states
python
rlworkgroup/garage
src/garage/envs/grid_world_env.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/grid_world_env.py
MIT
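The `_get_possible_next_states` snippet above is truncated; as an illustration of the `(s', p(s'|s,a))` pair interface its docstring describes, here is a hypothetical deterministic variant (the flat state encoding, default 4x4 grid size, and wall clamping are assumptions for illustration, not the garage implementation):

```python
def possible_next_states(state, action, n_row=4, n_col=4):
    """Return [(next_state, probability)] pairs with nonzero probability.

    Hypothetical deterministic grid transition: the action map
    (0: left, 1: down, 2: right, 3: up) follows the step() docstring above.
    """
    row, col = state // n_col, state % n_col
    increments = [(0, -1), (1, 0), (0, 1), (-1, 0)]  # left, down, right, up
    d_row, d_col = increments[action]
    # Clamp at the grid boundary (moving into a wall leaves the state unchanged).
    new_row = min(max(row + d_row, 0), n_row - 1)
    new_col = min(max(col + d_col, 0), n_col - 1)
    return [(new_row * n_col + new_col, 1.0)]
```

Because the transition is deterministic, the returned list always holds a single pair with probability 1.0.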
def __new__(cls, *args, **kwargs): """Returns environment specific wrapper based on input environment type. Args: *args: Positional arguments **kwargs: Keyword arguments Returns: garage.envs.bullet.BulletEnv: if the environment is a bullet-based ...
Returns environment specific wrapper based on input environment type. Args: *args: Positional arguments **kwargs: Keyword arguments Returns: garage.envs.bullet.BulletEnv: if the environment is a bullet-based environment. Else returns a garage.envs.G...
__new__
python
rlworkgroup/garage
src/garage/envs/gym_env.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/gym_env.py
MIT
def step(self, action): """Call step on wrapped env. Args: action (np.ndarray): An action provided by the agent. Returns: EnvStep: The environment step resulting from the action. Raises: RuntimeError: if `step()` is called after the environment has ...
Call step on wrapped env. Args: action (np.ndarray): An action provided by the agent. Returns: EnvStep: The environment step resulting from the action. Raises: RuntimeError: if `step()` is called after the environment has been constructed an...
step
python
rlworkgroup/garage
src/garage/envs/gym_env.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/gym_env.py
MIT
def _close_viewer_window(self): """Close viewer window. Unfortunately, some gym environments don't close the viewer windows properly, which leads to "out of memory" issues when several of these environments are tested one after the other. This method searches for the viewer obje...
Close viewer window. Unfortunately, some gym environments don't close the viewer windows properly, which leads to "out of memory" issues when several of these environments are tested one after the other. This method searches for the viewer object of type MjViewer, Viewer or Simp...
_close_viewer_window
python
rlworkgroup/garage
src/garage/envs/gym_env.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/gym_env.py
MIT
def __getattr__(self, name): """Handle function calls to the wrapped environment. Args: name (str): attribute name Returns: object: the wrapped attribute. Raises: AttributeError: if the requested attribute is a private attribute, or if the requ...
Handle function calls to the wrapped environment. Args: name (str): attribute name Returns: object: the wrapped attribute. Raises: AttributeError: if the requested attribute is a private attribute, or if the requested attribute is not found in the ...
__getattr__
python
rlworkgroup/garage
src/garage/envs/gym_env.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/gym_env.py
MIT
def sample_tasks(self, n_tasks): """Samples n_tasks tasks. Part of the set_task environment protocol. To call this method, a benchmark must have been passed in at environment construction. Args: n_tasks (int): Number of tasks to sample. Returns: dict[st...
Samples n_tasks tasks. Part of the set_task environment protocol. To call this method, a benchmark must have been passed in at environment construction. Args: n_tasks (int): Number of tasks to sample. Returns: dict[str,object]: Task object to pass back to `set_...
sample_tasks
python
rlworkgroup/garage
src/garage/envs/metaworld_set_task_env.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/metaworld_set_task_env.py
MIT
def set_task(self, task): """Set the task. Part of the set_task environment protocol. Args: task (dict[str,object]): Task object from `sample_tasks`. """ # Mixing train and test is probably a mistake assert self._kind is None or self._kind == task['kind'] ...
Set the task. Part of the set_task environment protocol. Args: task (dict[str,object]): Task object from `sample_tasks`.
set_task
python
rlworkgroup/garage
src/garage/envs/metaworld_set_task_env.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/metaworld_set_task_env.py
MIT
def _fill_tasks(self): """Fill out _tasks after the benchmark is set. Raises: ValueError: If kind is not set to "train" or "test" """ if self._add_env_onehot: if (self._kind == 'test' or 'metaworld.ML' in repr(type(self._benchmark))): ...
Fill out _tasks after the benchmark is set. Raises: ValueError: If kind is not set to "train" or "test"
_fill_tasks
python
rlworkgroup/garage
src/garage/envs/metaworld_set_task_env.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/metaworld_set_task_env.py
MIT
def round_robin_strategy(num_tasks, last_task=None): """A function for sampling tasks in round robin fashion. Args: num_tasks (int): Total number of tasks. last_task (int): Previously sampled task. Returns: int: task id. """ if last_task is None: return 0 retu...
A function for sampling tasks in round robin fashion. Args: num_tasks (int): Total number of tasks. last_task (int): Previously sampled task. Returns: int: task id.
round_robin_strategy
python
rlworkgroup/garage
src/garage/envs/multi_env_wrapper.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/multi_env_wrapper.py
MIT
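The `round_robin_strategy` snippet is cut off after `return 0`; a self-contained sketch of the round-robin behavior its docstring describes (the final modular-increment line is a reconstruction, not necessarily the exact source):

```python
def round_robin_strategy(num_tasks, last_task=None):
    """Sample task ids in round-robin fashion."""
    if last_task is None:
        # First call: start from task 0.
        return 0
    # Advance to the next task, wrapping around after the last one.
    return (last_task + 1) % num_tasks
```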
def observation_space(self): """Observation space. Returns: akro.Box: Observation space. """ if self._mode == 'vanilla': return self._env.observation_space elif self._mode == 'add-onehot': task_lb, task_ub = self.task_space.bounds ...
Observation space. Returns: akro.Box: Observation space.
observation_space
python
rlworkgroup/garage
src/garage/envs/multi_env_wrapper.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/multi_env_wrapper.py
MIT
def task_space(self): """Task Space. Returns: akro.Box: Task space. """ one_hot_ub = np.ones(self.num_tasks) one_hot_lb = np.zeros(self.num_tasks) return akro.Box(one_hot_lb, one_hot_ub)
Task Space. Returns: akro.Box: Task space.
task_space
python
rlworkgroup/garage
src/garage/envs/multi_env_wrapper.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/multi_env_wrapper.py
MIT
def active_task_index(self): """Index of active task env. Returns: int: Index of active task. """ if hasattr(self._env, 'active_task_index'): return self._env.active_task_index else: return self._active_task_index
Index of active task env. Returns: int: Index of active task.
active_task_index
python
rlworkgroup/garage
src/garage/envs/multi_env_wrapper.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/multi_env_wrapper.py
MIT
def step(self, action): """Step the active task env. Args: action (object): object to be passed in Environment.reset(action) Returns: EnvStep: The environment step resulting from the action. """ es = self._env.step(action) if self._mode == 'add...
Step the active task env. Args: action (object): object to be passed in Environment.reset(action) Returns: EnvStep: The environment step resulting from the action.
step
python
rlworkgroup/garage
src/garage/envs/multi_env_wrapper.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/multi_env_wrapper.py
MIT
def _active_task_one_hot(self): """One-hot representation of active task. Returns: numpy.ndarray: one-hot representation of active task """ one_hot = np.zeros(self.task_space.shape) index = self.active_task_index or 0 one_hot[index] = self.task_space.high[in...
One-hot representation of active task. Returns: numpy.ndarray: one-hot representation of active task
_active_task_one_hot
python
rlworkgroup/garage
src/garage/envs/multi_env_wrapper.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/multi_env_wrapper.py
MIT
def step(self, action): """Call step on wrapped env. Args: action (np.ndarray): An action provided by the agent. Returns: EnvStep: The environment step resulting from the action. Raises: RuntimeError: if `step()` is called after the environment has ...
Call step on wrapped env. Args: action (np.ndarray): An action provided by the agent. Returns: EnvStep: The environment step resulting from the action. Raises: RuntimeError: if `step()` is called after the environment has been constructed an...
step
python
rlworkgroup/garage
src/garage/envs/normalized_env.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/normalized_env.py
MIT
def _apply_normalize_obs(self, obs): """Compute normalized observation. Args: obs (np.ndarray): Observation. Returns: np.ndarray: Normalized observation. """ self._update_obs_estimate(obs) flat_obs = self._env.observation_space.flatten(obs) ...
Compute normalized observation. Args: obs (np.ndarray): Observation. Returns: np.ndarray: Normalized observation.
_apply_normalize_obs
python
rlworkgroup/garage
src/garage/envs/normalized_env.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/normalized_env.py
MIT
def _apply_normalize_reward(self, reward): """Compute normalized reward. Args: reward (float): Reward. Returns: float: Normalized reward. """ self._update_reward_estimate(reward) return reward / (np.sqrt(self._reward_var) + 1e-8)
Compute normalized reward. Args: reward (float): Reward. Returns: float: Normalized reward.
_apply_normalize_reward
python
rlworkgroup/garage
src/garage/envs/normalized_env.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/normalized_env.py
MIT
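`_apply_normalize_reward` divides the reward by a running estimate of its standard deviation. A standalone sketch of that idea, assuming an exponential-moving-average variance update (the `alpha` constant and the update rule are assumptions; only the final division visibly matches the snippet):

```python
import numpy as np

class RewardNormalizer:
    """Hypothetical running-variance reward scaling."""

    def __init__(self, alpha=0.001):
        self._alpha = alpha
        self._reward_mean = 0.0
        self._reward_var = 1.0

    def _update_reward_estimate(self, reward):
        # Exponential moving averages of the reward mean and variance.
        delta = reward - self._reward_mean
        self._reward_mean += self._alpha * delta
        self._reward_var = (1 - self._alpha) * self._reward_var + self._alpha * delta ** 2

    def normalize(self, reward):
        self._update_reward_estimate(reward)
        # Epsilon guards against division by zero early in training.
        return reward / (np.sqrt(self._reward_var) + 1e-8)
```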
def step(self, action): """Step the environment. Args: action (np.ndarray): An action provided by the agent. Returns: EnvStep: The environment step resulting from the action. Raises: RuntimeError: if `step()` is called after the environment ...
Step the environment. Args: action (np.ndarray): An action provided by the agent. Returns: EnvStep: The environment step resulting from the action. Raises: RuntimeError: if `step()` is called after the environment has been constr...
step
python
rlworkgroup/garage
src/garage/envs/point_env.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/point_env.py
MIT
def sample_tasks(self, num_tasks): """Sample a list of `num_tasks` tasks. Args: num_tasks (int): Number of tasks to sample. Returns: list[dict[str, np.ndarray]]: A list of "tasks", where each task is a dictionary containing a single key, "goal", mapping ...
Sample a list of `num_tasks` tasks. Args: num_tasks (int): Number of tasks to sample. Returns: list[dict[str, np.ndarray]]: A list of "tasks", where each task is a dictionary containing a single key, "goal", mapping to a point in 2D space. ...
sample_tasks
python
rlworkgroup/garage
src/garage/envs/point_env.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/point_env.py
MIT
def set_task(self, task): """Reset with a task. Args: task (dict[str, np.ndarray]): A task (a dictionary containing a single key, "goal", which should be a point in 2D space). """ self._task = task self._goal = task['goal']
Reset with a task. Args: task (dict[str, np.ndarray]): A task (a dictionary containing a single key, "goal", which should be a point in 2D space).
set_task
python
rlworkgroup/garage
src/garage/envs/point_env.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/point_env.py
MIT
def step(self, action): """gym.Env step for the active task env. Args: action (np.ndarray): Action performed by the agent in the environment. Returns: tuple: np.ndarray: Agent's observation of the current environment. floa...
gym.Env step for the active task env. Args: action (np.ndarray): Action performed by the agent in the environment. Returns: tuple: np.ndarray: Agent's observation of the current environment. float: Amount of reward yielded by prev...
step
python
rlworkgroup/garage
src/garage/envs/task_name_wrapper.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/task_name_wrapper.py
MIT
def step(self, action): """Environment step for the active task env. Args: action (np.ndarray): Action performed by the agent in the environment. Returns: EnvStep: The environment step resulting from the action. """ es = self._env.step(a...
Environment step for the active task env. Args: action (np.ndarray): Action performed by the agent in the environment. Returns: EnvStep: The environment step resulting from the action.
step
python
rlworkgroup/garage
src/garage/envs/task_onehot_wrapper.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/task_onehot_wrapper.py
MIT
def _obs_with_one_hot(self, obs): """Concatenate observation and task one-hot. Args: obs (numpy.ndarray): observation Returns: numpy.ndarray: observation + task one-hot. """ one_hot = np.zeros(self._n_total_tasks) one_hot[self._task_index] = 1.0...
Concatenate observation and task one-hot. Args: obs (numpy.ndarray): observation Returns: numpy.ndarray: observation + task one-hot.
_obs_with_one_hot
python
rlworkgroup/garage
src/garage/envs/task_onehot_wrapper.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/task_onehot_wrapper.py
MIT
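`_obs_with_one_hot` appends a task one-hot vector to the observation; a minimal standalone version (the function name and free-standing signature are adapted for illustration):

```python
import numpy as np

def obs_with_one_hot(obs, task_index, n_total_tasks):
    """Concatenate an observation with a one-hot encoding of the task index."""
    one_hot = np.zeros(n_total_tasks)
    one_hot[task_index] = 1.0
    return np.concatenate([obs, one_hot])
```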
def wrap_env_list(cls, envs): """Wrap a list of environments, giving each environment a one-hot. This is the primary way of constructing instances of this class. It's mostly useful when training multi-task algorithms using a multi-task aware sampler. For example: ''' ...
Wrap a list of environments, giving each environment a one-hot. This is the primary way of constructing instances of this class. It's mostly useful when training multi-task algorithms using a multi-task aware sampler. For example: ''' .. code-block:: python ...
wrap_env_list
python
rlworkgroup/garage
src/garage/envs/task_onehot_wrapper.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/task_onehot_wrapper.py
MIT
def wrap_env_cons_list(cls, env_cons): """Wrap a list of environment constructors, giving each a one-hot. This function is useful if you want to avoid constructing any environments in the main experiment process, and are using a multi-task aware remote sampler (i.e. `~RaySampler`). ...
Wrap a list of environment constructors, giving each a one-hot. This function is useful if you want to avoid constructing any environments in the main experiment process, and are using a multi-task aware remote sampler (i.e. `~RaySampler`). For example: ''' .. code-bloc...
wrap_env_cons_list
python
rlworkgroup/garage
src/garage/envs/task_onehot_wrapper.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/task_onehot_wrapper.py
MIT
def __init__(self, env, name=None): """Create a DMControlEnv. Args: env (dm_control.suite.Task): The wrapped dm_control environment. name (str): Name of the environment. """ self._env = env self._name = name or type(env.task).__name__ self._viewe...
Create a DMControlEnv. Args: env (dm_control.suite.Task): The wrapped dm_control environment. name (str): Name of the environment.
__init__
python
rlworkgroup/garage
src/garage/envs/dm_control/dm_control_env.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/dm_control/dm_control_env.py
MIT
def step(self, action): """Steps the environment with the action and returns a `EnvStep`. Args: action (object): input action Returns: EnvStep: The environment step resulting from the action. Raises: RuntimeError: if `step()` is called after the env...
Steps the environment with the action and returns a `EnvStep`. Args: action (object): input action Returns: EnvStep: The environment step resulting from the action. Raises: RuntimeError: if `step()` is called after the environment has been c...
step
python
rlworkgroup/garage
src/garage/envs/dm_control/dm_control_env.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/dm_control/dm_control_env.py
MIT
def render(self, mode): """Render the environment. Args: mode (str): render mode. Returns: np.ndarray: if mode is 'rgb_array', else return None. Raises: ValueError: if mode is not supported. """ self._validate_render_mode(mode) ...
Render the environment. Args: mode (str): render mode. Returns: np.ndarray: if mode is 'rgb_array', else return None. Raises: ValueError: if mode is not supported.
render
python
rlworkgroup/garage
src/garage/envs/dm_control/dm_control_env.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/dm_control/dm_control_env.py
MIT
def __getstate__(self): """See `Object.__getstate__`. Returns: dict: dict of the class. """ d = self.__dict__.copy() d['_viewer'] = None return d
See `Object.__getstate__`. Returns: dict: dict of the class.
__getstate__
python
rlworkgroup/garage
src/garage/envs/dm_control/dm_control_env.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/dm_control/dm_control_env.py
MIT
def step(self, action): """Take one step in the environment. Equivalent to step in HalfCheetahEnv, but with different rewards. Args: action (np.ndarray): The action to take in the environment. Raises: ValueError: If the current direction is not 1.0 or -1.0. ...
Take one step in the environment. Equivalent to step in HalfCheetahEnv, but with different rewards. Args: action (np.ndarray): The action to take in the environment. Raises: ValueError: If the current direction is not 1.0 or -1.0. Returns: tuple: ...
step
python
rlworkgroup/garage
src/garage/envs/mujoco/half_cheetah_dir_env.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/mujoco/half_cheetah_dir_env.py
MIT
def sample_tasks(self, num_tasks): """Sample a list of `num_tasks` tasks. Args: num_tasks (int): Number of tasks to sample. Returns: list[dict[str, float]]: A list of "tasks," where each task is a dictionary containing a single key, "direction", mapping ...
Sample a list of `num_tasks` tasks. Args: num_tasks (int): Number of tasks to sample. Returns: list[dict[str, float]]: A list of "tasks," where each task is a dictionary containing a single key, "direction", mapping to -1 or 1.
sample_tasks
python
rlworkgroup/garage
src/garage/envs/mujoco/half_cheetah_dir_env.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/mujoco/half_cheetah_dir_env.py
MIT
def _get_obs(self): """Get a low-dimensional observation of the state. Returns: np.ndarray: Contains the flattened angle quaternion, angular velocity quaternion, and cartesian position. """ return np.concatenate([ self.sim.data.qpos.flat[1:], ...
Get a low-dimensional observation of the state. Returns: np.ndarray: Contains the flattened angle quaternion, angular velocity quaternion, and cartesian position.
_get_obs
python
rlworkgroup/garage
src/garage/envs/mujoco/half_cheetah_env_meta_base.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/mujoco/half_cheetah_env_meta_base.py
MIT
def step(self, action): """Take one step in the environment. Equivalent to step in HalfCheetahEnv, but with different rewards. Args: action (np.ndarray): The action to take in the environment. Returns: tuple: * observation (np.ndarray): The obse...
Take one step in the environment. Equivalent to step in HalfCheetahEnv, but with different rewards. Args: action (np.ndarray): The action to take in the environment. Returns: tuple: * observation (np.ndarray): The observation of the environment. ...
step
python
rlworkgroup/garage
src/garage/envs/mujoco/half_cheetah_vel_env.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/mujoco/half_cheetah_vel_env.py
MIT
def sample_tasks(self, num_tasks): """Sample a list of `num_tasks` tasks. Args: num_tasks (int): Number of tasks to sample. Returns: list[dict[str, float]]: A list of "tasks," where each task is a dictionary containing a single key, "velocity", mapping t...
Sample a list of `num_tasks` tasks. Args: num_tasks (int): Number of tasks to sample. Returns: list[dict[str, float]]: A list of "tasks," where each task is a dictionary containing a single key, "velocity", mapping to a value between 0 and 2. ...
sample_tasks
python
rlworkgroup/garage
src/garage/envs/mujoco/half_cheetah_vel_env.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/mujoco/half_cheetah_vel_env.py
MIT
def reset(self, **kwargs): """gym.Env reset function. Reset only when lives are lost. """ if self._was_real_done: obs = self.env.reset(**kwargs) else: # no-op step obs, _, _, _ = self.env.step(0) self._lives = self.env.unwrapp...
gym.Env reset function. Reset only when lives are lost.
reset
python
rlworkgroup/garage
src/garage/envs/wrappers/episodic_life.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/wrappers/episodic_life.py
MIT
def reset(self, **kwargs): """gym.Env reset function. Args: kwargs (dict): extra arguments passed to gym.Env.reset() Returns: np.ndarray: next observation. """ self.env.reset(**kwargs) obs, _, done, _ = self.env.step(1) if done: ...
gym.Env reset function. Args: kwargs (dict): extra arguments passed to gym.Env.reset() Returns: np.ndarray: next observation.
reset
python
rlworkgroup/garage
src/garage/envs/wrappers/fire_reset.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/wrappers/fire_reset.py
MIT
def step(self, action): """See gym.Env. Args: action (np.ndarray): Action conforming to action_space Returns: np.ndarray: Observation conforming to observation_space float: Reward for this step bool: Termination signal dict: Extra inf...
See gym.Env. Args: action (np.ndarray): Action conforming to action_space Returns: np.ndarray: Observation conforming to observation_space float: Reward for this step bool: Termination signal dict: Extra information from the environment. ...
step
python
rlworkgroup/garage
src/garage/envs/wrappers/grayscale.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/wrappers/grayscale.py
MIT
def _color_to_grayscale(obs): """Convert a 3-channel color observation image to grayscale and uint8. Args: obs (np.ndarray): Observation array, conforming to observation_space Returns: np.ndarray: 1-channel grayscale version of obs, represented as uint8 """ with warnings.catch_warni...
Convert a 3-channel color observation image to grayscale and uint8. Args: obs (np.ndarray): Observation array, conforming to observation_space Returns: np.ndarray: 1-channel grayscale version of obs, represented as uint8
_color_to_grayscale
python
rlworkgroup/garage
src/garage/envs/wrappers/grayscale.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/wrappers/grayscale.py
MIT
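`_color_to_grayscale` converts a 3-channel color image to a 1-channel uint8 image. A sketch using standard luminosity weights (the specific weights and the rounding step are assumptions; the garage code may delegate to an image library instead):

```python
import numpy as np

def color_to_grayscale(obs):
    """Convert an (H, W, 3) color observation to an (H, W) uint8 grayscale image."""
    # Standard ITU-R BT.601 luminosity weights for R, G, B channels.
    weights = np.array([0.299, 0.587, 0.114])
    gray = obs.astype(np.float64) @ weights
    # Round before casting so values like 254.9999 do not truncate to 254.
    return np.round(gray).astype(np.uint8)
```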
def step(self, action): """Repeat action, sum reward, and max over last two observations. Args: action (int): action to take in the atari environment. Returns: np.ndarray: observation of shape :math:`(O*,)` representating the max values over the last two...
Repeat action, sum reward, and max over last two observations. Args: action (int): action to take in the atari environment. Returns: np.ndarray: observation of shape :math:`(O*,)` representating the max values over the last two oservations. float: Re...
step
python
rlworkgroup/garage
src/garage/envs/wrappers/max_and_skip.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/wrappers/max_and_skip.py
MIT
def step(self, action): """gym.Env step function. Performs one action step in the enviornment. Args: action (np.ndarray): Action of shape :math:`(A*, )` to pass to the environment. Returns: np.ndarray: Pixel observation of shape :math:`(O*, )` ...
gym.Env step function. Performs one action step in the enviornment. Args: action (np.ndarray): Action of shape :math:`(A*, )` to pass to the environment. Returns: np.ndarray: Pixel observation of shape :math:`(O*, )` from the wrapped env...
step
python
rlworkgroup/garage
src/garage/envs/wrappers/pixel_observation.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/wrappers/pixel_observation.py
MIT
def reset(self): """gym.Env reset function. Returns: np.ndarray: Observation conforming to observation_space float: Reward for this step bool: Termination signal dict: Extra information from the environment. """ observation = self.env.rese...
gym.Env reset function. Returns: np.ndarray: Observation conforming to observation_space float: Reward for this step bool: Termination signal dict: Extra information from the environment.
reset
python
rlworkgroup/garage
src/garage/envs/wrappers/stack_frames.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/wrappers/stack_frames.py
MIT
def step(self, action): """gym.Env step function. Args: action (int): index of the action to take. Returns: np.ndarray: Observation conforming to observation_space float: Reward for this step bool: Termination signal dict: Extra infor...
gym.Env step function. Args: action (int): index of the action to take. Returns: np.ndarray: Observation conforming to observation_space float: Reward for this step bool: Termination signal dict: Extra information from the environment. ...
step
python
rlworkgroup/garage
src/garage/envs/wrappers/stack_frames.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/envs/wrappers/stack_frames.py
MIT
def query_yes_no(question, default='yes'): """Ask a yes/no question via raw_input() and return their answer. Args: question (str): Printed to user. default (str or None): Default if user just hits enter. Raises: ValueError: If the provided default is invalid. Returns: ...
Ask a yes/no question via raw_input() and return their answer. Args: question (str): Printed to user. default (str or None): Default if user just hits enter. Raises: ValueError: If the provided default is invalid. Returns: bool: True for "yes" answers, False for "no". ...
query_yes_no
python
rlworkgroup/garage
src/garage/examples/sim_policy.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/sim_policy.py
MIT
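`query_yes_no` wraps an input loop around answer parsing; a hypothetical non-interactive core that captures the default-handling and validation described in the docstring (the function name and the accepted-answer set are assumptions):

```python
def parse_yes_no(answer, default='yes'):
    """Map a typed answer to True/False, falling back to a default on empty input.

    Returns None for unrecognized answers (an interactive wrapper would re-prompt).
    """
    valid = {'yes': True, 'y': True, 'no': False, 'n': False}
    if default is not None and default not in valid:
        raise ValueError('invalid default answer: {!r}'.format(default))
    answer = answer.strip().lower()
    if answer == '' and default is not None:
        return valid[default]
    return valid.get(answer)
```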
def step_bullet_kuka_env(n_steps=1000): """Load, step, and visualize a Bullet Kuka environment. Args: n_steps (int): number of steps to run. """ # Construct the environment env = GymEnv(gym.make('KukaBulletEnv-v0', renders=True, isDiscrete=True)) # Reset the environment and launch the...
Load, step, and visualize a Bullet Kuka environment. Args: n_steps (int): number of steps to run.
step_bullet_kuka_env
python
rlworkgroup/garage
src/garage/examples/step_bullet_kuka_env.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/step_bullet_kuka_env.py
MIT
def cem_cartpole(ctxt=None, seed=1): """Train CEM with Cartpole-v1 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce deter...
Train CEM with Cartpole-v1 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism.
cem_cartpole
python
rlworkgroup/garage
src/garage/examples/np/cem_cartpole.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/np/cem_cartpole.py
MIT
def cma_es_cartpole(ctxt=None, seed=1): """Train CMA_ES with Cartpole-v1 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce ...
Train CMA_ES with Cartpole-v1 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism.
cma_es_cartpole
python
rlworkgroup/garage
src/garage/examples/np/cma_es_cartpole.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/np/cma_es_cartpole.py
MIT
def train(self, trainer): """Get samples and train the policy. Args: trainer (Trainer): Trainer. """ for epoch in trainer.step_epochs(): samples = trainer.obtain_samples(epoch) log_performance(epoch, EpisodeBatch.from_list...
Get samples and train the policy. Args: trainer (Trainer): Trainer.
train
python
rlworkgroup/garage
src/garage/examples/np/tutorial_cem.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/np/tutorial_cem.py
MIT
def _train_once(self, epoch, paths): """Perform one step of policy optimization given one batch of samples. Args: epoch (int): Iteration number. paths (list[dict]): A list of collected paths. Returns: float: The average return of epoch cycle. """ ...
Perform one step of policy optimization given one batch of samples. Args: epoch (int): Iteration number. paths (list[dict]): A list of collected paths. Returns: float: The average return of epoch cycle.
_train_once
python
rlworkgroup/garage
src/garage/examples/np/tutorial_cem.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/np/tutorial_cem.py
MIT
def _sample_params(self, epoch): """Return sample parameters. Args: epoch (int): Epoch number. Returns: np.ndarray: A numpy array of parameter values. """ extra_var_mult = max(1.0 - epoch / self._extra_decay_time, 0) sample_std = np.sqrt( ...
Return sample parameters. Args: epoch (int): Epoch number. Returns: np.ndarray: A numpy array of parameter values.
_sample_params
python
rlworkgroup/garage
src/garage/examples/np/tutorial_cem.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/np/tutorial_cem.py
MIT
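`_sample_params` adds extra exploration variance that decays with the epoch, as in the truncated `extra_var_mult` line above. A standalone sketch (the mean/std arguments are lifted out of `self` into parameters for illustration):

```python
import numpy as np

def sample_params(cur_mean, cur_std, epoch, extra_std=1.0, extra_decay_time=100):
    """Sample CEM parameters with exploration noise that decays over epochs."""
    # Extra variance shrinks linearly to zero by extra_decay_time epochs.
    extra_var_mult = max(1.0 - epoch / extra_decay_time, 0)
    sample_std = np.sqrt(np.square(cur_std) + np.square(extra_std) * extra_var_mult)
    return cur_mean + sample_std * np.random.standard_normal(cur_mean.shape)
```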
def tutorial_cem(ctxt=None): """Train CEM with CartPole-v1 environment. Args: ctxt (ExperimentContext): The experiment configuration used by :class:`~Trainer` to create the :class:`~Snapshotter`. """ set_seed(100) with TFTrainer(ctxt) as trainer: env = GymEnv('CartPole-...
Train CEM with CartPole-v1 environment. Args: ctxt (ExperimentContext): The experiment configuration used by :class:`~Trainer` to create the :class:`~Snapshotter`.
tutorial_cem
python
rlworkgroup/garage
src/garage/examples/np/tutorial_cem.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/np/tutorial_cem.py
MIT
def ddpg_pendulum(ctxt=None, seed=1): """Train DDPG with InvertedDoublePendulum-v2 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce ...
Train DDPG with InvertedDoublePendulum-v2 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism.
ddpg_pendulum
python
rlworkgroup/garage
src/garage/examples/tf/ddpg_pendulum.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/ddpg_pendulum.py
MIT
def dqn_cartpole(ctxt=None, seed=1): """Train DQN with CartPole-v0 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce det...
Train DQN with CartPole-v0 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism.
dqn_cartpole
python
rlworkgroup/garage
src/garage/examples/tf/dqn_cartpole.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/dqn_cartpole.py
MIT
def dqn_pong(ctxt=None, seed=1, buffer_size=int(5e4), max_episode_length=500): """Train DQN on PongNoFrameskip-v4 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the rando...
Train DQN on PongNoFrameskip-v4 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism. buffer_size (int): Numb...
dqn_pong
python
rlworkgroup/garage
src/garage/examples/tf/dqn_pong.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/dqn_pong.py
MIT
def erwr_cartpole(ctxt=None, seed=1): """Train with ERWR on CartPole-v1 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce ...
Train with ERWR on CartPole-v1 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism.
erwr_cartpole
python
rlworkgroup/garage
src/garage/examples/tf/erwr_cartpole.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/erwr_cartpole.py
MIT
def her_ddpg_fetchreach(ctxt=None, seed=1): """Train DDPG + HER on the goal-conditioned FetchReach env. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to p...
Train DDPG + HER on the goal-conditioned FetchReach env. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism.
her_ddpg_fetchreach
python
rlworkgroup/garage
src/garage/examples/tf/her_ddpg_fetchreach.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/her_ddpg_fetchreach.py
MIT
def multi_env_ppo(ctxt=None, seed=1): """Train PPO on two Atari environments simultaneously. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce ...
Train PPO on two Atari environments simultaneously. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism.
multi_env_ppo
python
rlworkgroup/garage
src/garage/examples/tf/multi_env_ppo.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/multi_env_ppo.py
MIT
def multi_env_trpo(ctxt=None, seed=1): """Train TRPO on two different PointEnv instances. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce ...
Train TRPO on two different PointEnv instances. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism.
multi_env_trpo
python
rlworkgroup/garage
src/garage/examples/tf/multi_env_trpo.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/multi_env_trpo.py
MIT
def ppo_memorize_digits(ctxt=None, seed=1, batch_size=4000, max_episode_length=100): """Train PPO on MemorizeDigits-v0 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by ...
Train PPO on MemorizeDigits-v0 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism. batch_size (int): Number...
ppo_memorize_digits
python
rlworkgroup/garage
src/garage/examples/tf/ppo_memorize_digits.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/ppo_memorize_digits.py
MIT
def ppo_pendulum(ctxt=None, seed=1): """Train PPO with InvertedDoublePendulum-v2 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce ...
Train PPO with InvertedDoublePendulum-v2 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism.
ppo_pendulum
python
rlworkgroup/garage
src/garage/examples/tf/ppo_pendulum.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/ppo_pendulum.py
MIT
def reps_gym_cartpole(ctxt=None, seed=1): """Train REPS with CartPole-v0 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce ...
Train REPS with CartPole-v0 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism.
reps_gym_cartpole
python
rlworkgroup/garage
src/garage/examples/tf/reps_gym_cartpole.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/reps_gym_cartpole.py
MIT
def rl2_ppo_halfcheetah(ctxt, seed, max_episode_length, meta_batch_size, n_epochs, episode_per_task): """Train PPO with HalfCheetah environment. Args: ctxt (ExperimentContext): The experiment configuration used by :class:`~Trainer` to create the snapshotter. ...
Train PPO with HalfCheetah environment. Args: ctxt (ExperimentContext): The experiment configuration used by :class:`~Trainer` to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism. max_episode_length (int): Maximum le...
rl2_ppo_halfcheetah
python
rlworkgroup/garage
src/garage/examples/tf/rl2_ppo_halfcheetah.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/rl2_ppo_halfcheetah.py
MIT
def rl2_ppo_halfcheetah_meta_test(ctxt, seed, meta_batch_size, n_epochs, episode_per_task): """Perform meta-testing on RL2PPO with HalfCheetah environment. Args: ctxt (ExperimentContext): The experiment configuration used by :class:`~Trainer` to create the ...
Perform meta-testing on RL2PPO with HalfCheetah environment. Args: ctxt (ExperimentContext): The experiment configuration used by :class:`~Trainer` to create the :class:`~Snapshotter`. seed (int): Used to seed the random number generator to produce determinism. meta_...
rl2_ppo_halfcheetah_meta_test
python
rlworkgroup/garage
src/garage/examples/tf/rl2_ppo_halfcheetah_meta_test.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/rl2_ppo_halfcheetah_meta_test.py
MIT
def rl2_ppo_metaworld_ml10(ctxt, seed, meta_batch_size, n_epochs, episode_per_task): """Train RL2 PPO with ML10 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int...
Train RL2 PPO with ML10 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism. meta_batch_size (int): Meta bat...
rl2_ppo_metaworld_ml10
python
rlworkgroup/garage
src/garage/examples/tf/rl2_ppo_metaworld_ml10.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/rl2_ppo_metaworld_ml10.py
MIT
def rl2_ppo_metaworld_ml1_push(ctxt, seed, meta_batch_size, n_epochs, episode_per_task): """Train RL2 PPO with ML1 environment. Args: ctxt (ExperimentContext): The experiment configuration used by :class:`~Trainer` to create the :class:`~Snapshotter`. ...
Train RL2 PPO with ML1 environment. Args: ctxt (ExperimentContext): The experiment configuration used by :class:`~Trainer` to create the :class:`~Snapshotter`. seed (int): Used to seed the random number generator to produce determinism. meta_batch_size (int): Meta ba...
rl2_ppo_metaworld_ml1_push
python
rlworkgroup/garage
src/garage/examples/tf/rl2_ppo_metaworld_ml1_push.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/rl2_ppo_metaworld_ml1_push.py
MIT
def rl2_ppo_metaworld_ml45(ctxt, seed, meta_batch_size, n_epochs, episode_per_task): """Train PPO with ML45 environment. Args: ctxt (ExperimentContext): The experiment configuration used by :class:`~Trainer` to create the :class:`~Snapshotter`. seed (int):...
Train PPO with ML45 environment. Args: ctxt (ExperimentContext): The experiment configuration used by :class:`~Trainer` to create the :class:`~Snapshotter`. seed (int): Used to seed the random number generator to produce determinism. meta_batch_size (int): Meta batch...
rl2_ppo_metaworld_ml45
python
rlworkgroup/garage
src/garage/examples/tf/rl2_ppo_metaworld_ml45.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/rl2_ppo_metaworld_ml45.py
MIT
def rl2_trpo_halfcheetah(ctxt, seed, max_episode_length, meta_batch_size, n_epochs, episode_per_task): """Train TRPO with HalfCheetah environment. Args: ctxt (ExperimentContext): The experiment configuration used by :class:`~Trainer` to create the :class:`~Snapshott...
Train TRPO with HalfCheetah environment. Args: ctxt (ExperimentContext): The experiment configuration used by :class:`~Trainer` to create the :class:`~Snapshotter`. seed (int): Used to seed the random number generator to produce determinism. max_episode_length (int):...
rl2_trpo_halfcheetah
python
rlworkgroup/garage
src/garage/examples/tf/rl2_trpo_halfcheetah.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/rl2_trpo_halfcheetah.py
MIT
def td3_pendulum(ctxt=None, seed=1): """Wrap TD3 training task in the run_task function. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce ...
Wrap TD3 training task in the run_task function. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism.
td3_pendulum
python
rlworkgroup/garage
src/garage/examples/tf/td3_pendulum.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/td3_pendulum.py
MIT
def te_ppo_mt10(ctxt, seed, n_epochs, batch_size_per_task, n_tasks): """Train Task Embedding PPO with MetaWorld MT10 environment. Args: ctxt (ExperimentContext): The experiment configuration used by :class:`~Trainer` to create the :class:`~Snapshotter`. seed (int): Used to seed the random number gene...
Train Task Embedding PPO with MetaWorld MT10 environment. Args: ctxt (ExperimentContext): The experiment configuration used by :class:`~Trainer` to create the :class:`~Snapshotter`. seed (int): Used to seed the random number generator to produce determinism. n_epochs (int): Total numb...
te_ppo_mt10
python
rlworkgroup/garage
src/garage/examples/tf/te_ppo_metaworld_mt10.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/te_ppo_metaworld_mt10.py
MIT
def te_ppo_mt1_push(ctxt, seed, n_epochs, batch_size_per_task): """Train Task Embedding PPO with MetaWorld MT1 (push) environment. Args: ctxt (ExperimentContext): The experiment configuration used by :class:`~Trainer` to create the :class:`~Snapshotter`. seed (int): Used to seed the random number generator...
Train Task Embedding PPO with MetaWorld MT1 (push) environment. Args: ctxt (ExperimentContext): The experiment configuration used by :class:`~Trainer` to create the :class:`~Snapshotter`. seed (int): Used to seed the random number generator to produce determinism. n_epochs (int): Total numb...
te_ppo_mt1_push
python
rlworkgroup/garage
src/garage/examples/tf/te_ppo_metaworld_mt1_push.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/te_ppo_metaworld_mt1_push.py
MIT
def te_ppo_mt50(ctxt, seed, n_epochs, batch_size_per_task, n_tasks): """Train Task Embedding PPO with MetaWorld MT50 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number genera...
Train Task Embedding PPO with MetaWorld MT50 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism. n_epochs (int): Total number...
te_ppo_mt50
python
rlworkgroup/garage
src/garage/examples/tf/te_ppo_metaworld_mt50.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/te_ppo_metaworld_mt50.py
MIT
def circle(r, n): """Generate n points on a circle of radius r. Args: r (float): Radius of the circle. n (int): Number of points to generate. Yields: tuple(float, float): Coordinate of a point. """ for t in np.arange(0, 2 * np.pi, 2 * np.pi / n): yield r * np.sin(t...
Generate n points on a circle of radius r. Args: r (float): Radius of the circle. n (int): Number of points to generate. Yields: tuple(float, float): Coordinate of a point.
circle
python
rlworkgroup/garage
src/garage/examples/tf/te_ppo_point.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/te_ppo_point.py
MIT
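The `circle` generator is almost fully visible above; completing it under the assumption that the truncated `yield` pairs sine with cosine gives a runnable sketch:

```python
import numpy as np

def circle(r, n):
    """Yield n evenly spaced (x, y) points on a circle of radius r."""
    for t in np.arange(0, 2 * np.pi, 2 * np.pi / n):
        # The (sin, cos) ordering follows the visible fragment above.
        yield r * np.sin(t), r * np.cos(t)

points = list(circle(3.0, 4))
```

Each yielded point lies at distance `r` from the origin, which is how `te_ppo_point.py` spreads its goal positions.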
def te_ppo_pointenv(ctxt, seed, n_epochs, batch_size_per_task): """Train Task Embedding PPO with PointEnv. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator t...
Train Task Embedding PPO with PointEnv. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism. n_epochs (int): Total number...
te_ppo_pointenv
python
rlworkgroup/garage
src/garage/examples/tf/te_ppo_point.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/te_ppo_point.py
MIT
def trpo_cartpole(ctxt=None, seed=1): """Train TRPO with CartPole-v1 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce det...
Train TRPO with CartPole-v1 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism.
trpo_cartpole
python
rlworkgroup/garage
src/garage/examples/tf/trpo_cartpole.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/trpo_cartpole.py
MIT
def trpo_cartpole_bullet(ctxt=None, seed=1): """Train TRPO with Pybullet's CartPoleBulletEnv environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to...
Train TRPO with Pybullet's CartPoleBulletEnv environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism.
trpo_cartpole_bullet
python
rlworkgroup/garage
src/garage/examples/tf/trpo_cartpole_bullet.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/trpo_cartpole_bullet.py
MIT
def trpo_cartpole_recurrent(ctxt, seed, n_epochs, batch_size, plot): """Train TRPO with a recurrent policy on CartPole. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produ...
Train TRPO with a recurrent policy on CartPole. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism. n_epochs (int): Number of epochs for trai...
trpo_cartpole_recurrent
python
rlworkgroup/garage
src/garage/examples/tf/trpo_cartpole_recurrent.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/trpo_cartpole_recurrent.py
MIT
def trpo_cubecrash(ctxt=None, seed=1, max_episode_length=5, batch_size=4000): """Train TRPO with CubeCrash-v0 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random nu...
Train TRPO with CubeCrash-v0 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism. max_episode_length (int): ...
trpo_cubecrash
python
rlworkgroup/garage
src/garage/examples/tf/trpo_cubecrash.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/trpo_cubecrash.py
MIT
def trpo_gym_tf_cartpole(ctxt=None, seed=1): """Train TRPO with CartPole-v0 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce ...
Train TRPO with CartPole-v0 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism.
trpo_gym_tf_cartpole
python
rlworkgroup/garage
src/garage/examples/tf/trpo_gym_tf_cartpole.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/trpo_gym_tf_cartpole.py
MIT
def pre_trained_trpo_cartpole( ctxt=None, snapshot_dir='data/local/experiment/trpo_gym_tf_cartpole', seed=1): """Use a pre-trained TRPO snapshot and resume the experiment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the ...
Use a pre-trained TRPO snapshot and resume the experiment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. snapshot_dir (str): Directory containing the snapshot to resume from. seed (int): Used to seed the random number generator to produce ...
pre_trained_trpo_cartpole
python
rlworkgroup/garage
src/garage/examples/tf/trpo_gym_tf_cartpole_pretrained.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/trpo_gym_tf_cartpole_pretrained.py
MIT
def trpo_swimmer(ctxt=None, seed=1, batch_size=4000): """Train TRPO with Swimmer-v2 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce ...
Train TRPO with Swimmer-v2 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism. batch_size (int): Number of ...
trpo_swimmer
python
rlworkgroup/garage
src/garage/examples/tf/trpo_swimmer.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/trpo_swimmer.py
MIT
def trpo_swimmer_ray_sampler(ctxt=None, seed=1): """Train TRPO with Swimmer-v2 environment using RaySampler. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism. ...
Train TRPO with Swimmer-v2 environment using RaySampler. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism.
trpo_swimmer_ray_sampler
python
rlworkgroup/garage
src/garage/examples/tf/trpo_swimmer_ray_sampler.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/trpo_swimmer_ray_sampler.py
MIT
def init_opt(self): """Initialize optimizer and build computation graph.""" observation_dim = self.policy.observation_space.flat_dim action_dim = self.policy.action_space.flat_dim with tf.name_scope('inputs'): self._observation = tf.compat.v1.placeholder( tf.f...
Initialize optimizer and build computation graph.
init_opt
python
rlworkgroup/garage
src/garage/examples/tf/tutorial_vpg.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/tutorial_vpg.py
MIT
def train(self, trainer): """Obtain samplers and start actual training for each epoch. Args: trainer (Trainer): Experiment trainer. """ for epoch in trainer.step_epochs(): samples = trainer.obtain_samples(epoch) log_performance(epoch, ...
Obtain samplers and start actual training for each epoch. Args: trainer (Trainer): Experiment trainer.
train
python
rlworkgroup/garage
src/garage/examples/tf/tutorial_vpg.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/tutorial_vpg.py
MIT
def _train_once(self, samples): """Perform one step of policy optimization given one batch of samples. Args: samples (list[dict]): A list of collected samples. Returns: numpy.float64: Average return. """ obs = np.concatenate([path['observations'] for pa...
Perform one step of policy optimization given one batch of samples. Args: samples (list[dict]): A list of collected samples. Returns: numpy.float64: Average return.
_train_once
python
rlworkgroup/garage
src/garage/examples/tf/tutorial_vpg.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/tutorial_vpg.py
MIT
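`_train_once` concatenates per-path arrays and reduces the batch to an average return; the discounted returns-to-go it relies on can be sketched with the standard backward recursion (this is the textbook computation, not necessarily garage's exact helper):

```python
import numpy as np

def discount_cumsum(rewards, discount):
    """Returns-to-go: G_t = r_t + discount * G_{t+1}."""
    returns = np.zeros_like(rewards, dtype=float)
    running = 0.0
    # Walk the episode backwards, accumulating the discounted sum.
    for t in reversed(range(len(rewards))):
        running = rewards[t] + discount * running
        returns[t] = running
    return returns

paths = [{'rewards': np.array([1.0, 1.0, 1.0])}]
returns = discount_cumsum(paths[0]['rewards'], discount=0.5)
```

For rewards `[1, 1, 1]` and discount 0.5 this yields `[1.75, 1.5, 1.0]`; averaging the undiscounted episode totals across paths gives the scalar that `_train_once` reports.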
def __getstate__(self): """Parameters to save in snapshot. Returns: dict: Parameters to save. """ data = self.__dict__.copy() del data['_observation'] del data['_action'] del data['_returns'] del data['_train_op'] return data
Parameters to save in snapshot. Returns: dict: Parameters to save.
__getstate__
python
rlworkgroup/garage
src/garage/examples/tf/tutorial_vpg.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/tutorial_vpg.py
MIT
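`__getstate__` above strips TensorFlow placeholders and the train op from the snapshot because graph objects cannot be pickled. The idiom generalizes to any object with transient state; a minimal sketch with illustrative names (not garage's classes):

```python
import pickle

class Snapshot:
    """Illustrative object whose transient fields must not be pickled."""

    def __init__(self):
        self.weights = [1.0, 2.0]
        self._train_op = object()  # stand-in for an unpicklable graph op

    def __getstate__(self):
        # Copy the instance dict and drop unpicklable entries,
        # mirroring the `del data[...]` calls in the record above.
        data = self.__dict__.copy()
        del data['_train_op']
        return data

    def __setstate__(self, state):
        # Restore saved fields; transient ones are rebuilt later
        # (garage rebuilds its graph in init_opt).
        self.__dict__.update(state)
        self._train_op = None

clone = pickle.loads(pickle.dumps(Snapshot()))
```

Without the `__getstate__` override, `pickle.dumps` would raise on the raw `object()` attribute.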
def tutorial_vpg(ctxt=None): """Train VPG with PointEnv environment. Args: ctxt (ExperimentContext): The experiment configuration used by :class:`~Trainer` to create the :class:`~Snapshotter`. """ set_seed(100) with TFTrainer(ctxt) as trainer: env = PointEnv(max_episode...
Train VPG with PointEnv environment. Args: ctxt (ExperimentContext): The experiment configuration used by :class:`~Trainer` to create the :class:`~Snapshotter`.
tutorial_vpg
python
rlworkgroup/garage
src/garage/examples/tf/tutorial_vpg.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/tutorial_vpg.py
MIT
def vpg_cartpole(ctxt=None, seed=1): """Train VPG with CartPole-v1 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce deter...
Train VPG with CartPole-v1 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism.
vpg_cartpole
python
rlworkgroup/garage
src/garage/examples/tf/vpg_cartpole.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/tf/vpg_cartpole.py
MIT
def get_actions(self, observations): """Get actions given observations. Args: observations (np.ndarray): Observations from the environment. Has shape :math:`(B, O)`, where :math:`B` is the batch dimension and :math:`O` is the observation dimensionality (at ...
Get actions given observations. Args: observations (np.ndarray): Observations from the environment. Has shape :math:`(B, O)`, where :math:`B` is the batch dimension and :math:`O` is the observation dimensionality (at least 2). Returns: ...
get_actions
python
rlworkgroup/garage
src/garage/examples/torch/bc_point.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/torch/bc_point.py
MIT
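`get_actions` here is part of the scripted expert that behavioral cloning imitates on `PointEnv`. A hedged sketch of a batched goal-seeking policy with the documented `(B, O)` input shape (the control law and clip range are assumptions, not garage's `OptimalPolicy`):

```python
import numpy as np

def get_actions(observations, goal, max_step=0.1):
    """Batched actions steering each observed point toward the goal."""
    # observations: (B, O); only the position dims are compared to goal.
    deltas = goal - observations[:, :goal.shape[0]]
    # Clip per-dimension so each step stays inside the action bounds.
    return np.clip(deltas, -max_step, max_step)

acts = get_actions(np.zeros((2, 2)), np.array([1.0, 1.0]))
```

From the origin, every action saturates at the assumed `max_step` of 0.1 in both dimensions.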
def bc_point(ctxt=None, loss='log_prob'): """Run Behavioral Cloning on garage.envs.PointEnv. Args: ctxt (ExperimentContext): Provided by wrap_experiment. loss (str): Either 'log_prob' or 'mse'. """ trainer = Trainer(ctxt) goal = np.array([1., 1.]) env = PointEnv(goal=goal, max_e...
Run Behavioral Cloning on garage.envs.PointEnv. Args: ctxt (ExperimentContext): Provided by wrap_experiment. loss (str): Either 'log_prob' or 'mse'.
bc_point
python
rlworkgroup/garage
src/garage/examples/torch/bc_point.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/torch/bc_point.py
MIT
def bc_point(ctxt=None): """Run Behavioral Cloning on garage.envs.PointEnv. Args: ctxt (ExperimentContext): Provided by wrap_experiment. """ trainer = Trainer(ctxt) goal = np.array([1., 1.]) env = PointEnv(goal=goal, max_episode_length=200) expert = OptimalPolicy(env.spec, goal=goa...
Run Behavioral Cloning on garage.envs.PointEnv. Args: ctxt (ExperimentContext): Provided by wrap_experiment.
bc_point
python
rlworkgroup/garage
src/garage/examples/torch/bc_point_deterministic_policy.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/torch/bc_point_deterministic_policy.py
MIT
def ddpg_pendulum(ctxt=None, seed=1, lr=1e-4): """Train DDPG with InvertedDoublePendulum-v2 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to ...
Train DDPG with InvertedDoublePendulum-v2 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism. lr (float): L...
ddpg_pendulum
python
rlworkgroup/garage
src/garage/examples/torch/ddpg_pendulum.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/torch/ddpg_pendulum.py
MIT
def dqn_cartpole(ctxt=None, seed=24): """Train DQN with CartPole-v0 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by LocalRunner to create the snapshotter. seed (int): Used to seed the random number generator to produce ...
Train DQN with CartPole-v0 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by LocalRunner to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism.
dqn_cartpole
python
rlworkgroup/garage
src/garage/examples/torch/dqn_cartpole.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/torch/dqn_cartpole.py
MIT
def maml_ppo_half_cheetah_dir(ctxt, seed, epochs, episodes_per_task, meta_batch_size): """Set up environment and algorithm and run the task. Args: ctxt (ExperimentContext): The experiment configuration used by :class:`~Trainer` to create the :class:`~Snapshotte...
Set up environment and algorithm and run the task. Args: ctxt (ExperimentContext): The experiment configuration used by :class:`~Trainer` to create the :class:`~Snapshotter`. seed (int): Used to seed the random number generator to produce determinism. epochs (int): N...
maml_ppo_half_cheetah_dir
python
rlworkgroup/garage
src/garage/examples/torch/maml_ppo_half_cheetah_dir.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/torch/maml_ppo_half_cheetah_dir.py
MIT
def maml_trpo_half_cheetah_dir(ctxt, seed, epochs, episodes_per_task, meta_batch_size): """Set up environment and algorithm and run the task. Args: ctxt (ExperimentContext): The experiment configuration used by :class:`~Trainer` to create the :class:`~Snapshot...
Set up environment and algorithm and run the task. Args: ctxt (ExperimentContext): The experiment configuration used by :class:`~Trainer` to create the :class:`~Snapshotter`. seed (int): Used to seed the random number generator to produce determinism. epochs (int): N...
maml_trpo_half_cheetah_dir
python
rlworkgroup/garage
src/garage/examples/torch/maml_trpo_half_cheetah_dir.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/torch/maml_trpo_half_cheetah_dir.py
MIT
def maml_trpo_metaworld_ml10(ctxt, seed, epochs, episodes_per_task, meta_batch_size): """Set up environment and algorithm and run the task. Args: ctxt (ExperimentContext): The experiment configuration used by :class:`~Trainer` to create the :class:`~Snapshotter`...
Set up environment and algorithm and run the task. Args: ctxt (ExperimentContext): The experiment configuration used by :class:`~Trainer` to create the :class:`~Snapshotter`. seed (int): Used to seed the random number generator to produce determinism. epochs (int): N...
maml_trpo_metaworld_ml10
python
rlworkgroup/garage
src/garage/examples/torch/maml_trpo_metaworld_ml10.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/torch/maml_trpo_metaworld_ml10.py
MIT
def maml_trpo_metaworld_ml1_push(ctxt, seed, epochs, rollouts_per_task, meta_batch_size): """Set up environment and algorithm and run the task. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snaps...
Set up environment and algorithm and run the task. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism. epochs (int): Num...
maml_trpo_metaworld_ml1_push
python
rlworkgroup/garage
src/garage/examples/torch/maml_trpo_metaworld_ml1_push.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/torch/maml_trpo_metaworld_ml1_push.py
MIT
def mtppo_metaworld_mt10(ctxt, seed, epochs, batch_size, n_workers, n_tasks): """Set up environment and algorithm and run the task. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the ...
Set up environment and algorithm and run the task. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism. epochs (int): Num...
mtppo_metaworld_mt10
python
rlworkgroup/garage
src/garage/examples/torch/mtppo_metaworld_mt10.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/torch/mtppo_metaworld_mt10.py
MIT