Dataset schema (column / type / value stats, as flattened by the dataset viewer):

column      type     stats
code        string   lengths 66 – 870k
docstring   string   lengths 19 – 26.7k
func_name   string   lengths 1 – 138
language    string   1 class
repo        string   lengths 7 – 68
path        string   lengths 5 – 324
url         string   lengths 46 – 389
license     string   7 classes
def mtppo_metaworld_mt1_push(ctxt, seed, epochs, batch_size): """Set up environment and algorithm and run the task. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number ge...
Set up environment and algorithm and run the task. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism. epochs (int): Num...
mtppo_metaworld_mt1_push
python
rlworkgroup/garage
src/garage/examples/torch/mtppo_metaworld_mt1_push.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/torch/mtppo_metaworld_mt1_push.py
MIT
def mtppo_metaworld_mt50(ctxt, seed, epochs, batch_size, n_workers, n_tasks): """Set up environment and algorithm and run the task. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the ...
Set up environment and algorithm and run the task. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism. epochs (int): Num...
mtppo_metaworld_mt50
python
rlworkgroup/garage
src/garage/examples/torch/mtppo_metaworld_mt50.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/torch/mtppo_metaworld_mt50.py
MIT
def mtsac_metaworld_mt10(ctxt=None, *, seed, _gpu, n_tasks, timesteps): """Train MTSAC with MT10 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generato...
Train MTSAC with MT10 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism. _gpu (int): The ID of the gpu to ...
mtsac_metaworld_mt10
python
rlworkgroup/garage
src/garage/examples/torch/mtsac_metaworld_mt10.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/torch/mtsac_metaworld_mt10.py
MIT
def mtsac_metaworld_mt1_pick_place(ctxt=None, *, seed, timesteps, _gpu): """Train MTSAC with the MT1 pick-place-v1 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the rand...
Train MTSAC with the MT1 pick-place-v1 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism. _gpu (int): The ...
mtsac_metaworld_mt1_pick_place
python
rlworkgroup/garage
src/garage/examples/torch/mtsac_metaworld_mt1_pick_place.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/torch/mtsac_metaworld_mt1_pick_place.py
MIT
def mtsac_metaworld_mt50(ctxt=None, *, seed, use_gpu, _gpu, n_tasks, timesteps): """Train MTSAC with MT50 environment. Args: ctxt (garage.experiment.Expe...
Train MTSAC with MT50 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism. use_gpu (bool): Used to enable us...
mtsac_metaworld_mt50
python
rlworkgroup/garage
src/garage/examples/torch/mtsac_metaworld_mt50.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/torch/mtsac_metaworld_mt50.py
MIT
def pearl_half_cheetah_vel(ctxt=None, seed=1, num_epochs=500, num_train_tasks=100, num_test_tasks=100, latent_size=5, encoder_hidden_size=200, ...
Train PEARL with HalfCheetahVel environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism. num_epochs (int): Numbe...
pearl_half_cheetah_vel
python
rlworkgroup/garage
src/garage/examples/torch/pearl_half_cheetah_vel.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/torch/pearl_half_cheetah_vel.py
MIT
def pearl_metaworld_ml10(ctxt=None, seed=1, num_epochs=1000, num_train_tasks=10, latent_size=7, encoder_hidden_size=200, net_size=300, meta_batch...
Train PEARL with ML10 environments. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism. num_epochs (int): Number of trai...
pearl_metaworld_ml10
python
rlworkgroup/garage
src/garage/examples/torch/pearl_metaworld_ml10.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/torch/pearl_metaworld_ml10.py
MIT
def pearl_metaworld_ml1_push(ctxt=None, seed=1, num_epochs=1000, num_train_tasks=50, latent_size=7, encoder_hidden_size=200, net_size=300, ...
Train PEARL with ML1 environments. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism. num_epochs (int): Number of train...
pearl_metaworld_ml1_push
python
rlworkgroup/garage
src/garage/examples/torch/pearl_metaworld_ml1_push.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/torch/pearl_metaworld_ml1_push.py
MIT
def pearl_metaworld_ml45(ctxt=None, seed=1, num_epochs=1000, num_train_tasks=45, latent_size=7, encoder_hidden_size=200, net_size=300, meta_batch...
Train PEARL with ML45 environments. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism. num_epochs (int): Number of trai...
pearl_metaworld_ml45
python
rlworkgroup/garage
src/garage/examples/torch/pearl_metaworld_ml45.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/torch/pearl_metaworld_ml45.py
MIT
def sac_half_cheetah_batch(ctxt=None, seed=1): """Set up environment and algorithm and run the task. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to prod...
Set up environment and algorithm and run the task. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism.
sac_half_cheetah_batch
python
rlworkgroup/garage
src/garage/examples/torch/sac_half_cheetah_batch.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/torch/sac_half_cheetah_batch.py
MIT
def td3_half_cheetah(ctxt=None, seed=1): """Train TD3 with HalfCheetah-v2 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by LocalRunner to create the snapshotter. seed (int): Used to seed the random number generator to pro...
Train TD3 with HalfCheetah-v2 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by LocalRunner to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism.
td3_half_cheetah
python
rlworkgroup/garage
src/garage/examples/torch/td3_halfcheetah.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/torch/td3_halfcheetah.py
MIT
def td3_pendulum(ctxt=None, seed=1): """Train TD3 with InvertedDoublePendulum-v2 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by LocalRunner to create the snapshotter. seed (int): Used to seed the random number generator to produce...
Train TD3 with InvertedDoublePendulum-v2 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by LocalRunner to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism.
td3_pendulum
python
rlworkgroup/garage
src/garage/examples/torch/td3_pendulum.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/torch/td3_pendulum.py
MIT
def trpo_pendulum(ctxt=None, seed=1): """Train TRPO with InvertedDoublePendulum-v2 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce ...
Train TRPO with InvertedDoublePendulum-v2 environment. Args: ctxt (garage.experiment.ExperimentContext): The experiment configuration used by Trainer to create the snapshotter. seed (int): Used to seed the random number generator to produce determinism.
trpo_pendulum
python
rlworkgroup/garage
src/garage/examples/torch/trpo_pendulum.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/torch/trpo_pendulum.py
MIT
def _train_once(self, samples): """Perform one step of policy optimization given one batch of samples. Args: samples (list[dict]): A list of collected paths. Returns: numpy.float64: Average return. """ losses = [] self._policy_opt.zero_grad() ...
Perform one step of policy optimization given one batch of samples. Args: samples (list[dict]): A list of collected paths. Returns: numpy.float64: Average return.
_train_once
python
rlworkgroup/garage
src/garage/examples/torch/tutorial_vpg.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/torch/tutorial_vpg.py
MIT
def watch_atari(saved_dir, env=None, num_episodes=10): """Watch a trained agent play an atari game. Args: saved_dir (str): Directory containing the pickle file. env (str): Environment to run episodes on. If None, the pickled environment is used. num_episodes (int): Number of...
Watch a trained agent play an atari game. Args: saved_dir (str): Directory containing the pickle file. env (str): Environment to run episodes on. If None, the pickled environment is used. num_episodes (int): Number of episodes to play. Note that when using the Episod...
watch_atari
python
rlworkgroup/garage
src/garage/examples/torch/watch_atari.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/examples/torch/watch_atari.py
MIT
def set_seed(seed): """Set the process-wide random seed. Args: seed (int): A positive integer """ seed %= 4294967294 # pylint: disable=global-statement global seed_ global seed_stream_ seed_ = seed random.seed(seed) np.random.seed(seed) if 'tensorflow' in sys.module...
Set the process-wide random seed. Args: seed (int): A positive integer
set_seed
python
rlworkgroup/garage
src/garage/experiment/deterministic.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/experiment/deterministic.py
MIT
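The body of `set_seed` is cut off in the row above; a minimal runnable sketch of the same process-wide seeding pattern, keeping only the stdlib and NumPy branches (the full source additionally seeds TensorFlow and PyTorch when those modules appear in `sys.modules`), might look like:

```python
import random

import numpy as np


def set_seed(seed):
    """Seed the process-wide RNGs (stdlib and NumPy only in this sketch)."""
    seed %= 4294967294  # keep the seed in the range TensorFlow accepts
    random.seed(seed)
    np.random.seed(seed)
```

Calling it twice with the same seed reproduces the same stream of random draws.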
def get_tf_seed_stream(): """Get the pseudo-random number generator (PRNG) for TensorFlow ops. Returns: int: A seed generated by a PRNG with fixed global seed. """ if seed_stream_ is None: set_seed(0) return seed_stream_() % 4294967294
Get the pseudo-random number generator (PRNG) for TensorFlow ops. Returns: int: A seed generated by a PRNG with fixed global seed.
get_tf_seed_stream
python
rlworkgroup/garage
src/garage/experiment/deterministic.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/experiment/deterministic.py
MIT
def _make_sequential_log_dir(log_dir): """Creates log_dir, appending a number if necessary. Attempts to create the directory `log_dir`. If it already exists, appends "_1". If that already exists, appends "_2" instead, etc. Args: log_dir (str): The log directory to attempt to create. Retur...
Creates log_dir, appending a number if necessary. Attempts to create the directory `log_dir`. If it already exists, appends "_1". If that already exists, appends "_2" instead, etc. Args: log_dir (str): The log directory to attempt to create. Returns: str: The log directory actually cr...
_make_sequential_log_dir
python
rlworkgroup/garage
src/garage/experiment/experiment.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/experiment/experiment.py
MIT
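The body of `_make_sequential_log_dir` is truncated in the row above; a sketch of the behavior the docstring describes (try `log_dir`, then append `_1`, `_2`, ... until creation succeeds) could look like this. The retry loop and exception handling are assumptions, not the library's actual implementation:

```python
import os
import tempfile


def make_sequential_log_dir(log_dir):
    """Create log_dir, appending _1, _2, ... if it already exists."""
    i = 0
    while True:
        candidate = log_dir if i == 0 else '{}_{}'.format(log_dir, i)
        try:
            os.makedirs(candidate)  # raises FileExistsError if taken
            return candidate
        except FileExistsError:
            i += 1
```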
def _make_experiment_signature(function): """Generate an ExperimentTemplate's signature from its function. Checks that the first parameter is named ctxt and removes it from the signature. Makes all other parameters keyword only. Args: function (callable[ExperimentContext, ...]): The wrapped fu...
Generate an ExperimentTemplate's signature from its function. Checks that the first parameter is named ctxt and removes it from the signature. Makes all other parameters keyword only. Args: function (callable[ExperimentContext, ...]): The wrapped function. Returns: inspect.Signature: ...
_make_experiment_signature
python
rlworkgroup/garage
src/garage/experiment/experiment.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/experiment/experiment.py
MIT
def _update_wrap_params(self): """Update self to "look like" the wrapped function. Mostly, this involves creating a function signature for the ExperimentTemplate that looks like the wrapped function, but with the first argument (ctxt) excluded, and all other arguments required to be ...
Update self to "look like" the wrapped function. Mostly, this involves creating a function signature for the ExperimentTemplate that looks like the wrapped function, but with the first argument (ctxt) excluded, and all other arguments required to be keyword only.
_update_wrap_params
python
rlworkgroup/garage
src/garage/experiment/experiment.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/experiment/experiment.py
MIT
def _augment_name(cls, options, name, params): """Augment the experiment name with parameters. Args: options (dict): Options to `wrap_experiment` itself. See the function documentation for details. name (str): Name without parameter names. params (dic...
Augment the experiment name with parameters. Args: options (dict): Options to `wrap_experiment` itself. See the function documentation for details. name (str): Name without parameter names. params (dict): Dictionary of parameters. Raises: ...
_augment_name
python
rlworkgroup/garage
src/garage/experiment/experiment.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/experiment/experiment.py
MIT
def _get_options(self, *args): """Get the options for wrap_experiment. This method combines options passed to `wrap_experiment` itself and to the wrapped experiment. Args: args (list[dict]): Unnamed arguments to the wrapped experiment. May be an empty list o...
Get the options for wrap_experiment. This method combines options passed to `wrap_experiment` itself and to the wrapped experiment. Args: args (list[dict]): Unnamed arguments to the wrapped experiment. May be an empty list or a list containing a single dictionary. ...
_get_options
python
rlworkgroup/garage
src/garage/experiment/experiment.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/experiment/experiment.py
MIT
def _make_context(cls, options, **kwargs): """Make a context from the template information and variant args. Currently, all arguments should be keyword arguments. Args: options (dict): Options to `wrap_experiment` itself. See the function documentation for details. ...
Make a context from the template information and variant args. Currently, all arguments should be keyword arguments. Args: options (dict): Options to `wrap_experiment` itself. See the function documentation for details. kwargs (dict): Keyword arguments for the w...
_make_context
python
rlworkgroup/garage
src/garage/experiment/experiment.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/experiment/experiment.py
MIT
def __call__(self, *args, **kwargs): """Wrap a function to turn it into an ExperimentTemplate. Note that this docstring will be overridden to match the function's docstring on the ExperimentTemplate once a function is passed in. Args: args (list): If no function has been set...
Wrap a function to turn it into an ExperimentTemplate. Note that this docstring will be overridden to match the function's docstring on the ExperimentTemplate once a function is passed in. Args: args (list): If no function has been set yet, must be a list containing ...
__call__
python
rlworkgroup/garage
src/garage/experiment/experiment.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/experiment/experiment.py
MIT
def dump_json(filename, data): """Dump a dictionary to a file in JSON format. Args: filename(str): Filename for the file. data(dict): Data to save to file. """ pathlib.Path(os.path.dirname(filename)).mkdir(parents=True, exist_ok=True) with open(filename, 'w') as f: # We do ...
Dump a dictionary to a file in JSON format. Args: filename(str): Filename for the file. data(dict): Data to save to file.
dump_json
python
rlworkgroup/garage
src/garage/experiment/experiment.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/experiment/experiment.py
MIT
def make_launcher_archive(*, git_root_path, log_dir): """Saves an archive of the launcher's git repo to the log directory. Args: git_root_path (str): Absolute path to git repo to archive. log_dir (str): Absolute path to the log directory. """ git_files = subprocess.check_output( ...
Saves an archive of the launcher's git repo to the log directory. Args: git_root_path (str): Absolute path to git repo to archive. log_dir (str): Absolute path to the log directory.
make_launcher_archive
python
rlworkgroup/garage
src/garage/experiment/experiment.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/experiment/experiment.py
MIT
def default(self, o): """Perform JSON encoding. Args: o (object): Object to encode. Raises: TypeError: If `o` cannot be turned into JSON even using `repr(o)`. Returns: dict or str or float or bool: Object encoded in JSON. """ # Why ...
Perform JSON encoding. Args: o (object): Object to encode. Raises: TypeError: If `o` cannot be turned into JSON even using `repr(o)`. Returns: dict or str or float or bool: Object encoded in JSON.
default
python
rlworkgroup/garage
src/garage/experiment/experiment.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/experiment/experiment.py
MIT
def _default_inner(self, o): """Perform JSON encoding. Args: o (object): Object to encode. Raises: TypeError: If `o` cannot be turned into JSON even using `repr(o)`. ValueError: If raised by calling repr on an object. Returns: dict or st...
Perform JSON encoding. Args: o (object): Object to encode. Raises: TypeError: If `o` cannot be turned into JSON even using `repr(o)`. ValueError: If raised by calling repr on an object. Returns: dict or str or float or bool: Object encoded in JS...
_default_inner
python
rlworkgroup/garage
src/garage/experiment/experiment.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/experiment/experiment.py
MIT
def evaluate(self, algo, test_episodes_per_task=None): """Evaluate the Meta-RL algorithm on the test tasks. Args: algo (MetaRLAlgorithm): The algorithm to evaluate. test_episodes_per_task (int or None): Number of episodes per task. """ if test_episodes_per_task ...
Evaluate the Meta-RL algorithm on the test tasks. Args: algo (MetaRLAlgorithm): The algorithm to evaluate. test_episodes_per_task (int or None): Number of episodes per task.
evaluate
python
rlworkgroup/garage
src/garage/experiment/meta_evaluator.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/experiment/meta_evaluator.py
MIT
def save_itr_params(self, itr, params): """Save the parameters if at the right iteration. Args: itr (int): Number of iterations. Used as the index of snapshot. params (obj): Content of snapshot to be saved. Raises: ValueError: If snapshot_mode is not one of ...
Save the parameters if at the right iteration. Args: itr (int): Number of iterations. Used as the index of snapshot. params (obj): Content of snapshot to be saved. Raises: ValueError: If snapshot_mode is not one of "all", "last", "gap", "gap_overwrit...
save_itr_params
python
rlworkgroup/garage
src/garage/experiment/snapshotter.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/experiment/snapshotter.py
MIT
def load(self, load_dir, itr='last'): # pylint: disable=no-self-use """Load one snapshot of parameters from disk. Args: load_dir (str): Directory of the cloudpickle file to resume experiment from. itr (int or string): Iteration to load. Ca...
Load one snapshot of parameters from disk. Args: load_dir (str): Directory of the cloudpickle file to resume experiment from. itr (int or string): Iteration to load. Can be an integer, 'last' or 'first'. Returns: dict: Loaded snapshot...
load
python
rlworkgroup/garage
src/garage/experiment/snapshotter.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/experiment/snapshotter.py
MIT
def _extract_snapshot_itr(filename: str) -> int: """Extracts the integer itr from a filename. Args: filename(str): The snapshot filename. Returns: int: The snapshot as an integer. """ base = os.path.splitext(filename)[0] digits = base.split('itr_')[1] return int(digits)
Extracts the integer itr from a filename. Args: filename(str): The snapshot filename. Returns: int: The snapshot as an integer.
_extract_snapshot_itr
python
rlworkgroup/garage
src/garage/experiment/snapshotter.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/experiment/snapshotter.py
MIT
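The full body of `_extract_snapshot_itr` fits inside the row above; restated as a runnable snippet with a usage example (renamed without the leading underscore for the sketch):

```python
import os


def extract_snapshot_itr(filename):
    """Pull the integer N out of a snapshot filename like 'itr_N.pkl'."""
    base = os.path.splitext(filename)[0]  # drop the extension
    digits = base.split('itr_')[1]        # keep what follows 'itr_'
    return int(digits)


print(extract_snapshot_itr('itr_42.pkl'))  # → 42
```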
def _sample_indices(n_to_sample, n_available_tasks, with_replacement): """Select indices of tasks to sample. Args: n_to_sample (int): Number of environments to sample. May be greater than n_available_tasks. n_available_tasks (int): Number of available tasks. Task indices will ...
Select indices of tasks to sample. Args: n_to_sample (int): Number of environments to sample. May be greater than n_available_tasks. n_available_tasks (int): Number of available tasks. Task indices will be selected in the range [0, n_available_tasks). with_replacemen...
_sample_indices
python
rlworkgroup/garage
src/garage/experiment/task_sampler.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/experiment/task_sampler.py
MIT
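The body of `_sample_indices` is truncated above; a sketch consistent with the docstring — uniform draws with replacement, and, without replacement, whole shuffled permutations concatenated so no task repeats before every task has appeared — could be written as follows. The concrete loop is an assumption:

```python
import numpy as np


def sample_indices(n_to_sample, n_available_tasks, with_replacement):
    """Pick task indices in [0, n_available_tasks)."""
    if with_replacement:
        return np.random.randint(n_available_tasks, size=n_to_sample)
    # Without replacement: tile whole shuffled permutations, so a task
    # can only repeat after every task has been sampled once.
    blocks = []
    total = 0
    while total < n_to_sample:
        blocks.append(np.random.permutation(n_available_tasks))
        total += n_available_tasks
    return np.concatenate(blocks)[:n_to_sample]
```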
def sample(self, n_tasks, with_replacement=False): """Sample a list of environment updates. Args: n_tasks (int): Number of updates to sample. with_replacement (bool): Whether tasks can repeat when sampled. Note that if more tasks are sampled than exist, then task...
Sample a list of environment updates. Args: n_tasks (int): Number of updates to sample. with_replacement (bool): Whether tasks can repeat when sampled. Note that if more tasks are sampled than exist, then tasks may repeat, but only after every environment...
sample
python
rlworkgroup/garage
src/garage/experiment/task_sampler.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/experiment/task_sampler.py
MIT
def sample(self, n_tasks, with_replacement=False): """Sample a list of environment updates. Args: n_tasks (int): Number of updates to sample. with_replacement (bool): Whether tasks can repeat when sampled. Since this cannot be easily implemented for an object poo...
Sample a list of environment updates. Args: n_tasks (int): Number of updates to sample. with_replacement (bool): Whether tasks can repeat when sampled. Since this cannot be easily implemented for an object pool, setting this to True results in ValueError....
sample
python
rlworkgroup/garage
src/garage/experiment/task_sampler.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/experiment/task_sampler.py
MIT
def grow_pool(self, new_size): """Increase the size of the pool by copying random tasks in it. Note that this only copies the tasks already in the pool, and cannot create new original tasks in any way. Args: new_size (int): Size the pool should be after growing. "...
Increase the size of the pool by copying random tasks in it. Note that this only copies the tasks already in the pool, and cannot create new original tasks in any way. Args: new_size (int): Size the pool should be after growing.
grow_pool
python
rlworkgroup/garage
src/garage/experiment/task_sampler.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/experiment/task_sampler.py
MIT
def sample(self, n_tasks, with_replacement=False): """Sample a list of environment updates. Note that this will always return environments in the same order, to make parallel sampling across workers efficient. If randomizing the environment order is required, shuffle the result of this ...
Sample a list of environment updates. Note that this will always return environments in the same order, to make parallel sampling across workers efficient. If randomizing the environment order is required, shuffle the result of this method. Args: n_tasks (int): Number of up...
sample
python
rlworkgroup/garage
src/garage/experiment/task_sampler.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/experiment/task_sampler.py
MIT
def wrap(env, task): """Wrap an environment in a metaworld benchmark. Args: env (gym.Env): A metaworld / gym environment. task (metaworld.Task): A metaworld task. Returns: garage.Env: The wrapped environment. """ ...
Wrap an environment in a metaworld benchmark. Args: env (gym.Env): A metaworld / gym environment. task (metaworld.Task): A metaworld task. Returns: garage.Env: The wrapped environment.
wrap
python
rlworkgroup/garage
src/garage/experiment/task_sampler.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/experiment/task_sampler.py
MIT
def explained_variance_1d(ypred, y, valids=None): """Explained variation for 1D inputs. It is the proportion of the variance in one variable that is explained or predicted from another variable. Args: ypred (np.ndarray): Sample data from the first variable. Shape: :math:`(N, max_ep...
Explained variation for 1D inputs. It is the proportion of the variance in one variable that is explained or predicted from another variable. Args: ypred (np.ndarray): Sample data from the first variable. Shape: :math:`(N, max_episode_length)`. y (np.ndarray): Sample data from ...
explained_variance_1d
python
rlworkgroup/garage
src/garage/np/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/_functions.py
MIT
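The row above truncates `explained_variance_1d`; the quantity it describes is the standard explained-variance score, 1 - Var(y - ypred) / Var(y). A sketch with a plausible constant-target edge case (the exact edge-case handling and epsilon in the library are assumptions here):

```python
import numpy as np


def explained_variance_1d(ypred, y):
    """1 - Var(y - ypred) / Var(y), the fraction of variance explained."""
    vary = np.var(y)
    if np.isclose(vary, 0):
        # A constant target is 'explained' only by a constant prediction.
        return 1.0 if np.isclose(np.var(ypred), 0) else 0.0
    return 1 - np.var(y - ypred) / vary
```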
def rrse(actual, predicted): """Root Relative Squared Error. Args: actual (np.ndarray): The actual value. predicted (np.ndarray): The predicted value. Returns: float: The root relative square error between the actual and the predicted value. """ return np.sqrt(...
Root Relative Squared Error. Args: actual (np.ndarray): The actual value. predicted (np.ndarray): The predicted value. Returns: float: The root relative square error between the actual and the predicted value.
rrse
python
rlworkgroup/garage
src/garage/np/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/_functions.py
MIT
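The `rrse` row cuts off mid-expression; the docstring names the standard root relative squared error, which normalizes squared prediction error by the squared deviation of the actual values from their mean. A self-contained sketch of that formula:

```python
import numpy as np


def rrse(actual, predicted):
    """sqrt( sum (a - p)^2 / sum (a - mean(a))^2 )."""
    return np.sqrt(
        np.sum(np.square(actual - predicted)) /
        np.sum(np.square(actual - np.mean(actual))))
```

Predicting the mean of `actual` everywhere yields exactly 1.0, the natural baseline for this metric.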
def sliding_window(t, window, smear=False): """Create a sliding window over a tensor. Args: t (np.ndarray): A tensor to create sliding window from, with shape :math:`(N, D)`, where N is the length of a trajectory, D is the dimension of each step in trajectory. window (in...
Create a sliding window over a tensor. Args: t (np.ndarray): A tensor to create sliding window from, with shape :math:`(N, D)`, where N is the length of a trajectory, D is the dimension of each step in trajectory. window (int): Window size, mush be less than N. smear...
sliding_window
python
rlworkgroup/garage
src/garage/np/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/_functions.py
MIT
def flatten_tensors(tensors): """Flatten a list of tensors. Args: tensors (list[numpy.ndarray]): List of tensors to be flattened. Returns: numpy.ndarray: Flattened tensors. Example: .. testsetup:: from garage.np import flatten_tensors >>> flatten_tensors([np.ndarray...
Flatten a list of tensors. Args: tensors (list[numpy.ndarray]): List of tensors to be flattened. Returns: numpy.ndarray: Flattened tensors. Example: .. testsetup:: from garage.np import flatten_tensors >>> flatten_tensors([np.ndarray([1]), np.ndarray([1])]) array(.....
flatten_tensors
python
rlworkgroup/garage
src/garage/np/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/_functions.py
MIT
def unflatten_tensors(flattened, tensor_shapes): """Unflatten flattened tensors into a list of tensors. Args: flattened (numpy.ndarray): Flattened tensors. tensor_shapes (tuple): Tensor shapes. Returns: list[numpy.ndarray]: Unflattened list of tensors. """ tensor_sizes =...
Unflatten flattened tensors into a list of tensors. Args: flattened (numpy.ndarray): Flattened tensors. tensor_shapes (tuple): Tensor shapes. Returns: list[numpy.ndarray]: Unflattened list of tensors.
unflatten_tensors
python
rlworkgroup/garage
src/garage/np/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/_functions.py
MIT
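`flatten_tensors` and `unflatten_tensors` form a round trip; their bodies are truncated in the rows above, but the documented contract admits a short sketch (the split-point arithmetic here is an assumption about the implementation):

```python
import numpy as np


def flatten_tensors(tensors):
    """Concatenate every tensor as one 1-D vector."""
    return np.concatenate([np.reshape(t, [-1]) for t in tensors])


def unflatten_tensors(flattened, tensor_shapes):
    """Split a flat vector back into tensors of the given shapes."""
    sizes = [int(np.prod(shape)) for shape in tensor_shapes]
    split_points = np.cumsum(sizes)[:-1]
    return [chunk.reshape(shape)
            for chunk, shape in zip(np.split(flattened, split_points),
                                    tensor_shapes)]
```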
def pad_tensor(x, max_len, mode='zero'): """Pad tensors. Args: x (numpy.ndarray): Tensors to be padded. max_len (int): Maximum length. mode (str): If 'last', pad with the last element, otherwise pad with 0. Returns: numpy.ndarray: Padded tensor. """ padding = np.ze...
Pad tensors. Args: x (numpy.ndarray): Tensors to be padded. max_len (int): Maximum length. mode (str): If 'last', pad with the last element, otherwise pad with 0. Returns: numpy.ndarray: Padded tensor.
pad_tensor
python
rlworkgroup/garage
src/garage/np/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/_functions.py
MIT
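`pad_tensor`'s body is truncated after `padding = np.ze...`; a sketch of the documented behavior (pad along axis 0 to `max_len` with zeros, or with the last element when `mode='last'`) — the tiling details are an assumption:

```python
import numpy as np


def pad_tensor(x, max_len, mode='zero'):
    """Pad x along axis 0 to max_len with zeros or its last element."""
    n_pad = max_len - len(x)
    pad_value = x[-1] if mode == 'last' else np.zeros_like(x[0])
    reps = (n_pad,) + (1,) * (x.ndim - 1)  # repeat only along axis 0
    return np.concatenate([x, np.tile(pad_value, reps)])
```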
def pad_tensor_n(xs, max_len): """Pad array of tensors. Args: xs (numpy.ndarray): Tensors to be padded. max_len (int): Maximum length. Returns: numpy.ndarray: Padded tensor. """ ret = np.zeros((len(xs), max_len) + xs[0].shape[1:], dtype=xs[0].dtype) for idx, x in enume...
Pad array of tensors. Args: xs (numpy.ndarray): Tensors to be padded. max_len (int): Maximum length. Returns: numpy.ndarray: Padded tensor.
pad_tensor_n
python
rlworkgroup/garage
src/garage/np/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/_functions.py
MIT
def pad_tensor_dict(tensor_dict, max_len, mode='zero'): """Pad dictionary of tensors. Args: tensor_dict (dict[numpy.ndarray]): Tensors to be padded. max_len (int): Maximum length. mode (str): If 'last', pad with the last element, otherwise pad with 0. Returns: dict[numpy.nd...
Pad dictionary of tensors. Args: tensor_dict (dict[numpy.ndarray]): Tensors to be padded. max_len (int): Maximum length. mode (str): If 'last', pad with the last element, otherwise pad with 0. Returns: dict[numpy.ndarray]: Padded tensor.
pad_tensor_dict
python
rlworkgroup/garage
src/garage/np/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/_functions.py
MIT
def stack_tensor_dict_list(tensor_dict_list): """Stack a list of dictionaries of {tensors or dictionary of tensors}. Args: tensor_dict_list (dict[list]): a list of dictionaries of {tensors or dictionary of tensors}. Return: dict: a dictionary of {stacked tensors or dictionary o...
Stack a list of dictionaries of {tensors or dictionary of tensors}. Args: tensor_dict_list (dict[list]): a list of dictionaries of {tensors or dictionary of tensors}. Return: dict: a dictionary of {stacked tensors or dictionary of stacked tensors}
stack_tensor_dict_list
python
rlworkgroup/garage
src/garage/np/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/_functions.py
MIT
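The recursive structure of `stack_tensor_dict_list` — stack same-keyed arrays, recurse into nested dicts — can be sketched in a few lines. This is a plausible reconstruction of the truncated body, not the library's exact code:

```python
import numpy as np


def stack_tensor_dict_list(tensor_dict_list):
    """Recursively stack same-keyed values from a list of (nested) dicts."""
    ret = {}
    for k in tensor_dict_list[0]:
        values = [d[k] for d in tensor_dict_list]
        if isinstance(values[0], dict):
            ret[k] = stack_tensor_dict_list(values)  # recurse into sub-dicts
        else:
            ret[k] = np.array(values)                # stack leaf arrays
    return ret
```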
def stack_and_pad_tensor_dict_list(tensor_dict_list, max_len): """Stack and pad array of list of tensors. Input paths are a list of N dicts, each with values of shape :math:`(D, S^*)`. This function stacks and pads the values with the input key with max_len, so output will be shape :math:`(N, D, S^*)`. ...
Stack and pad array of list of tensors. Input paths are a list of N dicts, each with values of shape :math:`(D, S^*)`. This function stacks and pads the values with the input key with max_len, so output will be shape :math:`(N, D, S^*)`. Args: tensor_dict_list (list[dict]): List of dict to be st...
stack_and_pad_tensor_dict_list
python
rlworkgroup/garage
src/garage/np/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/_functions.py
MIT
def concat_tensor_dict_list(tensor_dict_list): """Concatenate a list of dictionaries of {tensors or dictionary of tensors}. Args: tensor_dict_list (list[dict]): a list of dictionaries of {tensors or dictionary of tensors}. Return: dict: a dictionary of {concatenated tensors or dictionary of concatenated ten...
Concatenate a list of dictionaries of {tensors or dictionary of tensors}. Args: tensor_dict_list (list[dict]): a list of dictionaries of {tensors or dictionary of tensors}. Return: dict: a dictionary of {concatenated tensors or dictionary of concatenated tensors}
concat_tensor_dict_list
python
rlworkgroup/garage
src/garage/np/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/_functions.py
MIT
def truncate_tensor_dict(tensor_dict, truncated_len): """Truncate a dictionary of {tensors or dictionary of tensors}. Args: tensor_dict (dict[numpy.ndarray]): a dictionary of {tensors or dictionary of tensors}. truncated_len (int): Length to truncate to. Return: dict: a dictionary of {truncated t...
Truncate a dictionary of {tensors or dictionary of tensors}. Args: tensor_dict (dict[numpy.ndarray]): a dictionary of {tensors or dictionary of tensors}. truncated_len (int): Length to truncate to. Return: dict: a dictionary of {truncated tensors or dictionary of truncated tensors}
truncate_tensor_dict
python
rlworkgroup/garage
src/garage/np/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/_functions.py
MIT
def slice_nested_dict(dict_or_array, start, stop): """Slice a dictionary containing arrays (or dictionaries). This function is primarily intended for un-batching env_infos and action_infos. Args: dict_or_array (dict[str, dict or np.ndarray] or np.ndarray): A nested dictionary shoul...
Slice a dictionary containing arrays (or dictionaries). This function is primarily intended for un-batching env_infos and action_infos. Args: dict_or_array (dict[str, dict or np.ndarray] or np.ndarray): A nested dictionary should only contain dictionaries and numpy arrays (...
slice_nested_dict
python
rlworkgroup/garage
src/garage/np/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/_functions.py
MIT
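The `slice_nested_dict` docstring above describes un-batching env_infos by applying one slice throughout a nested structure. A short sketch of that idea (the recursive form is my reading of the docstring, not the verbatim garage code):

```python
import numpy as np

def slice_nested_dict(dict_or_array, start, stop):
    """Recursively apply the same [start:stop] slice to every array
    inside a nested dictionary (or to a bare array)."""
    if isinstance(dict_or_array, dict):
        return {k: slice_nested_dict(v, start, stop)
                for k, v in dict_or_array.items()}
    return dict_or_array[start:stop]

env_infos = {'success': np.array([0, 0, 1, 1]),
             'task': {'id': np.array([7, 7, 7, 7])}}
sliced = slice_nested_dict(env_infos, 1, 3)
```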
def pad_batch_array(array, lengths, max_length=None): r"""Convert a packed array into a padded array with one more dimension. Args: array (np.ndarray): Array of length :math:`(N \bullet [T], X^*)` lengths (list[int]): List of length :math:`N` containing the length of each episode in the b...
Convert a packed array into a padded array with one more dimension. Args: array (np.ndarray): Array of length :math:`(N \bullet [T], X^*)` lengths (list[int]): List of length :math:`N` containing the length of each episode in the batch array. max_length (int): Defaults to max(lengths)...
pad_batch_array
python
rlworkgroup/garage
src/garage/np/_functions.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/_functions.py
MIT
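To make the packed-to-padded conversion above concrete, here is a minimal sketch (zero-padding and the slicing loop are assumptions about the intended semantics, not the garage source):

```python
import numpy as np

def pad_batch_array(array, lengths, max_length=None):
    """Split a packed array of shape (sum(lengths), *X) by episode lengths
    and zero-pad each episode to max_length, yielding (N, max_length, *X)."""
    if max_length is None:
        max_length = max(lengths)
    padded = np.zeros((len(lengths), max_length) + array.shape[1:],
                      dtype=array.dtype)
    start = 0
    for i, length in enumerate(lengths):
        padded[i, :length] = array[start:start + length]
        start += length
    return padded

packed = np.arange(5.0)              # two episodes packed back to back
padded = pad_batch_array(packed, [2, 3])
```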
def train(self, trainer): """Initialize variables and start training. Args: trainer (Trainer): Experiment trainer, which provides services such as snapshotting and sampler control. Returns: float: The average return in last epoch cycle. """ ...
Initialize variables and start training. Args: trainer (Trainer): Experiment trainer, which provides services such as snapshotting and sampler control. Returns: float: The average return in last epoch cycle.
train
python
rlworkgroup/garage
src/garage/np/algos/cem.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/algos/cem.py
MIT
def _train_once(self, itr, episodes): """Perform one step of policy optimization given one batch of samples. Args: itr (int): Iteration number. episodes (garage.EpisodeBatch): Episodes collected using the current policy. Returns: float: The a...
Perform one step of policy optimization given one batch of samples. Args: itr (int): Iteration number. episodes (garage.EpisodeBatch): Episodes collected using the current policy. Returns: float: The average return of epoch cycle.
_train_once
python
rlworkgroup/garage
src/garage/np/algos/cem.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/algos/cem.py
MIT
def train(self, trainer): """Initialize variables and start training. Args: trainer (Trainer): Trainer is passed to give the algorithm access to trainer.step_epochs(), which provides services such as snapshotting and sampler control. Returns: ...
Initialize variables and start training. Args: trainer (Trainer): Trainer is passed to give the algorithm access to trainer.step_epochs(), which provides services such as snapshotting and sampler control. Returns: float: The average return in las...
train
python
rlworkgroup/garage
src/garage/np/algos/cma_es.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/algos/cma_es.py
MIT
def get_exploration_policy(self): """Return a policy used before adaptation to a specific task. Each time it is retrieved, this policy should only be evaluated in one task. Returns: Policy: The policy used to obtain samples, which are later used for meta-RL ...
Return a policy used before adaptation to a specific task. Each time it is retrieved, this policy should only be evaluated in one task. Returns: Policy: The policy used to obtain samples, which are later used for meta-RL adaptation.
get_exploration_policy
python
rlworkgroup/garage
src/garage/np/algos/meta_rl_algorithm.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/algos/meta_rl_algorithm.py
MIT
def adapt_policy(self, exploration_policy, exploration_episodes): """Produce a policy adapted for a task. Args: exploration_policy (Policy): A policy which was returned from get_exploration_policy(), and which generated exploration_trajectories by interacting...
Produce a policy adapted for a task. Args: exploration_policy (Policy): A policy which was returned from get_exploration_policy(), and which generated exploration_trajectories by interacting with an environment. The caller may not use this object afte...
adapt_policy
python
rlworkgroup/garage
src/garage/np/algos/meta_rl_algorithm.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/algos/meta_rl_algorithm.py
MIT
def optimize_policy(self, paths): """Optimize the policy using the samples. Args: paths (list[dict]): A list of collected paths. """
Optimize the policy using the samples. Args: paths (list[dict]): A list of collected paths.
optimize_policy
python
rlworkgroup/garage
src/garage/np/algos/nop.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/algos/nop.py
MIT
def train(self, trainer): """Obtain samplers and start actual training for each epoch. Args: trainer (Trainer): Trainer is passed to give the algorithm access to trainer.step_epochs(), which provides services such as snapshotting and sampler control. ...
Obtain samplers and start actual training for each epoch. Args: trainer (Trainer): Trainer is passed to give the algorithm access to trainer.step_epochs(), which provides services such as snapshotting and sampler control.
train
python
rlworkgroup/garage
src/garage/np/algos/nop.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/algos/nop.py
MIT
def fit(self, paths): """Fit regressor based on paths. Args: paths (dict[numpy.ndarray]): Sample paths. """
Fit regressor based on paths. Args: paths (dict[numpy.ndarray]): Sample paths.
fit
python
rlworkgroup/garage
src/garage/np/baselines/baseline.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/baselines/baseline.py
MIT
def predict(self, paths): """Predict value based on paths. Args: paths (dict[numpy.ndarray]): Sample paths. Returns: numpy.ndarray: Predicted value. """
Predict value based on paths. Args: paths (dict[numpy.ndarray]): Sample paths. Returns: numpy.ndarray: Predicted value.
predict
python
rlworkgroup/garage
src/garage/np/baselines/baseline.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/baselines/baseline.py
MIT
def _features(self, path): """Extract features from path. Args: path (list[dict]): Sample paths. Returns: numpy.ndarray: Extracted features. """ obs = np.clip(path['observations'], self.lower_bound, self.upper_bound) length = len(path['observati...
Extract features from path. Args: path (list[dict]): Sample paths. Returns: numpy.ndarray: Extracted features.
_features
python
rlworkgroup/garage
src/garage/np/baselines/linear_feature_baseline.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/baselines/linear_feature_baseline.py
MIT
def fit(self, paths): """Fit regressor based on paths. Args: paths (list[dict]): Sample paths. """ featmat = np.concatenate([self._features(path) for path in paths]) returns = np.concatenate([path['returns'] for path in paths]) reg_coeff = self._reg_coeff ...
Fit regressor based on paths. Args: paths (list[dict]): Sample paths.
fit
python
rlworkgroup/garage
src/garage/np/baselines/linear_feature_baseline.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/baselines/linear_feature_baseline.py
MIT
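The `fit` record above solves a regularized least-squares problem over path features, retrying with a larger regularizer when the solve fails. A self-contained sketch of that pattern (the exact retry count and coefficients here are illustrative assumptions):

```python
import numpy as np

def fit_linear_baseline(featmat, returns, reg_coeff=1e-5):
    """Ridge-style least squares: solve (X^T X + reg * I) w = X^T y,
    increasing the regularizer until the solution is finite."""
    for _ in range(5):
        coeffs = np.linalg.lstsq(
            featmat.T @ featmat + reg_coeff * np.eye(featmat.shape[1]),
            featmat.T @ returns,
            rcond=-1)[0]
        if not np.any(np.isnan(coeffs)):
            return coeffs
        reg_coeff *= 10
    return coeffs

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
w = fit_linear_baseline(X, y)  # close to [1, 2] for this consistent system
```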
def predict(self, paths): """Predict value based on paths. Args: paths (list[dict]): Sample paths. Returns: numpy.ndarray: Predicted value. """ if self._coeffs is None: return np.zeros(len(paths['observations'])) return self._feature...
Predict value based on paths. Args: paths (list[dict]): Sample paths. Returns: numpy.ndarray: Predicted value.
predict
python
rlworkgroup/garage
src/garage/np/baselines/linear_feature_baseline.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/baselines/linear_feature_baseline.py
MIT
def reset(self, do_resets=None): """Reset the encoder. This is effective only for recurrent encoders. do_resets is effective only for vectorized encoders. For a vectorized encoder, do_resets is an array of booleans indicating which internal states should be reset. The length of do_resets...
Reset the encoder. This is effective only for recurrent encoders. do_resets is effective only for vectorized encoders. For a vectorized encoder, do_resets is an array of booleans indicating which internal states should be reset. The length of do_resets should be equal to the length of in...
reset
python
rlworkgroup/garage
src/garage/np/embeddings/encoder.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/embeddings/encoder.py
MIT
def get_action(self, observation): """Get action from this policy for the input observation. Args: observation (numpy.ndarray): Observation from the environment. Returns: np.ndarray: An action with noise. dict: Arbitrary policy state information (agent_in...
Get action from this policy for the input observation. Args: observation (numpy.ndarray): Observation from the environment. Returns: np.ndarray: An action with noise. dict: Arbitrary policy state information (agent_info).
get_action
python
rlworkgroup/garage
src/garage/np/exploration_policies/add_gaussian_noise.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/add_gaussian_noise.py
MIT
def get_actions(self, observations): """Get actions from this policy for the input observations. Args: observations (list): Observations from the environment. Returns: np.ndarray: Actions with noise. List[dict]: Arbitrary policy state information (agent_info)....
Get actions from this policy for the input observations. Args: observations (list): Observations from the environment. Returns: np.ndarray: Actions with noise. List[dict]: Arbitrary policy state information (agent_info).
get_actions
python
rlworkgroup/garage
src/garage/np/exploration_policies/add_gaussian_noise.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/add_gaussian_noise.py
MIT
def _sigma(self): """Get the current sigma. Returns: double: Sigma. """ if self._total_env_steps >= self._decay_period: return self._min_sigma return self._max_sigma - self._decrement * self._total_env_steps
Get the current sigma. Returns: double: Sigma.
_sigma
python
rlworkgroup/garage
src/garage/np/exploration_policies/add_gaussian_noise.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/add_gaussian_noise.py
MIT
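The `_sigma` record above implements a linear anneal of the exploration noise scale. A standalone sketch of that schedule (the default parameter values here are placeholders, not garage defaults):

```python
def decayed_sigma(total_env_steps, max_sigma=1.0, min_sigma=0.1,
                  decay_period=100):
    """Linearly anneal sigma from max_sigma to min_sigma over decay_period
    environment steps, then hold it at min_sigma."""
    if total_env_steps >= decay_period:
        return min_sigma
    decrement = (max_sigma - min_sigma) / decay_period
    return max_sigma - decrement * total_env_steps

# Start of training, midway through the decay, and after it finishes.
sigmas = [decayed_sigma(t) for t in (0, 50, 200)]
```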
def update(self, episode_batch): """Update the exploration policy using a batch of trajectories. Args: episode_batch (EpisodeBatch): A batch of trajectories which were sampled with this policy active. """ self._total_env_steps = (self._last_total_env_steps +...
Update the exploration policy using a batch of trajectories. Args: episode_batch (EpisodeBatch): A batch of trajectories which were sampled with this policy active.
update
python
rlworkgroup/garage
src/garage/np/exploration_policies/add_gaussian_noise.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/add_gaussian_noise.py
MIT
def get_param_values(self): """Get parameter values. Returns: list or dict: Values of each parameter. """ return { 'total_env_steps': self._total_env_steps, 'inner_params': self.policy.get_param_values() }
Get parameter values. Returns: list or dict: Values of each parameter.
get_param_values
python
rlworkgroup/garage
src/garage/np/exploration_policies/add_gaussian_noise.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/add_gaussian_noise.py
MIT
def set_param_values(self, params): """Set param values. Args: params (np.ndarray): A numpy array of parameter values. """ self._total_env_steps = params['total_env_steps'] self.policy.set_param_values(params['inner_params']) self._last_total_env_steps = sel...
Set param values. Args: params (np.ndarray): A numpy array of parameter values.
set_param_values
python
rlworkgroup/garage
src/garage/np/exploration_policies/add_gaussian_noise.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/add_gaussian_noise.py
MIT
def _simulate(self): """Advance the OU process. Returns: np.ndarray: Updated OU process state. """ x = self._state dx = self._theta * (self._mu - x) * self._dt + self._sigma * np.sqrt( self._dt) * np.random.normal(size=len(x)) self._state = x + d...
Advance the OU process. Returns: np.ndarray: Updated OU process state.
_simulate
python
rlworkgroup/garage
src/garage/np/exploration_policies/add_ornstein_uhlenbeck_noise.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/add_ornstein_uhlenbeck_noise.py
MIT
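The `_simulate` record above advances an Ornstein-Uhlenbeck process by one Euler-Maruyama step. A functional sketch of the same update (the default hyperparameters and the seeded generator are illustrative assumptions):

```python
import numpy as np

def ou_step(state, mu=0.0, theta=0.15, sigma=0.3, dt=1e-2, rng=None):
    """One Euler-Maruyama step of an Ornstein-Uhlenbeck process:
    dx = theta * (mu - x) * dt + sigma * sqrt(dt) * N(0, 1)."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.asarray(state, dtype=float)
    dx = theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.normal(size=x.shape)
    return x + dx

# With sigma=0 the noise term vanishes and the drift pulls x toward mu.
next_state = ou_step(np.array([1.0, -1.0]), sigma=0.0)
```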
def get_action(self, observation): """Return an action with noise. Args: observation (np.ndarray): Observation from the environment. Returns: np.ndarray: An action with noise. dict: Arbitrary policy state information (agent_info). """ action...
Return an action with noise. Args: observation (np.ndarray): Observation from the environment. Returns: np.ndarray: An action with noise. dict: Arbitrary policy state information (agent_info).
get_action
python
rlworkgroup/garage
src/garage/np/exploration_policies/add_ornstein_uhlenbeck_noise.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/add_ornstein_uhlenbeck_noise.py
MIT
def get_actions(self, observations): """Return actions with noise. Args: observations (np.ndarray): Observation from the environment. Returns: np.ndarray: Actions with noise. List[dict]: Arbitrary policy state information (agent_info). """ a...
Return actions with noise. Args: observations (np.ndarray): Observation from the environment. Returns: np.ndarray: Actions with noise. List[dict]: Arbitrary policy state information (agent_info).
get_actions
python
rlworkgroup/garage
src/garage/np/exploration_policies/add_ornstein_uhlenbeck_noise.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/add_ornstein_uhlenbeck_noise.py
MIT
def get_action(self, observation): """Get action from this policy for the input observation. Args: observation (numpy.ndarray): Observation from the environment. Returns: np.ndarray: An action with noise. dict: Arbitrary policy state information (agent_info)...
Get action from this policy for the input observation. Args: observation (numpy.ndarray): Observation from the environment. Returns: np.ndarray: An action with noise. dict: Arbitrary policy state information (agent_info).
get_action
python
rlworkgroup/garage
src/garage/np/exploration_policies/epsilon_greedy_policy.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/epsilon_greedy_policy.py
MIT
def get_actions(self, observations): """Get actions from this policy for the input observations. Args: observations (numpy.ndarray): Observations from the environment. Returns: np.ndarray: Actions with noise. List[dict]: Arbitrary policy state information (ag...
Get actions from this policy for the input observations. Args: observations (numpy.ndarray): Observations from the environment. Returns: np.ndarray: Actions with noise. List[dict]: Arbitrary policy state information (agent_info).
get_actions
python
rlworkgroup/garage
src/garage/np/exploration_policies/epsilon_greedy_policy.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/epsilon_greedy_policy.py
MIT
def _epsilon(self): """Get the current epsilon. Returns: double: Epsilon. """ if self._total_env_steps >= self._decay_period: return self._min_epsilon return self._max_epsilon - self._decrement * self._total_env_steps
Get the current epsilon. Returns: double: Epsilon.
_epsilon
python
rlworkgroup/garage
src/garage/np/exploration_policies/epsilon_greedy_policy.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/exploration_policies/epsilon_greedy_policy.py
MIT
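The `_epsilon` record above anneals epsilon the same way `_sigma` anneals noise. The decayed value then drives the usual epsilon-greedy selection rule, sketched here (this selection helper is an assumption about how the policy uses epsilon, not the garage source):

```python
import numpy as np

def epsilon_greedy_action(q_values, epsilon, rng):
    """With probability epsilon pick a uniform random action,
    otherwise take the greedy argmax of the Q-values."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

rng = np.random.default_rng(0)
q = np.array([0.1, 0.9, 0.3])
greedy = epsilon_greedy_action(q, 0.0, rng)   # epsilon=0 is always greedy
random_a = epsilon_greedy_action(q, 1.0, rng)  # epsilon=1 is always random
```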
def reset(self, do_resets=None): """Reset policy. Args: do_resets (None or list[bool]): Vectorized policy states to reset. Raises: ValueError: If do_resets has length greater than 1. """ if do_resets is None: do_resets = [True] if le...
Reset policy. Args: do_resets (None or list[bool]): Vectorized policy states to reset. Raises: ValueError: If do_resets has length greater than 1.
reset
python
rlworkgroup/garage
src/garage/np/policies/fixed_policy.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/policies/fixed_policy.py
MIT
def get_action(self, observation): """Get next action. Args: observation (np.ndarray): Ignored. Raises: ValueError: If policy is currently vectorized (reset was called with more than one done value). Returns: tuple[np.ndarray, dict[s...
Get next action. Args: observation (np.ndarray): Ignored. Raises: ValueError: If policy is currently vectorized (reset was called with more than one done value). Returns: tuple[np.ndarray, dict[str, np.ndarray]]: The action and agent_info ...
get_action
python
rlworkgroup/garage
src/garage/np/policies/fixed_policy.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/policies/fixed_policy.py
MIT
def get_actions(self, observations): """Get next actions. Args: observations (np.ndarray): Ignored. Raises: ValueError: If observations has length greater than 1. Returns: tuple[np.ndarray, dict[str, np.ndarray]]: The actions and agent_info ...
Get next actions. Args: observations (np.ndarray): Ignored. Raises: ValueError: If observations has length greater than 1. Returns: tuple[np.ndarray, dict[str, np.ndarray]]: The actions and agent_info for this time step.
get_actions
python
rlworkgroup/garage
src/garage/np/policies/fixed_policy.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/policies/fixed_policy.py
MIT
def get_action(self, observation): """Get action sampled from the policy. Args: observation (np.ndarray): Observation from the environment. Returns: Tuple[np.ndarray, dict[str,np.ndarray]]: Actions and extra agent infos. """
Get action sampled from the policy. Args: observation (np.ndarray): Observation from the environment. Returns: Tuple[np.ndarray, dict[str,np.ndarray]]: Actions and extra agent infos.
get_action
python
rlworkgroup/garage
src/garage/np/policies/policy.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/policies/policy.py
MIT
def get_actions(self, observations): """Get actions given observations. Args: observations (torch.Tensor): Observations from the environment. Returns: Tuple[np.ndarray, dict[str,np.ndarray]]: Actions and extra agent infos. """
Get actions given observations. Args: observations (torch.Tensor): Observations from the environment. Returns: Tuple[np.ndarray, dict[str,np.ndarray]]: Actions and extra agent infos.
get_actions
python
rlworkgroup/garage
src/garage/np/policies/policy.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/policies/policy.py
MIT
def reset(self, do_resets=None): """Reset the policy. This is effective only for recurrent policies. do_resets is an array of booleans indicating which internal states should be reset. The length of do_resets should be equal to the length of inputs, i.e. batch size. Args: ...
Reset the policy. This is effective only for recurrent policies. do_resets is an array of booleans indicating which internal states should be reset. The length of do_resets should be equal to the length of inputs, i.e. the batch size. Args: do_resets (numpy.ndarray): Bool ar...
reset
python
rlworkgroup/garage
src/garage/np/policies/policy.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/policies/policy.py
MIT
def name(self): """Name of policy. Returns: str: Name of policy """
Name of policy. Returns: str: Name of policy
name
python
rlworkgroup/garage
src/garage/np/policies/policy.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/policies/policy.py
MIT
def env_spec(self): """Policy environment specification. Returns: garage.EnvSpec: Environment specification. """
Policy environment specification. Returns: garage.EnvSpec: Environment specification.
env_spec
python
rlworkgroup/garage
src/garage/np/policies/policy.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/policies/policy.py
MIT
def get_action(self, observation): """Return a single action. Args: observation (numpy.ndarray): Observations. Returns: int: Action given input observation. dict[dict]: Agent infos indexed by observation. """ if self._agent_env_infos: ...
Return a single action. Args: observation (numpy.ndarray): Observations. Returns: int: Action given input observation. dict[dict]: Agent infos indexed by observation.
get_action
python
rlworkgroup/garage
src/garage/np/policies/scripted_policy.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/policies/scripted_policy.py
MIT
def get_actions(self, observations): """Return multiple actions. Args: observations (numpy.ndarray): Observations. Returns: list[int]: Actions given input observations. dict[dict]: Agent info indexed by observation. """ if self._agent_env_in...
Return multiple actions. Args: observations (numpy.ndarray): Observations. Returns: list[int]: Actions given input observations. dict[dict]: Agent info indexed by observation.
get_actions
python
rlworkgroup/garage
src/garage/np/policies/scripted_policy.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/np/policies/scripted_policy.py
MIT
def init_plot(self, env, policy): """Initialize the plotter. Args: env (GymEnv): Environment to visualize. policy (Policy): Policy to roll out in the visualization. """ if not Plotter.enable: return if not (self._process and s...
Initialize the plotter. Args: env (GymEnv): Environment to visualize. policy (Policy): Policy to roll out in the visualization.
init_plot
python
rlworkgroup/garage
src/garage/plotter/plotter.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/plotter/plotter.py
MIT
def update_plot(self, policy, max_length=np.inf): """Update the plotter. Args: policy (garage.np.policies.Policy): New policy to roll out in the visualization. max_length (int): Maximum number of steps to roll out. """ if not Plotter.enable: ...
Update the plotter. Args: policy (garage.np.policies.Policy): New policy to roll out in the visualization. max_length (int): Maximum number of steps to roll out.
update_plot
python
rlworkgroup/garage
src/garage/plotter/plotter.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/plotter/plotter.py
MIT
def _sample_her_goals(self, path, transition_idx): """Samples HER goals from the given path. Goals are randomly sampled starting from the index after transition_idx in the given path. Args: path (dict[str, np.ndarray]): A dict containing the transition keys,...
Samples HER goals from the given path. Goals are randomly sampled starting from the index after transition_idx in the given path. Args: path (dict[str, np.ndarray]): A dict containing the transition keys, where each key contains an ndarray of shape :...
_sample_her_goals
python
rlworkgroup/garage
src/garage/replay_buffer/her_replay_buffer.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/her_replay_buffer.py
MIT
def add_path(self, path): """Adds a path to the replay buffer. For each transition in the given path except the last one, replay_k HER transitions will be added to the buffer in addition to the one in the path. The last transition is added without sampling additional HER goals. ...
Adds a path to the replay buffer. For each transition in the given path except the last one, replay_k HER transitions will be added to the buffer in addition to the one in the path. The last transition is added without sampling additional HER goals. Args: path (dict[str...
add_path
python
rlworkgroup/garage
src/garage/replay_buffer/her_replay_buffer.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/her_replay_buffer.py
MIT
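The two HER records above describe relabeling each transition with goals sampled from later steps of the same episode (the "future" strategy). A minimal sketch of that sampling step (function name and toy data are illustrative, not the garage implementation):

```python
import numpy as np

def sample_future_goals(achieved_goals, transition_idx, replay_k, rng):
    """'Future' HER strategy: draw replay_k substitute goals from the
    achieved goals at timesteps strictly after transition_idx."""
    future = np.arange(transition_idx + 1, len(achieved_goals))
    idx = rng.choice(future, size=replay_k)  # sampled with replacement
    return achieved_goals[idx]

rng = np.random.default_rng(0)
achieved = np.arange(10.0).reshape(10, 1)  # toy 1-D achieved goals
goals = sample_future_goals(achieved, transition_idx=3, replay_k=4, rng=rng)
```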
def add_episode_batch(self, episodes): """Add an EpisodeBatch to the buffer. Args: episodes (EpisodeBatch): Episodes to add. """ if self._env_spec is None: self._env_spec = episodes.env_spec env_spec = episodes.env_spec obs_space = env_spec.observ...
Add an EpisodeBatch to the buffer. Args: episodes (EpisodeBatch): Episodes to add.
add_episode_batch
python
rlworkgroup/garage
src/garage/replay_buffer/path_buffer.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/path_buffer.py
MIT
def add_path(self, path): """Add a path to the buffer. Args: path (dict): A dict of array of shape (path_len, flat_dim). Raises: ValueError: If a key is missing from path or path has wrong shape. """ for key, buf_arr in self._buffer.items(): ...
Add a path to the buffer. Args: path (dict): A dict of array of shape (path_len, flat_dim). Raises: ValueError: If a key is missing from path or path has wrong shape.
add_path
python
rlworkgroup/garage
src/garage/replay_buffer/path_buffer.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/path_buffer.py
MIT
def sample_path(self): """Sample a single path from the buffer. Returns: path: A dict of arrays of shape (path_len, flat_dim). """ path_idx = np.random.randint(len(self._path_segments)) first_seg, second_seg = self._path_segments[path_idx] first_seg_indices ...
Sample a single path from the buffer. Returns: path: A dict of arrays of shape (path_len, flat_dim).
sample_path
python
rlworkgroup/garage
src/garage/replay_buffer/path_buffer.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/path_buffer.py
MIT
def sample_transitions(self, batch_size): """Sample a batch of transitions from the buffer. Args: batch_size (int): Number of transitions to sample. Returns: dict: A dict of arrays of shape (batch_size, flat_dim). """ idx = np.random.randint(self._trans...
Sample a batch of transitions from the buffer. Args: batch_size (int): Number of transitions to sample. Returns: dict: A dict of arrays of shape (batch_size, flat_dim).
sample_transitions
python
rlworkgroup/garage
src/garage/replay_buffer/path_buffer.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/path_buffer.py
MIT
def sample_timesteps(self, batch_size): """Sample a batch of timesteps from the buffer. Args: batch_size (int): Number of timesteps to sample. Returns: TimeStepBatch: The batch of timesteps. """ samples = self.sample_transitions(batch_size) step...
Sample a batch of timesteps from the buffer. Args: batch_size (int): Number of timesteps to sample. Returns: TimeStepBatch: The batch of timesteps.
sample_timesteps
python
rlworkgroup/garage
src/garage/replay_buffer/path_buffer.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/path_buffer.py
MIT
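The `sample_transitions` record above draws uniform random rows from the buffer. The key detail is using one shared index set across all keyed arrays so sampled fields stay aligned; a sketch under that assumption:

```python
import numpy as np

def sample_transitions(buffer, n_stored, batch_size, rng):
    """Uniformly sample batch_size stored rows from every array in the
    buffer dict, reusing one index set so rows stay aligned across keys."""
    idx = rng.integers(n_stored, size=batch_size)
    return {key: arr[idx] for key, arr in buffer.items()}

rng = np.random.default_rng(0)
buf = {'observation': np.arange(20).reshape(10, 2),   # row i is [2i, 2i+1]
       'reward': np.arange(10).reshape(10, 1)}        # row i is [i]
batch = sample_transitions(buf, n_stored=10, batch_size=4, rng=rng)
```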
def _next_path_segments(self, n_indices): """Compute where the next path should be stored. Args: n_indices (int): Path length. Returns: tuple: Lists of indices where path should be stored. Raises: ValueError: If path length is greater than the size ...
Compute where the next path should be stored. Args: n_indices (int): Path length. Returns: tuple: Lists of indices where path should be stored. Raises: ValueError: If path length is greater than the size of buffer.
_next_path_segments
python
rlworkgroup/garage
src/garage/replay_buffer/path_buffer.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/path_buffer.py
MIT
def _get_or_allocate_key(self, key, array): """Get or allocate key in the buffer. Args: key (str): Key in buffer. array (numpy.ndarray): Array corresponding to key. Returns: numpy.ndarray: A NumPy array corresponding to key in the buffer. """ ...
Get or allocate key in the buffer. Args: key (str): Key in buffer. array (numpy.ndarray): Array corresponding to key. Returns: numpy.ndarray: A NumPy array corresponding to key in the buffer.
_get_or_allocate_key
python
rlworkgroup/garage
src/garage/replay_buffer/path_buffer.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/path_buffer.py
MIT
def _get_path_length(path): """Get path length. Args: path (dict): Path. Returns: length: Path length. Raises: ValueError: If path is empty or has inconsistent lengths. """ length_key = None length = None for key, va...
Get path length. Args: path (dict): Path. Returns: length: Path length. Raises: ValueError: If path is empty or has inconsistent lengths.
_get_path_length
python
rlworkgroup/garage
src/garage/replay_buffer/path_buffer.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/path_buffer.py
MIT
def _segments_overlap(seg_a, seg_b): """Compute if two segments overlap. Args: seg_a (range): List of indices of the first segment. seg_b (range): List of indices of the second segment. Returns: bool: True iff the input ranges overlap at at least one index. ...
Compute if two segments overlap. Args: seg_a (range): List of indices of the first segment. seg_b (range): List of indices of the second segment. Returns: bool: True iff the input ranges overlap at at least one index.
_segments_overlap
python
rlworkgroup/garage
src/garage/replay_buffer/path_buffer.py
https://github.com/rlworkgroup/garage/blob/master/src/garage/replay_buffer/path_buffer.py
MIT
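The `_segments_overlap` record above checks whether two index ranges of the ring buffer collide. The standard half-open interval test captures the stated contract, sketched here (this is one correct formulation, not necessarily how garage computes it internally):

```python
def segments_overlap(seg_a, seg_b):
    """Two non-empty half-open ranges overlap iff each starts
    before the other one ends."""
    if not seg_a or not seg_b:
        return False  # an empty segment overlaps nothing
    return seg_a.start < seg_b.stop and seg_b.start < seg_a.stop

hit = segments_overlap(range(0, 5), range(3, 8))    # share indices 3, 4
miss = segments_overlap(range(0, 5), range(5, 8))   # adjacent, no overlap
```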