Dataset columns (each record below lists these eight fields, one per line, in this order):

code: string, lengths 66 to 870k
docstring: string, lengths 19 to 26.7k
func_name: string, lengths 1 to 138
language: string, 1 distinct value
repo: string, lengths 7 to 68
path: string, lengths 5 to 324
url: string, lengths 46 to 389
license: string, 7 distinct values
def extras(cfg: DictConfig) -> None: """Applies optional utilities before the task is started. Utilities: - Ignoring python warnings - Setting tags from command line - Rich config printing :param cfg: A DictConfig object containing the config tree. """ # return if no `extra...
Applies optional utilities before the task is started. Utilities: - Ignoring python warnings - Setting tags from command line - Rich config printing :param cfg: A DictConfig object containing the config tree.
extras
python
abus-aikorea/voice-pro
third_party/Matcha-TTS/matcha/utils/utils.py
https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/utils/utils.py
MIT
def task_wrapper(task_func: Callable) -> Callable: """Optional decorator that controls the failure behavior when executing the task function. This wrapper can be used to: - make sure loggers are closed even if the task function raises an exception (prevents multirun failure) - save the exceptio...
Optional decorator that controls the failure behavior when executing the task function. This wrapper can be used to: - make sure loggers are closed even if the task function raises an exception (prevents multirun failure) - save the exception to a `.log` file - mark the run as failed with a...
task_wrapper
python
abus-aikorea/voice-pro
third_party/Matcha-TTS/matcha/utils/utils.py
https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/utils/utils.py
MIT
def get_metric_value(metric_dict: Dict[str, Any], metric_name: str) -> float: """Safely retrieves value of the metric logged in LightningModule. :param metric_dict: A dict containing metric values. :param metric_name: The name of the metric to retrieve. :return: The value of the metric. """ if ...
Safely retrieves value of the metric logged in LightningModule. :param metric_dict: A dict containing metric values. :param metric_name: The name of the metric to retrieve. :return: The value of the metric.
get_metric_value
python
abus-aikorea/voice-pro
third_party/Matcha-TTS/matcha/utils/utils.py
https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/utils/utils.py
MIT
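The get_metric_value record above is truncated at its first branch. A minimal sketch of the behavior the docstring describes, assuming the function skips retrieval when no name is given and raises when the name is missing (the exact exception type and any tensor-to-float conversion in the real Matcha-TTS utility are assumptions here):

```python
from typing import Any, Dict

def get_metric_value(metric_dict: Dict[str, Any], metric_name: str) -> float:
    """Safely retrieve a logged metric value; sketch of the documented behavior."""
    if not metric_name:
        # No metric requested; nothing to retrieve (assumed behavior).
        return None
    if metric_name not in metric_dict:
        raise KeyError(f"Metric '{metric_name}' not found in metric dict.")
    # Logged values may be tensors in practice; floats are assumed here.
    return float(metric_dict[metric_name])
```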
def get_user_data_dir(appname="matcha_tts"): """ Args: appname (str): Name of application Returns: Path: path to user data directory """ MATCHA_HOME = os.environ.get("MATCHA_HOME") if MATCHA_HOME is not None: ans = Path(MATCHA_HOME).expanduser().resolve(strict=False) ...
Args: appname (str): Name of application Returns: Path: path to user data directory
get_user_data_dir
python
abus-aikorea/voice-pro
third_party/Matcha-TTS/matcha/utils/utils.py
https://github.com/abus-aikorea/voice-pro/blob/master/third_party/Matcha-TTS/matcha/utils/utils.py
MIT
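The get_user_data_dir snippet shows only the MATCHA_HOME override branch. A sketch of the full resolution logic under common conventions; the non-override branches (APPDATA, Application Support, XDG_DATA_HOME) are assumptions, not the verbatim Matcha-TTS code:

```python
import os
import sys
from pathlib import Path

def get_user_data_dir(appname: str = "matcha_tts") -> Path:
    """Resolve a per-user data directory, honoring a MATCHA_HOME override."""
    matcha_home = os.environ.get("MATCHA_HOME")
    if matcha_home is not None:
        # Override branch, as in the visible snippet.
        ans = Path(matcha_home).expanduser().resolve(strict=False)
    elif sys.platform == "win32":
        ans = Path(os.environ.get("APPDATA", str(Path.home()))) / appname
    elif sys.platform == "darwin":
        ans = Path.home() / "Library" / "Application Support" / appname
    else:
        base = os.environ.get("XDG_DATA_HOME", str(Path.home() / ".local" / "share"))
        ans = Path(base) / appname
    ans.mkdir(parents=True, exist_ok=True)
    return ans
```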
def get_args(): """ Description: Parses arguments at command line. Parameters: None Return: args - the arguments parsed """ parser = argparse.ArgumentParser() parser.add_argument('--mode', dest='mode', type=str, default='train') # can be 'train' or 'test' parser.add_argument('--actor_...
Description: Parses arguments at command line. Parameters: None Return: args - the arguments parsed
get_args
python
ericyangyu/PPO-for-Beginners
arguments.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/arguments.py
MIT
def _log_summary(ep_len, ep_ret, ep_num): """ Print to stdout what we've logged so far in the most recent episode. Parameters: ep_len - the length of the most recent episode ep_ret - the return of the most recent episode ep_num - the number of the most recent episode Return: None """ # Round decimal places for more aesthetic logging messages ep_len = str(round(ep_len, 2)) ep_ret = str(round(ep_ret, 2)) # Print logging st...
Print to stdout what we've logged so far in the most recent episode. Parameters: ep_len - the length of the most recent episode ep_ret - the return of the most recent episode ep_num - the number of the most recent episode Return: None
_log_summary
python
ericyangyu/PPO-for-Beginners
eval_policy.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/eval_policy.py
MIT
def rollout(policy, env, render): """ Returns a generator to roll out each episode given a trained policy and environment to test on. Parameters: policy - The trained policy to test env - The environment to evaluate the policy on render - Specifies whether to render or not Return: A generator ...
Returns a generator to roll out each episode given a trained policy and environment to test on. Parameters: policy - The trained policy to test env - The environment to evaluate the policy on render - Specifies whether to render or not Return: A generator object rollout, or iterable, which wil...
rollout
python
ericyangyu/PPO-for-Beginners
eval_policy.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/eval_policy.py
MIT
def eval_policy(policy, env, render=False): """ The main function to evaluate our policy with. It will iterate a generator object "rollout", which will simulate each episode and return the most recent episode's length and return. We can then log it right after. And yes, eval_policy will run forever until you k...
The main function to evaluate our policy with. It will iterate a generator object "rollout", which will simulate each episode and return the most recent episode's length and return. We can then log it right after. And yes, eval_policy will run forever until you kill the process. Parameters: policy - The...
eval_policy
python
ericyangyu/PPO-for-Beginners
eval_policy.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/eval_policy.py
MIT
def train(env, hyperparameters, actor_model, critic_model): """ Trains the model. Parameters: env - the environment to train on hyperparameters - a dict of hyperparameters to use, defined in main actor_model - the actor model to load in if we want to continue training critic_model - the critic model t...
Trains the model. Parameters: env - the environment to train on hyperparameters - a dict of hyperparameters to use, defined in main actor_model - the actor model to load in if we want to continue training critic_model - the critic model to load in if we want to continue training Return: None
train
python
ericyangyu/PPO-for-Beginners
main.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/main.py
MIT
def test(env, actor_model): """ Tests the model. Parameters: env - the environment to test the policy on actor_model - the actor model to load in Return: None """ print(f"Testing {actor_model}", flush=True) # If the actor model is not specified, then exit if actor_model == '': print(f"Didn't sp...
Tests the model. Parameters: env - the environment to test the policy on actor_model - the actor model to load in Return: None
test
python
ericyangyu/PPO-for-Beginners
main.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/main.py
MIT
def main(args): """ The main function to run. Parameters: args - the arguments parsed from command line Return: None """ # NOTE: Here's where you can set hyperparameters for PPO. I don't include them as part of # ArgumentParser because it's too annoying to type them every time at command line. Instead...
The main function to run. Parameters: args - the arguments parsed from command line Return: None
main
python
ericyangyu/PPO-for-Beginners
main.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/main.py
MIT
def __init__(self, in_dim, out_dim): """ Initialize the network and set up the layers. Parameters: in_dim - input dimensions as an int out_dim - output dimensions as an int Return: None """ super(FeedForwardNN, self).__init__() self.layer1 = nn.Linear(in_dim, 64) self.layer2 = nn.Linea...
Initialize the network and set up the layers. Parameters: in_dim - input dimensions as an int out_dim - output dimensions as an int Return: None
__init__
python
ericyangyu/PPO-for-Beginners
network.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/network.py
MIT
def forward(self, obs): """ Runs a forward pass on the neural network. Parameters: obs - observation to pass as input Return: output - the output of our forward pass """ # Convert observation to tensor if it's a numpy array if isinstance(obs, np.ndarray): obs = torch.tensor(obs, dtype=torc...
Runs a forward pass on the neural network. Parameters: obs - observation to pass as input Return: output - the output of our forward pass
forward
python
ericyangyu/PPO-for-Beginners
network.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/network.py
MIT
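The network records describe a small MLP (the original torch code stacks nn.Linear layers with a 64-unit hidden size). A dependency-free numpy sketch of the same forward pass, assuming the obs → 64 → 64 → out shape with ReLU activations; weight initialization here is illustrative, not the torch default:

```python
import numpy as np

class FeedForwardNN:
    """Numpy sketch of the obs -> 64 -> 64 -> out MLP described by the rows above."""

    def __init__(self, in_dim: int, out_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Three affine layers mirroring nn.Linear(in_dim, 64), (64, 64), (64, out_dim).
        self.params = [
            (rng.standard_normal((in_dim, 64)) * 0.1, np.zeros(64)),
            (rng.standard_normal((64, 64)) * 0.1, np.zeros(64)),
            (rng.standard_normal((64, out_dim)) * 0.1, np.zeros(out_dim)),
        ]

    def forward(self, obs: np.ndarray) -> np.ndarray:
        a1 = np.maximum(obs @ self.params[0][0] + self.params[0][1], 0.0)  # ReLU
        a2 = np.maximum(a1 @ self.params[1][0] + self.params[1][1], 0.0)   # ReLU
        return a2 @ self.params[2][0] + self.params[2][1]                  # linear output
```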
def __init__(self, policy_class, env, **hyperparameters): """ Initializes the PPO model, including hyperparameters. Parameters: policy_class - the policy class to use for our actor/critic networks. env - the environment to train on. hyperparameters - all extra arguments passed into PPO that should ...
Initializes the PPO model, including hyperparameters. Parameters: policy_class - the policy class to use for our actor/critic networks. env - the environment to train on. hyperparameters - all extra arguments passed into PPO that should be hyperparameters. Returns: None
__init__
python
ericyangyu/PPO-for-Beginners
ppo.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/ppo.py
MIT
def learn(self, total_timesteps): """ Train the actor and critic networks. Here is where the main PPO algorithm resides. Parameters: total_timesteps - the total number of timesteps to train for Return: None """ print(f"Learning... Running {self.max_timesteps_per_episode} timesteps per episode, ...
Train the actor and critic networks. Here is where the main PPO algorithm resides. Parameters: total_timesteps - the total number of timesteps to train for Return: None
learn
python
ericyangyu/PPO-for-Beginners
ppo.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/ppo.py
MIT
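The learn record holds PPO's core update, truncated here. The heart of that update is the clipped surrogate objective; a standalone numpy sketch of just that loss (eps=0.2 is an assumed clip range):

```python
import numpy as np

def ppo_clip_loss(log_probs_new, log_probs_old, advantages, eps=0.2):
    """PPO-Clip surrogate: -mean(min(r*A, clip(r, 1-eps, 1+eps)*A)), r = pi_new/pi_old."""
    ratios = np.exp(np.asarray(log_probs_new) - np.asarray(log_probs_old))
    advantages = np.asarray(advantages)
    unclipped = ratios * advantages
    clipped = np.clip(ratios, 1.0 - eps, 1.0 + eps) * advantages
    # Negated because the optimizer minimizes; the surrogate itself is maximized.
    return -np.mean(np.minimum(unclipped, clipped))
```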
def rollout(self): """ Too many transformers references, I'm sorry. This is where we collect the batch of data from simulation. Since this is an on-policy algorithm, we'll need to collect a fresh batch of data each time we iterate the actor/critic networks. Parameters: None Return: batch_obs ...
Too many transformers references, I'm sorry. This is where we collect the batch of data from simulation. Since this is an on-policy algorithm, we'll need to collect a fresh batch of data each time we iterate the actor/critic networks. Parameters: None Return: batch_obs - the observations colle...
rollout
python
ericyangyu/PPO-for-Beginners
ppo.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/ppo.py
MIT
def compute_rtgs(self, batch_rews): """ Compute the Reward-To-Go of each timestep in a batch given the rewards. Parameters: batch_rews - the rewards in a batch, Shape: (number of episodes, number of timesteps per episode) Return: batch_rtgs - the rewards to go, Shape: (number of timesteps in batch)...
Compute the Reward-To-Go of each timestep in a batch given the rewards. Parameters: batch_rews - the rewards in a batch, Shape: (number of episodes, number of timesteps per episode) Return: batch_rtgs - the rewards to go, Shape: (number of timesteps in batch)
compute_rtgs
python
ericyangyu/PPO-for-Beginners
ppo.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/ppo.py
MIT
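The compute_rtgs record describes a discounted reward-to-go over a batch of episode rewards; the snippet is truncated, so here is a minimal standalone sketch of that computation. The discount gamma=0.95 is assumed, and the real function returns a torch tensor rather than a numpy array:

```python
import numpy as np

def compute_rtgs(batch_rews, gamma=0.95):
    """Reward-to-go per timestep: walk each episode backwards, discounting as we go."""
    batch_rtgs = []
    for ep_rews in reversed(batch_rews):        # episodes in reverse...
        discounted = 0.0
        for rew in reversed(ep_rews):           # ...and timesteps in reverse
            discounted = rew + gamma * discounted
            batch_rtgs.insert(0, discounted)    # prepend to restore original order
    # Shape: (number of timesteps in batch,)
    return np.array(batch_rtgs, dtype=np.float32)
```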
def get_action(self, obs): """ Queries an action from the actor network, should be called from rollout. Parameters: obs - the observation at the current timestep Return: action - the action to take, as a numpy array log_prob - the log probability of the selected action in the distribution """...
Queries an action from the actor network, should be called from rollout. Parameters: obs - the observation at the current timestep Return: action - the action to take, as a numpy array log_prob - the log probability of the selected action in the distribution
get_action
python
ericyangyu/PPO-for-Beginners
ppo.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/ppo.py
MIT
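The get_action record samples from a multivariate normal centered on the actor's output with a fixed covariance. A numpy sketch of that sampling plus the log-probability it returns, assuming a diagonal covariance with fill value 0.5 (the torch original uses MultivariateNormal):

```python
import numpy as np

def gaussian_log_prob(action, mean, cov_diag=0.5):
    """Log density of a diagonal Gaussian N(mean, cov_diag*I), summed over dimensions."""
    action = np.asarray(action, dtype=float)
    mean = np.asarray(mean, dtype=float)
    k = mean.size
    return -0.5 * (np.sum((action - mean) ** 2) / cov_diag
                   + k * np.log(2.0 * np.pi * cov_diag))

def get_action(mean, cov_diag=0.5, rng=None):
    """Sample an action from N(mean, cov_diag*I) and return it with its log prob."""
    rng = rng or np.random.default_rng()
    mean = np.asarray(mean, dtype=float)
    action = rng.normal(mean, np.sqrt(cov_diag))
    return action, gaussian_log_prob(action, mean, cov_diag)
```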
def evaluate(self, batch_obs, batch_acts): """ Estimate the values of each observation, and the log probs of each action in the most recent batch with the most recent iteration of the actor network. Should be called from learn. Parameters: batch_obs - the observations from the most recently collected...
Estimate the values of each observation, and the log probs of each action in the most recent batch with the most recent iteration of the actor network. Should be called from learn. Parameters: batch_obs - the observations from the most recently collected batch as a tensor. Shape: (number of tim...
evaluate
python
ericyangyu/PPO-for-Beginners
ppo.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/ppo.py
MIT
def _init_hyperparameters(self, hyperparameters): """ Initialize default and custom values for hyperparameters Parameters: hyperparameters - the extra arguments included when creating the PPO model, should only include hyperparameters defined below with custom values. Return: None """ ...
Initialize default and custom values for hyperparameters Parameters: hyperparameters - the extra arguments included when creating the PPO model, should only include hyperparameters defined below with custom values. Return: None
_init_hyperparameters
python
ericyangyu/PPO-for-Beginners
ppo.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/ppo.py
MIT
def _log_summary(self): """ Print to stdout what we've logged so far in the most recent batch. Parameters: None Return: None """ # Calculate logging values. I use a few python shortcuts to calculate each value # without explaining since it's not too important to PPO; feel free to look it over...
Print to stdout what we've logged so far in the most recent batch. Parameters: None Return: None
_log_summary
python
ericyangyu/PPO-for-Beginners
ppo.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/ppo.py
MIT
def get_file_locations(): """ Gets the absolute paths of each data file to graph. Parameters: None Return: paths - a dict with the following structure: { env: { seeds: absolute_seeds_file_path stable_baseli...
Gets the absolute paths of each data file to graph. Parameters: None Return: paths - a dict with the following structure: { env: { seeds: absolute_seeds_file_path stable_baselines: [absolute_file_paths_to_data...
get_file_locations
python
ericyangyu/PPO-for-Beginners
graph_code/make_graph.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/graph_code/make_graph.py
MIT
def extract_ppo_for_beginners_data(env, filename): """ Extract the total timesteps and average episodic return from the logging data specified to PPO for Beginners. Parameters: env - The environment we're currently graphing. filename - The file containing data. Shoul...
Extract the total timesteps and average episodic return from the logging data specified to PPO for Beginners. Parameters: env - The environment we're currently graphing. filename - The file containing data. Should be "seed_xxx.txt" such that the ...
extract_ppo_for_beginners_data
python
ericyangyu/PPO-for-Beginners
graph_code/make_graph.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/graph_code/make_graph.py
MIT
def extract_stable_baselines_data(env, filename): """ Extract the total timesteps and average episodic return from the logging data specified to Stable Baselines PPO2. Parameters: env - The environment we're currently graphing. filename - The file containing data. Sh...
Extract the total timesteps and average episodic return from the logging data specified to Stable Baselines PPO2. Parameters: env - The environment we're currently graphing. filename - The file containing data. Should be "seed_xxx.txt" such that the ...
extract_stable_baselines_data
python
ericyangyu/PPO-for-Beginners
graph_code/make_graph.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/graph_code/make_graph.py
MIT
def calculate_lower_bounds(x_s, y_s): """ Calculate lower bounds of total timesteps and average episodic return per iteration. Parameters: x_s - A list of lists of total timesteps so far per seed. y_s - A list of lists of average episodic return per seed. Re...
Calculate lower bounds of total timesteps and average episodic return per iteration. Parameters: x_s - A list of lists of total timesteps so far per seed. y_s - A list of lists of average episodic return per seed. Return: Lower bounds of both x_s a...
calculate_lower_bounds
python
ericyangyu/PPO-for-Beginners
graph_code/make_graph.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/graph_code/make_graph.py
MIT
def calculate_upper_bounds(x_s, y_s): """ Calculate upper bounds of total timesteps and average episodic return per iteration. Parameters: x_s - A list of lists of total timesteps so far per seed. y_s - A list of lists of average episodic return per seed. Re...
Calculate upper bounds of total timesteps and average episodic return per iteration. Parameters: x_s - A list of lists of total timesteps so far per seed. y_s - A list of lists of average episodic return per seed. Return: Upper bounds of both x_s a...
calculate_upper_bounds
python
ericyangyu/PPO-for-Beginners
graph_code/make_graph.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/graph_code/make_graph.py
MIT
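The two bound records above feed the shaded region of the learning-curve plot. Assuming the bounds are simple per-iteration minima and maxima across seeds (the snippets cut off before the computation), a sketch of both helpers:

```python
import numpy as np

def calculate_lower_bounds(x_s, y_s):
    """Elementwise minimum across seeds: the lower envelope per iteration (assumed)."""
    return (np.min(np.asarray(x_s, dtype=float), axis=0).tolist(),
            np.min(np.asarray(y_s, dtype=float), axis=0).tolist())

def calculate_upper_bounds(x_s, y_s):
    """Elementwise maximum across seeds: the matching upper envelope (assumed)."""
    return (np.max(np.asarray(x_s, dtype=float), axis=0).tolist(),
            np.max(np.asarray(y_s, dtype=float), axis=0).tolist())
```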
def calculate_means(x_s, y_s): """ Calculate mean of each total timestep and average episodic return over all trials at each iteration. Parameters: x_s - A list of lists of total timesteps so far per seed. y_s - A list of lists of average episodic return per seed ...
Calculate mean of each total timestep and average episodic return over all trials at each iteration. Parameters: x_s - A list of lists of total timesteps so far per seed. y_s - A list of lists of average episodic return per seed Return: Means of x_...
calculate_means
python
ericyangyu/PPO-for-Beginners
graph_code/make_graph.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/graph_code/make_graph.py
MIT
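calculate_means averages the per-seed curves at each iteration. A sketch with numpy, assuming the trials were already clipped to equal length (as clip_data below provides):

```python
import numpy as np

def calculate_means(x_s, y_s):
    """Mean total timesteps and mean episodic return across seeds at each iteration."""
    x_mean = np.mean(np.asarray(x_s, dtype=float), axis=0)
    y_mean = np.mean(np.asarray(y_s, dtype=float), axis=0)
    return x_mean.tolist(), y_mean.tolist()
```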
def clip_data(x_s, y_s): """ In the case that there are different numbers of iterations across learning trials, clip all trials to the length of the shortest trial. Parameters: x_s - A list of lists of total timesteps so far per seed. y_s - A list of lists of ...
In the case that there are different numbers of iterations across learning trials, clip all trials to the length of the shortest trial. Parameters: x_s - A list of lists of total timesteps so far per seed. y_s - A list of lists of average episodic return per seed...
clip_data
python
ericyangyu/PPO-for-Beginners
graph_code/make_graph.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/graph_code/make_graph.py
MIT
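clip_data truncates every trial to the shortest trial's length so the curves can be averaged elementwise. A minimal sketch of that behavior:

```python
def clip_data(x_s, y_s):
    """Clip all trials to the length of the shortest trial."""
    shortest = min(len(xs) for xs in x_s)
    x_s = [xs[:shortest] for xs in x_s]
    y_s = [ys[:shortest] for ys in y_s]
    return x_s, y_s
```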
def extract_data(paths): """ Extracts data from all the files, and returns a generator object extract_data to iterably return data for each environment. Number of iterations should equal number of environments in graph_data. Parameters: paths - Contains the paths to eac...
Extracts data from all the files, and returns a generator object extract_data to iterably return data for each environment. Number of iterations should equal number of environments in graph_data. Parameters: paths - Contains the paths to each data file. Check function desc...
extract_data
python
ericyangyu/PPO-for-Beginners
graph_code/make_graph.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/graph_code/make_graph.py
MIT
def graph_data(paths): """ Graphs the data with matplotlib. Will display on screen for user to screenshot. Parameters: paths - Contains the paths to each data file. Check function description of get_file_locations() to see how paths is structured. Return: ...
Graphs the data with matplotlib. Will display on screen for user to screenshot. Parameters: paths - Contains the paths to each data file. Check function description of get_file_locations() to see how paths is structured. Return: None
graph_data
python
ericyangyu/PPO-for-Beginners
graph_code/make_graph.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/graph_code/make_graph.py
MIT
def main(): """ Main function to get file locations and graph the data. Parameters: None Return: None """ # Extract absolute file paths paths = get_file_locations() # Graph the data from the file paths extracted graph_data(paths)
Main function to get file locations and graph the data. Parameters: None Return: None
main
python
ericyangyu/PPO-for-Beginners
graph_code/make_graph.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/graph_code/make_graph.py
MIT
def train_stable_baselines(args): """ Trains with PPO2 on specified environment. Parameters: args - the arguments defined in main. Return: None """ # Import stable baselines from stable_baselines import PPO2 from stable_baselines.common.callbacks import CheckpointCallback from stable_baselines.commo...
Trains with PPO2 on specified environment. Parameters: args - the arguments defined in main. Return: None
train_stable_baselines
python
ericyangyu/PPO-for-Beginners
graph_code/run.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/graph_code/run.py
MIT
def train_ppo_for_beginners(args): """ Trains with PPO for Beginners on specified environment. Parameters: args - the arguments defined in main. Return: None """ # Import ppo for beginners from ppo_for_beginners.ppo import PPO from ppo_for_beginners.network import FeedForwardNN # Store hyperparamet...
Trains with PPO for Beginners on specified environment. Parameters: args - the arguments defined in main. Return: None
train_ppo_for_beginners
python
ericyangyu/PPO-for-Beginners
graph_code/run.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/graph_code/run.py
MIT
def main(args): """ An intermediate function that will call either PPO2 learn or PPO for Beginners learn. Parameters: args - the arguments defined below Return: None """ if args.code == 'stable_baselines_ppo2': train_stable_baselines(args) elif args.code == 'ppo_for_beginners': train_ppo_for_begin...
An intermediate function that will call either PPO2 learn or PPO for Beginners learn. Parameters: args - the arguments defined below Return: None
main
python
ericyangyu/PPO-for-Beginners
graph_code/run.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/graph_code/run.py
MIT
def get_args(): """ Description: Parses arguments at command line. Parameters: None Return: args - the arguments parsed """ parser = argparse.ArgumentParser() parser.add_argument('--mode', dest='mode', type=str, default='train') # can be 'train' or 'test' parser.add_argument('--actor_...
Description: Parses arguments at command line. Parameters: None Return: args - the arguments parsed
get_args
python
ericyangyu/PPO-for-Beginners
graph_code/ppo_for_beginners/arguments.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/graph_code/ppo_for_beginners/arguments.py
MIT
def __init__(self, in_dim, out_dim): """ Initialize the network and set up the layers. Parameters: in_dim - input dimensions as an int out_dim - output dimensions as an int Return: None """ super(FeedForwardNN, self).__init__() self.layer1 = nn.Linear(in_dim, 64) self.layer2 = nn.Linea...
Initialize the network and set up the layers. Parameters: in_dim - input dimensions as an int out_dim - output dimensions as an int Return: None
__init__
python
ericyangyu/PPO-for-Beginners
graph_code/ppo_for_beginners/network.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/graph_code/ppo_for_beginners/network.py
MIT
def forward(self, obs): """ Runs a forward pass on the neural network. Parameters: obs - observation to pass as input Return: output - the output of our forward pass """ # Convert observation to tensor if it's a numpy array if isinstance(obs, np.ndarray): obs = torch.tensor(obs, dtype=torc...
Runs a forward pass on the neural network. Parameters: obs - observation to pass as input Return: output - the output of our forward pass
forward
python
ericyangyu/PPO-for-Beginners
graph_code/ppo_for_beginners/network.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/graph_code/ppo_for_beginners/network.py
MIT
def __init__(self, policy_class, env, **hyperparameters): """ Initializes the PPO model, including hyperparameters. Parameters: policy_class - the policy class to use for our actor/critic networks. env - the environment to train on. hyperparameters - all extra arguments passed into PPO that should ...
Initializes the PPO model, including hyperparameters. Parameters: policy_class - the policy class to use for our actor/critic networks. env - the environment to train on. hyperparameters - all extra arguments passed into PPO that should be hyperparameters. Returns: None
__init__
python
ericyangyu/PPO-for-Beginners
graph_code/ppo_for_beginners/ppo.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/graph_code/ppo_for_beginners/ppo.py
MIT
def learn(self, total_timesteps): """ Train the actor and critic networks. Here is where the main PPO algorithm resides. Parameters: total_timesteps - the total number of timesteps to train for Return: None """ print(f"Learning... Running {self.max_timesteps_per_episode} timesteps per episode, ...
Train the actor and critic networks. Here is where the main PPO algorithm resides. Parameters: total_timesteps - the total number of timesteps to train for Return: None
learn
python
ericyangyu/PPO-for-Beginners
graph_code/ppo_for_beginners/ppo.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/graph_code/ppo_for_beginners/ppo.py
MIT
def rollout(self): """ Too many transformers references, I'm sorry. This is where we collect the batch of data from simulation. Since this is an on-policy algorithm, we'll need to collect a fresh batch of data each time we iterate the actor/critic networks. Parameters: None Return: batch_obs ...
Too many transformers references, I'm sorry. This is where we collect the batch of data from simulation. Since this is an on-policy algorithm, we'll need to collect a fresh batch of data each time we iterate the actor/critic networks. Parameters: None Return: batch_obs - the observations colle...
rollout
python
ericyangyu/PPO-for-Beginners
graph_code/ppo_for_beginners/ppo.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/graph_code/ppo_for_beginners/ppo.py
MIT
def compute_rtgs(self, batch_rews): """ Compute the Reward-To-Go of each timestep in a batch given the rewards. Parameters: batch_rews - the rewards in a batch, Shape: (number of episodes, number of timesteps per episode) Return: batch_rtgs - the rewards to go, Shape: (number of timesteps in batch)...
Compute the Reward-To-Go of each timestep in a batch given the rewards. Parameters: batch_rews - the rewards in a batch, Shape: (number of episodes, number of timesteps per episode) Return: batch_rtgs - the rewards to go, Shape: (number of timesteps in batch)
compute_rtgs
python
ericyangyu/PPO-for-Beginners
graph_code/ppo_for_beginners/ppo.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/graph_code/ppo_for_beginners/ppo.py
MIT
def get_action(self, obs): """ Queries an action from the actor network, should be called from rollout. Parameters: obs - the observation at the current timestep Return: action - the action to take, as a numpy array log_prob - the log probability of the selected action in the distribution """...
Queries an action from the actor network, should be called from rollout. Parameters: obs - the observation at the current timestep Return: action - the action to take, as a numpy array log_prob - the log probability of the selected action in the distribution
get_action
python
ericyangyu/PPO-for-Beginners
graph_code/ppo_for_beginners/ppo.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/graph_code/ppo_for_beginners/ppo.py
MIT
def evaluate(self, batch_obs, batch_acts, batch_rtgs): """ Estimate the values of each observation, and the log probs of each action in the most recent batch with the most recent iteration of the actor network. Should be called from learn. Parameters: batch_obs - the observations from the most recent...
Estimate the values of each observation, and the log probs of each action in the most recent batch with the most recent iteration of the actor network. Should be called from learn. Parameters: batch_obs - the observations from the most recently collected batch as a tensor. Shape: (number of tim...
evaluate
python
ericyangyu/PPO-for-Beginners
graph_code/ppo_for_beginners/ppo.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/graph_code/ppo_for_beginners/ppo.py
MIT
def _init_hyperparameters(self, hyperparameters): """ Initialize default and custom values for hyperparameters Parameters: hyperparameters - the extra arguments included when creating the PPO model, should only include hyperparameters defined below with custom values. Return: None """ ...
Initialize default and custom values for hyperparameters Parameters: hyperparameters - the extra arguments included when creating the PPO model, should only include hyperparameters defined below with custom values. Return: None
_init_hyperparameters
python
ericyangyu/PPO-for-Beginners
graph_code/ppo_for_beginners/ppo.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/graph_code/ppo_for_beginners/ppo.py
MIT
def _log_summary(self): """ Print to stdout what we've logged so far in the most recent batch. Parameters: None Return: None """ # Calculate logging values. I use a few python shortcuts to calculate each value # without explaining since it's not too important to PPO; feel free to look it over...
Print to stdout what we've logged so far in the most recent batch. Parameters: None Return: None
_log_summary
python
ericyangyu/PPO-for-Beginners
graph_code/ppo_for_beginners/ppo.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/graph_code/ppo_for_beginners/ppo.py
MIT
def get_file_locations(): """ Gets the absolute paths of each data file to graph. Parameters: None Return: paths - a dict with the following structure: { env: { seeds: absolute_seeds_file_path stable_baseli...
Gets the absolute paths of each data file to graph. Parameters: None Return: paths - a dict with the following structure: { env: { seeds: absolute_seeds_file_path stable_baselines: [absolute_file_paths_to_data...
get_file_locations
python
ericyangyu/PPO-for-Beginners
part4/make_graph.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/make_graph.py
MIT
def extract_ppo_for_beginners_data(env, filename): """ Extract the total timesteps and average episodic return from the logging data specified to PPO for Beginners. Parameters: env - The environment we're currently graphing. filename - The file containing data. Shoul...
Extract the total timesteps and average episodic return from the logging data specified to PPO for Beginners. Parameters: env - The environment we're currently graphing. filename - The file containing data. Should be "seed_xxx.txt" such that the ...
extract_ppo_for_beginners_data
python
ericyangyu/PPO-for-Beginners
part4/make_graph.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/make_graph.py
MIT
def extract_stable_baselines_data(env, filename): """ Extract the total timesteps and average episodic return from the logging data specified to Stable Baselines PPO2. Parameters: env - The environment we're currently graphing. filename - The file containing data. Sh...
Extract the total timesteps and average episodic return from the logging data specified to Stable Baselines PPO2. Parameters: env - The environment we're currently graphing. filename - The file containing data. Should be "seed_xxx.txt" such that the ...
extract_stable_baselines_data
python
ericyangyu/PPO-for-Beginners
part4/make_graph.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/make_graph.py
MIT
def calculate_lower_bounds(x_s, y_s): """ Calculate lower bounds of total timesteps and average episodic return per iteration. Parameters: x_s - A list of lists of total timesteps so far per seed. y_s - A list of lists of average episodic return per seed. Re...
Calculate lower bounds of total timesteps and average episodic return per iteration. Parameters: x_s - A list of lists of total timesteps so far per seed. y_s - A list of lists of average episodic return per seed. Return: Lower bounds of both x_s a...
calculate_lower_bounds
python
ericyangyu/PPO-for-Beginners
part4/make_graph.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/make_graph.py
MIT
def calculate_upper_bounds(x_s, y_s): """ Calculate upper bounds of total timesteps and average episodic return per iteration. Parameters: x_s - A list of lists of total timesteps so far per seed. y_s - A list of lists of average episodic return per seed. Re...
Calculate upper bounds of total timesteps and average episodic return per iteration. Parameters: x_s - A list of lists of total timesteps so far per seed. y_s - A list of lists of average episodic return per seed. Return: Upper bounds of both x_s a...
calculate_upper_bounds
python
ericyangyu/PPO-for-Beginners
part4/make_graph.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/make_graph.py
MIT
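The calculate_lower_bounds and calculate_upper_bounds records above describe per-iteration envelopes across seeds, but their bodies are truncated. A minimal sketch of the lower-bound case (the body is an assumption from the docstring, not the repo's actual implementation; the upper bound is symmetric with `max`):

```python
def calculate_lower_bounds(x_s, y_s):
    # Per-iteration minimum across seeds: zip(*x_s) groups the i-th
    # iteration of every seed together into one column.
    x_low = [min(col) for col in zip(*x_s)]
    y_low = [min(col) for col in zip(*y_s)]
    return x_low, y_low

x_low, y_low = calculate_lower_bounds([[1, 5], [2, 3]], [[10, 2], [4, 6]])
```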
def calculate_means(x_s, y_s): """ Calculate mean of each total timestep and average episodic return over all trials at each iteration. Parameters: x_s - A list of lists of total timesteps so far per seed. y_s - A list of lists of average episodic return per seed ...
Calculate mean of each total timestep and average episodic return over all trials at each iteration. Parameters: x_s - A list of lists of total timesteps so far per seed. y_s - A list of lists of average episodic return per seed Return: Means of x_...
calculate_means
python
ericyangyu/PPO-for-Beginners
part4/make_graph.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/make_graph.py
MIT
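The calculate_means record is likewise truncated; per its docstring it averages total timesteps and episodic returns over all trials at each iteration. A minimal NumPy sketch (an assumption, not the source's code):

```python
import numpy as np

def calculate_means(x_s, y_s):
    # Average across seeds (axis 0), so each output element is the
    # mean over trials for that iteration.
    x_mean = np.mean(np.asarray(x_s, dtype=float), axis=0).tolist()
    y_mean = np.mean(np.asarray(y_s, dtype=float), axis=0).tolist()
    return x_mean, y_mean

x_mean, y_mean = calculate_means([[100, 200], [120, 220]],
                                 [[1.0, 2.0], [3.0, 4.0]])
```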
def clip_data(x_s, y_s): """ In the case that there are different numbers of iterations across learning trials, clip all trials to the length of the shortest trial. Parameters: x_s - A list of lists of total timesteps so far per seed. y_s - A list of lists of ...
In the case that there are different numbers of iterations across learning trials, clip all trials to the length of the shortest trial. Parameters: x_s - A list of lists of total timesteps so far per seed. y_s - A list of lists of average episodic return per seed...
clip_data
python
ericyangyu/PPO-for-Beginners
part4/make_graph.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/make_graph.py
MIT
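The clip_data record describes truncating every trial to the length of the shortest one; its body is cut off above. A minimal sketch of that behavior (an assumption based on the docstring):

```python
def clip_data(x_s, y_s):
    # Find the shortest trial, then truncate every seed's data to
    # that length so all trials align iteration-by-iteration.
    n = min(len(x) for x in x_s)
    return [x[:n] for x in x_s], [y[:n] for y in y_s]

x_clipped, y_clipped = clip_data([[1, 2, 3], [1, 2]], [[5, 6, 7], [8, 9]])
```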
def extract_data(paths): """ Extracts data from all the files, and returns a generator object extract_data to iterably return data for each environment. Number of iterations should equal number of environments in graph_data. Parameters: paths - Contains the paths to eac...
Extracts data from all the files, and returns a generator object extract_data to iterably return data for each environment. Number of iterations should equal number of environments in graph_data. Parameters: paths - Contains the paths to each data file. Check function desc...
extract_data
python
ericyangyu/PPO-for-Beginners
part4/make_graph.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/make_graph.py
MIT
def graph_data(paths): """ Graphs the data with matplotlib. Will display on screen for user to screenshot. Parameters: paths - Contains the paths to each data file. Check function description of get_file_locations() to see how paths is structured. Return: ...
Graphs the data with matplotlib. Will display on screen for user to screenshot. Parameters: paths - Contains the paths to each data file. Check function description of get_file_locations() to see how paths is structured. Return: None
graph_data
python
ericyangyu/PPO-for-Beginners
part4/make_graph.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/make_graph.py
MIT
def main(): """ Main function to get file locations and graph the data. Parameters: None Return: None """ # Extract absolute file paths paths = get_file_locations() # Graph the data from the file paths extracted graph_data(paths)
Main function to get file locations and graph the data. Parameters: None Return: None
main
python
ericyangyu/PPO-for-Beginners
part4/make_graph.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/make_graph.py
MIT
def train_stable_baselines(args): """ Trains with PPO2 on specified environment. Parameters: args - the arguments defined in main. Return: None """ from stable_baselines3 import PPO # Store hyperparameters and total timesteps to run by environment hyperparameters = {} total_timesteps = 0 if args.e...
Trains with PPO2 on specified environment. Parameters: args - the arguments defined in main. Return: None
train_stable_baselines
python
ericyangyu/PPO-for-Beginners
part4/run.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/run.py
MIT
def train_ppo_for_beginners(args): """ Trains with PPO for Beginners on specified environment. Parameters: args - the arguments defined in main. Return: None """ # Import ppo for beginners if args.optimized: from ppo_for_beginners.ppo_optimized import PPO else: from ppo_for_beginners.ppo import P...
Trains with PPO for Beginners on specified environment. Parameters: args - the arguments defined in main. Return: None
train_ppo_for_beginners
python
ericyangyu/PPO-for-Beginners
part4/run.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/run.py
MIT
def main(args): """ An intermediate function that will call either PPO2 learn or PPO for Beginners learn. Parameters: args - the arguments defined below Return: None """ if args.code == 'stable_baselines_ppo': train_stable_baselines(args) elif args.code == 'ppo_for_beginners': train_ppo_for_beginn...
An intermediate function that will call either PPO2 learn or PPO for Beginners learn. Parameters: args - the arguments defined below Return: None
main
python
ericyangyu/PPO-for-Beginners
part4/run.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/run.py
MIT
def get_args(): """ Description: Parses arguments at command line. Parameters: None Return: args - the arguments parsed """ parser = argparse.ArgumentParser() parser.add_argument('--mode', dest='mode', type=str, default='train') # can be 'train' or 'test' parser.add_argument('--actor_...
Description: Parses arguments at command line. Parameters: None Return: args - the arguments parsed
get_args
python
ericyangyu/PPO-for-Beginners
part4/ppo_for_beginners/arguments.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/ppo_for_beginners/arguments.py
MIT
def __init__(self, in_dim, out_dim): """ Initialize the network and set up the layers. Parameters: in_dim - input dimensions as an int out_dim - output dimensions as an int Return: None """ super(FeedForwardNN, self).__init__() self.layer1 = nn.Linear(in_dim, 64) self.layer2 = nn.Linea...
Initialize the network and set up the layers. Parameters: in_dim - input dimensions as an int out_dim - output dimensions as an int Return: None
__init__
python
ericyangyu/PPO-for-Beginners
part4/ppo_for_beginners/network.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/ppo_for_beginners/network.py
MIT
def forward(self, obs): """ Runs a forward pass on the neural network. Parameters: obs - observation to pass as input Return: output - the output of our forward pass """ # Convert observation to tensor if it's a numpy array if isinstance(obs, np.ndarray): obs = torch.tensor(obs, dtype=torc...
Runs a forward pass on the neural network. Parameters: obs - observation to pass as input Return: output - the output of our forward pass
forward
python
ericyangyu/PPO-for-Beginners
part4/ppo_for_beginners/network.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/ppo_for_beginners/network.py
MIT
def __init__(self, policy_class, env, **hyperparameters): """ Initializes the PPO model, including hyperparameters. Parameters: policy_class - the policy class to use for our actor/critic networks. env - the environment to train on. hyperparameters - all extra arguments passed into PPO that should ...
Initializes the PPO model, including hyperparameters. Parameters: policy_class - the policy class to use for our actor/critic networks. env - the environment to train on. hyperparameters - all extra arguments passed into PPO that should be hyperparameters. Returns: None
__init__
python
ericyangyu/PPO-for-Beginners
part4/ppo_for_beginners/ppo.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/ppo_for_beginners/ppo.py
MIT
def learn(self, total_timesteps): """ Train the actor and critic networks. Here is where the main PPO algorithm resides. Parameters: total_timesteps - the total number of timesteps to train for Return: None """ print(f"Learning... Running {self.max_timesteps_per_episode} timesteps per episode, ...
Train the actor and critic networks. Here is where the main PPO algorithm resides. Parameters: total_timesteps - the total number of timesteps to train for Return: None
learn
python
ericyangyu/PPO-for-Beginners
part4/ppo_for_beginners/ppo.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/ppo_for_beginners/ppo.py
MIT
def rollout(self): """ Too many transformers references, I'm sorry. This is where we collect the batch of data from simulation. Since this is an on-policy algorithm, we'll need to collect a fresh batch of data each time we iterate the actor/critic networks. Parameters: None Return: batch_obs ...
Too many transformers references, I'm sorry. This is where we collect the batch of data from simulation. Since this is an on-policy algorithm, we'll need to collect a fresh batch of data each time we iterate the actor/critic networks. Parameters: None Return: batch_obs - the observations colle...
rollout
python
ericyangyu/PPO-for-Beginners
part4/ppo_for_beginners/ppo.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/ppo_for_beginners/ppo.py
MIT
def compute_rtgs(self, batch_rews): """ Compute the Reward-To-Go of each timestep in a batch given the rewards. Parameters: batch_rews - the rewards in a batch, Shape: (number of episodes, number of timesteps per episode) Return: batch_rtgs - the rewards to go, Shape: (number of timesteps in batch)...
Compute the Reward-To-Go of each timestep in a batch given the rewards. Parameters: batch_rews - the rewards in a batch, Shape: (number of episodes, number of timesteps per episode) Return: batch_rtgs - the rewards to go, Shape: (number of timesteps in batch)
compute_rtgs
python
ericyangyu/PPO-for-Beginners
part4/ppo_for_beginners/ppo.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/ppo_for_beginners/ppo.py
MIT
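The compute_rtgs record shows only a truncated body. The standard reward-to-go computation it describes walks each episode's rewards backwards, accumulating a discounted sum; a minimal sketch (the discount factor `gamma` is an assumed hyperparameter here, not taken from the source):

```python
def compute_rtgs(batch_rews, gamma=0.95):
    # Iterate episodes and rewards in reverse so each timestep's
    # return is its reward plus the discounted return that follows.
    batch_rtgs = []
    for ep_rews in reversed(batch_rews):
        discounted = 0.0
        for rew in reversed(ep_rews):
            discounted = rew + gamma * discounted
            batch_rtgs.insert(0, discounted)
    return batch_rtgs

rtgs = compute_rtgs([[1.0, 1.0]], gamma=0.5)  # [1.5, 1.0]
```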
def get_action(self, obs): """ Queries an action from the actor network, should be called from rollout. Parameters: obs - the observation at the current timestep Return: action - the action to take, as a numpy array log_prob - the log probability of the selected action in the distribution """...
Queries an action from the actor network, should be called from rollout. Parameters: obs - the observation at the current timestep Return: action - the action to take, as a numpy array log_prob - the log probability of the selected action in the distribution
get_action
python
ericyangyu/PPO-for-Beginners
part4/ppo_for_beginners/ppo.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/ppo_for_beginners/ppo.py
MIT
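The get_action record returns both an action and its log probability; PPO implementations typically sample from a diagonal Gaussian over actions. A stdlib-only sketch of that idea (the repo itself uses a PyTorch distribution, so this is an illustrative stand-in, not its API):

```python
import math
import random

def sample_action(mean, std):
    # Sample each action dimension from an independent Gaussian and
    # sum the per-dimension log probabilities of the sampled action.
    action, log_prob = [], 0.0
    for m, s in zip(mean, std):
        a = random.gauss(m, s)
        log_prob += (-0.5 * ((a - m) / s) ** 2
                     - math.log(s) - 0.5 * math.log(2.0 * math.pi))
        action.append(a)
    return action, log_prob

action, log_prob = sample_action([0.5, -0.5], [0.1, 0.1])
```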
def evaluate(self, batch_obs, batch_acts): """ Estimate the values of each observation, and the log probs of each action in the most recent batch with the most recent iteration of the actor network. Should be called from learn. Parameters: batch_obs - the observations from the most recently collected...
Estimate the values of each observation, and the log probs of each action in the most recent batch with the most recent iteration of the actor network. Should be called from learn. Parameters: batch_obs - the observations from the most recently collected batch as a tensor. Shape: (number of tim...
evaluate
python
ericyangyu/PPO-for-Beginners
part4/ppo_for_beginners/ppo.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/ppo_for_beginners/ppo.py
MIT
def _init_hyperparameters(self, hyperparameters): """ Initialize default and custom values for hyperparameters Parameters: hyperparameters - the extra arguments included when creating the PPO model, should only include hyperparameters defined below with custom values. Return: None """ ...
Initialize default and custom values for hyperparameters Parameters: hyperparameters - the extra arguments included when creating the PPO model, should only include hyperparameters defined below with custom values. Return: None
_init_hyperparameters
python
ericyangyu/PPO-for-Beginners
part4/ppo_for_beginners/ppo.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/ppo_for_beginners/ppo.py
MIT
def _log_summary(self): """ Print to stdout what we've logged so far in the most recent batch. Parameters: None Return: None """ # Calculate logging values. I use a few python shortcuts to calculate each value # without explaining since it's not too important to PPO; feel free to look it over...
Print to stdout what we've logged so far in the most recent batch. Parameters: None Return: None
_log_summary
python
ericyangyu/PPO-for-Beginners
part4/ppo_for_beginners/ppo.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/ppo_for_beginners/ppo.py
MIT
def __init__(self, policy_class, env, **hyperparameters): """ Initializes the PPO model, including hyperparameters. Parameters: policy_class - the policy class to use for our actor/critic networks. env - the environment to train on. hyperp...
Initializes the PPO model, including hyperparameters. Parameters: policy_class - the policy class to use for our actor/critic networks. env - the environment to train on. hyperparameters - all extra arguments passed into PPO that should be hyperp...
__init__
python
ericyangyu/PPO-for-Beginners
part4/ppo_for_beginners/ppo_optimized.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/ppo_for_beginners/ppo_optimized.py
MIT
def learn(self, total_timesteps): """ Train the actor and critic networks. Here is where the main PPO algorithm resides. Parameters: total_timesteps - the total number of timesteps to train for Return: None """ print(f"Learnin...
Train the actor and critic networks. Here is where the main PPO algorithm resides. Parameters: total_timesteps - the total number of timesteps to train for Return: None
learn
python
ericyangyu/PPO-for-Beginners
part4/ppo_for_beginners/ppo_optimized.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/ppo_for_beginners/ppo_optimized.py
MIT
def get_action(self, obs): """ Queries an action from the actor network, should be called from rollout. Parameters: obs - the observation at the current timestep Return: action - the action to take, as a numpy array log_prob -...
Queries an action from the actor network, should be called from rollout. Parameters: obs - the observation at the current timestep Return: action - the action to take, as a numpy array log_prob - the log probability of the selected a...
get_action
python
ericyangyu/PPO-for-Beginners
part4/ppo_for_beginners/ppo_optimized.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/ppo_for_beginners/ppo_optimized.py
MIT
def evaluate(self, batch_obs, batch_acts): """ Estimate the values of each observation, and the log probs of each action in the most recent batch with the most recent iteration of the actor network. Should be called from learn. Parameters: batch_o...
Estimate the values of each observation, and the log probs of each action in the most recent batch with the most recent iteration of the actor network. Should be called from learn. Parameters: batch_obs - the observations from the most recently collected...
evaluate
python
ericyangyu/PPO-for-Beginners
part4/ppo_for_beginners/ppo_optimized.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/ppo_for_beginners/ppo_optimized.py
MIT
def _init_hyperparameters(self, hyperparameters): """ Initialize default and custom values for hyperparameters Parameters: hyperparameters - the extra arguments included when creating the PPO model, should only include hyperparameters ...
Initialize default and custom values for hyperparameters Parameters: hyperparameters - the extra arguments included when creating the PPO model, should only include hyperparameters defined below with custom values. Return: ...
_init_hyperparameters
python
ericyangyu/PPO-for-Beginners
part4/ppo_for_beginners/ppo_optimized.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/ppo_for_beginners/ppo_optimized.py
MIT
def _log_summary(self): """ Print to stdout what we've logged so far in the most recent batch. Parameters: None Return: None """ # Calculate logging values. I use a few python shortcuts to calculate each value # withou...
Print to stdout what we've logged so far in the most recent batch. Parameters: None Return: None
_log_summary
python
ericyangyu/PPO-for-Beginners
part4/ppo_for_beginners/ppo_optimized.py
https://github.com/ericyangyu/PPO-for-Beginners/blob/master/part4/ppo_for_beginners/ppo_optimized.py
MIT
def _validate_args( object_height: float, shadow_length: float, date_time: datetime, ) -> None: """ Validate the text search CLI arguments, raises an error if the arguments are invalid. """ if not object_height: raise ValueError("Object height cannot be empty") if not shadow_leng...
Validate the text search CLI arguments, raises an error if the arguments are invalid.
_validate_args
python
bellingcat/ShadowFinder
src/shadowfinder/cli.py
https://github.com/bellingcat/ShadowFinder/blob/master/src/shadowfinder/cli.py
MIT
def _validate_args_sun( sun_altitude_angle: float, date_time: datetime, ) -> None: """ Validate the text search CLI arguments, raises an error if the arguments are invalid. """ if not sun_altitude_angle: raise ValueError("Sun altitude angle cannot be empty") if not date_time: ...
Validate the text search CLI arguments, raises an error if the arguments are invalid.
_validate_args_sun
python
bellingcat/ShadowFinder
src/shadowfinder/cli.py
https://github.com/bellingcat/ShadowFinder/blob/master/src/shadowfinder/cli.py
MIT
def find( object_height: float, shadow_length: float, date: str, time: str, time_format: str = "utc", grid: str = "timezone_grid.json", ) -> None: """ Find the shadow length of an object given its height and the date and time. :param object_hei...
Find the shadow length of an object given its height and the date and time. :param object_height: Height of the object in arbitrary units :param shadow_length: Length of the shadow in arbitrary units :param date: Date in the format YYYY-MM-DD :param time: UTC Time in the format ...
find
python
bellingcat/ShadowFinder
src/shadowfinder/cli.py
https://github.com/bellingcat/ShadowFinder/blob/master/src/shadowfinder/cli.py
MIT
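The find record starts from an object height and shadow length; the relationship to the sun's altitude angle is plain trigonometry, tan(altitude) = height / shadow_length. A minimal sketch (the function name is hypothetical; ShadowFinder's actual API is not shown here):

```python
import math

def sun_altitude_from_shadow(object_height, shadow_length):
    # atan2 handles the ratio directly and avoids division by zero
    # for a zero-length shadow (sun directly overhead -> 90 degrees).
    return math.degrees(math.atan2(object_height, shadow_length))

angle = sun_altitude_from_shadow(1.0, 1.0)  # equal height and shadow -> 45.0
```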
def find_sun( sun_altitude_angle: float, date: str, time: str, time_format: str = "utc", grid: str = "timezone_grid.json", ) -> None: """ Locate a shadow based on the solar altitude angle and the date and time. :param sun_altitude_angle: Sun altitude a...
Locate a shadow based on the solar altitude angle and the date and time. :param sun_altitude_angle: Sun altitude angle in degrees :param date: Date in the format YYYY-MM-DD :param time: UTC Time in the format HH:MM:SS
find_sun
python
bellingcat/ShadowFinder
src/shadowfinder/cli.py
https://github.com/bellingcat/ShadowFinder/blob/master/src/shadowfinder/cli.py
MIT
def generate_timezone_grid( grid: str = "timezone_grid.json", ) -> None: """ Generate a timezone grid file. :param grid: File path to save the timezone grid. """ shadow_finder = ShadowFinder() shadow_finder.generate_timezone_grid() shadow_finder.save_...
Generate a timezone grid file. :param grid: File path to save the timezone grid.
generate_timezone_grid
python
bellingcat/ShadowFinder
src/shadowfinder/cli.py
https://github.com/bellingcat/ShadowFinder/blob/master/src/shadowfinder/cli.py
MIT
def test_executable_without_args(): """Tests that running shadowfinder without any arguments returns the CLI's help string and 0 exit code.""" # GIVEN expected = """ NAME shadowfinder SYNOPSIS shadowfinder COMMAND COMMANDS COMMAND is one of the following: find Find the shadow leng...
Tests that running shadowfinder without any arguments returns the CLI's help string and 0 exit code.
test_executable_without_args
python
bellingcat/ShadowFinder
tests/test_executable.py
https://github.com/bellingcat/ShadowFinder/blob/master/tests/test_executable.py
MIT
def test_creation_with_valid_arguments_should_pass(): """Baseline test to assert that we can create an instance of ShadowFinder with only object height, shadow length, and a datetime object.""" # GIVEN object_height = 6 shadow_length = 3.2 date_time = datetime.now() # WHEN / THEN Shadow...
Baseline test to assert that we can create an instance of ShadowFinder with only object height, shadow length, and a datetime object.
test_creation_with_valid_arguments_should_pass
python
bellingcat/ShadowFinder
tests/test_shadowfinder.py
https://github.com/bellingcat/ShadowFinder/blob/master/tests/test_shadowfinder.py
MIT
def test_huber_loss(self): """Test of huber loss huber_loss() allows two types of inputs: - `y_target` and `y_pred` - `diff` """ # [1, 1] -> [0.5, 0.5] loss = huber_loss(np.array([1., 1.]), delta=1.) np.testing.assert_array_equal( np.array([0.5...
Test of huber loss huber_loss() allows two types of inputs: - `y_target` and `y_pred` - `diff`
test_huber_loss
python
keiohta/tf2rl
tests/misc/test_huber_loss.py
https://github.com/keiohta/tf2rl/blob/master/tests/misc/test_huber_loss.py
MIT
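The test_huber_loss record asserts that `huber_loss([1., 1.], delta=1.)` yields `[0.5, 0.5]`. A minimal NumPy sketch of the `diff` form of that loss (the source also accepts `y_target`/`y_pred`, which this sketch omits):

```python
import numpy as np

def huber_loss(diff, delta=1.0):
    # Quadratic inside |diff| <= delta, linear outside; the two
    # branches meet smoothly at |diff| == delta.
    abs_diff = np.abs(diff)
    quadratic = 0.5 * np.square(diff)
    linear = delta * (abs_diff - 0.5 * delta)
    return np.where(abs_diff <= delta, quadratic, linear)

losses = huber_loss(np.array([1.0, 1.0]), delta=1.0)  # [0.5, 0.5]
```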
def layer_test(layer_cls, kwargs=None, input_shape=None, input_dtype=None, input_data=None, expected_output=None, expected_output_dtype=None, custom_objects=None): """Test routine for a layer with a single input and single output. Arguments: layer_cls: Layer class object. ...
Test routine for a layer with a single input and single output. Arguments: layer_cls: Layer class object. kwargs: Optional dictionary of keyword arguments for instantiating the layer. input_shape: Input shape tuple. input_dtype: Data type of the input data. input_data: Numpy a...
layer_test
python
keiohta/tf2rl
tests/networks/utils.py
https://github.com/keiohta/tf2rl/blob/master/tests/networks/utils.py
MIT
def explorer(global_rb, queue, trained_steps, is_training_done, lock, env_fn, policy_fn, set_weights_fn, noise_level, n_env=64, n_thread=4, buffer_size=1024, episode_max_steps=1000, gpu=0): """Collect transitions and store them to prioritized replay buffer. Args: global_rb: mu...
Collect transitions and store them to prioritized replay buffer. Args: global_rb: multiprocessing.managers.AutoProxy[PrioritizedReplayBuffer] Prioritized replay buffer sharing with multiple explorers and only one learner. This object is shared over processes, so it must be locked wh...
explorer
python
keiohta/tf2rl
tf2rl/algos/apex.py
https://github.com/keiohta/tf2rl/blob/master/tf2rl/algos/apex.py
MIT
def learner(global_rb, trained_steps, is_training_done, lock, env, policy_fn, get_weights_fn, n_training, update_freq, evaluation_freq, gpu, queues): """Update network weights using samples collected by explorers. Args: global_rb: multiprocessing.managers.AutoProxy[PrioritizedRe...
Update network weights using samples collected by explorers. Args: global_rb: multiprocessing.managers.AutoProxy[PrioritizedReplayBuffer] Prioritized replay buffer sharing with multiple explorers and only one learner. This object is shared over processes, so it must be locked when t...
learner
python
keiohta/tf2rl
tf2rl/algos/apex.py
https://github.com/keiohta/tf2rl/blob/master/tf2rl/algos/apex.py
MIT
def evaluator(is_training_done, env, policy_fn, set_weights_fn, queue, gpu, save_model_interval=int(1e6), n_evaluation=10, episode_max_steps=1000, show_test_progress=False): """Evaluate trained network weights periodically. Args: is_training_done: multiprocessing.Event ...
Evaluate trained network weights periodically. Args: is_training_done: multiprocessing.Event multiprocessing.Event object to share the status of training. env: OpenAI Gym compatible environment Environment object. policy_fn: function Method object to gen...
evaluator
python
keiohta/tf2rl
tf2rl/algos/apex.py
https://github.com/keiohta/tf2rl/blob/master/tf2rl/algos/apex.py
MIT
def __init__(self, eta=0.05, name="BiResDDPG", **kwargs): """ Initialize BiResDDPG agent Args: eta (float): Gradients mixing factor. name (str): Name of agent. The default is ``"BiResDDPG"``. state_shape (iterable of int): action_dim (int): ...
Initialize BiResDDPG agent Args: eta (float): Gradients mixing factor. name (str): Name of agent. The default is ``"BiResDDPG"``. state_shape (iterable of int): action_dim (int): max_action (float): Size of maximum action. (``-max_action`` <=...
__init__
python
keiohta/tf2rl
tf2rl/algos/bi_res_ddpg.py
https://github.com/keiohta/tf2rl/blob/master/tf2rl/algos/bi_res_ddpg.py
MIT
def compute_td_error(self, states, actions, next_states, rewards, dones): """ Compute TD error Args: states actions next_states rewards dones Returns: np.ndarray: Sum of two TD errors. """ td_error1,...
Compute TD error Args: states actions next_states rewards dones Returns: np.ndarray: Sum of two TD errors.
compute_td_error
python
keiohta/tf2rl
tf2rl/algos/bi_res_ddpg.py
https://github.com/keiohta/tf2rl/blob/master/tf2rl/algos/bi_res_ddpg.py
MIT
def get_argument(parser=None): """ Create or update argument parser for command line program Args: parser (argparse.ArgParser, optional): argument parser Returns: argparse.ArgParser: argument parser """ parser = DDPG.get_argument(parser) ...
Create or update argument parser for command line program Args: parser (argparse.ArgParser, optional): argument parser Returns: argparse.ArgParser: argument parser
get_argument
python
keiohta/tf2rl
tf2rl/algos/bi_res_ddpg.py
https://github.com/keiohta/tf2rl/blob/master/tf2rl/algos/bi_res_ddpg.py
MIT
def __init__( self, state_shape, action_dim, q_func=None, name="DQN", lr=0.001, adam_eps=1e-07, units=(32, 32), epsilon=0.1, epsilon_min=None, epsilon_decay_step=int(1e6), n_wa...
Initialize Categorical DQN Args: state_shape (iterable of int): Observation space shape action_dim (int): Dimension of discrete action q_function (QFunc): Custom Q function class. If ``None`` (default), Q function is constructed with ``QFunc``. name (str...
__init__
python
keiohta/tf2rl
tf2rl/algos/categorical_dqn.py
https://github.com/keiohta/tf2rl/blob/master/tf2rl/algos/categorical_dqn.py
MIT
def get_action(self, state, test=False, tensor=False): """ Get action Args: state: Observation state test (bool): When ``False`` (default), policy returns exploratory action. tensor (bool): When ``True``, return type is ``tf.Tensor`` Returns: ...
Get action Args: state: Observation state test (bool): When ``False`` (default), policy returns exploratory action. tensor (bool): When ``True``, return type is ``tf.Tensor`` Returns: tf.Tensor or np.ndarray or float: Selected action
get_action
python
keiohta/tf2rl
tf2rl/algos/categorical_dqn.py
https://github.com/keiohta/tf2rl/blob/master/tf2rl/algos/categorical_dqn.py
MIT
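The Categorical DQN get_action record switches between exploratory and greedy behavior based on the `test` flag; DQN-family agents conventionally do this with epsilon-greedy selection. A stdlib-only sketch of that pattern (the function name and signature are illustrative, not the repo's API):

```python
import random

def epsilon_greedy(q_values, epsilon=0.1, test=False):
    # Explore with probability epsilon during training; always act
    # greedily (argmax over Q-values) during evaluation.
    if not test and random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

action = epsilon_greedy([0.1, 0.9, 0.3], epsilon=0.0)  # greedy -> 1
```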
def train(self, states, actions, next_states, rewards, done, weights=None): """ Train DQN Args: states actions next_states rewards done weights (optional): Weights for importance sampling """ if weights is N...
Train DQN Args: states actions next_states rewards done weights (optional): Weights for importance sampling
train
python
keiohta/tf2rl
tf2rl/algos/categorical_dqn.py
https://github.com/keiohta/tf2rl/blob/master/tf2rl/algos/categorical_dqn.py
MIT
def compute_td_error(self, states, actions, next_states, rewards, dones): """ Compute TD error Args: states actions next_states rewards dones Returns tf.Tensor: TD error """ # TODO: fix this ugly con...
Compute TD error Args: states actions next_states rewards dones Returns tf.Tensor: TD error
compute_td_error
python
keiohta/tf2rl
tf2rl/algos/categorical_dqn.py
https://github.com/keiohta/tf2rl/blob/master/tf2rl/algos/categorical_dqn.py
MIT
def _compute_td_error_body(self, states, actions, next_states, rewards, dones): """ Args: states: actions: next_states: rewards: Shape should be (batch_size, 1) dones: Shape should be (batch_size, 1) Ret...
Args: states: actions: next_states: rewards: Shape should be (batch_size, 1) dones: Shape should be (batch_size, 1) Returns:
_compute_td_error_body
python
keiohta/tf2rl
tf2rl/algos/categorical_dqn.py
https://github.com/keiohta/tf2rl/blob/master/tf2rl/algos/categorical_dqn.py
MIT
def get_argument(parser=None): """ Create or update argument parser for command line program Args: parser (argparse.ArgParser, optional): argument parser Returns: argparse.ArgParser: argument parser """ parser = OffPolicyAgent.get_argument(parser...
Create or update argument parser for command line program Args: parser (argparse.ArgParser, optional): argument parser Returns: argparse.ArgParser: argument parser
get_argument
python
keiohta/tf2rl
tf2rl/algos/categorical_dqn.py
https://github.com/keiohta/tf2rl/blob/master/tf2rl/algos/categorical_dqn.py
MIT
def __init__(self, *args, **kwargs): """ Initialize CURL Args: action_dim (int): obs_shape: (iterable of int): The default is ``(84, 84, 9)`` n_conv_layers (int): Number of convolutional layers at encoder. The default is ``4`...
Initialize CURL Args: action_dim (int): obs_shape: (iterable of int): The default is ``(84, 84, 9)`` n_conv_layers (int): Number of convolutional layers at encoder. The default is ``4`` n_conv_filters (int): Number of filters in convolutional layers. The...
__init__
python
keiohta/tf2rl
tf2rl/algos/curl_sac.py
https://github.com/keiohta/tf2rl/blob/master/tf2rl/algos/curl_sac.py
MIT
def train(self, states, actions, next_states, rewards, dones, weights=None): """ Train CURL Args: states actions next_states rewards done weights (optional): Weights for importance sampling """ if weights is...
Train CURL Args: states actions next_states rewards done weights (optional): Weights for importance sampling
train
python
keiohta/tf2rl
tf2rl/algos/curl_sac.py
https://github.com/keiohta/tf2rl/blob/master/tf2rl/algos/curl_sac.py
MIT
def __init__( self, state_shape, action_dim, name="DDPG", max_action=1., lr_actor=0.001, lr_critic=0.001, actor_units=(400, 300), critic_units=(400, 300), sigma=0.1, tau=0.005, ...
Initialize DDPG agent Args: state_shape (iterable of int): action_dim (int): name (str): Name of agent. The default is ``"DDPG"``. max_action (float): Size of maximum action. (``-max_action`` <= action <= ``max_action``). The default is ``1``. ...
__init__
python
keiohta/tf2rl
tf2rl/algos/ddpg.py
https://github.com/keiohta/tf2rl/blob/master/tf2rl/algos/ddpg.py
MIT
def get_action(self, state, test=False, tensor=False): """ Get action Args: state: Observation state test (bool): When ``False`` (default), policy returns exploratory action. tensor (bool): When ``True``, return type is ``tf.Tensor`` Returns: ...
Get action Args: state: Observation state test (bool): When ``False`` (default), policy returns exploratory action. tensor (bool): When ``True``, return type is ``tf.Tensor`` Returns: tf.Tensor or np.ndarray or float: Selected action
get_action
python
keiohta/tf2rl
tf2rl/algos/ddpg.py
https://github.com/keiohta/tf2rl/blob/master/tf2rl/algos/ddpg.py
MIT