INSTRUCTION (string, length 1 to 46.3k) | RESPONSE (string, length 75 to 80.2k)
Inform Metrics class that experience collection is done.
def end_experience_collection_timer(self): """ Inform Metrics class that experience collection is done. """ if self.time_start_experience_collection: curr_delta = time() - self.time_start_experience_collection if self.delta_last_experience_collection is None: ...
Inform Metrics class about time to step in environment.
def add_delta_step(self, delta: float): """ Inform Metrics class about time to step in environment. """ if self.delta_last_experience_collection: self.delta_last_experience_collection += delta else: self.delta_last_experience_collection = delta
Inform Metrics class that policy update has started. :int number_experiences: Number of experiences in Buffer at this point. :float mean_return: Return averaged across all cumulative returns since last policy update
def start_policy_update_timer(self, number_experiences: int, mean_return: float): """ Inform Metrics class that policy update has started. :int number_experiences: Number of experiences in Buffer at this point. :float mean_return: Return averaged across all cumulative returns since last ...
Inform Metrics class that policy update has ended.
def end_policy_update(self): """ Inform Metrics class that policy update has ended. """ if self.time_policy_update_start: self.delta_policy_update = time() - self.time_policy_update_start else: self.delta_policy_update = 0 delta_train_start = tim...
Write Training Metrics to CSV
def write_training_metrics(self): """ Write Training Metrics to CSV """ with open(self.path, 'w') as file: writer = csv.writer(file) writer.writerow(FIELD_NAMES) for row in self.rows: writer.writerow(row)
Creates TF ops to track and increment recent average cumulative reward.
def create_reward_encoder(): """Creates TF ops to track and increment recent average cumulative reward.""" last_reward = tf.Variable(0, name="last_reward", trainable=False, dtype=tf.float32) new_reward = tf.placeholder(shape=[], dtype=tf.float32, name='new_reward') update_reward = tf.ass...
Creates state encoders for current and future observations. Used for implementation of Curiosity-driven Exploration by Self-supervised Prediction See https://arxiv.org/abs/1705.05363 for more details. :return: current and future state encoder tensors.
def create_curiosity_encoders(self): """ Creates state encoders for current and future observations. Used for implementation of Curiosity-driven Exploration by Self-supervised Prediction See https://arxiv.org/abs/1705.05363 for more details. :return: current and future state enc...
Creates inverse model TensorFlow ops for Curiosity module. Predicts action taken given current and future encoded states. :param encoded_state: Tensor corresponding to encoded current state. :param encoded_next_state: Tensor corresponding to encoded next state.
def create_inverse_model(self, encoded_state, encoded_next_state): """ Creates inverse model TensorFlow ops for Curiosity module. Predicts action taken given current and future encoded states. :param encoded_state: Tensor corresponding to encoded current state. :param encoded_nex...
Creates forward model TensorFlow ops for Curiosity module. Predicts encoded future state based on encoded current state and given action. :param encoded_state: Tensor corresponding to encoded current state. :param encoded_next_state: Tensor corresponding to encoded next state.
def create_forward_model(self, encoded_state, encoded_next_state): """ Creates forward model TensorFlow ops for Curiosity module. Predicts encoded future state based on encoded current state and given action. :param encoded_state: Tensor corresponding to encoded current state. :p...
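The intrinsic reward produced by the Curiosity module is derived from the forward model's prediction error: the harder the next encoded state is to predict, the larger the bonus. A minimal NumPy sketch of that idea (the function name and the strength scaling are illustrative assumptions, not the exact ML-Agents graph):

import numpy as np

def intrinsic_reward(encoded_next_state, predicted_next_state, strength=0.01):
    # Reward is proportional to the squared prediction error of the forward model:
    # states the agent cannot predict well are "novel" and earn a larger bonus.
    return strength * 0.5 * np.mean(
        np.square(encoded_next_state - predicted_next_state), axis=-1)

# Example: two agents, 4-dimensional encoded states.
actual = np.array([[0.1, 0.2, 0.3, 0.4], [1.0, 1.0, 1.0, 1.0]])
predicted = np.array([[0.1, 0.2, 0.3, 0.4], [0.0, 0.0, 0.0, 0.0]])
print(intrinsic_reward(actual, predicted))  # first agent ~0, second agent larger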
Creates training-specific Tensorflow ops for PPO models. :param probs: Current policy probabilities :param old_probs: Past policy probabilities :param value: Current value estimate :param beta: Entropy regularization strength :param entropy: Current policy entropy :param ...
def create_ppo_optimizer(self, probs, old_probs, value, entropy, beta, epsilon, lr, max_step): """ Creates training-specific Tensorflow ops for PPO models. :param probs: Current policy probabilities :param old_probs: Past policy probabilities :param value: Current value estimate ...
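The heart of this optimizer is PPO's clipped surrogate objective over the ratio of new to old action probabilities. A small NumPy sketch of that loss term only (illustrative; the actual method builds TensorFlow ops and also includes value and entropy terms):

import numpy as np

def ppo_surrogate_loss(probs, old_probs, advantages, epsilon=0.2):
    # Probability ratio between the current and old policy for the taken actions.
    ratio = probs / (old_probs + 1e-10)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantages
    # PPO maximizes the minimum of the two terms; as a loss we negate the mean.
    return -np.mean(np.minimum(unclipped, clipped))

probs = np.array([0.30, 0.55])
old_probs = np.array([0.25, 0.60])
advantages = np.array([1.0, -0.5])
print(ppo_surrogate_loss(probs, old_probs, advantages))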
Evaluates policy for the agent experiences provided. :param brain_info: BrainInfo object containing inputs. :return: Outputs from network as defined by self.inference_dict.
def evaluate(self, brain_info): """ Evaluates policy for the agent experiences provided. :param brain_info: BrainInfo object containing inputs. :return: Outputs from network as defined by self.inference_dict. """ feed_dict = {self.model.batch_size: len(brain_info.vector_o...
Updates model using buffer. :param num_sequences: Number of trajectories in batch. :param mini_batch: Experience batch. :return: Output from update process.
def update(self, mini_batch, num_sequences): """ Updates model using buffer. :param num_sequences: Number of trajectories in batch. :param mini_batch: Experience batch. :return: Output from update process. """ feed_dict = {self.model.batch_size: num_sequences, ...
Generates intrinsic reward used for Curiosity-based training. :BrainInfo curr_info: Current BrainInfo. :BrainInfo next_info: Next BrainInfo. :return: Intrinsic rewards for all agents.
def get_intrinsic_rewards(self, curr_info, next_info): """ Generates intrinsic reward used for Curiosity-based training. :BrainInfo curr_info: Current BrainInfo. :BrainInfo next_info: Next BrainInfo. :return: Intrinsic rewards for all agents. """ if self.use_curio...
Generates value estimates for bootstrapping. :param brain_info: BrainInfo to be used for bootstrapping. :param idx: Index in BrainInfo of agent. :return: Value estimate.
def get_value_estimate(self, brain_info, idx): """ Generates value estimates for bootstrapping. :param brain_info: BrainInfo to be used for bootstrapping. :param idx: Index in BrainInfo of agent. :return: Value estimate. """ feed_dict = {self.model.batch_size: 1, ...
Updates reward value for policy. :param new_reward: New reward to save.
def update_reward(self, new_reward): """ Updates reward value for policy. :param new_reward: New reward to save. """ self.sess.run(self.model.update_reward, feed_dict={self.model.new_reward: new_reward})
Adds experiences to each agent's experience history. :param curr_info: Current AllBrainInfo (Dictionary of all current brains and corresponding BrainInfo). :param next_info: Next AllBrainInfo (Dictionary of all current brains and corresponding BrainInfo). :param take_action_outputs: The outputs ...
def add_experiences(self, curr_info: AllBrainInfo, next_info: AllBrainInfo, take_action_outputs): """ Adds experiences to each agent's experience history. :param curr_info: Current AllBrainInfo (Dictionary of all current brains and corresponding BrainInfo). :param...
Checks agent histories for processing condition, and processes them as necessary. Processing involves calculating value and advantage targets for model updating step. :param current_info: Current AllBrainInfo :param next_info: Next AllBrainInfo
def process_experiences(self, current_info: AllBrainInfo, next_info: AllBrainInfo): """ Checks agent histories for processing condition, and processes them as necessary. Processing involves calculating value and advantage targets for model updating step. :param current_info: Current AllB...
A signal that the Episode has ended. The buffer must be reset. Gets called only when the academy resets.
def end_episode(self): """ A signal that the Episode has ended. The buffer must be reset. Gets called only when the academy resets. """ self.evaluation_buffer.reset_local_buffers() for agent_id in self.cumulative_rewards: self.cumulative_rewards[agent_id] = 0 ...
Updates the policy.
def update_policy(self): """ Updates the policy. """ self.demonstration_buffer.update_buffer.shuffle() batch_losses = [] num_batches = min(len(self.demonstration_buffer.update_buffer['actions']) // self.n_sequences, self.batches_per_epoch) ...
Creates TF ops to track and increment global training step.
def create_global_steps(): """Creates TF ops to track and increment global training step.""" global_step = tf.Variable(0, name="global_step", trainable=False, dtype=tf.int32) increment_step = tf.assign(global_step, tf.add(global_step, 1)) return global_step, increment_step
Creates image input op. :param camera_parameters: Parameters for visual observation from BrainInfo. :param name: Desired name of input op. :return: input op.
def create_visual_input(camera_parameters, name): """ Creates image input op. :param camera_parameters: Parameters for visual observation from BrainInfo. :param name: Desired name of input op. :return: input op. """ o_size_h = camera_parameters['height'] o...
Creates ops for vector observation input. :param name: Name of the placeholder op. :param vec_obs_size: Size of stacked vector observation. :return:
def create_vector_input(self, name='vector_observation'): """ Creates ops for vector observation input. :param name: Name of the placeholder op. :param vec_obs_size: Size of stacked vector observation. :return: """ self.vector_in = tf.placeholder(shape=[None, self...
Builds a set of hidden state encoders. :param reuse: Whether to re-use the weights within the same scope. :param scope: Graph scope for the encoder ops. :param observation_input: Input vector. :param h_size: Hidden layer size. :param activation: What type of activation function t...
def create_vector_observation_encoder(observation_input, h_size, activation, num_layers, scope, reuse): """ Builds a set of hidden state encoders. :param reuse: Whether to re-use the weights within the same scope. :param scope: Graph scope for th...
Builds a set of visual (CNN) encoders. :param reuse: Whether to re-use the weights within the same scope. :param scope: The scope of the graph within which to create the ops. :param image_input: The placeholder for the image input to use. :param h_size: Hidden layer size. :param ...
def create_visual_observation_encoder(self, image_input, h_size, activation, num_layers, scope, reuse): """ Builds a set of visual (CNN) encoders. :param reuse: Whether to re-use the weights within the same scope. :param scope: The scope of the g...
Creates a masking layer for the discrete actions :param all_logits: The concatenated unnormalized action probabilities for all branches :param action_masks: The mask for the logits. Must be of dimension [None x total_number_of_action] :param action_size: A list containing the number of possible ...
def create_discrete_action_masking_layer(all_logits, action_masks, action_size): """ Creates a masking layer for the discrete actions :param all_logits: The concatenated unnormalized action probabilities for all branches :param action_masks: The mask for the logits. Must be of dimension ...
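One common way to realize such a masking layer (a sketch under the assumption that masked actions simply receive zero probability within each branch, not necessarily the exact ML-Agents ops):

import numpy as np

def masked_branch_probs(all_logits, action_masks, action_size):
    # all_logits: [batch, sum(action_size)]; action_masks: same shape, 1 = allowed.
    probs = []
    start = 0
    for branch_size in action_size:
        logits = all_logits[:, start:start + branch_size]
        mask = action_masks[:, start:start + branch_size]
        # Softmax per branch, with masked actions zeroed out before normalizing.
        exp = np.exp(logits - logits.max(axis=1, keepdims=True)) * mask
        probs.append(exp / exp.sum(axis=1, keepdims=True))
        start += branch_size
    return probs

logits = np.array([[2.0, 1.0, 0.5, 0.0, 3.0]])
masks = np.array([[1.0, 0.0, 1.0, 1.0, 1.0]])
print(masked_branch_probs(logits, masks, [3, 2]))  # masked action gets probability 0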
Creates encoding stream for observations. :param num_streams: Number of streams to create. :param h_size: Size of hidden linear layers in stream. :param num_layers: Number of hidden linear layers in stream. :return: List of encoded streams.
def create_observation_streams(self, num_streams, h_size, num_layers): """ Creates encoding stream for observations. :param num_streams: Number of streams to create. :param h_size: Size of hidden linear layers in stream. :param num_layers: Number of hidden linear layers in stream...
Builds a recurrent encoder for either state or observations (LSTM). :param sequence_length: Length of sequence to unroll. :param input_state: The input tensor to the LSTM cell. :param memory_in: The input memory to the LSTM cell. :param name: The scope of the LSTM cell.
def create_recurrent_encoder(input_state, memory_in, sequence_length, name='lstm'): """ Builds a recurrent encoder for either state or observations (LSTM). :param sequence_length: Length of sequence to unroll. :param input_state: The input tensor to the LSTM cell. :param memory_i...
Creates Continuous control actor-critic model. :param h_size: Size of hidden linear layers. :param num_layers: Number of hidden linear layers.
def create_cc_actor_critic(self, h_size, num_layers): """ Creates Continuous control actor-critic model. :param h_size: Size of hidden linear layers. :param num_layers: Number of hidden linear layers. """ hidden_streams = self.create_observation_streams(2, h_size, num_lay...
Creates Discrete control actor-critic model. :param h_size: Size of hidden linear layers. :param num_layers: Number of hidden linear layers.
def create_dc_actor_critic(self, h_size, num_layers): """ Creates Discrete control actor-critic model. :param h_size: Size of hidden linear layers. :param num_layers: Number of hidden linear layers. """ hidden_streams = self.create_observation_streams(1, h_size, num_layer...
Adds experiences to each agent's experience history. :param curr_info: Current AllBrainInfo (Dictionary of all current brains and corresponding BrainInfo). :param next_info: Next AllBrainInfo (Dictionary of all current brains and corresponding BrainInfo). :param take_action_outputs: The outputs ...
def add_experiences(self, curr_info: AllBrainInfo, next_info: AllBrainInfo, take_action_outputs): """ Adds experiences to each agent's experience history. :param curr_info: Current AllBrainInfo (Dictionary of all current brains and corresponding BrainInfo). :param...
Checks agent histories for processing condition, and processes them as necessary. Processing involves calculating value and advantage targets for model updating step. :param current_info: Current AllBrainInfo :param next_info: Next AllBrainInfo
def process_experiences(self, current_info: AllBrainInfo, next_info: AllBrainInfo): """ Checks agent histories for processing condition, and processes them as necessary. Processing involves calculating value and advantage targets for model updating step. :param current_info: Current AllB...
Yield items from any nested iterable; see REF.
def flatten(items,enter=lambda x:isinstance(x, list)): # http://stackoverflow.com/a/40857703 # https://github.com/ctmakro/canton/blob/master/canton/misc.py """Yield items from any nested iterable; see REF.""" for x in items: if enter(x): yield from flatten(x) else: ...
A value in replace_with_strings can be either a single string or a list of strings
def replace_strings_in_list(array_of_strigs, replace_with_strings): "A value in replace_with_strings can be either a single string or a list of strings" potentially_nested_list = [replace_with_strings.get(s) or s for s in array_of_strigs] return list(flatten(potentially_nested_list))
Preserves the order of elements in the list
def remove_duplicates_from_list(array): "Preserves the order of elements in the list" output = [] unique = set() for a in array: if a not in unique: unique.add(a) output.append(a) return output
Convert from NHWC|NCHW => HW
def pool_to_HW(shape, data_frmt): """ Convert from NHWC|NCHW => HW """ if len(shape) != 4: return shape # Not NHWC|NCHW, return as is if data_frmt == 'NCHW': return [shape[2], shape[3]] return [shape[1], shape[2]]
Converts a TensorFlow model into a Barracuda model. :param source_file: The TensorFlow Model :param target_file: The name of the file the converted model will be saved to :param trim_unused_by_output: The regexp to match output nodes to remain in the model. All other unconnected nodes will be removed. :p...
def convert(source_file, target_file, trim_unused_by_output="", verbose=False, compress_f16=False): """ Converts a TensorFlow model into a Barracuda model. :param source_file: The TensorFlow Model :param target_file: The name of the file the converted model will be saved to :param trim_unused_by_out...
Loads demonstration file and uses it to fill training buffer. :param file_path: Location of demonstration file (.demo). :param sequence_length: Length of trajectories to fill buffer. :return:
def demo_to_buffer(file_path, sequence_length): """ Loads demonstration file and uses it to fill training buffer. :param file_path: Location of demonstration file (.demo). :param sequence_length: Length of trajectories to fill buffer. :return: """ brain_params, brain_infos, _ = load_demonstr...
Loads and parses a demonstration file. :param file_path: Location of demonstration file (.demo). :return: BrainParameter and list of BrainInfos containing demonstration data.
def load_demonstration(file_path): """ Loads and parses a demonstration file. :param file_path: Location of demonstration file (.demo). :return: BrainParameter and list of BrainInfos containing demonstration data. """ # First 32 bytes of file dedicated to meta-data. INITIAL_POS = 33 if...
Saves current model to checkpoint folder. :param steps: Current number of steps in training process. :param saver: Tensorflow saver for session.
def _save_model(self, steps=0): """ Saves current model to checkpoint folder. :param steps: Current number of steps in training process. :param saver: Tensorflow saver for session. """ for brain_name in self.trainers.keys(): self.trainers[brain_name].save_mode...
Write all CSV metrics :return:
def _write_training_metrics(self): """ Write all CSV metrics :return: """ for brain_name in self.trainers.keys(): if brain_name in self.trainer_metrics: self.trainers[brain_name].write_training_metrics()
Exports latest saved models to .nn format for Unity embedding.
def _export_graph(self): """ Exports latest saved models to .nn format for Unity embedding. """ for brain_name in self.trainers.keys(): self.trainers[brain_name].export_model()
Initialization of the trainers :param trainer_config: The configurations of the trainers
def initialize_trainers(self, trainer_config: Dict[str, Dict[str, str]]): """ Initialization of the trainers :param trainer_config: The configurations of the trainers """ trainer_parameters_dict = {} for brain_name in self.external_brains: trainer_parameters =...
Resets the environment. Returns: A Data structure corresponding to the initial reset state of the environment.
def _reset_env(self, env: BaseUnityEnvironment): """Resets the environment. Returns: A Data structure corresponding to the initial reset state of the environment. """ if self.meta_curriculum is not None: return env.reset(train_mode=self.fast_simulatio...
Sends a shutdown signal to the unity environment, and closes the socket connection.
def close(self): """ Sends a shutdown signal to the unity environment, and closes the socket connection. """ if self._socket is not None and self._conn is not None: message_input = UnityMessage() message_input.header.status = 400 self._communicator_sen...
float sqrt_var = sqrt(var_data[i]); a_data[i] = bias_data[i] - slope_data[i] * mean_data[i] / sqrt_var; b_data[i] = slope_data[i] / sqrt_var; ... ptr[i] = b * ptr[i] + a;
def fuse_batchnorm_weights(gamma, beta, mean, var, epsilon): # https://github.com/Tencent/ncnn/blob/master/src/layer/batchnorm.cpp """ float sqrt_var = sqrt(var_data[i]); a_data[i] = bias_data[i] - slope_data[i] * mean_data[i] / sqrt_var; b_data[i] = slope_data[i] / sqrt_var; ... ...
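Following the formula quoted above, batch-norm parameters can be folded into a per-channel scale and bias applied as scale * x + bias. A small NumPy sketch (an epsilon term is added for numerical stability, which the quoted snippet omits):

import numpy as np

def fuse_batchnorm(gamma, beta, mean, var, epsilon=1e-5):
    # y = gamma * (x - mean) / sqrt(var + eps) + beta  ==  scale * x + bias
    scale = gamma / np.sqrt(var + epsilon)
    bias = beta - gamma * mean / np.sqrt(var + epsilon)
    return scale, bias

gamma = np.array([1.0, 0.5])
beta = np.array([0.0, 1.0])
mean = np.array([0.2, -0.1])
var = np.array([1.0, 4.0])
scale, bias = fuse_batchnorm(gamma, beta, mean, var)
x = np.array([0.7, 2.0])
# The fused form matches the explicit batch-norm computation.
print(scale * x + bias, gamma * (x - mean) / np.sqrt(var + 1e-5) + beta)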
- Ht = f(Xt*Wi + Ht_1*Ri + Wbi + Rbi)
def rnn(name, input, state, kernel, bias, new_state, number_of_gates = 2): ''' - Ht = f(Xt*Wi + Ht_1*Ri + Wbi + Rbi) ''' nn = Build(name) nn.tanh( nn.mad(kernel=kernel, bias=bias, x=nn.concat(input, state)), out=new_state); return nn.layers;
- zt = f(Xt*Wz + Ht_1*Rz + Wbz + Rbz) - rt = f(Xt*Wr + Ht_1*Rr + Wbr + Rbr) - ht = g(Xt*Wh + (rt . Ht_1)*Rh + Rbh + Wbh) - Ht = (1-zt).ht + zt.Ht_1
def gru(name, input, state, kernel_r, kernel_u, kernel_c, bias_r, bias_u, bias_c, new_state, number_of_gates = 2): ''' - zt = f(Xt*Wz + Ht_1*Rz + Wbz + Rbz) - rt = f(Xt*Wr + Ht_1*Rr + Wbr + Rbr) - ht = g(Xt*Wh + (rt . Ht_1)*Rh + Rbh + Wbh) - Ht = (1-zt).ht + zt.Ht_1 ''' ...
Full: - it = f(Xt*Wi + Ht_1*Ri + Pi . Ct_1 + Wbi + Rbi) - ft = f(Xt*Wf + Ht_1*Rf + Pf . Ct_1 + Wbf + Rbf) - ct = g(Xt*Wc + Ht_1*Rc + Wbc + Rbc) - Ct = ft . Ct_1 + it . ct - ot = f(Xt*Wo + Ht_1*Ro + Po . Ct + Wbo + Rbo) - Ht = ot . h(Ct)
def lstm(name, input, state_c, state_h, kernel_i, kernel_j, kernel_f, kernel_o, bias_i, bias_j, bias_f, bias_o, new_state_c, new_state_h): ''' Full: - it = f(Xt*Wi + Ht_1*Ri + Pi . Ct_1 + Wbi + Rbi) - ft = f(Xt*Wf + Ht_1*Rf + Pf . Ct_1 + Wbf + Rbf) - ct = g(Xt*Wc + Ht_1*Rc + Wbc + Rbc) - Ct = ft . ...
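For reference, a single step of the simpler no-peephole variant of these equations in NumPy (the weight layout and names are illustrative assumptions, not the converter's data format):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, R, b):
    # W: [4, input, hidden], R: [4, hidden, hidden], b: [4, hidden]; gate order i, f, c, o.
    i = sigmoid(x @ W[0] + h_prev @ R[0] + b[0])        # input gate
    f = sigmoid(x @ W[1] + h_prev @ R[1] + b[1])        # forget gate
    c_tilde = np.tanh(x @ W[2] + h_prev @ R[2] + b[2])  # candidate cell state
    c = f * c_prev + i * c_tilde                         # new cell state
    o = sigmoid(x @ W[3] + h_prev @ R[3] + b[3])         # output gate
    h = o * np.tanh(c)                                   # new hidden state
    return h, c

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 3)); h = np.zeros((1, 4)); c = np.zeros((1, 4))
W = rng.normal(size=(4, 3, 4)); R = rng.normal(size=(4, 4, 4)); b = np.zeros((4, 4))
h, c = lstm_step(x, h, c, W, R, b)
print(h.shape, c.shape)  # (1, 4) (1, 4)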
Performs update on model. :param mini_batch: Batch of experiences. :param num_sequences: Number of sequences to process. :return: Results of update.
def update(self, mini_batch, num_sequences): """ Performs update on model. :param mini_batch: Batch of experiences. :param num_sequences: Number of sequences to process. :return: Results of update. """ feed_dict = {self.model.dropout_rate: self.update_rate, ...
Increments the lesson number depending on the progress given. :param measure_val: Measure of progress (either reward or percentage steps completed). :return Whether the lesson was incremented.
def increment_lesson(self, measure_val): """ Increments the lesson number depending on the progress given. :param measure_val: Measure of progress (either reward or percentage steps completed). :return Whether the lesson was incremented. """ if not self.dat...
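The general pattern is a threshold check against the current lesson: if the measured progress exceeds the threshold configured for that lesson, advance to the next one. A hypothetical standalone sketch (class and attribute names are illustrative, not the ML-Agents Curriculum API):

class SimpleCurriculum:
    def __init__(self, thresholds, configs):
        self.thresholds = thresholds  # e.g. [0.3, 0.6]: progress needed to leave each lesson
        self.configs = configs        # reset parameters per lesson
        self.lesson_num = 0

    def increment_lesson(self, measure_val):
        # Advance only while there are lessons left and progress clears the threshold.
        if (self.lesson_num < len(self.thresholds)
                and measure_val > self.thresholds[self.lesson_num]):
            self.lesson_num += 1
            return True
        return False

curr = SimpleCurriculum([0.3, 0.6],
                        [{'wall_height': 1}, {'wall_height': 3}, {'wall_height': 5}])
assert curr.increment_lesson(0.5) and curr.lesson_num == 1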
Returns reset parameters which correspond to the lesson. :param lesson: The lesson you want to get the config of. If None, the current lesson is returned. :return: The configuration of the reset parameters.
def get_config(self, lesson=None): """ Returns reset parameters which correspond to the lesson. :param lesson: The lesson you want to get the config of. If None, the current lesson is returned. :return: The configuration of the reset parameters. """ if not ...
Computes generalized advantage estimate for use in updating policy. :param rewards: list of rewards for time-steps t to T. :param value_next: Value estimate for time-step T+1. :param value_estimates: list of value estimates for time-steps t to T. :param gamma: Discount factor. :param lambd: GAE weig...
def get_gae(rewards, value_estimates, value_next=0.0, gamma=0.99, lambd=0.95): """ Computes generalized advantage estimate for use in updating policy. :param rewards: list of rewards for time-steps t to T. :param value_next: Value estimate for time-step T+1. :param value_estimates: list of value est...
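One standard way to compute this is to form the TD errors delta_t = r_t + gamma * V(s_{t+1}) - V(s_t) and accumulate them backwards in time with weight gamma * lambd. A NumPy sketch consistent with the parameters above (not necessarily the library's exact implementation):

import numpy as np

def get_gae(rewards, value_estimates, value_next=0.0, gamma=0.99, lambd=0.95):
    values = np.append(value_estimates, value_next)
    # One-step TD errors for each time-step t.
    deltas = rewards + gamma * values[1:] - values[:-1]
    advantages = np.zeros_like(deltas)
    running = 0.0
    # Accumulate TD errors backwards with decay gamma * lambd.
    for t in reversed(range(len(deltas))):
        running = deltas[t] + gamma * lambd * running
        advantages[t] = running
    return advantages

rewards = np.array([0.0, 0.0, 1.0])
values = np.array([0.5, 0.6, 0.7])
print(get_gae(rewards, values, value_next=0.0))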
Increments the step count of the trainer and updates the last reward.
def increment_step_and_update_last_reward(self): """ Increments the step count of the trainer and updates the last reward """ if len(self.stats['Environment/Cumulative Reward']) > 0: mean_reward = np.mean(self.stats['Environment/Cumulative Reward']) self.policy.upd...
Constructs a BrainInfo which contains the most recent previous experiences for all agents which correspond to the agents in a provided next_info. :BrainInfo next_info: A t+1 BrainInfo. :return: curr_info: Reconstructed BrainInfo to match agents of next_info.
def construct_curr_info(self, next_info: BrainInfo) -> BrainInfo: """ Constructs a BrainInfo which contains the most recent previous experiences for all agents which correspond to the agents in a provided next_info. :BrainInfo next_info: A t+1 BrainInfo. :return: curr_info: ...
Adds experiences to each agent's experience history. :param curr_all_info: Dictionary of all current brains and corresponding BrainInfo. :param next_all_info: Dictionary of all current brains and corresponding BrainInfo. :param take_action_outputs: The outputs of the Policy's get_action method.
def add_experiences(self, curr_all_info: AllBrainInfo, next_all_info: AllBrainInfo, take_action_outputs): """ Adds experiences to each agent's experience history. :param curr_all_info: Dictionary of all current brains and corresponding BrainInfo. :param next_all_info: Dictionary of all c...
A signal that the Episode has ended. The buffer must be reset. Gets called only when the academy resets.
def end_episode(self): """ A signal that the Episode has ended. The buffer must be reset. Gets called only when the academy resets. """ self.training_buffer.reset_local_buffers() for agent_id in self.cumulative_rewards: self.cumulative_rewards[agent_id] = 0 ...
Returns whether or not the trainer has enough elements to run update model :return: A boolean corresponding to whether or not update_model() can be run
def is_ready_update(self): """ Returns whether or not the trainer has enough elements to run update model :return: A boolean corresponding to whether or not update_model() can be run """ size_of_buffer = len(self.training_buffer.update_buffer['actions']) return size_of_bu...
Uses demonstration_buffer to update the policy.
def update_policy(self): """ Uses demonstration_buffer to update the policy. """ self.trainer_metrics.start_policy_update_timer( number_experiences=len(self.training_buffer.update_buffer['actions']), mean_return=float(np.mean(self.cumulative_returns_since_policy_u...
Resets the state of the environment and returns an initial observation. In the case of multi-agent environments, this is a list. Returns: observation (object/list): the initial observation of the space.
def reset(self): """Resets the state of the environment and returns an initial observation. In the case of multi-agent environments, this is a list. Returns: observation (object/list): the initial observation of the space. """ info = self._env.reset()[self.brain_name]...
Run one timestep of the environment's dynamics. When end of episode is reached, you are responsible for calling `reset()` to reset this environment's state. Accepts an action and returns a tuple (observation, reward, done, info). In the case of multi-agent environments, these are lists. ...
def step(self, action): """Run one timestep of the environment's dynamics. When end of episode is reached, you are responsible for calling `reset()` to reset this environment's state. Accepts an action and returns a tuple (observation, reward, done, info). In the case of multi-ag...
Creates the GRPC server.
def create_server(self): """ Creates the GRPC server. """ self.check_port(self.port) try: # Establish communication grpc self.server = grpc.server(ThreadPoolExecutor(max_workers=10)) self.unity_to_external = UnityToExternalServicerImplementati...
Creates a Dict that maps discrete actions (scalars) to branched actions (lists). Each key in the Dict maps to one unique set of branched actions, and each value contains the List of branched actions.
def _create_lookup(self, branched_action_space): """ Creates a Dict that maps discrete actions (scalars) to branched actions (lists). Each key in the Dict maps to one unique set of branched actions, and each value contains the List of branched actions. """ possible_vals =...
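The mapping can be built as the Cartesian product of each branch's possible values, indexed by the flat scalar action. A hypothetical standalone sketch:

from itertools import product

def create_lookup(branched_action_space):
    # branched_action_space: e.g. [3, 2] -> two branches with 3 and 2 discrete options.
    possible_vals = [range(n) for n in branched_action_space]
    all_actions = list(product(*possible_vals))
    # Scalar action i maps to one unique combination of branched actions.
    return {i: list(action) for i, action in enumerate(all_actions)}

print(create_lookup([3, 2]))
# {0: [0, 0], 1: [0, 1], 2: [1, 0], 3: [1, 1], 4: [2, 0], 5: [2, 1]}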
Attempts to bind to the requested communicator port, checking if it is already in use.
def check_port(self, port): """ Attempts to bind to the requested communicator port, checking if it is already in use. """ s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) try: s.bind(("localhost", port)) except socket.error: raise UnityWorker...
Sends a shutdown signal to the unity environment, and closes the grpc connection.
def close(self): """ Sends a shutdown signal to the unity environment, and closes the grpc connection. """ if self.is_open: message_input = UnityMessage() message_input.header.status = 400 self.unity_to_external.parent_conn.send(message_input) ...
Converts byte array observation image into numpy array, re-sizes it, and optionally converts it to grey scale :param gray_scale: Whether to convert the image to grayscale. :param image_bytes: input byte array corresponding to image :return: processed numpy array of observation from envir...
def process_pixels(image_bytes, gray_scale): """ Converts byte array observation image into numpy array, re-sizes it, and optionally converts it to grey scale :param gray_scale: Whether to convert the image to grayscale. :param image_bytes: input byte array corresponding to image...
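A minimal sketch of the same idea using Pillow as an assumed dependency: decode the bytes, scale values to [0, 1], and optionally average the channels for grayscale; the actual implementation may differ.

import io
import numpy as np
from PIL import Image

def process_pixels(image_bytes, gray_scale):
    image = Image.open(io.BytesIO(image_bytes))
    # Convert to float array scaled to [0, 1].
    s = np.array(image, dtype=np.float32) / 255.0
    if gray_scale:
        # Collapse the RGB channels to a single luminance-like channel.
        s = np.mean(s, axis=2, keepdims=True)
    return s  # shape (height, width, 1) when gray_scale is True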
Converts list of agent infos to BrainInfo.
def from_agent_proto(agent_info_list, brain_params): """ Converts list of agent infos to BrainInfo. """ vis_obs = [] for i in range(brain_params.number_visual_observations): obs = [BrainInfo.process_pixels(x.visual_observations[i], ...
Converts brain parameter proto to BrainParameter object. :param brain_param_proto: protobuf object. :return: BrainParameter object.
def from_proto(brain_param_proto): """ Converts brain parameter proto to BrainParameter object. :param brain_param_proto: protobuf object. :return: BrainParameter object. """ resolution = [{ "height": x.height, "width": x.width, "blackA...
Creates a new, blank dashboard and redirects to it in edit mode
def new(self): """Creates a new, blank dashboard and redirects to it in edit mode""" new_dashboard = models.Dashboard( dashboard_title='[ untitled dashboard ]', owners=[g.user], ) db.session.add(new_dashboard) db.session.commit() return redirect(f'...
List all tags a given object has.
def get(self, object_type, object_id): """List all tags a given object has.""" if object_id == 0: return json_success(json.dumps([])) query = db.session.query(TaggedObject).filter(and_( TaggedObject.object_type == object_type, TaggedObject.object_id == object...
Add new tags to an object.
def post(self, object_type, object_id): """Add new tags to an object.""" if object_id == 0: return Response(status=404) tagged_objects = [] for name in request.get_json(force=True): if ':' in name: type_name = name.split(':', 1)[0] ...
Remove tags from an object.
def delete(self, object_type, object_id): """Remove tags from an object.""" tag_names = request.get_json(force=True) if not tag_names: return Response(status=403) db.session.query(TaggedObject).filter(and_( TaggedObject.object_type == object_type, Tag...
Imports the datasource from the object to the database. Metrics, columns and the datasource will be overridden if they exist. This function can be used to import/export dashboards between multiple superset instances. Audit metadata isn't copied over.
def import_datasource( session, i_datasource, lookup_database, lookup_datasource, import_time): """Imports the datasource from the object to the database. Metrics, columns and the datasource will be overridden if they exist. This function can be used to import/export das...
Run migrations in 'online' mode. In this scenario we need to create an Engine and associate a connection with the context.
def run_migrations_online(): """Run migrations in 'online' mode. In this scenario we need to create an Engine and associate a connection with the context. """ # this callback is used to prevent an auto-migration from being generated # when there are no changes to the schema # reference: h...
Returns a pandas dataframe based on the query object
def get_df(self, query_obj=None): """Returns a pandas dataframe based on the query object""" if not query_obj: query_obj = self.query_obj() if not query_obj: return None self.error_msg = '' timestamp_format = None if self.datasource.type == 'tabl...
Building a query object
def query_obj(self): """Building a query object""" form_data = self.form_data self.process_query_filters() gb = form_data.get('groupby') or [] metrics = self.all_metrics or [] columns = form_data.get('columns') or [] groupby = [] for o in gb + columns: ...
The cache key is made out of the key/values in `query_obj`, plus any other key/values in `extra`. We remove datetime bounds that are hard values, and replace them with the user-provided inputs to bounds, which may be time-relative (as in "5 days ago" or "now"). The `extra` argum...
def cache_key(self, query_obj, **extra): """ The cache key is made out of the key/values in `query_obj`, plus any other key/values in `extra`. We remove datetime bounds that are hard values, and replace them with the user-provided inputs to bounds, which may be time-relative (as ...
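The general pattern (a simplified sketch, not Superset's exact implementation) is to drop or replace the volatile datetime bounds, merge in the extra key/values, and hash a stable serialization of the result:

import hashlib
import json

def cache_key(query_obj, **extra):
    cache_dict = dict(query_obj)
    # Drop hard datetime bounds so that a relative range like "last 5 days"
    # keeps the same key from one minute to the next.
    cache_dict.pop('from_dttm', None)
    cache_dict.pop('to_dttm', None)
    cache_dict.update(extra)
    raw = json.dumps(cache_dict, sort_keys=True, default=str)
    return hashlib.md5(raw.encode('utf-8')).hexdigest()

print(cache_key({'metrics': ['count'], 'from_dttm': '2019-01-01'},
                time_range='Last week'))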
This is the data object serialized to the js layer
def data(self): """This is the data object serialized to the js layer""" content = { 'form_data': self.form_data, 'token': self.token, 'viz_name': self.viz_type, 'filter_select_enabled': self.datasource.filter_select_enabled, } return conte...
Returns the query object for this visualization
def query_obj(self): """Returns the query object for this visualization""" d = super().query_obj() d['row_limit'] = self.form_data.get( 'row_limit', int(config.get('VIZ_ROW_LIMIT'))) numeric_columns = self.form_data.get('all_columns_x') if numeric_columns is None: ...
Returns the chart data
def get_data(self, df): """Returns the chart data""" chart_data = [] if len(self.groupby) > 0: groups = df.groupby(self.groupby) else: groups = [((), df)] for keys, data in groups: chart_data.extend([{ 'key': self.labelify(keys,...
Compute the partition at each `level` from the dataframe.
def levels_for(self, time_op, groups, df): """ Compute the partition at each `level` from the dataframe. """ levels = {} for i in range(0, len(groups) + 1): agg_df = df.groupby(groups[:i]) if i else df levels[i] = ( agg_df.mean() if time_op...
Nest values at each level on the back-end with access and setting, instead of summing from the bottom.
def nest_values(self, levels, level=0, metric=None, dims=()): """ Nest values at each level on the back-end with access and setting, instead of summing from the bottom. """ if not level: return [{ 'name': m, 'val': levels[0][m], ...
Data representation of the datasource sent to the frontend
def short_data(self): """Data representation of the datasource sent to the frontend""" return { 'edit_url': self.url, 'id': self.id, 'uid': self.uid, 'schema': self.schema, 'name': self.name, 'type': self.type, 'connecti...
Data representation of the datasource sent to the frontend
def data(self): """Data representation of the datasource sent to the frontend""" order_by_choices = [] # self.column_names return sorted column_names for s in self.column_names: s = str(s or '') order_by_choices.append((json.dumps([s, True]), s + ' [asc]')) ...
Update ORM one-to-many list from object list. Used for syncing metrics and columns using the same code.
def get_fk_many_from_list( self, object_list, fkmany, fkmany_class, key_attr): """Update ORM one-to-many list from object list. Used for syncing metrics and columns using the same code""" object_dict = {o.get(key_attr): o for o in object_list} object_keys = [o.get(key_attr) ...
Update datasource from a data structure The UI's table editor crafts a complex data structure that contains most of the datasource's properties as well as an array of metrics and columns objects. This method receives the object from the UI and syncs the datasource to match it. S...
def update_from_object(self, obj): """Update datasource from a data structure The UI's table editor crafts a complex data structure that contains most of the datasource's properties as well as an array of metrics and columns objects. This method receives the object from the UI a...
Returns a pandas dataframe based on the query object
def get_query_result(self, query_object): """Returns a pandas dataframe based on the query object""" # Here, we assume that all the queries will use the same datasource, which is # a valid assumption for the current setting. In the long term, we may or may not # support multiple queries fro...
Converting metrics to numeric when pandas.read_sql cannot
def df_metrics_to_num(self, df, query_object): """Converting metrics to numeric when pandas.read_sql cannot""" metrics = [metric for metric in query_object.metrics] for col, dtype in df.dtypes.items(): if dtype.type == np.object_ and col in metrics: df[col] = pd.to_nu...
Returns a payload of metadata and data
def get_single_payload(self, query_obj): """Returns a payload of metadata and data""" payload = self.get_df_payload(query_obj) df = payload.get('df') status = payload.get('status') if status != utils.QueryStatus.FAILED: if df is not None and df.empty: ...
Handles caching around the df payload retrieval
def get_df_payload(self, query_obj, **kwargs): """Handles caching around the df payload retrieval""" cache_key = query_obj.cache_key( datasource=self.datasource.uid, **kwargs) if query_obj else None logging.info('Cache key: {}'.format(cache_key)) is_loaded = False stac...
Data used to render slice in templates
def data(self): """Data used to render slice in templates""" d = {} self.token = '' try: d = self.viz.data self.token = d.get('token') except Exception as e: logging.exception(e) d['error'] = str(e) return { 'dat...
Creates :py:class:viz.BaseViz object from the url_params_multidict. :return: object of the 'viz_type' type that is taken from the url_params_multidict or self.params. :rtype: :py:class:viz.BaseViz
def get_viz(self, force=False): """Creates :py:class:viz.BaseViz object from the url_params_multidict. :return: object of the 'viz_type' type that is taken from the url_params_multidict or self.params. :rtype: :py:class:viz.BaseViz """ slice_params = json.loads(self....
Inserts or overrides slc in the database. remote_id and import_time fields in params_dict are set to track the slice origin and ensure correct overrides for multiple imports. Slice.perm is used to find the datasources and connect them. :param Slice slc_to_import: Slice object to import...
def import_obj(cls, slc_to_import, slc_to_override, import_time=None): """Inserts or overrides slc in the database. remote_id and import_time fields in params_dict are set to track the slice origin and ensure correct overrides for multiple imports. Slice.perm is used to find the datasou...
Imports the dashboard from the object to the database. Once dashboard is imported, json_metadata field is extended and stores remote_id and import_time. It helps to decide if the dashboard has to be overridden or just copied over. Slices that belong to this dashboard will be wired t...
def import_obj(cls, dashboard_to_import, import_time=None): """Imports the dashboard from the object to the database. Once dashboard is imported, json_metadata field is extended and stores remote_id and import_time. It helps to decide if the dashboard has to be overridden or just cop...
Get the effective user, especially during impersonation. :param url: SQL Alchemy URL object :param user_name: Default username :return: The effective username
def get_effective_user(self, url, user_name=None): """ Get the effective user, especially during impersonation. :param url: SQL Alchemy URL object :param user_name: Default username :return: The effective username """ effective_username = None if self.impe...
Generates a ``select *`` statement in the proper dialect
def select_star( self, table_name, schema=None, limit=100, show_cols=False, indent=True, latest_partition=False, cols=None): """Generates a ``select *`` statement in the proper dialect""" eng = self.get_sqla_engine( schema=schema, source=utils.sources.get('sql_lab', N...
Parameters need to be passed as keyword arguments.
def all_table_names_in_database(self, cache=False, cache_timeout=None, force=False): """Parameters need to be passed as keyword arguments.""" if not self.allow_multi_schema_metadata_fetch: return [] return self.db_engine_spec.fetch_result_sets(self...
Parameters need to be passed as keyword arguments. For unused parameters, they are referenced in cache_util.memoized_func decorator. :param schema: schema name :type schema: str :param cache: whether cache is enabled for the function :type cache: bool :param cac...
def all_table_names_in_schema(self, schema, cache=False, cache_timeout=None, force=False): """Parameters need to be passed as keyword arguments. For unused parameters, they are referenced in cache_util.memoized_func decorator. :param schema: schema nam...
Parameters need to be passed as keyword arguments. For unused parameters, they are referenced in cache_util.memoized_func decorator. :param schema: schema name :type schema: str :param cache: whether cache is enabled for the function :type cache: bool :param cac...
def all_view_names_in_schema(self, schema, cache=False, cache_timeout=None, force=False): """Parameters need to be passed as keyword arguments. For unused parameters, they are referenced in cache_util.memoized_func decorator. :param schema: schema name ...
Parameters need to be passed as keyword arguments. For unused parameters, they are referenced in cache_util.memoized_func decorator. :param cache: whether cache is enabled for the function :type cache: bool :param cache_timeout: timeout in seconds for the cache :type ca...
def all_schema_names(self, cache=False, cache_timeout=None, force=False): """Parameters need to be passed as keyword arguments. For unused parameters, they are referenced in cache_util.memoized_func decorator. :param cache: whether cache is enabled for the function :type cache:...
Allows looking up a grain by either label or duration, for backward compatibility.
def grains_dict(self): """Allows looking up a grain by either label or duration, for backward compatibility""" d = {grain.duration: grain for grain in self.grains()} d.update({grain.label: grain for grain in self.grains()}) return d