Schema (one record per file): hexsha | size | ext | lang | max_stars_repo_{path, name, head_hexsha, licenses} | max_stars_count | max_stars_repo_stars_event_{min, max}_datetime | the corresponding max_issues_* and max_forks_* fields | content | avg_line_length | max_line_length | alphanum_fraction | qsc_code_*_quality_signal and qsc_codepython_*_quality_signal columns (n-gram duplication fractions, character-class and line-shape statistics, Python AST/import/print ratios, in float and int variants) | effective | hits
f509afecc5073874e0a289659f496d3e2b4334c5 | 1,104 bytes | py | Python | tests/test_label_maximum_extension_map.py | elsandal/pyclesperanto_prototype @ 7bda828813b86b44b63d73d5e8f466d9769cded1 | license: BSD-3-Clause | 64 stars (2020-03-18 to 2022-03-31) | 148 issues (2020-05-14 to 2022-03-26) | 16 forks (2020-05-31 to 2022-03-23)

import pyclesperanto_prototype as cle
import numpy as np


def test_label_label_maximum_extension_map_2d():
labels = cle.push(np.asarray([
[1, 1, 2],
[1, 0, 0],
[3, 3, 0]
]))
reference = cle.push(np.asarray([
[0.74535596, 0.74535596, 0],
[0.74535596, 0, 0],
[0.5, 0.5, 0]
]
))
result = cle.label_maximum_extension_map(labels)
a = cle.pull(result)
b = cle.pull(reference)
print(a)
print(b)
    assert np.allclose(a, b, 0.001)


def test_label_label_maximum_extension_map_3d():
labels = cle.push(np.asarray([
[
[1, 1, 2],
], [
[1, 0, 0],
], [
[3, 3, 0]
]
]))
reference = cle.push(np.asarray([
[
[0.74535596, 0.74535596, 0],
], [
[0.74535596, 0, 0],
], [
[0.5, 0.5, 0]
]
]
))
result = cle.label_maximum_extension_map(labels)
a = cle.pull(result)
b = cle.pull(reference)
print(a)
print(b)
    assert np.allclose(a, b, 0.001)
[avg_line_length 16.73 | max_line_length 52 | alphanum_fraction 0.483 | 65 lines | hits 8 | remaining quality-signal columns omitted]
f567d64ec46f5d65ead03fc1b3e27aded528c123 | 655 bytes | py | Python | user/vistas/widgets/widget-marker.py | ZerpaTechnology/occoa @ a8c0bd2657bc058801a883109c0ec0d608d04ccc | license: Apache-2.0 | stars/issues/forks: null

#!/usr/bin/python
# -*- coding: utf-8 -*-
print ('''<div class="bg-ubuntu_jet d-inline-block text-center">'''
       + '''<img src="''' + str(data['base_url'] + 'static/imgs/marker/institucion-default.png') + '''">'''
       + '''<h3 class="white">Tiempo restante: </h3><span class="ubuntu_green">1:00</span>'''
       + '''<h3 class="white">Marcador:</h3><div> <h4 class="white">Partido</h4> '''
       + '''<img src="''' + str(data['base_url'] + 'static/imgs/marker/partido-default.png') + '''" class="shauto-5">'''
       + '''<span class="white"> Votos: 0</span> <h4 class="white">Partido</h4> '''
       + '''<img src="''' + str(data['base_url'] + 'static/imgs/marker/partido-default.png') + '''" class="shauto-5">'''
       + '''<span class="white"> Votos: 0</span></div></div>''')

[avg_line_length 218.33 | max_line_length 613 | alphanum_fraction 0.666 | 3 lines | hits 7 | remaining quality-signal columns omitted]
1936a5712a4d4b8d19eddb85360a3e7a4305626f | 37 bytes | py | Python | annotmpl/__init__.py | mwaskom/annotmpl @ f2a2d213bc0bb95bbff6bc90b4eeae9cc069968e | license: BSD-3-Clause | stars/issues/forks: null

from api import *  # noqa: F401,F403

[avg_line_length 18.50 | max_line_length 36 | alphanum_fraction 0.676 | 1 line | hits 7 | remaining quality-signal columns omitted]
193ccce92a9589bf0c167414dc73fdd932ed83f4 | 60,764 bytes | py | Python | stable_baselines/mdal/adversary.py | shanlior/OAL @ 39c9eb24f64a27d3da09e92b6da9bf60326baabe | license: MIT | 3 stars (2021-04-08 to 2022-03-11) | issues/forks: null

"""
Reference: https://github.com/openai/imitation
I follow the architecture from the official repository
"""
import gym
import tensorflow as tf
import numpy as np
from stable_baselines.common.mpi_running_mean_std import RunningMeanStd as MpiRunningMeanStd
from stable_baselines.common.running_mean_std import RunningMeanStd, RunningMinMax
from stable_baselines.common import tf_util as tf_util
from stable_baselines.common import zipsame


def logsigmoid(input_tensor):
"""
Equivalent to tf.log(tf.sigmoid(a))
:param input_tensor: (tf.Tensor)
:return: (tf.Tensor)
"""
return -tf.nn.softplus(-input_tensor)


def logit_bernoulli_entropy(logits):
"""
Reference:
https://github.com/openai/imitation/blob/99fbccf3e060b6e6c739bdf209758620fcdefd3c/policyopt/thutil.py#L48-L51
:param logits: (tf.Tensor) the logits
:return: (tf.Tensor) the Bernoulli entropy
"""
ent = (1. - tf.nn.sigmoid(logits)) * logits - logsigmoid(logits)
return ent
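The two helpers above rest on standard identities: log σ(x) = −softplus(−x), and the entropy of a Bernoulli(σ(l)) variable can be written directly from the logit as (1 − σ(l))·l − log σ(l). A small NumPy check of both, independent of TensorFlow (the names here are illustrative re-implementations, not part of this module):

```python
import numpy as np

def logsigmoid(x):
    # log(sigmoid(x)) rewritten as -softplus(-x); numerically stable for large |x|
    return -np.logaddexp(0.0, -x)

def logit_bernoulli_entropy(logits):
    # entropy of Bernoulli(sigmoid(l)), expressed directly in terms of the logit l
    sig = 1.0 / (1.0 + np.exp(-logits))
    return (1.0 - sig) * logits - logsigmoid(logits)

x = np.array([-2.0, 0.0, 3.0])
p = 1.0 / (1.0 + np.exp(-x))
direct_entropy = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))
assert np.allclose(logsigmoid(x), np.log(p))
assert np.allclose(logit_bernoulli_entropy(x), direct_entropy)
```

At l = 0 both forms give the maximum entropy log 2, which is a quick sanity check when wiring this into a loss.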


class TabularAdversary(object):
def __init__(self, observation_space, action_space, hidden_size,
entcoeff=0.00, scope="adversary", normalize=True, expert_features=None,
exploration_bonus=False, bonus_coef=0.01, t_c=0.1):
"""
Reward regression from observations and transitions
:param observation_space: (gym.spaces)
:param action_space: (gym.spaces)
:param hidden_size: ([int]) the hidden dimension for the MLP
:param entcoeff: (float) the entropy loss weight
:param scope: (str) tensorflow variable scope
:param normalize: (bool) Whether to normalize the reward or not
"""
# TODO: support images properly (using a CNN)
self.scope = scope
self.observation_shape = observation_space.shape
self.actions_shape = action_space.shape
if isinstance(action_space, gym.spaces.Box):
# Continuous action space
self.discrete_actions = False
self.n_actions = action_space.shape[0]
elif isinstance(action_space, gym.spaces.Discrete):
self.n_actions = action_space.n
self.discrete_actions = True
else:
raise ValueError('Action space not supported: {}'.format(action_space))
self.hidden_size = hidden_size
self.normalize = normalize
self.obs_rms = None
self.expert_features = expert_features
self.reward = expert_features
normalization = np.linalg.norm(self.reward)
self.norm_factor = np.sqrt(float(self.observation_shape[0]))
if normalization > 1:
self.reward = self.reward / (self.norm_factor * normalization)
self.exploration_bonus = exploration_bonus
self.t_c = t_c
self.bonus_coef = bonus_coef
if self.exploration_bonus:
self.covariance_lambda = np.identity(self.observation_shape[0])
else:
self.covariance_lambda = None

    def update_reward(self, features):
t_c = self.t_c
# self.reward = (1-t_c) * self.reward + t_c * (self.expert_features - features)
self.reward = self.reward + t_c * (self.expert_features - features) / self.norm_factor
normalization = np.linalg.norm(self.reward)
if normalization > 1:
self.reward = self.reward / normalization

    def get_reward(self, observation):
"""
Predict the reward using the observation and action
:param obs: (tf.Tensor or np.ndarray) the observation
:param actions: (tf.Tensor or np.ndarray) the action
:return: (np.ndarray) the reward
"""
if self.exploration_bonus:
self.covariance_lambda = self.covariance_lambda \
+ np.matmul(np.expand_dims(observation, axis=1), np.expand_dims(observation, axis=0))
inverse_covariance = np.linalg.inv(self.covariance_lambda)
reward = np.matmul(observation, np.reshape(self.reward, (self.reward.shape[0], 1))).squeeze()
bonus = np.sqrt(np.matmul(np.matmul(observation,inverse_covariance), observation))
return reward + self.bonus_coef * bonus
else:
reward = np.matmul(observation, np.reshape(self.reward, (self.reward.shape[0], 1))).squeeze()
return reward
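A minimal NumPy sketch of what `TabularAdversary` does, under the assumption that random vectors stand in for real expert/policy feature expectations: the reward vector is nudged toward the expert features (step `t_c`, scaled by √d, projected back into the unit ball), and the predicted reward is linear in the observation plus an elliptical, LinUCB-style exploration bonus from the running design matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                   # feature dimension (illustrative)
expert_features = rng.normal(size=d)    # stand-in for the expert feature expectation
t_c, bonus_coef = 0.1, 0.01             # step size and bonus weight, as in the defaults

# initialise the reward vector from the expert features, scaled into the unit ball
reward = expert_features.copy()
norm = np.linalg.norm(reward)
if norm > 1:
    reward = reward / (np.sqrt(d) * norm)

def update_reward(reward, policy_features):
    # reward <- reward + t_c * (phi_expert - phi_policy) / sqrt(d), then renormalise
    reward = reward + t_c * (expert_features - policy_features) / np.sqrt(d)
    n = np.linalg.norm(reward)
    return reward / n if n > 1 else reward

cov = np.eye(d)  # covariance_lambda, the running design matrix

def get_reward(obs, cov):
    # linear reward plus an elliptical (LinUCB-style) exploration bonus
    cov = cov + np.outer(obs, obs)
    bonus = np.sqrt(obs @ np.linalg.inv(cov) @ obs)
    return float(obs @ reward) + bonus_coef * bonus, cov

reward = update_reward(reward, rng.normal(size=d))
r, cov = get_reward(rng.normal(size=d), cov)
assert np.linalg.norm(reward) <= 1.0 + 1e-9
```

The projection step keeps ‖reward‖ ≤ 1, so the per-step reward stays bounded regardless of how far the policy drifts from the expert.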


class TabularAdversaryTF(object):
def __init__(self, sess, observation_space, action_space, hidden_size,
entcoeff=0.00, scope="adversary", normalize=True, expert_features=None,
exploration_bonus=False, is_action_features=True, bonus_coef=0.01, t_c=0.1):
"""
Reward regression from observations and transitions
:param observation_space: (gym.spaces)
:param action_space: (gym.spaces)
:param hidden_size: ([int]) the hidden dimension for the MLP
:param entcoeff: (float) the entropy loss weight
:param scope: (str) tensorflow variable scope
:param normalize: (bool) Whether to normalize the reward or not
"""
# TODO: support images properly (using a CNN)
self.scope = scope
self.sess = sess
self.observation_shape = observation_space.shape
self.actions_shape = action_space.shape
self.is_action_features = is_action_features
if isinstance(action_space, gym.spaces.Box):
# Continuous action space
self.discrete_actions = False
self.n_actions = action_space.shape[0]
elif isinstance(action_space, gym.spaces.Discrete):
self.n_actions = action_space.n
self.discrete_actions = True
else:
raise ValueError('Action space not supported: {}'.format(action_space))
if self.is_action_features:
self.n_features = self.observation_shape[0] + self.n_actions
else:
self.n_features = self.observation_shape[0]
expert_features = expert_features[:self.n_features]
self.hidden_size = hidden_size
self.normalize = normalize
self.obs_rms = None
self.expert_features = tf.constant(expert_features, dtype=tf.float32)
self.norm_factor = tf.sqrt(float(self.n_features))
# self.normalization = tf.square(float(np.linalg.norm(expert_features)))
self.normalization = np.linalg.norm(expert_features)
expert_normalization = np.linalg.norm(expert_features)
# if expert_normalization > 1:
# self.reward_vec = tf.Variable(expert_features / (self.norm_factor * expert_normalization), dtype=tf.float32)
self.reward_vec = tf.Variable(expert_features, dtype=tf.float32)
# else:
# self.reward_vec = tf.Variable(expert_features / self.norm_factor)
# self.reward_vec = tf.Variable(expert_features, dtype=tf.float32)
# self.reward_vec = tf.Variable(expert_features / self.normalization, dtype=tf.float32)
# if normalization > 1:
# self.reward_vec = tf.Variable(expert_features / normalization, dtype=tf.float32)
# else:
# self.reward_vec = tf.Variable(expert_features, dtype=tf.float32)
#
self.exploration_bonus = exploration_bonus
self.t_c = t_c
self.bonus_coef = bonus_coef
if self.exploration_bonus:
self.covariance_lambda = tf.Variable(tf.eye(self.n_features), dtype=tf.float32)
self.inverse_covariance = tf.eye(self.n_features)
else:
self.covariance_lambda = None
self.inverse_covariance = None
# Placeholders
self.features_ph = tf.placeholder(tf.float32, (None,) + (self.n_features, ),
name="observations_ph")
self.successor_features_ph = tf.placeholder(tf.float32, (self.n_features, ),
name="successor_features_ph")
# Build graph
with tf.variable_scope(self.scope, reuse=False):
if self.normalize:
with tf.variable_scope("obfilter"):
self.obs_rms = RunningMeanStd(shape=self.n_features)
# self.obs_rms = RunningMinMax(shape=self.observation_shape)
obs_scaled = (tf.cast(self.features_ph, tf.float32) - self.obs_rms.mean)\
/ tf.cast(tf.sqrt(self.obs_rms.var), tf.float32)
reward_vec_scaled = (tf.cast(self.reward_vec, tf.float32) - self.obs_rms.mean)\
/ (tf.cast(tf.sqrt(self.obs_rms.var), tf.float32))
# obs_scaled = (tf.cast(self.features_ph, tf.float32)) / tf.cast(self.obs_rms.scale, tf.float32)
obs = obs_scaled
# reward_vec_scaled = (tf.cast(self.reward_vec, tf.float32)) / tf.cast(self.obs_rms.scale,
# tf.float32)
reward_vec = reward_vec_scaled / tf.norm(reward_vec_scaled)
else:
obs = self.features_ph
reward_vec = self.reward_vec
if self.exploration_bonus:
self.new_covariance_lambda = self.covariance_lambda \
+ tf.reduce_sum(
tf.matmul(tf.expand_dims(tf.cast(self.features_ph, tf.float32), axis=2),
tf.expand_dims(tf.cast(self.features_ph, tf.float32), axis=1)), axis=0)
self.update_covariance_op = tf.assign(self.covariance_lambda, self.new_covariance_lambda)
bonus = tf.squeeze(tf.sqrt(tf.matmul(tf.matmul(tf.expand_dims(obs, axis=1), self.inverse_covariance),
tf.expand_dims(obs, axis=2))))
reward = tf.squeeze(tf.matmul(obs, tf.expand_dims(reward_vec, 1)))
self.reward_op = reward + self.bonus_coef * bonus
else:
self.reward_op = tf.squeeze(tf.matmul(obs, tf.expand_dims(reward_vec, 1)))
# Update reward
self.new_reward_vec = self.reward_vec + self.t_c * (self.expert_features - self.successor_features_ph)
# self.new_reward_vec = self.reward_vec\
# + self.t_c * (self.expert_features - self.successor_features_ph)\
# / (self.normalization * self.norm_factor)
# normalization = tf.norm(self.new_reward_vec) * self.normalization
# normalization = tf.norm(self.new_reward_vec)
# self.new_reward_vec = tf.cond(normalization > 1.0,
# true_fn=lambda: self.new_reward_vec / normalization,
# false_fn=lambda: self.new_reward_vec)
# self.new_reward_vec = self.new_reward_vec / normalization
# reward_vec_unnormalized = self.reward_vec + self.t_c * (self.expert_features - self.successor_features_ph)
# reward_vec_scaled = (tf.cast(reward_vec_unnormalized, tf.float32)) / tf.cast(self.obs_rms.scale, tf.float32)
# self.new_reward_vec = reward_vec_scaled / tf.norm(reward_vec_scaled)
self.update_reward_op = tf.assign(self.reward_vec, self.new_reward_vec)

    def update_reward(self, successor_features):
#
# sess = tf.get_default_session()
# if len(features.shape) == 1:
# features = np.expand_dims(features, 0)
if not self.is_action_features:
successor_features = successor_features[:self.observation_shape[0]]
feed_dict = {self.successor_features_ph: successor_features}
if self.exploration_bonus:
self.inverse_covariance = tf.linalg.inv(self.covariance_lambda)
self.sess.run(self.update_reward_op, feed_dict)

    def get_reward(self, obs, action=None):
"""
Predict the reward using the observation and action
:param obs: (tf.Tensor or np.ndarray) the observation
:param actions: (tf.Tensor or np.ndarray) the action
:return: (np.ndarray) the reward
"""
# sess = tf.get_default_session()
        if len(obs.shape) == 1:
            obs = np.expand_dims(obs, 0)
        # action defaults to None, so guard before touching its shape
        if action is not None and len(action.shape) == 1:
            action = np.expand_dims(action, 0)
        if self.is_action_features:
            features = np.concatenate((obs, action), axis=1)
else:
features = obs
feed_dict = {self.features_ph: features}
if self.exploration_bonus:
reward, _ = self.sess.run([self.reward_op, self.update_covariance_op], feed_dict)
else:
reward = self.sess.run(self.reward_op, feed_dict)
return reward
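When `normalize` is set, the features (and the reward vector) are whitened with the `RunningMeanStd` filter imported from `stable_baselines`. A sketch of how such a streaming normalizer is commonly implemented, using the parallel-variance (Chan et al.) update; the actual `stable_baselines` class may differ in detail:

```python
import numpy as np

class RunningMeanStd:
    """Streaming mean/variance via the parallel-variance (Chan et al.) update."""
    def __init__(self, shape):
        self.mean = np.zeros(shape)
        self.var = np.ones(shape)
        self.count = 1e-4  # small prior count avoids division by zero

    def update(self, batch):
        b_mean = batch.mean(axis=0)
        b_var = batch.var(axis=0)
        b_count = batch.shape[0]
        delta = b_mean - self.mean
        total = self.count + b_count
        self.mean = self.mean + delta * b_count / total
        m2 = self.var * self.count + b_var * b_count \
            + delta ** 2 * self.count * b_count / total
        self.var = m2 / total
        self.count = total

rms = RunningMeanStd((3,))
data = np.random.default_rng(2).normal(loc=2.0, scale=3.0, size=(1000, 3))
rms.update(data)
normalized = (data - rms.mean) / np.sqrt(rms.var)  # the whitening applied to features_ph
assert np.allclose(rms.mean, data.mean(axis=0), atol=1e-5)
```

Because the statistics are accumulated incrementally, the same filter can be updated from minibatches during training without ever holding the full feature history.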


class NeuralAdversary(object):
def __init__(self, sess, observation_space, action_space, hidden_size=64, lipschitz_reg_coef=1.0, scope="adversary", normalize=True):
"""
Reward regression from observations and transitions
:param observation_space: (gym.spaces)
:param action_space: (gym.spaces)
:param hidden_size: ([int]) the hidden dimension for the MLP
:param entcoeff: (float) the entropy loss weight
:param scope: (str) tensorflow variable scope
:param normalize: (bool) Whether to normalize the reward or not
"""
# TODO: support images properly (using a CNN)
self.sess = sess
self.scope = scope
self.observation_shape = observation_space.shape
self.actions_shape = action_space.shape
if isinstance(action_space, gym.spaces.Box):
# Continuous action space
self.discrete_actions = False
self.n_actions = action_space.shape[0]
elif isinstance(action_space, gym.spaces.Discrete):
self.n_actions = action_space.n
self.discrete_actions = True
else:
raise ValueError('Action space not supported: {}'.format(action_space))
self.hidden_size = hidden_size
self.normalize = normalize
self.obs_rms = None
# Placeholders
self.policy_obs_ph = tf.placeholder(observation_space.dtype, (None,) + self.observation_shape,
name="observations_ph")
self.policy_acs_ph = tf.placeholder(action_space.dtype, (None,) + self.actions_shape,
name="actions_ph")
self.policy_gammas_ph = tf.placeholder(tf.float32, (None, 1), name="gammas_ph")
self.expert_obs_ph = tf.placeholder(observation_space.dtype, (None,) + self.observation_shape,
name="expert_observations_ph")
self.expert_acs_ph = tf.placeholder(action_space.dtype, (None,) + self.actions_shape,
name="expert_actions_ph")
self.expert_gammas_ph = tf.placeholder(tf.float32, (None, 1), name="gammas_ph")
self.mix_obs_ph = tf.placeholder(observation_space.dtype, (None,) + self.observation_shape,
name="expert_observations_ph")
self.mix_acs_ph = tf.placeholder(action_space.dtype, (None,) + self.actions_shape,
name="expert_actions_ph")
# Build graph
policy_rewards = self.build_graph(self.policy_obs_ph, self.policy_acs_ph, reuse=False)
expert_rewards = self.build_graph(self.expert_obs_ph, self.expert_acs_ph, reuse=True)
# generator_rewards = tf.math.sigmoid(generator_logits)
# expert_rewards = tf.math.sigmoid(expert_logits)
# policy_scaled_rewards = tf.multiply(policy_rewards, self.policy_gammas_ph)
policy_scaled_rewards = policy_rewards
# policy_value = (1-0.99) * tf.reduce_sum(policy_scaled_rewards)
policy_value = tf.reduce_mean(policy_scaled_rewards)
# expert_scaled_rewards = tf.multiply(expert_rewards, self.expert_gammas_ph)
expert_scaled_rewards = expert_rewards
# expert_value = (1-0.99) * tf.reduce_sum(expert_scaled_rewards)
expert_value = tf.reduce_mean(expert_scaled_rewards)
# alpha = tf.random.uniform([], 0.0, 1.0, observation_space.dtype)
# generator_obs_mix = tf.reduce_mean(self.generator_obs_ph, axis=0, keepdims=True)
# generator_acs_mix = tf.reduce_mean(self.generator_acs_ph, axis=0, keepdims=True)
# expert_obs_mix = tf.reduce_mean(self.expert_obs_ph, axis=0, keepdims=True)
# expert_acs_mix = tf.reduce_mean(self.expert_acs_ph, axis=0, keepdims=True)
# generator_obs_mix = (1-0.99) * tf.reduce_sum(tf.cast(self.generator_gammas_ph, observation_space.dtype) * self.generator_obs_ph, axis=0, keepdims=True)
# generator_acs_mix = (1-0.99) * tf.reduce_sum(tf.cast(self.generator_gammas_ph, action_space.dtype) * self.generator_acs_ph, axis=0, keepdims=True)
# expert_obs_mix = (1-0.99) * tf.reduce_sum(tf.cast(self.expert_gammas_ph, observation_space.dtype) * self.expert_obs_ph, axis=0, keepdims=True)
# expert_acs_mix = (1-0.99) * tf.reduce_sum(tf.cast(self.expert_gammas_ph, action_space.dtype) * self.expert_acs_ph, axis=0, keepdims=True)
# mixture_obs = alpha * generator_obs_mix + (1 - alpha) * tf.reduce_mean(expert_obs_mix)
# mixture_acs = tf.cast(alpha, action_space.dtype) * generator_acs_mix\
# + tf.cast((1 - alpha), action_space.dtype) * expert_acs_mix
mixture_rewards = self.build_graph(self.mix_obs_ph, self.mix_acs_ph, reuse=True)
grads = tf.gradients(mixture_rewards, [self.mix_obs_ph, self.mix_acs_ph])[0]
norm = tf.cast(tf.sqrt(tf.reduce_sum(tf.square(grads), axis=1)), tf.float32)
lipschitz_reg = tf.reduce_mean(tf.square(norm - 1.0))
lipschitz_reg_loss = lipschitz_reg_coef * lipschitz_reg
rewards = tf.concat([policy_rewards, expert_rewards], 0)
rewards_reg = - tf.reduce_mean(logit_bernoulli_entropy(rewards))
rewards_reg_coef = 0.001
# rewards_reg = tf.reduce_sum(tf.square(rewards))
# rewards_reg_coef = 0.01
rewards_reg_loss = rewards_reg_coef * rewards_reg
policy_loss = policy_value - expert_value
loss = policy_loss + lipschitz_reg_loss + rewards_reg_loss
# Loss + Accuracy terms
self.losses = [loss]
self.loss_name = ["generator_loss", "expert_loss", "entropy", "entropy_loss", "generator_acc", "expert_acc"]
# self.total_loss = loss
# Build Reward for policy
self.reward_op = tf.clip_by_value(policy_rewards, -10.0, 10.0)
# self.reward_op = tf.stop_gradient(policy_rewards)
# self.reward_op = generator_rewards
var_list = self.get_trainable_variables()
rewards_optimizer = tf.train.AdamOptimizer(learning_rate=3e-4)
# rewards_optimizer = tf.train.AdamOptimizer(learning_rate=1e-5)
# rewards_optimizer = tf.train.AdamOptimizer(learning_rate=3e-4, beta1=0)
# rewards_optimizer = tf.train.AdamOptimizer(learning_rate=1e-3, beta1=0.5)
# rewards_optimizer = tf.train.AdamOptimizer(learning_rate=1e-2)
# grads, vars = zip(*rewards_optimizer.compute_gradients(loss, var_list=var_list))
# accum_vars = [tf.Variable(tf.zeros_like(var.initialized_value()), trainable=False) for var in var_list]
# accumulation_counter = tf.Variable(0.0, trainable=False)
# zero_ops = [var.assign(tf.zeros_like(var)) for var in accum_vars]
# zero_ops.append(accumulation_counter.assign(0.0))
# gvs = rewards_optimizer.compute_gradients(loss, var_list)
# accumulate_ops = [accum_vars[i].assign_add(gv[0]) for i, gv in enumerate(gvs)]
# accumulate_ops.append(accumulation_counter.assign_add(1.0))
# train_step = rewards_optimizer.apply_gradients([(accum_vars[i] / accumulation_counter, gv[1]) for i, gv in enumerate(gvs)])
# grads, vars = list(zip(*grads_and_vars))
# grads, norm = tf.clip_by_global_norm(grads, 300.0)
# rewards_train_op = rewards_optimizer.apply_gradients(zip(grads, vars))
# norm = tf.constant(0.)
rewards_train_op = rewards_optimizer.minimize(loss, var_list=var_list)
# rewards_train_op = [rewards_train_op, norm]
# self.zero_grad = tf_util.function([], zero_ops)
# self.compute_grads = tf_util.function(
# [self.generator_obs_ph, self.generator_acs_ph, self.generator_gammas_ph,
# self.expert_obs_ph, self.expert_acs_ph, self.expert_gammas_ph], accumulate_ops)
# self.train = tf_util.function([], train_step)
# print_op = tf.print("Value diff:", policy_value - expert_value, "Grad Regularizer:", lipschitz_reg)
print_op = tf.no_op()
self.train = tf_util.function(
[self.policy_obs_ph, self.policy_acs_ph, self.policy_gammas_ph,
self.expert_obs_ph, self.expert_acs_ph, self.expert_gammas_ph,
self.mix_obs_ph, self.mix_acs_ph], [rewards_train_op, print_op])

    def build_graph(self, obs_ph, acs_ph, reuse=False):
"""
build the graph
:param obs_ph: (tf.Tensor) the observation placeholder
:param acs_ph: (tf.Tensor) the action placeholder
:param reuse: (bool)
:return: (tf.Tensor) the graph output
"""
with tf.variable_scope(self.scope):
if reuse:
tf.get_variable_scope().reuse_variables()
if self.normalize:
with tf.variable_scope("obfilter"):
self.obs_rms = RunningMeanStd(shape=self.observation_shape)
obs = (tf.cast(obs_ph, tf.float32) - self.obs_rms.mean) / tf.cast(tf.sqrt(self.obs_rms.var), tf.float32)
else:
obs = tf.cast(obs_ph, tf.float32)
if self.discrete_actions:
one_hot_actions = tf.one_hot(acs_ph, self.n_actions)
actions_ph = tf.cast(one_hot_actions, tf.float32)
else:
actions_ph = acs_ph
_input = tf.concat([obs, actions_ph], axis=1) # concatenate the two input -> form a transition
p_h1 = tf.contrib.layers.fully_connected(_input, self.hidden_size, activation_fn=tf.nn.tanh)
p_h2 = tf.contrib.layers.fully_connected(p_h1, self.hidden_size, activation_fn=tf.nn.tanh)
# rewards = tf.contrib.layers.fully_connected(p_h2, 1, activation_fn=tf.nn.tanh)
# rewards = tf.contrib.layers.fully_connected(p_h2, 1, activation_fn=tf.math.sigmoid)
rewards = tf.contrib.layers.fully_connected(p_h2, 1, activation_fn=tf.identity)
return rewards

    def get_trainable_variables(self):
"""
Get all the trainable variables from the graph
:return: ([tf.Tensor]) the variables
"""
return tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, self.scope)

    def get_reward(self, obs, actions):
"""
Predict the reward using the observation and action
:param obs: (tf.Tensor or np.ndarray) the observation
:param actions: (tf.Tensor or np.ndarray) the action
:return: (np.ndarray) the reward
"""
# sess = tf.get_default_session()
if len(obs.shape) == 1:
obs = np.expand_dims(obs, 0)
if len(actions.shape) == 1:
actions = np.expand_dims(actions, 0)
elif len(actions.shape) == 0:
# one discrete action
actions = np.expand_dims(actions, 0)
feed_dict = {self.policy_obs_ph: obs, self.policy_acs_ph: actions}
reward = self.sess.run(self.reward_op, feed_dict)
return reward
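The `lipschitz_reg` term built in `NeuralAdversary.__init__` is a WGAN-GP-style gradient penalty, (‖∇ₓf(x)‖₂ − 1)², evaluated at mixture samples. A framework-free sketch on a linear critic, where the penalty has a closed form (central finite differences stand in for `tf.gradients`; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=5)

def critic(x):
    # stand-in "reward network": linear, so its input gradient is w everywhere
    return float(x @ w)

def grad_penalty(x, eps=1e-6):
    # (||grad_x critic(x)||_2 - 1)^2, with central finite differences
    # playing the role of tf.gradients
    g = np.array([(critic(x + eps * e) - critic(x - eps * e)) / (2 * eps)
                  for e in np.eye(x.size)])
    return (np.linalg.norm(g) - 1.0) ** 2

x = rng.normal(size=5)
penalty = grad_penalty(x)
# for a linear critic the penalty is (||w|| - 1)^2 at every point
assert abs(penalty - (np.linalg.norm(w) - 1.0) ** 2) < 1e-4
```

Penalising the gradient norm toward 1 keeps the critic approximately 1-Lipschitz, which is what makes the `policy_value - expert_value` objective behave like a Wasserstein-style distance rather than saturating.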


class NeuralAdversaryTRPO(object):
def __init__(self, sess, observation_space, action_space, hidden_size=64, entcoeff=0.001,
scope="adversary", normalize=True):
"""
Reward regression from observations and transitions
:param observation_space: (gym.spaces)
:param action_space: (gym.spaces)
:param hidden_size: ([int]) the hidden dimension for the MLP
:param entcoeff: (float) the entropy loss weight
:param scope: (str) tensorflow variable scope
:param normalize: (bool) Whether to normalize the reward or not
"""
# TODO: support images properly (using a CNN)
self.sess = sess
self.scope = scope
self.observation_shape = observation_space.shape
self.actions_shape = action_space.shape
if isinstance(action_space, gym.spaces.Box):
# Continuous action space
self.discrete_actions = False
self.n_actions = action_space.shape[0]
elif isinstance(action_space, gym.spaces.Discrete):
self.n_actions = action_space.n
self.discrete_actions = True
else:
raise ValueError('Action space not supported: {}'.format(action_space))
self.hidden_size = hidden_size
self.normalize = normalize
self.obs_rms = None
# Placeholders
self.policy_obs_ph = tf.placeholder(observation_space.dtype, (None,) + self.observation_shape,
name="observations_ph")
self.policy_acs_ph = tf.placeholder(action_space.dtype, (None,) + self.actions_shape,
name="actions_ph")
self.policy_gammas_ph = tf.placeholder(tf.float32, (None, 1), name="gammas_ph")
self.expert_obs_ph = tf.placeholder(observation_space.dtype, (None,) + self.observation_shape,
name="expert_observations_ph")
self.expert_acs_ph = tf.placeholder(action_space.dtype, (None,) + self.actions_shape,
name="expert_actions_ph")
self.expert_gammas_ph = tf.placeholder(tf.float32, (None, 1), name="gammas_ph")
self.mix_obs_ph = tf.placeholder(observation_space.dtype, (None,) + self.observation_shape,
name="expert_observations_ph")
self.mix_acs_ph = tf.placeholder(action_space.dtype, (None,) + self.actions_shape,
name="expert_actions_ph")
# Build graph
policy_rewards = self.build_graph(self.policy_obs_ph, self.policy_acs_ph, reuse=False)
expert_rewards = self.build_graph(self.expert_obs_ph, self.expert_acs_ph, reuse=True)
# policy_rewards = tf.math.sigmoid(policy_logits)
# expert_rewards = tf.math.sigmoid(expert_logits)
# policy_scaled_rewards = tf.multiply(policy_rewards, self.policy_gammas_ph)
policy_scaled_rewards = policy_rewards
# policy_value = tf.reduce_sum(policy_scaled_rewards)
policy_value = tf.reduce_mean(policy_scaled_rewards)
# expert_scaled_rewards = tf.multiply(expert_rewards, self.expert_gammas_ph)
expert_scaled_rewards = expert_rewards
# expert_value = tf.reduce_sum(expert_scaled_rewards)
expert_value = tf.reduce_mean(expert_scaled_rewards)
# alpha = tf.random.uniform([], 0.0, 1.0, observation_space.dtype)
# generator_obs_mix = tf.reduce_mean(self.generator_obs_ph, axis=0, keepdims=True)
# generator_acs_mix = tf.reduce_mean(self.generator_acs_ph, axis=0, keepdims=True)
# expert_obs_mix = tf.reduce_mean(self.expert_obs_ph, axis=0, keepdims=True)
# expert_acs_mix = tf.reduce_mean(self.expert_acs_ph, axis=0, keepdims=True)
# generator_obs_mix = (1-0.99) * tf.reduce_sum(tf.cast(self.generator_gammas_ph, observation_space.dtype) * self.generator_obs_ph, axis=0, keepdims=True)
# generator_acs_mix = (1-0.99) * tf.reduce_sum(tf.cast(self.generator_gammas_ph, action_space.dtype) * self.generator_acs_ph, axis=0, keepdims=True)
# expert_obs_mix = (1-0.99) * tf.reduce_sum(tf.cast(self.expert_gammas_ph, observation_space.dtype) * self.expert_obs_ph, axis=0, keepdims=True)
# expert_acs_mix = (1-0.99) * tf.reduce_sum(tf.cast(self.expert_gammas_ph, action_space.dtype) * self.expert_acs_ph, axis=0, keepdims=True)
# mixture_obs = alpha * generator_obs_mix + (1 - alpha) * tf.reduce_mean(expert_obs_mix)
# mixture_acs = tf.cast(alpha, action_space.dtype) * generator_acs_mix\
# + tf.cast((1 - alpha), action_space.dtype) * expert_acs_mix
mixture_rewards = self.build_graph(self.mix_obs_ph, self.mix_acs_ph, reuse=True)
# NOTE: the trailing [0] keeps only the gradient w.r.t. observations;
# the gradient w.r.t. actions is computed but discarded here
grads = tf.gradients(mixture_rewards, [self.mix_obs_ph, self.mix_acs_ph])[0]
norm = tf.cast(tf.sqrt(tf.reduce_sum(tf.square(grads), axis=1)), tf.float32)
lipschitz_reg = tf.reduce_mean(tf.square(norm - 1.0))
lipschitz_reg_coef = 0.0
lipschitz_reg_loss = lipschitz_reg_coef * lipschitz_reg
rewards = tf.concat([policy_rewards, expert_rewards], 0)
rewards_reg = - tf.reduce_mean(logit_bernoulli_entropy(rewards))
# rewards_reg = tf.reduce_sum(tf.square(rewards))
rewards_reg_coef = 0.0
rewards_reg_loss = rewards_reg_coef * rewards_reg
policy_loss = policy_value - expert_value
self.total_loss = policy_loss + lipschitz_reg_loss + rewards_reg_loss
# Loss + Accuracy terms
self.losses = []
self.loss_name = ["generator_loss", "expert_loss", "entropy", "entropy_loss", "generator_acc", "expert_acc"]
# self.total_loss = loss
# Build Reward for policy
# self.reward_op = tf.stop_gradient(policy_rewards)
self.reward_op = tf.stop_gradient(tf.clip_by_value(policy_rewards, -1.0, 1.0))
# self.reward_op = generator_rewards
print_op = tf.print("Policy loss:", policy_loss,
"GradReg", lipschitz_reg,
"rewards_abs_mean", tf.reduce_mean(tf.abs(rewards)), "rewards_std", tf.math.reduce_std(rewards),
"abs_max", tf.math.reduce_max(tf.abs(rewards)))
var_list = self.get_trainable_variables()
self.lossandgrad = tf_util.function(
[self.policy_obs_ph, self.policy_acs_ph, self.policy_gammas_ph,
self.expert_obs_ph, self.expert_acs_ph, self.expert_gammas_ph,
self.mix_obs_ph, self.mix_acs_ph],
self.losses + [print_op] + [tf_util.flatgrad(self.total_loss, var_list)])
# print_op = tf.print("Value diff:", policy_value - expert_value, "Grad Regularizer:", lipschitz_reg)
# print_op = tf.no_op()
# self.train = tf_util.function(
# [self.policy_obs_ph, self.policy_acs_ph, self.policy_gammas_ph,
# self.expert_obs_ph, self.expert_acs_ph, self.expert_gammas_ph,
# self.mix_obs_ph, self.mix_acs_ph], [rewards_train_op, print_op])
def build_graph(self, obs_ph, acs_ph, reuse=False):
"""
build the graph
:param obs_ph: (tf.Tensor) the observation placeholder
:param acs_ph: (tf.Tensor) the action placeholder
:param reuse: (bool)
:return: (tf.Tensor) the graph output
"""
with tf.variable_scope(self.scope):
if reuse:
tf.get_variable_scope().reuse_variables()
if self.normalize:
with tf.variable_scope("obfilter"):
self.obs_rms = MpiRunningMeanStd(shape=self.observation_shape)
obs = (tf.cast(obs_ph, tf.float32) - self.obs_rms.mean) / tf.cast(self.obs_rms.std, tf.float32)
else:
obs = tf.cast(obs_ph, tf.float32)
if self.discrete_actions:
one_hot_actions = tf.one_hot(acs_ph, self.n_actions)
actions_ph = tf.cast(one_hot_actions, tf.float32)
else:
actions_ph = acs_ph
_input = tf.concat([obs, actions_ph], axis=1) # concatenate the two input -> form a transition
p_h1 = tf.contrib.layers.fully_connected(_input, self.hidden_size, activation_fn=tf.nn.tanh)
p_h2 = tf.contrib.layers.fully_connected(p_h1, self.hidden_size, activation_fn=tf.nn.tanh)
# rewards = tf.contrib.layers.fully_connected(p_h2, 1, activation_fn=tf.nn.tanh)
# rewards = tf.contrib.layers.fully_connected(p_h2, 1, activation_fn=tf.math.sigmoid)
rewards = tf.contrib.layers.fully_connected(p_h2, 1, activation_fn=tf.identity)
# last_layer_init = tf.contrib.layers.variance_scaling_initializer(factor=0.1, mode='FAN_AVG', uniform=True)
# rewards = tf.contrib.layers.fully_connected(p_h2, 1, activation_fn=tf.identity, weights_initializer=last_layer_init)
return rewards
def get_trainable_variables(self):
"""
Get all the trainable variables from the graph
:return: ([tf.Tensor]) the variables
"""
return tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, self.scope)
def get_reward(self, obs, actions):
"""
Predict the reward using the observation and action
:param obs: (tf.Tensor or np.ndarray) the observation
:param actions: (tf.Tensor or np.ndarray) the action
:return: (np.ndarray) the reward
"""
# sess = tf.get_default_session()
if len(obs.shape) == 1:
obs = np.expand_dims(obs, 0)
if len(actions.shape) == 1:
actions = np.expand_dims(actions, 0)
elif len(actions.shape) == 0:
# one discrete action
actions = np.expand_dims(actions, 0)
feed_dict = {self.policy_obs_ph: obs, self.policy_acs_ph: actions}
reward = self.sess.run(self.reward_op, feed_dict)
return reward
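# The gradient-penalty term assembled above, lipschitz_reg = mean((||grad|| - 1)^2),
# is easier to sanity-check outside the TF graph. A minimal NumPy sketch (names are
# hypothetical; the real term operates on tf.gradients of the mixture rewards):

```python
import numpy as np

def lipschitz_penalty(grads):
    """WGAN-GP style penalty: mean((||grad||_2 - 1)^2) over the batch.

    grads: (batch, dim) array of d reward / d input per mixture sample.
    """
    norms = np.sqrt(np.sum(np.square(grads), axis=1))
    return float(np.mean(np.square(norms - 1.0)))

# For a linear reward r(x) = w.x the gradient is w for every sample,
# so the penalty is exactly (||w|| - 1)^2, which is easy to verify.
w = np.array([3.0, 4.0])            # ||w|| = 5
grads = np.tile(w, (8, 1))          # batch of identical gradients
penalty = lipschitz_penalty(grads)  # (5 - 1)^2 = 16
```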
class NeuralAdversaryMDPO(object):
def __init__(self, sess, observation_space, action_space, hidden_size=64, scope="adversary", normalize=True):
"""
Reward regression from observations and transitions
:param sess: (tf.Session) the current tensorflow session
:param observation_space: (gym.spaces) the observation space
:param action_space: (gym.spaces) the action space
:param hidden_size: (int) the hidden dimension for the MLP
:param scope: (str) tensorflow variable scope
:param normalize: (bool) Whether to normalize the reward or not
"""
# TODO: support images properly (using a CNN)
self.sess = sess
self.scope = scope
self.observation_shape = observation_space.shape
self.actions_shape = action_space.shape
if isinstance(action_space, gym.spaces.Box):
# Continuous action space
self.discrete_actions = False
self.n_actions = action_space.shape[0]
elif isinstance(action_space, gym.spaces.Discrete):
self.n_actions = action_space.n
self.discrete_actions = True
else:
raise ValueError('Action space not supported: {}'.format(action_space))
self.hidden_size = hidden_size
self.normalize = normalize
self.obs_rms = None
# Placeholders
self.policy_obs_ph = tf.placeholder(observation_space.dtype, (None,) + self.observation_shape,
name="observations_ph")
self.policy_acs_ph = tf.placeholder(action_space.dtype, (None,) + self.actions_shape,
name="actions_ph")
self.policy_gammas_ph = tf.placeholder(tf.float32, (None, 1), name="gammas_ph")
self.expert_obs_ph = tf.placeholder(observation_space.dtype, (None,) + self.observation_shape,
name="expert_observations_ph")
self.expert_acs_ph = tf.placeholder(action_space.dtype, (None,) + self.actions_shape,
name="expert_actions_ph")
self.expert_gammas_ph = tf.placeholder(tf.float32, (None, 1), name="expert_gammas_ph")
self.mix_obs_ph = tf.placeholder(observation_space.dtype, (None,) + self.observation_shape,
name="mix_observations_ph")
self.mix_acs_ph = tf.placeholder(action_space.dtype, (None,) + self.actions_shape,
name="mix_actions_ph")
# Build graph
policy_rewards = self.build_graph(self.policy_obs_ph, self.policy_acs_ph, reuse=False, scope=self.scope)
expert_rewards = self.build_graph(self.expert_obs_ph, self.expert_acs_ph, reuse=True, scope=self.scope)
old_policy_rewards = self.build_graph(self.policy_obs_ph, self.policy_acs_ph, reuse=False, scope="oldreward")
old_expert_rewards = self.build_graph(self.expert_obs_ph, self.expert_acs_ph, reuse=True, scope="oldreward")
# generator_rewards = tf.math.sigmoid(generator_logits)
# expert_rewards = tf.math.sigmoid(expert_logits)
# policy_scaled_rewards = tf.multiply(policy_rewards, self.policy_gammas_ph)
policy_scaled_rewards = policy_rewards
# policy_value = (1-0.99) * tf.reduce_sum(policy_scaled_rewards)
policy_value = tf.reduce_mean(policy_scaled_rewards)
# expert_scaled_rewards = tf.multiply(expert_rewards, self.expert_gammas_ph)
expert_scaled_rewards = expert_rewards
# expert_value = (1-0.99) * tf.reduce_sum(expert_scaled_rewards)
expert_value = tf.reduce_mean(expert_scaled_rewards)
# alpha = tf.random.uniform([], 0.0, 1.0, observation_space.dtype)
# generator_obs_mix = tf.reduce_mean(self.generator_obs_ph, axis=0, keepdims=True)
# generator_acs_mix = tf.reduce_mean(self.generator_acs_ph, axis=0, keepdims=True)
# expert_obs_mix = tf.reduce_mean(self.expert_obs_ph, axis=0, keepdims=True)
# expert_acs_mix = tf.reduce_mean(self.expert_acs_ph, axis=0, keepdims=True)
# generator_obs_mix = (1-0.99) * tf.reduce_sum(tf.cast(self.generator_gammas_ph, observation_space.dtype) * self.generator_obs_ph, axis=0, keepdims=True)
# generator_acs_mix = (1-0.99) * tf.reduce_sum(tf.cast(self.generator_gammas_ph, action_space.dtype) * self.generator_acs_ph, axis=0, keepdims=True)
# expert_obs_mix = (1-0.99) * tf.reduce_sum(tf.cast(self.expert_gammas_ph, observation_space.dtype) * self.expert_obs_ph, axis=0, keepdims=True)
# expert_acs_mix = (1-0.99) * tf.reduce_sum(tf.cast(self.expert_gammas_ph, action_space.dtype) * self.expert_acs_ph, axis=0, keepdims=True)
# mixture_obs = alpha * generator_obs_mix + (1 - alpha) * tf.reduce_mean(expert_obs_mix)
# mixture_acs = tf.cast(alpha, action_space.dtype) * generator_acs_mix\
# + tf.cast((1 - alpha), action_space.dtype) * expert_acs_mix
mixture_rewards = self.build_graph(self.mix_obs_ph, self.mix_acs_ph, reuse=True, scope=self.scope)
grads = tf.gradients(mixture_rewards, [self.mix_obs_ph, self.mix_acs_ph])[0]
norm = tf.cast(tf.sqrt(tf.reduce_sum(tf.square(grads), axis=1)), tf.float32)
lipschitz_reg = tf.reduce_mean(tf.square(norm - 1.0))
lipschitz_reg_coef = 10
lipschitz_reg_loss = lipschitz_reg_coef * lipschitz_reg
#
# rewards = tf.concat([policy_rewards, expert_rewards], 0)
#
# rewards_reg = - tf.reduce_mean(logit_bernoulli_entropy(rewards))
# rewards_reg_coef = 0.001
#
rewards = tf.concat([policy_rewards, expert_rewards], 0)
policy_clipped_rewards = tf.clip_by_value(policy_rewards, -10.0, 10.0)
expert_clipped_rewards = tf.clip_by_value(expert_rewards, -10.0, 10.0)
clipped_rewards = tf.concat([policy_clipped_rewards, expert_clipped_rewards], 0)
# rewards_reg_loss = rewards_reg_coef * rewards_reg
old_rewards = tf.concat([old_policy_rewards, old_expert_rewards], 0)
old_policy_clipped_rewards = tf.clip_by_value(old_policy_rewards, -10.0, 10.0)
old_expert_clipped_rewards = tf.clip_by_value(old_expert_rewards, -10.0, 10.0)
old_clipped_rewards = tf.concat([old_policy_clipped_rewards, old_expert_clipped_rewards], 0)
# rewards_reg_coef = 0.01
# bregman = tf.reduce_mean(tf_util.huber_loss(old_rewards - rewards))
bregman = tf.reduce_mean(tf.square(tf.stop_gradient(old_clipped_rewards) - rewards))
bregman_coeff = 100
bregman_loss = bregman_coeff * bregman
#
# stepsize = 0.001
rewards_reg = - tf.reduce_mean(logit_bernoulli_entropy(rewards))
rewards_reg_coeff = 0.00
# rewards_reg = tf.reduce_mean(tf.square(rewards))
# rewards_reg_coeff = 1
rewards_reg_loss = rewards_reg_coeff * rewards_reg
# rewards_reg_loss = rewards_reg_coef * rewards_reg
# policy_loss = policy_value - expert_value
# rewards_gradient = (tf.gradients(tf.reduce_mean(old_policy_clipped_rewards), [old_policy_rewards])[0]
# - tf.gradients(tf.reduce_mean(old_expert_clipped_rewards), [old_expert_rewards])[0])
old_policy_loss = tf.reduce_mean(old_policy_rewards) - tf.reduce_mean(old_expert_rewards)
old_rewards_gradient = tf.concat(tf.gradients(old_policy_loss, [old_policy_rewards, old_expert_rewards]), axis=0)
policy_loss = tf.reduce_sum(tf.multiply(tf.stop_gradient(old_rewards_gradient), clipped_rewards))
loss = policy_loss + bregman_loss + rewards_reg_loss + lipschitz_reg_loss
# Loss + Accuracy terms
self.losses = [loss]
self.loss_name = ["generator_loss", "expert_loss", "entropy", "entropy_loss", "generator_acc", "expert_acc"]
# self.total_loss = loss
# Build Reward for policy
self.reward_op = old_policy_clipped_rewards
# self.reward_op = tf.stop_gradient(tf.clip_by_value(policy_rewards, -1.0, 1.0))
# self.reward_op = generator_rewards
self.update_old_rewards = \
tf_util.function([], [], updates=[tf.assign(oldv, newv) for (oldv, newv) in
zipsame(tf_util.get_globals_vars("oldreward"),
tf_util.get_globals_vars(self.scope))])
var_list = self.get_trainable_variables()
# rewards_optimizer = tf.train.AdamOptimizer(learning_rate=1e-5, epsilon=1e-5)
# rewards_optimizer = tf.train.AdamOptimizer(learning_rate=3e-4, beta1=0.9895193, beta2=0.9999, epsilon=1e-5)
# rewards_optimizer = tf.train.AdamOptimizer(learning_rate=3e-4, beta1=0, beta2=0.9)
rewards_optimizer = tf.train.AdamOptimizer(learning_rate=3e-4)
# rewards_optimizer = tf.train.AdamOptimizer(learning_rate=1e-3, beta1=0.5)
# rewards_optimizer = tf.train.AdamOptimizer(learning_rate=1e-2)
# accum_vars = [tf.Variable(tf.zeros_like(var.initialized_value()), trainable=False) for var in var_list]
# accumulation_counter = tf.Variable(0.0, trainable=False)
# zero_ops = [var.assign(tf.zeros_like(var)) for var in accum_vars]
# zero_ops.append(accumulation_counter.assign(0.0))
# gvs = rewards_optimizer.compute_gradients(loss, var_list)
# accumulate_ops = [accum_vars[i].assign_add(gv[0]) for i, gv in enumerate(gvs)]
# accumulate_ops.append(accumulation_counter.assign_add(1.0))
# train_step = rewards_optimizer.apply_gradients([(accum_vars[i] / accumulation_counter, gv[1]) for i, gv in enumerate(gvs)])
grads, variables = zip(*rewards_optimizer.compute_gradients(loss, var_list=var_list))
grads, norm = tf.clip_by_global_norm(grads, 1e7)
# grads, norm = tf.clip_by_global_norm(grads, 5.0)
rewards_train_op = rewards_optimizer.apply_gradients(zip(grads, variables))
# norm = tf.constant(0.)
# rewards_train_op = rewards_optimizer.minimize(loss, var_list=var_list)
# rewards_train_op = [rewards_train_op, norm]
# self.zero_grad = tf_util.function([], zero_ops)
# self.compute_grads = tf_util.function(
# [self.generator_obs_ph, self.generator_acs_ph, self.generator_gammas_ph,
# self.expert_obs_ph, self.expert_acs_ph, self.expert_gammas_ph], accumulate_ops)
# self.train = tf_util.function([], train_step)
linear_approximation = tf.stop_gradient(tf.reduce_sum(tf.multiply(old_rewards_gradient, rewards - old_rewards)))
# print_op = tf.print("Value diff:", linear_approximation, "Bregman:", bregman_loss,
# "MD objective", linear_approximation + bregman_loss,
# "Weight GradNorm:", norm, "Loss norm", tf.norm(old_rewards_gradient),
# "rewards_mean", tf.reduce_mean(rewards), "rewards_std", tf.math.reduce_std(rewards),
# "rewards_abs_max", tf.math.reduce_max(rewards))
# print_op = tf.print("old_rewards_gradient", tf.shape(old_rewards_gradient), "rewards",tf.shape(rewards))
print_op = tf.no_op()
# var_list = self.get_trainable_variables()
# grads, vars = zip(*rewards_optimizer.compute_gradients(loss, var_list=var_list))
# grads, norm = tf.clip_by_global_norm(grads, 300.0)
# rewards_train_op = rewards_optimizer.apply_gradients(zip(grads, vars))
self.train = tf_util.function(
[self.policy_obs_ph, self.policy_acs_ph, self.policy_gammas_ph,
self.expert_obs_ph, self.expert_acs_ph, self.expert_gammas_ph,
self.mix_obs_ph, self.mix_acs_ph], [rewards_train_op, print_op])
def build_graph(self, obs_ph, acs_ph, reuse=False, scope=None):
"""
build the graph
:param obs_ph: (tf.Tensor) the observation placeholder
:param acs_ph: (tf.Tensor) the action placeholder
:param reuse: (bool)
:return: (tf.Tensor) the graph output
"""
with tf.variable_scope(scope):
if reuse:
tf.get_variable_scope().reuse_variables()
if self.normalize:
with tf.variable_scope("obfilter"):
self.obs_rms = RunningMeanStd(shape=self.observation_shape)
obs = (tf.cast(obs_ph, tf.float32) - self.obs_rms.mean) / tf.cast(tf.sqrt(self.obs_rms.var), tf.float32)
else:
obs = tf.cast(obs_ph, tf.float32)
if self.discrete_actions:
one_hot_actions = tf.one_hot(acs_ph, self.n_actions)
actions_ph = tf.cast(one_hot_actions, tf.float32)
else:
actions_ph = acs_ph
_input = tf.concat([obs, actions_ph], axis=1) # concatenate the two input -> form a transition
p_h1 = tf.contrib.layers.fully_connected(_input, self.hidden_size, activation_fn=tf.nn.tanh)
p_h2 = tf.contrib.layers.fully_connected(p_h1, self.hidden_size, activation_fn=tf.nn.tanh)
rewards = tf.contrib.layers.fully_connected(p_h2, 1, activation_fn=tf.identity)
return rewards
def get_trainable_variables(self):
"""
Get all the trainable variables from the graph
:return: ([tf.Tensor]) the variables
"""
return tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, self.scope)
def get_reward(self, obs, actions):
"""
Predict the reward using the observation and action
:param obs: (tf.Tensor or np.ndarray) the observation
:param actions: (tf.Tensor or np.ndarray) the action
:return: (np.ndarray) the reward
"""
# sess = tf.get_default_session()
if len(obs.shape) == 1:
obs = np.expand_dims(obs, 0)
if len(actions.shape) == 1:
actions = np.expand_dims(actions, 0)
elif len(actions.shape) == 0:
# one discrete action
actions = np.expand_dims(actions, 0)
feed_dict = {self.policy_obs_ph: obs, self.policy_acs_ph: actions}
reward = self.sess.run(self.reward_op, feed_dict)
return reward
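# update_old_rewards above copies the live reward parameters into the frozen
# "oldreward" scope via batched tf.assign ops. The same snapshot pattern, sketched
# framework-free (class name hypothetical):

```python
import copy

class RewardSnapshot:
    """Frozen copy of live parameters, mirroring the oldreward <- scope sync."""

    def __init__(self, params):
        self.params = params                     # live, trained parameters
        self.old_params = copy.deepcopy(params)  # frozen copy used for rewards

    def sync(self):
        # analogous to running the batched tf.assign updates
        self.old_params = copy.deepcopy(self.params)

snap = RewardSnapshot({"w": [1.0, 2.0]})
snap.params["w"][0] = 5.0        # gradient step on the live copy
stale = snap.old_params["w"][0]  # frozen copy unchanged: still 1.0
snap.sync()
fresh = snap.old_params["w"][0]  # after sync: 5.0
```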
class NeuralAdversaryMD(object):
def __init__(self, sess, observation_space, action_space, hidden_size=64, entcoeff=0.001, lipschitz_reg_coef=0.0,
scope="adversary", normalize=True):
"""
Reward regression from observations and transitions
:param sess: (tf.Session) the current tensorflow session
:param observation_space: (gym.spaces) the observation space
:param action_space: (gym.spaces) the action space
:param hidden_size: (int) the hidden dimension for the MLP
:param entcoeff: (float) the entropy loss weight
:param lipschitz_reg_coef: (float) the gradient-penalty (Lipschitz) regularizer weight
:param scope: (str) tensorflow variable scope
:param normalize: (bool) Whether to normalize the reward or not
"""
# TODO: support images properly (using a CNN)
self.sess = sess
self.scope = scope
self.observation_shape = observation_space.shape
self.actions_shape = action_space.shape
if isinstance(action_space, gym.spaces.Box):
# Continuous action space
self.discrete_actions = False
self.n_actions = action_space.shape[0]
elif isinstance(action_space, gym.spaces.Discrete):
self.n_actions = action_space.n
self.discrete_actions = True
else:
raise ValueError('Action space not supported: {}'.format(action_space))
self.hidden_size = hidden_size
self.normalize = normalize
self.obs_rms = None
# Placeholders
self.policy_obs_ph = tf.placeholder(observation_space.dtype, (None,) + self.observation_shape,
name="observations_ph")
self.policy_acs_ph = tf.placeholder(action_space.dtype, (None,) + self.actions_shape,
name="actions_ph")
self.policy_gammas_ph = tf.placeholder(tf.float32, (None, 1), name="gammas_ph")
self.expert_obs_ph = tf.placeholder(observation_space.dtype, (None,) + self.observation_shape,
name="expert_observations_ph")
self.expert_acs_ph = tf.placeholder(action_space.dtype, (None,) + self.actions_shape,
name="expert_actions_ph")
self.expert_gammas_ph = tf.placeholder(tf.float32, (None, 1), name="expert_gammas_ph")
self.mix_obs_ph = tf.placeholder(observation_space.dtype, (None,) + self.observation_shape,
name="mix_observations_ph")
self.mix_acs_ph = tf.placeholder(action_space.dtype, (None,) + self.actions_shape,
name="mix_actions_ph")
if self.normalize:
with tf.variable_scope("obfilter"):
self.obs_rms = MpiRunningMeanStd(shape=self.observation_shape)
# Build graph
policy_rewards = self.build_graph(self.policy_obs_ph, self.policy_acs_ph, reuse=False, scope=self.scope)
expert_rewards = self.build_graph(self.expert_obs_ph, self.expert_acs_ph, reuse=True, scope=self.scope)
old_policy_rewards = self.build_graph(self.policy_obs_ph, self.policy_acs_ph, reuse=False, scope="oldreward")
old_expert_rewards = self.build_graph(self.expert_obs_ph, self.expert_acs_ph, reuse=True, scope="oldreward")
# policy_scaled_rewards = tf.multiply(policy_rewards, self.policy_gammas_ph)
policy_scaled_rewards = policy_rewards
# policy_value = tf.reduce_sum(policy_scaled_rewards)
policy_value = tf.reduce_mean(policy_scaled_rewards)
# expert_scaled_rewards = tf.multiply(expert_rewards, self.expert_gammas_ph)
expert_scaled_rewards = expert_rewards
# expert_value = tf.reduce_sum(expert_scaled_rewards)
expert_value = tf.reduce_mean(expert_scaled_rewards)
mixture_rewards = self.build_graph(self.mix_obs_ph, self.mix_acs_ph, reuse=True, scope=self.scope)
grads = tf.gradients(mixture_rewards, [self.mix_obs_ph, self.mix_acs_ph])[0]
norm = tf.cast(tf.sqrt(tf.reduce_sum(tf.square(grads), axis=1)), tf.float32)
lipschitz_reg = tf.reduce_mean(tf.square(norm - 1.0))
lipschitz_loss = lipschitz_reg_coef * lipschitz_reg
#
rewards = tf.concat([policy_rewards, expert_rewards], 0)
policy_clipped_rewards = tf.clip_by_value(policy_rewards, -10.0, 10.0)
expert_clipped_rewards = tf.clip_by_value(expert_rewards, -10.0, 10.0)
clipped_rewards = tf.concat([policy_clipped_rewards, expert_clipped_rewards], 0)
#
# rewards_reg = - tf.reduce_mean(logit_bernoulli_entropy(rewards))
# rewards_reg_coef = 0.001
#
# rewards_reg = tf.reduce_sum(tf.square(rewards))
# rewards_reg_coef = 0.01
old_rewards = tf.concat([old_policy_rewards, old_expert_rewards], 0)
old_policy_clipped_rewards = tf.clip_by_value(old_policy_rewards, -10.0, 10.0)
old_expert_clipped_rewards = tf.clip_by_value(old_expert_rewards, -10.0, 10.0)
old_clipped_rewards = tf.concat([old_policy_clipped_rewards, old_expert_clipped_rewards], 0)
# bregman = tf.reduce_mean(tf_util.huber_loss(old_clipped_rewards - rewards))
bregman = tf.reduce_mean(tf.square(tf.stop_gradient(old_clipped_rewards) - rewards))
bregman_coeff = 100
bregman_loss = bregman_coeff * bregman
#
# stepsize = 0.001
# old_policy_loss = tf.reduce_mean(tf.multiply(old_policy_rewards, self.policy_gammas_ph))\
# - tf.reduce_mean(tf.multiply(old_expert_rewards, self.expert_gammas_ph))
new_policy_loss = tf.reduce_mean(tf.multiply(policy_rewards, self.policy_gammas_ph))\
- tf.reduce_mean(tf.multiply(expert_rewards, self.expert_gammas_ph))
old_policy_loss = tf.reduce_mean(old_policy_rewards) - tf.reduce_mean(old_expert_rewards)
old_rewards_gradient = tf.concat(tf.gradients(old_policy_loss, [old_policy_rewards, old_expert_rewards]), axis=0)
policy_loss = tf.reduce_sum(tf.multiply(tf.stop_gradient(old_rewards_gradient), clipped_rewards))
rewards_reg = - tf.reduce_mean(logit_bernoulli_entropy(rewards))
rewards_reg_coeff = 0.001
# rewards_reg = tf.reduce_mean(tf.square(rewards))
# rewards_reg = tf.reduce_mean(tf_util.huber_loss(rewards))
# rewards_reg_coeff = 0
rewards_reg_loss = rewards_reg_coeff * rewards_reg
# rewards_reg_loss = rewards_reg_coef * rewards_reg
# policy_loss = policy_value - expert_value
self.total_loss = policy_loss + bregman_loss + rewards_reg_loss + lipschitz_loss
# Loss + Accuracy terms
self.losses = []
self.loss_name = ["generator_loss", "expert_loss", "entropy", "entropy_loss", "generator_acc", "expert_acc"]
# Build Reward for policy
self.reward_op = tf.stop_gradient(old_policy_clipped_rewards)
# self.reward_op = old_policy_rewards
# self.reward_op = tf.clip_by_value(policy_rewards, -1.0, 1.0)
# self.reward_op = generator_rewards
self.update_old_rewards = \
tf_util.function([], [], updates=[tf.assign(oldv, newv) for (oldv, newv) in
zipsame(tf_util.get_globals_vars("oldreward"),
tf_util.get_globals_vars(self.scope))])
var_list = self.get_trainable_variables()
# clip_weights = [tf.assign(var, tf.clip_by_value(var, -0.5, 0.5)) for var in var_list]
clip_weights = tf.no_op()
self.clip_weights = tf_util.function([], [clip_weights])
# rewards_optimizer = tf.train.AdamOptimizer(learning_rate=3e-5)
# grads, vars = zip(*rewards_optimizer.compute_gradients(self.total_loss, var_list=var_list))
# rewards_train_op = rewards_optimizer.apply_gradients(zip(grads, vars))
linear_approximation = tf.stop_gradient(tf.reduce_sum(tf.multiply(old_rewards_gradient, rewards - old_rewards)))
grads = tf.gradients(self.total_loss, var_list)
grads, norm = tf.clip_by_global_norm(grads, 1e7)
# self.train = tf_util.function(
# [self.policy_obs_ph, self.policy_acs_ph, self.policy_gammas_ph,
# self.expert_obs_ph, self.expert_acs_ph, self.expert_gammas_ph,
# self.mix_obs_ph, self.mix_acs_ph], [rewards_train_op, print_op])
loss_op = tf.concat(axis=0, values=[tf.reshape(grad if grad is not None else tf.zeros_like(v), [tf_util.numel(v)])
for (v, grad) in zip(var_list, grads)])
print_op = tf.print("Policy Loss:", new_policy_loss, "Bregman:", bregman_loss,
"MD objective", linear_approximation + bregman_loss,
"GradNorm", norm,
"Loss norm", tf.norm(old_rewards_gradient),
"averageEnt", rewards_reg_loss,
"mean", tf.reduce_mean(tf.abs(rewards)), "std", tf.math.reduce_std(rewards),
"max", tf.math.reduce_max(tf.abs(rewards)))
self.lossandgrad = tf_util.function(
[self.policy_obs_ph, self.policy_acs_ph, self.policy_gammas_ph,
self.expert_obs_ph, self.expert_acs_ph, self.expert_gammas_ph,
self.mix_obs_ph, self.mix_acs_ph],
# self.losses + [tf_util.flatgrad(self.total_loss, var_list)]) #, clip_norm=0.5, clip_by_global_norm=True)])
self.losses + [print_op] + [loss_op])
# print_op = tf.print("Value diff:", policy_value - expert_value, "Grad Regularizer:", lipschitz_reg)
# print_op = tf.no_op()
# self.train = tf_util.function(
# [self.policy_obs_ph, self.policy_acs_ph, self.policy_gammas_ph,
# self.expert_obs_ph, self.expert_acs_ph, self.expert_gammas_ph,
# self.mix_obs_ph, self.mix_acs_ph], [rewards_train_op, print_op])
def build_graph(self, obs_ph, acs_ph, reuse=False, scope=None):
"""
build the graph
:param obs_ph: (tf.Tensor) the observation placeholder
:param acs_ph: (tf.Tensor) the action placeholder
:param reuse: (bool)
:return: (tf.Tensor) the graph output
"""
with tf.variable_scope(scope):
if reuse:
tf.get_variable_scope().reuse_variables()
if self.normalize:
obs = (tf.cast(obs_ph, tf.float32) - self.obs_rms.mean) / tf.cast(self.obs_rms.std, tf.float32)
else:
obs = tf.cast(obs_ph, tf.float32)
if self.discrete_actions:
one_hot_actions = tf.one_hot(acs_ph, self.n_actions)
actions_ph = tf.cast(one_hot_actions, tf.float32)
else:
actions_ph = acs_ph
_input = tf.concat([obs, actions_ph], axis=1) # concatenate the two input -> form a transition
p_h1 = tf.contrib.layers.fully_connected(_input, self.hidden_size, activation_fn=tf.nn.tanh)
p_h2 = tf.contrib.layers.fully_connected(p_h1, self.hidden_size, activation_fn=tf.nn.tanh)
# rewards = tf.contrib.layers.fully_connected(p_h2, 1, activation_fn=tf.nn.tanh)
# rewards = tf.contrib.layers.fully_connected(p_h2, 1, activation_fn=tf.math.sigmoid)
rewards = tf.contrib.layers.fully_connected(p_h2, 1, activation_fn=tf.identity)
# last_layer_init = tf.contrib.layers.variance_scaling_initializer(factor=0.01, mode='FAN_AVG', uniform=True)
# rewards = tf.contrib.layers.fully_connected(p_h2, 1, activation_fn=tf.identity, weights_initializer=last_layer_init)
return rewards
def get_trainable_variables(self):
"""
Get all the trainable variables from the graph
:return: ([tf.Tensor]) the variables
"""
return tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, self.scope)# + tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, "oldreward")
def get_reward(self, obs, actions):
"""
Predict the reward using the observation and action
:param obs: (tf.Tensor or np.ndarray) the observation
:param actions: (tf.Tensor or np.ndarray) the action
:return: (np.ndarray) the reward
"""
# sess = tf.get_default_session()
if len(obs.shape) == 1:
obs = np.expand_dims(obs, 0)
if len(actions.shape) == 1:
actions = np.expand_dims(actions, 0)
elif len(actions.shape) == 0:
# one discrete action
actions = np.expand_dims(actions, 0)
feed_dict = {self.policy_obs_ph: obs, self.policy_acs_ph: actions}
reward = self.sess.run(self.reward_op, feed_dict)
return reward
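# Both MD variants minimize a linearized policy loss plus bregman_coeff times a
# squared distance to the previous (old) rewards. With that squared-distance
# Bregman term the per-sample proximal step has a closed form. A hedged NumPy
# sketch (function name hypothetical, not part of the classes above):

```python
import numpy as np

def md_proximal_step(old_rewards, grad, bregman_coeff):
    """Minimizer of <grad, r> + bregman_coeff * ||r - old_rewards||^2 over r."""
    return old_rewards - grad / (2.0 * bregman_coeff)

old = np.array([0.5, -0.2, 1.0])  # previous per-sample rewards
g = np.array([1.0, -1.0, 0.0])    # gradient of the linearized policy loss
new = md_proximal_step(old, g, bregman_coeff=100.0)

def objective(r):
    # the mirror-descent surrogate the step is supposed to decrease
    return float(g @ r + 100.0 * np.sum((r - old) ** 2))
```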
# --- second file in this dump: tests/test_api.py from brunocroh/wifidog-auth-flask (MIT) ---
] | 8 | 2015-11-30T13:21:44.000Z | 2018-12-31T05:56:41.000Z | import json
from tests import TestCase
class TestApi(TestCase):
def test_api_networks_index_as_anonymous(self):
response = self.client.get('/api/networks')
self.assertEqual(403, response.status_code)
def test_api_networks_index_as_gateway(self):
self.login('main-gateway1@example.com', 'admin')
response = self.client.get('/api/networks')
self.assertEqual(200, response.status_code)
networks = json.loads(response.get_data(True))
self.assertEqual(1, len(networks))
self.assertEqual('main-network', networks[0]['id'])
def test_api_networks_index_as_network(self):
self.login('main-network@example.com', 'admin')
response = self.client.get('/api/networks')
self.assertEqual(200, response.status_code)
networks = json.loads(response.get_data(True))
self.assertEqual(1, len(networks))
self.assertEqual('main-network', networks[0]['id'])
def test_api_networks_index_as_super(self):
self.login('super-admin@example.com', 'admin')
response = self.client.get('/api/networks')
self.assertEqual(200, response.status_code)
networks = json.loads(response.get_data(True))
self.assertEqual(2, len(networks))
self.assertEqual('main-network', networks[0]['id'])
self.assertEqual('other-network', networks[1]['id'])
def test_api_gateways_index_as_anonymous(self):
response = self.client.get('/api/gateways')
self.assertEqual(403, response.status_code)
def test_api_gateways_index_as_gateway(self):
self.login('main-gateway1@example.com', 'admin')
response = self.client.get('/api/gateways', follow_redirects=True)
self.assertEqual(200, response.status_code)
gateways = json.loads(response.get_data(True))
self.assertEqual(1, len(gateways))
self.assertEqual('main-gateway1', gateways[0]['id'])
def test_api_gateways_index_as_network(self):
self.login('main-network@example.com', 'admin')
response = self.client.get('/api/gateways')
self.assertEqual(200, response.status_code)
gateways = json.loads(response.get_data(True))
self.assertEqual(2, len(gateways))
self.assertEqual('main-gateway1', gateways[0]['id'])
self.assertEqual('main-gateway2', gateways[1]['id'])
def test_api_gateways_index_as_super(self):
self.login('super-admin@example.com', 'admin')
response = self.client.get('/api/gateways')
self.assertEqual(200, response.status_code)
gateways = json.loads(response.get_data(True))
self.assertEqual(4, len(gateways))
self.assertEqual('main-gateway1', gateways[0]['id'])
self.assertEqual('main-gateway2', gateways[1]['id'])
self.assertEqual('other-gateway1', gateways[2]['id'])
self.assertEqual('other-gateway2', gateways[3]['id'])
def test_api_users_index_as_anonymous(self):
response = self.client.get('/api/users')
self.assertEqual(403, response.status_code)
def test_api_users_index_as_gateway(self):
self.login('main-gateway1@example.com', 'admin')
response = self.client.get('/api/users')
self.assertEqual(200, response.status_code)
users = json.loads(response.get_data(True))
self.assertEqual(1, len(users))
self.assertEqual('main-gateway1@example.com', users[0]['email'])
def test_api_users_index_as_network(self):
self.login('main-network@example.com', 'admin')
response = self.client.get('/api/users?sort={"email":false}')
self.assertEqual(200, response.status_code)
users = json.loads(response.get_data(True))
self.assertEqual(3, len(users))
self.assertEqual('main-gateway1@example.com', users[0]['email'])
self.assertEqual('main-gateway2@example.com', users[1]['email'])
self.assertEqual('main-network@example.com', users[2]['email'])
def test_api_users_index_as_super(self):
self.login('super-admin@example.com', 'admin')
response = self.client.get('/api/users')
self.assertEqual(200, response.status_code)
users = json.loads(response.get_data(True))
self.assertEqual(7, len(users))
def test_api_vouchers_index_as_anonymous(self):
response = self.client.get('/api/vouchers')
self.assertEqual(403, response.status_code)
def test_api_vouchers_index_as_gateway(self):
self.login('main-gateway1@example.com', 'admin')
response = self.client.get('/api/vouchers')
self.assertEqual(200, response.status_code)
vouchers = json.loads(response.get_data(True))
self.assertEqual(2, len(vouchers))
self.assertEqual('main-1-2', vouchers[0]['code'])
self.assertEqual('main-1-1', vouchers[1]['code'])
def test_api_vouchers_index_as_network(self):
self.login('main-network@example.com', 'admin')
response = self.client.get('/api/vouchers?sort={"code":false}')
self.assertEqual(200, response.status_code)
vouchers = json.loads(response.get_data(True))
self.assertEqual(4, len(vouchers))
# codewars/hello-world-without-strings.py (andraantariksa/code-exercise-answer, MIT)
def hello_world():
return chr(72)+chr(101)+chr(108)+chr(108)+chr(111)+chr(44)+chr(32)+chr(87)+chr(111)+chr(114)+chr(108)+chr(100)+chr(33)
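As a sanity check on the kata solution above, the thirteen code points decode character by character to the classic greeting; a small verification sketch, not part of the submitted answer:

```python
# Code points used by hello_world(), in order, decoded back to text.
codes = [72, 101, 108, 108, 111, 44, 32, 87, 111, 114, 108, 100, 33]
decoded = "".join(chr(c) for c in codes)
print(decoded)  # Hello, World!
```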
# src/genie/libs/parser/nxos/tests/ShowRoutingIpv6VrfAll/cli/equal/golden_output_1_expected.py (balmasea/genieparser, Apache-2.0)
expected_output = {
"vrf": {
"default": {
"address_family": {
"ipv6 unicast": {
"bgp_distance_internal_as": 200,
"ip": {
"2001:db8:1:1::1/128": {
"attach": "attached",
"best_route": {
"unicast": {
"nexthop": {
"2001:db8:1:1::1": {
"protocol": {
"local": {
"interface": "Ethernet1/1",
"metric": "0",
"uptime": "00:15:46",
"preference": "0"}}}}}},
"mbest_num": "0",
"ubest_num": "1"
},
"2001:db8:1:1::/64": {
"attach": "attached",
"best_route": {
"unicast": {
"nexthop": {
"2001:db8:1:1::1": {
"protocol": {
"direct": {
"interface": "Ethernet1/1",
"metric": "0",
"uptime": "00:15:46",
"preference": "0"}}}}}},
"mbest_num": "0",
"ubest_num": "1"
},
"2001:db8:2:2::2/128": {
"attach": "attached",
"best_route": {
"unicast": {
"nexthop": {
"2001:db8:2:2::2": {
"protocol": {
"local": {
"interface": "Ethernet1/1",
"metric": "0",
"tag": "222",
"uptime": "00:15:46",
"preference": "0"}}}}}},
"mbest_num": "0",
"ubest_num": "1"
},
"2001:db8::5054:ff:fed5:63f9/128": {
"attach": "attached",
"best_route": {
"unicast": {
"nexthop": {
"2001:db8::5054:ff:fed5:63f9": {
"protocol": {
"local": {
"interface": "Ethernet1/1",
"metric": "0",
"uptime": "00:15:46",
"preference": "0"}}}}}},
"mbest_num": "0",
"ubest_num": "1"
},
"2001:db8::/64": {
"attach": "attached",
"best_route": {
"unicast": {
"nexthop": {
"2001:db8::5054:ff:fed5:63f9": {
"protocol": {
"direct": {
"interface": "Ethernet1/1",
"metric": "0",
"uptime": "00:15:46",
"preference": "0"}}}}}},
"mbest_num": "0",
"ubest_num": "1"
},
"2001:db8:cdc9:121::/64": {
"mbest_num": "0",
"ubest_num": "1",
"best_route": {
"unicast": {
"nexthop": {
"::ffff:10.4.1.1": {
"protocol": {
"bgp": {
"uptime": "00:35:51",
"tag": "200",
"route_table": "default:IPv4",
"attribute": "internal",
"mpls": True,
"metric": "2219",
"preference": "200",
"protocol_id": "100"}}}}}}},
"2001:db8:2:2::/64": {
"attach": "attached",
"best_route": {
"unicast": {
"nexthop": {
"2001:db8:2:2::2": {
"protocol": {
"direct": {
"interface": "Ethernet1/1",
"metric": "0",
"tag": "222",
"uptime": "00:15:46",
"preference": "0"}}}}}},
"mbest_num": "0",
"ubest_num": "1"}}}}},
"VRF1": {
"address_family": {
"vpnv6 unicast": {
"bgp_distance_internal_as": 200,
"ip": {
"2001:db8:cdc9:144::/64": {
"mbest_num": "0",
"ubest_num": "1",
"best_route": {
"unicast": {
"nexthop": {
"::ffff:10.4.1.1": {
"protocol": {
"bgp": {
"uptime": "00:35:51",
"tag": "200",
"mpls_vpn": True,
"attribute": "internal",
"route_table": "default:IPv4",
"metric": "2219",
"preference": "200",
"protocol_id": "100"}}}}}}}}}}}}
}
# sphinx_data_viewer/__init__.py (useblocks/sphinx-dataa-viewer, MIT)
from sphinx_data_viewer.sphinx_data_viewer import setup  # NOQA
# src/blueprints/telegram_bot/__init__.py (primeithard/yandex-disk-telegram-bot, MIT)
from .bp import bp as telegram_bot_blueprint
from .webhook import views as webhook_views
from .yd_auth import views as yd_auth_views
# tests/test_pagination_tags.py (ralfzosel/django-pagination-bootstrap, MIT)
from django import template
from django.test import TestCase
from .test_http_request import TestHttpRequest
class TestTemplatePaginateTags(TestCase):
def test_render_range_by_two(self) -> None:
t = template.Template(
"{% load pagination_tags %}{% autopaginate var 2 %}{% paginate %}"
)
c = template.Context({"var": range(21), "request": TestHttpRequest()})
self.assertEqual(
t.render(c),
'\n\n\n<nav aria-label="Page pagination">\n <ul class="pagination">\n \n <li class="page-item disabled"><a class="page-link" href="#" onclick="javascript: return false;">«</a></li>\n \n \n \n \n <li class="page-item active"><a class="page-link" href="#" onclick="javascript: return false;">1</a></li>\n \n \n \n \n \n <li class="page-item"><a class="page-link" href="?page=2">2</a></li>\n \n \n \n \n \n <li class="page-item"><a class="page-link" href="?page=3">3</a></li>\n \n \n \n \n \n <li class="page-item"><a class="page-link" href="?page=4">4</a></li>\n \n \n \n \n <li class="page-item disabled"><a class="page-link" href="#" onclick="javascript: return false;">...</a></li>\n \n \n \n \n <li class="page-item"><a class="page-link" href="?page=8">8</a></li>\n \n \n \n \n \n <li class="page-item"><a class="page-link" href="?page=9">9</a></li>\n \n \n \n \n \n <li class="page-item"><a class="page-link" href="?page=10">10</a></li>\n \n \n \n \n \n <li class="page-item"><a class="page-link" href="?page=11">11</a></li>\n \n \n \n \n <li class="page-item"><a class="page-link" href="?page=2">»</a></li>\n \n </ul>\n \n</nav>\n',
)
def test_render_range_by_two_one_orphan(self) -> None:
t = template.Template(
"{% load pagination_tags %}{% autopaginate var 2 1 %}{% paginate %}"
)
c = template.Context({"var": range(20), "request": TestHttpRequest()})
self.assertEqual(
t.render(c),
'\n\n\n<nav aria-label="Page pagination">\n <ul class="pagination">\n \n <li class="page-item disabled"><a class="page-link" href="#" onclick="javascript: return false;">«</a></li>\n \n \n \n \n <li class="page-item active"><a class="page-link" href="#" onclick="javascript: return false;">1</a></li>\n \n \n \n \n \n <li class="page-item"><a class="page-link" href="?page=2">2</a></li>\n \n \n \n \n \n <li class="page-item"><a class="page-link" href="?page=3">3</a></li>\n \n \n \n \n \n <li class="page-item"><a class="page-link" href="?page=4">4</a></li>\n \n \n \n \n <li class="page-item disabled"><a class="page-link" href="#" onclick="javascript: return false;">...</a></li>\n \n \n \n \n <li class="page-item"><a class="page-link" href="?page=7">7</a></li>\n \n \n \n \n \n <li class="page-item"><a class="page-link" href="?page=8">8</a></li>\n \n \n \n \n \n <li class="page-item"><a class="page-link" href="?page=9">9</a></li>\n \n \n \n \n \n <li class="page-item"><a class="page-link" href="?page=10">10</a></li>\n \n \n \n \n <li class="page-item"><a class="page-link" href="?page=2">»</a></li>\n \n </ul>\n \n</nav>\n',
)
def test_render_range_by_one(self) -> None:
t = template.Template(
"{% load pagination_tags %}{% autopaginate var %}{% paginate %}"
)
c = template.Context({"var": range(21), "request": TestHttpRequest()})
self.assertEqual(
t.render(c),
'\n\n\n<nav aria-label="Page pagination">\n <ul class="pagination">\n \n <li class="page-item disabled"><a class="page-link" href="#" onclick="javascript: return false;">«</a></li>\n \n \n \n \n <li class="page-item active"><a class="page-link" href="#" onclick="javascript: return false;">1</a></li>\n \n \n \n \n \n <li class="page-item"><a class="page-link" href="?page=2">2</a></li>\n \n \n \n \n <li class="page-item"><a class="page-link" href="?page=2">»</a></li>\n \n </ul>\n \n</nav>\n',
)
def test_render_range_by_twenty(self) -> None:
t = template.Template(
"{% load pagination_tags %}{% autopaginate var 20 %}{% paginate %}"
)
c = template.Context({"var": range(21), "request": TestHttpRequest()})
self.assertEqual(
t.render(c),
'\n\n\n<nav aria-label="Page pagination">\n <ul class="pagination">\n \n <li class="page-item disabled"><a class="page-link" href="#" onclick="javascript: return false;">«</a></li>\n \n \n \n \n <li class="page-item active"><a class="page-link" href="#" onclick="javascript: return false;">1</a></li>\n \n \n \n \n \n <li class="page-item"><a class="page-link" href="?page=2">2</a></li>\n \n \n \n \n <li class="page-item"><a class="page-link" href="?page=2">»</a></li>\n \n </ul>\n \n</nav>\n',
)
def test_render_range_by_var(self) -> None:
t = template.Template(
"{% load pagination_tags %}{% autopaginate var by %}{% paginate %}"
)
c = template.Context({"var": range(21), "by": 20, "request": TestHttpRequest()})
self.assertEqual(
t.render(c),
'\n\n\n<nav aria-label="Page pagination">\n <ul class="pagination">\n \n <li class="page-item disabled"><a class="page-link" href="#" onclick="javascript: return false;">«</a></li>\n \n \n \n \n <li class="page-item active"><a class="page-link" href="#" onclick="javascript: return false;">1</a></li>\n \n \n \n \n \n <li class="page-item"><a class="page-link" href="?page=2">2</a></li>\n \n \n \n \n <li class="page-item"><a class="page-link" href="?page=2">»</a></li>\n \n </ul>\n \n</nav>\n',
)
def test_render_range_by_var_as_name(self) -> None:
t = template.Template(
"{% load pagination_tags %}{% autopaginate var by as foo %}{{ foo }}"
)
c = template.Context(
{"var": list(range(21)), "by": 20, "request": TestHttpRequest()}
)
self.assertEqual(
t.render(c),
"[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]",
)
def test_render_invalid_arguments(self) -> None:
with self.assertRaisesMessage(
template.TemplateSyntaxError,
"autopaginate tag takes one required argument and one optional argument",
):
template.Template("{% load pagination_tags %}{% autopaginate %}")
def test_render_invalid_orphans(self) -> None:
with self.assertRaisesMessage(
template.TemplateSyntaxError, "Got a, but expected integer."
):
template.Template("{% load pagination_tags %}{% autopaginate var 2 a %}")
def test_render_range_by_var_as_index_error(self) -> None:
with self.assertRaisesMessage(
template.TemplateSyntaxError,
"Context variable assignment must take the form of {% autopaginate object.example_set.all ... as context_var_name %}",
):
template.Template("{% load pagination_tags %}{% autopaginate var by as %}")
# dlkit/services/authorization.py (UOC/dlkit, MIT)
"""DLKit Services implementations of authorization service."""
# pylint: disable=no-init
# osid specification includes some 'marker' interfaces.
# pylint: disable=too-many-ancestors
# number of ancestors defined in spec.
# pylint: disable=too-few-public-methods,too-many-public-methods
# number of methods defined in spec. Worse yet, these are aggregates.
# pylint: disable=invalid-name
# method and class names defined in spec.
# pylint: disable=no-self-use,unused-argument
# to catch unimplemented methods.
# pylint: disable=super-init-not-called
# it just isn't.
from . import osid
from .osid_errors import Unimplemented, IllegalState, InvalidArgument
from dlkit.abstract_osid.authorization import objects as abc_authorization_objects
from dlkit.manager_impls.authorization import managers as authorization_managers
DEFAULT = 0
COMPARATIVE = 0
PLENARY = 1
FEDERATED = 0
ISOLATED = 1
ANY_STATUS = 0
ACTIVE = 1
UNSEQUESTERED = 0
SEQUESTERED = 1
AUTOMATIC = 0
MANDATORY = 1
DISABLED = -1
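The AUTOMATIC, MANDATORY, and DISABLED constants above govern how the managers below cache provider sessions: AUTOMATIC caches until `close_sessions()`, MANDATORY caches and ignores `close_sessions()`, DISABLED never caches. A minimal standalone sketch of that pattern (the `SessionCache` name and `factory` parameter are illustrative, not part of dlkit):

```python
AUTOMATIC, MANDATORY, DISABLED = 0, 1, -1  # mirrors the module constants

class SessionCache:
    """Illustrative stand-in for the session bookkeeping in AuthorizationManager."""
    def __init__(self):
        self._session_management = AUTOMATIC
        self._provider_sessions = {}

    def get_session(self, name, factory):
        # Return a cached session when one exists; otherwise build and maybe cache it.
        if name in self._provider_sessions:
            return self._provider_sessions[name]
        session = factory()
        if self._session_management != DISABLED:  # AUTOMATIC and MANDATORY both cache
            self._provider_sessions[name] = session
        return session

    def close_sessions(self):
        # MANDATORY sessions survive a close request.
        if self._session_management != MANDATORY:
            self._provider_sessions = {}
```

With AUTOMATIC, two `get_session('lookup', ...)` calls return the same object; after `close_sessions()` a fresh session is built on the next request.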
class AuthorizationProfile(osid.OsidProfile, authorization_managers.AuthorizationProfile):
"""AuthorizationProfile convenience adapter including related Session methods."""
def __init__(self):
self._provider_manager = None
def supports_authorization(self):
"""Pass through to provider supports_authorization"""
# Implemented from kitosid template for -
# osid.resource.ResourceProfile.supports_resource_lookup
return self._provider_manager.supports_authorization()
def supports_authorization_lookup(self):
"""Pass through to provider supports_authorization_lookup"""
# Implemented from kitosid template for -
# osid.resource.ResourceProfile.supports_resource_lookup
return self._provider_manager.supports_authorization_lookup()
def supports_authorization_query(self):
"""Pass through to provider supports_authorization_query"""
# Implemented from kitosid template for -
# osid.resource.ResourceProfile.supports_resource_lookup
return self._provider_manager.supports_authorization_query()
def supports_authorization_admin(self):
"""Pass through to provider supports_authorization_admin"""
# Implemented from kitosid template for -
# osid.resource.ResourceProfile.supports_resource_lookup
return self._provider_manager.supports_authorization_admin()
def supports_authorization_vault(self):
"""Pass through to provider supports_authorization_vault"""
# Implemented from kitosid template for -
# osid.resource.ResourceProfile.supports_resource_lookup
return self._provider_manager.supports_authorization_vault()
def supports_authorization_vault_assignment(self):
"""Pass through to provider supports_authorization_vault_assignment"""
# Implemented from kitosid template for -
# osid.resource.ResourceProfile.supports_resource_lookup
return self._provider_manager.supports_authorization_vault_assignment()
def supports_vault_lookup(self):
"""Pass through to provider supports_vault_lookup"""
# Implemented from kitosid template for -
# osid.resource.ResourceProfile.supports_resource_lookup
return self._provider_manager.supports_vault_lookup()
def supports_vault_query(self):
"""Pass through to provider supports_vault_query"""
# Implemented from kitosid template for -
# osid.resource.ResourceProfile.supports_resource_lookup
return self._provider_manager.supports_vault_query()
def supports_vault_admin(self):
"""Pass through to provider supports_vault_admin"""
# Implemented from kitosid template for -
# osid.resource.ResourceProfile.supports_resource_lookup
return self._provider_manager.supports_vault_admin()
def supports_vault_hierarchy(self):
"""Pass through to provider supports_vault_hierarchy"""
# Implemented from kitosid template for -
# osid.resource.ResourceProfile.supports_resource_lookup
return self._provider_manager.supports_vault_hierarchy()
def supports_vault_hierarchy_design(self):
"""Pass through to provider supports_vault_hierarchy_design"""
# Implemented from kitosid template for -
# osid.resource.ResourceProfile.supports_resource_lookup
return self._provider_manager.supports_vault_hierarchy_design()
def get_authorization_record_types(self):
"""Pass through to provider get_authorization_record_types"""
# Implemented from kitosid template for -
# osid.resource.ResourceProfile.get_resource_record_types
return self._provider_manager.get_authorization_record_types()
authorization_record_types = property(fget=get_authorization_record_types)
def get_authorization_search_record_types(self):
"""Pass through to provider get_authorization_search_record_types"""
# Implemented from kitosid template for -
# osid.resource.ResourceProfile.get_resource_record_types
return self._provider_manager.get_authorization_search_record_types()
authorization_search_record_types = property(fget=get_authorization_search_record_types)
def get_function_record_types(self):
"""Pass through to provider get_function_record_types"""
# Implemented from kitosid template for -
# osid.resource.ResourceProfile.get_resource_record_types
return self._provider_manager.get_function_record_types()
function_record_types = property(fget=get_function_record_types)
def get_function_search_record_types(self):
"""Pass through to provider get_function_search_record_types"""
# Implemented from kitosid template for -
# osid.resource.ResourceProfile.get_resource_record_types
return self._provider_manager.get_function_search_record_types()
function_search_record_types = property(fget=get_function_search_record_types)
def get_qualifier_record_types(self):
"""Pass through to provider get_qualifier_record_types"""
# Implemented from kitosid template for -
# osid.resource.ResourceProfile.get_resource_record_types
return self._provider_manager.get_qualifier_record_types()
qualifier_record_types = property(fget=get_qualifier_record_types)
def get_qualifier_search_record_types(self):
"""Pass through to provider get_qualifier_search_record_types"""
# Implemented from kitosid template for -
# osid.resource.ResourceProfile.get_resource_record_types
return self._provider_manager.get_qualifier_search_record_types()
qualifier_search_record_types = property(fget=get_qualifier_search_record_types)
def get_vault_record_types(self):
"""Pass through to provider get_vault_record_types"""
# Implemented from kitosid template for -
# osid.resource.ResourceProfile.get_resource_record_types
return self._provider_manager.get_vault_record_types()
vault_record_types = property(fget=get_vault_record_types)
def get_vault_search_record_types(self):
"""Pass through to provider get_vault_search_record_types"""
# Implemented from kitosid template for -
# osid.resource.ResourceProfile.get_resource_record_types
return self._provider_manager.get_vault_search_record_types()
vault_search_record_types = property(fget=get_vault_search_record_types)
def get_authorization_condition_record_types(self):
"""Pass through to provider get_authorization_condition_record_types"""
# Implemented from kitosid template for -
# osid.resource.ResourceProfile.get_resource_record_types
return self._provider_manager.get_authorization_condition_record_types()
authorization_condition_record_types = property(fget=get_authorization_condition_record_types)
class AuthorizationManager(osid.OsidManager, osid.OsidSession, AuthorizationProfile, authorization_managers.AuthorizationManager):
"""AuthorizationManager convenience adapter including related Session methods."""
def __init__(self, proxy=None):
self._runtime = None
self._provider_manager = None
self._provider_sessions = dict()
self._session_management = AUTOMATIC
self._vault_view = DEFAULT
# This is to initialize self._proxy
osid.OsidSession.__init__(self, proxy)
self._sub_package_provider_managers = dict()
def _set_vault_view(self, session):
"""Sets the underlying vault view to match current view"""
if self._vault_view == COMPARATIVE:
try:
session.use_comparative_vault_view()
except AttributeError:
pass
else:
try:
session.use_plenary_vault_view()
except AttributeError:
pass
def _get_provider_session(self, session_name, proxy=None):
"""Gets the session for the provider"""
agent_key = self._get_agent_key(proxy)
if session_name in self._provider_sessions[agent_key]:
return self._provider_sessions[agent_key][session_name]
else:
session = self._instantiate_session('get_' + session_name, self._proxy)
self._set_vault_view(session)
if self._session_management != DISABLED:
self._provider_sessions[agent_key][session_name] = session
return session
    def _get_sub_package_provider_manager(self, sub_package_name):
        from .primitives import Id
        if sub_package_name in self._sub_package_provider_managers:
            return self._sub_package_provider_managers[sub_package_name]
        config = self._runtime.get_configuration()
        parameter_id = Id('parameter:{0}ProviderImpl@dlkit_service'.format(sub_package_name))
provider_impl = config.get_value_by_parameter(parameter_id).get_string_value()
if self._proxy is None:
# need to add version argument
sub_package = self._runtime.get_manager(sub_package_name.upper(), provider_impl)
else:
# need to add version argument
sub_package = self._runtime.get_proxy_manager(sub_package_name.upper(), provider_impl)
self._sub_package_provider_managers[sub_package_name] = sub_package
return sub_package
def _get_sub_package_provider_session(self, sub_package, session_name, proxy=None):
"""Gets the session from a sub-package"""
agent_key = self._get_agent_key(proxy)
if session_name in self._provider_sessions[agent_key]:
return self._provider_sessions[agent_key][session_name]
else:
manager = self._get_sub_package_provider_manager(sub_package)
            try:
                session = self._instantiate_session('get_' + session_name + '_for_vault',
                                                    proxy=self._proxy,
                                                    manager=manager)
            except AttributeError:
                session = self._instantiate_session('get_' + session_name,
                                                    proxy=self._proxy,
                                                    manager=manager)
            self._set_vault_view(session)
if self._session_management != DISABLED:
self._provider_sessions[agent_key][session_name] = session
return session
def _instantiate_session(self, method_name, proxy=None, *args, **kwargs):
"""Instantiates a provider session"""
if 'manager' in kwargs:
session_class = getattr(kwargs['manager'], method_name)
del kwargs['manager']
else:
session_class = getattr(self._provider_manager, method_name)
if proxy is None:
try:
return session_class(bank_id=self._catalog_id, *args, **kwargs)
except AttributeError:
return session_class(*args, **kwargs)
else:
try:
return session_class(bank_id=self._catalog_id, proxy=proxy, *args, **kwargs)
except AttributeError:
return session_class(proxy=proxy, *args, **kwargs)
def initialize(self, runtime):
"""OSID Manager initialize"""
from .primitives import Id
if self._runtime is not None:
raise IllegalState('Manager has already been initialized')
self._runtime = runtime
config = runtime.get_configuration()
parameter_id = Id('parameter:authorizationProviderImpl@dlkit_service')
provider_impl = config.get_value_by_parameter(parameter_id).get_string_value()
if self._proxy is None:
# need to add version argument
self._provider_manager = runtime.get_manager('AUTHORIZATION', provider_impl)
else:
# need to add version argument
self._provider_manager = runtime.get_proxy_manager('AUTHORIZATION', provider_impl)
def close_sessions(self):
"""Close all sessions, unless session management is set to MANDATORY"""
if self._session_management != MANDATORY:
self._provider_sessions = dict()
def use_automatic_session_management(self):
"""Session state will be saved unless closed by consumers"""
self._session_management = AUTOMATIC
def use_mandatory_session_management(self):
"""Session state will be saved and can not be closed by consumers"""
self._session_management = MANDATORY
def disable_session_management(self):
"""Session state will never be saved"""
self._session_management = DISABLED
self.close_sessions()
def get_authorization_session(self, *args, **kwargs):
"""Pass through to provider get_authorization_session"""
# Implemented from kitosid template for -
# osid.resource.ResourceManager.get_resource_lookup_session_catalog_template
return self._provider_manager.get_authorization_session(*args, **kwargs)
authorization_session = property(fget=get_authorization_session)
def get_authorization_session_for_vault(self, *args, **kwargs):
"""Pass through to provider get_authorization_session_for_vault"""
# Implemented from kitosid template for -
# osid.resource.ResourceManager.get_resource_lookup_session_for_bin_catalog_template
return self._provider_manager.get_authorization_session_for_vault(*args, **kwargs)
def get_authorization_lookup_session(self, *args, **kwargs):
"""Pass through to provider get_authorization_lookup_session"""
# Implemented from kitosid template for -
# osid.resource.ResourceManager.get_resource_lookup_session_catalog_template
return self._provider_manager.get_authorization_lookup_session(*args, **kwargs)
authorization_lookup_session = property(fget=get_authorization_lookup_session)
def get_authorization_lookup_session_for_vault(self, *args, **kwargs):
"""Pass through to provider get_authorization_lookup_session_for_vault"""
# Implemented from kitosid template for -
# osid.resource.ResourceManager.get_resource_lookup_session_for_bin_catalog_template
return self._provider_manager.get_authorization_lookup_session_for_vault(*args, **kwargs)
def get_authorization_query_session(self, *args, **kwargs):
"""Pass through to provider get_authorization_query_session"""
# Implemented from kitosid template for -
# osid.resource.ResourceManager.get_resource_lookup_session_catalog_template
return self._provider_manager.get_authorization_query_session(*args, **kwargs)
authorization_query_session = property(fget=get_authorization_query_session)
def get_authorization_query_session_for_vault(self, *args, **kwargs):
"""Pass through to provider get_authorization_query_session_for_vault"""
# Implemented from kitosid template for -
# osid.resource.ResourceManager.get_resource_lookup_session_for_bin_catalog_template
return self._provider_manager.get_authorization_query_session_for_vault(*args, **kwargs)
def get_authorization_admin_session(self, *args, **kwargs):
"""Pass through to provider get_authorization_admin_session"""
# Implemented from kitosid template for -
# osid.resource.ResourceManager.get_resource_lookup_session_catalog_template
return self._provider_manager.get_authorization_admin_session(*args, **kwargs)
authorization_admin_session = property(fget=get_authorization_admin_session)
def get_authorization_admin_session_for_vault(self, *args, **kwargs):
"""Pass through to provider get_authorization_admin_session_for_vault"""
# Implemented from kitosid template for -
# osid.resource.ResourceManager.get_resource_lookup_session_for_bin_catalog_template
return self._provider_manager.get_authorization_admin_session_for_vault(*args, **kwargs)
def get_authorization_vault_session(self, *args, **kwargs):
"""Pass through to provider get_authorization_vault_session"""
# Implemented from kitosid template for -
# osid.resource.ResourceManager.get_resource_lookup_session_manager_template
return self._provider_manager.get_authorization_vault_session(*args, **kwargs)
authorization_vault_session = property(fget=get_authorization_vault_session)
def get_authorization_vault_assignment_session(self, *args, **kwargs):
"""Pass through to provider get_authorization_vault_assignment_session"""
# Implemented from kitosid template for -
# osid.resource.ResourceManager.get_resource_lookup_session_manager_template
return self._provider_manager.get_authorization_vault_assignment_session(*args, **kwargs)
authorization_vault_assignment_session = property(fget=get_authorization_vault_assignment_session)
def get_vault_lookup_session(self, *args, **kwargs):
"""Pass through to provider get_vault_lookup_session"""
# Implemented from kitosid template for -
# osid.resource.ResourceManager.get_resource_lookup_session_manager_template
return self._provider_manager.get_vault_lookup_session(*args, **kwargs)
vault_lookup_session = property(fget=get_vault_lookup_session)
def get_vault_query_session(self, *args, **kwargs):
"""Pass through to provider get_vault_query_session"""
# Implemented from kitosid template for -
# osid.resource.ResourceManager.get_resource_lookup_session_manager_template
return self._provider_manager.get_vault_query_session(*args, **kwargs)
vault_query_session = property(fget=get_vault_query_session)
def get_vault_admin_session(self, *args, **kwargs):
"""Pass through to provider get_vault_admin_session"""
# Implemented from kitosid template for -
# osid.resource.ResourceManager.get_resource_lookup_session_manager_template
return self._provider_manager.get_vault_admin_session(*args, **kwargs)
vault_admin_session = property(fget=get_vault_admin_session)
def get_vault_hierarchy_session(self, *args, **kwargs):
"""Pass through to provider get_vault_hierarchy_session"""
# Implemented from kitosid template for -
# osid.resource.ResourceManager.get_resource_lookup_session_manager_template
return self._provider_manager.get_vault_hierarchy_session(*args, **kwargs)
vault_hierarchy_session = property(fget=get_vault_hierarchy_session)
def get_vault_hierarchy_design_session(self, *args, **kwargs):
"""Pass through to provider get_vault_hierarchy_design_session"""
# Implemented from kitosid template for -
# osid.resource.ResourceManager.get_resource_lookup_session_manager_template
return self._provider_manager.get_vault_hierarchy_design_session(*args, **kwargs)
vault_hierarchy_design_session = property(fget=get_vault_hierarchy_design_session)
def get_authorization_batch_manager(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services')
authorization_batch_manager = property(fget=get_authorization_batch_manager)
def get_authorization_rules_manager(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services')
authorization_rules_manager = property(fget=get_authorization_rules_manager)
##
# The following methods are from osid.authorization.AuthorizationVaultSession
def use_comparative_vault_view(self):
"""Pass through to provider AuthorizationVaultSession.use_comparative_vault_view"""
self._vault_view = COMPARATIVE
# self._get_provider_session('authorization_vault_session') # To make sure the session is tracked
for session in self._get_provider_sessions():
try:
session.use_comparative_vault_view()
except AttributeError:
pass
def use_plenary_vault_view(self):
"""Pass through to provider AuthorizationVaultSession.use_plenary_vault_view"""
self._vault_view = PLENARY
# self._get_provider_session('authorization_vault_session') # To make sure the session is tracked
for session in self._get_provider_sessions():
try:
session.use_plenary_vault_view()
except AttributeError:
pass
def can_lookup_authorization_vault_mappings(self):
"""Pass through to provider AuthorizationVaultSession.can_lookup_authorization_vault_mappings"""
# Implemented from kitosid template for -
# osid.resource.ResourceBinSession.can_lookup_resource_bin_mappings
return self._get_provider_session('authorization_vault_session').can_lookup_authorization_vault_mappings()
def get_authorization_ids_by_vault(self, *args, **kwargs):
"""Pass through to provider AuthorizationVaultSession.get_authorization_ids_by_vault"""
# Implemented from kitosid template for -
# osid.resource.ResourceBinSession.get_resource_ids_by_bin
return self._get_provider_session('authorization_vault_session').get_authorization_ids_by_vault(*args, **kwargs)
def get_authorizations_by_vault(self, *args, **kwargs):
"""Pass through to provider AuthorizationVaultSession.get_authorizations_by_vault"""
# Implemented from kitosid template for -
# osid.resource.ResourceBinSession.get_resources_by_bin
return self._get_provider_session('authorization_vault_session').get_authorizations_by_vault(*args, **kwargs)
def get_authorizations_ids_by_vault(self, *args, **kwargs):
"""Pass through to provider AuthorizationVaultSession.get_authorizations_ids_by_vault"""
# Implemented from kitosid template for -
# osid.resource.ResourceBinSession.get_resource_ids_by_bin
return self._get_provider_session('authorization_vault_session').get_authorizations_ids_by_vault(*args, **kwargs)
def get_vault_ids_by_authorization(self, *args, **kwargs):
"""Pass through to provider AuthorizationVaultSession.get_vault_ids_by_authorization"""
# Implemented from kitosid template for -
# osid.resource.ResourceBinSession.get_bin_ids_by_resource
return self._get_provider_session('authorization_vault_session').get_vault_ids_by_authorization(*args, **kwargs)
def get_vault_by_authorization(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
##
# The following methods are from osid.authorization.AuthorizationVaultAssignmentSession
def can_assign_authorizations(self):
"""Pass through to provider AuthorizationVaultAssignmentSession.can_assign_authorizations"""
# Implemented from kitosid template for -
# osid.resource.ResourceBinAssignmentSession.can_assign_resources
return self._get_provider_session('authorization_vault_assignment_session').can_assign_authorizations()
def can_assign_authorizations_to_vault(self, *args, **kwargs):
"""Pass through to provider AuthorizationVaultAssignmentSession.can_assign_authorizations_to_vault"""
# Implemented from kitosid template for -
# osid.resource.ResourceBinAssignmentSession.can_assign_resources_to_bin
return self._get_provider_session('authorization_vault_assignment_session').can_assign_authorizations_to_vault(*args, **kwargs)
def get_assignable_vault_ids(self, *args, **kwargs):
"""Pass through to provider AuthorizationVaultAssignmentSession.get_assignable_vault_ids"""
# Implemented from kitosid template for -
# osid.resource.ResourceBinAssignmentSession.get_assignable_bin_ids
return self._get_provider_session('authorization_vault_assignment_session').get_assignable_vault_ids(*args, **kwargs)
def get_assignable_vault_ids_for_authorization(self, *args, **kwargs):
"""Pass through to provider AuthorizationVaultAssignmentSession.get_assignable_vault_ids_for_authorization"""
# Implemented from kitosid template for -
# osid.resource.ResourceBinAssignmentSession.get_assignable_bin_ids_for_resource
return self._get_provider_session('authorization_vault_assignment_session').get_assignable_vault_ids_for_authorization(*args, **kwargs)
def assign_authorization_to_vault(self, *args, **kwargs):
"""Pass through to provider AuthorizationVaultAssignmentSession.assign_authorization_to_vault"""
# Implemented from kitosid template for -
# osid.resource.ResourceBinAssignmentSession.assign_resource_to_bin
self._get_provider_session('authorization_vault_assignment_session').assign_authorization_to_vault(*args, **kwargs)
def unassign_authorization_from_vault(self, *args, **kwargs):
"""Pass through to provider AuthorizationVaultAssignmentSession.unassign_authorization_from_vault"""
# Implemented from kitosid template for -
# osid.resource.ResourceBinAssignmentSession.unassign_resource_from_bin
self._get_provider_session('authorization_vault_assignment_session').unassign_authorization_from_vault(*args, **kwargs)
def reassign_authorization_to_vault(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
##
# The following methods are from osid.authorization.VaultLookupSession
def can_lookup_vaults(self):
"""Pass through to provider VaultLookupSession.can_lookup_vaults"""
# Implemented from kitosid template for -
# osid.resource.BinLookupSession.can_lookup_bins_template
return self._get_provider_session('vault_lookup_session').can_lookup_vaults()
def get_vault(self, *args, **kwargs):
"""Pass through to provider VaultLookupSession.get_vault"""
# Implemented from kitosid template for -
# osid.resource.BinLookupSession.get_bin
return Vault(
self._provider_manager,
self._get_provider_session('vault_lookup_session').get_vault(*args, **kwargs),
self._runtime,
self._proxy)
def get_vaults_by_ids(self, *args, **kwargs):
"""Pass through to provider VaultLookupSession.get_vaults_by_ids"""
# Implemented from kitosid template for -
# osid.resource.BinLookupSession.get_bins_by_ids
catalogs = self._get_provider_session('vault_lookup_session').get_vaults_by_ids(*args, **kwargs)
cat_list = []
for cat in catalogs:
cat_list.append(Vault(self._provider_manager, cat, self._runtime, self._proxy))
return VaultList(cat_list)
def get_vaults_by_genus_type(self, *args, **kwargs):
"""Pass through to provider VaultLookupSession.get_vaults_by_genus_type"""
# Implemented from kitosid template for -
# osid.resource.BinLookupSession.get_bins_by_genus_type
catalogs = self._get_provider_session('vault_lookup_session').get_vaults_by_genus_type(*args, **kwargs)
cat_list = []
for cat in catalogs:
cat_list.append(Vault(self._provider_manager, cat, self._runtime, self._proxy))
return VaultList(cat_list)
def get_vaults_by_parent_genus_type(self, *args, **kwargs):
"""Pass through to provider VaultLookupSession.get_vaults_by_parent_genus_type"""
# Implemented from kitosid template for -
# osid.resource.BinLookupSession.get_bins_by_parent_genus_type
catalogs = self._get_provider_session('vault_lookup_session').get_vaults_by_parent_genus_type(*args, **kwargs)
cat_list = []
for cat in catalogs:
cat_list.append(Vault(self._provider_manager, cat, self._runtime, self._proxy))
return VaultList(cat_list)
def get_vaults_by_record_type(self, *args, **kwargs):
"""Pass through to provider VaultLookupSession.get_vaults_by_record_type"""
# Implemented from kitosid template for -
# osid.resource.BinLookupSession.get_bins_by_record_type
catalogs = self._get_provider_session('vault_lookup_session').get_vaults_by_record_type(*args, **kwargs)
cat_list = []
for cat in catalogs:
cat_list.append(Vault(self._provider_manager, cat, self._runtime, self._proxy))
return VaultList(cat_list)
def get_vaults_by_provider(self, *args, **kwargs):
"""Pass through to provider VaultLookupSession.get_vaults_by_provider"""
# Implemented from kitosid template for -
# osid.resource.BinLookupSession.get_bins_by_provider
catalogs = self._get_provider_session('vault_lookup_session').get_vaults_by_provider(*args, **kwargs)
cat_list = []
for cat in catalogs:
cat_list.append(Vault(self._provider_manager, cat, self._runtime, self._proxy))
return VaultList(cat_list)
def get_vaults(self):
"""Pass through to provider VaultLookupSession.get_vaults"""
# Implemented from kitosid template for -
# osid.resource.BinLookupSession.get_bins_template
catalogs = self._get_provider_session('vault_lookup_session').get_vaults()
cat_list = []
for cat in catalogs:
cat_list.append(Vault(self._provider_manager, cat, self._runtime, self._proxy))
return VaultList(cat_list)
vaults = property(fget=get_vaults)
##
# The following methods are from osid.authorization.VaultQuerySession
def can_search_vaults(self):
"""Pass through to provider VaultQuerySession.can_search_vaults"""
# Implemented from kitosid template for -
# osid.resource.BinQuerySession.can_search_bins_template
return self._get_provider_session('vault_query_session').can_search_vaults()
def get_vault_query(self):
"""Pass through to provider VaultQuerySession.get_vault_query"""
# Implemented from kitosid template for -
# osid.resource.BinQuerySession.get_bin_query_template
return self._get_provider_session('vault_query_session').get_vault_query()
vault_query = property(fget=get_vault_query)
def get_vaults_by_query(self, *args, **kwargs):
"""Pass through to provider VaultQuerySession.get_vaults_by_query"""
# Implemented from kitosid template for -
# osid.resource.BinQuerySession.get_bins_by_query_template
return self._get_provider_session('vault_query_session').get_vaults_by_query(*args, **kwargs)
##
# The following methods are from osid.authorization.VaultAdminSession
def can_create_vaults(self):
"""Pass through to provider VaultAdminSession.can_create_vaults"""
# Implemented from kitosid template for -
# osid.resource.BinAdminSession.can_create_bins
return self._get_provider_session('vault_admin_session').can_create_vaults()
def can_create_vault_with_record_types(self, *args, **kwargs):
"""Pass through to provider VaultAdminSession.can_create_vault_with_record_types"""
# Implemented from kitosid template for -
# osid.resource.BinAdminSession.can_create_bin_with_record_types
return self._get_provider_session('vault_admin_session').can_create_vault_with_record_types(*args, **kwargs)
def get_vault_form_for_create(self, *args, **kwargs):
"""Pass through to provider VaultAdminSession.get_vault_form_for_create"""
# Implemented from kitosid template for -
# osid.resource.BinAdminSession.get_bin_form_for_create
return self._get_provider_session('vault_admin_session').get_vault_form_for_create(*args, **kwargs)
def create_vault(self, *args, **kwargs):
"""Pass through to provider VaultAdminSession.create_vault"""
# Implemented from kitosid template for -
# osid.resource.BinAdminSession.create_bin
return Vault(
self._provider_manager,
self._get_provider_session('vault_admin_session').create_vault(*args, **kwargs),
self._runtime,
self._proxy)
def can_update_vaults(self):
"""Pass through to provider VaultAdminSession.can_update_vaults"""
# Implemented from kitosid template for -
# osid.resource.BinAdminSession.can_update_bins
return self._get_provider_session('vault_admin_session').can_update_vaults()
def get_vault_form_for_update(self, *args, **kwargs):
"""Pass through to provider VaultAdminSession.get_vault_form_for_update"""
# Implemented from kitosid template for -
# osid.resource.BinAdminSession.get_bin_form_for_update
return self._get_provider_session('vault_admin_session').get_vault_form_for_update(*args, **kwargs)
def get_vault_form(self, *args, **kwargs):
"""Pass through to provider VaultAdminSession.get_vault_form_for_update"""
# Implemented from kitosid template for -
# osid.resource.BinAdminSession.get_bin_form_for_update_template
# Heuristic dispatch -- a trailing list argument or a 'vault_record_types' kwarg
# signals a create form; otherwise assume an update form. This may be a bit sketchy. Time will tell.
if (args and isinstance(args[-1], list)) or 'vault_record_types' in kwargs:
return self.get_vault_form_for_create(*args, **kwargs)
else:
return self.get_vault_form_for_update(*args, **kwargs)
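# The dispatch above can be exercised roughly as follows (a hedged sketch;
# `vault_admin`, `vault_id`, and `a_record_type` are hypothetical names):
#     form = vault_admin.get_vault_form(vault_id)                    # -> update form
#     form = vault_admin.get_vault_form([a_record_type])             # -> create form
#     form = vault_admin.get_vault_form(vault_record_types=[])       # -> create form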
def update_vault(self, *args, **kwargs):
"""Pass through to provider VaultAdminSession.update_vault"""
# Implemented from kitosid template for -
# osid.resource.BinAdminSession.update_bin
# OSID spec does not require returning updated catalog
return Vault(
self._provider_manager,
self._get_provider_session('vault_admin_session').update_vault(*args, **kwargs),
self._runtime,
self._proxy)
def save_vault(self, vault_form, *args, **kwargs):
"""Pass through to provider VaultAdminSession.update_vault"""
# Implemented from kitosid template for -
# osid.resource.BinAdminSession.update_bin
if vault_form.is_for_update():
return self.update_vault(vault_form, *args, **kwargs)
else:
return self.create_vault(vault_form, *args, **kwargs)
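# Illustrative round-trip for save_vault (a sketch; `vault_admin` is a
# hypothetical manager instance and the attribute names are assumptions):
#     create_form = vault_admin.get_vault_form_for_create([])
#     new_vault = vault_admin.save_vault(create_form)      # not for update -> create_vault
#     update_form = vault_admin.get_vault_form_for_update(new_vault.get_id())
#     vault_admin.save_vault(update_form)                  # is_for_update() -> update_vault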
def can_delete_vaults(self):
"""Pass through to provider VaultAdminSession.can_delete_vaults"""
# Implemented from kitosid template for -
# osid.resource.BinAdminSession.can_delete_bins
return self._get_provider_session('vault_admin_session').can_delete_vaults()
def delete_vault(self, *args, **kwargs):
"""Pass through to provider VaultAdminSession.delete_vault"""
# Implemented from kitosid template for -
# osid.resource.BinAdminSession.delete_bin
self._get_provider_session('vault_admin_session').delete_vault(*args, **kwargs)
def can_manage_vault_aliases(self):
"""Pass through to provider VaultAdminSession.can_manage_vault_aliases"""
# Implemented from kitosid template for -
# osid.resource.ResourceAdminSession.can_manage_resource_aliases_template
return self._get_provider_session('vault_admin_session').can_manage_vault_aliases()
def alias_vault(self, *args, **kwargs):
"""Pass through to provider VaultAdminSession.alias_vault"""
# Implemented from kitosid template for -
# osid.resource.BinAdminSession.alias_bin
self._get_provider_session('vault_admin_session').alias_vault(*args, **kwargs)
##
# The following methods are from osid.authorization.VaultHierarchySession
def get_vault_hierarchy_id(self):
"""Pass through to provider VaultHierarchySession.get_vault_hierarchy_id"""
# Implemented from kitosid template for -
# osid.resource.BinHierarchySession.get_bin_hierarchy_id
return self._get_provider_session('vault_hierarchy_session').get_vault_hierarchy_id()
vault_hierarchy_id = property(fget=get_vault_hierarchy_id)
def get_vault_hierarchy(self):
"""Pass through to provider VaultHierarchySession.get_vault_hierarchy"""
# Implemented from kitosid template for -
# osid.resource.BinHierarchySession.get_bin_hierarchy
return self._get_provider_session('vault_hierarchy_session').get_vault_hierarchy()
vault_hierarchy = property(fget=get_vault_hierarchy)
def can_access_vault_hierarchy(self):
"""Pass through to provider VaultHierarchySession.can_access_vault_hierarchy"""
# Implemented from kitosid template for -
# osid.resource.BinHierarchySession.can_access_bin_hierarchy
return self._get_provider_session('vault_hierarchy_session').can_access_vault_hierarchy()
def get_root_vault_ids(self):
"""Pass through to provider VaultHierarchySession.get_root_vault_ids"""
# Implemented from kitosid template for -
# osid.resource.BinHierarchySession.get_root_bin_ids
return self._get_provider_session('vault_hierarchy_session').get_root_vault_ids()
root_vault_ids = property(fget=get_root_vault_ids)
def get_root_vaults(self):
"""Pass through to provider VaultHierarchySession.get_root_vaults"""
# Implemented from kitosid template for -
# osid.resource.BinHierarchySession.get_root_bins
return self._get_provider_session('vault_hierarchy_session').get_root_vaults()
root_vaults = property(fget=get_root_vaults)
def has_parent_vaults(self, *args, **kwargs):
"""Pass through to provider VaultHierarchySession.has_parent_vaults"""
# Implemented from kitosid template for -
# osid.resource.BinHierarchySession.has_parent_bins
return self._get_provider_session('vault_hierarchy_session').has_parent_vaults(*args, **kwargs)
def is_parent_of_vault(self, *args, **kwargs):
"""Pass through to provider VaultHierarchySession.is_parent_of_vault"""
# Implemented from kitosid template for -
# osid.resource.BinHierarchySession.is_parent_of_bin
return self._get_provider_session('vault_hierarchy_session').is_parent_of_vault(*args, **kwargs)
def get_parent_vault_ids(self, *args, **kwargs):
"""Pass through to provider VaultHierarchySession.get_parent_vault_ids"""
# Implemented from kitosid template for -
# osid.resource.BinHierarchySession.get_parent_bin_ids
return self._get_provider_session('vault_hierarchy_session').get_parent_vault_ids(*args, **kwargs)
def get_parent_vaults(self, *args, **kwargs):
"""Pass through to provider VaultHierarchySession.get_parent_vaults"""
# Implemented from kitosid template for -
# osid.resource.BinHierarchySession.get_parent_bins
return self._get_provider_session('vault_hierarchy_session').get_parent_vaults(*args, **kwargs)
def is_ancestor_of_vault(self, *args, **kwargs):
"""Pass through to provider VaultHierarchySession.is_ancestor_of_vault"""
# Implemented from kitosid template for -
# osid.resource.BinHierarchySession.is_ancestor_of_bin
return self._get_provider_session('vault_hierarchy_session').is_ancestor_of_vault(*args, **kwargs)
def has_child_vaults(self, *args, **kwargs):
"""Pass through to provider VaultHierarchySession.has_child_vaults"""
# Implemented from kitosid template for -
# osid.resource.BinHierarchySession.has_child_bins
return self._get_provider_session('vault_hierarchy_session').has_child_vaults(*args, **kwargs)
def is_child_of_vault(self, *args, **kwargs):
"""Pass through to provider VaultHierarchySession.is_child_of_vault"""
# Implemented from kitosid template for -
# osid.resource.BinHierarchySession.is_child_of_bin
return self._get_provider_session('vault_hierarchy_session').is_child_of_vault(*args, **kwargs)
def get_child_vault_ids(self, *args, **kwargs):
"""Pass through to provider VaultHierarchySession.get_child_vault_ids"""
# Implemented from kitosid template for -
# osid.resource.BinHierarchySession.get_child_bin_ids
return self._get_provider_session('vault_hierarchy_session').get_child_vault_ids(*args, **kwargs)
def get_child_vaults(self, *args, **kwargs):
"""Pass through to provider VaultHierarchySession.get_child_vaults"""
# Implemented from kitosid template for -
# osid.resource.BinHierarchySession.get_child_bins
return self._get_provider_session('vault_hierarchy_session').get_child_vaults(*args, **kwargs)
def is_descendant_of_vault(self, *args, **kwargs):
"""Pass through to provider VaultHierarchySession.is_descendant_of_vault"""
# Implemented from kitosid template for -
# osid.resource.BinHierarchySession.is_descendant_of_bin
return self._get_provider_session('vault_hierarchy_session').is_descendant_of_vault(*args, **kwargs)
def get_vault_node_ids(self, *args, **kwargs):
"""Pass through to provider VaultHierarchySession.get_vault_node_ids"""
# Implemented from kitosid template for -
# osid.resource.BinHierarchySession.get_bin_node_ids
return self._get_provider_session('vault_hierarchy_session').get_vault_node_ids(*args, **kwargs)
def get_vault_nodes(self, *args, **kwargs):
"""Pass through to provider VaultHierarchySession.get_vault_nodes"""
# Implemented from kitosid template for -
# osid.resource.BinHierarchySession.get_bin_nodes
return self._get_provider_session('vault_hierarchy_session').get_vault_nodes(*args, **kwargs)
##
# The following methods are from osid.authorization.VaultHierarchyDesignSession
def can_modify_vault_hierarchy(self):
"""Pass through to provider VaultHierarchyDesignSession.can_modify_vault_hierarchy"""
# Implemented from kitosid template for -
# osid.resource.BinHierarchyDesignSession.can_modify_bin_hierarchy
return self._get_provider_session('vault_hierarchy_design_session').can_modify_vault_hierarchy()
def create_vault_hierarchy(self, *args, **kwargs):
"""Pass through to provider VaultHierarchyDesignSession.can_modify_vault_hierarchy"""
# Patched in by cjshaw@mit.edu, Jul 23, 2014, added by birdland to template on Aug 8, 2014
# Not part of the specs for catalog hierarchy design sessions, but may belong in a hierarchy service instead
# Will not return an actual object, just JSON,
# since a VaultHierarchy does not seem to be an OSID thing.
return self._get_provider_session('vault_hierarchy_design_session').create_vault_hierarchy(*args, **kwargs)
def delete_vault_hierarchy(self, *args, **kwargs):
"""Pass through to provider VaultHierarchyDesignSession.can_modify_vault_hierarchy"""
# Patched in by cjshaw@mit.edu, Jul 23, 2014, added by birdland to template on Aug 8, 2014
# Not part of the specs for catalog hierarchy design sessions, but may belong in a hierarchy service instead
# Will not return an actual object, just JSON,
# since a VaultHierarchy does not seem to be an OSID thing.
return self._get_provider_session('vault_hierarchy_design_session').delete_vault_hierarchy(*args, **kwargs)
def add_root_vault(self, *args, **kwargs):
"""Pass through to provider VaultHierarchyDesignSession.add_root_vault"""
# Implemented from kitosid template for -
# osid.resource.BinHierarchyDesignSession.add_root_bin
self._get_provider_session('vault_hierarchy_design_session').add_root_vault(*args, **kwargs)
def remove_root_vault(self, *args, **kwargs):
"""Pass through to provider VaultHierarchyDesignSession.remove_root_vault"""
# Implemented from kitosid template for -
# osid.resource.BinHierarchyDesignSession.remove_root_bin
self._get_provider_session('vault_hierarchy_design_session').remove_root_vault(*args, **kwargs)
def add_child_vault(self, *args, **kwargs):
"""Pass through to provider VaultHierarchyDesignSession.add_child_vault"""
# Implemented from kitosid template for -
# osid.resource.BinHierarchyDesignSession.add_child_bin
self._get_provider_session('vault_hierarchy_design_session').add_child_vault(*args, **kwargs)
def remove_child_vault(self, *args, **kwargs):
"""Pass through to provider VaultHierarchyDesignSession.remove_child_vault"""
# Implemented from kitosid template for -
# osid.resource.BinHierarchyDesignSession.remove_child_bin
self._get_provider_session('vault_hierarchy_design_session').remove_child_vault(*args, **kwargs)
def remove_child_vaults(self, *args, **kwargs):
"""Pass through to provider VaultHierarchyDesignSession.remove_child_vaults"""
# Implemented from kitosid template for -
# osid.resource.BinHierarchyDesignSession.remove_child_bins
self._get_provider_session('vault_hierarchy_design_session').remove_child_vaults(*args, **kwargs)
class AuthorizationProxyManager(osid.OsidProxyManager, AuthorizationProfile, authorization_managers.AuthorizationProxyManager):
"""AuthorizationProxyManager convenience adapter including related Session methods."""
def get_authorization_session(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
def get_authorization_session_for_vault(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
def get_authorization_lookup_session(self, *args, **kwargs):
"""Sends control to Manager"""
# Implemented from kitosid template for -
# osid.resource.ResourceProxyManager.get_resource_lookup_session_template
return AuthorizationManager.get_authorization_lookup_session(*args, **kwargs)
def get_authorization_lookup_session_for_vault(self, *args, **kwargs):
"""Sends control to Manager"""
# Implemented from kitosid template for -
# osid.resource.ResourceProxyManager.get_resource_lookup_session_for_bin_template
return AuthorizationManager.get_authorization_lookup_session_for_vault(*args, **kwargs)
def get_authorization_query_session(self, *args, **kwargs):
"""Sends control to Manager"""
# Implemented from kitosid template for -
# osid.resource.ResourceProxyManager.get_resource_lookup_session_template
return AuthorizationManager.get_authorization_query_session(*args, **kwargs)
def get_authorization_query_session_for_vault(self, *args, **kwargs):
"""Sends control to Manager"""
# Implemented from kitosid template for -
# osid.resource.ResourceProxyManager.get_resource_lookup_session_for_bin_template
return AuthorizationManager.get_authorization_query_session_for_vault(*args, **kwargs)
def get_authorization_admin_session(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
def get_authorization_admin_session_for_vault(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
def get_authorization_vault_session(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
def get_authorization_vault_assignment_session(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
def get_vault_lookup_session(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
def get_vault_query_session(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
def get_vault_admin_session(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
def get_vault_hierarchy_session(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
def get_vault_hierarchy_design_session(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
def get_authorization_batch_proxy_manager(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services')
authorization_batch_proxy_manager = property(fget=get_authorization_batch_proxy_manager)
def get_authorization_rules_proxy_manager(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services')
authorization_rules_proxy_manager = property(fget=get_authorization_rules_proxy_manager)
class Vault(abc_authorization_objects.Vault, osid.OsidSession, osid.OsidCatalog):
"""Vault convenience adapter including related Session methods."""
# WILL THIS EVER BE CALLED DIRECTLY - OUTSIDE OF A MANAGER?
def __init__(self, provider_manager, catalog, runtime, proxy, **kwargs):
self._provider_manager = provider_manager
self._catalog = catalog
self._runtime = runtime
osid.OsidObject.__init__(self, self._catalog) # This is to initialize self._object
osid.OsidSession.__init__(self, proxy) # This is to initialize self._proxy
self._catalog_id = catalog.get_id()
self._provider_sessions = kwargs
self._session_management = AUTOMATIC
self._vault_view = DEFAULT
self._object_views = dict()
self._operable_views = dict()
self._containable_views = dict()
def _set_vault_view(self, session):
"""Sets the underlying vault view to match current view"""
if self._vault_view == FEDERATED:
try:
session.use_federated_vault_view()
except AttributeError:
pass
else:
try:
session.use_isolated_vault_view()
except AttributeError:
pass
def _set_object_view(self, session):
"""Sets the underlying object views to match current view"""
for obj_name in self._object_views:
if self._object_views[obj_name] == PLENARY:
try:
getattr(session, 'use_plenary_' + obj_name + '_view')()
except AttributeError:
pass
else:
try:
getattr(session, 'use_comparative_' + obj_name + '_view')()
except AttributeError:
pass
def _set_operable_view(self, session):
"""Sets the underlying operable views to match current view"""
for obj_name in self._operable_views:
if self._operable_views[obj_name] == ACTIVE:
try:
getattr(session, 'use_active_' + obj_name + '_view')()
except AttributeError:
pass
else:
try:
getattr(session, 'use_any_status_' + obj_name + '_view')()
except AttributeError:
pass
def _set_containable_view(self, session):
"""Sets the underlying containable views to match current view"""
for obj_name in self._containable_views:
if self._containable_views[obj_name] == SEQUESTERED:
try:
getattr(session, 'use_sequestered_' + obj_name + '_view')()
except AttributeError:
pass
else:
try:
getattr(session, 'use_unsequestered_' + obj_name + '_view')()
except AttributeError:
pass
def _get_provider_session(self, session_name):
"""Returns the requested provider session.
Instantiates a new one if the named session is not already known.
"""
agent_key = self._get_agent_key()
if session_name in self._provider_sessions[agent_key]:
return self._provider_sessions[agent_key][session_name]
else:
session_class = getattr(self._provider_manager, 'get_' + session_name + '_for_vault')
if self._proxy is None:
if 'notification_session' in session_name:
# Is there something else we should do about the receiver field?
session = session_class('fake receiver', self._catalog.get_id())
else:
session = session_class(self._catalog.get_id())
else:
if 'notification_session' in session_name:
# Is there something else we should do about the receiver field?
session = session_class('fake receiver', self._catalog.get_id(), self._proxy)
else:
session = session_class(self._catalog.get_id(), self._proxy)
self._set_vault_view(session)
self._set_object_view(session)
self._set_operable_view(session)
self._set_containable_view(session)
if self._session_management != DISABLED:
self._provider_sessions[agent_key][session_name] = session
return session
def get_vault_id(self):
"""Gets the Id of this vault."""
return self._catalog_id
def get_vault(self):
"""Strange little method to assure conformance for inherited Sessions."""
return self
    def __getattr__(self, name):
        if '_catalog' in self.__dict__:
            try:
                return getattr(self._catalog, name)
            except AttributeError:
                pass
        raise AttributeError(name)
def close_sessions(self):
"""Close all sessions currently being managed by this Manager to save memory."""
if self._session_management != MANDATORY:
self._provider_sessions = dict()
else:
raise IllegalState()
def use_automatic_session_management(self):
"""Session state will be saved until closed by consumers."""
self._session_management = AUTOMATIC
    def use_mandatory_session_management(self):
        """Session state will always be saved and cannot be closed by consumers."""
        self._session_management = MANDATORY
def disable_session_management(self):
"""Session state will never be saved."""
self._session_management = DISABLED
self.close_sessions()
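The three session-management modes (AUTOMATIC, MANDATORY, DISABLED) govern whether `_get_provider_session` caches the provider sessions it creates and whether `close_sessions` may clear them. A dependency-free sketch of the same pattern — the `SessionCache` class and its method names are illustrative, not part of dlkit:

```python
AUTOMATIC, MANDATORY, DISABLED = 'automatic', 'mandatory', 'disabled'


class SessionCache:
    """Lazily create provider sessions; cache them unless caching is disabled."""

    def __init__(self, factory):
        self._factory = factory      # callable mapping a session name to a session
        self._sessions = {}
        self._management = AUTOMATIC

    def get(self, name):
        if name in self._sessions:
            return self._sessions[name]
        session = self._factory(name)
        if self._management != DISABLED:
            self._sessions[name] = session   # remember it for later calls
        return session

    def close(self):
        if self._management == MANDATORY:
            raise RuntimeError('sessions cannot be closed under mandatory management')
        self._sessions = {}

    def use_mandatory(self):
        self._management = MANDATORY

    def disable(self):
        self._management = DISABLED
        self.close()
```

Under AUTOMATIC the second `get('lookup')` returns the cached object; after `disable()` every call builds a fresh session; under MANDATORY `close()` raises, mirroring the `IllegalState` above.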
def get_vault_record(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
##
# The following methods are from osid.authorization.AuthorizationSession
def can_access_authorizations(self):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services')
def is_authorized(self, *args, **kwargs):
"""Pass through to provider AuthorizationSession.is_authorized"""
return self._get_provider_session('authorization_session').is_authorized(*args, **kwargs)
def get_authorization_condition(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
def is_authorized_on_condition(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
##
# The following methods are from osid.authorization.AuthorizationLookupSession
def can_lookup_authorizations(self):
"""Pass through to provider AuthorizationLookupSession.can_lookup_authorizations"""
# Implemented from kitosid template for -
# osid.resource.ResourceLookupSession.can_lookup_resources_template
return self._get_provider_session('authorization_lookup_session').can_lookup_authorizations()
def use_comparative_authorization_view(self):
"""Pass through to provider AuthorizationLookupSession.use_comparative_authorization_view"""
self._object_views['authorization'] = COMPARATIVE
# self._get_provider_session('authorization_lookup_session') # To make sure the session is tracked
for session in self._get_provider_sessions():
try:
session.use_comparative_authorization_view()
except AttributeError:
pass
def use_plenary_authorization_view(self):
"""Pass through to provider AuthorizationLookupSession.use_plenary_authorization_view"""
self._object_views['authorization'] = PLENARY
# self._get_provider_session('authorization_lookup_session') # To make sure the session is tracked
for session in self._get_provider_sessions():
try:
session.use_plenary_authorization_view()
except AttributeError:
pass
def use_federated_vault_view(self):
"""Pass through to provider AuthorizationLookupSession.use_federated_vault_view"""
self._vault_view = FEDERATED
# self._get_provider_session('authorization_lookup_session') # To make sure the session is tracked
for session in self._get_provider_sessions():
try:
session.use_federated_vault_view()
except AttributeError:
pass
def use_isolated_vault_view(self):
"""Pass through to provider AuthorizationLookupSession.use_isolated_vault_view"""
self._vault_view = ISOLATED
# self._get_provider_session('authorization_lookup_session') # To make sure the session is tracked
for session in self._get_provider_sessions():
try:
session.use_isolated_vault_view()
except AttributeError:
pass
def use_effective_authorization_view(self):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services')
def use_any_effective_authorization_view(self):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services')
def use_implicit_authorization_view(self):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services')
def use_explicit_authorization_view(self):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services')
def get_authorization(self, *args, **kwargs):
"""Pass through to provider AuthorizationLookupSession.get_authorization"""
# Implemented from kitosid template for -
# osid.resource.ResourceLookupSession.get_resource_template
return self._get_provider_session('authorization_lookup_session').get_authorization(*args, **kwargs)
def get_authorizations_by_ids(self, *args, **kwargs):
"""Pass through to provider AuthorizationLookupSession.get_authorizations_by_ids"""
# Implemented from kitosid template for -
# osid.resource.ResourceLookupSession.get_resources_by_ids_template
return self._get_provider_session('authorization_lookup_session').get_authorizations_by_ids(*args, **kwargs)
def get_authorizations_by_genus_type(self, *args, **kwargs):
"""Pass through to provider AuthorizationLookupSession.get_authorizations_by_genus_type"""
# Implemented from kitosid template for -
# osid.resource.ResourceLookupSession.get_resources_by_genus_type_template
return self._get_provider_session('authorization_lookup_session').get_authorizations_by_genus_type(*args, **kwargs)
def get_authorizations_by_parent_genus_type(self, *args, **kwargs):
"""Pass through to provider AuthorizationLookupSession.get_authorizations_by_parent_genus_type"""
# Implemented from kitosid template for -
# osid.resource.ResourceLookupSession.get_resources_by_parent_genus_type_template
return self._get_provider_session('authorization_lookup_session').get_authorizations_by_parent_genus_type(*args, **kwargs)
def get_authorizations_by_record_type(self, *args, **kwargs):
"""Pass through to provider AuthorizationLookupSession.get_authorizations_by_record_type"""
# Implemented from kitosid template for -
# osid.resource.ResourceLookupSession.get_resources_by_record_type_template
return self._get_provider_session('authorization_lookup_session').get_authorizations_by_record_type(*args, **kwargs)
def get_authorizations_on_date(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
def get_authorizations_for_resource(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
def get_authorizations_for_resource_on_date(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
def get_authorizations_for_agent(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
def get_authorizations_for_agent_on_date(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
def get_authorizations_for_function(self, *args, **kwargs):
"""Pass through to provider AuthorizationLookupSession.get_authorizations_for_function"""
# Implemented from kitosid template for -
# osid.resource.ActivityLookupSession.get_activities_for_objective
return self._get_provider_session('authorization_lookup_session').get_authorizations_for_function(*args, **kwargs)
def get_authorizations_for_function_on_date(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
def get_authorizations_for_resource_and_function(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
def get_authorizations_for_resource_and_function_on_date(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
def get_authorizations_for_agent_and_function(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
def get_authorizations_for_agent_and_function_on_date(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
def get_authorizations_by_qualifier(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
def get_explicit_authorization(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
def get_authorizations(self):
"""Pass through to provider AuthorizationLookupSession.get_authorizations"""
# Implemented from kitosid template for -
# osid.resource.ResourceLookupSession.get_resources_template
return self._get_provider_session('authorization_lookup_session').get_authorizations()
authorizations = property(fget=get_authorizations)
##
# The following methods are from osid.authorization.AuthorizationQuerySession
def can_search_authorizations(self):
"""Pass through to provider AuthorizationQuerySession.can_search_authorizations"""
# Implemented from kitosid template for -
# osid.resource.ResourceQuerySession.can_search_resources_template
return self._get_provider_session('authorization_query_session').can_search_authorizations()
def get_authorization_query(self):
"""Pass through to provider AuthorizationQuerySession.get_authorization_query"""
# Implemented from kitosid template for -
# osid.resource.ResourceQuerySession.get_item_query_template
return self._get_provider_session('authorization_query_session').get_authorization_query()
authorization_query = property(fget=get_authorization_query)
def get_authorizations_by_query(self, *args, **kwargs):
"""Pass through to provider AuthorizationQuerySession.get_authorizations_by_query"""
# Implemented from kitosid template for -
# osid.resource.ResourceQuerySession.get_items_by_query_template
return self._get_provider_session('authorization_query_session').get_authorizations_by_query(*args, **kwargs)
##
# The following methods are from osid.authorization.AuthorizationAdminSession
def can_create_authorizations(self):
"""Pass through to provider AuthorizationAdminSession.can_create_authorizations"""
# Implemented from kitosid template for -
# osid.resource.ResourceAdminSession.can_create_resources
return self._get_provider_session('authorization_admin_session').can_create_authorizations()
def can_create_authorization_with_record_types(self, *args, **kwargs):
"""Pass through to provider AuthorizationAdminSession.can_create_authorization_with_record_types"""
# Implemented from kitosid template for -
# osid.resource.ResourceAdminSession.can_create_resource_with_record_types
return self._get_provider_session('authorization_admin_session').can_create_authorization_with_record_types(*args, **kwargs)
def get_authorization_form_for_create_for_agent(self, *args, **kwargs):
"""Pass through to provider AuthorizationAdminSession.get_authorization_form_for_create_for_agent"""
return self._get_provider_session('authorization_admin_session').get_authorization_form_for_create_for_agent(*args, **kwargs)
def get_authorization_form_for_create_for_resource(self, *args, **kwargs):
"""Pass through to provider AuthorizationAdminSession.get_authorization_form_for_create_for_resource"""
return self._get_provider_session('authorization_admin_session').get_authorization_form_for_create_for_resource(*args, **kwargs)
def get_authorization_form_for_create_for_resource_and_trust(self, *args, **kwargs):
"""Pass through to provider unimplemented"""
raise Unimplemented('Unimplemented in dlkit.services - args=' + str(args) + ', kwargs=' + str(kwargs))
def create_authorization(self, *args, **kwargs):
"""Pass through to provider AuthorizationAdminSession.create_authorization"""
# Implemented from kitosid template for -
# osid.resource.ResourceAdminSession.create_resource
return self._get_provider_session('authorization_admin_session').create_authorization(*args, **kwargs)
def can_update_authorizations(self):
"""Pass through to provider AuthorizationAdminSession.can_update_authorizations"""
# Implemented from kitosid template for -
# osid.resource.ResourceAdminSession.can_update_resources
return self._get_provider_session('authorization_admin_session').can_update_authorizations()
def get_authorization_form_for_update(self, *args, **kwargs):
"""Pass through to provider AuthorizationAdminSession.get_authorization_form_for_update"""
# Implemented from kitosid template for -
# osid.resource.ResourceAdminSession.get_resource_form_for_update
return self._get_provider_session('authorization_admin_session').get_authorization_form_for_update(*args, **kwargs)
def get_authorization_form(self, *args, **kwargs):
"""Pass through to provider AuthorizationAdminSession.get_authorization_form_for_update"""
# Implemented from kitosid template for -
# osid.resource.ResourceAdminSession.get_resource_form_for_update
# This method might be a bit sketchy. Time will tell.
if isinstance(args[-1], list) or 'authorization_record_types' in kwargs:
return self.get_authorization_form_for_create(*args, **kwargs)
else:
return self.get_authorization_form_for_update(*args, **kwargs)
def duplicate_authorization(self, authorization_id):
# Implemented from kitosid template for -
# osid.resource.ResourceAdminSession.get_resource_form_for_update
return self._get_provider_session('authorization_admin_session').duplicate_authorization(authorization_id)
def update_authorization(self, *args, **kwargs):
"""Pass through to provider AuthorizationAdminSession.update_authorization"""
# Implemented from kitosid template for -
# osid.resource.ResourceAdminSession.update_resource
# Note: The OSID spec does not require returning updated object
return self._get_provider_session('authorization_admin_session').update_authorization(*args, **kwargs)
def save_authorization(self, authorization_form, *args, **kwargs):
"""Pass through to provider AuthorizationAdminSession.update_authorization"""
# Implemented from kitosid template for -
# osid.resource.ResourceAdminSession.update_resource
if authorization_form.is_for_update():
return self.update_authorization(authorization_form, *args, **kwargs)
else:
return self.create_authorization(authorization_form, *args, **kwargs)
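`save_authorization` is a convenience dispatcher: it routes a form to the update path or the create path depending on `is_for_update()`. A toy standalone sketch of that pattern — `Form` and `save` are hypothetical names, not dlkit API:

```python
class Form:
    """Minimal stand-in for an OSID form object."""

    def __init__(self, for_update):
        self._for_update = for_update

    def is_for_update(self):
        return self._for_update


def save(form, create, update):
    """Route the form to the update handler or the create handler."""
    return update(form) if form.is_for_update() else create(form)
```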
def can_delete_authorizations(self):
"""Pass through to provider AuthorizationAdminSession.can_delete_authorizations"""
# Implemented from kitosid template for -
# osid.resource.ResourceAdminSession.can_delete_resources
return self._get_provider_session('authorization_admin_session').can_delete_authorizations()
def delete_authorization(self, *args, **kwargs):
"""Pass through to provider AuthorizationAdminSession.delete_authorization"""
# Implemented from kitosid template for -
# osid.resource.ResourceAdminSession.delete_resource
self._get_provider_session('authorization_admin_session').delete_authorization(*args, **kwargs)
def can_manage_authorization_aliases(self):
"""Pass through to provider AuthorizationAdminSession.can_manage_authorization_aliases"""
# Implemented from kitosid template for -
# osid.resource.ResourceAdminSession.can_manage_resource_aliases_template
return self._get_provider_session('authorization_admin_session').can_manage_authorization_aliases()
def alias_authorization(self, *args, **kwargs):
"""Pass through to provider AuthorizationAdminSession.alias_authorization"""
# Implemented from kitosid template for -
# osid.resource.ResourceAdminSession.alias_resources
self._get_provider_session('authorization_admin_session').alias_authorization(*args, **kwargs)
class VaultList(abc_authorization_objects.VaultList, osid.OsidList):
"""VaultList convenience adapter including related Session methods."""
def get_next_vault(self):
"""Gets next object"""
# Implemented from kitosid template for -
# osid.resource.ResourceList.get_next_resource
try:
next_item = next(self)
except StopIteration:
raise IllegalState('no more elements available in this list')
else:
return next_item
def next(self):
"""next method for enumerator"""
# Implemented from kitosid template for -
# osid.resource.ResourceList.get_next_resource
next_item = osid.OsidList.next(self)
return next_item
__next__ = next
next_vault = property(fget=get_next_vault)
def get_next_vaults(self, n):
"""gets next n objects from list"""
# Implemented from kitosid template for -
# osid.resource.ResourceList.get_next_resources
if n > self.available():
# !!! This is not quite as specified (see method docs) !!!
raise IllegalState('not enough elements available in this list')
else:
next_list = []
i = 0
while i < n:
try:
next_list.append(next(self))
except StopIteration:
break
i += 1
            return next_list
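`get_next_vaults` drains up to `n` items from the underlying iterator, raising when more are requested than `available()` reports. An equivalent standalone sketch, using `ValueError` in place of `IllegalState` (the `take` helper is illustrative):

```python
def take(iterator, n, available):
    """Return up to the next n items; mirrors the IllegalState check with ValueError."""
    if n > available:
        raise ValueError('not enough elements available in this list')
    out = []
    for _ in range(n):
        try:
            out.append(next(iterator))
        except StopIteration:
            break                    # defensive: stop early if the iterator runs dry
    return out
```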
| 51.693205 | 143 | 0.720926 | 8,433 | 75,317 | 6.095695 | 0.042334 | 0.045326 | 0.041727 | 0.067406 | 0.865869 | 0.822371 | 0.783504 | 0.735415 | 0.690147 | 0.610077 | 0 | 0.000644 | 0.195852 | 75,317 | 1,456 | 144 | 51.728709 | 0.8481 | 0.36353 | 0 | 0.379458 | 0 | 0 | 0.092639 | 0.038305 | 0 | 0 | 0 | 0 | 0 | 1 | 0.282454 | false | 0.024251 | 0.007133 | 0.001427 | 0.540656 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
fd95acd15a454445711cd382b39c61c451a78b4a | 25,493 | py | Python | code/get_model.py | SIAT-code/Multi-PLI | 073cc65d8562261d5e43500d76e8f13363f936db | [
"Apache-2.0"
] | 4 | 2021-07-04T01:49:22.000Z | 2022-01-11T12:51:42.000Z | code/get_model.py | SIAT-code/Multi-PLI | 073cc65d8562261d5e43500d76e8f13363f936db | [
"Apache-2.0"
] | null | null | null | code/get_model.py | SIAT-code/Multi-PLI | 073cc65d8562261d5e43500d76e8f13363f936db | [
"Apache-2.0"
] | 1 | 2021-12-20T12:54:17.000Z | 2021-12-20T12:54:17.000Z | import keras
from keras.models import Model
from keras.layers import Input,Dense, Dropout, Activation, Flatten,Reshape,concatenate,LSTM,Bidirectional, Average
from keras.layers import Conv2D, MaxPooling2D,Conv1D,MaxPooling1D,AveragePooling2D
from keras.layers import Lambda, dot
import tensorflow as tf
from keras.utils import multi_gpu_model
from keras.utils import plot_model
import numpy as np
import os
from keras import regularizers
from keras import initializers
def attention_3d_block(hidden_states,out_shape=128,name='pro_'):
hidden_size = int(hidden_states.shape[2])
# Inside dense layer
# hidden_states dot W => score_first_part
# (batch_size, time_steps, hidden_size) dot (hidden_size, hidden_size) => (batch_size, time_steps, hidden_size)
# W is the trainable weight matrix of attention Luong's multiplicative style score
score_first_part = Dense(hidden_size, use_bias=False, name=name+'attention_score_vec')(hidden_states)
# score_first_part dot last_hidden_state => attention_weights
# (batch_size, time_steps, hidden_size) dot (batch_size, hidden_size) => (batch_size, time_steps)
h_t = Lambda(lambda x: x[:, -1, :], output_shape=(hidden_size,), name=name+'last_hidden_state')(hidden_states)
score = dot([score_first_part, h_t], [2, 1], name=name+'attention_score')
attention_weights = Activation('softmax', name=name+'attention_weight')(score)
# (batch_size, time_steps, hidden_size) dot (batch_size, time_steps) => (batch_size, hidden_size)
context_vector = dot([hidden_states, attention_weights], [1, 1], name=name+'context_vector')
pre_activation = concatenate([context_vector, h_t], name=name+'attention_output')
attention_vector = Dense(out_shape, use_bias=False, activation='tanh', name=name+'attention_vector')(pre_activation)
return attention_vector
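`attention_3d_block` implements Luong-style multiplicative attention: each timestep is scored against the last hidden state, the scores are softmaxed into weights, and the weighted sum over timesteps forms a context vector. A dependency-free numeric sketch of that computation (toy vectors; the trainable `W` and final `Dense` projection are omitted):

```python
import math


def luong_attention(hidden_states):
    """hidden_states: list of equal-length timestep vectors; attend w.r.t. the last one."""
    h_t = hidden_states[-1]
    # dot-product score of every timestep against the last hidden state
    scores = [sum(a * b for a, b in zip(h, h_t)) for h in hidden_states]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]   # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    # context vector: attention-weighted sum over timesteps
    context = [sum(w * h[i] for w, h in zip(weights, hidden_states))
               for i in range(len(h_t))]
    return weights, context
```

With two orthogonal timesteps the last one scores highest against itself, so it receives the larger weight.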
def conv2d_bn(x, nb_filter, num_row, num_col,name,
padding='same', strides=(1, 1), use_bias=False):
x = Conv2D(nb_filter, (num_row, num_col),
name=name,
strides=strides,
padding=padding,
use_bias=use_bias,
kernel_regularizer=regularizers.l2(0.00004),
kernel_initializer=initializers.VarianceScaling(scale=2.0, mode='fan_in', distribution='normal', seed=None))(x)
x = Activation('relu')(x)
return x
def block_inception(input,filters_1x1, filters_3x3_reduce, filters_3x3,
filters_5x5_reduce, filters_5x5, filters_pool_proj,layer_name):
branch_0 = conv2d_bn(input, filters_1x1, 1, 1,name=layer_name+'_branch_0')
branch_1 = conv2d_bn(input, filters_3x3_reduce, 1, 1,name=layer_name+'_branch_1_3x3_reduce')
branch_1 = conv2d_bn(branch_1, filters_3x3, 3, 3,name=layer_name+'_branch_1_3x3')
branch_2 = conv2d_bn(input, filters_5x5_reduce, 1, 1,name=layer_name+'_branch_2_5x5_reduce')
branch_2 = conv2d_bn(branch_2, filters_5x5, 3, 3,name=layer_name+'_branch_2_3x3_0')
branch_2 = conv2d_bn(branch_2, filters_5x5, 3, 3,name=layer_name+'_branch_2_3x3_1')
    branch_3 = MaxPooling2D((3, 3), strides=(1, 1), padding='same', name=layer_name+'_branch_3_maxpooling')(input)  # AveragePooling2D is an alternative here
branch_3 = conv2d_bn(branch_3, filters_pool_proj, 1, 1,name=layer_name+'_branch_3_pool_proj')
x = concatenate([branch_0, branch_1, branch_2, branch_3], axis=-1,name=layer_name+'_concat')
return x
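Because the four branches are concatenated along the channel axis, the block's output depth is just the sum of the per-branch filter counts — the 3x3-reduce and 5x5-reduce filters only shape intermediate tensors. For example, the `pro_layer1` call (8 + 32 + 32 + 16) yields 88 channels. A one-line sanity check (helper name is illustrative):

```python
def inception_out_channels(filters_1x1, filters_3x3, filters_5x5, filters_pool_proj):
    """Channel depth after concatenating the four inception branches."""
    return filters_1x1 + filters_3x3 + filters_5x5 + filters_pool_proj
```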
def block_inception_b(input,filters_1x1, filters_5x5_reduce, filters_5x5,
filters_7x7_reduce, filters_1x7,filters_7x1,filters_pool_proj,layer_name):
branch_0 = conv2d_bn(input, filters_1x1, 1, 1,name=layer_name+'_branch_0')
branch_1 = conv2d_bn(input,filters_7x7_reduce, 1, 1,name=layer_name+'_branch_1_7x7_reduce')
branch_1 = conv2d_bn(branch_1,filters_1x7, 1, 7,name=layer_name+'_branch_1_7x7_0')
branch_1 = conv2d_bn(branch_1,filters_7x1, 7, 1,name=layer_name+'_branch_1_7x7_1')
branch_2 = conv2d_bn(input, filters_5x5_reduce, 1, 1,name=layer_name+'_branch_2_5x5_reduce')
branch_2 = conv2d_bn(branch_2, filters_5x5, 3, 3,name=layer_name+'_branch_2_3x3_0')
branch_2 = conv2d_bn(branch_2, filters_5x5, 3, 3,name=layer_name+'_branch_2_3x3_1')
branch_3 = AveragePooling2D((3,3), strides=(1,1), padding='same')(input)
branch_3 = conv2d_bn(branch_3, filters_pool_proj, 1, 1,name=layer_name+'_branch_3_pool_proj')
    x = concatenate([branch_0, branch_1, branch_2, branch_3], axis=-1)
return x
def simple_block(input,nb_filter,num_row,num_col,layer_name):
input = Conv2D(nb_filter, (num_row, num_col), padding='same', activation='relu', name=layer_name+'_conv0')(input)
input = Conv2D(nb_filter, (num_row, num_col), padding='same', activation='relu', name=layer_name+'_conv1')(input)
return input
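Stacking two stride-1 3x3 convolutions, as `simple_block` does, gives a 5x5 effective receptive field with fewer parameters than a single 5x5 convolution. The arithmetic — each stride-1 layer adds `kernel - 1` to the window:

```python
def receptive_field(kernel, layers):
    """Effective receptive field of `layers` stacked stride-1 convolutions."""
    return layers * (kernel - 1) + 1
```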
def get_model_classification(save_dir,alpha,
pro_branch_switch1='',pro_branch_switch2='',
pro_branch_switch3='',pro_add_attention=False,
comp_branch_switch1='',comp_branch_switch2='',
comp_branch_switch3='',comp_add_attention=False,
):
###MODEL
##input
protein_input = Input(shape=(1200, 20, 1), name='protein_input')
comp_input = Input(shape=(200, 67, 1), name='comp_input')
##protein branch
# layer1
with tf.device('/gpu:0'):
if pro_branch_switch1 == 'inception_block':
pro_layer1 = block_inception(protein_input, filters_1x1=8, filters_3x3_reduce=1, filters_3x3=32,
filters_5x5_reduce=1, filters_5x5=32, filters_pool_proj=16,layer_name='pro_layer1')
else:
pro_layer1 = simple_block(protein_input, nb_filter=32, num_row=3, num_col=3, layer_name='pro_layer1')
        pro_layer1 = MaxPooling2D(pool_size=(3, 3), padding='same', name='pro_layer1_poll')(pro_layer1)
# layer2
        if pro_branch_switch2 == 'inception_block':
            pro_layer2 = block_inception(pro_layer1, filters_1x1=16, filters_3x3_reduce=16, filters_3x3=64,
                                         filters_5x5_reduce=16, filters_5x5=64, filters_pool_proj=32, layer_name='pro_layer2')
        else:
            pro_layer2 = simple_block(pro_layer1, nb_filter=64, num_row=3, num_col=3, layer_name='pro_layer2')
        pro_layer2 = MaxPooling2D(pool_size=(3, 3), padding='same', name='pro_layer2_poll')(pro_layer2)
# layer3
        if pro_branch_switch3 == 'inception_block':
            pro_layer3 = block_inception(pro_layer2, filters_1x1=32, filters_3x3_reduce=64, filters_3x3=128,
                                         filters_5x5_reduce=64, filters_5x5=128, filters_pool_proj=64, layer_name='pro_layer3')
        elif pro_branch_switch3 == 'inception_block_b':
            pro_layer3 = block_inception_b(pro_layer2, filters_1x1=32, filters_5x5_reduce=64, filters_5x5=128,
                                           filters_7x7_reduce=64, filters_1x7=128, filters_7x1=128, filters_pool_proj=64, layer_name='pro_layer3')
        else:
            pro_layer3 = simple_block(pro_layer2, nb_filter=128, num_row=3, num_col=3, layer_name='pro_layer3')
        pro_layer3 = MaxPooling2D(pool_size=(3, 3), padding='same', name='pro_layer3_pool')(pro_layer3)
# layer4
if pro_add_attention:
h_t = Lambda(tf.reshape,output_shape=[45,352,], arguments={'shape': [-1, 45, 352]}, name='pro_convert_to_timestep')(pro_layer3)
pro_layer_tran_result = attention_3d_block(h_t,1024,'pro_')#batch*1024
else:
pro_layer_tran_result = Flatten(name='pro_layer4_flatten')(pro_layer3)
pro_layer_tran_result = Dense(1024, activation='relu', name='pro_layer5_den')(pro_layer_tran_result)
pro_layer_tran_result = Dropout(alpha, name='pro_drop1')(pro_layer_tran_result)
##compound branch
# layer1
with tf.device('/gpu:1'):
        if comp_branch_switch1 == 'inception_block':
            comp_layer1 = block_inception(comp_input, filters_1x1=8, filters_3x3_reduce=1, filters_3x3=16,
                                          filters_5x5_reduce=1, filters_5x5=16, filters_pool_proj=16,
                                          layer_name='comp_layer1')
        else:
            comp_layer1 = simple_block(comp_input, 32, 3, 3, 'comp_layer1')
        comp_layer1 = MaxPooling2D(pool_size=(2, 2), padding='same', name='comp_layer1_poll')(comp_layer1)
        # layer2
        if comp_branch_switch2 == 'inception_block':
            comp_layer2 = block_inception(comp_layer1, filters_1x1=16, filters_3x3_reduce=16, filters_3x3=64,
                                          filters_5x5_reduce=16, filters_5x5=64, filters_pool_proj=32, layer_name='comp_layer2')
        else:
            comp_layer2 = simple_block(comp_layer1, 64, 3, 3, 'comp_layer2')
        comp_layer2 = MaxPooling2D(pool_size=(2, 2), padding='same', name='comp_layer2_poll')(comp_layer2)
# layer3
        if comp_branch_switch3 == 'inception_block':
            comp_layer3 = block_inception(comp_layer2, filters_1x1=32, filters_3x3_reduce=32, filters_3x3=128,
                                          filters_5x5_reduce=32, filters_5x5=128, filters_pool_proj=32, layer_name='comp_layer3')
        elif comp_branch_switch3 == 'inception_block_b':
            comp_layer3 = block_inception_b(comp_layer2, filters_1x1=32, filters_5x5_reduce=32, filters_5x5=128,
                                            filters_7x7_reduce=32, filters_1x7=128, filters_7x1=128,
                                            filters_pool_proj=32, layer_name='comp_layer3')
        else:
            comp_layer3 = simple_block(comp_layer2, 128, 3, 3, 'comp_layer3')
        comp_layer3 = MaxPooling2D(pool_size=(2, 2), padding='same', name='comp_layer3_pool')(comp_layer3)
# layer4
if comp_add_attention:
h_t = Lambda(tf.reshape,output_shape=[25*8,320,], arguments={'shape': [-1, 25*8, 320]}, name='comp_convert_to_timestep')(comp_layer3)
comp_layer_tran_result = attention_3d_block(h_t,1024,'comp_')#batch*1024
else:
comp_layer_tran_result = Flatten(name='comp_layer4_flatten')(comp_layer3)
comp_layer_tran_result = Dense(640, activation='relu', name='comp_layer5_den')(comp_layer_tran_result)
# layer5
comp_layer_tran_result = Dropout(alpha, name='comp_drop1')(comp_layer_tran_result)
with tf.device('/gpu:2'):
pro_com = keras.layers.concatenate([pro_layer_tran_result, comp_layer_tran_result])
# We stack a deep densely-connected network on top
fc_pro_com = Dense(512, activation='relu', name='den1')(pro_com)
fc_pro_com = Dropout(alpha, name='drop1')(fc_pro_com)
dense1 = []
FC1 = Dense(64, activation='relu')
for p in np.linspace(0.1,0.5, 5):
x = Dropout(p)(fc_pro_com)
x = FC1(x)
x = Dense(1,activation='sigmoid')(x)
dense1.append(x)
class_out = Average()(dense1)
classification_model = Model([protein_input, comp_input],class_out)
plot_model(classification_model, to_file=save_dir + '/model_with_classification.png', show_shapes=True)
return classification_model
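The classification head above is a small test-time ensemble: five sigmoid outputs, each behind a different dropout rate from 0.1 to 0.5 but sharing the same `FC1` weights, are combined by the `Average()` layer. The combination step itself reduces to (standalone sketch, helper name illustrative):

```python
def average_heads(predictions):
    """Average per-head sigmoid outputs into a single ensemble probability."""
    return sum(predictions) / len(predictions)
```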
def get_model_regression(save_dir,alpha,
pro_branch_switch1='',pro_branch_switch2='',
pro_branch_switch3='',pro_add_attention=False,
comp_branch_switch1='',comp_branch_switch2='',
comp_branch_switch3='',comp_add_attention=False,
):
###MODEL
##input
protein_input = Input(shape=(1200, 20, 1), name='protein_input')
comp_input = Input(shape=(200, 67, 1), name='comp_input')
##protein branch
# layer1
with tf.device('/gpu:0'):
if pro_branch_switch1 == 'inception_block':
pro_layer1 = block_inception(protein_input, filters_1x1=8, filters_3x3_reduce=1, filters_3x3=32,
filters_5x5_reduce=1, filters_5x5=32, filters_pool_proj=16,layer_name='pro_layer1')
else:
pro_layer1 = simple_block(protein_input, nb_filter=32, num_row=3, num_col=3, layer_name='pro_layer1')
        pro_layer1 = MaxPooling2D(pool_size=(3, 3), padding='same', name='pro_layer1_poll')(pro_layer1)
# layer2
        if pro_branch_switch2 == 'inception_block':
            pro_layer2 = block_inception(pro_layer1, filters_1x1=16, filters_3x3_reduce=16, filters_3x3=64,
                                         filters_5x5_reduce=16, filters_5x5=64, filters_pool_proj=32, layer_name='pro_layer2')
        else:
            pro_layer2 = simple_block(pro_layer1, nb_filter=64, num_row=3, num_col=3, layer_name='pro_layer2')
        pro_layer2 = MaxPooling2D(pool_size=(3, 3), padding='same', name='pro_layer2_poll')(pro_layer2)
# layer3
        if pro_branch_switch3 == 'inception_block':
            pro_layer3 = block_inception(pro_layer2, filters_1x1=32, filters_3x3_reduce=64, filters_3x3=128,
                                         filters_5x5_reduce=64, filters_5x5=128, filters_pool_proj=64, layer_name='pro_layer3')
        elif pro_branch_switch3 == 'inception_block_b':
            pro_layer3 = block_inception_b(pro_layer2, filters_1x1=32, filters_5x5_reduce=64, filters_5x5=128,
                                           filters_7x7_reduce=64, filters_1x7=128, filters_7x1=128, filters_pool_proj=64, layer_name='pro_layer3')
        else:
            pro_layer3 = simple_block(pro_layer2, nb_filter=128, num_row=3, num_col=3, layer_name='pro_layer3')
        pro_layer3 = MaxPooling2D(pool_size=(3, 3), padding='same', name='pro_layer3_pool')(pro_layer3)
# layer4
if pro_add_attention:
            h_t = Lambda(tf.reshape, output_shape=[45, 352, ], arguments={'shape': [-1, 45, 352]}, name='pro_convert_to_timestep')(pro_layer3)
# h_t = Bidirectional(LSTM(100, return_sequences=True),input_shape=(45,128))(h_t) #batch*tmiestep*100
pro_layer_tran_result = attention_3d_block(h_t,1024,'pro_')#batch*1024
else:
pro_layer_tran_result = Flatten(name='pro_layer4_flatten')(pro_layer3)
pro_layer_tran_result = Dense(1024, activation='relu', name='pro_layer5_den')(pro_layer_tran_result)
pro_layer_tran_result = Dropout(alpha, name='pro_drop1')(pro_layer_tran_result)
##compound branch
# layer1
with tf.device('/gpu:1'):
if comp_branch_switch1=='inception_block':
comp_layer1 = block_inception(comp_input, filters_1x1=8, filters_3x3_reduce=1, filters_3x3=16,
filters_5x5_reduce=1, filters_5x5=16, filters_pool_proj=16,
layer_name='comp_layer1')
else:
comp_layer1 = simple_block(comp_input, 32, 3, 3, 'comp_layer1')
comp_layer1 = MaxPooling2D(pool_size=(2, 2),padding='same', name='comp_layer1_poll')(comp_layer1)
# layer2
if comp_branch_switch2=='inception_block':
comp_layer2 = block_inception(comp_layer1, filters_1x1=16, filters_3x3_reduce=16, filters_3x3=64,
filters_5x5_reduce=16, filters_5x5=64, filters_pool_proj=32,layer_name='comp_layer2')
else:
comp_layer2 = simple_block(comp_layer1, 64, 3, 3, 'comp_layer2')
comp_layer2 = MaxPooling2D(pool_size=(2, 2), padding='same',name='comp_layer2_poll')(comp_layer2)
# layer3
if comp_branch_switch3=='inception_block':
comp_layer3 = block_inception(comp_layer2, filters_1x1=32, filters_3x3_reduce=32, filters_3x3=128,
filters_5x5_reduce=32, filters_5x5=128, filters_pool_proj=32,layer_name='comp_layer3')
elif comp_branch_switch3=='inception_block_b':
comp_layer3 = block_inception_b(comp_layer2, filters_1x1=32, filters_5x5_reduce=32, filters_5x5=128,
filters_7x7_reduce=32, filters_1x7=128, filters_7x1=128,
filters_pool_proj=32, layer_name='comp_layer3')
else:
comp_layer3 = simple_block(comp_layer2, 128, 3, 3, 'comp_layer3')
comp_layer3 = MaxPooling2D(pool_size=(2, 2), padding='same',name='comp_layer3_pool')(comp_layer3)
# layer4
if comp_add_attention:
h_t = Lambda(tf.reshape, output_shape=[25 * 8, 320], arguments={'shape': [-1, 25 * 8, 320]}, name='comp_convert_to_timestep')(comp_layer3)
# h_t = Bidirectional(LSTM(100, return_sequences=True), input_shape=(25 * 8, 128))(h_t)  # batch * timestep * 100
comp_layer_tran_result = attention_3d_block(h_t, 1024, 'comp_')  # batch * 1024
else:
comp_layer_tran_result = Flatten(name='comp_layer4_flatten')(comp_layer3)
comp_layer_tran_result = Dense(640, activation='relu', name='comp_layer5_den')(comp_layer_tran_result)
# layer5
# comp_layer_tran_result = Dense(256, activation='relu', name='comp_layer4_den')(comp_layer_tran_result)
comp_layer_tran_result = Dropout(alpha, name='comp_drop1')(comp_layer_tran_result)
with tf.device('/gpu:2'):
pro_com = keras.layers.concatenate([pro_layer_tran_result, comp_layer_tran_result])
# We stack a deep densely-connected network on top
fc_pro_com = Dense(512, activation='relu', name='den1')(pro_com)
fc_pro_com = Dropout(alpha, name='drop1')(fc_pro_com)
dense1 = []
FC1 = Dense(64, activation='relu')
for p in np.linspace(0.1,0.5, 5):
x = Dropout(p)(fc_pro_com)
x = FC1(x)
x = Dense(1)(x)
dense1.append(x)
class_out = Average()(dense1)
regression_model = Model([protein_input, comp_input],class_out)
plot_model(regression_model, to_file=save_dir + '/model_with_regression.png', show_shapes=True)
return regression_model
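# The five-head ensemble above (one shared Dense(64) applied behind Dropout
# rates swept from 0.1 to 0.5, with the per-head outputs averaged) can be
# sketched in plain NumPy. This is an illustrative stand-in, not code from
# this file: `dropout`, `features`, and `W_shared` are hypothetical names.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p):
    # Inverted dropout: zero a fraction p of units, rescale the survivors.
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

features = rng.standard_normal(64)       # stand-in for the fused fc_pro_com vector
W_shared = rng.standard_normal((64, 1))  # stand-in for the shared FC1 + Dense(1) head

# Five heads share the same weights but see different dropout rates; averaging
# their outputs mirrors the Average() layer over the `dense1` list above.
preds = [float(dropout(features, p) @ W_shared) for p in np.linspace(0.1, 0.5, 5)]
ensemble = float(np.mean(preds))
```

# Each head sees a differently thinned input, so the average behaves like a
# small self-ensemble at essentially no extra parameter cost.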
def get_model_multi(save_dir,alpha,
pro_branch_switch1='',pro_branch_switch2='',
pro_branch_switch3='',pro_add_attention=False,
comp_branch_switch1='',comp_branch_switch2='',
comp_branch_switch3='',comp_add_attention=False,
):
###MODEL
##input
protein_input = Input(shape=(1200, 20, 1), name='protein_input')
comp_input = Input(shape=(200, 67, 1), name='comp_input')
with tf.device('/gpu:0'):
##protein branch
# layer1
if pro_branch_switch1 == 'inception_block':
pro_layer1 = block_inception(protein_input, filters_1x1=8, filters_3x3_reduce=1, filters_3x3=32,
filters_5x5_reduce=1, filters_5x5=32, filters_pool_proj=16,layer_name='pro_layer1')
else:
pro_layer1 = simple_block(protein_input, nb_filter=32, num_row=3, num_col=3, layer_name='pro_layer1')
pro_layer1 = MaxPooling2D(pool_size=(3, 3),padding='same', name='pro_layer1_poll')(pro_layer1)
# layer2
if pro_branch_switch2=='inception_block':
pro_layer2 = block_inception(pro_layer1, filters_1x1=16, filters_3x3_reduce=16, filters_3x3=64,
filters_5x5_reduce=16, filters_5x5=64, filters_pool_proj=32,layer_name='pro_layer2')
else:
pro_layer2 = simple_block(pro_layer1, nb_filter=64, num_row=3, num_col=3, layer_name='pro_layer2')
pro_layer2 = MaxPooling2D(pool_size=(3, 3), padding='same',name='pro_layer2_poll')(pro_layer2)
# layer3
if pro_branch_switch3=='inception_block':
pro_layer3 = block_inception(pro_layer2, filters_1x1=32, filters_3x3_reduce=64, filters_3x3=128,
filters_5x5_reduce=64, filters_5x5=128, filters_pool_proj=64,layer_name='pro_layer3')
elif pro_branch_switch3=='inception_block_b':
pro_layer3 = block_inception_b(pro_layer2, filters_1x1=32, filters_5x5_reduce=64, filters_5x5=128,
filters_7x7_reduce=64, filters_1x7=128,filters_7x1=128, filters_pool_proj=64,layer_name='pro_layer3')
else:
pro_layer3 = simple_block(pro_layer2, nb_filter=128, num_row=3, num_col=3, layer_name='pro_layer3')
pro_layer3 = MaxPooling2D(pool_size=(3, 3), padding='same',name='pro_layer3_pool')(pro_layer3)
# layer4
if pro_add_attention:
h_t = Lambda(tf.reshape, output_shape=[45, 352], arguments={'shape': [-1, 45, 352]}, name='pro_convert_to_timestep')(pro_layer3)
pro_layer_tran_result = attention_3d_block(h_t,1024,'pro_')#batch*1024
else:
pro_layer_tran_result = Flatten(name='pro_layer4_flatten')(pro_layer3)
pro_layer_tran_result = Dense(1024, activation='relu', name='pro_layer5_den')(pro_layer_tran_result)
pro_layer_tran_result = Dropout(alpha, name='pro_drop1')(pro_layer_tran_result)
with tf.device('/gpu:1'):
##compound branch
# layer1
if comp_branch_switch1=='inception_block':
comp_layer1 = block_inception(comp_input, filters_1x1=8, filters_3x3_reduce=1, filters_3x3=16,
filters_5x5_reduce=1, filters_5x5=16, filters_pool_proj=16,
layer_name='comp_layer1')
else:
comp_layer1 = simple_block(comp_input, 32, 3, 3, 'comp_layer1')
comp_layer1 = MaxPooling2D(pool_size=(2, 2),padding='same', name='comp_layer1_poll')(comp_layer1)
# layer2
if comp_branch_switch2=='inception_block':
comp_layer2 = block_inception(comp_layer1, filters_1x1=16, filters_3x3_reduce=16, filters_3x3=64,
filters_5x5_reduce=16, filters_5x5=64, filters_pool_proj=32,layer_name='comp_layer2')
else:
comp_layer2 = simple_block(comp_layer1, 64, 3, 3, 'comp_layer2')
comp_layer2 = MaxPooling2D(pool_size=(2, 2), padding='same',name='comp_layer2_poll')(comp_layer2)
# layer3
if comp_branch_switch3=='inception_block':
comp_layer3 = block_inception(comp_layer2, filters_1x1=32, filters_3x3_reduce=32, filters_3x3=128,
filters_5x5_reduce=32, filters_5x5=128, filters_pool_proj=32,layer_name='comp_layer3')
elif comp_branch_switch3=='inception_block_b':
comp_layer3 = block_inception_b(comp_layer2, filters_1x1=32, filters_5x5_reduce=32, filters_5x5=128,
filters_7x7_reduce=32, filters_1x7=128, filters_7x1=128,
filters_pool_proj=32, layer_name='comp_layer3')
else:
comp_layer3 = simple_block(comp_layer2, 128, 3, 3, 'comp_layer3')
comp_layer3 = MaxPooling2D(pool_size=(2, 2), padding='same',name='comp_layer3_pool')(comp_layer3)
# layer4
if comp_add_attention:
h_t = Lambda(tf.reshape, output_shape=[25 * 8, 320], arguments={'shape': [-1, 25 * 8, 320]}, name='comp_convert_to_timestep')(comp_layer3)
comp_layer_tran_result = attention_3d_block(h_t,1024,'comp_')#batch*1024
else:
comp_layer_tran_result = Flatten(name='comp_layer4_flatten')(comp_layer3)
comp_layer_tran_result = Dense(640, activation='relu', name='comp_layer5_den')(comp_layer_tran_result)
# layer5
comp_layer_tran_result = Dropout(alpha, name='comp_drop1')(comp_layer_tran_result)
with tf.device('/gpu:2'):
pro_com = keras.layers.concatenate([pro_layer_tran_result, comp_layer_tran_result])
# We stack a deep densely-connected network on top
fc_pro_com = Dense(512, activation='relu', name='den1')(pro_com)
fc_pro_com = Dropout(alpha, name='drop1')(fc_pro_com)
# classification task
dense1 = []
FC1 = Dense(64, activation='relu')
for p in np.linspace(0.1,0.5, 5):
x = Dropout(p)(fc_pro_com)
x = FC1(x)
x = Dense(1,activation='sigmoid')(x)
dense1.append(x)
class_out = Average()(dense1)
# regression task
dense2 = []
FC2 = Dense(64, activation='relu')
for p in np.linspace(0.1,0.5, 5):
x = Dropout(p)(fc_pro_com)
x = FC2(x)
x = Dense(1)(x)
dense2.append(x)
regree_out = Average()(dense2)
class_model = Model(inputs=[protein_input, comp_input], outputs=class_out)
reg_model = Model(inputs=[protein_input, comp_input], outputs=regree_out)
plot_model(class_model, to_file=save_dir + "/multitask_training_respectively_class.png")
plot_model(reg_model, to_file=save_dir + "/multitask_training_respectively_reg.png")
return class_model, reg_model
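# get_model_multi returns two Model objects built on one shared trunk, so
# training them alternately updates the same weights from both losses (hard
# parameter sharing). A minimal NumPy sketch of that idea, with a sigmoid
# (classification) head and a linear (regression) head feeding gradients into
# one shared weight matrix; all names and shapes here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared trunk weights and two task-specific heads.
W_trunk = rng.standard_normal((10, 4)) * 0.1
w_cls = rng.standard_normal(4) * 0.1
w_reg = rng.standard_normal(4) * 0.1

x = rng.standard_normal(10)   # one toy input
y_cls, y_reg = 1.0, 0.5       # targets for the two tasks

for _ in range(500):
    h = np.tanh(x @ W_trunk)                  # shared representation
    p = 1.0 / (1.0 + np.exp(-(h @ w_cls)))    # sigmoid classification head
    r = h @ w_reg                             # linear regression head
    g_cls = p - y_cls                         # dBCE/dlogit
    g_reg = r - y_reg                         # dMSE/dr (up to a factor)
    dh = g_cls * w_cls + g_reg * w_reg        # both tasks update the trunk
    W_trunk -= 0.1 * np.outer(x, dh * (1.0 - h ** 2))
    w_cls -= 0.1 * g_cls * h
    w_reg -= 0.1 * g_reg * h
```

# After training, the single trunk serves both heads, which is the same
# weight-sharing effect as returning class_model and reg_model above.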
# coding: utf-8
"""
WordNet Similarity APIs
Calculate word similarity.<BR />[Endpoint] https://api.apitore.com/api/47 # noqa: E501
OpenAPI spec version: 0.0.1
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from swagger_client.api_client import ApiClient
class WordnetSimilarityControllerApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def hirststonge_using_get(self, access_token, word1, word2, **kwargs): # noqa: E501
"""WordNet Similarity WebAPI. # noqa: E501
WordNet similarity.<BR />Response<BR /> Github: <a href=\"https://github.com/keigohtr/apitore-response-parent/tree/master/wordnet-response\">wordnet-response</a><BR /> # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.hirststonge_using_get(access_token, word1, word2, async=True)
>>> result = thread.get()
:param async bool
:param str access_token: Access Token (required)
:param str word1: Word1 (required)
:param str word2: Word2 (required)
:param str pos1: Part-of-speech1. [n:noun,v:verb,a:adjective,r:adverb]
:param str pos2: Part-of-speech2. [n:noun,v:verb,a:adjective,r:adverb]
:return: WordnetSimilarityResponseEntity
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.hirststonge_using_get_with_http_info(access_token, word1, word2, **kwargs) # noqa: E501
else:
(data) = self.hirststonge_using_get_with_http_info(access_token, word1, word2, **kwargs) # noqa: E501
return data
def hirststonge_using_get_with_http_info(self, access_token, word1, word2, **kwargs): # noqa: E501
"""WordNet Similarity WebAPI. # noqa: E501
WordNet similarity.<BR />Response<BR /> Github: <a href=\"https://github.com/keigohtr/apitore-response-parent/tree/master/wordnet-response\">wordnet-response</a><BR /> # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.hirststonge_using_get_with_http_info(access_token, word1, word2, async=True)
>>> result = thread.get()
:param async bool
:param str access_token: Access Token (required)
:param str word1: Word1 (required)
:param str word2: Word2 (required)
:param str pos1: Part-of-speech1. [n:noun,v:verb,a:adjective,r:adverb]
:param str pos2: Part-of-speech2. [n:noun,v:verb,a:adjective,r:adverb]
:return: WordnetSimilarityResponseEntity
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['access_token', 'word1', 'word2', 'pos1', 'pos2'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method hirststonge_using_get" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'access_token' is set
if ('access_token' not in params or
params['access_token'] is None):
raise ValueError("Missing the required parameter `access_token` when calling `hirststonge_using_get`") # noqa: E501
# verify the required parameter 'word1' is set
if ('word1' not in params or
params['word1'] is None):
raise ValueError("Missing the required parameter `word1` when calling `hirststonge_using_get`") # noqa: E501
# verify the required parameter 'word2' is set
if ('word2' not in params or
params['word2'] is None):
raise ValueError("Missing the required parameter `word2` when calling `hirststonge_using_get`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
if 'access_token' in params:
query_params.append(('access_token', params['access_token'])) # noqa: E501
if 'word1' in params:
query_params.append(('word1', params['word1'])) # noqa: E501
if 'pos1' in params:
query_params.append(('pos1', params['pos1'])) # noqa: E501
if 'word2' in params:
query_params.append(('word2', params['word2'])) # noqa: E501
if 'pos2' in params:
query_params.append(('pos2', params['pos2'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/wordnet-similarity/hirststonge', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='WordnetSimilarityResponseEntity', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def hirststonge_using_get1(self, access_token, word1, word2, **kwargs): # noqa: E501
"""WordNet Similarity WebAPI. # noqa: E501
WordNet similarity.<BR />Response<BR /> Github: <a href=\"https://github.com/keigohtr/apitore-response-parent/tree/master/wordnet-response\">wordnet-response</a><BR /> # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.hirststonge_using_get1(access_token, word1, word2, async=True)
>>> result = thread.get()
:param async bool
:param str access_token: Access Token (required)
:param str word1: Word1 (required)
:param str word2: Word2 (required)
:param str pos1: Part-of-speech1. [n:noun,v:verb,a:adjective,r:adverb]
:param str pos2: Part-of-speech2. [n:noun,v:verb,a:adjective,r:adverb]
:return: WordnetSimilarityResponseEntity
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.hirststonge_using_get1_with_http_info(access_token, word1, word2, **kwargs) # noqa: E501
else:
(data) = self.hirststonge_using_get1_with_http_info(access_token, word1, word2, **kwargs) # noqa: E501
return data
def hirststonge_using_get1_with_http_info(self, access_token, word1, word2, **kwargs): # noqa: E501
"""WordNet Similarity WebAPI. # noqa: E501
WordNet similarity.<BR />Response<BR /> Github: <a href=\"https://github.com/keigohtr/apitore-response-parent/tree/master/wordnet-response\">wordnet-response</a><BR /> # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.hirststonge_using_get1_with_http_info(access_token, word1, word2, async=True)
>>> result = thread.get()
:param async bool
:param str access_token: Access Token (required)
:param str word1: Word1 (required)
:param str word2: Word2 (required)
:param str pos1: Part-of-speech1. [n:noun,v:verb,a:adjective,r:adverb]
:param str pos2: Part-of-speech2. [n:noun,v:verb,a:adjective,r:adverb]
:return: WordnetSimilarityResponseEntity
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['access_token', 'word1', 'word2', 'pos1', 'pos2'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method hirststonge_using_get1" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'access_token' is set
if ('access_token' not in params or
params['access_token'] is None):
raise ValueError("Missing the required parameter `access_token` when calling `hirststonge_using_get1`") # noqa: E501
# verify the required parameter 'word1' is set
if ('word1' not in params or
params['word1'] is None):
raise ValueError("Missing the required parameter `word1` when calling `hirststonge_using_get1`") # noqa: E501
# verify the required parameter 'word2' is set
if ('word2' not in params or
params['word2'] is None):
raise ValueError("Missing the required parameter `word2` when calling `hirststonge_using_get1`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
if 'access_token' in params:
query_params.append(('access_token', params['access_token'])) # noqa: E501
if 'word1' in params:
query_params.append(('word1', params['word1'])) # noqa: E501
if 'pos1' in params:
query_params.append(('pos1', params['pos1'])) # noqa: E501
if 'word2' in params:
query_params.append(('word2', params['word2'])) # noqa: E501
if 'pos2' in params:
query_params.append(('pos2', params['pos2'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/wordnet-similarity/jiangconrath', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='WordnetSimilarityResponseEntity', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def hirststonge_using_get2(self, access_token, word1, word2, **kwargs): # noqa: E501
"""WordNet Similarity WebAPI. # noqa: E501
WordNet similarity.<BR />Response<BR /> Github: <a href=\"https://github.com/keigohtr/apitore-response-parent/tree/master/wordnet-response\">wordnet-response</a><BR /> # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.hirststonge_using_get2(access_token, word1, word2, async=True)
>>> result = thread.get()
:param async bool
:param str access_token: Access Token (required)
:param str word1: Word1 (required)
:param str word2: Word2 (required)
:param str pos1: Part-of-speech1. [n:noun,v:verb,a:adjective,r:adverb]
:param str pos2: Part-of-speech2. [n:noun,v:verb,a:adjective,r:adverb]
:return: WordnetSimilarityResponseEntity
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.hirststonge_using_get2_with_http_info(access_token, word1, word2, **kwargs) # noqa: E501
else:
(data) = self.hirststonge_using_get2_with_http_info(access_token, word1, word2, **kwargs) # noqa: E501
return data
def hirststonge_using_get2_with_http_info(self, access_token, word1, word2, **kwargs): # noqa: E501
"""WordNet Similarity WebAPI. # noqa: E501
WordNet similarity.<BR />Response<BR /> Github: <a href=\"https://github.com/keigohtr/apitore-response-parent/tree/master/wordnet-response\">wordnet-response</a><BR /> # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.hirststonge_using_get2_with_http_info(access_token, word1, word2, async=True)
>>> result = thread.get()
:param async bool
:param str access_token: Access Token (required)
:param str word1: Word1 (required)
:param str word2: Word2 (required)
:param str pos1: Part-of-speech1. [n:noun,v:verb,a:adjective,r:adverb]
:param str pos2: Part-of-speech2. [n:noun,v:verb,a:adjective,r:adverb]
:return: WordnetSimilarityResponseEntity
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['access_token', 'word1', 'word2', 'pos1', 'pos2'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method hirststonge_using_get2" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'access_token' is set
if ('access_token' not in params or
params['access_token'] is None):
raise ValueError("Missing the required parameter `access_token` when calling `hirststonge_using_get2`") # noqa: E501
# verify the required parameter 'word1' is set
if ('word1' not in params or
params['word1'] is None):
raise ValueError("Missing the required parameter `word1` when calling `hirststonge_using_get2`") # noqa: E501
# verify the required parameter 'word2' is set
if ('word2' not in params or
params['word2'] is None):
raise ValueError("Missing the required parameter `word2` when calling `hirststonge_using_get2`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
if 'access_token' in params:
query_params.append(('access_token', params['access_token'])) # noqa: E501
if 'word1' in params:
query_params.append(('word1', params['word1'])) # noqa: E501
if 'pos1' in params:
query_params.append(('pos1', params['pos1'])) # noqa: E501
if 'word2' in params:
query_params.append(('word2', params['word2'])) # noqa: E501
if 'pos2' in params:
query_params.append(('pos2', params['pos2'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/wordnet-similarity/leacockchodorow', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='WordnetSimilarityResponseEntity', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def hirststonge_using_get3(self, access_token, word1, word2, **kwargs): # noqa: E501
"""WordNet Similarity WebAPI. # noqa: E501
WordNet similarity.<BR />Response<BR /> Github: <a href=\"https://github.com/keigohtr/apitore-response-parent/tree/master/wordnet-response\">wordnet-response</a><BR /> # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.hirststonge_using_get3(access_token, word1, word2, async=True)
>>> result = thread.get()
:param async bool
:param str access_token: Access Token (required)
:param str word1: Word1 (required)
:param str word2: Word2 (required)
:param str pos1: Part-of-speech1. [n:noun,v:verb,a:adjective,r:adverb]
:param str pos2: Part-of-speech2. [n:noun,v:verb,a:adjective,r:adverb]
:return: WordnetSimilarityResponseEntity
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.hirststonge_using_get3_with_http_info(access_token, word1, word2, **kwargs) # noqa: E501
else:
(data) = self.hirststonge_using_get3_with_http_info(access_token, word1, word2, **kwargs) # noqa: E501
return data
def hirststonge_using_get3_with_http_info(self, access_token, word1, word2, **kwargs): # noqa: E501
"""WordNet Similarity WebAPI. # noqa: E501
WordNet similarity.<BR />Response<BR /> Github: <a href=\"https://github.com/keigohtr/apitore-response-parent/tree/master/wordnet-response\">wordnet-response</a><BR /> # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.hirststonge_using_get3_with_http_info(access_token, word1, word2, async=True)
>>> result = thread.get()
:param async bool
:param str access_token: Access Token (required)
:param str word1: Word1 (required)
:param str word2: Word2 (required)
:param str pos1: Part-of-speech1. [n:noun,v:verb,a:adjective,r:adverb]
:param str pos2: Part-of-speech2. [n:noun,v:verb,a:adjective,r:adverb]
:return: WordnetSimilarityResponseEntity
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['access_token', 'word1', 'word2', 'pos1', 'pos2'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method hirststonge_using_get3" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'access_token' is set
if ('access_token' not in params or
params['access_token'] is None):
raise ValueError("Missing the required parameter `access_token` when calling `hirststonge_using_get3`") # noqa: E501
# verify the required parameter 'word1' is set
if ('word1' not in params or
params['word1'] is None):
raise ValueError("Missing the required parameter `word1` when calling `hirststonge_using_get3`") # noqa: E501
# verify the required parameter 'word2' is set
if ('word2' not in params or
params['word2'] is None):
raise ValueError("Missing the required parameter `word2` when calling `hirststonge_using_get3`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
if 'access_token' in params:
query_params.append(('access_token', params['access_token'])) # noqa: E501
if 'word1' in params:
query_params.append(('word1', params['word1'])) # noqa: E501
if 'pos1' in params:
query_params.append(('pos1', params['pos1'])) # noqa: E501
if 'word2' in params:
query_params.append(('word2', params['word2'])) # noqa: E501
if 'pos2' in params:
query_params.append(('pos2', params['pos2'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/wordnet-similarity/lesk', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='WordnetSimilarityResponseEntity', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def hirststonge_using_get4(self, access_token, word1, word2, **kwargs): # noqa: E501
"""WordNet Similarity WebAPI. # noqa: E501
WordNet similarity.<BR />Response<BR /> Github: <a href=\"https://github.com/keigohtr/apitore-response-parent/tree/master/wordnet-response\">wordnet-response</a><BR /> # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.hirststonge_using_get4(access_token, word1, word2, async=True)
>>> result = thread.get()
:param async bool
:param str access_token: Access Token (required)
:param str word1: Word1 (required)
:param str word2: Word2 (required)
:param str pos1: Part-of-speech1. [n:noun,v:verb,a:adjective,r:adverb]
:param str pos2: Part-of-speech2. [n:noun,v:verb,a:adjective,r:adverb]
:return: WordnetSimilarityResponseEntity
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.hirststonge_using_get4_with_http_info(access_token, word1, word2, **kwargs) # noqa: E501
else:
(data) = self.hirststonge_using_get4_with_http_info(access_token, word1, word2, **kwargs) # noqa: E501
return data
def hirststonge_using_get4_with_http_info(self, access_token, word1, word2, **kwargs): # noqa: E501
"""WordNet Similarity WebAPI. # noqa: E501
WordNet similarity.<BR />Response<BR /> Github: <a href=\"https://github.com/keigohtr/apitore-response-parent/tree/master/wordnet-response\">wordnet-response</a><BR /> # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.hirststonge_using_get4_with_http_info(access_token, word1, word2, async=True)
>>> result = thread.get()
:param async bool
:param str access_token: Access Token (required)
:param str word1: Word1 (required)
:param str word2: Word2 (required)
:param str pos1: Part-of-speech1. [n:noun,v:verb,a:adjective,r:adverb]
:param str pos2: Part-of-speech2. [n:noun,v:verb,a:adjective,r:adverb]
:return: WordnetSimilarityResponseEntity
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['access_token', 'word1', 'word2', 'pos1', 'pos2'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method hirststonge_using_get4" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'access_token' is set
if ('access_token' not in params or
params['access_token'] is None):
raise ValueError("Missing the required parameter `access_token` when calling `hirststonge_using_get4`") # noqa: E501
# verify the required parameter 'word1' is set
if ('word1' not in params or
params['word1'] is None):
raise ValueError("Missing the required parameter `word1` when calling `hirststonge_using_get4`") # noqa: E501
# verify the required parameter 'word2' is set
if ('word2' not in params or
params['word2'] is None):
raise ValueError("Missing the required parameter `word2` when calling `hirststonge_using_get4`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
if 'access_token' in params:
query_params.append(('access_token', params['access_token'])) # noqa: E501
if 'word1' in params:
query_params.append(('word1', params['word1'])) # noqa: E501
if 'pos1' in params:
query_params.append(('pos1', params['pos1'])) # noqa: E501
if 'word2' in params:
query_params.append(('word2', params['word2'])) # noqa: E501
if 'pos2' in params:
query_params.append(('pos2', params['pos2'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/wordnet-similarity/lin', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='WordnetSimilarityResponseEntity', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def hirststonge_using_get5(self, access_token, word1, word2, **kwargs): # noqa: E501
"""WordNet Similarity WebAPI. # noqa: E501
WordNet similarity.<BR />Response<BR /> Github: <a href=\"https://github.com/keigohtr/apitore-response-parent/tree/master/wordnet-response\">wordnet-response</a><BR /> # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.hirststonge_using_get5(access_token, word1, word2, async=True)
>>> result = thread.get()
:param async bool
:param str access_token: Access Token (required)
:param str word1: Word1 (required)
:param str word2: Word2 (required)
:param str pos1: Part-of-speech1. [n:noun,v:verb,a:adjective,r:adverb]
:param str pos2: Part-of-speech2. [n:noun,v:verb,a:adjective,r:adverb]
:return: WordnetSimilarityResponseEntity
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.hirststonge_using_get5_with_http_info(access_token, word1, word2, **kwargs) # noqa: E501
else:
(data) = self.hirststonge_using_get5_with_http_info(access_token, word1, word2, **kwargs) # noqa: E501
return data
def hirststonge_using_get5_with_http_info(self, access_token, word1, word2, **kwargs): # noqa: E501
"""WordNet Similarity WebAPI. # noqa: E501
WordNet similarity.<BR />Response<BR /> Github: <a href=\"https://github.com/keigohtr/apitore-response-parent/tree/master/wordnet-response\">wordnet-response</a><BR /> # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.hirststonge_using_get5_with_http_info(access_token, word1, word2, async=True)
>>> result = thread.get()
:param async bool
:param str access_token: Access Token (required)
:param str word1: Word1 (required)
:param str word2: Word2 (required)
:param str pos1: Part-of-speech1. [n:noun,v:verb,a:adjective,r:adverb]
:param str pos2: Part-of-speech2. [n:noun,v:verb,a:adjective,r:adverb]
:return: WordnetSimilarityResponseEntity
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['access_token', 'word1', 'word2', 'pos1', 'pos2'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method hirststonge_using_get5" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'access_token' is set
if ('access_token' not in params or
params['access_token'] is None):
raise ValueError("Missing the required parameter `access_token` when calling `hirststonge_using_get5`") # noqa: E501
# verify the required parameter 'word1' is set
if ('word1' not in params or
params['word1'] is None):
raise ValueError("Missing the required parameter `word1` when calling `hirststonge_using_get5`") # noqa: E501
# verify the required parameter 'word2' is set
if ('word2' not in params or
params['word2'] is None):
raise ValueError("Missing the required parameter `word2` when calling `hirststonge_using_get5`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
if 'access_token' in params:
query_params.append(('access_token', params['access_token'])) # noqa: E501
if 'word1' in params:
query_params.append(('word1', params['word1'])) # noqa: E501
if 'pos1' in params:
query_params.append(('pos1', params['pos1'])) # noqa: E501
if 'word2' in params:
query_params.append(('word2', params['word2'])) # noqa: E501
if 'pos2' in params:
query_params.append(('pos2', params['pos2'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/wordnet-similarity/path', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='WordnetSimilarityResponseEntity', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def hirststonge_using_get6(self, access_token, word1, word2, **kwargs): # noqa: E501
"""WordNet Similarity WebAPI. # noqa: E501
WordNet similarity.<BR />Response<BR /> Github: <a href=\"https://github.com/keigohtr/apitore-response-parent/tree/master/wordnet-response\">wordnet-response</a><BR /> # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.hirststonge_using_get6(access_token, word1, word2, async=True)
>>> result = thread.get()
:param async bool
:param str access_token: Access Token (required)
:param str word1: Word1 (required)
:param str word2: Word2 (required)
:param str pos1: Part-of-speech1. [n:noun,v:verb,a:adjective,r:adverb]
:param str pos2: Part-of-speech2. [n:noun,v:verb,a:adjective,r:adverb]
:return: WordnetSimilarityResponseEntity
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.hirststonge_using_get6_with_http_info(access_token, word1, word2, **kwargs) # noqa: E501
else:
(data) = self.hirststonge_using_get6_with_http_info(access_token, word1, word2, **kwargs) # noqa: E501
return data
def hirststonge_using_get6_with_http_info(self, access_token, word1, word2, **kwargs): # noqa: E501
"""WordNet Similarity WebAPI. # noqa: E501
WordNet similarity.<BR />Response<BR /> Github: <a href=\"https://github.com/keigohtr/apitore-response-parent/tree/master/wordnet-response\">wordnet-response</a><BR /> # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.hirststonge_using_get6_with_http_info(access_token, word1, word2, async=True)
>>> result = thread.get()
:param async bool
:param str access_token: Access Token (required)
:param str word1: Word1 (required)
:param str word2: Word2 (required)
:param str pos1: Part-of-speech1. [n:noun,v:verb,a:adjective,r:adverb]
:param str pos2: Part-of-speech2. [n:noun,v:verb,a:adjective,r:adverb]
:return: WordnetSimilarityResponseEntity
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['access_token', 'word1', 'word2', 'pos1', 'pos2'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method hirststonge_using_get6" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'access_token' is set
if ('access_token' not in params or
params['access_token'] is None):
raise ValueError("Missing the required parameter `access_token` when calling `hirststonge_using_get6`") # noqa: E501
# verify the required parameter 'word1' is set
if ('word1' not in params or
params['word1'] is None):
raise ValueError("Missing the required parameter `word1` when calling `hirststonge_using_get6`") # noqa: E501
# verify the required parameter 'word2' is set
if ('word2' not in params or
params['word2'] is None):
raise ValueError("Missing the required parameter `word2` when calling `hirststonge_using_get6`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
if 'access_token' in params:
query_params.append(('access_token', params['access_token'])) # noqa: E501
if 'word1' in params:
query_params.append(('word1', params['word1'])) # noqa: E501
if 'pos1' in params:
query_params.append(('pos1', params['pos1'])) # noqa: E501
if 'word2' in params:
query_params.append(('word2', params['word2'])) # noqa: E501
if 'pos2' in params:
query_params.append(('pos2', params['pos2'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/wordnet-similarity/resnik', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='WordnetSimilarityResponseEntity', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def hirststonge_using_get7(self, access_token, word1, word2, **kwargs): # noqa: E501
"""WordNet Similarity WebAPI. # noqa: E501
WordNet similarity.<BR />Response<BR /> Github: <a href=\"https://github.com/keigohtr/apitore-response-parent/tree/master/wordnet-response\">wordnet-response</a><BR /> # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.hirststonge_using_get7(access_token, word1, word2, async=True)
>>> result = thread.get()
:param async bool
:param str access_token: Access Token (required)
:param str word1: Word1 (required)
:param str word2: Word2 (required)
:param str pos1: Part-of-speech1. [n:noun,v:verb,a:adjective,r:adverb]
:param str pos2: Part-of-speech2. [n:noun,v:verb,a:adjective,r:adverb]
:return: WordnetSimilarityResponseEntity
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.hirststonge_using_get7_with_http_info(access_token, word1, word2, **kwargs) # noqa: E501
else:
(data) = self.hirststonge_using_get7_with_http_info(access_token, word1, word2, **kwargs) # noqa: E501
return data
def hirststonge_using_get7_with_http_info(self, access_token, word1, word2, **kwargs): # noqa: E501
"""WordNet Similarity WebAPI. # noqa: E501
WordNet similarity.<BR />Response<BR /> Github: <a href=\"https://github.com/keigohtr/apitore-response-parent/tree/master/wordnet-response\">wordnet-response</a><BR /> # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.hirststonge_using_get7_with_http_info(access_token, word1, word2, async=True)
>>> result = thread.get()
:param async bool
:param str access_token: Access Token (required)
:param str word1: Word1 (required)
:param str word2: Word2 (required)
:param str pos1: Part-of-speech1. [n:noun,v:verb,a:adjective,r:adverb]
:param str pos2: Part-of-speech2. [n:noun,v:verb,a:adjective,r:adverb]
:return: WordnetSimilarityResponseEntity
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['access_token', 'word1', 'word2', 'pos1', 'pos2'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method hirststonge_using_get7" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'access_token' is set
if ('access_token' not in params or
params['access_token'] is None):
raise ValueError("Missing the required parameter `access_token` when calling `hirststonge_using_get7`") # noqa: E501
# verify the required parameter 'word1' is set
if ('word1' not in params or
params['word1'] is None):
raise ValueError("Missing the required parameter `word1` when calling `hirststonge_using_get7`") # noqa: E501
# verify the required parameter 'word2' is set
if ('word2' not in params or
params['word2'] is None):
raise ValueError("Missing the required parameter `word2` when calling `hirststonge_using_get7`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
if 'access_token' in params:
query_params.append(('access_token', params['access_token'])) # noqa: E501
if 'word1' in params:
query_params.append(('word1', params['word1'])) # noqa: E501
if 'pos1' in params:
query_params.append(('pos1', params['pos1'])) # noqa: E501
if 'word2' in params:
query_params.append(('word2', params['word2'])) # noqa: E501
if 'pos2' in params:
query_params.append(('pos2', params['pos2'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/wordnet-similarity/wupalmer', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='WordnetSimilarityResponseEntity', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
# worker/HaiNanCrawler.py (repo: xfrzrcj/SCCrawler, MIT License)
from worker import Crawler
import pymysql
from selenium import webdriver
import time
import re
baseUrl = "http://www.hnlzw.net"
indexUrl = "http://www.hnlzw.net/jlsc_sggb.php"
djcf_sggbUrl = "http://www.hnlzw.net/djcf_sggb.php"
jlsc_sxgbUrl = "http://www.hnlzw.net/jlsc_sxgb.php"
djcf_sxgbUrl = "http://www.hnlzw.net/djcf_sxgb.php"
browser = webdriver.Chrome("d:/chromedriver.exe")  # local chromedriver path; adjust for your environment
class HN_jlsc_sggb_Crawler(Crawler.CrawlerInterface):
def get_num(self):
soup = self.get_soup(indexUrl)
a = soup.find("a", text="末页").get("href")
pages = a.replace('/jlsc_sggb.php?ncount=8&nbegin=', "")
return int(pages)//8+1
def get_index(self):
return indexUrl
def join_url(self, i):
url = baseUrl+"/jlsc_sggb.php?ncount=8&nbegin="+str(i*8)
return url
def get_urls(self, url):
soup = self.get_soup(url)
lists = soup.find("div", id="mainrconlist")
tags = lists.find_all("li")
urls = []
for tag in tags:
ss = tag.find("a").get("href")
if ss.startswith("http"):
urls.append(tag.find("a").get("href"))
else:
urls.append(baseUrl + "/" + tag.find("a").get("href"))
return urls
def get_info(self, url):
if url == "http://www.hnlzw.net/page.php?xuh=44175" or url == \
"http://www.hnlzw.net/page.php?xuh=39496":
return None
info_result = Crawler.Info()
info_result.url = url
soup = self.get_soup(url)
title = soup.find("div", id="arttitl")
info_result.title = title.text
time_source = soup.find("div", id="artdes").text
ts = time_source.split(" ")
info_result.time = ts[0]
info_result.source = ts[1]
article = soup.find("div", id="artcon")
ps = article.find_all("p")
text = ""
for p in ps:
text = text + p.text + "\n"
self.get_resum_description_from_text(text, info_result)
return info_result
def process_info(self, info):
info.province = "海南"
info.source = info.source.replace("来源:", "")
info.time = info.time.replace("发布时间:", "")
info.postion = "审查调查(省一级党和国家机关、国企干部)"
return info
class HN_djcf_sggb_Crawler(Crawler.CrawlerInterface):
def get_num(self):
soup = self.get_soup(djcf_sggbUrl)
a = soup.find("a", text="末页").get("href")
pages = a.replace('/djcf_sggb.php?ncount=8&nbegin=', "")
return int(pages)//8+1
def get_index(self):
return djcf_sggbUrl
def join_url(self, i):
url = baseUrl+"/djcf_sggb.php?ncount=8&nbegin="+str(i*8)
return url
def get_urls(self, url):
soup = self.get_soup(url)
lists = soup.find("div", id="mainrconlist")
tags = lists.find_all("li")
urls = []
for tag in tags:
ss = tag.find("a").get("href")
if ss.startswith("http"):
urls.append(tag.find("a").get("href"))
else:
urls.append(baseUrl + "/" + tag.find("a").get("href"))
return urls
def get_info(self, url):
info_result = Crawler.Info()
info_result.url = url
browser.get(url)
time.sleep(0.5)
title = browser.find_element_by_id("arttitl").text
info_result.title = title
time_source = browser.find_element_by_id("artdes").text
tt = re.findall(r"\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2}", time_source)  # raw string avoids invalid-escape warnings
if len(tt) != 0:
info_result.time = tt[0]
source = re.findall("来源:.*?作者", time_source)
if len(source) == 0:
source = re.findall("来源:.*?编辑", time_source)
if len(source) > 0:
info_result.source = source[0]
article = browser.find_element_by_id("artcon")
ps = article.find_elements_by_tag_name("p")
text = ""
for p in ps:
text = text + p.text + "\n"
self.get_resum_description_from_text(text, info_result)
return info_result
def process_info(self, info):
info.province = "海南"
info.source = info.source.replace("来源:", "").replace("作者", "").replace("编辑", "")
info.time = info.time.replace("发布时间:", "")
info.postion = "纪律处分(省一级党和国家机关、国企干部)"
return info
class HN_jlsc_sxgb_Crawler(Crawler.CrawlerInterface):
def get_num(self):
soup = self.get_soup(jlsc_sxgbUrl)
a = soup.find("a", text="末页").get("href")
pages = a.replace('/jlsc_sxgb.php?ncount=8&nbegin=', "")
return int(pages)//8+1
def get_index(self):
return jlsc_sxgbUrl
def join_url(self, i):
url = baseUrl+"/jlsc_sxgb.php?ncount=8&nbegin="+str(i*8)
return url
def get_urls(self, url):
soup = self.get_soup(url)
lists = soup.find("div", id="mainrconlist")
tags = lists.find_all("li")
urls = []
for tag in tags:
ss = tag.find("a").get("href")
if ss.startswith("http"):
urls.append(tag.find("a").get("href"))
else:
urls.append(baseUrl + "/" + tag.find("a").get("href"))
return urls
def get_info(self, url):
info_result = Crawler.Info()
info_result.url = url
browser.get(url)
time.sleep(0.5)
title = browser.find_element_by_id("arttitl").text
info_result.title = title
time_source = browser.find_element_by_id("artdes").text
tt = re.findall(r"\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2}", time_source)  # raw string avoids invalid-escape warnings
if len(tt) != 0:
info_result.time = tt[0]
source = re.findall("来源:.*?作者", time_source)
if len(source) == 0:
source = re.findall("来源:.*?编辑", time_source)
if len(source) > 0:
info_result.source = source[0]
article = browser.find_element_by_id("artcon")
ps = article.find_elements_by_tag_name("p")
text = ""
for p in ps:
text = text + p.text + "\n"
self.get_resum_description_from_text(text, info_result)
return info_result
def process_info(self, info):
info.province = "海南"
info.source = info.source.replace("来源:", "").replace("作者", "").replace("编辑", "")
info.time = info.time.replace("发布时间:", "")
info.postion = "审查调查(市县干部)"
return info
class HN_djcf_sxgb_Crawler(Crawler.CrawlerInterface):
def get_num(self):
soup = self.get_soup(djcf_sxgbUrl)
a = soup.find("a", text="末页").get("href")
pages = a.replace('/djcf_sxgb.php?ncount=8&nbegin=', "")
return int(pages)//8+1
def get_index(self):
return djcf_sxgbUrl
def join_url(self, i):
url = baseUrl+"/djcf_sxgb.php?ncount=8&nbegin="+str(i*8)
return url
def get_urls(self, url):
soup = self.get_soup(url)
lists = soup.find("div", id="mainrconlist")
tags = lists.find_all("li")
urls = []
for tag in tags:
ss = tag.find("a").get("href")
if ss.startswith("http"):
urls.append(tag.find("a").get("href"))
else:
urls.append(baseUrl + "/" + tag.find("a").get("href"))
return urls
def get_info(self, url):
info_result = Crawler.Info()
info_result.url = url
browser.get(url)
time.sleep(0.5)
title = browser.find_element_by_id("arttitl").text
info_result.title = title
time_source = browser.find_element_by_id("artdes").text
tt = re.findall(r"\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2}", time_source)  # raw string avoids invalid-escape warnings
if len(tt) != 0:
info_result.time = tt[0]
source = re.findall("来源:.*?作者", time_source)
if len(source) == 0:
source = re.findall("来源:.*?编辑", time_source)
if len(source) > 0:
info_result.source = source[0]
article = browser.find_element_by_id("artcon")
ps = article.find_elements_by_tag_name("p")
text = ""
for p in ps:
text = text + p.text + "\n"
self.get_resum_description_from_text(text, info_result)
return info_result
def process_info(self, info):
info.province = "海南"
info.source = info.source.replace("来源:", "").replace("作者", "").replace("编辑", "")
info.time = info.time.replace("发布时间:", "")
info.postion = "纪律处分(市县干部)"
return info
c = HN_djcf_sxgb_Crawler()
conns = pymysql.connect(host='127.0.0.1', port=3306, user='root', passwd='123456', db='data', charset='utf8')
c.start(conns)
conns.close()
browser.quit()
# print(c.get_num())
# print(c.join_url(1))
# print(c.get_urls("http://www.hnlzw.net/jlsc_sggb.php?ncount=8&nbegin=8"))
# c.get_info("http://www.hnlzw.net/page.php?xuh=45930")
# src/meterpreter_traffic_parser/enums/tlv_type.py (repo: SpartaEN/meterpreter-traffic-parser, MIT License)
from enum import Enum
from ..exceptions import UnknowType
class TLVType(Enum):
# Base
# Represents an undefined/arbitrary value.
TLV_TYPE_ANY = 0
# Represents a command identifier.
TLV_TYPE_COMMAND_ID = 1
# Represents a request identifier value.
TLV_TYPE_REQUEST_ID = 2
# Represents an exception value.
TLV_TYPE_EXCEPTION = 3
# Represents a result value.
TLV_TYPE_RESULT = 4
# Argument basic types
# Represents a string value.
TLV_TYPE_STRING = 10
# Represents an unsigned integer value.
TLV_TYPE_UINT = 11
# Represents a boolean value.
TLV_TYPE_BOOL = 12
# Extended types
# Represents a length (unsigned integer).
TLV_TYPE_LENGTH = 25
# Represents arbitrary data (raw).
TLV_TYPE_DATA = 26
# Represents a set of flags (unsigned integer).
TLV_TYPE_FLAGS = 27
# Channel types
# Represents a channel identifier (unsigned integer).
TLV_TYPE_CHANNEL_ID = 50
# Represents a channel type (string).
TLV_TYPE_CHANNEL_TYPE = 51
# Represents channel data (raw).
TLV_TYPE_CHANNEL_DATA = 52
# Represents a channel data group (group).
TLV_TYPE_CHANNEL_DATA_GROUP = 53
# Represents a channel class (unsigned integer).
TLV_TYPE_CHANNEL_CLASS = 54
# Represents a channel parent identifier (unsigned integer).
TLV_TYPE_CHANNEL_PARENTID = 55
# Channel extended types
TLV_TYPE_SEEK_WHENCE = 70
TLV_TYPE_SEEK_OFFSET = 71
TLV_TYPE_SEEK_POS = 72
# Grouped identifiers
# Represents an exception code value (unsigned in).
TLV_TYPE_EXCEPTION_CODE = 300
# Represents an exception message value (string).
TLV_TYPE_EXCEPTION_STRING = 301
# Library loading
# Represents a path to the library to be loaded (string).
TLV_TYPE_LIBRARY_PATH = 400
# Represents a target path (string).
TLV_TYPE_TARGET_PATH = 401
# Represents a process identifier of the migration target (unsigned integer).
TLV_TYPE_MIGRATE_PID = 402
# Represents a migration payload (raw).
TLV_TYPE_MIGRATE_PAYLOAD = 404
# Represents a migration target architecture.
TLV_TYPE_MIGRATE_ARCH = 405
# Represents a migration technique (unsigned int).
TLV_TYPE_MIGRATE_TECHNIQUE = 406
# Represents a migration payload base address (unsigned int).
TLV_TYPE_MIGRATE_BASE_ADDR = 407
# Represents a migration payload entry point (unsigned int).
TLV_TYPE_MIGRATE_ENTRY_POINT = 408
# Represents a unix domain socket path, used to migrate on linux (string)
TLV_TYPE_MIGRATE_SOCKET_PATH = 409
# Represents a migration stub (raw).
TLV_TYPE_MIGRATE_STUB = 411
# Represents the name of the ReflectiveLoader function (string).
TLV_TYPE_LIB_LOADER_NAME = 412
# Represents the ordinal of the ReflectiveLoader function (int).
TLV_TYPE_LIB_LOADER_ORDINAL = 413
# Transport switching
# Represents the type of transport to switch to.
TLV_TYPE_TRANS_TYPE = 430
# Represents the new URL of the transport to use.
TLV_TYPE_TRANS_URL = 431
# Represents the user agent (for http).
TLV_TYPE_TRANS_UA = 432
# Represents the communications timeout.
TLV_TYPE_TRANS_COMM_TIMEOUT = 433
# Represents the session expiration.
TLV_TYPE_TRANS_SESSION_EXP = 434
# Represents the certificate hash (for https).
TLV_TYPE_TRANS_CERT_HASH = 435
# Represents the proxy host string (for http/s).
TLV_TYPE_TRANS_PROXY_HOST = 436
# Represents the proxy user name (for http/s).
TLV_TYPE_TRANS_PROXY_USER = 437
# Represents the proxy password (for http/s).
TLV_TYPE_TRANS_PROXY_PASS = 438
# Total time (seconds) to continue retrying comms.
TLV_TYPE_TRANS_RETRY_TOTAL = 439
# Time (seconds) to wait between reconnect attempts.
TLV_TYPE_TRANS_RETRY_WAIT = 440
# List of custom headers to send with the requests.
TLV_TYPE_TRANS_HEADERS = 441
# A single transport grouping.
TLV_TYPE_TRANS_GROUP = 442
# session/machine identification
# Represents a machine identifier.
TLV_TYPE_MACHINE_ID = 460
# Represents a UUID.
TLV_TYPE_UUID = 461
# Represents a Session GUID.
TLV_TYPE_SESSION_GUID = 462
# Packet encryption
# Represents DER-encoded RSA public key
TLV_TYPE_RSA_PUB_KEY = 550
# Represents the type of symmetric key
TLV_TYPE_SYM_KEY_TYPE = 551
# Represents the symmetric key
TLV_TYPE_SYM_KEY = 552
# Represents and RSA-encrypted symmetric key
TLV_TYPE_ENC_SYM_KEY = 553
# Pivots
# Represents the id of the pivot listener
TLV_TYPE_PIVOT_ID = 650
# Represents the data to be staged on new connections.
TLV_TYPE_PIVOT_STAGE_DATA = 651
# Represents named pipe name.
TLV_TYPE_PIVOT_NAMED_PIPE_NAME = 653
# Represents an extension value.
TLV_TYPE_EXTENSIONS = 20000
# Represents a user value.
TLV_TYPE_USER = 40000
# Represents a temporary value.
TLV_TYPE_TEMP = 60000
@staticmethod
def get_by_value(value):
for packet_type in TLVType:
if packet_type.value == value:
return packet_type.name
raise UnknowType(f"Unknow packet type, got {value}")
# Not working on some versions :(
# class TLVType(Enum):
# # Base
# # Represents an undefined/arbitrary value.
# TLV_TYPE_ANY = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_NONE.value, 0)
# # Represents a command identifier.
# TLV_TYPE_COMMAND_ID = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_UINT.value, 1)
# # Represents a request identifier value.
# TLV_TYPE_REQUEST_ID = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_STRING.value, 2)
# # Represents an exception value.
# TLV_TYPE_EXCEPTION = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_GROUP.value, 3)
# # Represents a result value.
# TLV_TYPE_RESULT = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_UINT.value, 4)
# # Argument basic types
# # Represents a string value.
# TLV_TYPE_STRING = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_STRING.value, 10)
# # Represents an unsigned integer value.
# TLV_TYPE_UINT = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_UINT.value, 11)
# # Represents a boolean value.
# TLV_TYPE_BOOL = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_BOOL.value, 12)
# # Extended types
# # Represents a length (unsigned integer).
# TLV_TYPE_LENGTH = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_UINT.value, 25)
# # Represents arbitrary data (raw).
# TLV_TYPE_DATA = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_RAW.value, 26)
# # Represents a set of flags (unsigned integer).
# TLV_TYPE_FLAGS = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_UINT.value, 27)
# # Channel types
# # Represents a channel identifier (unsigned integer).
# TLV_TYPE_CHANNEL_ID = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_UINT.value, 50)
# # Represents a channel type (string).
# TLV_TYPE_CHANNEL_TYPE = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_STRING.value, 51)
# # Represents channel data (raw).
# TLV_TYPE_CHANNEL_DATA = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_RAW.value, 52)
# # Represents a channel data group (group).
# TLV_TYPE_CHANNEL_DATA_GROUP = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_GROUP.value, 53)
# # Represents a channel class (unsigned integer).
# TLV_TYPE_CHANNEL_CLASS = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_UINT.value, 54)
# # Represents a channel parent identifier (unsigned integer).
# TLV_TYPE_CHANNEL_PARENTID = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_UINT.value, 55)
# # Channel extended types
# TLV_TYPE_SEEK_WHENCE = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_UINT.value, 70)
# TLV_TYPE_SEEK_OFFSET = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_UINT.value, 71)
# TLV_TYPE_SEEK_POS = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_UINT.value, 72)
# # Grouped identifiers
# # Represents an exception code value (unsigned int).
# TLV_TYPE_EXCEPTION_CODE = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_UINT.value, 300)
# # Represents an exception message value (string).
# TLV_TYPE_EXCEPTION_STRING = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_STRING.value, 301)
# # Library loading
# # Represents a path to the library to be loaded (string).
# TLV_TYPE_LIBRARY_PATH = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_STRING.value, 400)
# # Represents a target path (string).
# TLV_TYPE_TARGET_PATH = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_STRING.value, 401)
# # Represents a process identifier of the migration target (unsigned integer).
# TLV_TYPE_MIGRATE_PID = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_UINT.value, 402)
# # Represents a migration payload (raw).
# TLV_TYPE_MIGRATE_PAYLOAD = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_RAW.value, 404)
# # Represents a migration target architecture.
# TLV_TYPE_MIGRATE_ARCH = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_UINT.value, 405)
# # Represents a migration technique (unsigned int).
# TLV_TYPE_MIGRATE_TECHNIQUE = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_UINT.value, 406)
# # Represents a migration payload base address (unsigned int).
# TLV_TYPE_MIGRATE_BASE_ADDR = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_UINT.value, 407)
# # Represents a migration payload entry point (unsigned int).
# TLV_TYPE_MIGRATE_ENTRY_POINT = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_UINT.value, 408)
# # Represents a Unix domain socket path, used to migrate on Linux (string).
# TLV_TYPE_MIGRATE_SOCKET_PATH = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_STRING.value, 409)
# # Represents a migration stub (raw).
# TLV_TYPE_MIGRATE_STUB = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_RAW.value, 411)
# # Represents the name of the ReflectiveLoader function (string).
# TLV_TYPE_LIB_LOADER_NAME = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_STRING.value, 412)
# # Represents the ordinal of the ReflectiveLoader function (int).
# TLV_TYPE_LIB_LOADER_ORDINAL = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_UINT.value, 413)
# # Transport switching
# # Represents the type of transport to switch to.
# TLV_TYPE_TRANS_TYPE = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_UINT.value, 430)
# # Represents the new URL of the transport to use.
# TLV_TYPE_TRANS_URL = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_STRING.value, 431)
# # Represents the user agent (for http).
# TLV_TYPE_TRANS_UA = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_STRING.value, 432)
# # Represents the communications timeout.
# TLV_TYPE_TRANS_COMM_TIMEOUT = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_UINT.value, 433)
# # Represents the session expiration.
# TLV_TYPE_TRANS_SESSION_EXP = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_UINT.value, 434)
# # Represents the certificate hash (for https).
# TLV_TYPE_TRANS_CERT_HASH = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_RAW.value, 435)
# # Represents the proxy host string (for http/s).
# TLV_TYPE_TRANS_PROXY_HOST = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_STRING.value, 436)
# # Represents the proxy user name (for http/s).
# TLV_TYPE_TRANS_PROXY_USER = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_STRING.value, 437)
# # Represents the proxy password (for http/s).
# TLV_TYPE_TRANS_PROXY_PASS = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_STRING.value, 438)
# # Total time (seconds) to continue retrying comms.
# TLV_TYPE_TRANS_RETRY_TOTAL = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_UINT.value, 439)
# # Time (seconds) to wait between reconnect attempts.
# TLV_TYPE_TRANS_RETRY_WAIT = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_UINT.value, 440)
# # List of custom headers to send with the requests.
# TLV_TYPE_TRANS_HEADERS = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_STRING.value, 441)
# # A single transport grouping.
# TLV_TYPE_TRANS_GROUP = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_GROUP.value, 442)
# # Session/machine identification
# # Represents a machine identifier.
# TLV_TYPE_MACHINE_ID = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_STRING.value, 460)
# # Represents a UUID.
# TLV_TYPE_UUID = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_RAW.value, 461)
# # Represents a Session GUID.
# TLV_TYPE_SESSION_GUID = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_RAW.value, 462)
# # Packet encryption
# # Represents DER-encoded RSA public key
# TLV_TYPE_RSA_PUB_KEY = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_RAW.value, 550)
# # Represents the type of symmetric key
# TLV_TYPE_SYM_KEY_TYPE = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_UINT.value, 551)
# # Represents the symmetric key
# TLV_TYPE_SYM_KEY = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_RAW.value, 552)
# # Represents an RSA-encrypted symmetric key
# TLV_TYPE_ENC_SYM_KEY = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_RAW.value, 553)
# # Pivots
# # Represents the id of the pivot listener
# TLV_TYPE_PIVOT_ID = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_RAW.value, 650)
# # Represents the data to be staged on new connections.
# TLV_TYPE_PIVOT_STAGE_DATA = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_RAW.value, 651)
# # Represents named pipe name.
# TLV_TYPE_PIVOT_NAMED_PIPE_NAME = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_STRING.value, 653)
# # Represents an extension value.
# TLV_TYPE_EXTENSIONS = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_COMPLEX.value, 20000)
# # Represents a user value.
# TLV_TYPE_USER = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_COMPLEX.value, 40000)
# # Represents a temporary value.
# TLV_TYPE_TEMP = get_tlv_value(
# PacketMetaType.TLV_META_TYPE_COMPLEX.value, 60000)
# @staticmethod
# def get_by_value(value):
# for packet_type in TLVType:
# if packet_type.value == value:
# return packet_type.name
# raise UnknowType(f"Unknown packet type, got {value}")
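The commented-out constants above all build TLV type values by combining a meta-type with a numeric identifier. `get_tlv_value` is defined elsewhere in the file, but in Meterpreter-style TLV encodings the meta type occupies the high bits and is OR-ed with the low-bit type id. A minimal sketch under that assumption (the bit positions and helper body here are illustrative, not taken from the source):

```python
# Hypothetical sketch: assumes get_tlv_value ORs a meta-type bitmask
# with a numeric type id, as in Meterpreter-style TLV encodings.
TLV_META_TYPE_STRING = 1 << 16  # assumed bit positions for illustration
TLV_META_TYPE_UINT = 1 << 17

def get_tlv_value(meta_type, type_id):
    # Meta type lives in the high bits, the type id in the low bits,
    # so both can be recovered from a single integer with masks.
    return meta_type | type_id

TLV_TYPE_UINT = get_tlv_value(TLV_META_TYPE_UINT, 11)
assert TLV_TYPE_UINT & 0xFFFF == 11          # low bits carry the id
assert TLV_TYPE_UINT & TLV_META_TYPE_UINT    # high bits carry the meta type
```

This layout is why `get_by_value` below can match a packet's raw integer against the enum members directly.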
| 41.345304 | 83 | 0.697535 | 1,955 | 14,967 | 4.990281 | 0.122251 | 0.086101 | 0.067651 | 0.153752 | 0.990775 | 0.9836 | 0.9836 | 0.970275 | 0.905904 | 0.681427 | 0 | 0.028073 | 0.233647 | 14,967 | 361 | 84 | 41.459834 | 0.822493 | 0.79408 | 0 | 0 | 0 | 0 | 0.011277 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014493 | false | 0.014493 | 0.028986 | 0 | 0.942029 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
e37a55ccb07a05c843d24ea7d41d7fde3c6ca824 | 500 | py | Python | eval_medseg_timm-regnetx_002_Emboss.py | BrunoKrinski/segtool | cb604b5f38104c43a76450136e37c3d1c4b6d275 | [
"MIT"
] | null | null | null | eval_medseg_timm-regnetx_002_Emboss.py | BrunoKrinski/segtool | cb604b5f38104c43a76450136e37c3d1c4b6d275 | [
"MIT"
] | null | null | null | eval_medseg_timm-regnetx_002_Emboss.py | BrunoKrinski/segtool | cb604b5f38104c43a76450136e37c3d1c4b6d275 | [
"MIT"
] | null | null | null | import os
ls = [
    f"python main.py --configs configs/eval_medseg_unetplusplus_timm-regnetx_002_{i}_Emboss.yml"
    for i in range(5)
]
for l in ls:
os.system(l) | 45.454545 | 94 | 0.834 | 80 | 500 | 4.8375 | 0.3 | 0.129199 | 0.155039 | 0.245478 | 0.894057 | 0.894057 | 0.894057 | 0.894057 | 0.894057 | 0.894057 | 0 | 0.042644 | 0.062 | 500 | 11 | 95 | 45.454545 | 0.782516 | 0 | 0 | 0 | 0 | 0 | 0.868263 | 0.618762 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
e37cc8bfa606ab7201d6e330ec5b02b888bcc3a1 | 54 | py | Python | tests/test_key_point_extraction.py | fitternaveen15/key_point_extraction | f2af4dfa86fe5336556f2294109f74321ae41555 | [
"MIT"
] | null | null | null | tests/test_key_point_extraction.py | fitternaveen15/key_point_extraction | f2af4dfa86fe5336556f2294109f74321ae41555 | [
"MIT"
] | null | null | null | tests/test_key_point_extraction.py | fitternaveen15/key_point_extraction | f2af4dfa86fe5336556f2294109f74321ae41555 | [
"MIT"
] | null | null | null | from key_point_extraction import key_point_extraction
| 27 | 53 | 0.925926 | 8 | 54 | 5.75 | 0.625 | 0.347826 | 0.782609 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074074 | 54 | 1 | 54 | 54 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
479aa116fdef14fec4b5622028331183e16b0017 | 10,734 | py | Python | performance_test.py | andrew732/EAS499 | a0b1435913a11a999cc83df475eb875d45e51ce1 | [
"MIT"
] | 1 | 2020-06-04T00:07:42.000Z | 2020-06-04T00:07:42.000Z | performance_test.py | andrew732/EAS499 | a0b1435913a11a999cc83df475eb875d45e51ce1 | [
"MIT"
] | null | null | null | performance_test.py | andrew732/EAS499 | a0b1435913a11a999cc83df475eb875d45e51ce1 | [
"MIT"
] | null | null | null | import poker_bot
import time
import numpy as np
import random
import itertools
# runtime of preflop_action
def test_preflop():
positions = ["bb", "sb", "btn", "co", "hj", "lj", "utg2", "utg1", "utg"]
numbers = ['A', 'K', 'Q', 'J', 'T', '9', '8', '7', '6', '5', '4', '3', '2']
bets = [2, 6, 20]
no_limpers = 2
times = []
for last_bet in bets:
three_bet = last_bet == 20
for position in positions:
for i in range(len(numbers)):
for j in range(i, len(numbers)):
cards = numbers[i]+numbers[j]
if i == j:
hand = cards
start = time.time()
poker_bot.preflop_action(hand, position, last_bet, no_limpers, three_bet=three_bet)
length = time.time()-start
times.append(length)
else:
hand = cards+'o'
start = time.time()
poker_bot.preflop_action(hand, position, last_bet, no_limpers, three_bet=three_bet)
length = time.time() - start
times.append(length)
hand = cards+'s'
start = time.time()
poker_bot.preflop_action(hand, position, last_bet, no_limpers, three_bet=three_bet)
length = time.time() - start
times.append(length)
print("Number of iterations: "+str(len(times)))
print("Average runtime: "+str(np.mean(times)))
print("Standard Deviation runtime: "+str(np.std(times)))
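The functions in this file all follow the same measure-and-aggregate pattern: wrap each call in `time.time()` deltas, collect the samples, and report mean and standard deviation. `time.time()` can have coarse resolution on some platforms; `time.perf_counter()` is the stdlib's monotonic high-resolution clock intended for benchmarking. A small self-contained sketch of the same pattern (the `time_call` helper is illustrative, not part of this file):

```python
import time
import numpy as np

def time_call(fn, *args, repeats=5):
    """Time fn(*args) `repeats` times; return (mean, std) in seconds."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()  # monotonic, high-resolution clock
        fn(*args)
        samples.append(time.perf_counter() - start)
    return float(np.mean(samples)), float(np.std(samples))

mean_s, std_s = time_call(sum, range(1000))
assert mean_s >= 0.0 and std_s >= 0.0
```

The same helper could wrap any of the `poker_bot` calls below without duplicating the start/stop bookkeeping at every call site.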
# runtime of betting_round_1
def test_betting_round_1(opponents):
numbers = ['A', 'K', 'Q', 'J', 'T', '9', '8', '7', '6', '5', '4', '3', '2']
suits = ['h', 'c', 's', 'd']
times = []
pot = 50
bets = [0, 30]
types = [2, 3, 4]
for bet in bets:
for pot_type in types:
for i in range(len(numbers)):
for j in range(i, len(numbers)):
if i == j:
for suit1, suit2 in itertools.combinations(suits, 2):
cards = numbers[i]+suit1+numbers[j]+suit2
used = {numbers[i]+suit1, numbers[j]+suit2}
                            for _ in range(10):  # '_' avoids clobbering the outer loop index i
board_cards = ""
while len(board_cards) < 6:
candidate_number = random.randint(0, 12)
candidate_suit = random.randint(0, 3)
card = numbers[candidate_number]+suits[candidate_suit]
if card not in used:
used.add(card)
board_cards += card
# if len(times) % 1000 == 0:
# print(len(times))
start = time.time()
poker_bot.betting_round_1(cards, board_cards, opponents, bet, pot, pot_type=pot_type)
length = time.time()-start
times.append(length)
else:
for suit1, suit2 in zip(suits, suits):
cards = numbers[i]+suit1+numbers[j]+suit2
used = {numbers[i]+suit1, numbers[j]+suit2}
                            for _ in range(10):  # '_' avoids clobbering the outer loop index i
board_cards = ""
while len(board_cards) < 6:
candidate_number = random.randint(0, 12)
candidate_suit = random.randint(0, 3)
card = numbers[candidate_number]+suits[candidate_suit]
if card not in used:
used.add(card)
board_cards += card
# if len(times) % 1000 == 0:
# print(len(times))
start = time.time()
poker_bot.betting_round_1(cards, board_cards, opponents, bet, pot, pot_type=pot_type)
length = time.time()-start
times.append(length)
print("Number of iterations: "+str(len(times)))
print("Average runtime: "+str(np.mean(times)))
print("Standard Deviation runtime: "+str(np.std(times)))
# runtime of betting_round_2
def test_betting_round_2(opponents):
numbers = ['A', 'K', 'Q', 'J', 'T', '9', '8', '7', '6', '5', '4', '3', '2']
suits = ['h', 'c', 's', 'd']
times = []
pot = 50
bets = [0, 30]
types = [2, 3, 4]
for bet in bets:
for pot_type in types:
for i in range(len(numbers)):
for j in range(i, len(numbers)):
if i == j:
for suit1, suit2 in itertools.combinations(suits, 2):
cards = numbers[i]+suit1+numbers[j]+suit2
used = {numbers[i]+suit1, numbers[j]+suit2}
                            for _ in range(10):  # '_' avoids clobbering the outer loop index i
board_cards = ""
while len(board_cards) < 8:
candidate_number = random.randint(0, 12)
candidate_suit = random.randint(0, 3)
card = numbers[candidate_number]+suits[candidate_suit]
if card not in used:
used.add(card)
board_cards += card
# if len(times) % 1000 == 0:
# print(len(times))
start = time.time()
poker_bot.betting_round_2(cards, board_cards, opponents, bet, pot, pot_type=pot_type)
length = time.time()-start
times.append(length)
else:
for suit1, suit2 in zip(suits, suits):
cards = numbers[i]+suit1+numbers[j]+suit2
used = {numbers[i]+suit1, numbers[j]+suit2}
                            for _ in range(10):  # '_' avoids clobbering the outer loop index i
board_cards = ""
while len(board_cards) < 8:
candidate_number = random.randint(0, 12)
candidate_suit = random.randint(0, 3)
card = numbers[candidate_number]+suits[candidate_suit]
if card not in used:
used.add(card)
board_cards += card
# if len(times) % 1000 == 0:
# print(len(times))
start = time.time()
poker_bot.betting_round_2(cards, board_cards, opponents, bet, pot, pot_type=pot_type)
length = time.time()-start
times.append(length)
print("Number of iterations: "+str(len(times)))
print("Average runtime: "+str(np.mean(times)))
print("Standard Deviation runtime: "+str(np.std(times)))
# runtime of river_action
def test_river(opponents):
numbers = ['A', 'K', 'Q', 'J', 'T', '9', '8', '7', '6', '5', '4', '3', '2']
suits = ['h', 'c', 's', 'd']
times = []
pot = 50
bets = [0, 30]
types = [2, 3, 4]
for bet in bets:
for pot_type in types:
for i in range(len(numbers)):
for j in range(i, len(numbers)):
if i == j:
for suit1, suit2 in itertools.combinations(suits, 2):
cards = numbers[i]+suit1+numbers[j]+suit2
used = {numbers[i]+suit1, numbers[j]+suit2}
                            for _ in range(10):  # '_' avoids clobbering the outer loop index i
board_cards = ""
while len(board_cards) < 10:
candidate_number = random.randint(0, 12)
candidate_suit = random.randint(0, 3)
card = numbers[candidate_number]+suits[candidate_suit]
if card not in used:
used.add(card)
board_cards += card
# if len(times) % 1000 == 0:
# print(len(times))
start = time.time()
poker_bot.river_action(cards, board_cards, opponents, bet, pot, pot_type=pot_type)
length = time.time()-start
times.append(length)
else:
for suit1, suit2 in zip(suits, suits):
cards = numbers[i]+suit1+numbers[j]+suit2
used = {numbers[i]+suit1, numbers[j]+suit2}
                            for _ in range(10):  # '_' avoids clobbering the outer loop index i
board_cards = ""
while len(board_cards) < 10:
candidate_number = random.randint(0, 12)
candidate_suit = random.randint(0, 3)
card = numbers[candidate_number]+suits[candidate_suit]
if card not in used:
used.add(card)
board_cards += card
# if len(times) % 1000 == 0:
# print(len(times))
start = time.time()
poker_bot.river_action(cards, board_cards, opponents, bet, pot, pot_type=pot_type)
length = time.time()-start
times.append(length)
print("Number of iterations: "+str(len(times)))
print("Average runtime: "+str(np.mean(times)))
print("Standard Deviation runtime: "+str(np.std(times)))
test_preflop()
test_betting_round_1(1)
test_betting_round_1(2)
test_betting_round_2(1)
test_betting_round_2(2)
test_river(1)
test_river(2) | 48.570136 | 117 | 0.412428 | 1,054 | 10,734 | 4.077799 | 0.096774 | 0.05584 | 0.036296 | 0.05584 | 0.919032 | 0.919032 | 0.912285 | 0.912285 | 0.911354 | 0.911354 | 0 | 0.036069 | 0.483417 | 10,734 | 221 | 118 | 48.570136 | 0.739044 | 0.036985 | 0 | 0.861702 | 0 | 0 | 0.034687 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.021277 | false | 0 | 0.026596 | 0 | 0.047872 | 0.06383 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
479f6d026e2b162a76d5ccd86d3f06c3cba275ef | 68,568 | py | Python | benchmarks/SimResults/combinations_spec_mylocality/cmp_calculixlibquantumzeusmpsjeng/power.py | TugberkArkose/MLScheduler | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 | [
"Unlicense"
] | null | null | null | benchmarks/SimResults/combinations_spec_mylocality/cmp_calculixlibquantumzeusmpsjeng/power.py | TugberkArkose/MLScheduler | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 | [
"Unlicense"
] | null | null | null | benchmarks/SimResults/combinations_spec_mylocality/cmp_calculixlibquantumzeusmpsjeng/power.py | TugberkArkose/MLScheduler | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 | [
"Unlicense"
] | null | null | null | power = {'BUSES': {'Area': 1.33155,
'Bus/Area': 1.33155,
'Bus/Gate Leakage': 0.00662954,
'Bus/Peak Dynamic': 0.0,
'Bus/Runtime Dynamic': 0.0,
'Bus/Subthreshold Leakage': 0.0691322,
'Bus/Subthreshold Leakage with power gating': 0.0259246,
'Gate Leakage': 0.00662954,
'Peak Dynamic': 0.0,
'Runtime Dynamic': 0.0,
'Subthreshold Leakage': 0.0691322,
'Subthreshold Leakage with power gating': 0.0259246},
'Core': [{'Area': 32.6082,
'Execution Unit/Area': 8.2042,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.266951,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.412364,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 1.38304,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.122718,
'Execution Unit/Instruction Scheduler/Area': 2.17927,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.328073,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.00115349,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.20978,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.635667,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.017004,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00962066,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00730101,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 1.00996,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00529112,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 2.07911,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 1.10075,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0800117,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0455351,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 4.84781,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.841232,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.000856399,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.55892,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.631309,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.0178624,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00897339,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 2.36772,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.114878,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.0641291,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.41629,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 8.22407,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.261285,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0230434,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.268957,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.17042,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.530242,
'Execution Unit/Register Files/Runtime Dynamic': 0.193464,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0442632,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00607074,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.723044,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 1.55743,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.0920413,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0345155,
'Execution Unit/Runtime Dynamic': 4.93636,
'Execution Unit/Subthreshold Leakage': 1.83518,
'Execution Unit/Subthreshold Leakage with power gating': 0.709678,
'Gate Leakage': 0.372997,
'Instruction Fetch Unit/Area': 5.86007,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00188641,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00188641,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00164681,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000639555,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.0024481,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00786772,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0179528,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0590479,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.163829,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.43323,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.402046,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.556438,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.96874,
'Instruction Fetch Unit/Runtime Dynamic': 1.14813,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932587,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.408542,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.155315,
'L2/Runtime Dynamic': 0.0314597,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80969,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 4.58364,
'Load Store Unit/Data Cache/Runtime Dynamic': 1.68203,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0351387,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.108268,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.108268,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 5.09698,
'Load Store Unit/Runtime Dynamic': 2.32423,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.26697,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.53394,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591622,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283406,
'Memory Management Unit/Area': 0.434579,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0947485,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0970608,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00813591,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.399995,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0659695,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.722295,
'Memory Management Unit/Runtime Dynamic': 0.16303,
'Memory Management Unit/Subthreshold Leakage': 0.0769113,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0399462,
'Peak Dynamic': 27.7291,
'Renaming Unit/Area': 0.369768,
'Renaming Unit/FP Front End RAT/Area': 0.168486,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00489731,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 3.33511,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.911566,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0437281,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.024925,
'Renaming Unit/Free List/Area': 0.0414755,
'Renaming Unit/Free List/Gate Leakage': 4.15911e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0401324,
'Renaming Unit/Free List/Runtime Dynamic': 0.0434736,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000670426,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000377987,
'Renaming Unit/Gate Leakage': 0.00863632,
'Renaming Unit/Int Front End RAT/Area': 0.114751,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.00038343,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.86945,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.312977,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00611897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00348781,
'Renaming Unit/Peak Dynamic': 4.56169,
'Renaming Unit/Runtime Dynamic': 1.26802,
'Renaming Unit/Subthreshold Leakage': 0.070483,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0362779,
'Runtime Dynamic': 9.87124,
'Subthreshold Leakage': 6.21877,
'Subthreshold Leakage with power gating': 2.58311},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.112822,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.291304,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.592092,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.252481,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.407243,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.205563,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.865287,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.19799,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 5.26347,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.111859,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0105902,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.119518,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.078321,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.231377,
'Execution Unit/Register Files/Runtime Dynamic': 0.0889112,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.280003,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.604376,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 2.25526,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00128101,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00128101,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.0011453,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000459521,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00112509,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00483242,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0112269,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.075292,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 4.78922,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.196589,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.255726,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 7.24016,
'Instruction Fetch Unit/Runtime Dynamic': 0.543666,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.055677,
'L2/Runtime Dynamic': 0.0100731,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 2.77853,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.76309,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0498683,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0498683,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 3.01402,
'Load Store Unit/Runtime Dynamic': 1.05889,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.122967,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.245934,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0436413,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0444645,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.297776,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0322663,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.528853,
'Memory Management Unit/Runtime Dynamic': 0.0767308,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 19.6917,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.29425,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.0149722,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.122407,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.431629,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 4.37625,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.103663,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.28411,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.54496,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.246711,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.397935,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.200864,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.84551,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.198616,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 5.1803,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.102955,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0103481,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.114244,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0765309,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.217198,
'Execution Unit/Register Files/Runtime Dynamic': 0.086879,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.266601,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.579435,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 2.20131,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00138975,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00138975,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.0012451,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000500937,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00109937,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00512397,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0120876,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0735711,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 4.67975,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.196965,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.249881,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 7.12538,
'Instruction Fetch Unit/Runtime Dynamic': 0.537629,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0513867,
'L2/Runtime Dynamic': 0.0094676,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 2.75983,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.75191,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0492634,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0492635,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 2.99247,
'Load Store Unit/Runtime Dynamic': 1.04412,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.121475,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.242951,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.043112,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0438687,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.29097,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0323341,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.521137,
'Memory Management Unit/Runtime Dynamic': 0.0762028,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 19.4601,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.270827,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.0144268,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.119935,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.405189,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 4.27392,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.0,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.202689,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.0,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.061615,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.0993827,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.050165,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.211163,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.0704698,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 3.96061,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.00258441,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.0186886,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0191133,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.0186886,
'Execution Unit/Register Files/Runtime Dynamic': 0.0216977,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.0393716,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.110646,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 0.951572,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.000629671,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.000629671,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.000554092,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000217588,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.000274564,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.002088,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0058354,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0183741,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 1.16875,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.0827355,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.0624067,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 3.44399,
'Instruction Fetch Unit/Runtime Dynamic': 0.17144,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0292991,
'L2/Runtime Dynamic': 0.00893327,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 1.5541,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.166453,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0102551,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.010255,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 1.60253,
'Load Store Unit/Runtime Dynamic': 0.227281,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.0252872,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.0505739,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.00897453,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.00941403,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.0726686,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0135644,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.244194,
'Memory Management Unit/Runtime Dynamic': 0.0229784,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 12.8701,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.0,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.0027799,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.0317188,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.0344987,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 1.4167,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328}],
'DRAM': {'Area': 0,
'Gate Leakage': 0,
'Peak Dynamic': 2.266696003552378,
'Runtime Dynamic': 2.266696003552378,
'Subthreshold Leakage': 4.252,
'Subthreshold Leakage with power gating': 4.252},
'L3': [{'Area': 61.9075,
'Gate Leakage': 0.0484137,
'Peak Dynamic': 0.280314,
'Runtime Dynamic': 0.142193,
'Subthreshold Leakage': 6.80085,
'Subthreshold Leakage with power gating': 3.32364}],
'Processor': {'Area': 191.908,
'Gate Leakage': 1.53485,
'Peak Dynamic': 80.0313,
'Peak Power': 113.144,
'Runtime Dynamic': 20.0803,
'Subthreshold Leakage': 31.5774,
'Subthreshold Leakage with power gating': 13.9484,
'Total Cores/Area': 128.669,
'Total Cores/Gate Leakage': 1.4798,
'Total Cores/Peak Dynamic': 79.751,
'Total Cores/Runtime Dynamic': 19.9381,
'Total Cores/Subthreshold Leakage': 24.7074,
'Total Cores/Subthreshold Leakage with power gating': 10.2429,
'Total L3s/Area': 61.9075,
'Total L3s/Gate Leakage': 0.0484137,
'Total L3s/Peak Dynamic': 0.280314,
'Total L3s/Runtime Dynamic': 0.142193,
'Total L3s/Subthreshold Leakage': 6.80085,
'Total L3s/Subthreshold Leakage with power gating': 3.32364,
'Total Leakage': 33.1122,
'Total NoCs/Area': 1.33155,
'Total NoCs/Gate Leakage': 0.00662954,
'Total NoCs/Peak Dynamic': 0.0,
'Total NoCs/Runtime Dynamic': 0.0,
'Total NoCs/Subthreshold Leakage': 0.0691322,
'Total NoCs/Subthreshold Leakage with power gating': 0.0259246}}
# models/models.py (sfcurre/aedes_model, MIT License)
import tensorflow as tf
def ff_model(input_shape):
    """Feed-forward model: Conv1D feature extraction followed by dense layers."""
    xin = tf.keras.layers.Input(input_shape)
    c1 = tf.keras.layers.Conv1D(64, 3, activation='relu', data_format='channels_last')(xin)
    c2 = tf.keras.layers.Conv1D(64, 3, activation='relu', data_format='channels_last')(c1)
    h1 = tf.keras.layers.BatchNormalization()(c2)
    h2 = tf.keras.layers.Flatten()(h1)
    h3 = tf.keras.layers.Dense(64, activation='relu')(h2)
    # h3 = tf.keras.layers.Dropout(0.2)(h3)
    h4 = tf.keras.layers.Dense(64, activation='relu')(h3)
    # h4 = tf.keras.layers.Dropout(0.2)(h4)
    xout = tf.keras.layers.Dense(1, activation='relu',
                                 kernel_regularizer=tf.keras.regularizers.l2(0.001))(h4)
    return tf.keras.models.Model(xin, xout)


def lstm_model(input_shape):
    """Conv1D feature extraction followed by a two-layer LSTM."""
    xin = tf.keras.layers.Input(input_shape)
    c1 = tf.keras.layers.Conv1D(64, 3, activation='relu', data_format='channels_last')(xin)
    # c1 = tf.keras.layers.Dropout(0.2)(c1)
    c2 = tf.keras.layers.Conv1D(64, 3, activation='relu', data_format='channels_last')(c1)
    # c2 = tf.keras.layers.Dropout(0.2)(c2)
    h1 = tf.keras.layers.BatchNormalization()(c2)
    r1 = tf.keras.layers.LSTM(64, return_sequences=True)(h1)
    r2 = tf.keras.layers.LSTM(64)(r1)
    xout = tf.keras.layers.Dense(1, activation='relu',
                                 kernel_regularizer=tf.keras.regularizers.l2(0.001))(r2)
    return tf.keras.models.Model(xin, xout)


def gru_model(input_shape):
    """Conv1D feature extraction followed by a two-layer GRU."""
    xin = tf.keras.layers.Input(input_shape)
    c1 = tf.keras.layers.Conv1D(64, 3, activation='relu', data_format='channels_last')(xin)
    c2 = tf.keras.layers.Conv1D(64, 3, activation='relu', data_format='channels_last')(c1)
    h1 = tf.keras.layers.BatchNormalization()(c2)
    r1 = tf.keras.layers.GRU(64, return_sequences=True)(h1)
    r2 = tf.keras.layers.GRU(64)(r1)
    xout = tf.keras.layers.Dense(1, activation='relu',
                                 kernel_regularizer=tf.keras.regularizers.l2(0.001))(r2)
    return tf.keras.models.Model(xin, xout)
# face_recognition/api_aws/migrations/0003_auto_20200616_1247.py (Fabriciooml/AWS-Rekognition-with-Django, MIT License)
# Generated by Django 3.0.3 on 2020-06-16 15:47
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('api_aws', '0002_auto_20200616_1244'),
]
operations = [
migrations.AddField(
model_name='face',
name='faceId',
field=models.CharField(default=100, max_length=50),
preserve_default=False,
),
migrations.AddField(
model_name='face',
name='height',
field=models.FloatField(default=0, max_length=15),
preserve_default=False,
),
migrations.AddField(
model_name='face',
name='imageId',
field=models.CharField(default=0, max_length=50),
preserve_default=False,
),
migrations.AddField(
model_name='face',
name='left',
field=models.FloatField(default=0, max_length=15),
preserve_default=False,
),
migrations.AddField(
model_name='face',
name='name',
field=models.CharField(default=0, max_length=50),
preserve_default=False,
),
migrations.AddField(
model_name='face',
name='similarity',
field=models.FloatField(default=0, max_length=15),
preserve_default=False,
),
migrations.AddField(
model_name='face',
name='top',
field=models.FloatField(default=0, max_length=15),
preserve_default=False,
),
migrations.AddField(
model_name='face',
name='width',
field=models.FloatField(default=0, max_length=15),
preserve_default=False,
),
]
# listmaker/inputManager.py (vsuley/listmaker, MIT License)
import sys
class InputManager(object):
pass
# core/tests/data/image_constants.py (Tim810306/oppia, Apache-2.0 License)
# Copyright 2020 The Oppia Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS-IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
PNG_IMAGE_WRONG_DIMENSIONS_BASE64 = (
'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAKAAAACWCAIAAADWjhTKAAAACXBIWXMAAAsTAAALEwEAmpwYAAAAB3RJTUUH5AoOCyEtLns60QAAAB1pVFh0Q29tbWVudAAAAAAAQ3JlYXRlZCB3aXRoIEdJTVBkLmUHAAAgAElEQVR4AQClgVp+Af///wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAADOw4nEAACAASURBVAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAApInU9AAAIABJREFUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADlBDWUAAAgAElEQVQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAPyB0sYAACAASURBVAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAnAkMBwAAGetJREFUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABjBZz6QIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAKbOBCnlgRRIAAAAAElFTkSuQmCC') # pylint: disable=line-too-long
PNG_IMAGE_BROKEN_BASE64 = (
'data:image/png;base64,iVBORw0KGgoCCAANSUhEUgAAAKAAAACWCAIAAADWjhTKAAAACXBIWXMAAAsTAAALEwEAmpwYAAAAB3RJTUUH5AoOCyEtLns60QAAAB1pVFh0Q29tbWVudAAAAAAAQ3JlYXRlZCB3aXRoIEdJTVBkLmUHAAAgAElEQVR4AQClgVp+Af///wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAADOw4nEAACAASURBVAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAApInU9AAAIABJREFUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADlBDWUAAAgAElEQVQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAKbOBCnlgRRIAAAAAElFTkSuQmCC') # pylint: disable=line-too-long
JPG_IMAGE_BASE64 = (
'data:image/png;base64,/9j/4AAQSkZJRgABAQEASABIAAD//gATQ3JlYXRlZCB3aXRoIEdJTVD/2wBDAAEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/2wBDAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/wgARCAABAAEDAREAAhEBAxEB/8QAFAABAAAAAAAAAAAAAAAAAAAACf/EABQBAQAAAAAAAAAAAAAAAAAAAAD/2gAMAwEAAhADEAAAAX8P/8QAFBABAAAAAAAAAAAAAAAAAAAAAP/aAAgBAQABBQJ//8QAFBEBAAAAAAAAAAAAAAAAAAAAAP/aAAgBAwEBPwF//8QAFBEBAAAAAAAAAAAAAAAAAAAAAP/aAAgBAgEBPwF//8QAFBABAAAAAAAAAAAAAAAAAAAAAP/aAAgBAQAGPwJ//8QAFBABAAAAAAAAAAAAAAAAAAAAAP/aAAgBAQABPyF//9oADAMBAAIAAwAAABAf/8QAFBEBAAAAAAAAAAAAAAAAAAAAAP/aAAgBAwEBPxB//8QAFBEBAAAAAAAAAAAAAAAAAAAAAP/aAAgBAgEBPxB//8QAFBABAAAAAAAAAAAAAAAAAAAAAP/aAAgBAQABPxB//9k=') # pylint: disable=line-too-long
BROKEN_BASE64 = (
'data:imag64,/9j/4AAQSkZJRgABAQEASABIAAD//gATQ3JlYXRlZCB3aXRoIEdJTVD/2wBDAAEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/2wBDAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/wgARCAABAAEDAREAAhEBAxEB/8QAFAABAAAAAAAAAAAAAAAAAAAACf/EABQBAQAAAAAAAAAAAAAAAAAAAAD/2gAMAwEAAhADEAAAAX8P/8QAFBABAAAAAAAAAAAAAAAAAAAAAP/aAAgBAQABBQJ//8QAFBEBAAAAAAAAAAAAAAAAAAAAAP/aAAgBAwEBPwF//8QAFBEBAAAAAAAAAAAAAAAAAAAAAP/aAAgBAgEBPwF//8QAFBABAAAAAAAAAAAAAAAAAAAAAP/aAAgBAQAGPwJ//8QAFBABAAAAAAAAAAAAAAAAAAAAAP/aAAgBAQABPyF//9oADAMBAAIAAwAAABAf/8QAFBEBAAAAAAAAAAAAAAAAAAAAAP/aAAgBAwEBPxB//8QAFBEBAAAAAAAAAAAAAAAAAAAAAP/aAAgBAgEBPxB//8QAFBABAAAAAAAAAAAAAAAAAAAAAP/aAAgBAQABPxB//9k=') # pylint: disable=line-too-long
from tornado.gen import with_timeout, TimeoutError
from anthill.common import admin as a
from . model.sources import NoSuchSourceError, SourceCodeError, JavascriptSourceError
from . model.build import JavascriptBuildError, NoSuchClass, NoSuchMethod
from . model.session import APIError, JavascriptSessionError
from anthill.common.environment import EnvironmentClient, AppNotFound
from anthill.common.access import AccessToken
from anthill.common.internal import Internal, InternalError
from anthill.common.validate import validate
from anthill.common import ElapsedTime
from anthill.common.jsonrpc import JsonRPCError
from anthill.common.source import NoSuchProjectError, SourceCodeRoot
from datetime import datetime, timedelta
import traceback
import logging
import ujson
class ApplicationController(a.AdminController):
@validate(app_id="str")
async def get(self, app_id):
environment_client = EnvironmentClient(self.application.cache)
sources = self.application.sources
try:
app = await environment_client.get_app_info(app_id)
except AppNotFound:
raise a.ActionError("App was not found.")
try:
await sources.get_project(self.gamespace, app_id)
except NoSuchProjectError:
has_no_settings = True
commits = {}
else:
has_no_settings = False
try:
commits = await sources.list_versions(self.gamespace, app_id)
except SourceCodeError:
commits = {}
result = {
"app_id": app_id,
"app_record_id": app.id,
"app_name": app.title,
"versions": app.versions,
"commits": commits,
"has_no_settings": has_no_settings
}
return result
def render(self, data):
r = [
a.breadcrumbs([
a.link("apps", "Applications")
], data["app_name"])
]
if data["has_no_settings"]:
r.append(ApplicationSettingsController.no_settings_notice())
commits = data["commits"]
def get_version_commit(version):
commit = commits.get(version, None)
if commit is not None:
return commit.repository_commit[:7]
return None
r.extend([
a.links("Application '{0}' versions".format(data["app_name"]), links=[
a.link("app_version", v_name, icon="tags", badge=get_version_commit(v_name),
app_id=self.context.get("app_id"), app_version=v_name)
for v_name, v_id in data["versions"].items()
]),
a.links("Navigate", [
a.link("app_settings", "Application Settings", icon="cogs", app_id=self.context.get("app_id")),
a.link("apps", "Go back", icon="chevron-left"),
a.link("/environment/app", "Manage app '{0}' at 'Environment' service.".format(data["app_name"]),
icon="link text-danger", record_id=data["app_record_id"]),
])
])
return r
def access_scopes(self):
return ["exec_admin"]
class ApplicationSettingsController(a.AdminController):
@validate(app_id="str")
async def get(self, app_id):
environment_client = EnvironmentClient(self.application.cache)
sources = self.application.sources
try:
app = await environment_client.get_app_info(app_id)
        except AppNotFound:
raise a.ActionError("App was not found.")
try:
project = await sources.get_project(self.gamespace, app_id)
        except NoSuchProjectError:
repository_url = ""
ssh_private_key = ""
repository_branch = SourceCodeRoot.DEFAULT_BRANCH
else:
repository_url = project.repository_url
repository_branch = project.repository_branch
ssh_private_key = project.ssh_private_key
result = {
"app_id": app_id,
"app_record_id": app.id,
"app_name": app.title,
"repository_url": repository_url,
"repository_branch": repository_branch,
"ssh_private_key": ssh_private_key
}
return result
@validate(repository_url="str", repository_branch="str", ssh_private_key="str")
async def update_settings(self, repository_url, repository_branch, ssh_private_key, *ignored):
app_id = self.context.get("app_id")
environment_client = EnvironmentClient(self.application.cache)
sources = self.application.sources
builds = self.application.builds
try:
await environment_client.get_app_info(app_id)
        except AppNotFound:
raise a.ActionError("App was not found.")
if not (await builds.validate_repository_url(repository_url, ssh_private_key)):
raise a.ActionError("Error: \"{0}\" is not a valid Git repository URL, or "
"the repository does not exist, or the ssh key is wrong.".format(repository_url))
await sources.update_project(self.gamespace, app_id, repository_url, repository_branch, ssh_private_key)
raise a.Redirect("app_settings",
message="Application settings have been updated.",
app_id=app_id)
@staticmethod
def no_settings_notice():
return a.notice(
"Source Code Repository Is Not Configured",
"""
You have not defined your source code repository yet. <br>
To deploy your codebase to the Exec service, you must use a Git repository. <br>
            Please create one if you don't have one yet and configure it in the settings below.
""", style="danger")
def render(self, data):
r = [
a.breadcrumbs([
a.link("apps", "Applications"),
a.link("app", data["app_name"], app_id=self.context.get("app_id")),
], "Application Settings")]
if not data["repository_url"]:
r.append(ApplicationSettingsController.no_settings_notice())
r.extend([
a.form("Application Settings", fields={
"repository_url": a.field(
"Git source code repository url (ssh only)", "text", "primary",
description="""
                        You need to use an SSH remote URL in order to deploy source code to this service. See
<a href="https://help.github.com/articles/which-remote-url-should-i-use/#cloning-with-ssh-urls"
target="_blank">this</a>.
""", order=1),
"repository_branch": a.field(
"Git branch to use on the source code repository", "text", "primary",
order=2),
"ssh_private_key": a.field(
"Private SSH key", "text", "primary",
multiline=6, description="""
                    Please generate an SSH key pair, paste the private key (for example,
                    <span class="label label-default">id_rsa</span>) here, and add the public key (for example,
                    <span class="label label-default">id_rsa.pub</span>) to the SSH keys of a user with
                    read access to the repository above.
For example, on GitHub, it can be done <a href="https://github.com/settings/keys"
target="_blank">here</a>.
""", order=3)
}, methods={
"update_settings": a.method("Update Settings", "primary")
}, data=data),
a.links("Navigate", [
a.link("app", "Go back", icon="chevron-left", app_id=self.context.get("app_id")),
a.link("/environment/app", "Manage app '{0}' at 'Environment' service.".format(data["app_name"]),
icon="link text-danger", record_id=data["app_record_id"]),
])
])
return r
def access_scopes(self):
return ["exec_admin"]
class ServerCodeSettingsController(a.AdminController):
async def get(self):
sources = self.application.sources
try:
project = await sources.get_server_project(self.gamespace)
        except NoSuchProjectError:
repository_url = ""
ssh_private_key = ""
repository_branch = SourceCodeRoot.DEFAULT_BRANCH
else:
repository_url = project.repository_url
repository_branch = project.repository_branch
ssh_private_key = project.ssh_private_key
result = {
"repository_url": repository_url,
"repository_branch": repository_branch,
"ssh_private_key": ssh_private_key
}
return result
@validate(repository_url="str", repository_branch="str", ssh_private_key="str")
async def update_settings(self, repository_url, repository_branch, ssh_private_key, *ignored):
sources = self.application.sources
builds = self.application.builds
if not (await builds.validate_repository_url(repository_url, ssh_private_key)):
raise a.ActionError("Error: \"{0}\" is not a valid Git repository URL, or "
"the repository does not exist, or the ssh key is wrong.".format(repository_url))
await sources.update_server_project(self.gamespace, repository_url, repository_branch, ssh_private_key)
raise a.Redirect("server", message="Server Code settings have been updated.")
@staticmethod
def no_settings_notice():
return a.notice(
"Server Code Repository Is Not Configured",
"""
You have not defined your source code repository yet. <br>
To deploy your codebase to the Exec service, you must use a Git repository. <br>
            Please create one if you don't have one yet and configure it in the settings below.
""", style="danger")
def render(self, data):
r = [
a.breadcrumbs([
a.link("server", "Server Code"),
], "Server Code Settings"),
ServerCodeController.about_notice()
]
if not data["repository_url"]:
r.append(ServerCodeSettingsController.no_settings_notice())
r.extend([
a.form("Server Code Settings", fields={
"repository_url": a.field(
"Git source code repository url (ssh only)", "text", "primary",
description="""
                        You need to use an SSH remote URL in order to deploy source code to this service. See
<a href="https://help.github.com/articles/which-remote-url-should-i-use/#cloning-with-ssh-urls"
target="_blank">this</a>.
""", order=1),
"repository_branch": a.field(
"Git branch to use on the source code repository", "text", "primary",
order=2),
"ssh_private_key": a.field(
"Private SSH key", "text", "primary",
multiline=6, description="""
                    Please generate an SSH key pair, paste the private key (for example,
                    <span class="label label-default">id_rsa</span>) here, and add the public key (for example,
                    <span class="label label-default">id_rsa.pub</span>) to the SSH keys of a user with
                    read access to the repository above.
For example, on GitHub, it can be done <a href="https://github.com/settings/keys"
target="_blank">here</a>.
""", order=3)
}, methods={
"update_settings": a.method("Update Settings", "primary")
}, data=data),
a.links("Navigate", [
a.link("server", "Go back", icon="chevron-left"),
])
])
return r
def access_scopes(self):
return ["exec_admin"]
class ApplicationVersionController(a.AdminController):
@validate(app_id="str", app_version="str")
async def get(self, app_id, app_version):
environment_client = EnvironmentClient(self.application.cache)
sources = self.application.sources
builds = self.application.builds
try:
app = await environment_client.get_app_info(app_id)
except AppNotFound:
raise a.ActionError("App was not found.")
try:
project_settings = await sources.get_project(self.gamespace, app_id)
        except NoSuchProjectError:
raise a.Redirect("app_settings", message="Please define project settings first", app_id=app_id)
try:
commit = await sources.get_version_commit(self.gamespace, app_id, app_version)
except NoSuchSourceError:
current_commit = None
else:
current_commit = commit.repository_commit
try:
project = builds.get_project(project_settings)
await with_timeout(timedelta(seconds=10), project.init())
except JavascriptBuildError as e:
raise a.ActionError(e.message)
        except TimeoutError:
commits_history = None
else:
try:
commits_history = await project.get_commits_history(50)
except SourceCodeError as e:
raise a.ActionError(e.message)
result = {
"app_id": app_id,
"app_record_id": app.id,
"app_name": app.title,
"versions": app.versions,
"current_commit": current_commit,
"commits_history": commits_history
}
return result
@staticmethod
def no_commit_notice(app_version):
return a.notice(
"This version ({0}) is disabled".format(app_version),
"""
            The version {0} is not attached to any commit of the Git repository. <br>
            Therefore, running source code for this version is not possible. <br>
            Please attach the version to a commit either by using "Update To The Last Commit",
            or by clicking "Use This" on a commit in the recent commits history.
""".format(app_version), style="danger")
async def switch_commit_context(self):
commit = self.context.get("commit")
await self.switch_commit(commit)
async def switch_to_latest_commit(self):
app_id = self.context.get("app_id")
app_version = self.context.get("app_version")
environment_client = EnvironmentClient(self.application.cache)
sources = self.application.sources
builds = self.application.builds
try:
await environment_client.get_app_info(app_id)
except AppNotFound:
raise a.ActionError("App was not found.")
try:
project_settings = await sources.get_project(self.gamespace, app_id)
except NoSuchProjectError as e:
raise a.Redirect("app_settings", message="Please define project settings first", app_id=app_id)
try:
project = builds.get_project(project_settings)
await with_timeout(timedelta(seconds=10), project.init())
except JavascriptBuildError as e:
raise a.ActionError(e.message)
except TimeoutError:
raise a.ActionError("Repository has not updated itself yet.")
try:
latest_commit = await project.pull_and_get_latest_commit()
except SourceCodeError as e:
raise a.ActionError(e.message)
if not latest_commit:
raise a.ActionError("Failed to check the latest commit")
try:
updated = await sources.update_commit(self.gamespace, app_id, app_version, latest_commit)
except SourceCodeError as e:
raise a.ActionError(e.message)
if updated:
raise a.Redirect(
"app_version",
message="Version has been updated",
app_id=app_id, app_version=app_version)
raise a.Redirect(
"app_version",
message="Already up-to-date.",
app_id=app_id, app_version=app_version)
async def detach_version(self):
app_id = self.context.get("app_id")
app_version = self.context.get("app_version")
environment_client = EnvironmentClient(self.application.cache)
sources = self.application.sources
builds = self.application.builds
try:
await environment_client.get_app_info(app_id)
except AppNotFound:
raise a.ActionError("App was not found.")
try:
deleted = await sources.delete_commit(self.gamespace, app_id, app_version)
except SourceCodeError as e:
raise a.ActionError(e.message)
if deleted:
raise a.Redirect(
"app_version",
message="Version has been disabled",
app_id=app_id, app_version=app_version)
raise a.Redirect(
"app_version",
message="Version was already disabled",
app_id=app_id, app_version=app_version)
async def pull_updates(self):
app_id = self.context.get("app_id")
app_version = self.context.get("app_version")
environment_client = EnvironmentClient(self.application.cache)
sources = self.application.sources
builds = self.application.builds
try:
await environment_client.get_app_info(app_id)
except AppNotFound:
raise a.ActionError("App was not found.")
try:
project_settings = await sources.get_project(self.gamespace, app_id)
except NoSuchProjectError as e:
raise a.Redirect("app_settings", message="Please define project settings first", app_id=app_id)
try:
project = builds.get_project(project_settings)
await with_timeout(timedelta(seconds=10), project.init())
except JavascriptBuildError as e:
raise a.ActionError(e.message)
except TimeoutError:
raise a.ActionError("Repository has not updated itself yet.")
try:
pulled = await project.pull()
except SourceCodeError as e:
raise a.ActionError(e.message)
if not pulled:
raise a.ActionError("Failed to pull updates")
raise a.Redirect(
"app_version",
message="Updates have been pulled.",
app_id=app_id, app_version=app_version)
@validate(commit="str_name")
async def switch_commit(self, commit):
app_id = self.context.get("app_id")
app_version = self.context.get("app_version")
environment_client = EnvironmentClient(self.application.cache)
sources = self.application.sources
builds = self.application.builds
try:
await environment_client.get_app_info(app_id)
except AppNotFound:
raise a.ActionError("App was not found.")
try:
project_settings = await sources.get_project(self.gamespace, app_id)
except NoSuchProjectError as e:
raise a.Redirect("app_settings", message="Please define project settings first", app_id=app_id)
try:
project = builds.get_project(project_settings)
await with_timeout(timedelta(seconds=10), project.init())
except JavascriptBuildError as e:
raise a.ActionError(e.message)
except TimeoutError:
raise a.ActionError("Repository has not updated itself yet.")
try:
commit_exists = await project.check_commit(commit)
except SourceCodeError as e:
raise a.ActionError(e.message)
if not commit_exists:
raise a.ActionError("No such commit")
try:
await sources.update_commit(self.gamespace, app_id, app_version, commit)
except SourceCodeError as e:
raise a.ActionError(e.message)
raise a.Redirect("app_version", message="Version has been updated", app_id=app_id, app_version=app_version)
def render(self, data):
r = [
a.breadcrumbs([
a.link("apps", "Applications"),
a.link("app", data["app_name"], app_id=self.context.get("app_id")),
], self.context.get("app_version"))
]
if data["commits_history"] is None:
r.append(a.notice("Repository update is in progress", "Please wait until the repository is updated"))
else:
methods = {
"switch_to_latest_commit": a.method("Update To The Last Commit", "primary", order=1),
"pull_updates": a.method("Pull Updates", "default", order=2),
}
if data["current_commit"]:
methods["detach_version"] = a.method(
"Disable This Version", "danger", order=3,
danger="Are you sure you would like to disable this version from launching? After this action, "
"users will not be able to open sessions on this version.")
else:
r.append(ApplicationVersionController.no_commit_notice(self.context.get("app_version")))
r.extend([
a.form("Actions", fields={}, methods=methods, data=data),
a.content(title="Recent Commits History", headers=[
{
"id": "actions",
"title": "Actions"
},
{
"id": "message",
"title": "Commit Message"
},
{
"id": "hash",
"title": "Commit Hash"
},
{
"id": "date",
"title": "Commit Date"
},
{
"id": "author",
"title": "Commit Author"
}
], items=[
{
"hash": [
a.status(commit.hexsha[:7], "default")
],
"message": commit.message[:48],
"date": str(commit.committed_datetime),
"author": str(commit.author.name) + " (" + commit.author.email + ")",
"actions": [
a.status("Current Commit", "success", "check")
if data["current_commit"] == commit.hexsha else
a.button("app_version", "Use This", "primary", _method="switch_commit_context",
commit=str(commit.hexsha), app_id=self.context.get("app_id"),
app_version=self.context.get("app_version"))
]
}
for commit in data["commits_history"]
], style="primary")
])
r.extend([
a.links("Navigate", [
a.link("app", "Go back", icon="chevron-left", app_id=self.context.get("app_id"))
])
])
return r
def access_scopes(self):
return ["exec_admin"]
class ServerCodeController(a.AdminController):
async def get(self):
sources = self.application.sources
builds = self.application.builds
try:
project_settings = await sources.get_server_project(self.gamespace)
except NoSuchProjectError:
raise a.Redirect("server_settings", message="Please define project settings first")
try:
commit = await sources.get_server_commit(self.gamespace)
except NoSuchSourceError:
current_commit = None
else:
current_commit = commit.repository_commit
server_fetch_error = None
try:
project = builds.get_server_project(project_settings)
await with_timeout(timedelta(seconds=10), project.init())
except JavascriptBuildError as e:
raise a.ActionError(e.message)
except TimeoutError:
commits_history = None
except SourceCodeError as e:
commits_history = None
server_fetch_error = str(e.message)
except Exception as e:
commits_history = None
server_fetch_error = str(e)
else:
try:
commits_history = await project.get_commits_history(50)
except SourceCodeError as e:
raise a.ActionError(e.message)
result = {
"current_commit": current_commit,
"commits_history": commits_history,
"server_fetch_error": server_fetch_error,
}
return result
@staticmethod
def no_commit_notice():
return a.notice(
"The Server Code is disabled",
"""
The server code is not attached to any commit of the Git repository. <br>
Therefore, running source code for the server code is not possible. <br>
Please attach the server code to a commit using either "Update To The Last Commit",
or by clicking "Use This" on a commit from the recent commits history.
""", style="danger")
@staticmethod
def about_notice():
return a.notice(
"About The Server Code",
"""
The Server Code is a way to deploy restricted application-independent code to the exec service. <br>
Users cannot call the Server Code, but other services can. Therefore, services that have no
application context can call functions on exec service with Server Code. <br>
Please refer to the API for more information.
""", style="info")
async def switch_commit_context(self):
commit = self.context.get("commit")
await self.switch_commit(commit)
async def switch_to_latest_commit(self):
sources = self.application.sources
builds = self.application.builds
try:
project_settings = await sources.get_server_project(self.gamespace)
except NoSuchProjectError:
raise a.Redirect("server_settings", message="Please define project settings first")
try:
project = builds.get_server_project(project_settings)
await with_timeout(timedelta(seconds=10), project.init())
except JavascriptBuildError as e:
raise a.ActionError(e.message)
except TimeoutError:
raise a.ActionError("Repository has not updated itself yet.")
try:
latest_commit = await project.pull_and_get_latest_commit()
except SourceCodeError as e:
raise a.ActionError(e.message)
if not latest_commit:
raise a.ActionError("Failed to check the latest commit")
try:
updated = await sources.update_server_commit(self.gamespace, latest_commit)
except SourceCodeError as e:
raise a.ActionError(e.message)
if updated:
raise a.Redirect("server", message="Server Code has been updated")
raise a.Redirect("server", message="Already up-to-date.")
async def detach_version(self):
sources = self.application.sources
try:
deleted = await sources.delete_server_commit(self.gamespace)
except SourceCodeError as e:
raise a.ActionError(e.message)
if deleted:
raise a.Redirect("server", message="Server code has been disabled")
raise a.Redirect("server", message="Server code was already disabled")
async def pull_updates(self):
sources = self.application.sources
builds = self.application.builds
try:
project_settings = await sources.get_server_project(self.gamespace)
except NoSuchProjectError:
raise a.Redirect("server_settings", message="Please define project settings first")
try:
project = builds.get_server_project(project_settings)
await with_timeout(timedelta(seconds=10), project.init())
except JavascriptBuildError as e:
raise a.ActionError(e.message)
except TimeoutError:
raise a.ActionError("Repository has not updated itself yet.")
try:
pulled = await project.pull()
except SourceCodeError as e:
raise a.ActionError(e.message)
if not pulled:
raise a.ActionError("Failed to pull updates")
raise a.Redirect("server", message="Updates have been pulled.")
@validate(commit="str_name")
async def switch_commit(self, commit):
sources = self.application.sources
builds = self.application.builds
try:
project_settings = await sources.get_server_project(self.gamespace)
except NoSuchProjectError as e:
raise a.Redirect("server_settings", message="Please define project settings first")
try:
project = builds.get_server_project(project_settings)
await with_timeout(timedelta(seconds=10), project.init())
except JavascriptBuildError as e:
raise a.ActionError(e.message)
except TimeoutError:
raise a.ActionError("Repository has not updated itself yet.")
try:
commit_exists = await project.check_commit(commit)
except SourceCodeError as e:
raise a.ActionError(e.message)
if not commit_exists:
raise a.ActionError("No such commit")
try:
await sources.update_server_commit(self.gamespace, commit)
except SourceCodeError as e:
raise a.ActionError(e.message)
raise a.Redirect("server", message="Server code commit has been updated")
def render(self, data):
r = [
a.breadcrumbs([], "Server Code")
]
if data["commits_history"] is None:
if data["server_fetch_error"]:
r.append(a.notice("Failed to update repository",
"Please check repository settings: {0}".format(data["server_fetch_error"]),
"danger"))
else:
r.append(a.notice("Repository update is in progress", "Please wait until the repository is updated"))
else:
methods = {
"switch_to_latest_commit": a.method("Update To The Last Commit", "primary", order=1),
"pull_updates": a.method("Pull Updates", "default", order=2),
}
if data["current_commit"]:
methods["detach_version"] = a.method(
"Disable Server Code", "danger", order=3,
danger="Are you sure you would like to disable the Server Code from launching? After this action, "
"services will not be able to call functions on the Server Code.")
else:
r.append(ServerCodeController.no_commit_notice())
r.extend([
a.form("Actions", fields={}, methods=methods, data=data),
a.content(title="Recent Commits History", headers=[
{
"id": "actions",
"title": "Actions"
},
{
"id": "message",
"title": "Commit Message"
},
{
"id": "hash",
"title": "Commit Hash"
},
{
"id": "date",
"title": "Commit Date"
},
{
"id": "author",
"title": "Commit Author"
}
], items=[
{
"hash": [
a.status(commit.hexsha[:7], "default")
],
"message": commit.message[:48],
"date": str(commit.committed_datetime),
"author": str(commit.author.name) + " (" + commit.author.email + ")",
"actions": [
a.status("Current Commit", "success", "check")
if data["current_commit"] == commit.hexsha else
a.button("server", "Use This", "primary", _method="switch_commit_context",
commit=str(commit.hexsha))
]
}
for commit in data["commits_history"]
], style="primary")
])
r.extend([
ServerCodeController.about_notice(),
a.links("Navigate", [
a.link("index", "Go back", icon="chevron-left"),
a.link("server_settings", "Server Code Settings", icon="cogs")
])
])
return r
def access_scopes(self):
return ["exec_admin"]
class ApplicationsController(a.AdminController):
async def get(self):
environment_client = EnvironmentClient(self.application.cache)
apps = await environment_client.list_apps()
result = {
"apps": apps
}
return result
def render(self, data):
return [
a.breadcrumbs([], "Applications"),
a.links("Select application", links=[
a.link("app", app_name, icon="mobile", app_id=app_id)
for app_id, app_name in data["apps"].items()
]),
a.links("Navigate", [
a.link("index", "Go back", icon="chevron-left"),
a.link("/environment/apps", "Manage apps", icon="link text-danger"),
])
]
def access_scopes(self):
return ["exec_admin"]
class RootAdminController(a.AdminController):
def render(self, data):
return [
a.links("Exec service", [
a.link("apps", "Applications", icon="mobile"),
a.link("server", "Server Code", icon="server"),
])
]
def access_scopes(self):
return ["exec_admin"]
class FunctionsController(a.AdminController):
def render(self, data):
return [
a.breadcrumbs([], "Functions"),
a.links("Functions", [
a.link("function", f.name, icon="code", function_name=f.name)
for f in data["functions"]
]),
a.notice("Notice", "Please note that the function should be bound "
"to the application in order to be called."),
a.links("Navigate", [
a.link("index", "Go back", icon="chevron-left"),
a.link("new_function", "New function", icon="plus"),
])
]
async def get(self):
functions = self.application.functions
return {
"functions": (await functions.list_functions(self.gamespace))
}
def access_scopes(self):
return ["exec_admin"]
# ---- jupyter/jupyter_notebook_config.py (IIAT-MR-LL/CPI_prediction, Apache-2.0) ----
c.NotebookApp.open_browser = True
c.NotebookApp.ip = '*'
c.NotebookApp.password = u'sha1:a488dc80584d:5c33900ec1f5e4f7b658dca101166d3cad7c9e58'
# ---- app/ui/commands.py (ExiledNarwal28/cardbot, MIT) ----
from app.sessions.ui.commands import register_sessions_commands
def register_commands(bot):
register_sessions_commands(bot)
# ---- parameters_8000.py (ziming9/Said-it, BSD-3-Clause) ----
# The committed file contained an unresolved Git merge conflict; the HEAD value
# is kept here and the value from commit ac107c2 is preserved as a comment.
password="pbkdf2(1000,20,sha512)$b2f536b4dddadf3d$f4c32852252c006499e0ad416c2250c1437a4f0d"
# password="pbkdf2(1000,20,sha512)$80fc4cfa56d6c3e8$aaeace46c9a63dc83936eea458b2e2d46e418452"
# ---- src/graph_transpiler/webdnn/backend/webgl/optimize_rules/split_texture/__init__.py (steerapi/webdnn, MIT) ----
from webdnn.backend.webgl.optimize_rules.split_texture import assert_texture_size
from webdnn.backend.webgl.optimize_rules.split_texture import check_texture_size
from webdnn.backend.webgl.optimize_rules.split_texture import split_input_texture
from webdnn.backend.webgl.optimize_rules.split_texture import split_output_texture
from webdnn.backend.webgl.optimize_rules.split_texture import split_variable
| 67.5 | 82 | 0.901235 | 59 | 405 | 5.864407 | 0.254237 | 0.144509 | 0.245665 | 0.317919 | 0.913295 | 0.913295 | 0.913295 | 0.913295 | 0.913295 | 0.760116 | 0 | 0 | 0.049383 | 405 | 5 | 83 | 81 | 0.898701 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 12 |
# ---- networkx/drawing/__init__.py (jdrudolph/networkx, BSD-3-Clause) ----
# graph drawing and interface to graphviz
from .layout import *
from .nx_pylab import *
from .nx_agraph import *
from .nx_pydot import *
# ---- fuel_tank.py (GYosifov88/Python-Basics, MIT) ----
fuel_type = input()
litres_of_fuel = int(input())
# All three fuel types share identical checks, so a single membership test
# replaces the three duplicated branches; behavior is unchanged.
if fuel_type in ('Diesel', 'Gasoline', 'Gas'):
    if litres_of_fuel >= 25:
        print(f'You have enough {fuel_type.lower()}.')
    else:
        print(f'Fill your tank with {fuel_type.lower()}!')
else:
    print('Invalid fuel!')
# ---- sdk/python/pulumi_databricks/cluster_policy.py (pulumi/pulumi-databricks, ECL-2.0/Apache-2.0) ----
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities
__all__ = ['ClusterPolicyArgs', 'ClusterPolicy']
@pulumi.input_type
class ClusterPolicyArgs:
def __init__(__self__, *,
definition: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing a ClusterPolicy resource.
:param pulumi.Input[str] definition: Policy definition JSON document expressed in [Databricks Policy Definition Language](https://docs.databricks.com/administration-guide/clusters/policies.html#cluster-policy-definition).
:param pulumi.Input[str] name: Cluster policy name. This must be unique. Length must be between 1 and 100 characters.
"""
if definition is not None:
pulumi.set(__self__, "definition", definition)
if name is not None:
pulumi.set(__self__, "name", name)
@property
@pulumi.getter
def definition(self) -> Optional[pulumi.Input[str]]:
"""
Policy definition JSON document expressed in [Databricks Policy Definition Language](https://docs.databricks.com/administration-guide/clusters/policies.html#cluster-policy-definition).
"""
return pulumi.get(self, "definition")
@definition.setter
def definition(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "definition", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Cluster policy name. This must be unique. Length must be between 1 and 100 characters.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@pulumi.input_type
class _ClusterPolicyState:
def __init__(__self__, *,
definition: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
policy_id: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering ClusterPolicy resources.
:param pulumi.Input[str] definition: Policy definition JSON document expressed in [Databricks Policy Definition Language](https://docs.databricks.com/administration-guide/clusters/policies.html#cluster-policy-definition).
:param pulumi.Input[str] name: Cluster policy name. This must be unique. Length must be between 1 and 100 characters.
:param pulumi.Input[str] policy_id: Canonical unique identifier for the cluster policy.
"""
if definition is not None:
pulumi.set(__self__, "definition", definition)
if name is not None:
pulumi.set(__self__, "name", name)
if policy_id is not None:
pulumi.set(__self__, "policy_id", policy_id)
@property
@pulumi.getter
def definition(self) -> Optional[pulumi.Input[str]]:
"""
Policy definition JSON document expressed in [Databricks Policy Definition Language](https://docs.databricks.com/administration-guide/clusters/policies.html#cluster-policy-definition).
"""
return pulumi.get(self, "definition")
@definition.setter
def definition(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "definition", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Cluster policy name. This must be unique. Length must be between 1 and 100 characters.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="policyId")
def policy_id(self) -> Optional[pulumi.Input[str]]:
"""
Canonical unique identifier for the cluster policy.
"""
return pulumi.get(self, "policy_id")
@policy_id.setter
def policy_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "policy_id", value)
class ClusterPolicy(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
definition: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
This resource creates a cluster policy, which limits the ability to create clusters based on a set of rules. The policy rules limit the attributes or attribute values available for cluster creation. Cluster policies have ACLs that limit their use to specific users and groups. Only admin users can create, edit, and delete policies. Admin users also have access to all policies.
Cluster policies let you:
* Limit users to create clusters with prescribed settings.
* Simplify the user interface and enable more users to create their own clusters (by fixing and hiding some values).
* Control cost by limiting per cluster maximum cost (by setting limits on attributes whose values contribute to hourly price).
Cluster policy permissions limit which policies a user can select in the Policy drop-down when the user creates a cluster:
* If no policies have been created in the workspace, the Policy drop-down does not display.
* A user who has cluster create permission can select the `Free form` policy and create fully-configurable clusters.
* A user who has both cluster create permission and access to cluster policies can select the Free form policy and policies they have access to.
* A user that has access to only cluster policies can select the policies they have access to.
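## Example Usage

A policy definition is a JSON document. As a minimal illustrative sketch (the two attributes below are example entries written in the policy definition language, not a complete or recommended policy), the `definition` argument can be built with `json.dumps`:

```python
import json

# Hypothetical policy: pin auto-termination to 20 minutes and
# cap clusters at 10 workers.
policy_definition = json.dumps({
    "autotermination_minutes": {"type": "fixed", "value": 20, "hidden": True},
    "num_workers": {"type": "range", "maxValue": 10},
})

print(policy_definition)
```

The resulting string is what `ClusterPolicy` expects for its `definition` argument, e.g. `ClusterPolicy("fair-use", name="Fair Use Policy", definition=policy_definition)`.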
## Related Resources
The following resources are often used in the same context:
* Dynamic Passthrough Clusters for a Group guide
* End to end workspace management guide
* get_clusters data to retrieve a list of Cluster ids.
* Cluster to create [Databricks Clusters](https://docs.databricks.com/clusters/index.html).
* get_current_user data to retrieve information about User or databricks_service_principal, that is calling Databricks REST API.
* GlobalInitScript to manage [global init scripts](https://docs.databricks.com/clusters/init-scripts.html#global-init-scripts), which are run on all Cluster and databricks_job.
* InstancePool to manage [instance pools](https://docs.databricks.com/clusters/instance-pools/index.html) to reduce cluster start and auto-scaling times by maintaining a set of idle, ready-to-use instances.
* InstanceProfile to manage AWS EC2 instance profiles that users can launch Cluster and access data, like databricks_mount.
* IpAccessList to allow access from [predefined IP ranges](https://docs.databricks.com/security/network/ip-access-list.html).
* Library to install a [library](https://docs.databricks.com/libraries/index.html) on databricks_cluster.
* get_node_type data to get the smallest node type for Cluster that fits search criteria, like amount of RAM or number of cores.
* Permissions to manage [access control](https://docs.databricks.com/security/access-control/index.html) in Databricks workspace.
* get_spark_version data to get [Databricks Runtime (DBR)](https://docs.databricks.com/runtime/dbr.html) version that could be used for `spark_version` parameter in Cluster and other resources.
* UserInstanceProfile to attach InstanceProfile (AWS) to databricks_user.
* WorkspaceConf to manage workspace configuration for expert usage.
## Import
The resource cluster policy can be imported using the policy id:
```sh
$ pulumi import databricks:index/clusterPolicy:ClusterPolicy this <cluster-policy-id>
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] definition: Policy definition JSON document expressed in [Databricks Policy Definition Language](https://docs.databricks.com/administration-guide/clusters/policies.html#cluster-policy-definition).
:param pulumi.Input[str] name: Cluster policy name. This must be unique. Length must be between 1 and 100 characters.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: Optional[ClusterPolicyArgs] = None,
opts: Optional[pulumi.ResourceOptions] = None):
"""
This resource creates a cluster policy, which limits the ability to create clusters based on a set of rules. The policy rules limit the attributes or attribute values available for cluster creation. Cluster policies have ACLs that limit their use to specific users and groups. Only admin users can create, edit, and delete policies. Admin users also have access to all policies.
Cluster policies let you:
* Limit users to creating clusters with prescribed settings.
* Simplify the user interface and enable more users to create their own clusters (by fixing and hiding some values).
* Control cost by limiting per cluster maximum cost (by setting limits on attributes whose values contribute to hourly price).
Cluster policy permissions limit which policies a user can select in the Policy drop-down when the user creates a cluster:
* If no policies have been created in the workspace, the Policy drop-down does not display.
* A user who has cluster create permission can select the `Free form` policy and create fully-configurable clusters.
* A user who has both cluster create permission and access to cluster policies can select the `Free form` policy and the policies they have access to.
* A user who has access only to cluster policies can select those policies.
## Related Resources
The following resources are often used in the same context:
* Dynamic Passthrough Clusters for a Group guide
* End to end workspace management guide
* get_clusters data to retrieve a list of Cluster ids.
* Cluster to create [Databricks Clusters](https://docs.databricks.com/clusters/index.html).
* get_current_user data to retrieve information about the User or databricks_service_principal that is calling the Databricks REST API.
* GlobalInitScript to manage [global init scripts](https://docs.databricks.com/clusters/init-scripts.html#global-init-scripts), which are run on all Cluster and databricks_job.
* InstancePool to manage [instance pools](https://docs.databricks.com/clusters/instance-pools/index.html) to reduce cluster start and auto-scaling times by maintaining a set of idle, ready-to-use instances.
* InstanceProfile to manage AWS EC2 instance profiles with which users can launch Cluster and access data, like databricks_mount.
* IpAccessList to allow access from [predefined IP ranges](https://docs.databricks.com/security/network/ip-access-list.html).
* Library to install a [library](https://docs.databricks.com/libraries/index.html) on databricks_cluster.
* get_node_type data to get the smallest node type for Cluster that fits search criteria, like amount of RAM or number of cores.
* Permissions to manage [access control](https://docs.databricks.com/security/access-control/index.html) in Databricks workspace.
* get_spark_version data to get [Databricks Runtime (DBR)](https://docs.databricks.com/runtime/dbr.html) version that could be used for `spark_version` parameter in Cluster and other resources.
* UserInstanceProfile to attach InstanceProfile (AWS) to databricks_user.
* WorkspaceConf to manage workspace configuration for expert usage.
## Import
The resource cluster policy can be imported using the policy id:
```sh
$ pulumi import databricks:index/clusterPolicy:ClusterPolicy this <cluster-policy-id>
```
:param str resource_name: The name of the resource.
:param ClusterPolicyArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(ClusterPolicyArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
definition: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = ClusterPolicyArgs.__new__(ClusterPolicyArgs)
__props__.__dict__["definition"] = definition
__props__.__dict__["name"] = name
__props__.__dict__["policy_id"] = None
super(ClusterPolicy, __self__).__init__(
'databricks:index/clusterPolicy:ClusterPolicy',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
definition: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
policy_id: Optional[pulumi.Input[str]] = None) -> 'ClusterPolicy':
"""
Get an existing ClusterPolicy resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] definition: Policy definition JSON document expressed in [Databricks Policy Definition Language](https://docs.databricks.com/administration-guide/clusters/policies.html#cluster-policy-definition).
:param pulumi.Input[str] name: Cluster policy name. This must be unique. Length must be between 1 and 100 characters.
:param pulumi.Input[str] policy_id: Canonical unique identifier for the cluster policy.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _ClusterPolicyState.__new__(_ClusterPolicyState)
__props__.__dict__["definition"] = definition
__props__.__dict__["name"] = name
__props__.__dict__["policy_id"] = policy_id
return ClusterPolicy(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter
def definition(self) -> pulumi.Output[Optional[str]]:
"""
Policy definition JSON document expressed in [Databricks Policy Definition Language](https://docs.databricks.com/administration-guide/clusters/policies.html#cluster-policy-definition).
"""
return pulumi.get(self, "definition")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
Cluster policy name. This must be unique. Length must be between 1 and 100 characters.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="policyId")
def policy_id(self) -> pulumi.Output[str]:
"""
Canonical unique identifier for the cluster policy.
"""
return pulumi.get(self, "policy_id")
| 54.642623 | 386 | 0.693988 | 2,081 | 16,666 | 5.433926 | 0.140798 | 0.035019 | 0.042094 | 0.042802 | 0.851167 | 0.834365 | 0.826406 | 0.81942 | 0.81491 | 0.802706 | 0 | 0.002399 | 0.224529 | 16,666 | 304 | 387 | 54.822368 | 0.872563 | 0.595884 | 0 | 0.598485 | 1 | 0 | 0.076072 | 0.007607 | 0 | 0 | 0 | 0 | 0 | 1 | 0.151515 | false | 0.007576 | 0.037879 | 0 | 0.280303 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
7222e9e5d68f07f50abb5c66e79c73ead5357524 | 193 | py | Python | views.py | To-Gia-Hao/testing-01 | a2deee4156c2cf82f9df88c0adb2f8b81d22f97c | [
"MIT"
] | null | null | null | views.py | To-Gia-Hao/testing-01 | a2deee4156c2cf82f9df88c0adb2f8b81d22f97c | [
"MIT"
] | null | null | null | views.py | To-Gia-Hao/testing-01 | a2deee4156c2cf82f9df88c0adb2f8b81d22f97c | [
"MIT"
] | null | null | null | from app import app
from auth.views import *
from upload_image.views import *
from user.views import *
@app.route("/", methods=['GET', 'POST'])
def index():
    return 'hello world'
| 19.3 | 43 | 0.658031 | 27 | 193 | 4.666667 | 0.62963 | 0.261905 | 0.238095 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.196891 | 193 | 9 | 44 | 21.444444 | 0.812903 | 0 | 0 | 0 | 0 | 0 | 0.103261 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | true | 0 | 0.571429 | 0.142857 | 0.857143 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 8 |
9d3bfa75ad32423e4d49ad1b538f61059f921607 | 94 | py | Python | lib/nier/constructs/__main__.py | dskrypa/nier_replicant | c3d1fd3d58beafc19e3a0286f57ab09a3e91d8e1 | [
"MIT"
] | null | null | null | lib/nier/constructs/__main__.py | dskrypa/nier_replicant | c3d1fd3d58beafc19e3a0286f57ab09a3e91d8e1 | [
"MIT"
] | null | null | null | lib/nier/constructs/__main__.py | dskrypa/nier_replicant | c3d1fd3d58beafc19e3a0286f57ab09a3e91d8e1 | [
"MIT"
] | null | null | null | from .adapters import * # noqa
from .game_data import * # noqa
from .utils import * # noqa
| 23.5 | 32 | 0.680851 | 13 | 94 | 4.846154 | 0.538462 | 0.47619 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.223404 | 94 | 3 | 33 | 31.333333 | 0.863014 | 0.148936 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
9d546953004461b5d2843148012c3c41e3a47a8c | 21,226 | py | Python | restraintlib/lib/ribose_purine_terminal_C5.py | mkowiel/restraintlib | 32de01d67ae290a45f3199e90c729acc258a6249 | [
"BSD-3-Clause"
] | null | null | null | restraintlib/lib/ribose_purine_terminal_C5.py | mkowiel/restraintlib | 32de01d67ae290a45f3199e90c729acc258a6249 | [
"BSD-3-Clause"
] | 1 | 2021-11-11T18:45:10.000Z | 2021-11-11T18:45:10.000Z | restraintlib/lib/ribose_purine_terminal_C5.py | mkowiel/restraintlib | 32de01d67ae290a45f3199e90c729acc258a6249 | [
"BSD-3-Clause"
] | null | null | null | RIBOSE_PURINE_TERMINAL_C5_PDB_CODES = ['A', 'G', 'IG']
RIBOSE_PURINE_TERMINAL_C5_ALL_PDB_CODES = RIBOSE_PURINE_TERMINAL_C5_PDB_CODES
RIBOSE_PURINE_TERMINAL_C5_CHI_GAMMA_PDB_CODES = RIBOSE_PURINE_TERMINAL_C5_PDB_CODES
RIBOSE_PURINE_TERMINAL_C5_CHI_PDB_CODES = RIBOSE_PURINE_TERMINAL_C5_PDB_CODES
RIBOSE_PURINE_TERMINAL_C5_BASE_FUNC_OF_TORSION_CHI_PDB_CODES = RIBOSE_PURINE_TERMINAL_C5_PDB_CODES
RIBOSE_PURINE_TERMINAL_C5_CONFORMATION_PDB_CODES = RIBOSE_PURINE_TERMINAL_C5_PDB_CODES
RIBOSE_PURINE_TERMINAL_C5_SUGAR_PDB_CODES = RIBOSE_PURINE_TERMINAL_C5_PDB_CODES
RIBOSE_PURINE_TERMINAL_C5_CHI_CONFORMATION_PDB_CODES = RIBOSE_PURINE_TERMINAL_C5_PDB_CODES
RIBOSE_PURINE_TERMINAL_C5_SUGAR_CONFORMATION_FUNC_OF_TAU_MAX_PDB_CODES = RIBOSE_PURINE_TERMINAL_C5_PDB_CODES
RIBOSE_PURINE_TERMINAL_C5_GAMMA_PDB_CODES = RIBOSE_PURINE_TERMINAL_C5_PDB_CODES
RIBOSE_PURINE_TERMINAL_C5_ALL_FUNC_OF_TORSION_CHI_PDB_CODES = RIBOSE_PURINE_TERMINAL_C5_PDB_CODES
RIBOSE_PURINE_TERMINAL_C5_ATOM_NAMES = {
"C1'": "C1'",
"C1*": "C1'",
"C2'": "C2'",
"C2*": "C2'",
"C3'": "C3'",
"C3*": "C3'",
"C4": "C4",
"C4'": "C4'",
"C4*": "C4'",
"C5'": "C5'",
"C5*": "C5'",
"C8": "C8",
"N9": "N9",
"O2'": "O2'",
"O2*": "O2'",
"O3'": "O3'",
"O3*": "O3'",
"O4'": "O4'",
"O4*": "O4'",
"O5'": "O5'",
"O5*": "O5'",
"P": "P"
}
RIBOSE_PURINE_TERMINAL_C5_ALL_ATOM_NAMES = RIBOSE_PURINE_TERMINAL_C5_ATOM_NAMES
RIBOSE_PURINE_TERMINAL_C5_CHI_GAMMA_ATOM_NAMES = RIBOSE_PURINE_TERMINAL_C5_ATOM_NAMES
RIBOSE_PURINE_TERMINAL_C5_CHI_ATOM_NAMES = RIBOSE_PURINE_TERMINAL_C5_ATOM_NAMES
RIBOSE_PURINE_TERMINAL_C5_BASE_FUNC_OF_TORSION_CHI_ATOM_NAMES = RIBOSE_PURINE_TERMINAL_C5_ATOM_NAMES
RIBOSE_PURINE_TERMINAL_C5_CONFORMATION_ATOM_NAMES = RIBOSE_PURINE_TERMINAL_C5_ATOM_NAMES
RIBOSE_PURINE_TERMINAL_C5_SUGAR_ATOM_NAMES = RIBOSE_PURINE_TERMINAL_C5_ATOM_NAMES
RIBOSE_PURINE_TERMINAL_C5_CHI_CONFORMATION_ATOM_NAMES = RIBOSE_PURINE_TERMINAL_C5_ATOM_NAMES
RIBOSE_PURINE_TERMINAL_C5_SUGAR_CONFORMATION_FUNC_OF_TAU_MAX_ATOM_NAMES = RIBOSE_PURINE_TERMINAL_C5_ATOM_NAMES
RIBOSE_PURINE_TERMINAL_C5_GAMMA_ATOM_NAMES = RIBOSE_PURINE_TERMINAL_C5_ATOM_NAMES
RIBOSE_PURINE_TERMINAL_C5_ALL_FUNC_OF_TORSION_CHI_ATOM_NAMES = RIBOSE_PURINE_TERMINAL_C5_ATOM_NAMES
RIBOSE_PURINE_TERMINAL_C5_ATOM_RES = {
"C1'": 0,
"C2'": 0,
"C3'": 0,
"C4": 0,
"C4'": 0,
"C5'": 0,
"C8": 0,
"N9": 0,
"O2'": 0,
"O3'": 0,
"O4'": 0,
"O5'": 0
}
RIBOSE_PURINE_TERMINAL_C5_ALL_ATOM_RES = RIBOSE_PURINE_TERMINAL_C5_ATOM_RES
RIBOSE_PURINE_TERMINAL_C5_CHI_GAMMA_ATOM_RES = RIBOSE_PURINE_TERMINAL_C5_ATOM_RES
RIBOSE_PURINE_TERMINAL_C5_CHI_ATOM_RES = RIBOSE_PURINE_TERMINAL_C5_ATOM_RES
RIBOSE_PURINE_TERMINAL_C5_BASE_FUNC_OF_TORSION_CHI_ATOM_RES = RIBOSE_PURINE_TERMINAL_C5_ATOM_RES
RIBOSE_PURINE_TERMINAL_C5_CONFORMATION_ATOM_RES = RIBOSE_PURINE_TERMINAL_C5_ATOM_RES
RIBOSE_PURINE_TERMINAL_C5_SUGAR_ATOM_RES = RIBOSE_PURINE_TERMINAL_C5_ATOM_RES
RIBOSE_PURINE_TERMINAL_C5_CHI_CONFORMATION_ATOM_RES = RIBOSE_PURINE_TERMINAL_C5_ATOM_RES
RIBOSE_PURINE_TERMINAL_C5_SUGAR_CONFORMATION_FUNC_OF_TAU_MAX_ATOM_RES = RIBOSE_PURINE_TERMINAL_C5_ATOM_RES
RIBOSE_PURINE_TERMINAL_C5_GAMMA_ATOM_RES = RIBOSE_PURINE_TERMINAL_C5_ATOM_RES
RIBOSE_PURINE_TERMINAL_C5_ALL_FUNC_OF_TORSION_CHI_ATOM_RES = RIBOSE_PURINE_TERMINAL_C5_ATOM_RES
RIBOSE_PURINE_TERMINAL_C5_REQUIRED_CONDITION = [
("C1'", "C2'", 2.0, 0, 0),
("C2'", "C3'", 2.0, 0, 0),
("C3'", "C4'", 2.0, 0, 0),
("C4'", "O4'", 2.0, 0, 0),
("C1'", "O4'", 2.0, 0, 0),
("C3'", "O3'", 2.0, 0, 0),
("C4'", "C5'", 2.0, 0, 0),
("C5'", "O5'", 2.0, 0, 0),
("C2'", "O2'", 2.0, 0, 0),
("C1'", 'N9', 2.0, 0, 0),
("O3'", 'P', 2.5, 0, 1)
]
RIBOSE_PURINE_TERMINAL_C5_ALL_REQUIRED_CONDITION = RIBOSE_PURINE_TERMINAL_C5_REQUIRED_CONDITION
RIBOSE_PURINE_TERMINAL_C5_CHI_GAMMA_REQUIRED_CONDITION = RIBOSE_PURINE_TERMINAL_C5_REQUIRED_CONDITION
RIBOSE_PURINE_TERMINAL_C5_CHI_REQUIRED_CONDITION = RIBOSE_PURINE_TERMINAL_C5_REQUIRED_CONDITION
RIBOSE_PURINE_TERMINAL_C5_BASE_FUNC_OF_TORSION_CHI_REQUIRED_CONDITION = RIBOSE_PURINE_TERMINAL_C5_REQUIRED_CONDITION
RIBOSE_PURINE_TERMINAL_C5_CONFORMATION_REQUIRED_CONDITION = RIBOSE_PURINE_TERMINAL_C5_REQUIRED_CONDITION
RIBOSE_PURINE_TERMINAL_C5_SUGAR_REQUIRED_CONDITION = RIBOSE_PURINE_TERMINAL_C5_REQUIRED_CONDITION
RIBOSE_PURINE_TERMINAL_C5_CHI_CONFORMATION_REQUIRED_CONDITION = RIBOSE_PURINE_TERMINAL_C5_REQUIRED_CONDITION
RIBOSE_PURINE_TERMINAL_C5_SUGAR_CONFORMATION_FUNC_OF_TAU_MAX_REQUIRED_CONDITION = RIBOSE_PURINE_TERMINAL_C5_REQUIRED_CONDITION
RIBOSE_PURINE_TERMINAL_C5_GAMMA_REQUIRED_CONDITION = RIBOSE_PURINE_TERMINAL_C5_REQUIRED_CONDITION
RIBOSE_PURINE_TERMINAL_C5_ALL_FUNC_OF_TORSION_CHI_REQUIRED_CONDITION = RIBOSE_PURINE_TERMINAL_C5_REQUIRED_CONDITION
RIBOSE_PURINE_TERMINAL_C5_DISALLOWED_CONDITION = [
("O5'", 'P', 2.5, 0, 0)
]
RIBOSE_PURINE_TERMINAL_C5_ALL_DISALLOWED_CONDITION = RIBOSE_PURINE_TERMINAL_C5_DISALLOWED_CONDITION
RIBOSE_PURINE_TERMINAL_C5_CHI_GAMMA_DISALLOWED_CONDITION = RIBOSE_PURINE_TERMINAL_C5_DISALLOWED_CONDITION
RIBOSE_PURINE_TERMINAL_C5_CHI_DISALLOWED_CONDITION = RIBOSE_PURINE_TERMINAL_C5_DISALLOWED_CONDITION
RIBOSE_PURINE_TERMINAL_C5_BASE_FUNC_OF_TORSION_CHI_DISALLOWED_CONDITION = RIBOSE_PURINE_TERMINAL_C5_DISALLOWED_CONDITION
RIBOSE_PURINE_TERMINAL_C5_CONFORMATION_DISALLOWED_CONDITION = RIBOSE_PURINE_TERMINAL_C5_DISALLOWED_CONDITION
RIBOSE_PURINE_TERMINAL_C5_SUGAR_DISALLOWED_CONDITION = RIBOSE_PURINE_TERMINAL_C5_DISALLOWED_CONDITION
RIBOSE_PURINE_TERMINAL_C5_CHI_CONFORMATION_DISALLOWED_CONDITION = RIBOSE_PURINE_TERMINAL_C5_DISALLOWED_CONDITION
RIBOSE_PURINE_TERMINAL_C5_SUGAR_CONFORMATION_FUNC_OF_TAU_MAX_DISALLOWED_CONDITION = RIBOSE_PURINE_TERMINAL_C5_DISALLOWED_CONDITION
RIBOSE_PURINE_TERMINAL_C5_GAMMA_DISALLOWED_CONDITION = RIBOSE_PURINE_TERMINAL_C5_DISALLOWED_CONDITION
RIBOSE_PURINE_TERMINAL_C5_ALL_FUNC_OF_TORSION_CHI_DISALLOWED_CONDITION = RIBOSE_PURINE_TERMINAL_C5_DISALLOWED_CONDITION
RIBOSE_PURINE_TERMINAL_C5_DISTANCE_MEASURE = {
'measure': 'euclidean_angles',
'restraint_names': ["aC4'C5'O5'", "aC4'C3'O3'", "aN9C1'C2'", "aC1'N9C4", "aC1'N9C8", "aN9C1'O4'", "aC2'C1'O4'", "aC1'C2'O2'", "aC3'C2'O2'", "aC2'C3'O3'", "aC1'C2'C3'", "aC2'C3'C4'", "aC3'C4'O4'", "aC1'O4'C4'", "aC3'C4'C5'", "aC5'C4'O4'"]
}
RIBOSE_PURINE_TERMINAL_C5_ALL_DISTANCE_MEASURE = RIBOSE_PURINE_TERMINAL_C5_DISTANCE_MEASURE
RIBOSE_PURINE_TERMINAL_C5_CHI_GAMMA_DISTANCE_MEASURE = RIBOSE_PURINE_TERMINAL_C5_DISTANCE_MEASURE
RIBOSE_PURINE_TERMINAL_C5_CHI_DISTANCE_MEASURE = RIBOSE_PURINE_TERMINAL_C5_DISTANCE_MEASURE
RIBOSE_PURINE_TERMINAL_C5_BASE_FUNC_OF_TORSION_CHI_DISTANCE_MEASURE = RIBOSE_PURINE_TERMINAL_C5_DISTANCE_MEASURE
RIBOSE_PURINE_TERMINAL_C5_CONFORMATION_DISTANCE_MEASURE = RIBOSE_PURINE_TERMINAL_C5_DISTANCE_MEASURE
RIBOSE_PURINE_TERMINAL_C5_SUGAR_DISTANCE_MEASURE = RIBOSE_PURINE_TERMINAL_C5_DISTANCE_MEASURE
RIBOSE_PURINE_TERMINAL_C5_CHI_CONFORMATION_DISTANCE_MEASURE = RIBOSE_PURINE_TERMINAL_C5_DISTANCE_MEASURE
RIBOSE_PURINE_TERMINAL_C5_SUGAR_CONFORMATION_FUNC_OF_TAU_MAX_DISTANCE_MEASURE = RIBOSE_PURINE_TERMINAL_C5_DISTANCE_MEASURE
RIBOSE_PURINE_TERMINAL_C5_GAMMA_DISTANCE_MEASURE = RIBOSE_PURINE_TERMINAL_C5_DISTANCE_MEASURE
RIBOSE_PURINE_TERMINAL_C5_ALL_FUNC_OF_TORSION_CHI_DISTANCE_MEASURE = RIBOSE_PURINE_TERMINAL_C5_DISTANCE_MEASURE
RIBOSE_PURINE_TERMINAL_C5_CONDITION_DISTANCE_MEASURE = {
'measure': 'euclidean_angles',
'restraint_names': ["tO4'C1'N9C4", "tC3'C4'C5'O5'", "pC1'C2'C3'C4'O4'"]
}
RIBOSE_PURINE_TERMINAL_C5_ALL_CONDITION_DISTANCE_MEASURE = RIBOSE_PURINE_TERMINAL_C5_CONDITION_DISTANCE_MEASURE
RIBOSE_PURINE_TERMINAL_C5_CHI_GAMMA_CONDITION_DISTANCE_MEASURE = RIBOSE_PURINE_TERMINAL_C5_CONDITION_DISTANCE_MEASURE
RIBOSE_PURINE_TERMINAL_C5_CHI_CONDITION_DISTANCE_MEASURE = RIBOSE_PURINE_TERMINAL_C5_CONDITION_DISTANCE_MEASURE
RIBOSE_PURINE_TERMINAL_C5_BASE_FUNC_OF_TORSION_CHI_CONDITION_DISTANCE_MEASURE = RIBOSE_PURINE_TERMINAL_C5_CONDITION_DISTANCE_MEASURE
RIBOSE_PURINE_TERMINAL_C5_CONFORMATION_CONDITION_DISTANCE_MEASURE = RIBOSE_PURINE_TERMINAL_C5_CONDITION_DISTANCE_MEASURE
RIBOSE_PURINE_TERMINAL_C5_SUGAR_CONDITION_DISTANCE_MEASURE = RIBOSE_PURINE_TERMINAL_C5_CONDITION_DISTANCE_MEASURE
RIBOSE_PURINE_TERMINAL_C5_CHI_CONFORMATION_CONDITION_DISTANCE_MEASURE = RIBOSE_PURINE_TERMINAL_C5_CONDITION_DISTANCE_MEASURE
RIBOSE_PURINE_TERMINAL_C5_SUGAR_CONFORMATION_FUNC_OF_TAU_MAX_CONDITION_DISTANCE_MEASURE = RIBOSE_PURINE_TERMINAL_C5_CONDITION_DISTANCE_MEASURE
RIBOSE_PURINE_TERMINAL_C5_GAMMA_CONDITION_DISTANCE_MEASURE = RIBOSE_PURINE_TERMINAL_C5_CONDITION_DISTANCE_MEASURE
RIBOSE_PURINE_TERMINAL_C5_ALL_FUNC_OF_TORSION_CHI_CONDITION_DISTANCE_MEASURE = RIBOSE_PURINE_TERMINAL_C5_CONDITION_DISTANCE_MEASURE
RIBOSE_PURINE_TERMINAL_C5_ALL_RESTRAINTS = [
{
'conditions': [],
'name': 'ribose_purine_terminal_C5==All=All',
'restraints': [['dist', "dC5'O5'", ["C5'", "O5'"], 1.421, 0.011], ['dist', "dC1'C2'", ["C1'", "C2'"], 1.525, 0.012], ['dist', "dC2'C3'", ["C2'", "C3'"], 1.523, 0.011]]
}
]
RIBOSE_PURINE_TERMINAL_C5_CHI_GAMMA_RESTRAINTS = [
{
'conditions': [['torsion', "tO4'C1'N9C4", ["O4'", "C1'", 'N9', 'C4'], 180, 22.5], ['torsion', "tC3'C4'C5'O5'", ["C3'", "C4'", "C5'", "O5'"], 60, 8.75]],
'name': 'ribose_purine_terminal_C5==Chi=anti__Gamma=gauche+',
'restraints': [['angle', "aC4'C5'O5'", ["C4'", "C5'", "O5'"], 111.5, 1.7]]
},
{
'conditions': [['torsion', "tO4'C1'N9C4", ["O4'", "C1'", 'N9', 'C4'], 180, 22.5], ['torsion', "tC3'C4'C5'O5'", ["C3'", "C4'", "C5'", "O5'"], -60, 8.75]],
'name': 'ribose_purine_terminal_C5==Chi=anti__Gamma=gauche-',
'restraints': [['angle', "aC4'C5'O5'", ["C4'", "C5'", "O5'"], 109.7, 1.8]]
},
{
'conditions': [['torsion', "tO4'C1'N9C4", ["O4'", "C1'", 'N9', 'C4'], 180, 22.5], ['torsion', "tC3'C4'C5'O5'", ["C3'", "C4'", "C5'", "O5'"], 180, 21.25]],
'name': 'ribose_purine_terminal_C5==Chi=anti__Gamma=trans',
'restraints': [['angle', "aC4'C5'O5'", ["C4'", "C5'", "O5'"], 110.0, 1.9]]
},
{
'conditions': [['torsion', "tO4'C1'N9C4", ["O4'", "C1'", 'N9', 'C4'], 0, 22.5], ['torsion', "tC3'C4'C5'O5'", ["C3'", "C4'", "C5'", "O5'"], 60, 8.75]],
'name': 'ribose_purine_terminal_C5==Chi=syn__Gamma=gauche+',
'restraints': [['angle', "aC4'C5'O5'", ["C4'", "C5'", "O5'"], 112.8, 1.7]]
},
{
'conditions': [['torsion', "tO4'C1'N9C4", ["O4'", "C1'", 'N9', 'C4'], 0, 22.5], ['torsion', "tC3'C4'C5'O5'", ["C3'", "C4'", "C5'", "O5'"], -60, 8.75]],
'name': 'ribose_purine_terminal_C5==Chi=syn__Gamma=gauche-',
'restraints': [['angle', "aC4'C5'O5'", ["C4'", "C5'", "O5'"], 111.0, 0.8]]
},
{
'conditions': [['torsion', "tO4'C1'N9C4", ["O4'", "C1'", 'N9', 'C4'], 0, 22.5], ['torsion', "tC3'C4'C5'O5'", ["C3'", "C4'", "C5'", "O5'"], 180, 21.25]],
'name': 'ribose_purine_terminal_C5==Chi=syn__Gamma=trans',
'restraints': [['angle', "aC4'C5'O5'", ["C4'", "C5'", "O5'"], 111.1, 1.6]]
}
]
RIBOSE_PURINE_TERMINAL_C5_CHI_RESTRAINTS = [
{
'conditions': [['torsion', "tO4'C1'N9C4", ["O4'", "C1'", 'N9', 'C4'], 180, 22.5]],
'name': 'ribose_purine_terminal_C5==Chi=anti',
'restraints': [['angle', "aC4'C3'O3'", ["C4'", "C3'", "O3'"], 110.7, 2.3]]
},
{
'conditions': [['torsion', "tO4'C1'N9C4", ["O4'", "C1'", 'N9', 'C4'], 0, 22.5]],
'name': 'ribose_purine_terminal_C5==Chi=syn',
'restraints': [['angle', "aC4'C3'O3'", ["C4'", "C3'", "O3'"], 109.8, 2.1]]
}
]
RIBOSE_PURINE_TERMINAL_C5_BASE_FUNC_OF_TORSION_CHI_RESTRAINTS = [
{
'conditions': [],
'name': 'ribose_purine_terminal_C5==Base=purine',
'restraints': [ ['angle', "aN9C1'C2'", ['N9', "C1'", "C2'"], None, None, None, None, "purine-N1-C1'-C2' or N9-C1'-C2'.pickle", ['torsion_chi', ["O4'", "C1'", 'N9', 'C4']]],
['angle', "aC1'N9C4", ["C1'", 'N9', 'C4'], None, None, None, None, "purine-C1'-N1-C2 or C1'-N9-C4.pickle", ['torsion_chi', ["O4'", "C1'", 'N9', 'C4']]],
['angle', "aC1'N9C8", ["C1'", 'N9', 'C8'], None, None, None, None, "purine-C1'-N1-C6 or C1'-N9-C8.pickle", ['torsion_chi', ["O4'", "C1'", 'N9', 'C4']]],
['angle', "aN9C1'O4'", ['N9', "C1'", "O4'"], None, None, None, None, "purine-N1-C1'-O4' or N9-C1'-O4'.pickle", ['torsion_chi', ["O4'", "C1'", 'N9', 'C4']]]]
}
]
RIBOSE_PURINE_TERMINAL_C5_CONFORMATION_RESTRAINTS = [
{
'conditions': [['pseudorotation', "pC1'C2'C3'C4'O4'", ["C1'", "C2'", "C3'", "C4'", "O4'"], 162, 4.5]],
'name': "ribose_purine_terminal_C5==Conformation=C2'-endo",
'restraints': [['dist', "dC3'C4'", ["C3'", "C4'"], 1.527, 0.01], ['dist', "dC2'O2'", ["C2'", "O2'"], 1.41, 0.009], ['angle', "aC2'C1'O4'", ["C2'", "C1'", "O4'"], 106.0, 0.8]]
},
{
'conditions': [['pseudorotation', "pC1'C2'C3'C4'O4'", ["C1'", "C2'", "C3'", "C4'", "O4'"], 18, 4.5]],
'name': "ribose_purine_terminal_C5==Conformation=C3'-endo",
'restraints': [['dist', "dC3'C4'", ["C3'", "C4'"], 1.52, 0.009], ['dist', "dC2'O2'", ["C2'", "O2'"], 1.416, 0.008], ['angle', "aC2'C1'O4'", ["C2'", "C1'", "O4'"], 107.3, 0.6]]
},
{
'conditions': [],
'name': 'ribose_purine_terminal_C5==Conformation=Other',
'restraints': [['dist', "dC3'C4'", ["C3'", "C4'"], 1.531, 0.009], ['dist', "dC2'O2'", ["C2'", "O2'"], 1.413, 0.008], ['angle', "aC2'C1'O4'", ["C2'", "C1'", "O4'"], 106.2, 1.3]]
}
]
RIBOSE_PURINE_TERMINAL_C5_SUGAR_RESTRAINTS = [
{
'conditions': [],
'name': 'ribose_purine_terminal_C5==Sugar=ribose',
'restraints': [['dist', "dC4'O4'", ["C4'", "O4'"], 1.45, 0.009]]
}
]
RIBOSE_PURINE_TERMINAL_C5_CHI_CONFORMATION_RESTRAINTS = [
{
'conditions': [['torsion', "tO4'C1'N9C4", ["O4'", "C1'", 'N9', 'C4'], 180, 22.5], ['pseudorotation', "pC1'C2'C3'C4'O4'", ["C1'", "C2'", "C3'", "C4'", "O4'"], 162, 4.5]],
'name': "ribose_purine_terminal_C5==Chi=anti__Conformation=C2'-endo",
'restraints': [ ['angle', "aC1'C2'O2'", ["C1'", "C2'", "O2'"], 112.0, 2.1],
['angle', "aC3'C2'O2'", ["C3'", "C2'", "O2'"], 113.6, 2.5],
['angle', "aC2'C3'O3'", ["C2'", "C3'", "O3'"], 109.4, 2.4]]
},
{
'conditions': [['torsion', "tO4'C1'N9C4", ["O4'", "C1'", 'N9', 'C4'], 180, 22.5], ['pseudorotation', "pC1'C2'C3'C4'O4'", ["C1'", "C2'", "C3'", "C4'", "O4'"], 18, 4.5]],
'name': "ribose_purine_terminal_C5==Chi=anti__Conformation=C3'-endo",
'restraints': [ ['angle', "aC1'C2'O2'", ["C1'", "C2'", "O2'"], 108.7, 2.3],
['angle', "aC3'C2'O2'", ["C3'", "C2'", "O2'"], 110.4, 2.1],
['angle', "aC2'C3'O3'", ["C2'", "C3'", "O3'"], 113.4, 2.1]]
},
{
'conditions': [['torsion', "tO4'C1'N9C4", ["O4'", "C1'", 'N9', 'C4'], 180, 22.5]],
'name': 'ribose_purine_terminal_C5==Chi=anti__Conformation=Other',
'restraints': [ ['angle', "aC1'C2'O2'", ["C1'", "C2'", "O2'"], 112.9, 1.4],
['angle', "aC3'C2'O2'", ["C3'", "C2'", "O2'"], 113.3, 0.9],
['angle', "aC2'C3'O3'", ["C2'", "C3'", "O3'"], 111.9, 2.5]]
},
{
'conditions': [['torsion', "tO4'C1'N9C4", ["O4'", "C1'", 'N9', 'C4'], 0, 22.5], ['pseudorotation', "pC1'C2'C3'C4'O4'", ["C1'", "C2'", "C3'", "C4'", "O4'"], 162, 4.5]],
'name': "ribose_purine_terminal_C5==Chi=syn__Conformation=C2'-endo",
'restraints': [ ['angle', "aC1'C2'O2'", ["C1'", "C2'", "O2'"], 112.5, 2.1],
['angle', "aC3'C2'O2'", ["C3'", "C2'", "O2'"], 114.1, 1.9],
['angle', "aC2'C3'O3'", ["C2'", "C3'", "O3'"], 110.1, 2.2]]
},
{
'conditions': [['torsion', "tO4'C1'N9C4", ["O4'", "C1'", 'N9', 'C4'], 0, 22.5], ['pseudorotation', "pC1'C2'C3'C4'O4'", ["C1'", "C2'", "C3'", "C4'", "O4'"], 18, 4.5]],
'name': "ribose_purine_terminal_C5==Chi=syn__Conformation=C3'-endo",
'restraints': [ ['angle', "aC1'C2'O2'", ["C1'", "C2'", "O2'"], 109.9, 2.7],
['angle', "aC3'C2'O2'", ["C3'", "C2'", "O2'"], 110.0, 2.0],
['angle', "aC2'C3'O3'", ["C2'", "C3'", "O3'"], 114.2, 0.9]]
},
{
'conditions': [['torsion', "tO4'C1'N9C4", ["O4'", "C1'", 'N9', 'C4'], 0, 22.5]],
'name': 'ribose_purine_terminal_C5==Chi=syn__Conformation=Other',
'restraints': [ ['angle', "aC1'C2'O2'", ["C1'", "C2'", "O2'"], 107.7, 1.6],
['angle', "aC3'C2'O2'", ["C3'", "C2'", "O2'"], 111.9, 1.1],
['angle', "aC2'C3'O3'", ["C2'", "C3'", "O3'"], 113.0, 1.7]]
}
]
RIBOSE_PURINE_TERMINAL_C5_SUGAR_CONFORMATION_FUNC_OF_TAU_MAX_RESTRAINTS = [
{
'conditions': [['pseudorotation', "pC1'C2'C3'C4'O4'", ["C1'", "C2'", "C3'", "C4'", "O4'"], 162, 4.5]],
'name': "ribose_purine_terminal_C5==Sugar=ribose__Conformation=C2'-endo",
'restraints': [ ['angle', "aC1'C2'C3'", ["C1'", "C2'", "C3'"], None, None, None, None, "ribose-C2'-endo-C1'-C2'-C3'.pickle", ['tau_max', ["C1'", "C2'", "C3'", "C4'", "O4'"]]],
['angle', "aC2'C3'C4'", ["C2'", "C3'", "C4'"], None, None, None, None, "ribose-C2'-endo-C2'-C3'-C4'.pickle", ['tau_max', ["C1'", "C2'", "C3'", "C4'", "O4'"]]],
['angle', "aC3'C4'O4'", ["C3'", "C4'", "O4'"], None, None, None, None, "ribose-C2'-endo-C3'-C4'-O4'.pickle", ['tau_max', ["C1'", "C2'", "C3'", "C4'", "O4'"]]],
['angle', "aC1'O4'C4'", ["C1'", "O4'", "C4'"], None, None, None, None, "ribose-C2'-endo-C1'-O4'-C4'.pickle", ['tau_max', ["C1'", "C2'", "C3'", "C4'", "O4'"]]]]
},
{
'conditions': [['pseudorotation', "pC1'C2'C3'C4'O4'", ["C1'", "C2'", "C3'", "C4'", "O4'"], 18, 4.5]],
'name': "ribose_purine_terminal_C5==Sugar=ribose__Conformation=C3'-endo",
'restraints': [ ['angle', "aC1'C2'C3'", ["C1'", "C2'", "C3'"], None, None, None, None, "ribose-C3'-endo-C1'-C2'-C3'.pickle", ['tau_max', ["C1'", "C2'", "C3'", "C4'", "O4'"]]],
['angle', "aC2'C3'C4'", ["C2'", "C3'", "C4'"], None, None, None, None, "ribose-C3'-endo-C2'-C3'-C4'.pickle", ['tau_max', ["C1'", "C2'", "C3'", "C4'", "O4'"]]],
['angle', "aC3'C4'O4'", ["C3'", "C4'", "O4'"], None, None, None, None, "ribose-C3'-endo-C3'-C4'-O4'.pickle", ['tau_max', ["C1'", "C2'", "C3'", "C4'", "O4'"]]],
['angle', "aC1'O4'C4'", ["C1'", "O4'", "C4'"], None, None, None, None, "ribose-C3'-endo-C1'-O4'-C4'.pickle", ['tau_max', ["C1'", "C2'", "C3'", "C4'", "O4'"]]]]
},
{
'conditions': [],
'name': 'ribose_purine_terminal_C5==Sugar=ribose__Conformation=Other',
'restraints': [ ['angle', "aC1'C2'C3'", ["C1'", "C2'", "C3'"], None, None, None, None, "ribose-Other-C1'-C2'-C3'.pickle", ['tau_max', ["C1'", "C2'", "C3'", "C4'", "O4'"]]],
['angle', "aC2'C3'C4'", ["C2'", "C3'", "C4'"], None, None, None, None, "ribose-Other-C2'-C3'-C4'.pickle", ['tau_max', ["C1'", "C2'", "C3'", "C4'", "O4'"]]],
['angle', "aC3'C4'O4'", ["C3'", "C4'", "O4'"], None, None, None, None, "ribose-Other-C3'-C4'-O4'.pickle", ['tau_max', ["C1'", "C2'", "C3'", "C4'", "O4'"]]],
['angle', "aC1'O4'C4'", ["C1'", "O4'", "C4'"], None, None, None, None, "ribose-Other-C1'-O4'-C4'.pickle", ['tau_max', ["C1'", "C2'", "C3'", "C4'", "O4'"]]]]
}
]
RIBOSE_PURINE_TERMINAL_C5_GAMMA_RESTRAINTS = [
{
'conditions': [['torsion', "tC3'C4'C5'O5'", ["C3'", "C4'", "C5'", "O5'"], 60, 8.75]],
'name': 'ribose_purine_terminal_C5==Gamma=gauche+',
'restraints': [['dist', "dC4'C5'", ["C4'", "C5'"], 1.508, 0.009], ['angle', "aC3'C4'C5'", ["C3'", "C4'", "C5'"], 115.7, 1.2], ['angle', "aC5'C4'O4'", ["C5'", "C4'", "O4'"], 109.4, 1.0]]
},
{
'conditions': [['torsion', "tC3'C4'C5'O5'", ["C3'", "C4'", "C5'", "O5'"], -60, 8.75]],
'name': 'ribose_purine_terminal_C5==Gamma=gauche-',
'restraints': [['dist', "dC4'C5'", ["C4'", "C5'"], 1.518, 0.009], ['angle', "aC3'C4'C5'", ["C3'", "C4'", "C5'"], 114.5, 1.2], ['angle', "aC5'C4'O4'", ["C5'", "C4'", "O4'"], 107.8, 0.9]]
},
{
'conditions': [['torsion', "tC3'C4'C5'O5'", ["C3'", "C4'", "C5'", "O5'"], 180, 21.25]],
'name': 'ribose_purine_terminal_C5==Gamma=trans',
'restraints': [['dist', "dC4'C5'", ["C4'", "C5'"], 1.509, 0.01], ['angle', "aC3'C4'C5'", ["C3'", "C4'", "C5'"], 113.8, 1.3], ['angle', "aC5'C4'O4'", ["C5'", "C4'", "O4'"], 109.9, 1.2]]
}
]
RIBOSE_PURINE_TERMINAL_C5_ALL_FUNC_OF_TORSION_CHI_RESTRAINTS = [
{
'conditions': [],
'name': 'ribose_purine_terminal_C5==All=All',
'restraints': [ ['dist', "dC1'N9", ["C1'", 'N9'], None, None, None, None, "All-C1'-N1 or C1'-N9.pickle", ['torsion_chi', ["O4'", "C1'", 'N9', 'C4']]],
['dist', "dC1'O4'", ["C1'", "O4'"], None, None, None, None, "All-C1'-O4'.pickle", ['torsion_chi', ["O4'", "C1'", 'N9', 'C4']]]]
}
] | 65.110429 | 241 | 0.625318 | 2,956 | 21,226 | 4.108254 | 0.040934 | 0.181818 | 0.30303 | 0.333333 | 0.943099 | 0.925313 | 0.898304 | 0.863472 | 0.820652 | 0.786067 | 0 | 0.091197 | 0.15123 | 21,226 | 326 | 242 | 65.110429 | 0.582871 | 0 | 0 | 0.054839 | 0 | 0 | 0.278702 | 0.079333 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
c21661a85d7577ee8bc6405c5c73c7162c9572a4 | 4,144 | py | Python | tests/functional/rpc/test_chain.py | btclib-org/btclib_node | 3e5b2a55195e60e1d30505b52bc1ddd8d51c74cf | [
"MIT"
] | 4 | 2021-01-25T23:39:13.000Z | 2021-08-08T06:27:53.000Z | tests/functional/rpc/test_chain.py | btclib-org/btclib_node | 3e5b2a55195e60e1d30505b52bc1ddd8d51c74cf | [
"MIT"
] | null | null | null | tests/functional/rpc/test_chain.py | btclib-org/btclib_node | 3e5b2a55195e60e1d30505b52bc1ddd8d51c74cf | [
"MIT"
] | 1 | 2020-12-18T06:28:18.000Z | 2020-12-18T06:28:18.000Z | import json
import requests
from btclib_node import Node
from btclib_node.chains import RegTest
from btclib_node.config import Config
from tests.helpers import generate_random_header_chain, get_random_port, wait_until
def test_best_block_hash(tmp_path):
node = Node(
config=Config(
chain="regtest",
data_dir=tmp_path,
allow_p2p=False,
rpc_port=get_random_port(),
)
)
node.start()
wait_until(lambda: node.rpc_manager.is_alive())
chain = generate_random_header_chain(2000, RegTest().genesis.hash)
node.index.add_headers(chain)
response = json.loads(
requests.post(
url=f"http://127.0.0.1:{node.rpc_port}",
data=json.dumps(
{
"jsonrpc": "1.0",
"id": "pytest",
"method": "getbestblockhash",
}
).encode(),
headers={"Content-Type": "text/plain"},
).text
)
assert response["result"] == chain[-1].hash
node.stop()
def test_block_hash(tmp_path):
node = Node(
config=Config(
chain="regtest",
data_dir=tmp_path,
allow_p2p=False,
rpc_port=get_random_port(),
)
)
node.start()
wait_until(lambda: node.rpc_manager.is_alive())
chain = generate_random_header_chain(2000, RegTest().genesis.hash)
node.index.add_headers(chain)
response = json.loads(
requests.post(
url=f"http://127.0.0.1:{node.rpc_port}",
data=json.dumps(
{
"jsonrpc": "1.0",
"id": "pytest",
"method": "getblockhash",
"params": [2000],
}
).encode(),
headers={"Content-Type": "text/plain"},
).text
)
assert response["result"] == chain[-1].hash
node.stop()
def test_block_header_last(tmp_path):
node = Node(
config=Config(
chain="regtest",
data_dir=tmp_path,
allow_p2p=False,
rpc_port=get_random_port(),
)
)
node.start()
wait_until(lambda: node.rpc_manager.is_alive())
chain = generate_random_header_chain(2000, RegTest().genesis.hash)
node.index.add_headers(chain)
response = json.loads(
requests.post(
url=f"http://127.0.0.1:{node.rpc_port}",
data=json.dumps(
{
"jsonrpc": "1.0",
"id": "pytest",
"method": "getblockheader",
"params": [chain[-1].hash],
}
).encode(),
headers={"Content-Type": "text/plain"},
).text
)
assert response["result"]["hash"] == chain[-1].hash
assert response["result"]["height"] == 2000
assert response["result"]["previousblockhash"] == chain[-2].hash
assert "nextblockhash" not in response["result"]
node.stop()
def test_block_header_middle(tmp_path):
node = Node(
config=Config(
chain="regtest",
data_dir=tmp_path,
allow_p2p=False,
rpc_port=get_random_port(),
)
)
node.start()
wait_until(lambda: node.rpc_manager.is_alive())
chain = generate_random_header_chain(2000, RegTest().genesis.hash)
node.index.add_headers(chain)
response = json.loads(
requests.post(
url=f"http://127.0.0.1:{node.rpc_port}",
data=json.dumps(
{
"jsonrpc": "1.0",
"id": "pytest",
"method": "getblockheader",
"params": [chain[-1001].hash],
}
).encode(),
headers={"Content-Type": "text/plain"},
).text
)
assert response["result"]["hash"] == chain[-1001].hash
assert response["result"]["height"] == 1000
assert response["result"]["previousblockhash"] == chain[-1002].hash
assert response["result"]["nextblockhash"] == chain[-1000].hash
node.stop()
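Each test above repeats the same JSON-RPC plumbing by hand. As a sketch (the helper name `rpc_payload` is illustrative, not part of btclib_node), the request body construction can live in one small function:

```python
import json


def rpc_payload(method, params=None, request_id="pytest"):
    """Build the JSON-RPC 1.0 request body used by the tests above."""
    body = {"jsonrpc": "1.0", "id": request_id, "method": method}
    if params is not None:
        body["params"] = params
    return json.dumps(body).encode()


# The tests could then post in one line, e.g.:
# requests.post(url, data=rpc_payload("getblockhash", [2000]),
#               headers={"Content-Type": "text/plain"})
```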
| 26.735484 | 83 | 0.527268 | 438 | 4,144 | 4.805936 | 0.180365 | 0.066508 | 0.085511 | 0.059382 | 0.827078 | 0.75867 | 0.743468 | 0.743468 | 0.743468 | 0.743468 | 0 | 0.030753 | 0.333012 | 4,144 | 154 | 84 | 26.909091 | 0.730825 | 0 | 0 | 0.661417 | 1 | 0 | 0.133687 | 0 | 0 | 0 | 0 | 0 | 0.07874 | 1 | 0.031496 | false | 0 | 0.047244 | 0 | 0.07874 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c22f29803112b1be49d56ce4cd56e3e69b29feee | 17,969 | py | Python | functions/score_calc.py | adisen99/srfp | 836059c942b396a5e70d36a0d544d0fa4a36272d | [
"MIT"
] | null | null | null | functions/score_calc.py | adisen99/srfp | 836059c942b396a5e70d36a0d544d0fa4a36272d | [
"MIT"
] | null | null | null | functions/score_calc.py | adisen99/srfp | 836059c942b396a5e70d36a0d544d0fa4a36272d | [
"MIT"
] | null | null | null | """
Python module to calculate the score for critical,
very unhealthy and unhealthy air quality
"""
import numpy as np
import pandas as pd
# Score calc for Critical conditions (severe conditions only, quality index == 5)
def get_critical_score(data):
temp = data.copy(deep=True)
conditions1 = [
(temp['quality_mod_pm25'] == 5) & (temp['quality_obs_pm25'] == 5),
(temp['quality_mod_pm25'] == 5) & (temp['quality_obs_pm25'] != 5),
(temp['quality_mod_pm25'] != 5) & (temp['quality_obs_pm25'] == 5),
(temp['quality_mod_pm25'] != 5) & (temp['quality_obs_pm25'] != 5)]
choices1 = ['a', 'b', 'c', 'd']
conditions2 = [
(temp['quality_mod_pm10'] == 5) & (temp['quality_obs_pm10'] == 5),
(temp['quality_mod_pm10'] == 5) & (temp['quality_obs_pm10'] != 5),
(temp['quality_mod_pm10'] != 5) & (temp['quality_obs_pm10'] == 5),
(temp['quality_mod_pm10'] != 5) & (temp['quality_obs_pm10'] != 5)]
choices2 = ['a', 'b', 'c', 'd']
temp['category_pm25'] = np.select(conditions1, choices1, default=np.nan)
temp['category_pm10'] = np.select(conditions2, choices2, default=np.nan)
test1 = temp.groupby('category_pm25')
test2 = temp.groupby('category_pm10')
    # Count how many rows fall into each contingency category (a, b, c, d)
key25 = []
key10 = []
for i in range(0, len(list(test1))):
key25.append(list(test1)[i][0])
for i in range(0, len(list(test2))):
key10.append(list(test2)[i][0])
val25 = []
val10 = []
for i in range(0, len(key25)):
val25.append(list(test1)[i][1].shape[0])
for i in range(0, len(key10)):
val10.append(list(test2)[i][1].shape[0])
res25 = dict(zip(key25, val25))
res10 = dict(zip(key10, val10))
expected = ['a', 'b', 'c', 'd']
for j in expected:
if j not in res25.keys():
new25 = {j: 0}
res25.update(new25)
else:
pass
for j in expected:
if j not in res10.keys():
new10 = {j: 0}
res10.update(new10)
else:
pass
print("Key25 is : ", key25)
print("The list25 from algorithm is : ", res25.keys())
print("val25 is : ", res25.values())
print("Key10 is : ", key10)
print("The list10 from algorithm is : ", res10.keys())
print("Val10 is : ", res10.values())
a = res25['a']
b = res25['b']
c = res25['c']
d = res25['d']
A = ((a + d)/(a+b+c+d)) * 100 # Accuracy of events
if a != 0 or b != 0:
        FAR = ((b)/(a+b)) * 100 # False event forecasts
if a != 0 or c != 0:
POD = ((a)/(a+c)) * 100 # Expected events
try:
CSI = ((a)/(a+b+c)) * 100 # Events and event forecasts that were hits
pointer1 = 0
except ZeroDivisionError:
print("ZeroDivisionError")
pointer1 = 1
if a != 0 or c != 0:
FOM = ((c)/(a+c))*100 # Surprise events
if a != 0 or b != 0:
FOH = ((a)/(a+b))*100 # Correct event forecasts
if b != 0 or d != 0:
PON = ((d)/(b+d))*100 # Expected non events
POFD = ((b)/(b+d))*100 # Unexpected non events
if c != 0 or d != 0:
DFR = ((c)/(c+d))*100 # False non-event forecasts
FOCN = ((d)/(c+d))*100 # Correct non-event forecasts
try:
# True skill statistic (Expected events - unexpected non-events)
TSS = ((a)/(a+c)) - ((b)/(b+d))
Heidke = (((a+c)*(d-b))+((a-c)*(d+b)))/(((a+c)*(c+d))+((a+b)*(b+d)))
pointer2 = 0
except ZeroDivisionError:
print("ZeroDivisionError")
pointer2 = 1
print("Performance metrics or Skill score for Critical PM2.5 are:\n")
print("A = ", A)
if a != 0 or b != 0:
print("FAR = ", FAR)
if a != 0 or c != 0:
print("POD = ", POD)
if pointer1 == 0:
print("CSI = ", CSI)
if a != 0 or c != 0:
print("FOM = ", FOM)
if a != 0 or b != 0:
print("FOH = ", FOH)
if b != 0 or d != 0:
print("PON = ", PON)
print("POFD = ", POFD)
if c != 0 or d != 0:
print("DFR = ", DFR)
print("FOCN = ", FOCN)
if pointer2 == 0:
print("TSS = ", TSS)
print("Heidke = ", Heidke, "\n")
a = res10['a']
b = res10['b']
c = res10['c']
d = res10['d']
A = ((a + d)/(a+b+c+d)) * 100 # Accuracy of events
if a != 0 or b != 0:
        FAR = ((b)/(a+b)) * 100 # False event forecasts
if a != 0 or c != 0:
POD = ((a)/(a+c)) * 100 # Expected events
try:
CSI = ((a)/(a+b+c)) * 100 # Events and event forecasts that were hits
pointer1 = 0
except ZeroDivisionError:
print("ZeroDivisionError")
pointer1 = 1
if a != 0 or c != 0:
FOM = ((c)/(a+c))*100 # Surprise events
if a != 0 or b != 0:
FOH = ((a)/(a+b))*100 # Correct event forecasts
if b != 0 or d != 0:
PON = ((d)/(b+d))*100 # Expected non events
POFD = ((b)/(b+d))*100 # Unexpected non events
if c != 0 or d != 0:
DFR = ((c)/(c+d))*100 # False non-event forecasts
FOCN = ((d)/(c+d))*100 # Correct non-event forecasts
try:
# True skill statistic (Expected events - unexpected non-events)
TSS = ((a)/(a+c)) - ((b)/(b+d))
Heidke = (((a+c)*(d-b))+((a-c)*(d+b)))/(((a+c)*(c+d))+((a+b)*(b+d)))
pointer2 = 0
except ZeroDivisionError:
print("ZeroDivisionError")
pointer2 = 1
print("Performance metrics or Skill score for Critical PM10 are:\n")
print("A = ", A)
if a != 0 or b != 0:
print("FAR = ", FAR)
if a != 0 or c != 0:
print("POD = ", POD)
if pointer1 == 0:
print("CSI = ", CSI)
if a != 0 or c != 0:
print("FOM = ", FOM)
if a != 0 or b != 0:
print("FOH = ", FOH)
if b != 0 or d != 0:
print("PON = ", PON)
print("POFD = ", POFD)
if c != 0 or d != 0:
print("DFR = ", DFR)
print("FOCN = ", FOCN)
if pointer2 == 0:
print("TSS = ", TSS)
print("Heidke = ", Heidke, "\n")
# Score calc for Very Unhealthy conditions (very poor and severe conditions, quality index >= 4)
def get_veryunhealthy_score(data):
temp = data.copy(deep=True)
conditions1 = [
(temp['quality_mod_pm25'] >= 4) & (temp['quality_obs_pm25'] >= 4),
(temp['quality_mod_pm25'] >= 4) & (temp['quality_obs_pm25'] < 4),
(temp['quality_mod_pm25'] < 4) & (temp['quality_obs_pm25'] >= 4),
(temp['quality_mod_pm25'] < 4) & (temp['quality_obs_pm25'] < 4)]
choices1 = ['a', 'b', 'c', 'd']
conditions2 = [
(temp['quality_mod_pm10'] >= 4) & (temp['quality_obs_pm10'] >= 4),
(temp['quality_mod_pm10'] >= 4) & (temp['quality_obs_pm10'] < 4),
(temp['quality_mod_pm10'] < 4) & (temp['quality_obs_pm10'] >= 4),
(temp['quality_mod_pm10'] < 4) & (temp['quality_obs_pm10'] < 4)]
choices2 = ['a', 'b', 'c', 'd']
temp['category_pm25'] = np.select(conditions1, choices1, default=np.nan)
temp['category_pm10'] = np.select(conditions2, choices2, default=np.nan)
test1 = temp.groupby('category_pm25')
test2 = temp.groupby('category_pm10')
    # Count how many rows fall into each contingency category (a, b, c, d)
key25 = []
key10 = []
for i in range(0, len(list(test1))):
key25.append(list(test1)[i][0])
for i in range(0, len(list(test2))):
key10.append(list(test2)[i][0])
val25 = []
val10 = []
for i in range(0, len(key25)):
val25.append(list(test1)[i][1].shape[0])
for i in range(0, len(key10)):
val10.append(list(test2)[i][1].shape[0])
res25 = dict(zip(key25, val25))
res10 = dict(zip(key10, val10))
expected = ['a', 'b', 'c', 'd']
for j in expected:
if j not in res25.keys():
new25 = {j: 0}
res25.update(new25)
else:
pass
for j in expected:
if j not in res10.keys():
new10 = {j: 0}
res10.update(new10)
else:
pass
print("Key25 is : ", key25)
print("The list25 from algorithm is : ", res25.keys())
print("val25 is : ", res25.values())
print("Key10 is : ", key10)
print("The list10 from algorithm is : ", res10.keys())
print("Val10 is : ", res10.values())
a = res25['a']
b = res25['b']
c = res25['c']
d = res25['d']
A = ((a + d)/(a+b+c+d)) * 100 # Accuracy of events
if a != 0 or b != 0:
        FAR = ((b)/(a+b)) * 100 # False event forecasts
if a != 0 or c != 0:
POD = ((a)/(a+c)) * 100 # Expected events
try:
CSI = ((a)/(a+b+c)) * 100 # Events and event forecasts that were hits
pointer1 = 0
except ZeroDivisionError:
print("ZeroDivisionError")
pointer1 = 1
if a != 0 or c != 0:
FOM = ((c)/(a+c))*100 # Surprise events
if a != 0 or b != 0:
FOH = ((a)/(a+b))*100 # Correct event forecasts
if b != 0 or d != 0:
PON = ((d)/(b+d))*100 # Expected non events
POFD = ((b)/(b+d))*100 # Unexpected non events
if c != 0 or d != 0:
DFR = ((c)/(c+d))*100 # False non-event forecasts
FOCN = ((d)/(c+d))*100 # Correct non-event forecasts
try:
# True skill statistic (Expected events - unexpected non-events)
TSS = ((a)/(a+c)) - ((b)/(b+d))
Heidke = (((a+c)*(d-b))+((a-c)*(d+b)))/(((a+c)*(c+d))+((a+b)*(b+d)))
pointer2 = 0
except ZeroDivisionError:
print("ZeroDivisionError")
pointer2 = 1
print("Performance metrics or Skill score for Very Unhealthy PM2.5 are:\n")
print("A = ", A)
if a != 0 or b != 0:
print("FAR = ", FAR)
if a != 0 or c != 0:
print("POD = ", POD)
if pointer1 == 0:
print("CSI = ", CSI)
if a != 0 or c != 0:
print("FOM = ", FOM)
if a != 0 or b != 0:
print("FOH = ", FOH)
if b != 0 or d != 0:
print("PON = ", PON)
print("POFD = ", POFD)
if c != 0 or d != 0:
print("DFR = ", DFR)
print("FOCN = ", FOCN)
if pointer2 == 0:
print("TSS = ", TSS)
print("Heidke = ", Heidke, "\n")
a = res10['a']
b = res10['b']
c = res10['c']
d = res10['d']
A = ((a + d)/(a+b+c+d)) * 100 # Accuracy of events
if a != 0 or b != 0:
        FAR = ((b)/(a+b)) * 100 # False event forecasts
if a != 0 or c != 0:
POD = ((a)/(a+c)) * 100 # Expected events
try:
CSI = ((a)/(a+b+c)) * 100 # Events and event forecasts that were hits
pointer1 = 0
except ZeroDivisionError:
print("ZeroDivisionError")
pointer1 = 1
if a != 0 or c != 0:
FOM = ((c)/(a+c))*100 # Surprise events
if a != 0 or b != 0:
FOH = ((a)/(a+b))*100 # Correct event forecasts
if b != 0 or d != 0:
PON = ((d)/(b+d))*100 # Expected non events
POFD = ((b)/(b+d))*100 # Unexpected non events
if c != 0 or d != 0:
DFR = ((c)/(c+d))*100 # False non-event forecasts
FOCN = ((d)/(c+d))*100 # Correct non-event forecasts
try:
# True skill statistic (Expected events - unexpected non-events)
TSS = ((a)/(a+c)) - ((b)/(b+d))
Heidke = (((a+c)*(d-b))+((a-c)*(d+b)))/(((a+c)*(c+d))+((a+b)*(b+d)))
pointer2 = 0
except ZeroDivisionError:
print("ZeroDivisionError")
pointer2 = 1
print("Performance metrics or Skill score for Very Unhealthy PM10 are:\n")
print("A = ", A)
if a != 0 or b != 0:
print("FAR = ", FAR)
if a != 0 or c != 0:
print("POD = ", POD)
if pointer1 == 0:
print("CSI = ", CSI)
if a != 0 or c != 0:
print("FOM = ", FOM)
if a != 0 or b != 0:
print("FOH = ", FOH)
if b != 0 or d != 0:
print("PON = ", PON)
print("POFD = ", POFD)
if c != 0 or d != 0:
print("DFR = ", DFR)
print("FOCN = ", FOCN)
if pointer2 == 0:
print("TSS = ", TSS)
print("Heidke = ", Heidke, "\n")
# Score calc for Unhealthy conditions (quality index >= 3)
def get_unhealthy_score(data):
temp = data.copy(deep=True)
conditions1 = [
(temp['quality_mod_pm25'] >= 3) & (temp['quality_obs_pm25'] >= 3),
(temp['quality_mod_pm25'] >= 3) & (temp['quality_obs_pm25'] < 3),
(temp['quality_mod_pm25'] < 3) & (temp['quality_obs_pm25'] >= 3),
(temp['quality_mod_pm25'] < 3) & (temp['quality_obs_pm25'] < 3)]
choices1 = ['a', 'b', 'c', 'd']
conditions2 = [
(temp['quality_mod_pm10'] >= 3) & (temp['quality_obs_pm10'] >= 3),
(temp['quality_mod_pm10'] >= 3) & (temp['quality_obs_pm10'] < 3),
(temp['quality_mod_pm10'] < 3) & (temp['quality_obs_pm10'] >= 3),
(temp['quality_mod_pm10'] < 3) & (temp['quality_obs_pm10'] < 3)]
choices2 = ['a', 'b', 'c', 'd']
temp['category_pm25'] = np.select(conditions1, choices1, default=np.nan)
temp['category_pm10'] = np.select(conditions2, choices2, default=np.nan)
test1 = temp.groupby('category_pm25')
test2 = temp.groupby('category_pm10')
    # Count how many rows fall into each contingency category (a, b, c, d)
key25 = []
key10 = []
for i in range(0, len(list(test1))):
key25.append(list(test1)[i][0])
for i in range(0, len(list(test2))):
key10.append(list(test2)[i][0])
val25 = []
val10 = []
for i in range(0, len(key25)):
val25.append(list(test1)[i][1].shape[0])
for i in range(0, len(key10)):
val10.append(list(test2)[i][1].shape[0])
res25 = dict(zip(key25, val25))
res10 = dict(zip(key10, val10))
expected = ['a', 'b', 'c', 'd']
for j in expected:
if j not in res25.keys():
new25 = {j: 0}
res25.update(new25)
else:
pass
for j in expected:
if j not in res10.keys():
new10 = {j: 0}
res10.update(new10)
else:
pass
print("Key25 is : ", key25)
print("The list25 from algorithm is : ", res25.keys())
print("val25 is : ", res25.values())
print("Key10 is : ", key10)
print("The list10 from algorithm is : ", res10.keys())
print("Val10 is : ", res10.values())
a = res25['a']
b = res25['b']
c = res25['c']
d = res25['d']
A = ((a + d)/(a+b+c+d)) * 100 # Accuracy of events
if a != 0 or b != 0:
        FAR = ((b)/(a+b)) * 100 # False event forecasts
if a != 0 or c != 0:
POD = ((a)/(a+c)) * 100 # Expected events
try:
CSI = ((a)/(a+b+c)) * 100 # Events and event forecasts that were hits
pointer1 = 0
except ZeroDivisionError:
print("ZeroDivisionError")
pointer1 = 1
if a != 0 or c != 0:
FOM = ((c)/(a+c))*100 # Surprise events
if a != 0 or b != 0:
FOH = ((a)/(a+b))*100 # Correct event forecasts
if b != 0 or d != 0:
PON = ((d)/(b+d))*100 # Expected non events
POFD = ((b)/(b+d))*100 # Unexpected non events
if c != 0 or d != 0:
DFR = ((c)/(c+d))*100 # False non-event forecasts
FOCN = ((d)/(c+d))*100 # Correct non-event forecasts
try:
# True skill statistic (Expected events - unexpected non-events)
TSS = ((a)/(a+c)) - ((b)/(b+d))
Heidke = (((a+c)*(d-b))+((a-c)*(d+b)))/(((a+c)*(c+d))+((a+b)*(b+d)))
pointer2 = 0
except ZeroDivisionError:
print("ZeroDivisionError")
pointer2 = 1
print("Performance metrics or Skill score for Unhealthy PM2.5 are:\n")
print("A = ", A)
if a != 0 or b != 0:
print("FAR = ", FAR)
if a != 0 or c != 0:
print("POD = ", POD)
if pointer1 == 0:
print("CSI = ", CSI)
if a != 0 or c != 0:
print("FOM = ", FOM)
if a != 0 or b != 0:
print("FOH = ", FOH)
if b != 0 or d != 0:
print("PON = ", PON)
print("POFD = ", POFD)
if c != 0 or d != 0:
print("DFR = ", DFR)
print("FOCN = ", FOCN)
if pointer2 == 0:
print("TSS = ", TSS)
print("Heidke = ", Heidke, "\n")
a = res10['a']
b = res10['b']
c = res10['c']
d = res10['d']
A = ((a + d)/(a+b+c+d)) * 100 # Accuracy of events
if a != 0 or b != 0:
        FAR = ((b)/(a+b)) * 100 # False event forecasts
if a != 0 or c != 0:
POD = ((a)/(a+c)) * 100 # Expected events
try:
CSI = ((a)/(a+b+c)) * 100 # Events and event forecasts that were hits
pointer1 = 0
except ZeroDivisionError:
print("ZeroDivisionError")
pointer1 = 1
if a != 0 or c != 0:
FOM = ((c)/(a+c))*100 # Surprise events
if a != 0 or b != 0:
FOH = ((a)/(a+b))*100 # Correct event forecasts
if b != 0 or d != 0:
PON = ((d)/(b+d))*100 # Expected non events
POFD = ((b)/(b+d))*100 # Unexpected non events
if c != 0 or d != 0:
DFR = ((c)/(c+d))*100 # False non-event forecasts
FOCN = ((d)/(c+d))*100 # Correct non-event forecasts
try:
# True skill statistic (Expected events - unexpected non-events)
TSS = ((a)/(a+c)) - ((b)/(b+d))
Heidke = (((a+c)*(d-b))+((a-c)*(d+b)))/(((a+c)*(c+d))+((a+b)*(b+d)))
pointer2 = 0
except ZeroDivisionError:
print("ZeroDivisionError")
pointer2 = 1
print("Performance metrics or Skill score for Unhealthy PM10 are:\n")
print("A = ", A)
if a != 0 or b != 0:
print("FAR = ", FAR)
if a != 0 or c != 0:
print("POD = ", POD)
if pointer1 == 0:
print("CSI = ", CSI)
if a != 0 or c != 0:
print("FOM = ", FOM)
if a != 0 or b != 0:
print("FOH = ", FOH)
if b != 0 or d != 0:
print("PON = ", PON)
print("POFD = ", POFD)
if c != 0 or d != 0:
print("DFR = ", DFR)
print("FOCN = ", FOCN)
if pointer2 == 0:
print("TSS = ", TSS)
print("Heidke = ", Heidke, "\n")
| 31.088235 | 79 | 0.498414 | 2,594 | 17,969 | 3.408635 | 0.050887 | 0.024429 | 0.021715 | 0.032572 | 0.980547 | 0.977607 | 0.977607 | 0.977607 | 0.977607 | 0.965619 | 0 | 0.074649 | 0.311147 | 17,969 | 577 | 80 | 31.142114 | 0.639683 | 0.127275 | 0 | 0.926004 | 0 | 0 | 0.149907 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.006342 | false | 0.012685 | 0.004228 | 0 | 0.010571 | 0.22833 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c24aae6a3d48138a9eb9158cb37a42cd9d3087c4 | 4,566 | py | Python | Sampling.py | GrzegorzMika/Markov-Chains-Monte-Carlo | 29754eab860036d518d36a9654f904b3a2f59647 | [
"MIT"
] | null | null | null | Sampling.py | GrzegorzMika/Markov-Chains-Monte-Carlo | 29754eab860036d518d36a9654f904b3a2f59647 | [
"MIT"
] | null | null | null | Sampling.py | GrzegorzMika/Markov-Chains-Monte-Carlo | 29754eab860036d518d36a9654f904b3a2f59647 | [
"MIT"
] | null | null | null | from typing import Optional, List, Tuple, Union, Callable
import numpy as np
from ProposalDistribution import ProposalDistribution
# TODO: deal with overflow
class MetropolisHastingsSymmetric:
def __init__(self, target: Callable, proposal: ProposalDistribution,
initial: Optional[Union[float, np.ndarray]] = None,
shape: Optional[Union[Tuple[int], List[int]]] = None):
self.target: Callable = target
self.proposal: ProposalDistribution = proposal
self.accepted: float = 0
self.sample: Optional[np.ndarray] = None
self.used: bool = False
assert initial is not None or shape, 'At least one of the initial or shape arguments must be specified!'
if initial is not None:
self.initial: np.ndarray = np.array(initial)
else:
self.initial: np.ndarray = np.random.uniform(low=-1, high=1, size=shape)
def run(self, size: int, burnin: Optional[int] = 1000, thinning: Optional[int] = None, verbose: int = 0):
if self.used:
self.initial = self.sample[-1]
burnin = 0
self.sample = np.empty((size + burnin, *self.initial.shape))
self.sample[0] = self.initial
u = np.random.uniform(0, 1, size + burnin)
counter = 0
        for i in range(1, burnin + 1):  # cover indices 1..burnin so no row of self.sample stays uninitialized
current_x = self.sample[i - 1]
proposed = self.proposal.sample(current_x)
a = np.min([1, self.target(proposed) / self.target(current_x)])
if u[i] < a:
self.sample[i] = proposed
else:
self.sample[i] = current_x
for i in range(burnin + 1, size + burnin):
current_x = self.sample[i - 1]
proposed = self.proposal.sample(current_x)
a = np.min([1, self.target(proposed) / self.target(current_x)])
if u[i] < a:
counter += 1
self.sample[i] = proposed
else:
self.sample[i] = current_x
self.accepted = counter / size * 100
if verbose > 0:
print("Proportion of samples accepted: {}%".format(round(counter / size * 100, 2)))
self.used = True
return self.sample[burnin:][::thinning]
class MetropolisHastings:
def __init__(self, target: Callable, proposal: ProposalDistribution,
initial: Optional[Union[float, np.ndarray]] = None,
shape: Optional[Union[Tuple[int], List[int]]] = None):
self.target: Callable = target
self.proposal: ProposalDistribution = proposal
self.accepted: float = 0
self.sample: Optional[np.ndarray] = None
self.used: bool = False
assert initial is not None or shape, 'At least one of the initial or shape arguments must be specified!'
if initial is not None:
self.initial: np.ndarray = np.array(initial)
else:
self.initial: np.ndarray = np.random.uniform(low=-1, high=1, size=shape)
def run(self, size: int, burnin: Optional[int] = 1000, thinning: Optional[int] = None, verbose: int = 0):
if self.used:
self.initial = self.sample[-1]
burnin = 0
self.sample = np.empty((size + burnin, *self.initial.shape))
self.sample[0] = self.initial
u = np.random.uniform(0, 1, size + burnin)
counter = 0
        for i in range(1, burnin + 1):  # cover indices 1..burnin so no row of self.sample stays uninitialized
current_x = self.sample[i - 1]
proposed = self.proposal.sample(current_x)
a = np.min([1, (self.target(proposed) * self.proposal.pdf(current_x, proposed)) /
(self.target(current_x) * self.proposal.pdf(proposed, current_x))])
if u[i] < a:
self.sample[i] = proposed
else:
self.sample[i] = current_x
for i in range(burnin + 1, size + burnin):
current_x = self.sample[i - 1]
proposed = self.proposal.sample(current_x)
a = np.min([1, (self.target(proposed) * self.proposal.pdf(current_x, proposed)) /
(self.target(current_x) * self.proposal.pdf(proposed, current_x))])
if u[i] < a:
counter += 1
self.sample[i] = proposed
else:
self.sample[i] = current_x
self.accepted = counter / size * 100
if verbose > 0:
print("Proportion of samples accepted: {}%".format(round(counter / size * 100, 2)))
self.used = True
return self.sample[burnin:][::thinning]
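A minimal, self-contained illustration of the symmetric Metropolis step implemented by `run` above, targeting a standard normal density. This sketch inlines a Gaussian random-walk proposal instead of the `ProposalDistribution` interface; because the proposal is symmetric, the acceptance ratio reduces to `target(proposed) / target(current)`:

```python
import numpy as np


def metropolis_normal(size, burnin=1000, step=1.0, seed=0):
    """Random-walk Metropolis chain targeting the standard normal density."""
    rng = np.random.default_rng(seed)
    target = lambda x: np.exp(-0.5 * x * x)  # unnormalized N(0, 1) density
    chain = np.empty(size + burnin)
    chain[0] = 0.0
    for i in range(1, size + burnin):
        current = chain[i - 1]
        proposed = current + rng.normal(scale=step)  # symmetric proposal
        # Symmetric proposal: the Hastings correction cancels out
        accept = min(1.0, target(proposed) / target(current))
        chain[i] = proposed if rng.uniform() < accept else current
    return chain[burnin:]


# With enough samples, the chain's mean and std approximate 0 and 1.
sample = metropolis_normal(20000)
```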
| 41.509091 | 112 | 0.57293 | 560 | 4,566 | 4.621429 | 0.151786 | 0.085008 | 0.051005 | 0.02473 | 0.92813 | 0.92813 | 0.92813 | 0.92813 | 0.92813 | 0.92813 | 0 | 0.018448 | 0.311432 | 4,566 | 109 | 113 | 41.889908 | 0.804707 | 0.005256 | 0 | 0.946237 | 0 | 0 | 0.044053 | 0 | 0 | 0 | 0 | 0.009174 | 0.021505 | 1 | 0.043011 | false | 0 | 0.032258 | 0 | 0.11828 | 0.021505 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
dfd5ee3106f080860b3472d12a3f41e6e0189730 | 212 | py | Python | trapper/metrics/__init__.py | obss/trapper | 40e6fc25a2d8c1ece8bf006c362a9cb163c4355c | [
"MIT"
] | 36 | 2021-11-01T19:29:31.000Z | 2022-02-25T15:19:08.000Z | trapper/metrics/__init__.py | obss/trapper | 40e6fc25a2d8c1ece8bf006c362a9cb163c4355c | [
"MIT"
] | 7 | 2021-11-01T14:33:21.000Z | 2022-03-22T09:01:36.000Z | trapper/metrics/__init__.py | obss/trapper | 40e6fc25a2d8c1ece8bf006c362a9cb163c4355c | [
"MIT"
] | 4 | 2021-11-30T00:34:20.000Z | 2022-03-31T21:06:30.000Z | from trapper.metrics.input_handlers import MetricInputHandler
from trapper.metrics.jury import JuryMetric
from trapper.metrics.metric import Metric
from trapper.metrics.output_handlers import MetricOutputHandler
| 42.4 | 63 | 0.886792 | 26 | 212 | 7.153846 | 0.461538 | 0.236559 | 0.387097 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075472 | 212 | 4 | 64 | 53 | 0.94898 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
a075a1c48293e06b5c08b8b728230567b57b9b80 | 18,868 | py | Python | ckanext-hdx_org_group/ckanext/hdx_org_group/tests/test_org_custom_fields.py | OCHA-DAP/hdx-ckan | 202e0c44adc4ea8d0b90141e69365b65cce68672 | [
"Apache-2.0"
] | 58 | 2015-01-11T09:05:15.000Z | 2022-03-17T23:44:07.000Z | ckanext-hdx_org_group/ckanext/hdx_org_group/tests/test_org_custom_fields.py | OCHA-DAP/hdx-ckan | 202e0c44adc4ea8d0b90141e69365b65cce68672 | [
"Apache-2.0"
] | 1,467 | 2015-01-01T16:47:44.000Z | 2022-02-28T16:51:20.000Z | ckanext-hdx_org_group/ckanext/hdx_org_group/tests/test_org_custom_fields.py | OCHA-DAP/hdx-ckan | 202e0c44adc4ea8d0b90141e69365b65cce68672 | [
"Apache-2.0"
] | 17 | 2015-05-06T14:04:21.000Z | 2021-11-11T19:58:16.000Z | '''
Created on May 18, 2020
@author: dan mihaila
'''
import logging
import ckan.model as model
import ckan.tests.legacy as tests
import ckan.lib.helpers as h
import ckan.plugins.toolkit as tk
import ckanext.hdx_theme.tests.hdx_test_base as hdx_test_base
import ckanext.hdx_org_group.tests as org_group_base
from ckanext.hdx_org_group.helpers.static_lists import ORGANIZATION_TYPE_LIST

log = logging.getLogger(__name__)
class TestOrgFTSIDAPI(org_group_base.OrgGroupBaseTest):
@classmethod
def _load_plugins(cls):
hdx_test_base.load_plugin('ytp_request hdx_org_group hdx_theme')
@classmethod
def _get_action(cls, action_name):
return tk.get_action(action_name)
def test_create_org_api(self):
context_sysadmin = {'model': model, 'session': model.Session, 'user': 'testsysadmin',
'allow_partial_update': True}
new_org_dict = {
'name': 'test_org_dd',
'title': 'Test Org D',
'org_url': 'www.exampleorganization.org',
'description': 'just a simple description',
'hdx_org_type': ORGANIZATION_TYPE_LIST[0][1]
}
try:
org_dict = self._get_action('organization_create')(context_sysadmin, new_org_dict)
assert org_dict.get('name') == 'test_org_dd'
except Exception as ex:
assert False
edit_org_dict = {
'id': org_dict.get('id'),
'name': 'test_org_dd',
'title': 'Test Org DD',
'org_url': 'www.exampleorganization.org',
'description': 'just a simple description',
'hdx_org_type': ORGANIZATION_TYPE_LIST[0][1]
}
try:
org_dict = self._get_action('organization_update')(context_sysadmin, edit_org_dict)
assert org_dict.get('title') == 'Test Org DD'
except Exception as ex:
assert False
def test_fts_id_api(self):
context_usr = {'model': model, 'session': model.Session, 'user': 'tester', 'allow_partial_update': True}
context_sysadmin = {'model': model, 'session': model.Session, 'user': 'testsysadmin'}
new_org_dict = {
'name': 'test_org_d',
'title': 'Test Org D',
'fts_id': '123456',
'org_url': 'www.exampleorganization.org',
'description': 'just a simple description',
'hdx_org_type': ORGANIZATION_TYPE_LIST[0][1]
}
try:
org_dict = self._get_action('organization_create')(context_sysadmin, new_org_dict)
assert 'fts_id' in org_dict
assert org_dict.get('fts_id') == '123456'
except Exception as ex:
assert False
try:
_org_update_dict = {
'id': org_dict.get('id'),
'name': org_dict.get('name'),
'title': org_dict.get('title'),
'fts_id': '1234567',
'org_url': org_dict.get('org_url'),
'description': org_dict.get('description'),
'hdx_org_type': org_dict.get('hdx_org_type'),
}
self._get_action('organization_update')(context_sysadmin, _org_update_dict)
org_updated_dict = self._get_action('organization_show')(context_sysadmin, {'id': org_dict.get('id')})
assert 'fts_id' in org_updated_dict
assert org_updated_dict.get('fts_id') == '1234567'
except Exception as ex:
assert False
try:
member_dict = {'id': org_dict.get('id'), 'username': 'tester', 'role': 'admin'}
self._get_action('organization_member_create')(context_sysadmin, member_dict)
_org_dict = self._get_action('organization_show')(context_sysadmin, {'id': org_dict.get('id')})
assert True
except Exception as ex:
assert False
try:
_org_update_dict = {
'id': org_updated_dict.get('id'),
'name': org_updated_dict.get('name'),
'title': org_updated_dict.get('title'),
'org_url': org_updated_dict.get('org_url'),
'description': org_updated_dict.get('description'),
'hdx_org_type': org_updated_dict.get('hdx_org_type'),
}
self._get_action('organization_update')(context_usr, _org_update_dict)
org_updated_dict = self._get_action('organization_show')(context_sysadmin, {'id': org_dict.get('id')})
assert 'fts_id' in org_updated_dict
assert org_updated_dict.get('fts_id') == '1234567'
except Exception as ex:
assert False
try:
_org_update_dict = {
'id': org_dict.get('id'),
'name': org_dict.get('name'),
'fts_id': '12345678',
'title': org_dict.get('title'),
'org_url': org_dict.get('org_url'),
'description': org_dict.get('description'),
'hdx_org_type': org_dict.get('hdx_org_type'),
}
self._get_action('organization_update')(context_usr, _org_update_dict)
org_updated_dict = self._get_action('organization_show')(context_sysadmin, {'id': org_dict.get('id')})
assert 'fts_id' in org_updated_dict
assert org_updated_dict.get('fts_id') == '1234567'
except Exception as ex:
assert False
assert True
class TestOrgFTSIDController(org_group_base.OrgGroupBaseTest):
@classmethod
def _load_plugins(cls):
hdx_test_base.load_plugin('ytp_request hdx_org_group hdx_theme')
@classmethod
def _get_action(cls, action_name):
return tk.get_action(action_name)
def test_fts_is_controller(self):
context_usr = {'model': model, 'session': model.Session, 'user': 'tester', 'allow_partial_update': True}
context_sysadmin = {'model': model, 'session': model.Session, 'user': 'testsysadmin',
'allow_partial_update': True}
testsysadmin = model.User.by_name('testsysadmin')
sysadmin_auth = {'Authorization': str(testsysadmin.apikey)}
tester = model.User.by_name('tester')
tester_auth = {'Authorization': str(tester.apikey)}
test_client = self.get_backwards_compatible_test_client()
new_org_url = h.url_for(
controller='ckanext.hdx_org_group.controllers.organization_controller:HDXOrganizationController',
action='new')
new_org_params = {
'name': 'test_org_d',
'title': 'Test Org D',
'fts_id': '123456',
'org_url': 'www.exampleorganization.org',
'description': 'just a simple description',
'hdx_org_type': ORGANIZATION_TYPE_LIST[0][1],
'save': 'save'
}
try:
result = test_client.post(new_org_url, data=new_org_params, extra_environ=sysadmin_auth)
org_dict = self._get_action('organization_show')(context_sysadmin, {'id': 'test_org_d'})
assert '123456' == org_dict.get('fts_id')
assert '302 Found' in result.body
except Exception as ex:
assert False
try:
member_dict = {'id': org_dict.get('id'), 'username': 'tester', 'role': 'admin'}
self._get_action('organization_member_create')(context_sysadmin, member_dict)
org_dict = self._get_action('organization_show')(context_sysadmin, {'id': org_dict.get('id')})
assert True
except Exception as ex:
assert False
edit_org_url = h.url_for(
controller='ckanext.hdx_org_group.controllers.organization_controller:HDXOrganizationController',
action='edit', id=org_dict.get('id'))
edit_org_params = {
'id': org_dict.get('id'),
'name': 'test_org_d',
'title': 'Test Org E',
'org_url': 'www.exampleorganization.org',
'description': 'just a simple description',
'hdx_org_type': ORGANIZATION_TYPE_LIST[0][1],
'save': 'save'
}
try:
result = test_client.post(edit_org_url, data=edit_org_params, extra_environ=tester_auth)
org_dict = self._get_action('organization_show')(context_sysadmin, {'id': 'test_org_d'})
assert '123456' == org_dict.get('fts_id')
assert 'Test Org E' == org_dict.get('title')
except Exception as ex:
assert False
edit_org_params = {
'id': org_dict.get('id'),
'name': 'test_org_d',
'title': 'Test Org E',
'fts_id': '789',
'org_url': 'www.exampleorganization.org',
'description': 'just a simple description',
'hdx_org_type': ORGANIZATION_TYPE_LIST[0][1],
'save': 'save'
}
try:
result = test_client.post(edit_org_url, data=edit_org_params, extra_environ=tester_auth)
org_dict = self._get_action('organization_show')(context_sysadmin, {'id': 'test_org_d'})
assert '123456' == org_dict.get('fts_id')
assert 'Test Org E' == org_dict.get('title')
except Exception as ex:
assert False
assert True

class TestOrgUserSurveyUrlAPI(org_group_base.OrgGroupBaseTest):
    USER_SURVEY_URL = 'https://google.com'
    USER_SURVEY_UPDATED_URL = 'https://google.org'

    @classmethod
    def _load_plugins(cls):
        hdx_test_base.load_plugin('ytp_request hdx_org_group hdx_theme')

    @classmethod
    def _get_action(cls, action_name):
        return tk.get_action(action_name)

    def test_create_org_api(self):
        context_sysadmin = {'model': model, 'session': model.Session, 'user': 'testsysadmin',
                            'allow_partial_update': True}
        new_org_dict = {
            'name': 'test_org_dd',
            'title': 'Test Org D',
            'org_url': 'www.exampleorganization.org',
            'description': 'just a simple description',
            'hdx_org_type': ORGANIZATION_TYPE_LIST[0][1]
        }
        try:
            org_dict = self._get_action('organization_create')(context_sysadmin, new_org_dict)
            assert org_dict.get('name') == 'test_org_dd'
        except Exception as ex:
            assert False, str(ex)
        edit_org_dict = {
            'id': org_dict.get('id'),
            'name': 'test_org_dd',
            'title': 'Test Org DD',
            'org_url': 'www.exampleorganization.org',
            'description': 'just a simple description',
            'hdx_org_type': ORGANIZATION_TYPE_LIST[0][1]
        }
        try:
            org_dict = self._get_action('organization_update')(context_sysadmin, edit_org_dict)
            assert org_dict.get('title') == 'Test Org DD'
        except Exception as ex:
            assert False, str(ex)
    def test_user_survey_url_api(self):
        context_usr = {'model': model, 'session': model.Session, 'user': 'tester', 'allow_partial_update': True}
        context_sysadmin = {'model': model, 'session': model.Session, 'user': 'testsysadmin'}
        new_org_dict = {
            'name': 'test_org_d',
            'title': 'Test Org D',
            'user_survey_url': self.USER_SURVEY_URL,
            'org_url': 'www.exampleorganization.org',
            'description': 'just a simple description',
            'hdx_org_type': ORGANIZATION_TYPE_LIST[0][1]
        }
        try:
            org_dict = self._get_action('organization_create')(context_sysadmin, new_org_dict)
            assert 'user_survey_url' in org_dict
            assert org_dict.get('user_survey_url') == self.USER_SURVEY_URL
        except Exception as ex:
            assert False, str(ex)
        try:
            _org_update_dict = {
                'id': org_dict.get('id'),
                'name': org_dict.get('name'),
                'title': org_dict.get('title'),
                'user_survey_url': self.USER_SURVEY_UPDATED_URL,
                'org_url': org_dict.get('org_url'),
                'description': org_dict.get('description'),
                'hdx_org_type': org_dict.get('hdx_org_type'),
            }
            self._get_action('organization_update')(context_sysadmin, _org_update_dict)
            org_updated_dict = self._get_action('organization_show')(context_sysadmin, {'id': org_dict.get('id')})
            assert 'user_survey_url' in org_updated_dict
            assert org_updated_dict.get('user_survey_url') == self.USER_SURVEY_UPDATED_URL
        except Exception as ex:
            assert False, str(ex)
        try:
            member_dict = {'id': org_dict.get('id'), 'username': 'tester', 'role': 'admin'}
            self._get_action('organization_member_create')(context_sysadmin, member_dict)
            _org_dict = self._get_action('organization_show')(context_sysadmin, {'id': org_dict.get('id')})
        except Exception as ex:
            assert False, str(ex)
        # An org-admin update that omits user_survey_url must not wipe the stored value.
        try:
            _org_update_dict = {
                'id': org_updated_dict.get('id'),
                'name': org_updated_dict.get('name'),
                'title': org_updated_dict.get('title'),
                'org_url': org_updated_dict.get('org_url'),
                'description': org_updated_dict.get('description'),
                'hdx_org_type': org_updated_dict.get('hdx_org_type'),
            }
            self._get_action('organization_update')(context_usr, _org_update_dict)
            org_updated_dict = self._get_action('organization_show')(context_sysadmin, {'id': org_dict.get('id')})
            assert 'user_survey_url' in org_updated_dict
            assert org_updated_dict.get('user_survey_url') == self.USER_SURVEY_UPDATED_URL
        except Exception as ex:
            assert False, str(ex)
        # A non-sysadmin attempt to change user_survey_url must be ignored.
        try:
            _org_update_dict = {
                'id': org_dict.get('id'),
                'name': org_dict.get('name'),
                'user_survey_url': 'https://yahoo.com',
                'title': org_dict.get('title'),
                'org_url': org_dict.get('org_url'),
                'description': org_dict.get('description'),
                'hdx_org_type': org_dict.get('hdx_org_type'),
            }
            self._get_action('organization_update')(context_usr, _org_update_dict)
            org_updated_dict = self._get_action('organization_show')(context_sysadmin, {'id': org_dict.get('id')})
            assert 'user_survey_url' in org_updated_dict
            assert org_updated_dict.get('user_survey_url') == self.USER_SURVEY_UPDATED_URL
        except Exception as ex:
            assert False, str(ex)

class TestOrgUserSurveyUrlController(org_group_base.OrgGroupBaseTest):
    USER_SURVEY_URL = 'https://google.com'
    USER_SURVEY_UPDATED_URL = 'https://google.org'

    @classmethod
    def _load_plugins(cls):
        hdx_test_base.load_plugin('ytp_request hdx_org_group hdx_theme')

    @classmethod
    def _get_action(cls, action_name):
        return tk.get_action(action_name)

    def test_user_survey_url_controller(self):
        context_usr = {'model': model, 'session': model.Session, 'user': 'tester', 'allow_partial_update': True}
        context_sysadmin = {'model': model, 'session': model.Session, 'user': 'testsysadmin',
                            'allow_partial_update': True}
        testsysadmin = model.User.by_name('testsysadmin')
        sysadmin_auth = {'Authorization': str(testsysadmin.apikey)}
        tester = model.User.by_name('tester')
        tester_auth = {'Authorization': str(tester.apikey)}
        test_client = self.get_backwards_compatible_test_client()
        new_org_url = h.url_for(
            controller='ckanext.hdx_org_group.controllers.organization_controller:HDXOrganizationController',
            action='new')
        new_org_params = {
            'name': 'test_org_d',
            'title': 'Test Org D',
            'user_survey_url': self.USER_SURVEY_URL,
            'org_url': 'www.exampleorganization.org',
            'description': 'just a simple description',
            'hdx_org_type': ORGANIZATION_TYPE_LIST[0][1],
            'save': 'save'
        }
        try:
            result = test_client.post(new_org_url, data=new_org_params, extra_environ=sysadmin_auth)
            org_dict = self._get_action('organization_show')(context_sysadmin, {'id': 'test_org_d'})
            assert self.USER_SURVEY_URL == org_dict.get('user_survey_url')
            assert '302 Found' in result.body
        except Exception as ex:
            assert False, str(ex)
        try:
            member_dict = {'id': org_dict.get('id'), 'username': 'tester', 'role': 'admin'}
            self._get_action('organization_member_create')(context_sysadmin, member_dict)
            org_dict = self._get_action('organization_show')(context_sysadmin, {'id': org_dict.get('id')})
        except Exception as ex:
            assert False, str(ex)
        edit_org_url = h.url_for(
            controller='ckanext.hdx_org_group.controllers.organization_controller:HDXOrganizationController',
            action='edit', id=org_dict.get('id'))
        edit_org_params = {
            'id': org_dict.get('id'),
            'name': 'test_org_d',
            'title': 'Test Org E',
            'org_url': 'www.exampleorganization.org',
            'description': 'just a simple description',
            'hdx_org_type': ORGANIZATION_TYPE_LIST[0][1],
            'save': 'save'
        }
        try:
            result = test_client.get(edit_org_url, extra_environ=sysadmin_auth)
            assert self.USER_SURVEY_URL in result.body
            result = test_client.post(edit_org_url, data=edit_org_params, extra_environ=tester_auth)
            org_dict = self._get_action('organization_show')(context_sysadmin, {'id': 'test_org_d'})
            assert self.USER_SURVEY_URL == org_dict.get('user_survey_url')
            assert 'Test Org E' == org_dict.get('title')
        except Exception as ex:
            assert False, str(ex)
        edit_org_params = {
            'id': org_dict.get('id'),
            'name': 'test_org_d',
            'title': 'Test Org E',
            'user_survey_url': self.USER_SURVEY_UPDATED_URL,
            'org_url': 'www.exampleorganization.org',
            'description': 'just a simple description',
            'hdx_org_type': ORGANIZATION_TYPE_LIST[0][1],
            'save': 'save'
        }
        try:
            result = test_client.post(edit_org_url, data=edit_org_params, extra_environ=tester_auth)
            org_dict = self._get_action('organization_show')(context_sysadmin, {'id': 'test_org_d'})
            assert self.USER_SURVEY_URL == org_dict.get('user_survey_url')
            assert 'Test Org E' == org_dict.get('title')
        except Exception as ex:
            assert False, str(ex)
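The `try`/`except Exception: assert False` pattern in these tests converts any exception into a bare assertion failure, which hides the original traceback. A minimal stdlib-only sketch of a helper that keeps the failure message visible (the helper name `fail_on_exception` is an assumption for illustration, not part of the HDX test suite):

```python
import traceback


def fail_on_exception(fn, *args, **kwargs):
    """Run fn; re-raise any exception as an AssertionError that keeps the traceback text.

    Hypothetical helper, not part of the HDX test suite.
    """
    try:
        return fn(*args, **kwargs)
    except Exception as ex:
        raise AssertionError('unexpected exception: %s\n%s' % (ex, traceback.format_exc()))
```

A test body could then call `fail_on_exception(self._get_action('organization_create'), context_sysadmin, new_org_dict)` instead of wrapping each step in its own try/except block.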
# File: ex004.py (repo: ClevertonCodev/Python, commit 5fc9c372ba9053cdbeefcae110b3f32474ce8f74, license: MIT)
print(5 + 5 * 2)  # prints 15: multiplication binds tighter than addition
# File: matcher/__init__.py (repo: gianscarpe/vipm-project, commit 2d9384173a8741e4e56439a06f2fd837b5e1ce4e, license: MIT)
import os


def match(image):
    # Stub matcher: the query image is ignored and a fixed sample path is returned.
    return os.path.abspath("data/fashion-product-images-small/images/1163.jpg")
# File: src/offazure/azext_offazure/generated/_help.py (repo: Mannan2812/azure-cli-extensions, commit e2b34efe23795f6db9c59100534a40f0813c3d95, license: MIT)
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
# pylint: disable=too-many-lines
from knack.help_files import helps
helps['offazure hyperv cluster'] = """
    type: group
    short-summary: Manage Hyper-V clusters with offazure
"""

helps['offazure hyperv cluster list'] = """
    type: command
    short-summary: "Method to get all clusters in a site."
    examples:
      - name: List clusters by site
        text: |-
               az offazure hyperv cluster list --resource-group "ipsahoo-RI-121119" --site-name "hyperv121319c813site" \
--subscription-id "4bd2aa0f-2bd2-4d67-91a8-5a4533d58600"
"""

helps['offazure hyperv cluster show'] = """
    type: command
    short-summary: "Method to get a Hyper-V cluster."
    examples:
      - name: Get cluster
        text: |-
               az offazure hyperv cluster show --cluster-name "hypgqlclusrs1-ntdev-corp-micros-11e77b27-67cc-5e46-a5d8-\
0ff3dc2ef179" --resource-group "ipsahoo-RI-121119" --site-name "hyperv121319c813site" --subscription-id \
"4bd2aa0f-2bd2-4d67-91a8-5a4533d58600"
"""

helps['offazure hyperv host'] = """
    type: group
    short-summary: Manage Hyper-V hosts with offazure
"""

helps['offazure hyperv host list'] = """
    type: command
    short-summary: "Method to get all hosts in a site."
    examples:
      - name: List hosts by site
        text: |-
               az offazure hyperv host list --resource-group "pajindTest" --site-name "appliance1e39site" \
--subscription-id "4bd2aa0f-2bd2-4d67-91a8-5a4533d58600"
"""

helps['offazure hyperv host show'] = """
    type: command
    short-summary: "Method to get a Hyper-V host."
    examples:
      - name: Get host
        text: |-
               az offazure hyperv host show --host-name "bcdr-ewlab-46-ntdev-corp-micros-e4638031-3b19-5642-926d-385da6\
0cfb8a" --resource-group "pajindTest" --site-name "appliance1e39site" --subscription-id "4bd2aa0f-2bd2-4d67-91a8-5a4533\
d58600"
"""

helps['offazure hyperv machine'] = """
    type: group
    short-summary: Manage Hyper-V machines with offazure
"""

helps['offazure hyperv machine list'] = """
    type: command
    short-summary: "Method to get machine."
    examples:
      - name: List machines by site
        text: |-
               az offazure hyperv machine list --resource-group "pajindTest" --site-name "appliance1e39site" \
--subscription-id "4bd2aa0f-2bd2-4d67-91a8-5a4533d58600"
"""

helps['offazure hyperv machine show'] = """
    type: command
    short-summary: "Method to get machine."
    examples:
      - name: Get machine.
        text: |-
               az offazure hyperv machine show --machine-name "96d27052-052b-48db-aa84-b9978eddbf5d" --resource-group \
"pajindTest" --site-name "appliance1e39site" --subscription-id "4bd2aa0f-2bd2-4d67-91a8-5a4533d58600"
"""

helps['offazure hyperv run-as-account'] = """
    type: group
    short-summary: Manage Hyper-V run-as-accounts with offazure
"""

helps['offazure hyperv run-as-account list'] = """
    type: command
    short-summary: "Method to get run as accounts."
    examples:
      - name: List Run As Accounts by site
        text: |-
               az offazure hyperv run-as-account list --resource-group "pajindTest" --site-name "appliance1e39site" \
--subscription-id "4bd2aa0f-2bd2-4d67-91a8-5a4533d58600"
"""

helps['offazure hyperv run-as-account show'] = """
    type: command
    short-summary: "Method to get run as account."
    examples:
      - name: Get run as account.
        text: |-
               az offazure hyperv run-as-account show --account-name "account1" --resource-group "pajindTest" \
--site-name "appliance1e39site" --subscription-id "4bd2aa0f-2bd2-4d67-91a8-5a4533d58600"
"""

helps['offazure hyperv site'] = """
    type: group
    short-summary: Manage Hyper-V sites with offazure
"""

helps['offazure hyperv site show'] = """
    type: command
    short-summary: "Method to get a site."
    examples:
      - name: Get Hyper-V site
        text: |-
               az offazure hyperv site show --resource-group "pajindTest" --site-name "appliance1e39site" \
--subscription-id "4bd2aa0f-2bd2-4d67-91a8-5a4533d58600"
"""

helps['offazure hyperv site create'] = """
    type: command
    short-summary: "Method to create or update a site."
    parameters:
      - name: --service-principal-identity-details
        short-summary: "Service principal identity details used by agent for communication to the service."
        long-summary: |
            Usage: --service-principal-identity-details tenant-id=XX application-id=XX object-id=XX audience=XX \
aad-authority=XX raw-cert-data=XX
            tenant-id: Tenant Id for the service principal with which the on-premise management/data plane components \
would communicate with our Azure services.
            application-id: Application/client Id for the service principal with which the on-premise management/data \
plane components would communicate with our Azure services.
            object-id: Object Id of the service principal with which the on-premise management/data plane components \
would communicate with our Azure services.
            audience: Intended audience for the service principal.
            aad-authority: AAD Authority URL which was used to request the token for the service principal.
            raw-cert-data: Raw certificate data for building certificate expiry flows.
      - name: --agent-details
        short-summary: "On-premises agent details."
        long-summary: |
            Usage: --agent-details key-vault-uri=XX key-vault-id=XX
            key-vault-uri: Key vault URI.
            key-vault-id: Key vault ARM Id.
    examples:
      - name: Create Hyper-V site
        text: |-
               az offazure hyperv site create --location "eastus" --service-principal-identity-details \
aad-authority="https://login.windows.net/72f988bf-86f1-41af-91ab-2d7cd011db47" application-id="e9f013df-2a2a-4871-b766-\
e79867f30348" audience="https://72f988bf-86f1-41af-91ab-2d7cd011db47/MaheshSite17ac9agentauthaadapp" \
object-id="2cd492bc-7ef3-4ee0-b301-59a88108b47b" tenant-id="72f988bf-86f1-41af-91ab-2d7cd011db47" --resource-group \
"pajindTest" --site-name "appliance1e39site" --subscription-id "4bd2aa0f-2bd2-4d67-91a8-5a4533d58600"
"""

helps['offazure hyperv site delete'] = """
    type: command
    short-summary: "Method to delete a site."
    examples:
      - name: Delete Hyper-V site.
        text: |-
               az offazure hyperv site delete --resource-group "pajindTest" --site-name "appliance1e39site" \
--subscription-id "4bd2aa0f-2bd2-4d67-91a8-5a4533d58600"
"""

helps['offazure vmware machine'] = """
    type: group
    short-summary: Manage VMware machines with offazure
"""

helps['offazure vmware machine list'] = """
    type: command
    short-summary: "Method to get machine."
    examples:
      - name: Get VMware machines
        text: |-
               az offazure vmware machine list --resource-group "myResourceGroup" --site-name "pajind_site1" \
--subscription-id "75dd7e42-4fd1-4512-af04-83ad9864335b"
"""

helps['offazure vmware machine show'] = """
    type: command
    short-summary: "Method to get machine."
    examples:
      - name: Get VMware machine.
        text: |-
               az offazure vmware machine show --name "machine1" --resource-group "myResourceGroup" --site-name \
"pajind_site1" --subscription-id "75dd7e42-4fd1-4512-af04-83ad9864335b"
"""

helps['offazure vmware run-as-account'] = """
    type: group
    short-summary: Manage VMware run-as-accounts with offazure
"""

helps['offazure vmware run-as-account list'] = """
    type: command
    short-summary: "Method to get run as accounts."
    examples:
      - name: List VMware run as account by site.
        text: |-
               az offazure vmware run-as-account list --resource-group "myResourceGroup" --site-name "pajind_site1" \
--subscription-id "75dd7e42-4fd1-4512-af04-83ad9864335b"
"""

helps['offazure vmware run-as-account show'] = """
    type: command
    short-summary: "Method to get run as account."
    examples:
      - name: Get VMware run as account.
        text: |-
               az offazure vmware run-as-account show --account-name "account1" --resource-group "myResourceGroup" \
--site-name "pajind_site1" --subscription-id "75dd7e42-4fd1-4512-af04-83ad9864335b"
"""

helps['offazure vmware site'] = """
    type: group
    short-summary: Manage VMware sites with offazure
"""

helps['offazure vmware site show'] = """
    type: command
    short-summary: "Method to get a site."
    examples:
      - name: Get VMware site
        text: |-
               az offazure vmware site show --resource-group "myResourceGroup" --name "pajind_site1" --subscription-id \
"75dd7e42-4fd1-4512-af04-83ad9864335b"
"""

helps['offazure vmware site create'] = """
    type: command
    short-summary: "Method to create or update a site."
    parameters:
      - name: --service-principal-identity-details
        short-summary: "Service principal identity details used by agent for communication to the service."
        long-summary: |
            Usage: --service-principal-identity-details tenant-id=XX application-id=XX object-id=XX audience=XX \
aad-authority=XX raw-cert-data=XX
            tenant-id: Tenant Id for the service principal with which the on-premise management/data plane components \
would communicate with our Azure services.
            application-id: Application/client Id for the service principal with which the on-premise management/data \
plane components would communicate with our Azure services.
            object-id: Object Id of the service principal with which the on-premise management/data plane components \
would communicate with our Azure services.
            audience: Intended audience for the service principal.
            aad-authority: AAD Authority URL which was used to request the token for the service principal.
            raw-cert-data: Raw certificate data for building certificate expiry flows.
      - name: --agent-details
        short-summary: "On-premises agent details."
        long-summary: |
            Usage: --agent-details key-vault-uri=XX key-vault-id=XX
            key-vault-uri: Key vault URI.
            key-vault-id: Key vault ARM Id.
    examples:
      - name: Create VMware site
        text: |-
               az offazure vmware site create --location "eastus" --service-principal-identity-details \
aad-authority="https://login.windows.net/72f988bf-86f1-41af-91ab-2d7cd011db47" application-id="e9f013df-2a2a-4871-b766-\
e79867f30348" audience="https://72f988bf-86f1-41af-91ab-2d7cd011db47/MaheshSite17ac9agentauthaadapp" \
object-id="2cd492bc-7ef3-4ee0-b301-59a88108b47b" tenant-id="72f988bf-86f1-41af-91ab-2d7cd011db47" --resource-group \
"pajindTest" --site-name "appliance1e39site" --subscription-id "4bd2aa0f-2bd2-4d67-91a8-5a4533d58600"
"""

helps['offazure vmware site delete'] = """
    type: command
    short-summary: "Method to delete a site."
    examples:
      - name: Delete VMware site
        text: |-
               az offazure vmware site delete --resource-group "myResourceGroup" --name "pajind_site1" \
--subscription-id "75dd7e42-4fd1-4512-af04-83ad9864335b"
"""

helps['offazure vmware vcenter'] = """
    type: group
    short-summary: Manage VMware vCenters with offazure
"""

helps['offazure vmware vcenter list'] = """
    type: command
    short-summary: "Method to get all vCenters in a site."
    examples:
      - name: List VMware vCenters by site
        text: |-
               az offazure vmware vcenter list --resource-group "rahasijaBugBash050919" --site-name \
"rahasapp122119d37csite" --subscription-id "4bd2aa0f-2bd2-4d67-91a8-5a4533d58600"
"""

helps['offazure vmware vcenter show'] = """
    type: command
    short-summary: "Method to get a vCenter."
    examples:
      - name: Get VMware vCenter.
        text: |-
               az offazure vmware vcenter show --resource-group "rahasijaBugBash050919" --site-name \
"rahasapp122119d37csite" --subscription-id "4bd2aa0f-2bd2-4d67-91a8-5a4533d58600" --name \
"10-150-8-50-6af5f800-e9f6-56ff-9c3c-7be56d242c31"
"""
# File: epm_client/apis/po_p_api.py (repo: tub-elastest/epm-client-python, commit 4e708a7e8c80334337d2f05c0baec46fdd581b8f, license: Apache-2.0)
# coding: utf-8
"""
EPM REST API
REST API description of the ElasTest Platform Manager Module.
OpenAPI spec version: 0.1.2
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import sys
import os
import re
# python 2 and python 3 compatibility library
from six import iteritems
from ..configuration import Configuration
from ..api_client import ApiClient
class PoPApi(object):
    """
    NOTE: This class is auto generated by the swagger code generator program.
    Do not edit the class manually.
    Ref: https://github.com/swagger-api/swagger-codegen
    """

    def __init__(self, api_client=None):
        config = Configuration()
        if api_client:
            self.api_client = api_client
        else:
            if not config.api_client:
                config.api_client = ApiClient()
            self.api_client = config.api_client
    def get_all_po_ps(self, **kwargs):
        """
        Returns all PoPs.
        Returns all PoPs with all its details.
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.get_all_po_ps(callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :return: list[PoP]
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.get_all_po_ps_with_http_info(**kwargs)
        else:
            (data) = self.get_all_po_ps_with_http_info(**kwargs)
            return data

    def get_all_po_ps_with_http_info(self, **kwargs):
        """
        Returns all PoPs.
        Returns all PoPs with all its details.
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.get_all_po_ps_with_http_info(callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :return: list[PoP]
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = []
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_all_po_ps" % key
                )
            params[key] = val
        del params['kwargs']

        collection_formats = {}

        resource_path = '/pop'.replace('{format}', 'json')
        path_params = {}

        query_params = {}

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api(resource_path, 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='list[PoP]',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
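Each method pair in this generated client follows the same convention its docstrings describe: called without `callback`, the request is synchronous and returns the data; called with one, the request runs on a thread and the thread object is returned. A minimal stdlib sketch of that convention (the function and argument names here are illustrative, not from the generated client):

```python
import threading


def call_with_optional_callback(produce_result, callback=None):
    """Mimic the swagger-client pattern: a synchronous call returns the data
    directly; with a callback, the work runs on a thread that is returned."""
    if callback is None:
        return produce_result()
    thread = threading.Thread(target=lambda: callback(produce_result()))
    thread.start()
    return thread
```

In these terms, `api.get_all_po_ps()` corresponds to the synchronous branch, while `api.get_all_po_ps(callback=fn)` corresponds to the threaded branch whose return value must be `join()`ed or otherwise awaited.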
    def get_po_p_by_id(self, id, **kwargs):
        """
        Returns a PoP.
        Returns the PoP with the given ID. Returns all its details.
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.get_po_p_by_id(id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str id: ID of PoP (required)
        :return: PoP
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.get_po_p_by_id_with_http_info(id, **kwargs)
        else:
            (data) = self.get_po_p_by_id_with_http_info(id, **kwargs)
            return data

    def get_po_p_by_id_with_http_info(self, id, **kwargs):
        """
        Returns a PoP.
        Returns the PoP with the given ID. Returns all its details.
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.get_po_p_by_id_with_http_info(id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str id: ID of PoP (required)
        :return: PoP
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['id']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_po_p_by_id" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'id' is set
        if ('id' not in params) or (params['id'] is None):
            raise ValueError("Missing the required parameter `id` when calling `get_po_p_by_id`")

        collection_formats = {}

        resource_path = '/pop/{id}'.replace('{format}', 'json')
        path_params = {}
        if 'id' in params:
            path_params['id'] = params['id']

        query_params = {}

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api(resource_path, 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='PoP',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
    def register_po_p(self, body, **kwargs):
        """
        Registers a new PoP
        Registers a new Point-of-Presence represented by a PoP
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.register_po_p(body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param PoP body: Definition of a PoP which defines a Point-of-Presence used to host resources (required)
        :return: PoP
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.register_po_p_with_http_info(body, **kwargs)
        else:
            (data) = self.register_po_p_with_http_info(body, **kwargs)
            return data

    def register_po_p_with_http_info(self, body, **kwargs):
        """
        Registers a new PoP
        Registers a new Point-of-Presence represented by a PoP
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.register_po_p_with_http_info(body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param PoP body: Definition of a PoP which defines a Point-of-Presence used to host resources (required)
        :return: PoP
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['body']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method register_po_p" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'body' is set
        if ('body' not in params) or (params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `register_po_p`")

        collection_formats = {}

        resource_path = '/pop'.replace('{format}', 'json')
        path_params = {}

        query_params = {}

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api(resource_path, 'POST',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='PoP',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

    def unregister_po_p(self, id, **kwargs):
        """
        Unregisters a PoP.
        Unregisters the PoP that matches with a given ID.
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>> pprint(response)
        >>>
        >>> thread = api.unregister_po_p(id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str id: ID of PoP (required)
        :return: str
            If the method is called asynchronously,
            returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.unregister_po_p_with_http_info(id, **kwargs)
        else:
            (data) = self.unregister_po_p_with_http_info(id, **kwargs)
            return data

    def unregister_po_p_with_http_info(self, id, **kwargs):
        """
        Unregisters a PoP.
        Unregisters the PoP that matches with a given ID.
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>> pprint(response)
        >>>
        >>> thread = api.unregister_po_p_with_http_info(id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str id: ID of PoP (required)
        :return: str
            If the method is called asynchronously,
            returns the request thread.
        """
        all_params = ['id']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method unregister_po_p" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'id' is set
        if ('id' not in params) or (params['id'] is None):
            raise ValueError("Missing the required parameter `id` when calling `unregister_po_p`")

        collection_formats = {}

        resource_path = '/pop/{id}'.replace('{format}', 'json')
        path_params = {}
        if 'id' in params:
            path_params['id'] = params['id']

        query_params = {}

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['*/*'])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api(resource_path, 'DELETE',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='str',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

    def update_po_p(self, id, body, **kwargs):
        """
        Updates a PoP.
        Updates an already registered PoP.
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>> pprint(response)
        >>>
        >>> thread = api.update_po_p(id, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str id: ID of PoP (required)
        :param PoP body: PoP object that needs to be updated. (required)
        :return: PoP
            If the method is called asynchronously,
            returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.update_po_p_with_http_info(id, body, **kwargs)
        else:
            (data) = self.update_po_p_with_http_info(id, body, **kwargs)
            return data

    def update_po_p_with_http_info(self, id, body, **kwargs):
        """
        Updates a PoP.
        Updates an already registered PoP.
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>> pprint(response)
        >>>
        >>> thread = api.update_po_p_with_http_info(id, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str id: ID of PoP (required)
        :param PoP body: PoP object that needs to be updated. (required)
        :return: PoP
            If the method is called asynchronously,
            returns the request thread.
        """
        all_params = ['id', 'body']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method update_po_p" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'id' is set
        if ('id' not in params) or (params['id'] is None):
            raise ValueError("Missing the required parameter `id` when calling `update_po_p`")
        # verify the required parameter 'body' is set
        if ('body' not in params) or (params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `update_po_p`")

        collection_formats = {}

        resource_path = '/pop/{id}'.replace('{format}', 'json')
        path_params = {}
        if 'id' in params:
            path_params['id'] = params['id']

        query_params = {}

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api(resource_path, 'PATCH',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='PoP',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
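
# The generated `*_with_http_info` methods above all share one keyword-argument
# idiom: collect locals(), reject unknown kwargs, then verify required
# parameters. A minimal standalone sketch of that idiom (the helper name and
# signature below are illustrative, not part of the generated client):

```python
def validate_kwargs(method_name, all_params, required, **kwargs):
    """Mimic the generated validation: reject unexpected keyword
    arguments and enforce that required parameters are not None."""
    for key in kwargs:
        if key not in all_params:
            raise TypeError(
                "Got an unexpected keyword argument '%s'"
                " to method %s" % (key, method_name)
            )
    for name in required:
        if kwargs.get(name) is None:
            raise ValueError(
                "Missing the required parameter `%s` when calling `%s`"
                % (name, method_name)
            )
    return kwargs
```

# e.g. validate_kwargs('update_po_p', ['id', 'body'], ['id', 'body'], bogus=1)
# raises TypeError, mirroring the generated methods' behaviour.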
# Autogenerated file. ANY CHANGES WILL BE OVERWRITTEN
from to_python.core.types import FunctionType, \
    FunctionArgument, \
    FunctionArgumentValues, \
    FunctionReturnTypes, \
    FunctionSignature, \
    FunctionDoc, \
    FunctionData, \
    CompoundFunctionData
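
# Each DUMP_PARTIAL entry below pairs server- and client-side FunctionData for
# one wiki function. A consumer typically indexes the dump by signature name;
# a minimal sketch of that lookup, using namedtuple stand-ins for the real
# dataclasses (the stand-ins and helper are illustrative, not part of this dump):

```python
from collections import namedtuple

# Stand-ins mirroring only the fields the lookup touches.
Sig = namedtuple('Sig', ['name'])
Func = namedtuple('Func', ['signature'])
Entry = namedtuple('Entry', ['server', 'client'])

def index_by_name(dump):
    """Map a function name to its entry, assuming each entry's first
    server-side FunctionData carries the canonical signature name."""
    return {entry.server[0].signature.name: entry for entry in dump}

demo = [Entry(server=[Func(Sig('bitAnd'))], client=[Func(Sig('bitAnd'))])]
```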
DUMP_PARTIAL = [
    CompoundFunctionData(
        server=[
            FunctionData(
                signature=FunctionSignature(
                    name='addDebugHook',
                    return_types=FunctionReturnTypes(
                        return_types=[
                            FunctionType(
                                names=['bool'],
                                is_optional=False,
                            )
                        ],
                        variable_length=False,
                    ),
                    arguments=FunctionArgumentValues(
                        arguments=[
                            [
                                FunctionArgument(
                                    name='hookType',
                                    argument_type=FunctionType(
                                        names=['string'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ],
                            [
                                FunctionArgument(
                                    name='callbackFunction',
                                    argument_type=FunctionType(
                                        names=['function'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ],
                            [
                                FunctionArgument(
                                    name='nameList',
                                    argument_type=FunctionType(
                                        names=['table'],
                                        is_optional=True,
                                    ),
                                    default_value=None,
                                )
                            ]
                        ],
                        variable_length=False,
                    ),
                    generic_types=[],
                ),
                docs=FunctionDoc(
                    description='This function allows tracing of MTA functions and events. It should only be used when debugging scripts as it may degrade script performance.\nDebug hooks are not recursive, so functions and events triggered inside the hook callback will not be traced.',
                    arguments={
                        "hookType": """The type of hook to add. This can be:
** preEvent
** postEvent
** preFunction
** postFunction
* preEventFunction
* postEventFunction """,
                        "callbackFunction": """The function to call
** Returning the string "skip" from the callback function will cause the original function/event to be skipped """,
                        "nameList": """Table of strings for restricting which functions and events the hook will be triggered on
** addDebugHook and removeDebugHook will only be hooked if they are specified in the name list """
                    },
                    result='returns true if the hook was successfully added, or false otherwise.',
                ),
                url='addDebugHook',
            )
        ],
        client=[
            FunctionData(
                signature=FunctionSignature(
                    name='addDebugHook',
                    return_types=FunctionReturnTypes(
                        return_types=[
                            FunctionType(
                                names=['bool'],
                                is_optional=False,
                            )
                        ],
                        variable_length=False,
                    ),
                    arguments=FunctionArgumentValues(
                        arguments=[
                            [
                                FunctionArgument(
                                    name='hookType',
                                    argument_type=FunctionType(
                                        names=['string'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ],
                            [
                                FunctionArgument(
                                    name='callbackFunction',
                                    argument_type=FunctionType(
                                        names=['function'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ],
                            [
                                FunctionArgument(
                                    name='nameList',
                                    argument_type=FunctionType(
                                        names=['table'],
                                        is_optional=True,
                                    ),
                                    default_value=None,
                                )
                            ]
                        ],
                        variable_length=False,
                    ),
                    generic_types=[],
                ),
                docs=FunctionDoc(
                    description='This function allows tracing of MTA functions and events. It should only be used when debugging scripts as it may degrade script performance.\nDebug hooks are not recursive, so functions and events triggered inside the hook callback will not be traced.',
                    arguments={
                        "hookType": """The type of hook to add. This can be:
** preEvent
** postEvent
** preFunction
** postFunction
* preEventFunction
* postEventFunction """,
                        "callbackFunction": """The function to call
** Returning the string "skip" from the callback function will cause the original function/event to be skipped """,
                        "nameList": """Table of strings for restricting which functions and events the hook will be triggered on
** addDebugHook and removeDebugHook will only be hooked if they are specified in the name list """
                    },
                    result='returns true if the hook was successfully added, or false otherwise.',
                ),
                url='addDebugHook',
            )
        ],
    ),
    CompoundFunctionData(
        server=[
            FunctionData(
                signature=FunctionSignature(
                    name='base64Decode',
                    return_types=FunctionReturnTypes(
                        return_types=[
                            FunctionType(
                                names=['string'],
                                is_optional=False,
                            )
                        ],
                        variable_length=False,
                    ),
                    arguments=FunctionArgumentValues(
                        arguments=[
                            [
                                FunctionArgument(
                                    name='data',
                                    argument_type=FunctionType(
                                        names=['string'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ]
                        ],
                        variable_length=False,
                    ),
                    generic_types=[],
                ),
                docs=FunctionDoc(
                    description='This function returns the decrypted data from https://en.wikipedia.org/wiki/Base64 base64 representation of the encrypted block',
                    arguments={
                        "data": """The block of data you want to decrypt """
                    },
                    result='returns the decrypted data from https://en.wikipedia.org/wiki/base64 base64 representation of the encrypted block if the decryption process was successfully completed, false otherwise.',
                ),
                url='base64Decode',
            )
        ],
        client=[
            FunctionData(
                signature=FunctionSignature(
                    name='base64Decode',
                    return_types=FunctionReturnTypes(
                        return_types=[
                            FunctionType(
                                names=['string'],
                                is_optional=False,
                            )
                        ],
                        variable_length=False,
                    ),
                    arguments=FunctionArgumentValues(
                        arguments=[
                            [
                                FunctionArgument(
                                    name='data',
                                    argument_type=FunctionType(
                                        names=['string'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ]
                        ],
                        variable_length=False,
                    ),
                    generic_types=[],
                ),
                docs=FunctionDoc(
                    description='This function returns the decrypted data from https://en.wikipedia.org/wiki/Base64 base64 representation of the encrypted block',
                    arguments={
                        "data": """The block of data you want to decrypt """
                    },
                    result='returns the decrypted data from https://en.wikipedia.org/wiki/base64 base64 representation of the encrypted block if the decryption process was successfully completed, false otherwise.',
                ),
                url='base64Decode',
            )
        ],
    ),
    CompoundFunctionData(
        server=[
            FunctionData(
                signature=FunctionSignature(
                    name='base64Encode',
                    return_types=FunctionReturnTypes(
                        return_types=[
                            FunctionType(
                                names=['string'],
                                is_optional=False,
                            )
                        ],
                        variable_length=False,
                    ),
                    arguments=FunctionArgumentValues(
                        arguments=[
                            [
                                FunctionArgument(
                                    name='data',
                                    argument_type=FunctionType(
                                        names=['string'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ]
                        ],
                        variable_length=False,
                    ),
                    generic_types=[],
                ),
                docs=FunctionDoc(
                    description='This function returns the https://en.wikipedia.org/wiki/Base64 base64 representation of the encoded block of data',
                    arguments={
                        "data": """The block of data you want to encode """
                    },
                    result='returns the https://en.wikipedia.org/wiki/base64 base64 representation of the encoded data if the encoding process was successfully completed, false otherwise.',
                ),
                url='base64Encode',
            )
        ],
        client=[
            FunctionData(
                signature=FunctionSignature(
                    name='base64Encode',
                    return_types=FunctionReturnTypes(
                        return_types=[
                            FunctionType(
                                names=['string'],
                                is_optional=False,
                            )
                        ],
                        variable_length=False,
                    ),
                    arguments=FunctionArgumentValues(
                        arguments=[
                            [
                                FunctionArgument(
                                    name='data',
                                    argument_type=FunctionType(
                                        names=['string'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ]
                        ],
                        variable_length=False,
                    ),
                    generic_types=[],
                ),
                docs=FunctionDoc(
                    description='This function returns the https://en.wikipedia.org/wiki/Base64 base64 representation of the encoded block of data',
                    arguments={
                        "data": """The block of data you want to encode """
                    },
                    result='returns the https://en.wikipedia.org/wiki/base64 base64 representation of the encoded data if the encoding process was successfully completed, false otherwise.',
                ),
                url='base64Encode',
            )
        ],
    ),
    CompoundFunctionData(
        server=[
            FunctionData(
                signature=FunctionSignature(
                    name='bitAnd',
                    return_types=FunctionReturnTypes(
                        return_types=[
                            FunctionType(
                                names=['uint'],
                                is_optional=False,
                            )
                        ],
                        variable_length=False,
                    ),
                    arguments=FunctionArgumentValues(
                        arguments=[
                            [
                                FunctionArgument(
                                    name='var1',
                                    argument_type=FunctionType(
                                        names=['uint'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ],
                            [
                                FunctionArgument(
                                    name='var2',
                                    argument_type=FunctionType(
                                        names=['uint'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ]
                        ],
                        variable_length=True,
                    ),
                    generic_types=[],
                ),
                docs=FunctionDoc(
                    description='This function performs a bitwise AND-conjunction on two or more (unsigned) 32-bit Int|integers. See http://en.wikipedia.org/wiki/Bitwise_operation#AND Bitwise operation for more details.',
                    arguments={
                        "varN": """The value you want to perform an AND-conjunction on """
                    },
                    result='returns the conjuncted value.',
                ),
                url='bitAnd',
            )
        ],
        client=[
            FunctionData(
                signature=FunctionSignature(
                    name='bitAnd',
                    return_types=FunctionReturnTypes(
                        return_types=[
                            FunctionType(
                                names=['uint'],
                                is_optional=False,
                            )
                        ],
                        variable_length=False,
                    ),
                    arguments=FunctionArgumentValues(
                        arguments=[
                            [
                                FunctionArgument(
                                    name='var1',
                                    argument_type=FunctionType(
                                        names=['uint'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ],
                            [
                                FunctionArgument(
                                    name='var2',
                                    argument_type=FunctionType(
                                        names=['uint'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ]
                        ],
                        variable_length=True,
                    ),
                    generic_types=[],
                ),
                docs=FunctionDoc(
                    description='This function performs a bitwise AND-conjunction on two or more (unsigned) 32-bit Int|integers. See http://en.wikipedia.org/wiki/Bitwise_operation#AND Bitwise operation for more details.',
                    arguments={
                        "varN": """The value you want to perform an AND-conjunction on """
                    },
                    result='returns the conjuncted value.',
                ),
                url='bitAnd',
            )
        ],
    ),
    CompoundFunctionData(
        server=[
            FunctionData(
                signature=FunctionSignature(
                    name='bitArShift',
                    return_types=FunctionReturnTypes(
                        return_types=[
                            FunctionType(
                                names=['int'],
                                is_optional=False,
                            )
                        ],
                        variable_length=False,
                    ),
                    arguments=FunctionArgumentValues(
                        arguments=[
                            [
                                FunctionArgument(
                                    name='value',
                                    argument_type=FunctionType(
                                        names=['int'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ],
                            [
                                FunctionArgument(
                                    name='n',
                                    argument_type=FunctionType(
                                        names=['int'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ]
                        ],
                        variable_length=False,
                    ),
                    generic_types=[],
                ),
                docs=FunctionDoc(
                    description='This functions performs an arithmetic shift on the integer value by integer n positions. In an arithmetic shift, zeros are shifted in to replace the discarded bits. In a right arithmetic shift, the https://en.wikipedia.org/wiki/Sign_bit sign bit is shifted in on the left, thus preserving the sign of the operand.\nSee https://en.wikipedia.org/wiki/Bitwise_operation#Arithmetic_shift Bitwise operation for more details.',
                    arguments={
                        "value": """The value you want to perform the arithmetic shift on. """,
                        "n": """The amount of positions to shift the value by. """
                    },
                    result='returns the arithmetic shifted value as integer.',
                ),
                url='bitArShift',
            )
        ],
        client=[
            FunctionData(
                signature=FunctionSignature(
                    name='bitArShift',
                    return_types=FunctionReturnTypes(
                        return_types=[
                            FunctionType(
                                names=['int'],
                                is_optional=False,
                            )
                        ],
                        variable_length=False,
                    ),
                    arguments=FunctionArgumentValues(
                        arguments=[
                            [
                                FunctionArgument(
                                    name='value',
                                    argument_type=FunctionType(
                                        names=['int'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ],
                            [
                                FunctionArgument(
                                    name='n',
                                    argument_type=FunctionType(
                                        names=['int'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ]
                        ],
                        variable_length=False,
                    ),
                    generic_types=[],
                ),
                docs=FunctionDoc(
                    description='This functions performs an arithmetic shift on the integer value by integer n positions. In an arithmetic shift, zeros are shifted in to replace the discarded bits. In a right arithmetic shift, the https://en.wikipedia.org/wiki/Sign_bit sign bit is shifted in on the left, thus preserving the sign of the operand.\nSee https://en.wikipedia.org/wiki/Bitwise_operation#Arithmetic_shift Bitwise operation for more details.',
                    arguments={
                        "value": """The value you want to perform the arithmetic shift on. """,
                        "n": """The amount of positions to shift the value by. """
                    },
                    result='returns the arithmetic shifted value as integer.',
                ),
                url='bitArShift',
            )
        ],
    ),
    CompoundFunctionData(
        server=[
            FunctionData(
                signature=FunctionSignature(
                    name='bitExtract',
                    return_types=FunctionReturnTypes(
                        return_types=[
                            FunctionType(
                                names=['uint'],
                                is_optional=False,
                            )
                        ],
                        variable_length=False,
                    ),
                    arguments=FunctionArgumentValues(
                        arguments=[
                            [
                                FunctionArgument(
                                    name='var',
                                    argument_type=FunctionType(
                                        names=['uint'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ],
                            [
                                FunctionArgument(
                                    name='field',
                                    argument_type=FunctionType(
                                        names=['int'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ],
                            [
                                FunctionArgument(
                                    name='width',
                                    argument_type=FunctionType(
                                        names=['int'],
                                        is_optional=True,
                                    ),
                                    default_value='1',
                                )
                            ]
                        ],
                        variable_length=False,
                    ),
                    generic_types=[],
                ),
                docs=FunctionDoc(
                    description='This function returns the unsigned number formed by the bits field to field + width - 1 (range: 0-31).',
                    arguments={
                        "var": """The value """,
                        "field": """The field number """,
                        "width": """Number of bits to extract """
                    },
                    result='returns the extracted value/bit sequence.',
                ),
                url='bitExtract',
            )
        ],
        client=[
            FunctionData(
                signature=FunctionSignature(
                    name='bitExtract',
                    return_types=FunctionReturnTypes(
                        return_types=[
                            FunctionType(
                                names=['uint'],
                                is_optional=False,
                            )
                        ],
                        variable_length=False,
                    ),
                    arguments=FunctionArgumentValues(
                        arguments=[
                            [
                                FunctionArgument(
                                    name='var',
                                    argument_type=FunctionType(
                                        names=['uint'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ],
                            [
                                FunctionArgument(
                                    name='field',
                                    argument_type=FunctionType(
                                        names=['int'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ],
                            [
                                FunctionArgument(
                                    name='width',
                                    argument_type=FunctionType(
                                        names=['int'],
                                        is_optional=True,
                                    ),
                                    default_value='1',
                                )
                            ]
                        ],
                        variable_length=False,
                    ),
                    generic_types=[],
                ),
                docs=FunctionDoc(
                    description='This function returns the unsigned number formed by the bits field to field + width - 1 (range: 0-31).',
                    arguments={
                        "var": """The value """,
                        "field": """The field number """,
                        "width": """Number of bits to extract """
                    },
                    result='returns the extracted value/bit sequence.',
                ),
                url='bitExtract',
            )
        ],
    ),
    CompoundFunctionData(
        server=[
            FunctionData(
                signature=FunctionSignature(
                    name='bitLRotate',
                    return_types=FunctionReturnTypes(
                        return_types=[
                            FunctionType(
                                names=['int'],
                                is_optional=False,
                            )
                        ],
                        variable_length=False,
                    ),
                    arguments=FunctionArgumentValues(
                        arguments=[
                            [
                                FunctionArgument(
                                    name='value',
                                    argument_type=FunctionType(
                                        names=['int'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ],
                            [
                                FunctionArgument(
                                    name='n',
                                    argument_type=FunctionType(
                                        names=['int'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ]
                        ],
                        variable_length=False,
                    ),
                    generic_types=[],
                ),
                docs=FunctionDoc(
                    description='This functions performs a bitwise circular left-rotation on the integer value by integer n positions.\nSee https://en.wikipedia.org/wiki/Bitwise_operation#Rotate_no_carry Bitwise operation for more details.',
                    arguments={
                        "value": """The value you want to perform the rotation on. """,
                        "n": """The amount of positions to rotate the value by. """
                    },
                    result='returns the circular left-rotated value as integer.',
                ),
                url='bitLRotate',
            )
        ],
        client=[
            FunctionData(
                signature=FunctionSignature(
                    name='bitLRotate',
                    return_types=FunctionReturnTypes(
                        return_types=[
                            FunctionType(
                                names=['int'],
                                is_optional=False,
                            )
                        ],
                        variable_length=False,
                    ),
                    arguments=FunctionArgumentValues(
                        arguments=[
                            [
                                FunctionArgument(
                                    name='value',
                                    argument_type=FunctionType(
                                        names=['int'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ],
                            [
                                FunctionArgument(
                                    name='n',
                                    argument_type=FunctionType(
                                        names=['int'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ]
                        ],
                        variable_length=False,
                    ),
                    generic_types=[],
                ),
                docs=FunctionDoc(
                    description='This functions performs a bitwise circular left-rotation on the integer value by integer n positions.\nSee https://en.wikipedia.org/wiki/Bitwise_operation#Rotate_no_carry Bitwise operation for more details.',
                    arguments={
                        "value": """The value you want to perform the rotation on. """,
                        "n": """The amount of positions to rotate the value by. """
                    },
                    result='returns the circular left-rotated value as integer.',
                ),
                url='bitLRotate',
            )
        ],
    ),
    CompoundFunctionData(
        server=[
            FunctionData(
                signature=FunctionSignature(
                    name='bitLShift',
                    return_types=FunctionReturnTypes(
                        return_types=[
                            FunctionType(
                                names=['int'],
                                is_optional=False,
                            )
                        ],
                        variable_length=False,
                    ),
                    arguments=FunctionArgumentValues(
                        arguments=[
                            [
                                FunctionArgument(
                                    name='value',
                                    argument_type=FunctionType(
                                        names=['int'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ],
                            [
                                FunctionArgument(
                                    name='n',
                                    argument_type=FunctionType(
                                        names=['int'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ]
                        ],
                        variable_length=False,
                    ),
                    generic_types=[],
                ),
                docs=FunctionDoc(
                    description='This functions performs a logical left shift on the integer value by integer n positions. In a logical shift, zeros are shifted in to replace the discarded bits.\nSee https://en.wikipedia.org/wiki/Bitwise_operation#Logical_shift Bitwise operation for more details.',
                    arguments={
                        "value": """The value you want to perform the shift on. """,
                        "n": """The amount of positions to shift the value by. """
                    },
                    result='returns the logical left shifted value as integer.',
                ),
                url='bitLShift',
            )
        ],
        client=[
            FunctionData(
                signature=FunctionSignature(
                    name='bitLShift',
                    return_types=FunctionReturnTypes(
                        return_types=[
                            FunctionType(
                                names=['int'],
                                is_optional=False,
                            )
                        ],
                        variable_length=False,
                    ),
                    arguments=FunctionArgumentValues(
                        arguments=[
                            [
                                FunctionArgument(
                                    name='value',
                                    argument_type=FunctionType(
                                        names=['int'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ],
                            [
                                FunctionArgument(
                                    name='n',
                                    argument_type=FunctionType(
                                        names=['int'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ]
                        ],
                        variable_length=False,
                    ),
                    generic_types=[],
                ),
                docs=FunctionDoc(
                    description='This functions performs a logical left shift on the integer value by integer n positions. In a logical shift, zeros are shifted in to replace the discarded bits.\nSee https://en.wikipedia.org/wiki/Bitwise_operation#Logical_shift Bitwise operation for more details.',
                    arguments={
                        "value": """The value you want to perform the shift on. """,
                        "n": """The amount of positions to shift the value by. """
                    },
                    result='returns the logical left shifted value as integer.',
                ),
                url='bitLShift',
            )
        ],
    ),
    CompoundFunctionData(
        server=[
            FunctionData(
                signature=FunctionSignature(
                    name='bitNot',
                    return_types=FunctionReturnTypes(
                        return_types=[
                            FunctionType(
                                names=['uint'],
                                is_optional=False,
                            )
                        ],
                        variable_length=False,
                    ),
                    arguments=FunctionArgumentValues(
                        arguments=[
                            [
                                FunctionArgument(
                                    name='var',
                                    argument_type=FunctionType(
                                        names=['uint'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ]
                        ],
                        variable_length=False,
                    ),
                    generic_types=[],
                ),
                docs=FunctionDoc(
                    description='This function performs a bitwise NOT on an (unsigned) 32-bit Int|integer. See http://en.wikipedia.org/wiki/Bitwise_operation#NOT Bitwise operation for more details.',
                    arguments={
                        "var": """The value you want to perform a bitwise NOT on """
                    },
                    result='returns the value on which the operation has been performed.',
                ),
                url='bitNot',
            )
        ],
        client=[
            FunctionData(
                signature=FunctionSignature(
                    name='bitNot',
                    return_types=FunctionReturnTypes(
                        return_types=[
                            FunctionType(
                                names=['uint'],
                                is_optional=False,
                            )
                        ],
                        variable_length=False,
                    ),
                    arguments=FunctionArgumentValues(
                        arguments=[
                            [
                                FunctionArgument(
                                    name='var',
                                    argument_type=FunctionType(
                                        names=['uint'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ]
                        ],
                        variable_length=False,
                    ),
                    generic_types=[],
                ),
                docs=FunctionDoc(
                    description='This function performs a bitwise NOT on an (unsigned) 32-bit Int|integer. See http://en.wikipedia.org/wiki/Bitwise_operation#NOT Bitwise operation for more details.',
                    arguments={
                        "var": """The value you want to perform a bitwise NOT on """
                    },
                    result='returns the value on which the operation has been performed.',
                ),
                url='bitNot',
            )
        ],
    ),
    CompoundFunctionData(
        server=[
            FunctionData(
                signature=FunctionSignature(
                    name='bitOr',
                    return_types=FunctionReturnTypes(
                        return_types=[
                            FunctionType(
                                names=['uint'],
                                is_optional=False,
                            )
                        ],
                        variable_length=False,
                    ),
                    arguments=FunctionArgumentValues(
                        arguments=[
                            [
                                FunctionArgument(
                                    name='var1',
                                    argument_type=FunctionType(
                                        names=['uint'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ],
                            [
                                FunctionArgument(
                                    name='var2',
                                    argument_type=FunctionType(
                                        names=['uint'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ]
                        ],
                        variable_length=True,
                    ),
                    generic_types=[],
                ),
                docs=FunctionDoc(
                    description='This function performs a bitwise OR-conjunction on two or more (unsigned) 32-bit Int|integers. See http://en.wikipedia.org/wiki/Bitwise_operation#OR Bitwise operation for more details.',
                    arguments={
                        "varN": """The value you want to perform an OR-conjunction on """
                    },
                    result='returns the conjuncted value.',
                ),
                url='bitOr',
            )
        ],
        client=[
            FunctionData(
                signature=FunctionSignature(
                    name='bitOr',
                    return_types=FunctionReturnTypes(
                        return_types=[
                            FunctionType(
                                names=['uint'],
                                is_optional=False,
                            )
                        ],
                        variable_length=False,
                    ),
                    arguments=FunctionArgumentValues(
                        arguments=[
                            [
                                FunctionArgument(
                                    name='var1',
                                    argument_type=FunctionType(
                                        names=['uint'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ],
                            [
                                FunctionArgument(
                                    name='var2',
                                    argument_type=FunctionType(
                                        names=['uint'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ]
                        ],
                        variable_length=True,
                    ),
                    generic_types=[],
                ),
                docs=FunctionDoc(
                    description='This function performs a bitwise OR-conjunction on two or more (unsigned) 32-bit Int|integers. See http://en.wikipedia.org/wiki/Bitwise_operation#OR Bitwise operation for more details.',
                    arguments={
                        "varN": """The value you want to perform an OR-conjunction on """
                    },
                    result='returns the conjuncted value.',
                ),
                url='bitOr',
            )
        ],
    ),
    CompoundFunctionData(
        server=[
            FunctionData(
                signature=FunctionSignature(
                    name='bitReplace',
                    return_types=FunctionReturnTypes(
                        return_types=[
                            FunctionType(
                                names=['uint'],
                                is_optional=False,
                            )
                        ],
                        variable_length=False,
                    ),
                    arguments=FunctionArgumentValues(
                        arguments=[
                            [
                                FunctionArgument(
                                    name='var',
                                    argument_type=FunctionType(
                                        names=['uint'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ],
                            [
                                FunctionArgument(
                                    name='replaceValue',
                                    argument_type=FunctionType(
                                        names=['uint'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ],
                            [
                                FunctionArgument(
                                    name='field',
                                    argument_type=FunctionType(
                                        names=['int'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ],
                            [
                                FunctionArgument(
                                    name='width',
                                    argument_type=FunctionType(
                                        names=['int'],
                                        is_optional=True,
                                    ),
                                    default_value='1',
                                )
                            ]
                        ],
                        variable_length=False,
                    ),
                    generic_types=[],
                ),
                docs=FunctionDoc(
                    description='This function returns the unsigned number formed by var value with replacement specified at bits field to field + width - 1',
                    arguments={
                        "var": """The value """,
                        "replaceValue": """The replaceValue """,
                        "field": """The field number """,
                        "width": """Number of bits to extract """
                    },
                    result='returns the replaced value/bit sequence.',
                ),
                url='bitReplace',
            )
        ],
        client=[
            FunctionData(
                signature=FunctionSignature(
                    name='bitReplace',
                    return_types=FunctionReturnTypes(
                        return_types=[
                            FunctionType(
                                names=['uint'],
                                is_optional=False,
                            )
                        ],
                        variable_length=False,
                    ),
                    arguments=FunctionArgumentValues(
                        arguments=[
                            [
                                FunctionArgument(
                                    name='var',
                                    argument_type=FunctionType(
                                        names=['uint'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ],
                            [
                                FunctionArgument(
                                    name='replaceValue',
                                    argument_type=FunctionType(
                                        names=['uint'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ],
                            [
                                FunctionArgument(
                                    name='field',
                                    argument_type=FunctionType(
                                        names=['int'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ],
                            [
                                FunctionArgument(
                                    name='width',
                                    argument_type=FunctionType(
                                        names=['int'],
                                        is_optional=True,
                                    ),
                                    default_value='1',
                                )
                            ]
                        ],
                        variable_length=False,
                    ),
                    generic_types=[],
                ),
                docs=FunctionDoc(
                    description='This function returns the unsigned number formed by var value with replacement specified at bits field to field + width - 1',
                    arguments={
                        "var": """The value """,
                        "replaceValue": """The replaceValue """,
                        "field": """The field number """,
                        "width": """Number of bits to extract """
                    },
                    result='returns the replaced value/bit sequence.',
                ),
                url='bitReplace',
            )
        ],
    ),
    CompoundFunctionData(
        server=[
            FunctionData(
                signature=FunctionSignature(
                    name='bitRRotate',
                    return_types=FunctionReturnTypes(
                        return_types=[
                            FunctionType(
                                names=['int'],
                                is_optional=False,
                            )
                        ],
                        variable_length=False,
                    ),
                    arguments=FunctionArgumentValues(
                        arguments=[
                            [
                                FunctionArgument(
                                    name='value',
                                    argument_type=FunctionType(
                                        names=['int'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ],
                            [
                                FunctionArgument(
                                    name='n',
                                    argument_type=FunctionType(
                                        names=['int'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ]
                        ],
                        variable_length=False,
                    ),
                    generic_types=[],
                ),
                docs=FunctionDoc(
                    description='This functions performs a bitwise circular right-rotation on the integer value by integer n positions.\nSee https://en.wikipedia.org/wiki/Bitwise_operation#Rotate_no_carry Bitwise operation for more details.',
                    arguments={
                        "value": """The value you want to perform the rotation on. """,
                        "n": """The amount of positions to rotate the value by. """
                    },
                    result='returns the circular right-rotated value as integer.',
                ),
                url='bitRRotate',
            )
        ],
        client=[
            FunctionData(
                signature=FunctionSignature(
                    name='bitRRotate',
                    return_types=FunctionReturnTypes(
                        return_types=[
                            FunctionType(
                                names=['int'],
                                is_optional=False,
                            )
                        ],
                        variable_length=False,
                    ),
                    arguments=FunctionArgumentValues(
                        arguments=[
                            [
                                FunctionArgument(
                                    name='value',
                                    argument_type=FunctionType(
                                        names=['int'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ],
                            [
                                FunctionArgument(
                                    name='n',
                                    argument_type=FunctionType(
                                        names=['int'],
                                        is_optional=False,
                                    ),
                                    default_value=None,
                                )
                            ]
                        ],
                        variable_length=False,
                    ),
                    generic_types=[],
                ),
                docs=FunctionDoc(
                    description='This functions performs a bitwise circular right-rotation on the integer value by integer n positions.\nSee https://en.wikipedia.org/wiki/Bitwise_operation#Rotate_no_carry Bitwise operation for more details.',
                    arguments={
                        "value": """The value you want to perform the rotation on. """,
                        "n": """The amount of positions to rotate the value by. """
                    },
                    result='returns the circular right-rotated value as integer.',
                ),
                url='bitRRotate',
            )
        ],
    ),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='bitRShift',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['int'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='value',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='n',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function performs a logical right shift on the integer value by integer n positions. In a logical shift, zeros are shifted in to replace the discarded bits.\nSee https://en.wikipedia.org/wiki/Bitwise_operation#Logical_shift Bitwise operation for more details.' ,
arguments={
"value": """The value you want to perform the shift on. """,
"n": """The amount of positions to shift the value by. """
},
result='returns the logical right shifted value as integer.' ,
),
url='bitRShift',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='bitRShift',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['int'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='value',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='n',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function performs a logical right shift on the integer value by integer n positions. In a logical shift, zeros are shifted in to replace the discarded bits.\nSee https://en.wikipedia.org/wiki/Bitwise_operation#Logical_shift Bitwise operation for more details.' ,
arguments={
"value": """The value you want to perform the shift on. """,
"n": """The amount of positions to shift the value by. """
},
result='returns the logical right shifted value as integer.' ,
),
url='bitRShift',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='bitTest',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='var1',
argument_type=FunctionType(
names=['uint'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='var2',
argument_type=FunctionType(
names=['uint'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=True,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function performs an AND-conjunction on two or more (unsigned) 32-bit integers and checks whether the conjuncted value is zero or not. See http://en.wikipedia.org/wiki/Bitwise_operation#AND Bitwise operation for more details.' ,
arguments={
"varN": """The value you want to perform the operation on (see above) """
},
result='returns true if the conjuncted value is not zero, false otherwise. if a bad argument was passed to bitTest, you will get nil.' ,
),
url='bitTest',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='bitTest',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='var1',
argument_type=FunctionType(
names=['uint'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='var2',
argument_type=FunctionType(
names=['uint'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=True,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function performs an AND-conjunction on two or more (unsigned) 32-bit integers and checks whether the conjuncted value is zero or not. See http://en.wikipedia.org/wiki/Bitwise_operation#AND Bitwise operation for more details.' ,
arguments={
"varN": """The value you want to perform the operation on (see above) """
},
result='returns true if the conjuncted value is not zero, false otherwise. if a bad argument was passed to bitTest, you will get nil.' ,
),
url='bitTest',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='bitXor',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['uint'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='var1',
argument_type=FunctionType(
names=['uint'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='var2',
argument_type=FunctionType(
names=['uint'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=True,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function performs a bitwise XOR-conjunction (exclusive OR) on two or more (unsigned) 32-bit integers. See http://en.wikipedia.org/wiki/Bitwise_operation#XOR Bitwise operation for more details.' ,
arguments={
"varN": """The value you want to perform a XOR-conjunction on """
},
result='returns the conjuncted value.' ,
),
url='bitXor',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='bitXor',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['uint'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='var1',
argument_type=FunctionType(
names=['uint'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='var2',
argument_type=FunctionType(
names=['uint'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=True,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function performs a bitwise XOR-conjunction (exclusive OR) on two or more (unsigned) 32-bit integers. See http://en.wikipedia.org/wiki/Bitwise_operation#XOR Bitwise operation for more details.' ,
arguments={
"varN": """The value you want to perform a XOR-conjunction on """
},
result='returns the conjuncted value.' ,
),
url='bitXor',
)
],
),
CompoundFunctionData(
server=[
],
client=[
FunctionData(
signature=FunctionSignature(
name='createTrayNotification',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='notificationText',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='iconType',
argument_type=FunctionType(
names=['string'],
is_optional=True,
),
default_value='"default"',
)
],
[
FunctionArgument(
name='useSound',
argument_type=FunctionType(
names=['bool'],
is_optional=True,
),
default_value='true',
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function creates a notification balloon on the desktop.' ,
arguments={
"notificationText": """The text to send in the notification. """,
"iconType": """The notification icon type. Possible values are: default (the MTA icon), info, warning, error """,
"useSound": """A boolean value indicating whether or not to play a sound when receiving the notification. """
},
result='returns true if the notification is correctly created, false otherwise.' ,
),
url='createTrayNotification',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='debugSleep',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='sleep',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='debugSleep freezes the client/server for the specified time. This means that all synchronization, rendering and script execution will stop except HTTP processing invoked by fetchRemote. This function only works if development mode is enabled by setDevelopmentMode and can be utilised to build a debugger that communicates via HTTP requests with the editor/IDE.' ,
arguments={
"sleep": """: An integer value in milliseconds. """
},
result='returns true if the development mode is enabled and arguments are correct, false otherwise.' ,
),
url='debugSleep',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='debugSleep',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='sleep',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='debugSleep freezes the client/server for the specified time. This means that all synchronization, rendering and script execution will stop except HTTP processing invoked by fetchRemote. This function only works if development mode is enabled by setDevelopmentMode and can be utilised to build a debugger that communicates via HTTP requests with the editor/IDE.' ,
arguments={
"sleep": """: An integer value in milliseconds. """
},
result='returns true if the development mode is enabled and arguments are correct, false otherwise.' ,
),
url='debugSleep',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='decodeString',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='algorithm',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='input',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='options',
argument_type=FunctionType(
names=['table'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='callback',
argument_type=FunctionType(
names=['function'],
is_optional=True,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function decodes an encoded string using the specified algorithm. The counterpart of this function is encodeString.' ,
arguments={
"algorithm": """The algorithm to use. """,
"input": """The input to decode. """,
"options": """A table with options and other necessary data for the algorithm, as detailed below. """,
"callback": """Providing a callback will run this function asynchronously; the arguments to the callback are the same as the returned values below. """
},
result='returns the decoded string if successful, false otherwise. if a callback was provided, the decoded string is argument to the callback.' ,
),
url='decodeString',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='decodeString',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='algorithm',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='input',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='options',
argument_type=FunctionType(
names=['table'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='callback',
argument_type=FunctionType(
names=['function'],
is_optional=True,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function decodes an encoded string using the specified algorithm. The counterpart of this function is encodeString.' ,
arguments={
"algorithm": """The algorithm to use. """,
"input": """The input to decode. """,
"options": """A table with options and other necessary data for the algorithm, as detailed below. """,
"callback": """Providing a callback will run this function asynchronously; the arguments to the callback are the same as the returned values below. """
},
result='returns the decoded string if successful, false otherwise. if a callback was provided, the decoded string is argument to the callback.' ,
),
url='decodeString',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='deref',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['mixed'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='reference',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function will take a reference obtained by the ref function and returns its Lua element.' ,
arguments={
"reference": """The valid reference, which you want to dereference """
},
result='returns mixed if the reference was valid. returns false if the reference was invalid.' ,
),
url='deref',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='deref',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['mixed'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='reference',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function will take a reference obtained by the ref function and returns its Lua element.' ,
arguments={
"reference": """The valid reference, which you want to dereference """
},
result='returns mixed if the reference was valid. returns false if the reference was invalid.' ,
),
url='deref',
)
],
),
CompoundFunctionData(
server=[
],
client=[
FunctionData(
signature=FunctionSignature(
name='downloadFile',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='fileName',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function ensures the requested resource file is correct and then triggers onClientFileDownloadComplete. If the file has been previously downloaded and the CRC matches, the file will not be downloaded again but onClientFileDownloadComplete will still run. The file should also be included in the resource meta.xml with the download attribute set to false, see meta.xml for more details.' ,
arguments={
"fileName": """: A string referencing the name of the file to download """
},
result='returns true if file download has been queued, false otherwise.' ,
),
url='downloadFile',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='encodeString',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='algorithm',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='input',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='options',
argument_type=FunctionType(
names=['table'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='callback',
argument_type=FunctionType(
names=['function'],
is_optional=True,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function encodes a string using the specified algorithm. The counterpart of this function is decodeString.' ,
arguments={
"algorithm": """The algorithm to use. """,
"input": """The input to encode. """,
"options": """A table with options and other necessary data for the algorithm, as detailed below. """,
"callback": """Providing a callback will run this function asynchronously; the arguments to the callback are the same as the returned values below. """
},
result='* tea\n** encodedstring: the encoded string if successful, false otherwise. if a callback was provided, true is returned immediately, and the encoded string is passed as an argument to the callback.\n* aes128\n** encodedstring: the encoded string if successful, false otherwise. if a callback was provided, true is returned immediately, and the encoded string is passed as an argument to the callback.\n** iv (https://en.wikipedia.org/wiki/initialization_vector initialization vector): this is a string generated by the encryption algorithm that is needed to decrypt the message by decodeString. if a callback was provided, true is returned immediately, and the iv is passed as an argument to the callback.' ,
),
url='encodeString',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='encodeString',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='algorithm',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='input',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='options',
argument_type=FunctionType(
names=['table'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='callback',
argument_type=FunctionType(
names=['function'],
is_optional=True,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function encodes a string using the specified algorithm. The counterpart of this function is decodeString.' ,
arguments={
"algorithm": """The algorithm to use. """,
"input": """The input to encode. """,
"options": """A table with options and other necessary data for the algorithm, as detailed below. """,
"callback": """Providing a callback will run this function asynchronously; the arguments to the callback are the same as the returned values below. """
},
result='* tea\n** encodedstring: the encoded string if successful, false otherwise. if a callback was provided, true is returned immediately, and the encoded string is passed as an argument to the callback.\n* aes128\n** encodedstring: the encoded string if successful, false otherwise. if a callback was provided, true is returned immediately, and the encoded string is passed as an argument to the callback.\n** iv (https://en.wikipedia.org/wiki/initialization_vector initialization vector): this is a string generated by the encryption algorithm that is needed to decrypt the message by decodeString. if a callback was provided, true is returned immediately, and the iv is passed as an argument to the callback.' ,
),
url='encodeString',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='fromJSON',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['var'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='json',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function parses a JSON formatted string into variables. You can use toJSON to encode variables into a JSON string that can be read by this function.' ,
arguments={
"json": """A JSON formatted string """
},
result='returns variables read from the json string.\nnote: indices of a json object such as 1: cat are returned as strings, not as integers.' ,
),
url='fromJSON',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='fromJSON',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['var'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='json',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function parses a JSON formatted string into variables. You can use toJSON to encode variables into a JSON string that can be read by this function.' ,
arguments={
"json": """A JSON formatted string """
},
result='returns variables read from the json string.\nnote: indices of a json object such as 1: cat are returned as strings, not as integers.' ,
),
url='fromJSON',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='getColorFromString',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['int'],
is_optional=False,
),
FunctionType(
names=['int'],
is_optional=False,
),
FunctionType(
names=['int'],
is_optional=False,
),
FunctionType(
names=['int'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='theColor',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function will extract Red, Green, Blue and Alpha values from a hex string you provide it. These strings follow the same format as used in HTML, with the addition of Alpha values.' ,
arguments={
"theColor": """A string containing a valid color code.
:Valid strings are: """,
"#RRGGBB": """: Colors specified, Alpha assumed to be 255. """,
"#RRGGBBAA": """: All values specified. """,
"#RGB": """: Shortened form, will be expanded internally to RRGGBB, as such it provides a smaller number of colors. """,
"#RGBA": """: As above, shortened - each character is duplicated.
:For example: """,
"#FF00FF": """is Red: 255, Green: 0, Blue: 255, Alpha: 255 """,
"#F0F": """is Red: 255, Green: 0, Blue: 255, Alpha: 255 (the same as the example above) """,
"#34455699": """is Red: 52, Green: 69, Blue: 86, Alpha: 153
All colors used must begin with a # sign. """
},
result='returns four integers in rgba format, with a maximum value of 255 for each. each stands for red, green, blue, and alpha. alpha decides transparency where 255 is opaque and 0 is transparent. false is returned if the string passed is invalid (for example, is missing the preceding # sign).' ,
),
url='getColorFromString',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='getColorFromString',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['int'],
is_optional=False,
),
FunctionType(
names=['int'],
is_optional=False,
),
FunctionType(
names=['int'],
is_optional=False,
),
FunctionType(
names=['int'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='theColor',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function will extract Red, Green, Blue and Alpha values from a hex string you provide it. These strings follow the same format as used in HTML, with the addition of Alpha values.' ,
arguments={
"theColor": """A string containing a valid color code.
:Valid strings are: """,
"#RRGGBB": """: Colors specified, Alpha assumed to be 255. """,
"#RRGGBBAA": """: All values specified. """,
"#RGB": """: Shortened form, will be expanded internally to RRGGBB, as such it provides a smaller number of colors. """,
"#RGBA": """: As above, shortened - each character is duplicated.
:For example: """,
"#FF00FF": """is Red: 255, Green: 0, Blue: 255, Alpha: 255 """,
"#F0F": """is Red: 255, Green: 0, Blue: 255, Alpha: 255 (the same as the example above) """,
"#34455699": """is Red: 52, Green: 69, Blue: 86, Alpha: 153
All colors used must begin with a # sign. """
},
result='returns four integers in rgba format, with a maximum value of 255 for each. each stands for red, green, blue, and alpha. alpha decides transparency where 255 is opaque and 0 is transparent. false is returned if the string passed is invalid (for example, is missing the preceding # sign).' ,
),
url='getColorFromString',
)
],
),
CompoundFunctionData(
server=[
],
client=[
FunctionData(
signature=FunctionSignature(
name='getDevelopmentMode',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function is used to get the development mode of the client. For more information see setDevelopmentMode' ,
arguments={
},
result='returns true if the development mode is on, false if off.' ,
),
url='getDevelopmentMode',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='getDistanceBetweenPoints2D',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['float'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='x1',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='y1',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='x2',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='y2',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function returns the distance between two 2-dimensional points using the Pythagorean theorem.' ,
arguments={
"x1": """: The X position of the first point """,
"y1": """: The Y position of the first point """,
"x2": """: The X position of the second point """,
"y2": """: The Y position of the second point """
},
result='returns a float containing the 2d distance between the two points. returns false if invalid parameters are passed.' ,
),
url='getDistanceBetweenPoints2D',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='getDistanceBetweenPoints2D',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['float'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='x1',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='y1',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='x2',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='y2',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function returns the distance between two 2-dimensional points using the Pythagorean theorem.' ,
arguments={
"x1": """: The X position of the first point """,
"y1": """: The Y position of the first point """,
"x2": """: The X position of the second point """,
"y2": """: The Y position of the second point """
},
result='returns a float containing the 2d distance between the two points. returns false if invalid parameters are passed.' ,
),
url='getDistanceBetweenPoints2D',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='getDistanceBetweenPoints3D',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['float'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='x1',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='y1',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='z1',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='x2',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='y2',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='z2',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function returns the distance between two 3-dimensional points using the Pythagorean theorem.' ,
arguments={
"x1": """: The X position of the first point """,
"y1": """: The Y position of the first point """,
"z1": """: The Z position of the first point """,
"x2": """: The X position of the second point """,
"y2": """: The Y position of the second point """,
"z2": """: The Z position of the second point """
},
result='returns the distance between the two points as a float. returns false if an argument passed was invalid.' ,
),
url='getDistanceBetweenPoints3D',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='getDistanceBetweenPoints3D',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['float'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='x1',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='y1',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='z1',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='x2',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='y2',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='z2',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function returns the distance between two 3-dimensional points using the Pythagorean theorem.' ,
arguments={
"x1": """: The X position of the first point """,
"y1": """: The Y position of the first point """,
"z1": """: The Z position of the first point """,
"x2": """: The X position of the second point """,
"y2": """: The Y position of the second point """,
"z2": """: The Z position of the second point """
},
result='returns the distance between the two points as a float. returns false if an argument passed was invalid.' ,
),
url='getDistanceBetweenPoints3D',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='getEasingValue',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['float'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='fProgress',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='strEasingType',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='fEasingPeriod',
argument_type=FunctionType(
names=['float'],
is_optional=True,
),
default_value=None,
)
],
[
FunctionArgument(
name='fEasingAmplitude',
argument_type=FunctionType(
names=['float'],
is_optional=True,
),
default_value=None,
)
],
[
FunctionArgument(
name='fEasingOvershoot',
argument_type=FunctionType(
names=['float'],
is_optional=True,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='Used for custom Lua-based interpolation: returns the easing value (animation time to use in your custom interpolation) given a progress and an Easing|easing function.\nIn most cases, either moveObject or interpolateBetween can do the job. getEasingValue is only provided in case you want to do your own custom interpolation based on easing.' ,
arguments={
"fProgress": """float between 0 and 1 indicating the interpolation progress (0 at the beginning of the interpolation, 1 at the end). """,
"strEasingType": """the Easing|easing function to use for the interpolation """,
"fEasingPeriod": """the period of the Easing|easing function (only some easing functions use this parameter) """,
"fEasingAmplitude": """the amplitude of the Easing|easing function (only some easing functions use this parameter) """,
"fEasingOvershoot": """the overshoot of the Easing|easing function (only some easing functions use this parameter) """
},
result='returns fanimationtime, the animation time given by the easing function (can be < 0 or > 1 since some Easing|easing functions have overshoot or bounce/spring effects), false otherwise (error in parameters).' ,
),
url='getEasingValue',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='getEasingValue',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['float'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='fProgress',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='strEasingType',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='fEasingPeriod',
argument_type=FunctionType(
names=['float'],
is_optional=True,
),
default_value=None,
)
],
[
FunctionArgument(
name='fEasingAmplitude',
argument_type=FunctionType(
names=['float'],
is_optional=True,
),
default_value=None,
)
],
[
FunctionArgument(
name='fEasingOvershoot',
argument_type=FunctionType(
names=['float'],
is_optional=True,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='Used for custom Lua-based interpolation: returns the easing value (animation time to use in your custom interpolation) given a progress and an Easing|easing function.\nIn most cases, either moveObject or interpolateBetween can do the job. getEasingValue is only provided in case you want to do your own custom interpolation based on easing.' ,
arguments={
"fProgress": """float between 0 and 1 indicating the interpolation progress (0 at the beginning of the interpolation, 1 at the end). """,
"strEasingType": """the Easing|easing function to use for the interpolation """,
"fEasingPeriod": """the period of the Easing|easing function (only some easing functions use this parameter) """,
"fEasingAmplitude": """the amplitude of the Easing|easing function (only some easing functions use this parameter) """,
"fEasingOvershoot": """the overshoot of the Easing|easing function (only some easing functions use this parameter) """
},
result='returns fanimationtime, the animation time given by the easing function (can be < 0 or > 1 since some Easing|easing functions have overshoot or bounce/spring effects), false otherwise (error in parameters).' ,
),
url='getEasingValue',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='getFPSLimit',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['int'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function retrieves the maximum FPS (frames per second, see http://en.wikipedia.org/wiki/Frame_rate) that players on the server can run their game at.' ,
arguments={
},
result='returns an integer between 25 and 100 of the maximum fps that players can run their game at.' ,
),
url='getFPSLimit',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='getFPSLimit',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['int'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function retrieves the maximum FPS (frames per second, see http://en.wikipedia.org/wiki/Frame_rate) that players on the server can run their game at.' ,
arguments={
},
result='returns an integer between 25 and 100 of the maximum fps that players can run their game at.' ,
),
url='getFPSLimit',
)
],
),
CompoundFunctionData(
server=[
],
client=[
FunctionData(
signature=FunctionSignature(
name='getKeyboardLayout',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['table'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='' ,
arguments={
},
result='returns a table with keyboard layout properties:\n{| class=wikitable style=cellpadding: 10px;\n|-\n! property || values and description\n|-\n| <code>readinglayout</code> ||\n{| class=prettytable\n|-\n| <code>ltr</code> || left to right (english)\n|-\n| <code>rtl</code> || right to left (arabic, hebrew)\n|-\n| <code>ttb-rtl-ltr</code> || either read vertically from top to bottom with columns going from right to left, or read in horizontal rows from left to right, as for the japanese (japan) locale.\n|-\n| <code>ttb-ltr</code> || read vertically from top to bottom with columns going from left to right, as for the mongolian (mongolian) locale.\n|}\n|}' ,
),
url='getKeyboardLayout',
)
],
),
CompoundFunctionData(
server=[
],
client=[
FunctionData(
signature=FunctionSignature(
name='getLocalization',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['table'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function gets the player\'s localization setting as set in the MTA client.' ,
arguments={
},
result='returns a table with the following entries:\n*code : the language code (eg. en_us for english (united states) or ar for arabic).\n*name : the name of the language (eg. english (united states) or arabic).' ,
),
url='getLocalization',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='getNetworkStats',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['table'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='thePlayer',
argument_type=FunctionType(
names=['element'],
is_optional=True,
),
default_value='nil',
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function returns network status information.' ,
arguments={
},
result='' ,
),
url='getNetworkStats',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='getNetworkStats',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['table'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function returns network status information.' ,
arguments={
},
result='' ,
),
url='getNetworkStats',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='getNetworkUsageData',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['table'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function returns a table containing network usage information about inbound and outbound packets.' ,
arguments={
},
result='returns a table with two fields: in and out. each of these contains a table with two fields: bits and count. each of these contains a table with 256 numeric fields ranging from 0 to 255, containing the appropriate network usage data for that packet id.' ,
),
url='getNetworkUsageData',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='getNetworkUsageData',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['table'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function returns a table containing network usage information about inbound and outbound packets.' ,
arguments={
},
result='returns a table with two fields: in and out. each of these contains a table with two fields: bits and count. each of these contains a table with 256 numeric fields ranging from 0 to 255, containing the appropriate network usage data for that packet id.' ,
),
url='getNetworkUsageData',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='getPerformanceStats',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['table'],
is_optional=False,
),
FunctionType(
names=['table'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='category',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='options',
argument_type=FunctionType(
names=['string'],
is_optional=True,
),
default_value='""',
)
],
[
FunctionArgument(
name='filter',
argument_type=FunctionType(
names=['string'],
is_optional=True,
),
default_value='""',
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function returns performance information.' ,
arguments={
"category": """Performance statistics category. If an empty string is given, a list of all categories is returned. See categories for more information. """,
"options": """Category-specific comma-separated options. All categories support the h option for help. """,
"filter": """Case-sensitive filter used to select returned rows. Only the name column is filtered. """
},
result='returns two tables. first contains column names. the second contains result rows. each row is table of cells.' ,
),
url='getPerformanceStats',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='getPerformanceStats',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['table'],
is_optional=False,
),
FunctionType(
names=['table'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='category',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='options',
argument_type=FunctionType(
names=['string'],
is_optional=True,
),
default_value='""',
)
],
[
FunctionArgument(
name='filter',
argument_type=FunctionType(
names=['string'],
is_optional=True,
),
default_value='""',
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function returns performance information.' ,
arguments={
"category": """Performance statistics category. If an empty string is given, a list of all categories is returned. See categories for more information. """,
"options": """Category-specific comma-separated options. All categories support the h option for help. """,
"filter": """Case-sensitive filter used to select returned rows. Only the name column is filtered. """
},
result='returns two tables. first contains column names. the second contains result rows. each row is table of cells.' ,
),
url='getPerformanceStats',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='getRealTime',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['table'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='seconds',
argument_type=FunctionType(
names=['int'],
is_optional=True,
),
default_value='current',
)
],
[
FunctionArgument(
name='localTime',
argument_type=FunctionType(
names=['bool'],
is_optional=True,
),
default_value='true',
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function gets the server or client real time (if used client-side, it returns the time as set on the client\'s computer) and returns it in a table. If you want to get the in-game time (shown on GTA\'s clock) use getTime.' ,
arguments={
"seconds": """A count in seconds from the year 1970. Useful for storing points in time, or for retrieving time information for getBanTime. The valid range of this argument is 0 to 32,000,000,000 """,
"localTime": """Set to true to adjust for the locally set timezone. """
},
result='returns a table of substrings with different time format or false if the seconds argument is out of range.\n{| border=2 cellpadding=2 cellspacing=0 style=margin: 1em 1em 1em 0; background: #f9f9f9; border: 1px #aaa solid; border-collapse: collapse; font-size: 95%;\n|member\n|meaning\n|range\n|-\n|second\n|seconds after the minute\n|0-61*\n|-\n|minute\n|minutes after the hour\n|0-59\n|-\n|hour\n|hours since midnight\n|0-23\n|-\n|monthday\n|day of the month\n|1-31\n|-\n|month\n|months since january\n|0-11\n|-\n|year\n|years since 1900\n|-\n|weekday\n|days since sunday\n|0-6\n|-\n|yearday\n|days since january 1\n|0-365\n|-\n|isdst\n|daylight saving time flag\n|-\n|timestamp\n|seconds since 1970 (ignoring set timezone)\n|\n|}\n* second is generally 0-59. extra range to accommodate for leap seconds in certain systems.' ,
),
url='getRealTime',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='getRealTime',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['table'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='seconds',
argument_type=FunctionType(
names=['int'],
is_optional=True,
),
default_value='current',
)
],
[
FunctionArgument(
name='localTime',
argument_type=FunctionType(
names=['bool'],
is_optional=True,
),
default_value='true',
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function gets the server or client real time (if used client-side, it returns the time as set on the client\'s computer) and returns it in a table. If you want to get the in-game time (shown on GTA\'s clock) use getTime.' ,
arguments={
"seconds": """A count in seconds from the year 1970. Useful for storing points in time, or for retrieving time information for getBanTime. The valid range of this argument is 0 to 32,000,000,000 """,
"localTime": """Set to true to adjust for the locally set timezone. """
},
result='returns a table of substrings with different time format or false if the seconds argument is out of range.\n{| border=2 cellpadding=2 cellspacing=0 style=margin: 1em 1em 1em 0; background: #f9f9f9; border: 1px #aaa solid; border-collapse: collapse; font-size: 95%;\n|member\n|meaning\n|range\n|-\n|second\n|seconds after the minute\n|0-61*\n|-\n|minute\n|minutes after the hour\n|0-59\n|-\n|hour\n|hours since midnight\n|0-23\n|-\n|monthday\n|day of the month\n|1-31\n|-\n|month\n|months since january\n|0-11\n|-\n|year\n|years since 1900\n|-\n|weekday\n|days since sunday\n|0-6\n|-\n|yearday\n|days since january 1\n|0-365\n|-\n|isdst\n|daylight saving time flag\n|-\n|timestamp\n|seconds since 1970 (ignoring set timezone)\n|\n|}\n* second is generally 0-59. extra range to accommodate for leap seconds in certain systems.' ,
),
url='getRealTime',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='getServerConfigSetting',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='name',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function retrieves server settings which are usually stored in the mtaserver.conf file.\nAvailable in 1.1 and onwards' ,
arguments={
"name": """The name of the setting (setting names can be found Server_mtaserver.conf|here) """
},
result='returns a string containing the current value for the named setting, or false if the setting does not exist.<br>\nif the setting name is serverip, it may return the string auto on local servers.' ,
),
url='getServerConfigSetting',
)
],
client=[
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='getTickCount',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['int'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function returns the amount of time that your system has been running, in milliseconds. By comparing two values of getTickCount, you can determine how much time has passed (in milliseconds) between two events. This could be used to determine how efficient your code is, or to time how long a player takes to complete a task.' ,
arguments={
},
result='returns an integer containing the number of milliseconds since the system the server is running on started. this has the potential to wrap around.' ,
),
url='getTickCount',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='getTickCount',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['int'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function returns the amount of time that your system has been running, in milliseconds. By comparing two values of getTickCount, you can determine how much time has passed (in milliseconds) between two events. This could be used to determine how efficient your code is, or to time how long a player takes to complete a task.' ,
arguments={
},
result='returns an integer containing the number of milliseconds since the system the server is running on started. this has the potential to wrap around.' ,
),
url='getTickCount',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='getTimerDetails',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['int'],
is_optional=False,
),
FunctionType(
names=['int'],
is_optional=False,
),
FunctionType(
names=['int'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='theTimer',
argument_type=FunctionType(
names=['timer'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function is for getting the details of a running timer.' ,
arguments={
"theTimer": """A timer element. """
},
result='* integer one represents the time left in milliseconds (1000th of a second) of the current time left in the loop.\n* integer two represents the number of times the timer has left to execute.\n* integer three represents the time interval of the timer.\n* returns false if the timer doesn\'t exist or has stopped running. also, debugscript will say bad argument @ gettimerdetails. to prevent this, you can check if the timer exists with istimer().' ,
),
url='getTimerDetails',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='getTimerDetails',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['int'],
is_optional=False,
),
FunctionType(
names=['int'],
is_optional=False,
),
FunctionType(
names=['int'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='theTimer',
argument_type=FunctionType(
names=['timer'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function is for getting the details of a running timer.' ,
arguments={
"theTimer": """A timer element. """
},
result='* integer one represents the time left in milliseconds (1000th of a second) of the current time left in the loop.\n* integer two represents the number of times the timer has left to execute.\n* integer three represents the time interval of the timer.\n* returns false if the timer doesn\'t exist or has stopped running. also, debugscript will say bad argument @ gettimerdetails. to prevent this, you can check if the timer exists with istimer().' ,
),
url='getTimerDetails',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='getTimers',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['table'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='theTime',
argument_type=FunctionType(
names=['int'],
is_optional=True,
),
default_value='nil',
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function returns a table of all active timers that the resource that calls it has created. Alternatively, only the timers with a remaining time less than or equal to a certain value can be retrieved.' ,
arguments={
"theTime": """The maximum time left (in milliseconds) on the timers you wish to retrieve. """
},
result='returns a table of all the active timers.' ,
),
url='getTimers',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='getTimers',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['table'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='theTime',
argument_type=FunctionType(
names=['int'],
is_optional=True,
),
default_value='nil',
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function returns a table of all active timers that the resource that calls it has created. Alternatively, only the timers with a remaining time less than or equal to a certain value can be retrieved.' ,
arguments={
"theTime": """The maximum time left (in milliseconds) on the timers you wish to retrieve. """
},
result='returns a table of all the active timers.' ,
),
url='getTimers',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='gettok',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='text',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='tokenNumber',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='separatingCharacter',
argument_type=FunctionType(
names=['string', 'int'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function splits a string using the given separating character and returns one specified substring.' ,
arguments={
"text": """the string that should be split. """,
"tokenNumber": """which token should be returned (1 for the first, 2 for the second, and so on). """,
"separatingCharacter": """the ASCII|ASCII number representing the character you want to use to separate the tokens. You can easily retrieve this by running string.byte on a string containing the separating character. """
},
result='returns a string containing the token if it exists, false otherwise.' ,
),
url='gettok',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='gettok',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='text',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='tokenNumber',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='separatingCharacter',
argument_type=FunctionType(
names=['string', 'int'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function splits a string using the given separating character and returns one specified substring.' ,
arguments={
"text": """the string that should be split. """,
"tokenNumber": """which token should be returned (1 for the first, 2 for the second, and so on). """,
"separatingCharacter": """the ASCII|ASCII number representing the character you want to use to separate the tokens. You can easily retrieve this by running string.byte on a string containing the separating character. """
},
result='returns a string containing the token if it exists, false otherwise.' ,
),
url='gettok',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='getUserdataType',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='value',
argument_type=FunctionType(
names=['userdata'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='' ,
arguments={
"value": """: A userdata value to get the type of. Userdata types can be: """,
"Shared": """ """,
"resource-data": """: a Resource|resource pointer. """,
"xml-node": """: a Xmlnode|XML node. """,
"lua-timer": """: a timer. """,
"vector2": """: a 2D vector, used in the Vector/Vector2|Vector2 class. """,
"vector3": """: a 3D vector, used in the Vector/Vector3|Vector3 class. """,
"vector4": """: a 4D vector, used in the Vector/Vector4|Vector4 class. """,
"matrix": """: a matrix, used in the Matrix class. """,
"userdata": """: a fallback userdata type return value, when no other type could be found for the object. """,
"Server only": """ """,
"account": """: a Account|player account. """,
"db-query": """: a dbQuery|database query handle. """,
"acl": """: an ACL|ACL entry. """,
"acl-group": """: an Aclgroup|ACL group. """,
"ban": """: a Ban|player ban. """,
"text-item": """: a Textitem|text display item. """,
"text-display": """: a Textdisplay|text display item.
Source code commit: https://github.com/multitheftauto/mtasa-blue/commit/df8576fc3f80fa2d7a73e70a68e8f116b591cb68#diff-09a3546021ff952dc0f94a99aae11356R297 """,
"weapon": """: a Weapon|custom weapon. """
},
result='returns a string containing the specified userdata\'s type, or false plus an error message if the given value is not userdata.' ,
),
url='getUserdataType',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='getUserdataType',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='value',
argument_type=FunctionType(
names=['userdata'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='' ,
arguments={
"value": """: A userdata value to get the type of. Userdata types can be: """,
"Shared": """ """,
"resource-data": """: a Resource|resource pointer. """,
"xml-node": """: a Xmlnode|XML node. """,
"lua-timer": """: a timer. """,
"vector2": """: a 2D vector, used in the Vector/Vector2|Vector2 class. """,
"vector3": """: a 3D vector, used in the Vector/Vector3|Vector3 class. """,
"vector4": """: a 4D vector, used in the Vector/Vector4|Vector4 class. """,
"matrix": """: a matrix, used in the Matrix class. """,
"userdata": """: a fallback userdata type return value, when no other type could be found for the object. """,
"Server only": """ """,
"account": """: a Account|player account. """,
"db-query": """: a dbQuery|database query handle. """,
"acl": """: an ACL|ACL entry. """,
"acl-group": """: an Aclgroup|ACL group. """,
"ban": """: a Ban|player ban. """,
"text-item": """: a Textitem|text display item. """,
"text-display": """: a Textdisplay|text display item.
Source code commit: https://github.com/multitheftauto/mtasa-blue/commit/df8576fc3f80fa2d7a73e70a68e8f116b591cb68#diff-09a3546021ff952dc0f94a99aae11356R297 """,
"weapon": """: a Weapon|custom weapon. """
},
result='returns a string containing the specified userdata\'s type, or false plus an error message if the given value is not userdata.' ,
),
url='getUserdataType',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='getVersion',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['table'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function gives you various version information about MTA and the operating system.' ,
arguments={
},
result='returns a table with version information. specifically these keys are present in the table:\n*number: the mta server or client version (depending where the function was called) in pure numerical form, e.g. 256\n*mta: the mta server or client version (depending where the function was called) in textual form, e.g. 1.0\n*name: the full mta product name, either mta:sa server or mta:sa client.\n*netcode: the netcode version number.\n*os: returns the operating system on which the server or client is running\n*type: the type of build. can be:\n**nightly rx - a nightly development build. x represents the nightly build revision.\n**custom - a build compiled manually\n**release - a build that is publicly released (provisional).\n*tag: the build tag (from 1.0.3 onwards). contains information about the underlying version used. i.e. the final version of 1.0.3 has the build tag of 1.0.3 rc-9. (this can be confirmed by using the console command ver.)\n*sortable: a 15 character sortable version string (from 1.0.4 onwards). format of the string is described in getplayerversion.' ,
),
url='getVersion',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='getVersion',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['table'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function gives you various version information about MTA and the operating system.' ,
arguments={
},
result='returns a table with version information. specifically these keys are present in the table:\n*number: the mta server or client version (depending where the function was called) in pure numerical form, e.g. 256\n*mta: the mta server or client version (depending where the function was called) in textual form, e.g. 1.0\n*name: the full mta product name, either mta:sa server or mta:sa client.\n*netcode: the netcode version number.\n*os: returns the operating system on which the server or client is running\n*type: the type of build. can be:\n**nightly rx - a nightly development build. x represents the nightly build revision.\n**custom - a build compiled manually\n**release - a build that is publicly released (provisional).\n*tag: the build tag (from 1.0.3 onwards). contains information about the underlying version used. i.e. the final version of 1.0.3 has the build tag of 1.0.3 rc-9. (this can be confirmed by using the console command ver.)\n*sortable: a 15 character sortable version string (from 1.0.4 onwards). format of the string is described in getplayerversion.' ,
),
url='getVersion',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='hash',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='algorithm',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='dataToHash',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function returns a hash of the specified string in the specified algorithm.' ,
arguments={
"algorithm": """: A string which must be one of these: md5, sha1, sha224, sha256, sha384, sha512 """,
"dataToHash": """: A string of the data to hash. """
},
result='returns the hash of the data, false if an invalid argument was used.' ,
),
url='hash',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='hash',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='algorithm',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='dataToHash',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function returns a hash of the specified string in the specified algorithm.' ,
arguments={
"algorithm": """: A string which must be one of these: md5, sha1, sha224, sha256, sha384, sha512 """,
"dataToHash": """: A string of the data to hash. """
},
result='returns the hash of the data, false if an invalid argument was used.' ,
),
url='hash',
)
],
),
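# The hash entry above lists the accepted algorithm names and the false-on-bad-input
# contract. A minimal Python sketch of that contract using the standard hashlib module
# (this mirrors the documented behaviour only; it is not MTA's implementation, and
# mta_hash is a hypothetical name):

```python
import hashlib

# Supported algorithm names, per the documented hash() argument list.
_ALGORITHMS = {"md5", "sha1", "sha224", "sha256", "sha384", "sha512"}

def mta_hash(algorithm, data_to_hash):
    """Mirror of the documented contract: hex digest, or False on invalid input."""
    if algorithm not in _ALGORITHMS or not isinstance(data_to_hash, str):
        return False
    return hashlib.new(algorithm, data_to_hash.encode("utf-8")).hexdigest()
```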
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='inspect',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='var',
argument_type=FunctionType(
names=['mixed'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='options',
argument_type=FunctionType(
names=['table'],
is_optional=True,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function returns human-readable representations of tables and MTA datatypes as a string.' ,
arguments={
"var": """A variable of any datatype. """,
"options": """A table of options. It is not mandatory, but when it is provided, it must be a table. For a list of options, see the https://github.com/kikito/inspect.lua#options Inspects GitHub page. """
},
result='always returns a string. the contents can change if we update the inspect library, so it is not expected to be consistent across lua versions.' ,
),
url='inspect',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='inspect',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='var',
argument_type=FunctionType(
names=['mixed'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='options',
argument_type=FunctionType(
names=['table'],
is_optional=True,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function returns human-readable representations of tables and MTA datatypes as a string.' ,
arguments={
"var": """A variable of any datatype. """,
"options": """A table of options. It is not mandatory, but when it is provided, it must be a table. For a list of options, see the https://github.com/kikito/inspect.lua#options Inspects GitHub page. """
},
result='always returns a string. the contents can change if we update the inspect library, so it is not expected to be consistent across lua versions.' ,
),
url='inspect',
)
],
),
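# The inspect entry documents a function that renders any value as a human-readable
# string, with an optional options table. A rough Python analogue using pprint.pformat
# (a sketch of the documented contract, not the inspect.lua library; inspect_value and
# the 'indent' option are assumptions):

```python
from pprint import pformat

def inspect_value(var, options=None):
    """Return a human-readable string representation of any value."""
    # 'options' mirrors the optional table documented above; only an
    # 'indent' key is honoured in this sketch.
    options = options or {}
    return pformat(var, indent=options.get("indent", 1))
```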
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='interpolateBetween',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['float'],
is_optional=False,
),
FunctionType(
names=['float'],
is_optional=False,
),
FunctionType(
names=['float'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='x1',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='y1',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='z1',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='x2',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='y2',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='z2',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='fProgress',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='strEasingType',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='fEasingPeriod',
argument_type=FunctionType(
names=['float'],
is_optional=True,
),
default_value=None,
)
],
[
FunctionArgument(
name='fEasingAmplitude',
argument_type=FunctionType(
names=['float'],
is_optional=True,
),
default_value=None,
)
],
[
FunctionArgument(
name='fEasingOvershoot',
argument_type=FunctionType(
names=['float'],
is_optional=True,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='Interpolates a 3D Vector between a source value and a target value using either linear interpolation or any other Easing|easing function.\nIt can also be used to interpolate 2D vectors or scalars by only setting some of the x, y, z values and setting the others to 0.' ,
arguments={
"x1, y1, z1": """3D coordinates of source vector/value """,
"x2, y2, z2": """3D coordinates of target vector/value """,
"fProgress": """float between 0 and 1 indicating the interpolation progress (0 at the beginning of the interpolation, 1 at the end). If it is higher than 1, it will start from the beginning. """,
"strEasingType": """the Easing|easing function to use for the interpolation """,
"fEasingPeriod": """the period of the Easing|easing function (only some easing functions use this parameter) """,
"fEasingAmplitude": """the amplitude of the Easing|easing function (only some easing functions use this parameter) """,
"fEasingOvershoot": """the overshoot of the Easing|easing function (only some easing functions use this parameter) """
},
result='returns x, y, z the interpolated 3d vector/value if successful, false otherwise (error in parameters).\nas mentioned before, interpolatebetween can be used on 2d vectors or scalars in which case only some (x, y or just x) of the returned values are to be used (cf. alpha interpolation in marker example or size interpolation in window example).' ,
),
url='interpolateBetween',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='interpolateBetween',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['float'],
is_optional=False,
),
FunctionType(
names=['float'],
is_optional=False,
),
FunctionType(
names=['float'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='x1',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='y1',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='z1',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='x2',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='y2',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='z2',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='fProgress',
argument_type=FunctionType(
names=['float'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='strEasingType',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='fEasingPeriod',
argument_type=FunctionType(
names=['float'],
is_optional=True,
),
default_value=None,
)
],
[
FunctionArgument(
name='fEasingAmplitude',
argument_type=FunctionType(
names=['float'],
is_optional=True,
),
default_value=None,
)
],
[
FunctionArgument(
name='fEasingOvershoot',
argument_type=FunctionType(
names=['float'],
is_optional=True,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='Interpolates a 3D Vector between a source value and a target value using either linear interpolation or any other Easing|easing function.\nIt can also be used to interpolate 2D vectors or scalars by only setting some of the x, y, z values and setting the others to 0.' ,
arguments={
"x1, y1, z1": """3D coordinates of source vector/value """,
"x2, y2, z2": """3D coordinates of target vector/value """,
"fProgress": """float between 0 and 1 indicating the interpolation progress (0 at the beginning of the interpolation, 1 at the end). If it is higher than 1, it will start from the beginning. """,
"strEasingType": """the Easing|easing function to use for the interpolation """,
"fEasingPeriod": """the period of the Easing|easing function (only some easing functions use this parameter) """,
"fEasingAmplitude": """the amplitude of the Easing|easing function (only some easing functions use this parameter) """,
"fEasingOvershoot": """the overshoot of the Easing|easing function (only some easing functions use this parameter) """
},
result='returns x, y, z the interpolated 3d vector/value if successful, false otherwise (error in parameters).\nas mentioned before, interpolatebetween can be used on 2d vectors or scalars in which case only some (x, y or just x) of the returned values are to be used (cf. alpha interpolation in marker example or size interpolation in window example).' ,
),
url='interpolateBetween',
)
],
),
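# The interpolateBetween entry describes component-wise interpolation driven by a
# 0-to-1 progress value, with progress above 1 wrapping back to the start. A
# linear-easing-only Python sketch of that behaviour (interpolate_between is a
# hypothetical name; the real function supports many easing types via strEasingType):

```python
def interpolate_between(x1, y1, z1, x2, y2, z2, f_progress, easing="Linear"):
    """Linearly interpolate a 3D vector; only 'Linear' easing is sketched here."""
    if f_progress > 1.0:
        # Per the docs: progress above 1 starts over from the beginning.
        f_progress %= 1.0
    t = f_progress  # a real implementation would apply the chosen easing to t here
    return (x1 + (x2 - x1) * t,
            y1 + (y2 - y1) * t,
            z1 + (z2 - z1) * t)
```

For 2D or scalar interpolation, pass 0 for the unused components and ignore the corresponding returned values, as the documentation describes.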
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='iprint',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='var1',
argument_type=FunctionType(
names=['mixed'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='var2',
argument_type=FunctionType(
names=['mixed'],
is_optional=True,
),
default_value=None,
)
],
[
FunctionArgument(
name='var3',
argument_type=FunctionType(
names=['mixed'],
is_optional=True,
),
default_value=None,
)
]
],
variable_length=True,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function intelligently outputs debug messages into the Debug Console. It is similar to outputDebugString, but outputs useful information for any variable type, and does not require use of Lua\'s tostring. This includes information about element types and table structures. It is especially useful for quick debug tasks.' ,
arguments={
"var1": """A variable of any type to print intelligent information for. """,
"var2+": """Another variable to be output. An unlimited number of arguments can be supplied """
},
result='always returns nil.' ,
),
url='iprint',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='iprint',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='var1',
argument_type=FunctionType(
names=['mixed'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='var2',
argument_type=FunctionType(
names=['mixed'],
is_optional=True,
),
default_value=None,
)
],
[
FunctionArgument(
name='var3',
argument_type=FunctionType(
names=['mixed'],
is_optional=True,
),
default_value=None,
)
]
],
variable_length=True,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function intelligently outputs debug messages into the Debug Console. It is similar to outputDebugString, but outputs useful information for any variable type, and does not require use of Lua\'s tostring. This includes information about element types and table structures. It is especially useful for quick debug tasks.' ,
arguments={
"var1": """A variable of any type to print intelligent information for. """,
"var2+": """Another variable to be output. An unlimited number of arguments can be supplied """
},
result='always returns nil.' ,
),
url='iprint',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='isOOPEnabled',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='' ,
arguments={
},
result='returns true if oop is enabled, false otherwise. returns nil if an error arose.' ,
),
url='isOOPEnabled',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='isOOPEnabled',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='' ,
arguments={
},
result='returns true if oop is enabled, false otherwise. returns nil if an error arose.' ,
),
url='isOOPEnabled',
)
],
),
CompoundFunctionData(
server=[
],
client=[
FunctionData(
signature=FunctionSignature(
name='isShowCollisionsEnabled',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='' ,
arguments={
},
result='returns true if the collision previews are enabled, false otherwise.' ,
),
url='isShowCollisionsEnabled',
)
],
),
CompoundFunctionData(
server=[
],
client=[
FunctionData(
signature=FunctionSignature(
name='isShowSoundEnabled',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='' ,
arguments={
},
result='returns true if world sound ids should be printed in the debug window, false otherwise.' ,
),
url='isShowSoundEnabled',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='isTimer',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='theTimer',
argument_type=FunctionType(
names=['timer'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function checks if a variable is a timer.' ,
arguments={
"theTimer": """: The variable that we want to check. """
},
result='returns true if the passed value is a timer, false otherwise.' ,
),
url='isTimer',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='isTimer',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='theTimer',
argument_type=FunctionType(
names=['timer'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function checks if a variable is a timer.' ,
arguments={
"theTimer": """: The variable that we want to check. """
},
result='returns true if the passed value is a timer, false otherwise.' ,
),
url='isTimer',
)
],
),
CompoundFunctionData(
server=[
],
client=[
FunctionData(
signature=FunctionSignature(
name='isTrayNotificationEnabled',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function returns whether the client has enabled tray notifications in their settings.' ,
arguments={
},
result='returns true if the tray notifications are enabled in the settings, false otherwise.' ,
),
url='isTrayNotificationEnabled',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='killTimer',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='theTimer',
argument_type=FunctionType(
names=['timer'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function allows you to kill/halt existing timers.' ,
arguments={
"theTimer": """The timer you wish to halt. """
},
result='returns true if the timer was successfully killed, false if no such timer existed.' ,
),
url='killTimer',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='killTimer',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='theTimer',
argument_type=FunctionType(
names=['timer'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function allows you to kill/halt existing timers.' ,
arguments={
"theTimer": """The timer you wish to halt. """
},
result='returns true if the timer was successfully killed, false if no such timer existed.' ,
),
url='killTimer',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='md5',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='str',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='Calculates the MD5 hash of the specified string and returns its hexadecimal representation.' ,
arguments={
"str": """the string to hash. """
},
result='returns the md5 hash of the input string if successful, false otherwise.' ,
),
url='md5',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='md5',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='str',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='Calculates the MD5 hash of the specified string and returns its hexadecimal representation.' ,
arguments={
"str": """the string to hash. """
},
result='returns the md5 hash of the input string if successful, false otherwise.' ,
),
url='md5',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='passwordHash',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='password',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='algorithm',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='options',
argument_type=FunctionType(
names=['table'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='callback',
argument_type=FunctionType(
names=['function'],
is_optional=True,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function creates a new password hash using a specified hashing algorithm.' ,
arguments={
"password": """The password to hash. """,
"algorithm": """The algorithm to use: """,
"bcrypt": """: use the bcrypt hashing algorithm. Hash length: 60 characters. <span style=color:red>Note that only the prefix $2y$ is supported (older prefixes can cause security issues).</span> """,
"options": """table with options for the hashing algorithm, as detailed below. """,
"callback": """providing a callback will run this function asynchronously, the arguments to the callback are the same as the returned values below. """
},
result='returns the hash as a string if hashing was successful, false otherwise. if a callback was provided, the aforementioned values are arguments to the callback, and this function will always return true.' ,
),
url='passwordHash',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='passwordHash',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='password',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='algorithm',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='options',
argument_type=FunctionType(
names=['table'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='callback',
argument_type=FunctionType(
names=['function'],
is_optional=True,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function creates a new password hash using a specified hashing algorithm.' ,
arguments={
"password": """The password to hash. """,
"algorithm": """The algorithm to use: """,
"bcrypt": """: use the bcrypt hashing algorithm. Hash length: 60 characters. <span style=color:red>Note that only the prefix $2y$ is supported (older prefixes can cause security issues).</span> """,
"options": """table with options for the hashing algorithm, as detailed below. """,
"callback": """providing a callback will run this function asynchronously, the arguments to the callback are the same as the returned values below. """
},
result='returns the hash as a string if hashing was successful, false otherwise. if a callback was provided, the aforementioned values are arguments to the callback, and this function will always return true.' ,
),
url='passwordHash',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='passwordVerify',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='password',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='hash',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='options',
argument_type=FunctionType(
names=['table'],
is_optional=True,
),
default_value=None,
)
],
[
FunctionArgument(
name='callback',
argument_type=FunctionType(
names=['function'],
is_optional=True,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function verifies whether a password matches a password hash.' ,
arguments={
"password": """The password to check. """,
"hash": """A supported hash (see passwordHash). <span style=color:red>Note that only the prefix $2y$ is supported for type bcrypt (older prefixes can cause security issues).</span> """,
"options": """advanced options """,
"insecureBcrypt": """If set to true, you can use the $2a$ prefix for bcrypt hashes as well. It is strongly not recommended to use it though, because the underlying implementation has a bug that leads to such hashes being relatively easy to crack. This bug was fixed for $2y$. """,
"callback": """providing a callback will run this function asynchronously, the arguments to the callback are the same as the returned values below.
|11281}} """
},
result='returns true if the password matches the hash. returns false if the password does not match, or if an unknown hash was passed. if a callback was provided, the aforementioned values are arguments to the callback, and this function will always return true.' ,
),
url='passwordVerify',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='passwordVerify',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='password',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='hash',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='options',
argument_type=FunctionType(
names=['table'],
is_optional=True,
),
default_value=None,
)
],
[
FunctionArgument(
name='callback',
argument_type=FunctionType(
names=['function'],
is_optional=True,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function verifies whether a password matches a password hash.' ,
arguments={
"password": """The password to check. """,
"hash": """A supported hash (see passwordHash). <span style=color:red>Note that only the prefix $2y$ is supported for type bcrypt (older prefixes can cause security issues).</span> """,
"options": """advanced options """,
"insecureBcrypt": """If set to true, you can use the $2a$ prefix for bcrypt hashes as well. It is strongly not recommended to use it though, because the underlying implementation has a bug that leads to such hashes being relatively easy to crack. This bug was fixed for $2y$. """,
"callback": """providing a callback will run this function asynchronously, the arguments to the callback are the same as the returned values below.
|11281}} """
},
result='returns true if the password matches the hash. returns false if the password does not match, or if an unknown hash was passed. if a callback was provided, the aforementioned values are arguments to the callback, and this function will always return true.' ,
),
url='passwordVerify',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='pregFind',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='subject',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='pattern',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='flags',
argument_type=FunctionType(
names=['int', 'string'],
is_optional=True,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function searches the input string for the first occurrence of the pattern and returns whether it was found.' ,
arguments={
"subject": """The input string """,
"pattern": """The pattern string to search for in the input string. """,
"flags": """Conjuncted value that contains flags ( 1 - ignorecase, 2 - multiline, 4 - dotall, 8 - extended, 16 - unicode ) or ( i - Ignore case, m - Multiline, d - Dotall, e - Extended, u - Unicode ) """
},
result='returns true if the pattern was found in the input string, false otherwise.' ,
),
url='pregFind',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='pregFind',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='subject',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='pattern',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='flags',
argument_type=FunctionType(
names=['int', 'string'],
is_optional=True,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function searches the input string for the first occurrence of the pattern and returns whether it was found.' ,
arguments={
"subject": """The input string """,
"pattern": """The pattern string to search for in the input string. """,
"flags": """Conjuncted value that contains flags ( 1 - ignorecase, 2 - multiline, 4 - dotall, 8 - extended, 16 - unicode ) or ( i - Ignore case, m - Multiline, d - Dotall, e - Extended, u - Unicode ) """
},
result='returns true if the pattern was found in the input string, false otherwise.' ,
),
url='pregFind',
)
],
),
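# The flags parameter of the preg* functions accepts either a numeric bitmask
# ( 1, 2, 4, 8, 16 ) or letter flags ( i, m, d, e, u ). A Python sketch of decoding
# both forms into re-module flags (mapping 'extended' to re.VERBOSE is an assumption
# about the closest Python analogue; preg_find is a hypothetical mirror):

```python
import re

# Numeric-flag bits and letter flags as documented for pregFind/pregMatch/pregReplace.
_NUM_FLAGS = {1: re.IGNORECASE, 2: re.MULTILINE, 4: re.DOTALL,
              8: re.VERBOSE, 16: re.UNICODE}   # 'extended' ~= re.VERBOSE here
_CHAR_FLAGS = {"i": re.IGNORECASE, "m": re.MULTILINE, "d": re.DOTALL,
               "e": re.VERBOSE, "u": re.UNICODE}

def preg_find(subject, pattern, flags=0):
    """Return True if the pattern occurs in the subject string, False otherwise."""
    re_flags = 0
    if isinstance(flags, int):
        for bit, f in _NUM_FLAGS.items():
            if flags & bit:
                re_flags |= f
    else:
        for ch in flags:
            re_flags |= _CHAR_FLAGS.get(ch, 0)
    return re.search(pattern, subject, re_flags) is not None
```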
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='pregMatch',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['table'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='base',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='pattern',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='flags',
argument_type=FunctionType(
names=['int', 'string'],
is_optional=True,
),
default_value='0',
)
],
[
FunctionArgument(
name='maxResults',
argument_type=FunctionType(
names=['int'],
is_optional=True,
),
default_value='100000',
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function returns all matches of a pattern within a string.' ,
arguments={
"base": """The base string for replace. """,
"pattern": """The pattern for match in base string. """,
"flags": """Conjuncted value that contains flags ( 1 - ignorecase, 2 - multiline, 4 - dotall, 8 - extended, 16 - unicode ) or ( i - Ignore case, m - Multiline, d - Dotall, e - Extended, u - Unicode ) """,
"maxResults": """Maximum number of results to return """
},
result='returns a table if one or more matches are found, false otherwise.' ,
),
url='pregMatch',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='pregMatch',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['table'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='base',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='pattern',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='flags',
argument_type=FunctionType(
names=['int', 'string'],
is_optional=True,
),
default_value='0',
)
],
[
FunctionArgument(
name='maxResults',
argument_type=FunctionType(
names=['int'],
is_optional=True,
),
default_value='100000',
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function returns all matches.' ,
arguments={
"base": """The base string for replace. """,
"pattern": """The pattern for match in base string. """,
"flags": """Conjuncted value that contains flags ( 1 - ignorecase, 2 - multiline, 4 - dotall, 8 - extended, 16 - unicode ) or ( i - Ignore case, m - Multiline, d - Dotall, e - Extended, u - Unicode ) """,
"maxResults": """Maximum number of results to return """
},
result='returns a table if one or more matches are found, false otherwise.' ,
),
url='pregMatch',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='pregReplace',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='subject',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='pattern',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='replacement',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='flags',
argument_type=FunctionType(
names=['int', 'string'],
is_optional=True,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function performs a regular expression search and replace and returns the replaced string.' ,
arguments={
"subject": """The input string. """,
"pattern": """The pattern string to search for in the input string. """,
"replacement": """The replacement string to replace all matches within the input string. """,
"flags": """Conjuncted value that contains flags ( 1 - ignorecase, 2 - multiline, 4 - dotall, 8 - extended, 16 - unicode ) or ( i - Ignore case, m - Multiline, d - Dotall, e - Extended, u - Unicode ) """
},
result='returns the replaced string, or bool false otherwise.' ,
),
url='pregReplace',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='pregReplace',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='subject',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='pattern',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='replacement',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='flags',
argument_type=FunctionType(
names=['int', 'string'],
is_optional=True,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function performs a regular expression search and replace and returns the replaced string.' ,
arguments={
"subject": """The input string. """,
"pattern": """The pattern string to search for in the input string. """,
"replacement": """The replacement string to replace all matches within the input string. """,
"flags": """Conjuncted value that contains flags ( 1 - ignorecase, 2 - multiline, 4 - dotall, 8 - extended, 16 - unicode ) or ( i - Ignore case, m - Multiline, d - Dotall, e - Extended, u - Unicode ) """
},
result='returns the replaced string, or bool false otherwise.' ,
),
url='pregReplace',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='ref',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['int'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='objectToReference',
argument_type=FunctionType(
names=['mixed'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function will create a reference to the given argument.' ,
arguments={
"objectToReference": """The Lua element, which you want to reference """
},
result='returns an int if the reference was successfully created. returns false if the parameter was invalid.' ,
),
url='ref',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='ref',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['int'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='objectToReference',
argument_type=FunctionType(
names=['mixed'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function will create a reference to the given argument.' ,
arguments={
"objectToReference": """The Lua element, which you want to reference """
},
result='returns an int if the reference was successfully created. returns false if the parameter was invalid.' ,
),
url='ref',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='removeDebugHook',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='hookType',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='callbackFunction',
argument_type=FunctionType(
names=['function'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function removes hooks added by addDebugHook' ,
arguments={
"hookType": """The type of hook to remove. This can be:
** preEvent
** postEvent
** preFunction
** postFunction """,
"callbackFunction": """The callback function to remove """
},
result='returns true if the hook was successfully removed, or false otherwise.' ,
),
url='removeDebugHook',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='removeDebugHook',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='hookType',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='callbackFunction',
argument_type=FunctionType(
names=['function'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function removes hooks added by addDebugHook' ,
arguments={
"hookType": """The type of hook to remove. This can be:
** preEvent
** postEvent
** preFunction
** postFunction """,
"callbackFunction": """The callback function to remove """
},
result='returns true if the hook was successfully removed, or false otherwise.' ,
),
url='removeDebugHook',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='resetTimer',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='theTimer',
argument_type=FunctionType(
names=['timer'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function allows you to reset the elapsed time in existing timers to zero. The function does not reset the times-to-execute count on timers which have a limited number of repetitions.' ,
arguments={
"theTimer": """The timer whose elapsed time you wish to reset. """
},
result='returns true if the timer was successfully reset, false otherwise.' ,
),
url='resetTimer',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='resetTimer',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='theTimer',
argument_type=FunctionType(
names=['timer'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function allows you to reset the elapsed time in existing timers to zero. The function does not reset the times-to-execute count on timers which have a limited number of repetitions.' ,
arguments={
"theTimer": """The timer whose elapsed time you wish to reset. """
},
result='returns true if the timer was successfully reset, false otherwise.' ,
),
url='resetTimer',
)
],
),
CompoundFunctionData(
server=[
],
client=[
FunctionData(
signature=FunctionSignature(
name='setClipboard',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='theText',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function sets the player\'s clipboard text (what appears when you paste with CTRL + V)' ,
arguments={
"theText": """The new text to be in the players clipboard when the player pastes with CTRL + V. """
},
result='returns true if the text in the clipboard was set correctly.' ,
),
url='setClipboard',
)
],
),
CompoundFunctionData(
server=[
],
client=[
FunctionData(
signature=FunctionSignature(
name='setDevelopmentMode',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='enable',
argument_type=FunctionType(
names=['bool'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='enableWeb',
argument_type=FunctionType(
names=['bool'],
is_optional=True,
),
default_value='false',
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function is used to set the development mode. Setting development mode allows access to special commands which can assist with script debugging.\nClient-side development mode commands:\n* Client_Commands#showcol|showcol: Enables colshapes to be viewed as a wireframe object.\n* Client_Commands#showsound|showsound: Enables world sound ids to be printed in the debug output window.\nShared development mode functions:\n* debugSleep: Sets the freeze time for the client/server.' ,
arguments={
"enable": """: A boolean to indicate whether development mode is on (true) or off (false) """,
"enableWeb": """: A boolean to indicate whether browser debug messages will be filtered (false) or not (true) """
},
result='returns true if the mode was set correctly, false otherwise.' ,
),
url='setDevelopmentMode',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='setFPSLimit',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='fpsLimit',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function sets the maximum http://en.wikipedia.org/wiki/Frame_rate FPS (Frames per second) that players on the server can run their game at.' ,
arguments={
"fpsLimit": """An integer value representing the maximum FPS. This value may be between 25 and 100 FPS. You can also pass 0 or false, in which case the FPS limit will be the one set in the client settings (by default, 100 FPS and the client fps limit should also be manually changed via fps_limit=0 in console or MTA San Andreas 1.5\MTA\config\coreconfig.xml). """
},
result='returns true if successful, or false if it was not possible to set the limit or an invalid value was passed.' ,
),
url='setFPSLimit',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='setFPSLimit',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='fpsLimit',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function sets the maximum http://en.wikipedia.org/wiki/Frame_rate FPS (Frames per second) that players on the server can run their game at.' ,
arguments={
"fpsLimit": """An integer value representing the maximum FPS. This value may be between 25 and 100 FPS. You can also pass 0 or false, in which case the FPS limit will be the one set in the client settings (by default, 100 FPS and the client fps limit should also be manually changed via fps_limit=0 in console or MTA San Andreas 1.5\MTA\config\coreconfig.xml). """
},
result='returns true if successful, or false if it was not possible to set the limit or an invalid value was passed.' ,
),
url='setFPSLimit',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='setServerConfigSetting',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='name',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='value',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='bSave',
argument_type=FunctionType(
names=['bool'],
is_optional=True,
),
default_value='false',
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function sets server settings which are stored in the Server mtaserver.conf|mtaserver.conf file.' ,
arguments={
"name": """The name of the setting. Only certain settings from Server mtaserver.conf|mtaserver.conf can be changed with this function. These are:
** minclientversion
** recommendedclientversion
** password
** fpslimit - (0-100)
** networkencryption - 0 for off, 1 for on
** bandwidth_reduction - "none", "medium", "maximum" Set to maximum for less bandwidth usage (medium is recommended for race servers)
** player_sync_interval - See [[Sync_interval_settings]] for all *_sync_interval settings
** lightweight_sync_interval
** camera_sync_interval
** ped_sync_interval
** unoccupied_vehicle_sync_interval
** keysync_mouse_sync_interval
** keysync_analog_sync_interval
** bullet_sync """,
"value": """The value of the setting """,
"bSave": """Set to true to make the setting permanent, or false for use only until the next server restart. """
},
result='returns true if the setting was successfully set, or false otherwise.' ,
),
url='setServerConfigSetting',
)
],
client=[
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='setTimer',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['timer'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='theFunction',
argument_type=FunctionType(
names=['function'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='timeInterval',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='timesToExecute',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='arguments',
argument_type=FunctionType(
names=['var'],
is_optional=True,
),
default_value=None,
)
]
],
variable_length=True,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function allows you to trigger a function after a number of milliseconds have elapsed. You can call one of your own functions or a built-in function. For example, you could set a timer to spawn a player after a number of seconds have elapsed.\nOnce a timer has finished repeating, it no longer exists.\nThe minimum accepted interval is 0ms.\nMulti Theft Auto guarantees that the timer will be triggered after at least the interval you specify. The resolution of the timer is tied to the frame rate (server-side and client-side). All the overdue timers are triggered at a single point each frame. This means that if, for example, the player is running at 30 frames per second, then two timers specified to occur after 100ms and 110ms would more than likely occur during the same frame, as the difference in time between the two timers (10ms) is less than half the length of the frame (33ms). As with most timers provided by other languages, you shouldn\'t rely on the timer triggering at an exact point in the future.' ,
arguments={
"theFunction": """The function you wish the timer to call. """,
"timeInterval": """The number of milliseconds that should elapse before the function is called. (the minimum is 50 (0 on 1.5.6 r16715); 1000 milliseconds = 1 second) """,
"timesToExecute": """The number of times you want the timer to execute, or 0 for infinite repetitions. """,
"arguments": """Any arguments you wish to pass to the function can be listed after the timesToExecute argument. Note that any tables you want to pass will get cloned, whereas metatables and functions/function references in that passed table will get lost. Also changes you make in the original table before the function gets called wont get transferred. """
},
result='returns a timer pointer if the timer was set successfully, false if the arguments are invalid or the timer could not be set.' ,
),
url='setTimer',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='setTimer',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['timer'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='theFunction',
argument_type=FunctionType(
names=['function'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='timeInterval',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='timesToExecute',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='arguments',
argument_type=FunctionType(
names=['var'],
is_optional=True,
),
default_value=None,
)
]
],
variable_length=True,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function allows you to trigger a function after a number of milliseconds have elapsed. You can call one of your own functions or a built-in function. For example, you could set a timer to spawn a player after a number of seconds have elapsed.\nOnce a timer has finished repeating, it no longer exists.\nThe minimum accepted interval is 0ms.\nMulti Theft Auto guarantees that the timer will be triggered after at least the interval you specify. The resolution of the timer is tied to the frame rate (server-side and client-side). All the overdue timers are triggered at a single point each frame. This means that if, for example, the player is running at 30 frames per second, then two timers specified to occur after 100ms and 110ms would more than likely occur during the same frame, as the difference in time between the two timers (10ms) is less than half the length of the frame (33ms). As with most timers provided by other languages, you shouldn\'t rely on the timer triggering at an exact point in the future.' ,
arguments={
"theFunction": """The function you wish the timer to call. """,
"timeInterval": """The number of milliseconds that should elapse before the function is called. (the minimum is 50 (0 on 1.5.6 r16715); 1000 milliseconds = 1 second) """,
"timesToExecute": """The number of times you want the timer to execute, or 0 for infinite repetitions. """,
"arguments": """Any arguments you wish to pass to the function can be listed after the timesToExecute argument. Note that any tables you want to pass will get cloned, whereas metatables and functions/function references in that passed table will get lost. Also changes you make in the original table before the function gets called wont get transferred. """
},
result='returns a timer pointer if the timer was set successfully, false if the arguments are invalid or the timer could not be set.' ,
),
url='setTimer',
)
],
),
CompoundFunctionData(
server=[
],
client=[
FunctionData(
signature=FunctionSignature(
name='setWindowFlashing',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='shouldFlash',
argument_type=FunctionType(
names=['bool'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='count',
argument_type=FunctionType(
names=['int'],
is_optional=True,
),
default_value='10',
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='' ,
arguments={
"shouldFlash": """whether the window should flash """,
"count": """the number of times the window should flash, defaults to 10 times """
},
result='returns false if:\n* the window is already in focus\n* the client has disabled this feature\nreturns true otherwise' ,
),
url='setWindowFlashing',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='sha256',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='str',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='* The sha module and this function may conflict with each other; if you use this function, uninstall the module!\n* This function returns an uppercase string, so make sure you apply string.upper() to anything else you are checking against that has been sha256\'d elsewhere.\nCalculates the sha256 hash of the specified string.' ,
arguments={
"str": """the string to hash. """
},
result='returns the sha256 hash of the input string if successful, false otherwise.' ,
),
url='sha256',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='sha256',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='str',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='* The sha module and this function may conflict with each other; if you use this function, uninstall the module!\n* This function returns an uppercase string, so make sure you apply string.upper() to anything else you are checking against that has been sha256\'d elsewhere.\nCalculates the sha256 hash of the specified string.' ,
arguments={
"str": """the string to hash. """
},
result='returns the sha256 hash of the input string if successful, false otherwise.' ,
),
url='sha256',
)
],
),
CompoundFunctionData(
server=[
],
client=[
FunctionData(
signature=FunctionSignature(
name='showCol',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='state',
argument_type=FunctionType(
names=['bool'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='' ,
arguments={
"state": """A boolean indicating if the collision previews should be enabled or disabled. """
},
result='returns true if the function is successful, false otherwise.' ,
),
url='showCol',
)
],
),
CompoundFunctionData(
server=[
],
client=[
FunctionData(
signature=FunctionSignature(
name='showSound',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['bool'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='state',
argument_type=FunctionType(
names=['bool'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='' ,
arguments={
"state": """A boolean indicating if the world sound IDs should be printed in the debug window or not. """
},
result='returns true if the function is successful, false otherwise.' ,
),
url='showSound',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='split',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['table'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='stringToSplit',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='separatingChar',
argument_type=FunctionType(
names=['string', 'int'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function splits a string into substrings. You specify a character that will act as a separating character; this will determine where to split the substrings. For example, it can split the string Hello World into two strings containing the two words, by splitting using a space as a separator.\nNote: You can use the function gettok to retrieve a single token from the string at a specific index. This may be faster for one-off lookups, but considerably slower if you are going to check each token in a long string.' ,
arguments={
"stringToSplit": """The string you wish to split into parts. """,
"separatingChar": """A string of the character you want to split, or the ASCII|ASCII number representing the character you want to use to split. """
},
result='returns a table of substrings split from the original string if successful, false otherwise.' ,
),
url='split',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='split',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['table'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='stringToSplit',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='separatingChar',
argument_type=FunctionType(
names=['string', 'int'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function splits a string into substrings. You specify a character that will act as a separating character; this will determine where to split the substrings. For example, it can split the string Hello World into two strings containing the two words, by splitting using a space as a separator.\nNote: You can use the function gettok to retrieve a single token from the string at a specific index. This may be faster for one-off lookups, but considerably slower if you are going to check each token in a long string.' ,
arguments={
"stringToSplit": """The string you wish to split into parts. """,
"separatingChar": """A string of the character you want to split, or the ASCII|ASCII number representing the character you want to use to split. """
},
result='returns a table of substrings split from the original string if successful, false otherwise.' ,
),
url='split',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='teaDecode',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='data',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='key',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function decrypts given https://en.wikipedia.org/wiki/Base64 base64 representation of encrypted data using the https://en.wikipedia.org/wiki/Tiny_Encryption_Algorithm Tiny Encryption Algorithm.' ,
arguments={
"data": """The block of data you want to decrypt """,
"key": """The key that should be used for decryption (Only first 16 characters are used) """
},
result='returns string containing the decrypted data if the decryption process was successfully completed, false otherwise.' ,
),
url='teaDecode',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='teaDecode',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='data',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='key',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function decrypts given https://en.wikipedia.org/wiki/Base64 base64 representation of encrypted data using the https://en.wikipedia.org/wiki/Tiny_Encryption_Algorithm Tiny Encryption Algorithm.' ,
arguments={
"data": """The block of data you want to decrypt """,
"key": """The key that should be used for decryption (Only first 16 characters are used) """
},
result='returns string containing the decrypted data if the decryption process was successfully completed, false otherwise.' ,
),
url='teaDecode',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='teaEncode',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='text',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='key',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function performs the https://en.wikipedia.org/wiki/Tiny_Encryption_Algorithm Tiny Encryption Algorithm on the given string and returns the https://en.wikipedia.org/wiki/Base64 base64 representation of the encrypted string.' ,
arguments={
"text": """The string you want to encrypt. (See second example if you want to encode binary data) """,
"key": """The key that should be used for encryption (Only first 16 characters are used) """
},
result='returns the https://en.wikipedia.org/wiki/base64 base64 representation of the encrypted string if the encryption process was successfully completed, false otherwise.' ,
),
url='teaEncode',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='teaEncode',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='text',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='key',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function performs the https://en.wikipedia.org/wiki/Tiny_Encryption_Algorithm Tiny Encryption Algorithm on the given string and returns the https://en.wikipedia.org/wiki/Base64 base64 representation of the encrypted string.' ,
arguments={
"text": """The string you want to encrypt. (See second example if you want to encode binary data) """,
"key": """The key that should be used for encryption (Only first 16 characters are used) """
},
result='returns the https://en.wikipedia.org/wiki/base64 base64 representation of the encrypted string if the encryption process was successfully completed, false otherwise.' ,
),
url='teaEncode',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='tocolor',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['int'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='red',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='green',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='blue',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='alpha',
argument_type=FunctionType(
names=['int'],
is_optional=True,
),
default_value='255',
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function retrieves the hex number of a specified color, useful for the dx functions.' ,
arguments={
"red": """The amount of http://en.wikipedia.org/wiki/RGBA_color_space red in the color (0-255). """,
"green": """The amount of http://en.wikipedia.org/wiki/RGBA_color_space green in the color (0-255). """,
"blue": """The amount of http://en.wikipedia.org/wiki/RGBA_color_space blue in the color (0-255). """,
"alpha": """The amount of http://en.wikipedia.org/wiki/RGBA_color_space alpha in the color (0-255). """
},
result='returns a single value representing the color.' ,
),
url='tocolor',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='tocolor',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['int'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='red',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='green',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='blue',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='alpha',
argument_type=FunctionType(
names=['int'],
is_optional=True,
),
default_value='255',
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function retrieves the hex number of a specified color, useful for the dx functions.' ,
arguments={
"red": """The amount of http://en.wikipedia.org/wiki/RGBA_color_space red in the color (0-255). """,
"green": """The amount of http://en.wikipedia.org/wiki/RGBA_color_space green in the color (0-255). """,
"blue": """The amount of http://en.wikipedia.org/wiki/RGBA_color_space blue in the color (0-255). """,
"alpha": """The amount of http://en.wikipedia.org/wiki/RGBA_color_space alpha in the color (0-255). """
},
result='returns a single value representing the color.' ,
),
url='tocolor',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='toJSON',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='value',
argument_type=FunctionType(
names=['var'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='compact',
argument_type=FunctionType(
names=['bool'],
is_optional=True,
),
default_value='false',
)
],
[
FunctionArgument(
name='prettyType',
argument_type=FunctionType(
names=['string'],
is_optional=True,
),
default_value='"none"',
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function converts a single value (preferably a Lua table) into a JSON encoded string. You can use this to store the data and then load it again using fromJSON.' ,
arguments={
"var": """An argument of any type. Arguments that are elements will be stored as element IDs that are liable to change between sessions. As such, do not save elements across sessions as you will get unpredictable results. """,
"compact": """a boolean representing whether the string will contain whitespaces. To remove whitespaces from JSON string, use true. String will contain whitespaces per default. """,
"prettyType": """a type string from below:
** spaces
** tabs """
},
result='returns a json formatted string.' ,
),
url='toJSON',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='toJSON',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='value',
argument_type=FunctionType(
names=['var'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='compact',
argument_type=FunctionType(
names=['bool'],
is_optional=True,
),
default_value='false',
)
],
[
FunctionArgument(
name='prettyType',
argument_type=FunctionType(
names=['string'],
is_optional=True,
),
default_value='"none"',
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='This function converts a single value (preferably a Lua table) into a JSON encoded string. You can use this to store the data and then load it again using fromJSON.' ,
arguments={
"var": """An argument of any type. Arguments that are elements will be stored as element IDs that are liable to change between sessions. As such, do not save elements across sessions as you will get unpredictable results. """,
"compact": """a boolean representing whether the string will contain whitespaces. To remove whitespaces from JSON string, use true. String will contain whitespaces per default. """,
"prettyType": """a type string from below:
** spaces
** tabs """
},
result='returns a json formatted string.' ,
),
url='toJSON',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='utfChar',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='characterCode',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='The function returns the character corresponding to the specified UTF code.' ,
arguments={
"characterCode": """The UTF code, to get the string of. """
},
result='returns a string if the function was successful, false otherwise.' ,
),
url='utfChar',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='utfChar',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='characterCode',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='The function returns the character corresponding to the specified UTF code.' ,
arguments={
"characterCode": """The UTF code, to get the string of. """
},
result='returns a string if the function was successful, false otherwise.' ,
),
url='utfChar',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='utfCode',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['int'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='theString',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='The function returns the UTF codes of the given string.' ,
arguments={
"theString": """The string to get the UTF code of. """
},
result='returns an int if the function was successful, false otherwise.' ,
),
url='utfCode',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='utfCode',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['int'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='theString',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='The function returns the UTF codes of the given string.' ,
arguments={
"theString": """The string to get the UTF code of. """
},
result='returns an int if the function was successful, false otherwise.' ,
),
url='utfCode',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='utfLen',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['int'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='theString',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='The function gets the real length of a string, in characters.' ,
arguments={
"theString": """The string to get the length of. """
},
result='returns an int if the function was successful, false otherwise.' ,
),
url='utfLen',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='utfLen',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['int'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='theString',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='The function gets the real length of a string, in characters.' ,
arguments={
"theString": """The string to get the length of. """
},
result='returns an int if the function was successful, false otherwise.' ,
),
url='utfLen',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='utfSeek',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['int'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='theString',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='position',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='The function returns the byte position at the specified character position.' ,
arguments={
"theString": """The string. """,
"position": """An int with the specified charachter position. """
},
result='returns an int if the function was successful, false otherwise.' ,
),
url='utfSeek',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='utfSeek',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['int'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='theString',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='position',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='The function returns the byte position at the specified character position.' ,
arguments={
"theString": """The string. """,
"position": """An int with the specified charachter position. """
},
result='returns an int if the function was successful, false otherwise.' ,
),
url='utfSeek',
)
],
),
CompoundFunctionData(
server=[
FunctionData(
signature=FunctionSignature(
name='utfSub',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='theString',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='Start',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='End',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='The function returns a substring between the specified character positions.' ,
arguments={
"theString": """The string. """,
"Start": """An int with the start position. """,
"End": """An int with the end position. """
},
result='returns a string if the function was successful, false otherwise.' ,
),
url='utfSub',
)
],
client=[
FunctionData(
signature=FunctionSignature(
name='utfSub',
return_types=FunctionReturnTypes(
return_types=[
FunctionType(
names=['string'],
is_optional=False,
)
],
variable_length=False,
),
arguments=FunctionArgumentValues(
arguments=[
[
FunctionArgument(
name='theString',
argument_type=FunctionType(
names=['string'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='Start',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
],
[
FunctionArgument(
name='End',
argument_type=FunctionType(
names=['int'],
is_optional=False,
),
default_value=None,
)
]
],
variable_length=False,
),
generic_types=[
],
),
docs=FunctionDoc(
description='The function returns a substring between the specified character positions.' ,
arguments={
"theString": """The string. """,
"Start": """An int with the start position. """,
"End": """An int with the end position. """
},
result='returns a string if the function was successful, false otherwise.' ,
),
url='utfSub',
)
],
)
]
| 42.758868 | 1,105 | 0.3653 | 20,712 | 345,962 | 6.012022 | 0.057358 | 0.062118 | 0.04686 | 0.068936 | 0.966311 | 0.961324 | 0.946338 | 0.937207 | 0.936894 | 0.936364 | 0 | 0.006471 | 0.56762 | 345,962 | 8,090 | 1,106 | 42.764153 | 0.825961 | 0.000147 | 0 | 0.841292 | 1 | 0.023433 | 0.217815 | 0.005421 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.0057 | 0.000127 | 0 | 0.000127 | 0.00114 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
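The tocolor entries above document packing red, green, blue, and alpha components into a single integer for the dx functions. A minimal Python sketch of that bit packing, assuming the usual 0xAARRGGBB layout (the layout is an assumption for illustration, not taken from the data above):

```python
def tocolor(red, green, blue, alpha=255):
    """Pack RGBA components (0-255 each) into one int, 0xAARRGGBB layout."""
    for component in (red, green, blue, alpha):
        if not 0 <= component <= 255:
            raise ValueError("components must be in 0-255")
    return (alpha << 24) | (red << 16) | (green << 8) | blue

# Opaque white packs to 0xFFFFFFFF.
print(hex(tocolor(255, 255, 255)))
```

The `alpha` default of 255 mirrors the optional argument's default value recorded in the signature above.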
f82982b1e2f475912ca0263f1303c2459e5192a8 | 4,876 | py | Python | pyaz/network/application_gateway/rewrite_rule/__init__.py | py-az-cli/py-az-cli | 9a7dc44e360c096a5a2f15595353e9dad88a9792 | [
"MIT"
] | null | null | null | pyaz/network/application_gateway/rewrite_rule/__init__.py | py-az-cli/py-az-cli | 9a7dc44e360c096a5a2f15595353e9dad88a9792 | [
"MIT"
] | null | null | null | pyaz/network/application_gateway/rewrite_rule/__init__.py | py-az-cli/py-az-cli | 9a7dc44e360c096a5a2f15595353e9dad88a9792 | [
"MIT"
] | 1 | 2022-02-03T09:12:01.000Z | 2022-02-03T09:12:01.000Z | from .... pyaz_utils import _call_az
from . import condition, set
def create(gateway_name, name, resource_group, rule_set_name, enable_reroute=None, modified_path=None, modified_query_string=None, no_wait=None, request_headers=None, response_headers=None, sequence=None):
'''
Create a rewrite rule.
Required Parameters:
- gateway_name -- Name of the application gateway.
- name -- Name of the rewrite rule.
- resource_group -- Name of resource group. You can configure the default group using `az configure --defaults group=<name>`
- rule_set_name -- Name of the rewrite rule set.
Optional Parameters:
- enable_reroute -- If set as true, it will re-evaluate the url path map provided in path based request routing rules using modified path.
- modified_path -- Url path for url rewrite
- modified_query_string -- Query string for url rewrite.
- no_wait -- Do not wait for the long-running operation to finish.
- request_headers -- Space-separated list of HEADER=VALUE pairs.
- response_headers -- Space-separated list of HEADER=VALUE pairs.
- sequence -- Determines the execution order of the rule in the rule set.
'''
return _call_az("az network application-gateway rewrite-rule create", locals())
def show(gateway_name, name, resource_group, rule_set_name):
'''
Get the details of a rewrite rule.
Required Parameters:
- gateway_name -- Name of the application gateway.
- name -- Name of the rewrite rule.
- resource_group -- Name of resource group. You can configure the default group using `az configure --defaults group=<name>`
- rule_set_name -- Name of the rewrite rule set.
'''
return _call_az("az network application-gateway rewrite-rule show", locals())
def list(gateway_name, resource_group, rule_set_name):
'''
List rewrite rules.
Required Parameters:
- gateway_name -- Name of the application gateway.
- resource_group -- Name of resource group. You can configure the default group using `az configure --defaults group=<name>`
- rule_set_name -- Name of the rewrite rule set.
'''
return _call_az("az network application-gateway rewrite-rule list", locals())
def delete(gateway_name, name, resource_group, rule_set_name, no_wait=None):
'''
Delete a rewrite rule.
Required Parameters:
- gateway_name -- Name of the application gateway.
- name -- Name of the rewrite rule.
- resource_group -- Name of resource group. You can configure the default group using `az configure --defaults group=<name>`
- rule_set_name -- Name of the rewrite rule set.
Optional Parameters:
- no_wait -- Do not wait for the long-running operation to finish.
'''
return _call_az("az network application-gateway rewrite-rule delete", locals())
def update(gateway_name, name, resource_group, rule_set_name, add=None, enable_reroute=None, force_string=None, modified_path=None, modified_query_string=None, no_wait=None, remove=None, request_headers=None, response_headers=None, sequence=None, set=None):
'''
Update a rewrite rule.
Required Parameters:
- gateway_name -- Name of the application gateway.
- name -- Name of the rewrite rule.
- resource_group -- Name of resource group. You can configure the default group using `az configure --defaults group=<name>`
- rule_set_name -- Name of the rewrite rule set.
Optional Parameters:
- add -- Add an object to a list of objects by specifying a path and key value pairs. Example: --add property.listProperty <key=value, string or JSON string>
- enable_reroute -- If set as true, it will re-evaluate the url path map provided in path based request routing rules using modified path.
- force_string -- When using 'set' or 'add', preserve string literals instead of attempting to convert to JSON.
- modified_path -- Url path for url rewrite
- modified_query_string -- Query string for url rewrite.
- no_wait -- Do not wait for the long-running operation to finish.
- remove -- Remove a property or an element from a list. Example: --remove property.list <indexToRemove> OR --remove propertyToRemove
- request_headers -- Space-separated list of HEADER=VALUE pairs.
- response_headers -- Space-separated list of HEADER=VALUE pairs.
- sequence -- Determines the execution order of the rule in the rule set.
- set -- Update an object by specifying a property path and value to set. Example: --set property1.property2=<value>
'''
return _call_az("az network application-gateway rewrite-rule update", locals())
def list_request_headers():
return _call_az("az network application-gateway rewrite-rule list-request-headers", locals())
def list_response_headers():
return _call_az("az network application-gateway rewrite-rule list-response-headers", locals())
| 48.277228 | 257 | 0.726415 | 688 | 4,876 | 5.013081 | 0.155523 | 0.063787 | 0.040591 | 0.052769 | 0.788055 | 0.788055 | 0.779936 | 0.779936 | 0.734706 | 0.659901 | 0 | 0.000505 | 0.188064 | 4,876 | 100 | 258 | 48.76 | 0.870674 | 0.652789 | 0 | 0 | 0 | 0 | 0.261872 | 0.014665 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4375 | false | 0 | 0.125 | 0.125 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 8 |
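Every wrapper in the pyaz module above follows one pattern: collect the function's arguments with `locals()` and forward them, together with the command string, to a dispatcher. A minimal sketch of that pattern under stated assumptions (the `_build_command` helper is hypothetical, standing in for pyaz's real `_call_az`, which actually shells out to the Azure CLI):

```python
def _build_command(command, params):
    """Render a params dict as an az-style command line, skipping None values."""
    parts = [command]
    for name, value in sorted(params.items()):
        if value is None:
            continue
        # CLI flags use dashes where Python parameter names use underscores.
        parts.append("--{} {}".format(name.replace("_", "-"), value))
    return " ".join(parts)

def show(gateway_name, name, resource_group, rule_set_name):
    # Mirrors the pyaz style: forward all locals to the dispatcher.
    return _build_command(
        "az network application-gateway rewrite-rule show", locals())

print(show("gw", "rule1", "rg", "set1"))
```

This is why each wrapper's optional parameters default to `None`: a `None` value simply omits the corresponding flag from the generated command line.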
f871773bd1e1f967ed06309d12127860fa36cd84 | 159 | py | Python | __init__.py | kloudtrader-github/libkloudtrader | abf5500e544e4f7b8834aacbd1dacf37ce11d023 | [
"Apache-2.0"
] | null | null | null | __init__.py | kloudtrader-github/libkloudtrader | abf5500e544e4f7b8834aacbd1dacf37ce11d023 | [
"Apache-2.0"
] | null | null | null | __init__.py | kloudtrader-github/libkloudtrader | abf5500e544e4f7b8834aacbd1dacf37ce11d023 | [
"Apache-2.0"
] | null | null | null | from libkloudtrader.equities.data import *
from libkloudtrader.equities.trade import *
from libkloudtrader.user import *
from libkloudtrader.alert_me import *
| 31.8 | 43 | 0.836478 | 19 | 159 | 6.947368 | 0.473684 | 0.545455 | 0.545455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.100629 | 159 | 4 | 44 | 39.75 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
f8cca8d5cc4e8123cd4a6ecc0898b89123bea1be | 106 | py | Python | tests/utils/utils.py | LucaCappelletti94/compress_json | 27d0c8e568327d10790ee0387fbc90bceed3c74f | [
"MIT"
] | 7 | 2020-04-11T00:08:34.000Z | 2021-06-09T17:36:31.000Z | tests/utils/utils.py | LucaCappelletti94/compress_json | 27d0c8e568327d10790ee0387fbc90bceed3c74f | [
"MIT"
] | 2 | 2021-03-19T08:29:19.000Z | 2021-10-05T14:23:13.000Z | tests/utils/utils.py | LucaCappelletti94/compress_json | 27d0c8e568327d10790ee0387fbc90bceed3c74f | [
"MIT"
] | null | null | null | from compress_json.compress_json import local_path
def local_call():
return local_path("object.json") | 26.5 | 50 | 0.801887 | 16 | 106 | 5 | 0.625 | 0.3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113208 | 106 | 4 | 51 | 26.5 | 0.851064 | 0 | 0 | 0 | 0 | 0 | 0.102804 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 8 |
3e1fd8eb8e4bda56f97bc143e65d0ccca0883479 | 34 | py | Python | dont-nsesso/nsesso/utils/__init__.py | rishikesh67/django-tenant-oracle-schemas | 918a64e842b678fc506eadbb4d7e51b0b38ab0a2 | [
"MIT"
] | null | null | null | dont-nsesso/nsesso/utils/__init__.py | rishikesh67/django-tenant-oracle-schemas | 918a64e842b678fc506eadbb4d7e51b0b38ab0a2 | [
"MIT"
] | 8 | 2019-12-04T23:26:11.000Z | 2022-02-10T09:42:18.000Z | dont-nsesso/nsesso/utils/__init__.py | rishikesh67/django-tenant-oracle-schemas | 918a64e842b678fc506eadbb4d7e51b0b38ab0a2 | [
"MIT"
] | 2 | 2019-06-26T05:31:16.000Z | 2019-07-01T12:22:50.000Z | from .get_tenant import get_tenant | 34 | 34 | 0.882353 | 6 | 34 | 4.666667 | 0.666667 | 0.642857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 34 | 1 | 34 | 34 | 0.903226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
3e2acb11d1ace7c3de1d9c17024a8d47bac39cff | 40 | py | Python | src/interpolation_robustness/models/__init__.py | michaelaerni/interpolation_robustness | be18c37a55b6ae1669391fe21e4aba3584fc9882 | [
"MIT"
] | 1 | 2022-02-16T19:24:36.000Z | 2022-02-16T19:24:36.000Z | src/interpolation_robustness/models/__init__.py | michaelaerni/interpolation_robustness | be18c37a55b6ae1669391fe21e4aba3584fc9882 | [
"MIT"
] | null | null | null | src/interpolation_robustness/models/__init__.py | michaelaerni/interpolation_robustness | be18c37a55b6ae1669391fe21e4aba3584fc9882 | [
"MIT"
] | null | null | null | from . import jax
from . import pytorch
| 13.333333 | 21 | 0.75 | 6 | 40 | 5 | 0.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 40 | 2 | 22 | 20 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
e41c06d5502e52811712c020329f2e079db9062e | 13,830 | py | Python | container_sdk/api/service/service_client.py | easyopsapis/easyops-api-python | adf6e3bad33fa6266b5fa0a449dd4ac42f8447d0 | [
"Apache-2.0"
] | 5 | 2019-07-31T04:11:05.000Z | 2021-01-07T03:23:20.000Z | container_sdk/api/service/service_client.py | easyopsapis/easyops-api-python | adf6e3bad33fa6266b5fa0a449dd4ac42f8447d0 | [
"Apache-2.0"
] | null | null | null | container_sdk/api/service/service_client.py | easyopsapis/easyops-api-python | adf6e3bad33fa6266b5fa0a449dd4ac42f8447d0 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
import os
import sys
import container_sdk.api.service.create_pb2
import container_sdk.model.container.service_pb2
import container_sdk.api.service.create_from_yaml_pb2
import container_sdk.api.service.delete_service_pb2
import google.protobuf.empty_pb2
import container_sdk.api.service.get_pb2
import container_sdk.api.service.get_status_pb2
import container_sdk.api.service.list_pb2
import container_sdk.api.service.update_pb2
import container_sdk.api.service.update_resource_spec_pb2
import container_sdk.utils.http_util
import google.protobuf.json_format
class ServiceClient(object):
def __init__(self, server_ip="", server_port=0, service_name="", host=""):
"""
Initialize the client
:param server_ip: server IP the SDK sends requests to; when empty, requests are routed through the naming service
:param server_port: server port the SDK sends requests to, used together with server_ip; when empty, requests are routed through the naming service
:param service_name: service name the SDK sends requests to; when empty, requests are routed by contract name. If both server_ip and service_name are set, server_ip takes priority
:param host: host name of the service the SDK requests, e.g. cmdb.easyops-only.com
"""
if server_ip == "" and server_port != 0 or server_ip != "" and server_port == 0:
raise Exception("server_ip和server_port必须同时指定")
self._server_ip = server_ip
self._server_port = server_port
self._service_name = service_name
self._host = host
def create(self, request, org, user, timeout=10):
# type: (container_sdk.api.service.create_pb2.CreateRequest, int, str, int) -> container_sdk.model.container.service_pb2.Service
"""
Create a service load balancer
:param request: the create request
:param org: customer org ID, an integer
:param user: username used for the API call
:param timeout: call timeout in seconds
:return: container_sdk.model.container.service_pb2.Service
"""
headers = {"org": org, "user": user}
route_name = ""
server_ip = self._server_ip
if self._service_name != "":
route_name = self._service_name
elif self._server_ip != "":
route_name = "easyops.api.container.service.Create"
uri = "/api/container/v1/services"
requestParam = request
rsp_obj = container_sdk.utils.http_util.do_api_request(
method="POST",
src_name="logic.container_sdk",
dst_name=route_name,
server_ip=server_ip,
server_port=self._server_port,
host=self._host,
uri=uri,
params=google.protobuf.json_format.MessageToDict(
requestParam, preserving_proto_field_name=True),
headers=headers,
timeout=timeout,
)
rsp = container_sdk.model.container.service_pb2.Service()
google.protobuf.json_format.ParseDict(rsp_obj["data"], rsp, ignore_unknown_fields=True)
return rsp
def create_from_yaml(self, request, org, user, timeout=10):
# type: (container_sdk.api.service.create_from_yaml_pb2.CreateFromYamlRequest, int, str, int) -> container_sdk.model.container.service_pb2.Service
"""
Create a service from YAML
:param request: the create_from_yaml request
:param org: customer org ID, an integer
:param user: username used for the API call
:param timeout: call timeout in seconds
:return: container_sdk.model.container.service_pb2.Service
"""
headers = {"org": org, "user": user}
route_name = ""
server_ip = self._server_ip
if self._service_name != "":
route_name = self._service_name
elif self._server_ip != "":
route_name = "easyops.api.container.service.CreateFromYaml"
uri = "/api/container/v1/services/yaml"
requestParam = request
rsp_obj = container_sdk.utils.http_util.do_api_request(
method="POST",
src_name="logic.container_sdk",
dst_name=route_name,
server_ip=server_ip,
server_port=self._server_port,
host=self._host,
uri=uri,
params=google.protobuf.json_format.MessageToDict(
requestParam, preserving_proto_field_name=True),
headers=headers,
timeout=timeout,
)
rsp = container_sdk.model.container.service_pb2.Service()
google.protobuf.json_format.ParseDict(rsp_obj["data"], rsp, ignore_unknown_fields=True)
return rsp
def delete_service(self, request, org, user, timeout=10):
# type: (container_sdk.api.service.delete_service_pb2.DeleteServiceRequest, int, str, int) -> google.protobuf.empty_pb2.Empty
"""
Delete a Service
:param request: the delete_service request
:param org: customer org ID, an integer
:param user: username used for the API call
:param timeout: call timeout in seconds
:return: google.protobuf.empty_pb2.Empty
"""
headers = {"org": org, "user": user}
route_name = ""
server_ip = self._server_ip
if self._service_name != "":
route_name = self._service_name
elif self._server_ip != "":
route_name = "easyops.api.container.service.DeleteService"
uri = "/api/container/v1/services/{instanceId}".format(
instanceId=request.instanceId,
)
requestParam = request
rsp_obj = container_sdk.utils.http_util.do_api_request(
method="DELETE",
src_name="logic.container_sdk",
dst_name=route_name,
server_ip=server_ip,
server_port=self._server_port,
host=self._host,
uri=uri,
params=google.protobuf.json_format.MessageToDict(
requestParam, preserving_proto_field_name=True),
headers=headers,
timeout=timeout,
)
rsp = google.protobuf.empty_pb2.Empty()
google.protobuf.json_format.ParseDict(rsp_obj, rsp, ignore_unknown_fields=True)
return rsp
def get(self, request, org, user, timeout=10):
# type: (container_sdk.api.service.get_pb2.GetRequest, int, str, int) -> container_sdk.api.service.get_pb2.GetResponse
"""
Get the Service configuration
:param request: the get request
:param org: customer org ID, an integer
:param user: username used for the API call
:param timeout: call timeout in seconds
:return: container_sdk.api.service.get_pb2.GetResponse
"""
headers = {"org": org, "user": user}
route_name = ""
server_ip = self._server_ip
if self._service_name != "":
route_name = self._service_name
elif self._server_ip != "":
route_name = "easyops.api.container.service.Get"
uri = "/api/container/v1/services/{instanceId}".format(
instanceId=request.instanceId,
)
requestParam = request
rsp_obj = container_sdk.utils.http_util.do_api_request(
method="GET",
src_name="logic.container_sdk",
dst_name=route_name,
server_ip=server_ip,
server_port=self._server_port,
host=self._host,
uri=uri,
params=google.protobuf.json_format.MessageToDict(
requestParam, preserving_proto_field_name=True),
headers=headers,
timeout=timeout,
)
rsp = container_sdk.api.service.get_pb2.GetResponse()
google.protobuf.json_format.ParseDict(rsp_obj["data"], rsp, ignore_unknown_fields=True)
return rsp
def get_status(self, request, org, user, timeout=10):
# type: (container_sdk.api.service.get_status_pb2.GetStatusRequest, int, str, int) -> container_sdk.model.container.service_pb2.Service
"""
Get the Service, with information sourced from the live environment
:param request: the get_status request
:param org: customer org ID, an integer
:param user: username used for the API call
:param timeout: call timeout in seconds
:return: container_sdk.model.container.service_pb2.Service
"""
headers = {"org": org, "user": user}
route_name = ""
server_ip = self._server_ip
if self._service_name != "":
route_name = self._service_name
elif self._server_ip != "":
route_name = "easyops.api.container.service.GetStatus"
uri = "/api/container/v1/services/{instanceId}/status".format(
instanceId=request.instanceId,
)
requestParam = request
rsp_obj = container_sdk.utils.http_util.do_api_request(
method="GET",
src_name="logic.container_sdk",
dst_name=route_name,
server_ip=server_ip,
server_port=self._server_port,
host=self._host,
uri=uri,
params=google.protobuf.json_format.MessageToDict(
requestParam, preserving_proto_field_name=True),
headers=headers,
timeout=timeout,
)
rsp = container_sdk.model.container.service_pb2.Service()
google.protobuf.json_format.ParseDict(rsp_obj["data"], rsp, ignore_unknown_fields=True)
return rsp
def list(self, request, org, user, timeout=10):
# type: (container_sdk.api.service.list_pb2.ListRequest, int, str, int) -> container_sdk.api.service.list_pb2.ListResponse
"""
        Get the list of Services
        :param request: the list request
        :param org: the customer's org ID, as a number
        :param user: the username used when calling the API
        :param timeout: request timeout in seconds
:return: container_sdk.api.service.list_pb2.ListResponse
"""
headers = {"org": org, "user": user}
route_name = ""
server_ip = self._server_ip
if self._service_name != "":
route_name = self._service_name
elif self._server_ip != "":
route_name = "easyops.api.container.service.List"
uri = "/api/container/v1/services"
requestParam = request
rsp_obj = container_sdk.utils.http_util.do_api_request(
method="GET",
src_name="logic.container_sdk",
dst_name=route_name,
server_ip=server_ip,
server_port=self._server_port,
host=self._host,
uri=uri,
params=google.protobuf.json_format.MessageToDict(
requestParam, preserving_proto_field_name=True),
headers=headers,
timeout=timeout,
)
rsp = container_sdk.api.service.list_pb2.ListResponse()
google.protobuf.json_format.ParseDict(rsp_obj["data"], rsp, ignore_unknown_fields=True)
return rsp
def update(self, request, org, user, timeout=10):
# type: (container_sdk.api.service.update_pb2.UpdateRequest, int, str, int) -> container_sdk.model.container.service_pb2.Service
"""
        Update the Service
        :param request: the update request
        :param org: the customer's org ID, as a number
        :param user: the username used when calling the API
        :param timeout: request timeout in seconds
:return: container_sdk.model.container.service_pb2.Service
"""
headers = {"org": org, "user": user}
route_name = ""
server_ip = self._server_ip
if self._service_name != "":
route_name = self._service_name
elif self._server_ip != "":
route_name = "easyops.api.container.service.Update"
uri = "/api/container/v1/services/{instanceId}".format(
instanceId=request.instanceId,
)
requestParam = request
rsp_obj = container_sdk.utils.http_util.do_api_request(
method="PUT",
src_name="logic.container_sdk",
dst_name=route_name,
server_ip=server_ip,
server_port=self._server_port,
host=self._host,
uri=uri,
params=google.protobuf.json_format.MessageToDict(
requestParam, preserving_proto_field_name=True),
headers=headers,
timeout=timeout,
)
rsp = container_sdk.model.container.service_pb2.Service()
google.protobuf.json_format.ParseDict(rsp_obj["data"], rsp, ignore_unknown_fields=True)
return rsp
def update_resource_spec(self, request, org, user, timeout=10):
# type: (container_sdk.api.service.update_resource_spec_pb2.UpdateResourceSpecRequest, int, str, int) -> container_sdk.model.container.service_pb2.Service
"""
        Update the Service's YAML definition
        :param request: the update_resource_spec request
        :param org: the customer's org ID, as a number
        :param user: the username used when calling the API
        :param timeout: request timeout in seconds
:return: container_sdk.model.container.service_pb2.Service
"""
headers = {"org": org, "user": user}
route_name = ""
server_ip = self._server_ip
if self._service_name != "":
route_name = self._service_name
elif self._server_ip != "":
route_name = "easyops.api.container.service.UpdateResourceSpec"
uri = "/api/container/v1/services/{instanceId}/yaml".format(
instanceId=request.instanceId,
)
requestParam = request
rsp_obj = container_sdk.utils.http_util.do_api_request(
method="PUT",
src_name="logic.container_sdk",
dst_name=route_name,
server_ip=server_ip,
server_port=self._server_port,
host=self._host,
uri=uri,
params=google.protobuf.json_format.MessageToDict(
requestParam, preserving_proto_field_name=True),
headers=headers,
timeout=timeout,
)
rsp = container_sdk.model.container.service_pb2.Service()
google.protobuf.json_format.ParseDict(rsp_obj["data"], rsp, ignore_unknown_fields=True)
return rsp
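# The five methods above share one request-building pattern: interpolate the
# request's instanceId into a URI template and send org/user as headers. A
# minimal sketch of that pattern (`build_request` is an illustrative helper,
# not part of the generated SDK):

```python
def build_request(instance_id, org, user):
    # URI template matches the ones used by get/update above.
    uri = "/api/container/v1/services/{instanceId}".format(instanceId=instance_id)
    # org and user travel as HTTP headers on every call.
    headers = {"org": org, "user": user}
    return uri, headers
```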
from pyeasytries import __version__
from pyeasytries import pyeasytries
def test_version():
    assert __version__ == '0.1.0'
import os
import multiprocessing
import time
# Get Game Properties____________________________________
system_game_properties_file = open("game.properties", "r")
exec(system_game_properties_file.read())
system_game_properties_file.close()
system_frame_limit = system_frame_limit
system_pythonfights_space_x = system_pythonfights_space_x
system_pythonfights_space_y = system_pythonfights_space_y
system_player1_position = system_player1_position
system_player2_position = system_player2_position
system_player_build_delay = system_player_build_delay
system_player_bullet_delay = system_player_bullet_delay
# Get Game Properties____________________________________
# Create GameSpace___________________________________
system_pythonfights_space = []
system_pythonfights_space_x_backup = system_pythonfights_space_x
system_pythonfights_space_y_backup = system_pythonfights_space_y
while True:
if(system_pythonfights_space_y_backup == 0):
break
else:
system_pythonfights_space.append([])
system_pythonfights_space_y_backup -= 1
del system_pythonfights_space_y_backup
for x in range(0, len(system_pythonfights_space)):
while True:
if(system_pythonfights_space_x_backup == 0):
break
else:
system_pythonfights_space[x].append(" ")
system_pythonfights_space_x_backup -= 1
system_pythonfights_space_x_backup = system_pythonfights_space_x
del system_pythonfights_space_x_backup
# Create GameSpace____________________________________
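# The nested while loops above build a y-by-x grid of single-space cells. As
# a sketch, the same grid can be built with one list comprehension (the loops
# above remain the version the game uses; `make_space` is illustrative only):

```python
def make_space(width, height):
    # Each row is a fresh list, so rows are not aliased to one another.
    return [[" " for _ in range(width)] for _ in range(height)]
```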
print("Creating game...")
system_frame = 0
# Create Game____________________________________________________________________
if(system_player1_is_alive):
system_pythonfights_space[system_player1_position[1]][system_player1_position[0]] = "#"
if(system_player2_is_alive):
system_pythonfights_space[system_player2_position[1]][system_player2_position[0]] = "#"
if(system_player3_is_alive):
system_pythonfights_space[system_player3_position[1]][system_player3_position[0]] = "#"
if(system_player4_is_alive):
system_pythonfights_space[system_player4_position[1]][system_player4_position[0]] = "#"
system_player1_bullet_delay = system_player_bullet_delay
system_player2_bullet_delay = system_player_bullet_delay
system_player3_bullet_delay = system_player_bullet_delay
system_player4_bullet_delay = system_player_bullet_delay
system_player1_build_delay = system_player_build_delay
system_player2_build_delay = system_player_build_delay
system_player3_build_delay = system_player_build_delay
system_player4_build_delay = system_player_build_delay
system_bullet_data = []
system_player1_data = []
system_player2_data = []
system_player3_data = []
system_player4_data = []
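# The per-player blocks inside the loop below decode command strings such as
# ">", "<+", or "-" into grid offsets with explicit if-chains. As an
# illustrative sketch (not used by the game itself), the eight commands map
# to (dx, dy) deltas, where +dx moves right and +dy moves down:

```python
MOVE_DELTAS = {
    ">": (1, 0),     # right
    "<": (-1, 0),    # left
    "+": (0, -1),    # up (toward row 0)
    "-": (0, 1),     # down
    ">+": (1, -1),   # up-right
    ">-": (1, 1),    # down-right
    "<+": (-1, -1),  # up-left
    "<-": (-1, 1),   # down-left
}
```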
while True:
system_player1_output = ""
system_player2_output = ""
system_player3_output = ""
system_player4_output = ""
system_player1_action = ""
system_player2_action = ""
system_player3_action = ""
system_player4_action = ""
# Player 1 Turn
if(not system_pythonfights_space[system_player1_position[1]][system_player1_position[0]] == "#"):
system_player1_is_alive = False
if(system_player1_is_alive):
system_player1_turn_file = open("player1_code.py")
exec(system_player1_turn_file.read())
system_player1_turn_file.close()
system_tmp_data_list = [system_pythonfights_space.copy(), system_player1_position[0], system_player1_position[1], system_player1_build_delay, system_player1_bullet_delay]
system_tmp_data = player1_code(system_player1_data, system_tmp_data_list)
system_player1_output = system_tmp_data[0]
system_player1_data = system_tmp_data[1]
del system_tmp_data_list
del system_tmp_data
del player1_code
# Player 1 Turn
# Player 2 Turn
if(not system_pythonfights_space[system_player2_position[1]][system_player2_position[0]] == "#"):
system_player2_is_alive = False
if(system_player2_is_alive):
system_player2_turn_file = open("player2_code.py")
exec(system_player2_turn_file.read())
system_player2_turn_file.close()
system_tmp_data_list = [system_pythonfights_space.copy(), system_player2_position[0], system_player2_position[1], system_player2_build_delay, system_player2_bullet_delay]
system_tmp_data = player2_code(system_player2_data, system_tmp_data_list)
system_player2_output = system_tmp_data[0]
system_player2_data = system_tmp_data[1]
del system_tmp_data_list
del system_tmp_data
del player2_code
# Player 2 Turn
# Player 3 Turn
if(not system_pythonfights_space[system_player3_position[1]][system_player3_position[0]] == "#"):
system_player3_is_alive = False
if(system_player3_is_alive):
system_player3_turn_file = open("player3_code.py")
exec(system_player3_turn_file.read())
system_player3_turn_file.close()
system_tmp_data_list = [system_pythonfights_space.copy(), system_player3_position[0], system_player3_position[1], system_player3_build_delay, system_player3_bullet_delay]
system_tmp_data = player3_code(system_player3_data, system_tmp_data_list)
system_player3_output = system_tmp_data[0]
system_player3_data = system_tmp_data[1]
del system_tmp_data_list
del system_tmp_data
del player3_code
# Player 3 Turn
# Player 4 Turn
if(not system_pythonfights_space[system_player4_position[1]][system_player4_position[0]] == "#"):
system_player4_is_alive = False
if(system_player4_is_alive):
system_player4_turn_file = open("player4_code.py")
exec(system_player4_turn_file.read())
system_player4_turn_file.close()
system_tmp_data_list = [system_pythonfights_space.copy(), system_player4_position[0], system_player4_position[1], system_player4_build_delay, system_player4_bullet_delay]
system_tmp_data = player4_code(system_player4_data, system_tmp_data_list)
system_player4_output = system_tmp_data[0]
system_player4_data = system_tmp_data[1]
del system_tmp_data_list
del system_tmp_data
del player4_code
# Player 4 Turn
# Process Current Frame
if("m" in system_player1_output and system_player1_is_alive):
system_player1_output = system_player1_output.replace("m", "")
try:
if(system_player1_output == ">"):
if(system_pythonfights_space[system_player1_position[1]][system_player1_position[0]+1] == " "):
system_pythonfights_space[system_player1_position[1]][system_player1_position[0]] = " "
system_pythonfights_space[system_player1_position[1]][system_player1_position[0]+1] = "#"
system_player1_position[0] += 1
if(system_player1_output == "<"):
if(system_pythonfights_space[system_player1_position[1]][system_player1_position[0]-1] == " "):
system_pythonfights_space[system_player1_position[1]][system_player1_position[0]] = " "
system_pythonfights_space[system_player1_position[1]][system_player1_position[0]-1] = "#"
system_player1_position[0] -= 1
if(system_player1_output == "+"):
if(system_pythonfights_space[system_player1_position[1]-1][system_player1_position[0]] == " "):
system_pythonfights_space[system_player1_position[1]][system_player1_position[0]] = " "
system_pythonfights_space[system_player1_position[1]-1][system_player1_position[0]] = "#"
system_player1_position[1] -= 1
if(system_player1_output == "-"):
if(system_pythonfights_space[system_player1_position[1]+1][system_player1_position[0]] == " "):
system_pythonfights_space[system_player1_position[1]][system_player1_position[0]] = " "
system_pythonfights_space[system_player1_position[1]+1][system_player1_position[0]] = "#"
system_player1_position[1] += 1
if(system_player1_output == ">+"):
if(system_pythonfights_space[system_player1_position[1]-1][system_player1_position[0]+1] == " "):
system_pythonfights_space[system_player1_position[1]][system_player1_position[0]] = " "
system_pythonfights_space[system_player1_position[1]-1][system_player1_position[0]+1] = "#"
system_player1_position[0] += 1
system_player1_position[1] -= 1
if(system_player1_output == ">-"):
if(system_pythonfights_space[system_player1_position[1]+1][system_player1_position[0]+1] == " "):
system_pythonfights_space[system_player1_position[1]][system_player1_position[0]] = " "
system_pythonfights_space[system_player1_position[1]+1][system_player1_position[0]+1] = "#"
system_player1_position[0] += 1
system_player1_position[1] += 1
if(system_player1_output == "<+"):
if(system_pythonfights_space[system_player1_position[1]-1][system_player1_position[0]-1] == " "):
system_pythonfights_space[system_player1_position[1]][system_player1_position[0]] = " "
system_pythonfights_space[system_player1_position[1]-1][system_player1_position[0]-1] = "#"
system_player1_position[0] -= 1
system_player1_position[1] -= 1
if(system_player1_output == "<-"):
if(system_pythonfights_space[system_player1_position[1]+1][system_player1_position[0]-1] == " "):
system_pythonfights_space[system_player1_position[1]][system_player1_position[0]] = " "
system_pythonfights_space[system_player1_position[1]+1][system_player1_position[0]-1] = "#"
system_player1_position[0] -= 1
system_player1_position[1] += 1
except:
pass
if("m" in system_player2_output and system_player2_is_alive):
system_player2_output = system_player2_output.replace("m", "")
try:
if(system_player2_output == ">"):
if(system_pythonfights_space[system_player2_position[1]][system_player2_position[0]+1] == " "):
system_pythonfights_space[system_player2_position[1]][system_player2_position[0]] = " "
system_pythonfights_space[system_player2_position[1]][system_player2_position[0]+1] = "#"
system_player2_position[0] += 1
if(system_player2_output == "<"):
if(system_pythonfights_space[system_player2_position[1]][system_player2_position[0]-1] == " "):
system_pythonfights_space[system_player2_position[1]][system_player2_position[0]] = " "
system_pythonfights_space[system_player2_position[1]][system_player2_position[0]-1] = "#"
system_player2_position[0] -= 1
if(system_player2_output == "+"):
if(system_pythonfights_space[system_player2_position[1]-1][system_player2_position[0]] == " "):
system_pythonfights_space[system_player2_position[1]][system_player2_position[0]] = " "
system_pythonfights_space[system_player2_position[1]-1][system_player2_position[0]] = "#"
system_player2_position[1] -= 1
if(system_player2_output == "-"):
if(system_pythonfights_space[system_player2_position[1]+1][system_player2_position[0]] == " "):
system_pythonfights_space[system_player2_position[1]][system_player2_position[0]] = " "
system_pythonfights_space[system_player2_position[1]+1][system_player2_position[0]] = "#"
system_player2_position[1] += 1
if(system_player2_output == ">+"):
if(system_pythonfights_space[system_player2_position[1]-1][system_player2_position[0]+1] == " "):
system_pythonfights_space[system_player2_position[1]][system_player2_position[0]] = " "
system_pythonfights_space[system_player2_position[1]-1][system_player2_position[0]+1] = "#"
system_player2_position[0] += 1
system_player2_position[1] -= 1
if(system_player2_output == ">-"):
if(system_pythonfights_space[system_player2_position[1]+1][system_player2_position[0]+1] == " "):
system_pythonfights_space[system_player2_position[1]][system_player2_position[0]] = " "
system_pythonfights_space[system_player2_position[1]+1][system_player2_position[0]+1] = "#"
system_player2_position[0] += 1
system_player2_position[1] += 1
if(system_player2_output == "<+"):
if(system_pythonfights_space[system_player2_position[1]-1][system_player2_position[0]-1] == " "):
system_pythonfights_space[system_player2_position[1]][system_player2_position[0]] = " "
system_pythonfights_space[system_player2_position[1]-1][system_player2_position[0]-1] = "#"
system_player2_position[0] -= 1
system_player2_position[1] -= 1
if(system_player2_output == "<-"):
if(system_pythonfights_space[system_player2_position[1]+1][system_player2_position[0]-1] == " "):
system_pythonfights_space[system_player2_position[1]][system_player2_position[0]] = " "
system_pythonfights_space[system_player2_position[1]+1][system_player2_position[0]-1] = "#"
system_player2_position[0] -= 1
system_player2_position[1] += 1
except:
pass
if("m" in system_player3_output and system_player3_is_alive):
system_player3_output = system_player3_output.replace("m", "")
try:
if(system_player3_output == ">"):
if(system_pythonfights_space[system_player3_position[1]][system_player3_position[0]+1] == " "):
system_pythonfights_space[system_player3_position[1]][system_player3_position[0]] = " "
system_pythonfights_space[system_player3_position[1]][system_player3_position[0]+1] = "#"
system_player3_position[0] += 1
if(system_player3_output == "<"):
if(system_pythonfights_space[system_player3_position[1]][system_player3_position[0]-1] == " "):
system_pythonfights_space[system_player3_position[1]][system_player3_position[0]] = " "
system_pythonfights_space[system_player3_position[1]][system_player3_position[0]-1] = "#"
system_player3_position[0] -= 1
if(system_player3_output == "+"):
if(system_pythonfights_space[system_player3_position[1]-1][system_player3_position[0]] == " "):
system_pythonfights_space[system_player3_position[1]][system_player3_position[0]] = " "
system_pythonfights_space[system_player3_position[1]-1][system_player3_position[0]] = "#"
system_player3_position[1] -= 1
if(system_player3_output == "-"):
if(system_pythonfights_space[system_player3_position[1]+1][system_player3_position[0]] == " "):
system_pythonfights_space[system_player3_position[1]][system_player3_position[0]] = " "
system_pythonfights_space[system_player3_position[1]+1][system_player3_position[0]] = "#"
system_player3_position[1] += 1
if(system_player3_output == ">+"):
if(system_pythonfights_space[system_player3_position[1]-1][system_player3_position[0]+1] == " "):
system_pythonfights_space[system_player3_position[1]][system_player3_position[0]] = " "
system_pythonfights_space[system_player3_position[1]-1][system_player3_position[0]+1] = "#"
system_player3_position[0] += 1
system_player3_position[1] -= 1
if(system_player3_output == ">-"):
if(system_pythonfights_space[system_player3_position[1]+1][system_player3_position[0]+1] == " "):
system_pythonfights_space[system_player3_position[1]][system_player3_position[0]] = " "
system_pythonfights_space[system_player3_position[1]+1][system_player3_position[0]+1] = "#"
system_player3_position[0] += 1
system_player3_position[1] += 1
if(system_player3_output == "<+"):
if(system_pythonfights_space[system_player3_position[1]-1][system_player3_position[0]-1] == " "):
system_pythonfights_space[system_player3_position[1]][system_player3_position[0]] = " "
system_pythonfights_space[system_player3_position[1]-1][system_player3_position[0]-1] = "#"
system_player3_position[0] -= 1
system_player3_position[1] -= 1
if(system_player3_output == "<-"):
if(system_pythonfights_space[system_player3_position[1]+1][system_player3_position[0]-1] == " "):
system_pythonfights_space[system_player3_position[1]][system_player3_position[0]] = " "
system_pythonfights_space[system_player3_position[1]+1][system_player3_position[0]-1] = "#"
system_player3_position[0] -= 1
system_player3_position[1] += 1
except:
pass
if("m" in system_player4_output and system_player4_is_alive):
system_player4_output = system_player4_output.replace("m", "")
try:
if(system_player4_output == ">"):
if(system_pythonfights_space[system_player4_position[1]][system_player4_position[0]+1] == " "):
system_pythonfights_space[system_player4_position[1]][system_player4_position[0]] = " "
system_pythonfights_space[system_player4_position[1]][system_player4_position[0]+1] = "#"
system_player4_position[0] += 1
if(system_player4_output == "<"):
if(system_pythonfights_space[system_player4_position[1]][system_player4_position[0]-1] == " "):
system_pythonfights_space[system_player4_position[1]][system_player4_position[0]] = " "
system_pythonfights_space[system_player4_position[1]][system_player4_position[0]-1] = "#"
system_player4_position[0] -= 1
if(system_player4_output == "+"):
if(system_pythonfights_space[system_player4_position[1]-1][system_player4_position[0]] == " "):
system_pythonfights_space[system_player4_position[1]][system_player4_position[0]] = " "
system_pythonfights_space[system_player4_position[1]-1][system_player4_position[0]] = "#"
system_player4_position[1] -= 1
if(system_player4_output == "-"):
if(system_pythonfights_space[system_player4_position[1]+1][system_player4_position[0]] == " "):
system_pythonfights_space[system_player4_position[1]][system_player4_position[0]] = " "
system_pythonfights_space[system_player4_position[1]+1][system_player4_position[0]] = "#"
                    system_player4_position[1] += 1
if(system_player4_output == ">+"):
if(system_pythonfights_space[system_player4_position[1]-1][system_player4_position[0]+1] == " "):
system_pythonfights_space[system_player4_position[1]][system_player4_position[0]] = " "
system_pythonfights_space[system_player4_position[1]-1][system_player4_position[0]+1] = "#"
system_player4_position[0] += 1
system_player4_position[1] -= 1
if(system_player4_output == ">-"):
if(system_pythonfights_space[system_player4_position[1]+1][system_player4_position[0]+1] == " "):
system_pythonfights_space[system_player4_position[1]][system_player4_position[0]] = " "
system_pythonfights_space[system_player4_position[1]+1][system_player4_position[0]+1] = "#"
system_player4_position[0] += 1
system_player4_position[1] += 1
if(system_player4_output == "<+"):
if(system_pythonfights_space[system_player4_position[1]-1][system_player4_position[0]-1] == " "):
system_pythonfights_space[system_player4_position[1]][system_player4_position[0]] = " "
system_pythonfights_space[system_player4_position[1]-1][system_player4_position[0]-1] = "#"
system_player4_position[0] -= 1
system_player4_position[1] -= 1
if(system_player4_output == "<-"):
if(system_pythonfights_space[system_player4_position[1]+1][system_player4_position[0]-1] == " "):
system_pythonfights_space[system_player4_position[1]][system_player4_position[0]] = " "
system_pythonfights_space[system_player4_position[1]+1][system_player4_position[0]-1] = "#"
system_player4_position[0] -= 1
system_player4_position[1] += 1
except:
pass
if("b" in system_player1_output and system_player1_is_alive):
system_player1_output = system_player1_output.replace("b", "")
try:
if(system_player1_build_delay == 0):
system_player1_build_delay = system_player_build_delay
if(system_player1_output == ">"):
if(system_pythonfights_space[system_player1_position[1]][system_player1_position[0]+1] == " "):
system_pythonfights_space[system_player1_position[1]][system_player1_position[0]+1] = "*"
if(system_player1_output == "<"):
if(system_pythonfights_space[system_player1_position[1]][system_player1_position[0]-1] == " "):
system_pythonfights_space[system_player1_position[1]][system_player1_position[0]-1] = "*"
if(system_player1_output == "+"):
if(system_pythonfights_space[system_player1_position[1]-1][system_player1_position[0]] == " "):
system_pythonfights_space[system_player1_position[1]-1][system_player1_position[0]] = "*"
if(system_player1_output == "-"):
if(system_pythonfights_space[system_player1_position[1]+1][system_player1_position[0]] == " "):
system_pythonfights_space[system_player1_position[1]+1][system_player1_position[0]] = "*"
if(system_player1_output == ">+"):
if(system_pythonfights_space[system_player1_position[1]-1][system_player1_position[0]+1] == " "):
system_pythonfights_space[system_player1_position[1]-1][system_player1_position[0]+1] = "*"
if(system_player1_output == ">-"):
if(system_pythonfights_space[system_player1_position[1]+1][system_player1_position[0]+1] == " "):
system_pythonfights_space[system_player1_position[1]+1][system_player1_position[0]+1] = "*"
if(system_player1_output == "<+"):
if(system_pythonfights_space[system_player1_position[1]-1][system_player1_position[0]-1] == " "):
system_pythonfights_space[system_player1_position[1]-1][system_player1_position[0]-1] = "*"
if(system_player1_output == "<-"):
if(system_pythonfights_space[system_player1_position[1]+1][system_player1_position[0]-1] == " "):
system_pythonfights_space[system_player1_position[1]+1][system_player1_position[0]-1] = "*"
except:
pass
if("b" in system_player2_output and system_player2_is_alive):
system_player2_output = system_player2_output.replace("b", "")
try:
if(system_player2_build_delay == 0):
system_player2_build_delay = system_player_build_delay
if(system_player2_output == ">"):
if(system_pythonfights_space[system_player2_position[1]][system_player2_position[0]+1] == " "):
system_pythonfights_space[system_player2_position[1]][system_player2_position[0]+1] = "*"
if(system_player2_output == "<"):
if(system_pythonfights_space[system_player2_position[1]][system_player2_position[0]-1] == " "):
system_pythonfights_space[system_player2_position[1]][system_player2_position[0]-1] = "*"
if(system_player2_output == "+"):
if(system_pythonfights_space[system_player2_position[1]-1][system_player2_position[0]] == " "):
system_pythonfights_space[system_player2_position[1]-1][system_player2_position[0]] = "*"
if(system_player2_output == "-"):
if(system_pythonfights_space[system_player2_position[1]+1][system_player2_position[0]] == " "):
system_pythonfights_space[system_player2_position[1]+1][system_player2_position[0]] = "*"
if(system_player2_output == ">+"):
if(system_pythonfights_space[system_player2_position[1]-1][system_player2_position[0]+1] == " "):
system_pythonfights_space[system_player2_position[1]-1][system_player2_position[0]+1] = "*"
if(system_player2_output == ">-"):
if(system_pythonfights_space[system_player2_position[1]+1][system_player2_position[0]+1] == " "):
system_pythonfights_space[system_player2_position[1]+1][system_player2_position[0]+1] = "*"
if(system_player2_output == "<+"):
if(system_pythonfights_space[system_player2_position[1]-1][system_player2_position[0]-1] == " "):
system_pythonfights_space[system_player2_position[1]-1][system_player2_position[0]-1] = "*"
if(system_player2_output == "<-"):
if(system_pythonfights_space[system_player2_position[1]+1][system_player2_position[0]-1] == " "):
system_pythonfights_space[system_player2_position[1]+1][system_player2_position[0]-1] = "*"
except:
pass
if("b" in system_player3_output and system_player3_is_alive):
system_player3_output = system_player3_output.replace("b", "")
try:
if(system_player3_build_delay == 0):
system_player3_build_delay = system_player_build_delay
if(system_player3_output == ">"):
if(system_pythonfights_space[system_player3_position[1]][system_player3_position[0]+1] == " "):
system_pythonfights_space[system_player3_position[1]][system_player3_position[0]+1] = "*"
if(system_player3_output == "<"):
if(system_pythonfights_space[system_player3_position[1]][system_player3_position[0]-1] == " "):
system_pythonfights_space[system_player3_position[1]][system_player3_position[0]-1] = "*"
if(system_player3_output == "+"):
if(system_pythonfights_space[system_player3_position[1]-1][system_player3_position[0]] == " "):
system_pythonfights_space[system_player3_position[1]-1][system_player3_position[0]] = "*"
if(system_player3_output == "-"):
if(system_pythonfights_space[system_player3_position[1]+1][system_player3_position[0]] == " "):
system_pythonfights_space[system_player3_position[1]+1][system_player3_position[0]] = "*"
if(system_player3_output == ">+"):
if(system_pythonfights_space[system_player3_position[1]-1][system_player3_position[0]+1] == " "):
system_pythonfights_space[system_player3_position[1]-1][system_player3_position[0]+1] = "*"
if(system_player3_output == ">-"):
if(system_pythonfights_space[system_player3_position[1]+1][system_player3_position[0]+1] == " "):
system_pythonfights_space[system_player3_position[1]+1][system_player3_position[0]+1] = "*"
if(system_player3_output == "<+"):
if(system_pythonfights_space[system_player3_position[1]-1][system_player3_position[0]-1] == " "):
system_pythonfights_space[system_player3_position[1]-1][system_player3_position[0]-1] = "*"
if(system_player3_output == "<-"):
if(system_pythonfights_space[system_player3_position[1]+1][system_player3_position[0]-1] == " "):
system_pythonfights_space[system_player3_position[1]+1][system_player3_position[0]-1] = "*"
except:
pass
if("b" in system_player4_output and system_player4_is_alive):
system_player4_output = system_player4_output.replace("b", "")
try:
if(system_player4_build_delay == 0):
system_player4_build_delay = system_player_build_delay
if(system_player4_output == ">"):
if(system_pythonfights_space[system_player4_position[1]][system_player4_position[0]+1] == " "):
system_pythonfights_space[system_player4_position[1]][system_player4_position[0]+1] = "*"
if(system_player4_output == "<"):
if(system_pythonfights_space[system_player4_position[1]][system_player4_position[0]-1] == " "):
system_pythonfights_space[system_player4_position[1]][system_player4_position[0]-1] = "*"
if(system_player4_output == "+"):
if(system_pythonfights_space[system_player4_position[1]-1][system_player4_position[0]] == " "):
system_pythonfights_space[system_player4_position[1]-1][system_player4_position[0]] = "*"
if(system_player4_output == "-"):
if(system_pythonfights_space[system_player4_position[1]+1][system_player4_position[0]] == " "):
system_pythonfights_space[system_player4_position[1]+1][system_player4_position[0]] = "*"
if(system_player4_output == ">+"):
if(system_pythonfights_space[system_player4_position[1]-1][system_player4_position[0]+1] == " "):
system_pythonfights_space[system_player4_position[1]-1][system_player4_position[0]+1] = "*"
if(system_player4_output == ">-"):
if(system_pythonfights_space[system_player4_position[1]+1][system_player4_position[0]+1] == " "):
system_pythonfights_space[system_player4_position[1]+1][system_player4_position[0]+1] = "*"
if(system_player4_output == "<+"):
if(system_pythonfights_space[system_player4_position[1]-1][system_player4_position[0]-1] == " "):
system_pythonfights_space[system_player4_position[1]-1][system_player4_position[0]-1] = "*"
if(system_player4_output == "<-"):
if(system_pythonfights_space[system_player4_position[1]+1][system_player4_position[0]-1] == " "):
system_pythonfights_space[system_player4_position[1]+1][system_player4_position[0]-1] = "*"
except:
pass
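Each of the eight direction tokens above maps to a fixed (dx, dy) offset, so the near-identical branches can be collapsed into a lookup table. A minimal sketch under assumed names (`OFFSETS` and `place_wall` are illustrations, not part of the original script):

```python
# Direction token -> (dx, dy); y grows downward, matching the grid indexing above.
OFFSETS = {
    ">": (1, 0), "<": (-1, 0), "+": (0, -1), "-": (0, 1),
    ">+": (1, -1), ">-": (1, 1), "<+": (-1, -1), "<-": (-1, 1),
}

def place_wall(space, pos, direction, char="*"):
    """Place char at pos shifted by direction, only if that cell is empty."""
    dx, dy = OFFSETS[direction]
    x, y = pos[0] + dx, pos[1] + dy
    if 0 <= y < len(space) and 0 <= x < len(space[y]) and space[y][x] == " ":
        space[y][x] = char

grid = [[" "] * 3 for _ in range(3)]
place_wall(grid, (1, 1), ">+")
print(grid[0][2])  # *
```

The bounds check also replaces the bare `try`/`except` used above to swallow out-of-range indices.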
system_gonna_be_popped = []
if(len(system_bullet_data) > 0):
for x in range(0, len(system_bullet_data)):
if(not system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]] == "*"):
system_gonna_be_popped.append(x)
else:
if(system_bullet_data[x][2] == ">"):
try:
if(system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]+1] == " "):
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]] = " "
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]+1] = "*"
system_bullet_data[x][0] += 1
elif(system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]+1] == "*"):
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]] = " "
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]+1] = " "
elif(system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]+1] == "#"):
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]+1] = " "
except:
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]] = " "
if(system_bullet_data[x][2] == "<"):
try:
if(system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]-1] == " "):
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]] = " "
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]-1] = "*"
system_bullet_data[x][0] -= 1
elif(system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]-1] == "*"):
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]] = " "
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]-1] = " "
elif(system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]-1] == "#"):
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]-1] = " "
except:
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]] = " "
if(system_bullet_data[x][2] == "+"):
try:
if(system_pythonfights_space[system_bullet_data[x][1]-1][system_bullet_data[x][0]] == " "):
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]] = " "
system_pythonfights_space[system_bullet_data[x][1]-1][system_bullet_data[x][0]] = "*"
system_bullet_data[x][1] -= 1
elif(system_pythonfights_space[system_bullet_data[x][1]-1][system_bullet_data[x][0]] == "*"):
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]] = " "
system_pythonfights_space[system_bullet_data[x][1]-1][system_bullet_data[x][0]] = " "
elif(system_pythonfights_space[system_bullet_data[x][1]-1][system_bullet_data[x][0]] == "#"):
system_pythonfights_space[system_bullet_data[x][1]-1][system_bullet_data[x][0]] = " "
except:
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]] = " "
if(system_bullet_data[x][2] == "-"):
try:
if(system_pythonfights_space[system_bullet_data[x][1]+1][system_bullet_data[x][0]] == " "):
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]] = " "
system_pythonfights_space[system_bullet_data[x][1]+1][system_bullet_data[x][0]] = "*"
system_bullet_data[x][1] += 1
elif(system_pythonfights_space[system_bullet_data[x][1]+1][system_bullet_data[x][0]] == "*"):
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]] = " "
system_pythonfights_space[system_bullet_data[x][1]+1][system_bullet_data[x][0]] = " "
elif(system_pythonfights_space[system_bullet_data[x][1]+1][system_bullet_data[x][0]] == "#"):
system_pythonfights_space[system_bullet_data[x][1]+1][system_bullet_data[x][0]] = " "
except:
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]] = " "
if(system_bullet_data[x][2] == ">+"):
try:
if(system_pythonfights_space[system_bullet_data[x][1]-1][system_bullet_data[x][0]+1] == " "):
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]] = " "
system_pythonfights_space[system_bullet_data[x][1]-1][system_bullet_data[x][0]+1] = "*"
system_bullet_data[x][0] += 1
system_bullet_data[x][1] -= 1
elif(system_pythonfights_space[system_bullet_data[x][1]-1][system_bullet_data[x][0]+1] == "*"):
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]] = " "
system_pythonfights_space[system_bullet_data[x][1]-1][system_bullet_data[x][0]+1] = " "
elif(system_pythonfights_space[system_bullet_data[x][1]-1][system_bullet_data[x][0]+1] == "#"):
system_pythonfights_space[system_bullet_data[x][1]-1][system_bullet_data[x][0]+1] = " "
except:
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]] = " "
if(system_bullet_data[x][2] == ">-"):
try:
if(system_pythonfights_space[system_bullet_data[x][1]+1][system_bullet_data[x][0]+1] == " "):
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]] = " "
system_pythonfights_space[system_bullet_data[x][1]+1][system_bullet_data[x][0]+1] = "*"
system_bullet_data[x][0] += 1
system_bullet_data[x][1] += 1
elif(system_pythonfights_space[system_bullet_data[x][1]+1][system_bullet_data[x][0]+1] == "*"):
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]] = " "
system_pythonfights_space[system_bullet_data[x][1]+1][system_bullet_data[x][0]+1] = " "
elif(system_pythonfights_space[system_bullet_data[x][1]+1][system_bullet_data[x][0]+1] == "#"):
system_pythonfights_space[system_bullet_data[x][1]+1][system_bullet_data[x][0]+1] = " "
except:
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]] = " "
if(system_bullet_data[x][2] == "<+"):
try:
if(system_pythonfights_space[system_bullet_data[x][1]-1][system_bullet_data[x][0]-1] == " "):
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]] = " "
system_pythonfights_space[system_bullet_data[x][1]-1][system_bullet_data[x][0]-1] = "*"
system_bullet_data[x][0] -= 1
system_bullet_data[x][1] -= 1
elif(system_pythonfights_space[system_bullet_data[x][1]-1][system_bullet_data[x][0]-1] == "*"):
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]] = " "
system_pythonfights_space[system_bullet_data[x][1]-1][system_bullet_data[x][0]-1] = " "
elif(system_pythonfights_space[system_bullet_data[x][1]-1][system_bullet_data[x][0]-1] == "#"):
system_pythonfights_space[system_bullet_data[x][1]-1][system_bullet_data[x][0]-1] = " "
except:
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]] = " "
if(system_bullet_data[x][2] == "<-"):
try:
if(system_pythonfights_space[system_bullet_data[x][1]+1][system_bullet_data[x][0]-1] == " "):
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]] = " "
system_pythonfights_space[system_bullet_data[x][1]+1][system_bullet_data[x][0]-1] = "*"
system_bullet_data[x][0] -= 1
system_bullet_data[x][1] += 1
elif(system_pythonfights_space[system_bullet_data[x][1]+1][system_bullet_data[x][0]-1] == "*"):
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]] = " "
system_pythonfights_space[system_bullet_data[x][1]+1][system_bullet_data[x][0]-1] = " "
elif(system_pythonfights_space[system_bullet_data[x][1]+1][system_bullet_data[x][0]-1] == "#"):
system_pythonfights_space[system_bullet_data[x][1]+1][system_bullet_data[x][0]-1] = " "
except:
system_pythonfights_space[system_bullet_data[x][1]][system_bullet_data[x][0]] = " "
for x in sorted(system_gonna_be_popped, reverse=True): system_bullet_data.pop(x) # pop in descending order so earlier pops don't shift the remaining indices
del system_gonna_be_popped
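Collect-then-remove is order-sensitive: popping recorded indices in ascending order shifts every later index after each pop. A small sketch of the safe pattern (the variable names here are illustrative, not from the script):

```python
bullets = ["a", "b", "c", "d", "e"]
to_remove = [1, 3]  # indices recorded while scanning the list

# Pop from the highest index down so earlier pops cannot shift later ones.
for i in sorted(to_remove, reverse=True):
    bullets.pop(i)

print(bullets)  # ['a', 'c', 'e']
```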
if("s" in system_player1_output and system_player1_is_alive):
system_player1_output = system_player1_output.replace("s", "")
try:
if(system_player1_bullet_delay == 0):
system_player1_bullet_delay = system_player_bullet_delay
if(system_player1_output == ">"):
if(system_pythonfights_space[system_player1_position[1]][system_player1_position[0]+1] == " "):
system_pythonfights_space[system_player1_position[1]][system_player1_position[0]+1] = "*"
system_bullet_data.append([system_player1_position[0]+1, system_player1_position[1], system_player1_output])
if(system_player1_output == "<"):
if(system_pythonfights_space[system_player1_position[1]][system_player1_position[0]-1] == " "):
system_pythonfights_space[system_player1_position[1]][system_player1_position[0]-1] = "*"
system_bullet_data.append([system_player1_position[0]-1, system_player1_position[1], system_player1_output])
if(system_player1_output == "+"):
if(system_pythonfights_space[system_player1_position[1]-1][system_player1_position[0]] == " "):
system_pythonfights_space[system_player1_position[1]-1][system_player1_position[0]] = "*"
system_bullet_data.append([system_player1_position[0], system_player1_position[1]-1, system_player1_output])
if(system_player1_output == "-"):
if(system_pythonfights_space[system_player1_position[1]+1][system_player1_position[0]] == " "):
system_pythonfights_space[system_player1_position[1]+1][system_player1_position[0]] = "*"
system_bullet_data.append([system_player1_position[0], system_player1_position[1]+1, system_player1_output])
if(system_player1_output == ">+"):
if(system_pythonfights_space[system_player1_position[1]-1][system_player1_position[0]+1] == " "):
system_pythonfights_space[system_player1_position[1]-1][system_player1_position[0]+1] = "*"
system_bullet_data.append([system_player1_position[0]+1, system_player1_position[1]-1, system_player1_output])
if(system_player1_output == ">-"):
if(system_pythonfights_space[system_player1_position[1]+1][system_player1_position[0]+1] == " "):
system_pythonfights_space[system_player1_position[1]+1][system_player1_position[0]+1] = "*"
system_bullet_data.append([system_player1_position[0]+1, system_player1_position[1]+1, system_player1_output])
if(system_player1_output == "<+"):
if(system_pythonfights_space[system_player1_position[1]-1][system_player1_position[0]-1] == " "):
system_pythonfights_space[system_player1_position[1]-1][system_player1_position[0]-1] = "*"
system_bullet_data.append([system_player1_position[0]-1, system_player1_position[1]-1, system_player1_output])
if(system_player1_output == "<-"):
if(system_pythonfights_space[system_player1_position[1]+1][system_player1_position[0]-1] == " "):
system_pythonfights_space[system_player1_position[1]+1][system_player1_position[0]-1] = "*"
system_bullet_data.append([system_player1_position[0]-1, system_player1_position[1]+1, system_player1_output])
except:
pass
if("s" in system_player2_output and system_player2_is_alive):
system_player2_output = system_player2_output.replace("s", "")
try:
if(system_player2_bullet_delay == 0):
system_player2_bullet_delay = system_player_bullet_delay
if(system_player2_output == ">"):
if(system_pythonfights_space[system_player2_position[1]][system_player2_position[0]+1] == " "):
system_pythonfights_space[system_player2_position[1]][system_player2_position[0]+1] = "*"
system_bullet_data.append([system_player2_position[0]+1, system_player2_position[1], system_player2_output])
if(system_player2_output == "<"):
if(system_pythonfights_space[system_player2_position[1]][system_player2_position[0]-1] == " "):
system_pythonfights_space[system_player2_position[1]][system_player2_position[0]-1] = "*"
system_bullet_data.append([system_player2_position[0]-1, system_player2_position[1], system_player2_output])
if(system_player2_output == "+"):
if(system_pythonfights_space[system_player2_position[1]-1][system_player2_position[0]] == " "):
system_pythonfights_space[system_player2_position[1]-1][system_player2_position[0]] = "*"
system_bullet_data.append([system_player2_position[0], system_player2_position[1]-1, system_player2_output])
if(system_player2_output == "-"):
if(system_pythonfights_space[system_player2_position[1]+1][system_player2_position[0]] == " "):
system_pythonfights_space[system_player2_position[1]+1][system_player2_position[0]] = "*"
system_bullet_data.append([system_player2_position[0], system_player2_position[1]+1, system_player2_output])
if(system_player2_output == ">+"):
if(system_pythonfights_space[system_player2_position[1]-1][system_player2_position[0]+1] == " "):
system_pythonfights_space[system_player2_position[1]-1][system_player2_position[0]+1] = "*"
system_bullet_data.append([system_player2_position[0]+1, system_player2_position[1]-1, system_player2_output])
if(system_player2_output == ">-"):
if(system_pythonfights_space[system_player2_position[1]+1][system_player2_position[0]+1] == " "):
system_pythonfights_space[system_player2_position[1]+1][system_player2_position[0]+1] = "*"
system_bullet_data.append([system_player2_position[0]+1, system_player2_position[1]+1, system_player2_output])
if(system_player2_output == "<+"):
if(system_pythonfights_space[system_player2_position[1]-1][system_player2_position[0]-1] == " "):
system_pythonfights_space[system_player2_position[1]-1][system_player2_position[0]-1] = "*"
system_bullet_data.append([system_player2_position[0]-1, system_player2_position[1]-1, system_player2_output])
if(system_player2_output == "<-"):
if(system_pythonfights_space[system_player2_position[1]+1][system_player2_position[0]-1] == " "):
system_pythonfights_space[system_player2_position[1]+1][system_player2_position[0]-1] = "*"
system_bullet_data.append([system_player2_position[0]-1, system_player2_position[1]+1, system_player2_output])
except:
pass
if("s" in system_player3_output and system_player3_is_alive):
system_player3_output = system_player3_output.replace("s", "")
try:
if(system_player3_bullet_delay == 0):
system_player3_bullet_delay = system_player_bullet_delay
if(system_player3_output == ">"):
if(system_pythonfights_space[system_player3_position[1]][system_player3_position[0]+1] == " "):
system_pythonfights_space[system_player3_position[1]][system_player3_position[0]+1] = "*"
system_bullet_data.append([system_player3_position[0]+1, system_player3_position[1], system_player3_output])
if(system_player3_output == "<"):
if(system_pythonfights_space[system_player3_position[1]][system_player3_position[0]-1] == " "):
system_pythonfights_space[system_player3_position[1]][system_player3_position[0]-1] = "*"
system_bullet_data.append([system_player3_position[0]-1, system_player3_position[1], system_player3_output])
if(system_player3_output == "+"):
if(system_pythonfights_space[system_player3_position[1]-1][system_player3_position[0]] == " "):
system_pythonfights_space[system_player3_position[1]-1][system_player3_position[0]] = "*"
system_bullet_data.append([system_player3_position[0], system_player3_position[1]-1, system_player3_output])
if(system_player3_output == "-"):
if(system_pythonfights_space[system_player3_position[1]+1][system_player3_position[0]] == " "):
system_pythonfights_space[system_player3_position[1]+1][system_player3_position[0]] = "*"
system_bullet_data.append([system_player3_position[0], system_player3_position[1]+1, system_player3_output])
if(system_player3_output == ">+"):
if(system_pythonfights_space[system_player3_position[1]-1][system_player3_position[0]+1] == " "):
system_pythonfights_space[system_player3_position[1]-1][system_player3_position[0]+1] = "*"
system_bullet_data.append([system_player3_position[0]+1, system_player3_position[1]-1, system_player3_output])
if(system_player3_output == ">-"):
if(system_pythonfights_space[system_player3_position[1]+1][system_player3_position[0]+1] == " "):
system_pythonfights_space[system_player3_position[1]+1][system_player3_position[0]+1] = "*"
system_bullet_data.append([system_player3_position[0]+1, system_player3_position[1]+1, system_player3_output])
if(system_player3_output == "<+"):
if(system_pythonfights_space[system_player3_position[1]-1][system_player3_position[0]-1] == " "):
system_pythonfights_space[system_player3_position[1]-1][system_player3_position[0]-1] = "*"
system_bullet_data.append([system_player3_position[0]-1, system_player3_position[1]-1, system_player3_output])
if(system_player3_output == "<-"):
if(system_pythonfights_space[system_player3_position[1]+1][system_player3_position[0]-1] == " "):
system_pythonfights_space[system_player3_position[1]+1][system_player3_position[0]-1] = "*"
system_bullet_data.append([system_player3_position[0]-1, system_player3_position[1]+1, system_player3_output])
except:
pass
if("s" in system_player4_output and system_player4_is_alive):
system_player4_output = system_player4_output.replace("s", "")
try:
if(system_player4_bullet_delay == 0):
system_player4_bullet_delay = system_player_bullet_delay
if(system_player4_output == ">"):
if(system_pythonfights_space[system_player4_position[1]][system_player4_position[0]+1] == " "):
system_pythonfights_space[system_player4_position[1]][system_player4_position[0]+1] = "*"
system_bullet_data.append([system_player4_position[0]+1, system_player4_position[1], system_player4_output])
if(system_player4_output == "<"):
if(system_pythonfights_space[system_player4_position[1]][system_player4_position[0]-1] == " "):
system_pythonfights_space[system_player4_position[1]][system_player4_position[0]-1] = "*"
system_bullet_data.append([system_player4_position[0]-1, system_player4_position[1], system_player4_output])
if(system_player4_output == "+"):
if(system_pythonfights_space[system_player4_position[1]-1][system_player4_position[0]] == " "):
system_pythonfights_space[system_player4_position[1]-1][system_player4_position[0]] = "*"
system_bullet_data.append([system_player4_position[0], system_player4_position[1]-1, system_player4_output])
if(system_player4_output == "-"):
if(system_pythonfights_space[system_player4_position[1]+1][system_player4_position[0]] == " "):
system_pythonfights_space[system_player4_position[1]+1][system_player4_position[0]] = "*"
system_bullet_data.append([system_player4_position[0], system_player4_position[1]+1, system_player4_output])
if(system_player4_output == ">+"):
if(system_pythonfights_space[system_player4_position[1]-1][system_player4_position[0]+1] == " "):
system_pythonfights_space[system_player4_position[1]-1][system_player4_position[0]+1] = "*"
system_bullet_data.append([system_player4_position[0]+1, system_player4_position[1]-1, system_player4_output])
if(system_player4_output == ">-"):
if(system_pythonfights_space[system_player4_position[1]+1][system_player4_position[0]+1] == " "):
system_pythonfights_space[system_player4_position[1]+1][system_player4_position[0]+1] = "*"
system_bullet_data.append([system_player4_position[0]+1, system_player4_position[1]+1, system_player4_output])
if(system_player4_output == "<+"):
if(system_pythonfights_space[system_player4_position[1]-1][system_player4_position[0]-1] == " "):
system_pythonfights_space[system_player4_position[1]-1][system_player4_position[0]-1] = "*"
system_bullet_data.append([system_player4_position[0]-1, system_player4_position[1]-1, system_player4_output])
if(system_player4_output == "<-"):
if(system_pythonfights_space[system_player4_position[1]+1][system_player4_position[0]-1] == " "):
system_pythonfights_space[system_player4_position[1]+1][system_player4_position[0]-1] = "*"
system_bullet_data.append([system_player4_position[0]-1, system_player4_position[1]+1, system_player4_output])
except:
pass
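The four shooting blocks are identical except for which player's variables they touch; keeping per-player state in a list of dicts removes that duplication. A hedged sketch (the field names are assumptions, not the script's):

```python
players = [
    {"output": "s>", "alive": True, "bullet_delay": 0},
    {"output": ">", "alive": True, "bullet_delay": 3},
]

def wants_to_shoot(p):
    """A player fires when 's' was pressed, they are alive, and no cooldown remains."""
    return "s" in p["output"] and p["alive"] and p["bullet_delay"] == 0

print([wants_to_shoot(p) for p in players])  # [True, False]
```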
# Process Current Frame
# Reduce Delay
if(system_player1_bullet_delay != 0):
system_player1_bullet_delay -= 1
if(system_player2_bullet_delay != 0):
system_player2_bullet_delay -= 1
if(system_player3_bullet_delay != 0):
system_player3_bullet_delay -= 1
if(system_player4_bullet_delay != 0):
system_player4_bullet_delay -= 1
if(system_player1_build_delay != 0):
system_player1_build_delay -= 1
if(system_player2_build_delay != 0):
system_player2_build_delay -= 1
if(system_player3_build_delay != 0):
system_player3_build_delay -= 1
if(system_player4_build_delay != 0):
system_player4_build_delay -= 1
# Reduce Delay
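The eight decrement blocks above all implement the same saturating countdown; holding the counters in one dict makes that pattern explicit. A sketch with hypothetical names:

```python
def tick_cooldowns(cooldowns):
    """Decrement every positive counter by one tick, stopping at zero."""
    for key, value in cooldowns.items():
        if value > 0:
            cooldowns[key] = value - 1

delays = {"p1_bullet": 2, "p1_build": 0}
tick_cooldowns(delays)
print(delays)  # {'p1_bullet': 1, 'p1_build': 0}
```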
# Output Current Frame
system_tmp_str = ""
for x in range(0, len(system_pythonfights_space)):
for xx in range(0, len(system_pythonfights_space[x])):
system_tmp_str += system_pythonfights_space[x][xx]
system_tmp_str += "\n"
system_tmp_str += "-"
system_game_write_file = open("game.pyf", "a")
system_game_write_file.write(system_tmp_str+"\n")
system_game_write_file.close()
del system_tmp_str
# Output Current Frame
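Accumulating the frame character by character works, but `join` expresses the same serialization more directly. A sketch (`render_frame` is an assumed helper, not in the script; it produces the same row-per-line layout with "-" as the frame separator):

```python
def render_frame(space):
    """Serialize the grid: one line per row, '-' terminating the frame."""
    return "\n".join("".join(row) for row in space) + "\n-"

frame = render_frame([["#", "#"], ["*", " "]])
print(frame)
```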
if(system_frame == system_frame_limit):
break
system_frame += 1
# Create Game_____________________________________________________________________
print("Game created successfully")
import time
while True: time.sleep(1) # idle instead of busy-waiting so the loop doesn't pin a CPU core
#!python3.6
# 24/00/real_quick_ratio.py from pylangstudy/201708 (CC0-1.0)
import difflib

sm = difflib.SequenceMatcher()
print(sm)

sm = difflib.SequenceMatcher(a='abc', b='abc')
print('ratio:', sm.ratio())
print('quick_ratio:', sm.quick_ratio())
print('real_quick_ratio:', sm.real_quick_ratio())

sm = difflib.SequenceMatcher(a='abc', b='a b c')
print('ratio:', sm.ratio())
print('quick_ratio:', sm.quick_ratio())
print('real_quick_ratio:', sm.real_quick_ratio())

sm = difflib.SequenceMatcher(isjunk=lambda x: x in " \t", a='abc', b='a b c')
print(sm, sm.ratio())
print('ratio:', sm.ratio())
print('quick_ratio:', sm.quick_ratio())
print('real_quick_ratio:', sm.real_quick_ratio())
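The three methods are successively cheaper upper bounds: `ratio() <= quick_ratio() <= real_quick_ratio()`, which is why callers can use the quick variants to rule out poor matches before paying for `ratio()`. A small check of that ordering:

```python
import difflib

sm = difflib.SequenceMatcher(a="abcd", b="bcde")
r, q, rq = sm.ratio(), sm.quick_ratio(), sm.real_quick_ratio()
assert r <= q <= rq  # each method bounds the one before it
print(r, q, rq)  # 0.75 0.75 1.0
```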
# tests/test_user.py from bachya/pyflunearyou (MIT)
"""Define tests for the user report endpoints."""
from aiocache import SimpleMemoryCache
import aiohttp
import pytest
from pyflunearyou import Client
from pyflunearyou.helpers.report import CACHE_KEY_LOCAL_DATA, CACHE_KEY_STATE_DATA
from .common import (
    TEST_LATITUDE,
    TEST_LATITUDE_UNCONTAINED,
    TEST_LONGITUDE,
    TEST_LONGITUDE_UNCONTAINED,
    TEST_ZIP,
    load_fixture,
)
@pytest.mark.asyncio
async def test_no_explicit_client_session(aresponses):
"""Test not providing an explicit aiohttp ClientSession."""
cache = SimpleMemoryCache()
await cache.delete(CACHE_KEY_LOCAL_DATA)
await cache.delete(CACHE_KEY_STATE_DATA)
aresponses.add(
"api.v2.flunearyou.org",
"/map/markers",
"get",
aresponses.Response(
text=load_fixture("user_report_response.json"),
status=200,
headers={"Content-Type": "application/json; charset=utf-8"},
),
)
aresponses.add(
"api.v2.flunearyou.org",
"/states",
"get",
aresponses.Response(
text=load_fixture("states_response.json"),
status=200,
headers={"Content-Type": "application/json; charset=utf-8"},
),
)
client = Client()
info = await client.user_reports.status_by_coordinates(
TEST_LATITUDE, TEST_LONGITUDE
)
assert info == {
"local": {
"id": 2,
"city": "Los Angeles(90046)",
"place_id": "23818",
"zip": "90046",
"contained_by": "204",
"latitude": "34.114731",
"longitude": "-118.363724",
"none": 2,
"symptoms": 0,
"flu": 0,
"lepto": 0,
"dengue": 0,
"chick": 0,
"icon": "1",
},
"state": {
"name": "California",
"place_id": "204",
"lat": "37.250198",
"lon": "-119.750298",
"data": {
"symptoms_percentage": 12.32,
"none_percentage": 87.68,
"ili_percentage": 2.69,
"lepto_percentage": 0,
"dengue_percentage": 2.17,
"chick_percentage": 0,
"level": 3,
"overlay_color": "#00B7B6",
"total_surveys": 2119,
"symptoms": 261,
"no_symptoms": 1858,
"ili": 57,
"lepto": 0,
"dengue": 46,
"chick": 0,
},
"last_week_data": {
"symptoms_percentage": 14.29,
"none_percentage": 85.71,
"ili_percentage": 2.91,
"lepto_percentage": 0.05,
"dengue_percentage": 2.21,
"chick_percentage": 0,
"level": 3,
"overlay_color": "#00B7B6",
"total_surveys": 2128,
"symptoms": 304,
"no_symptoms": 1824,
"ili": 62,
"lepto": 1,
"dengue": 47,
"chick": 0,
},
},
}
@pytest.mark.asyncio
async def test_status_by_coordinates_success_id(aresponses):
"""Test getting user reports by latitude/longitude (contained ID)."""
cache = SimpleMemoryCache()
await cache.delete(CACHE_KEY_LOCAL_DATA)
await cache.delete(CACHE_KEY_STATE_DATA)
aresponses.add(
"api.v2.flunearyou.org",
"/map/markers",
"get",
aresponses.Response(
text=load_fixture("user_report_response.json"),
status=200,
headers={"Content-Type": "application/json; charset=utf-8"},
),
)
aresponses.add(
"api.v2.flunearyou.org",
"/states",
"get",
aresponses.Response(
text=load_fixture("states_response.json"),
status=200,
headers={"Content-Type": "application/json; charset=utf-8"},
),
)
async with aiohttp.ClientSession() as session:
client = Client(session=session)
info = await client.user_reports.status_by_coordinates(
TEST_LATITUDE, TEST_LONGITUDE
)
assert info == {
"local": {
"id": 2,
"city": "Los Angeles(90046)",
"place_id": "23818",
"zip": "90046",
"contained_by": "204",
"latitude": "34.114731",
"longitude": "-118.363724",
"none": 2,
"symptoms": 0,
"flu": 0,
"lepto": 0,
"dengue": 0,
"chick": 0,
"icon": "1",
},
"state": {
"name": "California",
"place_id": "204",
"lat": "37.250198",
"lon": "-119.750298",
"data": {
"symptoms_percentage": 12.32,
"none_percentage": 87.68,
"ili_percentage": 2.69,
"lepto_percentage": 0,
"dengue_percentage": 2.17,
"chick_percentage": 0,
"level": 3,
"overlay_color": "#00B7B6",
"total_surveys": 2119,
"symptoms": 261,
"no_symptoms": 1858,
"ili": 57,
"lepto": 0,
"dengue": 46,
"chick": 0,
},
"last_week_data": {
"symptoms_percentage": 14.29,
"none_percentage": 85.71,
"ili_percentage": 2.91,
"lepto_percentage": 0.05,
"dengue_percentage": 2.21,
"chick_percentage": 0,
"level": 3,
"overlay_color": "#00B7B6",
"total_surveys": 2128,
"symptoms": 304,
"no_symptoms": 1824,
"ili": 62,
"lepto": 1,
"dengue": 47,
"chick": 0,
},
},
}
@pytest.mark.asyncio
async def test_status_by_coordinates_success_measure(aresponses):
"""Test getting user reports by latitude/longitude (measurement)."""
cache = SimpleMemoryCache()
await cache.delete(CACHE_KEY_LOCAL_DATA)
await cache.delete(CACHE_KEY_STATE_DATA)
aresponses.add(
"api.v2.flunearyou.org",
"/map/markers",
"get",
aresponses.Response(
text=load_fixture("user_report_response.json"),
status=200,
headers={"Content-Type": "application/json; charset=utf-8"},
),
)
aresponses.add(
"api.v2.flunearyou.org",
"/states",
"get",
aresponses.Response(
text=load_fixture("states_response.json"),
status=200,
headers={"Content-Type": "application/json; charset=utf-8"},
),
)
async with aiohttp.ClientSession() as session:
client = Client(session=session)
info = await client.user_reports.status_by_coordinates(
TEST_LATITUDE_UNCONTAINED, TEST_LONGITUDE_UNCONTAINED
)
assert info == {
"local": {
"id": 3,
"city": "Corvallis(97330)",
"place_id": "21462",
"zip": "97330",
"contained_by": "239",
"latitude": "44.638504",
"longitude": "-123.292938",
"none": 3,
"symptoms": 0,
"flu": 0,
"lepto": 0,
"dengue": 0,
"chick": 0,
"icon": "1",
},
"state": {
"name": "California",
"place_id": "204",
"lat": "37.250198",
"lon": "-119.750298",
"data": {
"symptoms_percentage": 12.32,
"none_percentage": 87.68,
"ili_percentage": 2.69,
"lepto_percentage": 0,
"dengue_percentage": 2.17,
"chick_percentage": 0,
"level": 3,
"overlay_color": "#00B7B6",
"total_surveys": 2119,
"symptoms": 261,
"no_symptoms": 1858,
"ili": 57,
"lepto": 0,
"dengue": 46,
"chick": 0,
},
"last_week_data": {
"symptoms_percentage": 14.29,
"none_percentage": 85.71,
"ili_percentage": 2.91,
"lepto_percentage": 0.05,
"dengue_percentage": 2.21,
"chick_percentage": 0,
"level": 3,
"overlay_color": "#00B7B6",
"total_surveys": 2128,
"symptoms": 304,
"no_symptoms": 1824,
"ili": 62,
"lepto": 1,
"dengue": 47,
"chick": 0,
},
},
}
@pytest.mark.asyncio
async def test_status_by_zip_success(aresponses):
"""Test getting user reports by ZIP code."""
cache = SimpleMemoryCache()
await cache.delete(CACHE_KEY_LOCAL_DATA)
await cache.delete(CACHE_KEY_STATE_DATA)
aresponses.add(
"api.v2.flunearyou.org",
"/map/markers",
"get",
aresponses.Response(
text=load_fixture("user_report_response.json"),
status=200,
headers={"Content-Type": "application/json; charset=utf-8"},
),
)
aresponses.add(
"api.v2.flunearyou.org",
"/states",
"get",
aresponses.Response(
text=load_fixture("states_response.json"),
status=200,
headers={"Content-Type": "application/json; charset=utf-8"},
),
)
async with aiohttp.ClientSession() as session:
client = Client(session=session)
info = await client.user_reports.status_by_zip(TEST_ZIP)
assert info == {
"local": {
"id": 2,
"city": "Los Angeles(90046)",
"place_id": "23818",
"zip": "90046",
"contained_by": "204",
"latitude": "34.114731",
"longitude": "-118.363724",
"none": 2,
"symptoms": 0,
"flu": 0,
"lepto": 0,
"dengue": 0,
"chick": 0,
"icon": "1",
},
"state": {
"name": "California",
"place_id": "204",
"lat": "37.250198",
"lon": "-119.750298",
"data": {
"symptoms_percentage": 12.32,
"none_percentage": 87.68,
"ili_percentage": 2.69,
"lepto_percentage": 0,
"dengue_percentage": 2.17,
"chick_percentage": 0,
"level": 3,
"overlay_color": "#00B7B6",
"total_surveys": 2119,
"symptoms": 261,
"no_symptoms": 1858,
"ili": 57,
"lepto": 0,
"dengue": 46,
"chick": 0,
},
"last_week_data": {
"symptoms_percentage": 14.29,
"none_percentage": 85.71,
"ili_percentage": 2.91,
"lepto_percentage": 0.05,
"dengue_percentage": 2.21,
"chick_percentage": 0,
"level": 3,
"overlay_color": "#00B7B6",
"total_surveys": 2128,
"symptoms": 304,
"no_symptoms": 1824,
"ili": 62,
"lepto": 1,
"dengue": 47,
"chick": 0,
},
},
}
@pytest.mark.asyncio
async def test_status_by_zip_failure(aresponses):
"""Test getting user reports by ZIP code."""
cache = SimpleMemoryCache()
await cache.delete(CACHE_KEY_LOCAL_DATA)
await cache.delete(CACHE_KEY_STATE_DATA)
aresponses.add(
"api.v2.flunearyou.org",
"/map/markers",
"get",
aresponses.Response(
text=load_fixture("user_report_response.json"),
status=200,
headers={"Content-Type": "application/json; charset=utf-8"},
),
)
aresponses.add(
"api.v2.flunearyou.org",
"/states",
"get",
aresponses.Response(
text=load_fixture("states_response.json"),
status=200,
headers={"Content-Type": "application/json; charset=utf-8"},
),
)
async with aiohttp.ClientSession() as session:
client = Client(session=session)
info = await client.user_reports.status_by_zip("00000")
assert info == {}
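The expected-payload literal is repeated nearly verbatim across these tests; hoisting a baseline dict and overriding fields per test keeps them in sync. A minimal sketch (only a subset of the real fields, and the helper name is assumed):

```python
EXPECTED_LOCAL = {
    "id": 2,
    "city": "Los Angeles(90046)",
    "zip": "90046",
    "none": 2,
}

def expected_local(**overrides):
    """Return a copy of the baseline payload with selected fields replaced."""
    return {**EXPECTED_LOCAL, **overrides}

print(expected_local(id=3, city="Corvallis(97330)", zip="97330")["city"])  # Corvallis(97330)
```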
# src/model/models.py from parrondo/tutorial-LSTM (MIT)
import keras.backend as K
from keras.models import Sequential
from keras.layers.core import Flatten, Dense, Dropout
from keras.layers.convolutional import Convolution2D
from keras.layers.convolutional import MaxPooling2D
from keras.layers.convolutional import ZeroPadding2D
from keras.layers.convolutional import AveragePooling2D
import theano.tensor.nnet.abstract_conv as absconv
import h5py
def CNN(nb_classes, img_dim, pretr_weights_file=None, model_name=None):
"""
Build a Convolutional Neural Network
args : nb_classes (int) number of classes
img_dim (tuple of int) num_chan, height, width
pretr_weights_file (str) file holding pre-trained weights
returns : model (keras NN) the Neural Net model
"""
model = Sequential()
model.add(Convolution2D(32, 3, 3, name="convolution2d_1", input_shape=(3, 224, 224), border_mode="same", activation='relu'))
model.add(Convolution2D(32, 3, 3, name="convolution2d_2", border_mode="same", activation='relu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2), name="maxpooling2d_1"))
model.add(Convolution2D(64, 3, 3, name="convolution2d_3", border_mode="same", activation='relu'))
model.add(Convolution2D(64, 3, 3, name="convolution2d_4", border_mode="same", activation='relu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2), name="maxpooling2d_2"))
model.add(Convolution2D(128, 3, 3, name="convolution2d_5", border_mode="same", activation='relu'))
model.add(Convolution2D(128, 3, 3, name="convolution2d_6", border_mode="same", activation='relu'))
model.add(Convolution2D(128, 3, 3, name="convolution2d_7", border_mode="same", activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2, 2), name="maxpooling2d_3"))
# model.add(Convolution2D(256, 3, 3, name="convolution2d_8", border_mode="same", activation='relu'))
# model.add(Convolution2D(256, 3, 3, name="convolution2d_9", border_mode="same", activation='relu'))
# model.add(Convolution2D(256, 3, 3, name="convolution2d_10", border_mode="same", activation='relu'))
# model.add(MaxPooling2D((2,2), strides=(2, 2), name="maxpooling2d_4"))
# model.add(Convolution2D(512, 3, 3, name="convolution2d_11", border_mode="same", activation='relu'))
# model.add(Convolution2D(512, 3, 3, name="convolution2d_12", border_mode="same", activation='relu'))
# model.add(Convolution2D(512, 3, 3, name="convolution2d_13", border_mode="same", activation='relu'))
# model.add(MaxPooling2D((2,2), strides=(2, 2), name="maxpooling2d_5"))
model.add(Flatten(name="flatten_1"))
model.add(Dense(1024, activation='relu', name="dense_1"))
model.add(Dropout(0.5, name="dropout_1"))
model.add(Dense(1024, activation='relu', name="dense_2"))
model.add(Dropout(0.5, name="dropout_2"))
model.add(Dense(nb_classes, activation='softmax', name="dense_3"))
if model_name:
model.name = model_name
else:
model.name = "CNN"
if pretr_weights_file:
model.load_weights(pretr_weights_file)
model.layers.pop()
model.outputs = [model.layers[-1].output]
model.layers[-1].outbound_nodes = []
model.add(Dense(nb_classes, activation='softmax', name="dense_4"))
return model
def VGG(nb_classes, img_dim, pretr_weights_file=None, model_name=None):
"""
Build a VGG Convolutional Neural Network
args : nb_classes (int) number of classes
img_dim (tuple of int) num_chan, height, width
pretr_weights_file (str) file holding pre-trained weights
returns : model (keras NN) the Neural Net model
"""
model = Sequential()
model.add(ZeroPadding2D((1, 1), input_shape=img_dim, name="zeropadding2d_1"))
model.add(Convolution2D(64, 3, 3, activation='relu', name="convolution2d_1"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_2"))
model.add(Convolution2D(64, 3, 3, activation='relu', name="convolution2d_2"))
model.add(MaxPooling2D((2, 2), strides=(2, 2), name="maxpooling2d_1"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_3"))
model.add(Convolution2D(128, 3, 3, activation='relu', name="convolution2d_3"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_4"))
model.add(Convolution2D(128, 3, 3, activation='relu', name="convolution2d_4"))
model.add(MaxPooling2D((2, 2), strides=(2, 2), name="maxpooling2d_2"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_5"))
model.add(Convolution2D(256, 3, 3, activation='relu', name="convolution2d_5"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_6"))
model.add(Convolution2D(256, 3, 3, activation='relu', name="convolution2d_6"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_7"))
model.add(Convolution2D(256, 3, 3, activation='relu', name="convolution2d_7"))
model.add(MaxPooling2D((2, 2), strides=(2, 2), name="maxpooling2d_3"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_8"))
model.add(Convolution2D(512, 3, 3, activation='relu', name="convolution2d_8"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_9"))
model.add(Convolution2D(512, 3, 3, activation='relu', name="convolution2d_9"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_10"))
model.add(Convolution2D(512, 3, 3, activation='relu', name="convolution2d_10"))
model.add(MaxPooling2D((2, 2), strides=(2, 2), name="maxpooling2d_4"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_11"))
model.add(Convolution2D(512, 3, 3, activation='relu', name="convolution2d_11"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_12"))
model.add(Convolution2D(512, 3, 3, activation='relu', name="convolution2d_12"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_13"))
model.add(Convolution2D(512, 3, 3, activation='relu', name="convolution2d_13"))
model.add(MaxPooling2D((2, 2), strides=(2, 2), name="maxpooling2d_5"))
model.add(Flatten(name="flatten_1"))
model.add(Dense(4096, activation='relu', name="dense_1"))
model.add(Dropout(0.5, name="dropout_1"))
model.add(Dense(4096, activation='relu', name="dense_2"))
model.add(Dropout(0.5, name="dropout_2"))
model.add(Dense(1000, activation='softmax', name="dense_3"))
if model_name:
model.name = model_name
else:
model.name = "VGG"
if pretr_weights_file:
model.load_weights(pretr_weights_file)
model.layers.pop()
model.outputs = [model.layers[-1].output]
model.layers[-1].outbound_nodes = []
model.add(Dense(nb_classes, activation='softmax', name="dense_4"))
# Freeze layers until specified number
# for k in range(freeze_until):
# model.layers[k].trainable = True
return model
def VGG19(nb_classes, img_dim, pretr_weights_file=None, model_name=None):
"""
Build a VGG19 Convolutional Neural Network
args : nb_classes (int) number of classes
img_dim (tuple of int) num_chan, height, width
pretr_weights_file (str) file holding pre-trained weights
returns : model (keras NN) the Neural Net model
"""
model = Sequential()
model.add(ZeroPadding2D((1, 1), input_shape=img_dim, name="zeropadding2d_1"))
model.add(Convolution2D(64, 3, 3, activation='relu', name="convolution2d_1"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_2"))
model.add(Convolution2D(64, 3, 3, activation='relu', name="convolution2d_2"))
model.add(MaxPooling2D((2, 2), strides=(2, 2), name="maxpooling2d_1"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_3"))
model.add(Convolution2D(128, 3, 3, activation='relu', name="convolution2d_3"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_4"))
model.add(Convolution2D(128, 3, 3, activation='relu', name="convolution2d_4"))
model.add(MaxPooling2D((2, 2), strides=(2, 2), name="maxpooling2d_2"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_5"))
model.add(Convolution2D(256, 3, 3, activation='relu', name="convolution2d_5"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_6"))
model.add(Convolution2D(256, 3, 3, activation='relu', name="convolution2d_6"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_7"))
model.add(Convolution2D(256, 3, 3, activation='relu', name="convolution2d_7"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_8"))
model.add(Convolution2D(256, 3, 3, activation='relu', name="convolution2d_8"))
model.add(MaxPooling2D((2, 2), strides=(2, 2), name="maxpooling2d_3"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_9"))
model.add(Convolution2D(512, 3, 3, activation='relu', name="convolution2d_9"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_10"))
model.add(Convolution2D(512, 3, 3, activation='relu', name="convolution2d_10"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_11"))
model.add(Convolution2D(512, 3, 3, activation='relu', name="convolution2d_11"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_12"))
model.add(Convolution2D(512, 3, 3, activation='relu', name="convolution2d_12"))
model.add(MaxPooling2D((2, 2), strides=(2, 2), name="maxpooling2d_4"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_13"))
model.add(Convolution2D(512, 3, 3, activation='relu', name="convolution2d_13"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_14"))
model.add(Convolution2D(512, 3, 3, activation='relu', name="convolution2d_14"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_15"))
model.add(Convolution2D(512, 3, 3, activation='relu', name="convolution2d_15"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_16"))
model.add(Convolution2D(512, 3, 3, activation='relu', name="convolution2d_16"))
model.add(MaxPooling2D((2, 2), strides=(2, 2), name="maxpooling2d_5"))
model.add(Flatten(name="flatten_1"))
model.add(Dense(4096, activation='relu', name="dense_1"))
model.add(Dropout(0.5, name="dropout_1"))
model.add(Dense(4096, activation='relu', name="dense_2"))
model.add(Dropout(0.5, name="dropout_2"))
model.add(Dense(1000, activation='softmax', name="dense_3"))
if model_name:
model.name = model_name
else:
model.name = "VGG19"
if pretr_weights_file:
model.load_weights(pretr_weights_file)
model.layers.pop()
model.outputs = [model.layers[-1].output]
model.layers[-1].outbound_nodes = []
model.add(Dense(nb_classes, activation='softmax', name="dense_4"))
# Freeze layers until specified number
# for k in range(freeze_until):
# model.layers[k].trainable = True
return model
def VGG_celeba(nb_classes, img_dim, pretr_weights_file=None, model_name=None):
"""
Build a Convolutional Neural Network
args : nb_classes (int) number of classes
img_dim (tuple of int) num_chan, height, width
pretr_weights_file (str) file holding pre-trained weights
returns : model (keras NN) the Neural Net model
"""
model = Sequential()
model.add(Convolution2D(32, 3, 3, name="convolution2d_1", input_shape=(3, 224, 224), border_mode="same", activation='relu'))
model.add(Convolution2D(32, 3, 3, name="convolution2d_2", border_mode="same", activation='relu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2), name="maxpooling2d_1"))
model.add(Convolution2D(64, 3, 3, name="convolution2d_3", border_mode="same", activation='relu'))
model.add(Convolution2D(64, 3, 3, name="convolution2d_4", border_mode="same", activation='relu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2), name="maxpooling2d_2"))
model.add(Convolution2D(128, 3, 3, name="convolution2d_5", border_mode="same", activation='relu'))
model.add(Convolution2D(128, 3, 3, name="convolution2d_6", border_mode="same", activation='relu'))
model.add(Convolution2D(128, 3, 3, name="convolution2d_7", border_mode="same", activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2, 2), name="maxpooling2d_3"))
model.add(Convolution2D(256, 3, 3, name="convolution2d_8", border_mode="same", activation='relu'))
model.add(Convolution2D(256, 3, 3, name="convolution2d_9", border_mode="same", activation='relu'))
model.add(Convolution2D(256, 3, 3, name="convolution2d_10", border_mode="same", activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2, 2), name="maxpooling2d_4"))
model.add(Convolution2D(512, 3, 3, name="convolution2d_11", border_mode="same", activation='relu'))
model.add(Convolution2D(512, 3, 3, name="convolution2d_12", border_mode="same", activation='relu'))
model.add(Convolution2D(512, 3, 3, name="convolution2d_13", border_mode="same", activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2, 2), name="maxpooling2d_5"))
model.add(Flatten(name="flatten_1"))
model.add(Dense(4096, activation='relu', name="dense_1"))
model.add(Dropout(0.5, name="dropout_1"))
model.add(Dense(4096, activation='relu', name="dense_2"))
model.add(Dropout(0.5, name="dropout_2"))
model.add(Dense(2, activation='softmax', name="dense_3"))
if model_name:
model.name = model_name
else:
model.name = "VGG_celeba"
if pretr_weights_file:
model.load_weights(pretr_weights_file)
model.layers.pop()
model.outputs = [model.layers[-1].output]
model.layers[-1].outbound_nodes = []
model.add(Dense(nb_classes, activation='softmax', name="dense_4"))
# Freeze layers until specified number
# for k in range(freeze_until):
# model.layers[k].trainable = True
return model
def VGGCAM(nb_classes, img_dim, pretr_weights_file=None, model_name=None):
"""
Build VGGCAM network
args : nb_classes (int) number of classes
img_dim (tuple of int) num_chan, height, width
pretr_weights_file (str) file holding pre-trained weights
returns : model (keras NN) the Neural Net model
"""
model = Sequential()
model.add(ZeroPadding2D((1, 1), input_shape=img_dim, name="zeropadding2d_1"))
model.add(Convolution2D(64, 3, 3, activation='relu', name="convolution2d_1"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_2"))
model.add(Convolution2D(64, 3, 3, activation='relu', name="convolution2d_2"))
model.add(MaxPooling2D((2, 2), strides=(2, 2), name="maxpooling2d_1"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_3"))
model.add(Convolution2D(128, 3, 3, activation='relu', name="convolution2d_3"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_4"))
model.add(Convolution2D(128, 3, 3, activation='relu', name="convolution2d_4"))
model.add(MaxPooling2D((2, 2), strides=(2, 2), name="maxpooling2d_2"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_5"))
model.add(Convolution2D(256, 3, 3, activation='relu', name="convolution2d_5"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_6"))
model.add(Convolution2D(256, 3, 3, activation='relu', name="convolution2d_6"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_7"))
model.add(Convolution2D(256, 3, 3, activation='relu', name="convolution2d_7"))
model.add(MaxPooling2D((2, 2), strides=(2, 2), name="maxpooling2d_3"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_8"))
model.add(Convolution2D(512, 3, 3, activation='relu', name="convolution2d_8"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_9"))
model.add(Convolution2D(512, 3, 3, activation='relu', name="convolution2d_9"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_10"))
model.add(Convolution2D(512, 3, 3, activation='relu', name="convolution2d_10"))
model.add(MaxPooling2D((2, 2), strides=(2, 2), name="maxpooling2d_4"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_11"))
model.add(Convolution2D(512, 3, 3, activation='relu', name="convolution2d_11"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_12"))
model.add(Convolution2D(512, 3, 3, activation='relu', name="convolution2d_12"))
model.add(ZeroPadding2D((1, 1), name="zeropadding2d_13"))
model.add(Convolution2D(512, 3, 3, activation='relu', name="convolution2d_13"))
# Add another conv layer with ReLU + GAP
model.add(Convolution2D(1024, 3, 3, activation='relu', border_mode="same", name="convolution2d_14"))
model.add(AveragePooling2D((14, 14), name="average_pooling2d_1"))
model.add(Flatten(name="flatten_1"))
# Add the W layer
model.add(Dense(10, activation='softmax', name="dense_1"))
if model_name:
model.name = model_name
else:
model.name = "VGGCAM"
if pretr_weights_file:
with h5py.File(pretr_weights_file, "r") as hw:
for k in range(hw.attrs['nb_layers']):
g = hw['layer_{}'.format(k)]
weights = [g['param_{}'.format(p)] for p in range(g.attrs['nb_params'])]
model.layers[k].set_weights(weights)
if model.layers[k].name == "convolution2d_13":
break
return model
def get_classmap(model, X, nb_classes, batch_size, num_input_channels, ratio):
inc = model.layers[0].input
conv6 = model.layers[-4].output
conv6_resized = absconv.bilinear_upsampling(conv6, ratio,
batch_size=batch_size,
num_input_channels=num_input_channels)
WT = model.layers[-1].W.T
conv6_resized = K.reshape(conv6_resized, (-1, num_input_channels, 224 * 224))
classmap = K.dot(WT, conv6_resized).reshape((-1, nb_classes, 224, 224))
get_cmap = K.function([inc], classmap)
return get_cmap([X])
def load(model_name, nb_classes, img_dim, pretr_weights_file=None):
if model_name == "VGG":
model = VGG(nb_classes, img_dim, pretr_weights_file=pretr_weights_file, model_name=None)
elif model_name == "VGG19":
model = VGG19(nb_classes, img_dim, pretr_weights_file=pretr_weights_file, model_name=None)
elif model_name == "VGGCAM":
model = VGGCAM(nb_classes, img_dim, pretr_weights_file=pretr_weights_file, model_name=None)
elif model_name == "CNN":
model = CNN(nb_classes, img_dim, pretr_weights_file=pretr_weights_file, model_name=None)
elif model_name == "VGG_celeba":
model = VGG_celeba(nb_classes, img_dim, pretr_weights_file=pretr_weights_file, model_name=None)
return model
| 47.842932 | 128 | 0.682808 | 2,481 | 18,276 | 4.88029 | 0.060459 | 0.10968 | 0.119673 | 0.056822 | 0.91328 | 0.895524 | 0.893294 | 0.884044 | 0.878345 | 0.875372 | 0 | 0.072118 | 0.154793 | 18,276 | 381 | 129 | 47.968504 | 0.711724 | 0.119446 | 0 | 0.731225 | 0 | 0 | 0.170026 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027668 | false | 0 | 0.035573 | 0 | 0.090909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e4f679bb2c7e28fbed16fec16cbd930c0b40cdbd | 17,348 | py | Python | src/ai/utils/.ipynb_checkpoints/predictor-checkpoint.py | carlov93/predictive_maintenance | eb00b82bde02668387d0308571296a82f78abef6 | [
"MIT"
] | 1 | 2020-02-11T07:50:33.000Z | 2020-02-11T07:50:33.000Z | src/ai/utils/.ipynb_checkpoints/predictor-checkpoint.py | carlov93/predictive_maintenance | eb00b82bde02668387d0308571296a82f78abef6 | [
"MIT"
] | 12 | 2020-03-24T18:16:51.000Z | 2022-03-12T00:15:55.000Z | src/ai/utils/.ipynb_checkpoints/predictor-checkpoint.py | carlov93/predictive_maintenance | eb00b82bde02668387d0308571296a82f78abef6 | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
import torch.optim as optim
import pandas as pd
import numpy as np
import builtins
class PredictorMse():
def __init__(self, model, criterion, path_data, columns_to_ignore):
self.model = model
self.criterion = criterion
self.path_data = path_data
self.column_names_data = self.get_column_names_data()
self.columns_to_ignore = columns_to_ignore
def get_column_names_data(self):
with open(self.path_data, 'r') as f:
header = f.readline().replace('\n','')
return header.split(",")
def create_column_names_result(self):
column_names_target = [column_name+" target" for column_name in self.column_names_data if column_name not in self.columns_to_ignore+["ID"]]
column_names_predicted = [column_name+" predicted" for column_name in self.column_names_data if column_name not in self.columns_to_ignore+["ID"]]
column_names_loss_per_sensor = [column_name+" share of loss " for column_name in self.column_names_data if column_name not in self.columns_to_ignore+["ID"]]
column_names_residuals= ["residual "+column_name for column_name in self.column_names_data if column_name not in self.columns_to_ignore+["ID"]]
return ["ID"] + column_names_target + column_names_predicted + ["loss"] + column_names_loss_per_sensor + column_names_residuals
def predict(self, data_loader):
results = pd.DataFrame(columns=self.create_column_names_result())
self.model.eval()
with torch.no_grad():
print("Start predicting.")
for batch_number, data in enumerate(data_loader):
input_data, target_data = data
# Store ID of target sample
id_target = target_data[:,0] #ID must be on first position!
# De-select ID feature in both input_data and target data for inference
input_data = torch.from_numpy(input_data.numpy()[:,:,1:]) # ID must be on first position!
target_data = torch.from_numpy(target_data.numpy()[:,1:]) # ID must be on first position!
# Initialize hidden and cell state
hidden = self.model.init_hidden()
# Forward propagation
output = self.model(input_data, hidden)
# Calculate loss
loss = self.criterion(output, target_data)
loss_share_per_sensor = self.criterion.share_per_sensor(output, target_data)
for batch in range(self.model.batch_size):
# Reshape and Calculate prediction metrics
predicted_data = output[batch,:].data.numpy().tolist()
ground_truth = target_data[batch,:].data.numpy().tolist()
loss_share_per_sensor_np = loss_share_per_sensor[batch,:].data.numpy().tolist()
residuals = [target_i - prediction_i for target_i, prediction_i in zip(ground_truth, predicted_data)]
# Add values to dataframe
data = [id_target[batch].item()] + ground_truth + predicted_data + [loss[batch].item()] + loss_share_per_sensor_np + residuals
results = results.append(pd.Series(data, index=results.columns), ignore_index=True)
# Print status
if (batch_number*self.model.batch_size)%5000 == 0:
print("Current status: " + str(batch_number*self.model.batch_size) + " samples are predicted.")
print("Finished predicting.")
return results
class PredictorMle():
def __init__(self, model, criterion, path_data, columns_to_ignore):
self.model = model
self.criterion = criterion
self.path_data = path_data
self.column_names_data = self.get_column_names_data()
self.columns_to_ignore = columns_to_ignore
def get_column_names_data(self):
with open(self.path_data, 'r') as f:
header = f.readline().replace('\n','')
return header.split(",")
def create_column_names_result(self):
column_names_target = [column_name+" target" for column_name in self.column_names_data if column_name not in self.columns_to_ignore+["ID"]]
column_names_predicted = [column_name+" predicted" for column_name in self.column_names_data if column_name not in self.columns_to_ignore+["ID"]]
column_names_loss_per_sensor = [column_name+" share of loss " for column_name in self.column_names_data if column_name not in self.columns_to_ignore+["ID"]]
column_names_residuals= ["normalised residual "+column_name for column_name in self.column_names_data if column_name not in self.columns_to_ignore+["ID"]]
return ["ID"] + column_names_target + column_names_predicted + ["loss"] + column_names_loss_per_sensor + column_names_residuals
def predict(self, data_loader):
results = pd.DataFrame(columns=self.create_column_names_result())
self.model.eval()
with torch.no_grad():
print("Start predicting")
for batch_number, data in enumerate(data_loader):
input_data, target_data = data
# Store ID of target sample
id_target = int(target_data[:,0].item()) #ID must be on first position!
# De-select ID feature in both input_data and target data for inference
input_data = torch.from_numpy(input_data.numpy()[:,:,1:]) # ID must be on first position!
target_data = torch.from_numpy(target_data.numpy()[:,1:]) # ID must be on first position!
# Initialize hidden and cell state
hidden = self.model.init_hidden()
# Forward propagation
y_hat, tau = self.model(input_data, hidden)
# Because sigma is parameterised inside the LossModuleMle as sigma_t = exp(tau_t),
# we revert this transformation here to recover sigma from the network output tau.
sigma = torch.exp(tau)
# Calculate loss
loss = self.criterion(y_hat, target_data)
loss_share_per_sensor = self.criterion.share_per_sensor(y_hat, target_data)
# Reshape and Calculate prediction metrics
y_hat = torch.squeeze(y_hat)
predicted_data = y_hat.data.numpy().tolist()
sigma = torch.squeeze(sigma)
sigma = sigma.data.numpy().tolist()
target_data = torch.squeeze(target_data)
target_data = target_data.data.numpy().tolist()
loss_share_per_sensor = torch.squeeze(loss_share_per_sensor)
loss_share_per_sensor = loss_share_per_sensor.data.numpy().tolist()
normalised_residuals = [(target_i - prediction_i)/sigma_i for target_i, prediction_i, sigma_i in zip(target_data, predicted_data, sigma)]
# Add values to dataframe
data = [id_target] + target_data + predicted_data + [loss.item()] + loss_share_per_sensor + normalised_residuals
results = results.append(pd.Series(data, index=results.columns), ignore_index=True)
# Print status
if id_target%5000 == 0:
print("Current status: " + str(id_target) + " samples are predicted.")
print("Finished predicting")
return results
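The MLE post-processing above can be checked with a tiny standalone sketch (the numbers below are illustrative, not model outputs): the network emits tau, sigma is recovered as exp(tau), and each residual is divided by its sigma.

```python
import math

# Illustrative values only: tau as the network would emit it,
# sigma recovered via the exp transform, residuals normalised by sigma.
tau = [0.0, math.log(2.0)]
sigma = [math.exp(t) for t in tau]  # [1.0, 2.0]
target = [1.0, 4.0]
prediction = [0.5, 2.0]
normalised = [(t - p) / s for t, p, s in zip(target, prediction, sigma)]
print(normalised)  # [0.5, 1.0]
```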
class PredictorMultiTaskLearning():
def __init__(self, model, criterion, path_data, columns_to_ignore, threshold_anomaly):
self.model = model
self.criterion = criterion
self.path_data = path_data
self.column_names_data = self.get_column_names_data()
self.columns_to_ignore = columns_to_ignore
self.threshold_anomaly = threshold_anomaly
def get_column_names_data(self):
with open(self.path_data, 'r') as f:
header = f.readline().replace('\n','')
return header.split(",")
def create_column_names_result(self):
column_names_target = [column_name+" target" for column_name in self.column_names_data
if column_name not in self.columns_to_ignore+["ID"]]
column_names_predicted = [column_name+" predicted" for column_name in self.column_names_data
if column_name not in self.columns_to_ignore+["ID"]]
column_names_loss_per_sensor = [column_name+" share of loss " for column_name in self.column_names_data
if column_name not in self.columns_to_ignore+["ID"]]
column_names_residuals= ["residual "+column_name for column_name in self.column_names_data
if column_name not in self.columns_to_ignore+["ID"]]
column_names_latent_space= ["latent_space_"+str(i) for i in range(self.model.n_hidden_fc_ls_analysis)]
column_names = ["ID"] + column_names_target + column_names_predicted + ["loss"] + \
column_names_loss_per_sensor + column_names_residuals + column_names_latent_space
return column_names
def predict(self, data_loader):
results = pd.DataFrame(columns=self.create_column_names_result())
self.model.eval()
with torch.no_grad():
print("Start predicting.")
for batch_number, data in enumerate(data_loader):
input_data, target_data = data
# Store ID of target sample
id_target = target_data[:,0] #ID must be on first position!
# De-select ID feature in both input_data and target data for inference
input_data = torch.from_numpy(input_data.numpy()[:,:,1:]) # ID must be on first position!
target_data = torch.from_numpy(target_data.numpy()[:,1:]) # ID must be on first position!
# Initialize hidden and cell state
hidden = self.model.init_hidden()
# Forward propagation
prediction, _ = self.model(input_data, hidden)
latent_space = self.model.current_latent_space
# Calculate loss (subnetwork for latent space analysis no longer considered)
loss_prediction_network = self.criterion(prediction, target_data)
loss_share_per_sensor = self.criterion.share_per_sensor(prediction, target_data)
for batch in range(self.model.batch_size):
# Reshape and Calculate prediction metrics
predicted_data = prediction[batch,:].data.numpy().tolist()
ground_truth = target_data[batch,:].data.numpy().tolist()
residuals = [target_i - prediction_i for target_i, prediction_i in zip(ground_truth, predicted_data)]
latent_space_np = latent_space[batch,:].data.numpy().tolist()
loss = loss_prediction_network[batch].item()
loss_share_per_sensor_np = loss_share_per_sensor[batch,:].data.numpy().tolist()
# Add values to dataframe
data = [id_target[batch].item()] + ground_truth + predicted_data + [loss] + loss_share_per_sensor_np + residuals + latent_space_np
results = results.append(pd.Series(data, index=results.columns), ignore_index=True)
# Print status
if (batch_number*self.model.batch_size)%5000 == 0:
print("Current status: " + str(batch_number*self.model.batch_size) + " samples are predicted.")
return results
def detect_anomaly(self, results_prediction, smooth_rate):
# exponentially smooth the loss to reduce noise
smoothed_loss = []
for i,value in enumerate(results_prediction.loc[:,"loss"]):
if i==0:
smoothed_loss.append(value)
else:
x = smooth_rate * value + (1 - smooth_rate) * smoothed_loss[-1]
smoothed_loss.append(x)
results_prediction["smoothed_loss"]=smoothed_loss
# tag sample as an anomaly (1) if the loss exceeds the given threshold, otherwise 0
results_prediction["anomaly"] = np.where(results_prediction["smoothed_loss"]>=self.threshold_anomaly, 1, 0)
return results_prediction
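The smoothing-and-threshold rule in detect_anomaly can be exercised on its own; the loss series, smooth_rate and threshold below are made-up values for illustration.

```python
import numpy as np

# Made-up loss series with two spiky samples in the middle.
loss = [0.1, 0.1, 5.0, 5.0, 0.1]
smooth_rate, threshold = 0.5, 1.0

smoothed = []
for i, value in enumerate(loss):
    if i == 0:
        smoothed.append(value)
    else:
        # exponential moving average: s_i = r * x_i + (1 - r) * s_{i-1}
        smoothed.append(smooth_rate * value + (1 - smooth_rate) * smoothed[-1])

anomaly = np.where(np.array(smoothed) >= threshold, 1, 0)
print(anomaly.tolist())  # [0, 0, 1, 1, 1]
```

Note that the smoothed loss stays above the threshold for one sample after the spikes end, which is the trade-off this smoothing introduces.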
class PredictorMultiTaskLearningCopy():
def __init__(self, model, criterion, path_data, columns_to_ignore, threshold_anomaly):
self.model = model
self.criterion = criterion
self.path_data = path_data
self.column_names_data = self.get_column_names_data()
self.columns_to_ignore = columns_to_ignore
self.threshold_anomaly = threshold_anomaly
def get_column_names_data(self):
with open(self.path_data, 'r') as f:
header = f.readline().replace('\n','')
return header.split(",")
def create_column_names_result(self):
column_names_target = [column_name+" target" for column_name in self.column_names_data
if column_name not in self.columns_to_ignore+["ID"]]
column_names_predicted = [column_name+" predicted" for column_name in self.column_names_data
if column_name not in self.columns_to_ignore+["ID"]]
column_names_loss_per_sensor = [column_name+" share of loss " for column_name in self.column_names_data
if column_name not in self.columns_to_ignore+["ID"]]
column_names_residuals= ["residual "+column_name for column_name in self.column_names_data
if column_name not in self.columns_to_ignore+["ID"]]
column_names_latent_space= ["latent_space_"+str(i) for i in range(self.model.n_hidden_fc_ls_analysis)]
column_names = ["ID"] + column_names_target + column_names_predicted + ["loss"] + \
column_names_loss_per_sensor + column_names_residuals + column_names_latent_space
return column_names
def predict(self, data):
results = pd.DataFrame(columns=self.create_column_names_result())
self.model.eval()
with torch.no_grad():
input_data, target_data = data
# Store ID of target sample
id_target = target_data[:,0] #ID must be on first position!
# De-select ID feature in both input_data and target data for inference
input_data = torch.from_numpy(input_data.numpy()[:,:,1:]) # ID must be on first position!
target_data = torch.from_numpy(target_data.numpy()[:,1:]) # ID must be on first position!
# Initialize hidden and cell state
hidden = self.model.init_hidden()
# Forward propagation
prediction, _ = self.model(input_data, hidden)
latent_space = self.model.current_latent_space
# Calculate loss (subnetwork for latent space analysis no longer considered)
loss_prediction_network = self.criterion(prediction, target_data)
loss_share_per_sensor = self.criterion.share_per_sensor(prediction, target_data)
batch_results= []
for batch in range(self.model.batch_size):
# Reshape and Calculate prediction metrics
predicted_data = prediction[batch,:].data.numpy().tolist()
ground_truth = target_data[batch,:].data.numpy().tolist()
residuals = [target_i - prediction_i for target_i, prediction_i in zip(ground_truth, predicted_data)]
latent_space_np = latent_space[batch,:].data.numpy().tolist()
loss = loss_prediction_network[batch].item()
loss_share_per_sensor_np = loss_share_per_sensor[batch,:].data.numpy().tolist()
# Add values to dataframe
data = str(id_target[batch].item())+str(";")+str(ground_truth)+str(";")+str(predicted_data)+str(";")+str(loss) +str(";")+str(loss_share_per_sensor_np)+str(";")+str(residuals)+str(";")+str(latent_space_np)+str("\n")
batch_results.append(data)
return batch_results
def detect_anomaly(self, results_prediction, smooth_rate):
# exponentially smooth the loss to reduce noise
smoothed_loss = []
for i,value in enumerate(results_prediction.loc[:,"loss"]):
if i==0:
smoothed_loss.append(value)
else:
x = smooth_rate * value + (1 - smooth_rate) * smoothed_loss[-1]
smoothed_loss.append(x)
results_prediction["smoothed_loss"]=smoothed_loss
# tag sample as an anomaly (1) if the loss exceeds the given threshold, otherwise 0
results_prediction["anomaly"] = np.where(results_prediction["smoothed_loss"]>=self.threshold_anomaly, 1, 0)
return results_prediction | 55.248408 | 230 | 0.618169 | 2,092 | 17,348 | 4.827916 | 0.07935 | 0.082772 | 0.041584 | 0.037624 | 0.916931 | 0.897624 | 0.887129 | 0.877822 | 0.872475 | 0.862475 | 0 | 0.003341 | 0.292541 | 17,348 | 314 | 231 | 55.248408 | 0.819604 | 0.104047 | 0 | 0.801843 | 0 | 0 | 0.036212 | 0 | 0 | 0 | 0 | 0.003185 | 0 | 1 | 0.082949 | false | 0 | 0.02765 | 0 | 0.193548 | 0.036866 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
# File: draw_num_ue.py | Repo: T610/MEC | License: MIT
import numpy as np
from wolf_agent import WoLFAgent
from matrix_game_local_only import MatrixGame_local
from matrix_game_mec_only import MatrixGame_mec
from matrix_game import MatrixGame
from queue_relay import QueueRelay
import matplotlib.pyplot as plt
from gpd import GPD ## TLIU
from dataToExcel import DTE ## TLIU
import xlrd ## TLIU
import xlsxwriter ## TLIU
class draw_picture():
def __init__(self):
self.bandwidth = []
self.usersnumber = []
def run_for_all_mode(self,bw,un):
nb_episode = 700
actions = np.arange(8)
user_num = un
lambda_n = np.zeros(user_num)
        # Cycles required per bit: 70~800 cycles/bit; assign arrival rates round-robin
        rates = [0.001, 0.01, 0.1, 0.001, 0.01]
        for i in range(user_num):
            lambda_n[i] = rates[i % 5]
actions_set = [[0, 5 * pow(10, 6), 0.4],
[0, 5 * pow(10, 6), 0.4],
[0, 5 * pow(10, 6), 0.4],
[0, 5 * pow(10, 6), 0.4],
[1, 0, 0.4],
[1, 0, 0.4],
[1, 0, 0.4],
[1, 0, 0.4]]
GPD1_array = [4 * pow(10, 6) for _ in range(user_num)]
GPD2_array = [0.3 for _ in range(user_num)]
# init wolf agent
wolf_agent_array = []
for i in range(user_num):
wolf_agent_array.append(WoLFAgent(alpha=0.1, actions=actions, high_delta=0.004, low_delta=0.002))
queue_relay_array = []
for i in range(user_num):
queue_relay_array.append(QueueRelay(lambda_n[i], GPD1_array[i], GPD2_array[i]))
        # set reward function
        # reward = Reward()
reward_history = []
# init_Queue_relay
Q_array_histroy = [[10] for i in range(user_num)] ## TLIU
for episode in range(nb_episode):
Q_array = []
Qx_array = []
Qy_array = []
Qz_array = []
M1_array = []
M2_array = []
for i in range(user_num):
Q_array.append(queue_relay_array[i].Q)
Qx_array.append(queue_relay_array[i].Qx)
Qy_array.append(queue_relay_array[i].Qy)
Qz_array.append(queue_relay_array[i].Qz)
M1_array.append(queue_relay_array[i].M1)
M2_array.append(queue_relay_array[i].M2)
for i in range(user_num):
Q_array_histroy[i].append(Q_array[i])
if episode % 50 == 0 and episode != 0:
for i in range(user_num):
aa = GPD()
data = Q_array_histroy[i]
# data = [10000000000000 for i in range(200) ]
# res = aa.gpd( data , 3.96*pow(10,5) )
res = aa.gpd(data, 3.96 * pow(10, 6))
if res:
queue_relay_array[i].GPD1 = res[0][0]
queue_relay_array[i].GPD2 = res[0][1]
queue_relay_array[i].updateM1()
queue_relay_array[i].updateM2()
iteration_actions = []
for i in range(user_num):
iteration_actions.append(wolf_agent_array[i].act())
game = MatrixGame(actions=iteration_actions, Q=Q_array,
Qx=Qx_array, Qy=Qy_array, Qz=Qz_array,
M1=M1_array,
M2=M2_array , BW= bw)
reward, bn, lumbda, rff = game.step(actions=iteration_actions)
print("episode",episode,"reward",sum(reward))
for i in range(user_num):
# wolf agent act
# update_Queue_relay
queue_relay_array[i].lumbda = lumbda[i]
queue_relay_array[i].updateQ(bn[i], actions_set[iteration_actions[i]][0], rff[i])
queue_relay_array[i].updateQx()
queue_relay_array[i].updateQy()
queue_relay_array[i].updateQz()
# reward step
reward_history.append(sum(reward))
for i in range(user_num):
wolf_agent_array[i].observe(reward=reward[i])
# for i in range(user_num):
# print(wolf_agent_array[i].pi_average)
# plt.plot(np.arange(len(reward_history)), reward_history, label="")
# plt.show()
return reward_history[-1]
def run_for_only_mec(self,bw1,un1):
nb_episode = 700
actions_set = [
[1, 0, 0.1],
[1, 0, 0.5],
[1, 0, 1],
[1, 0, 2]]
actions = np.arange(len(actions_set))
user_num = un1
lambda_n = np.zeros(user_num)
        # Cycles required per bit: 70~800 cycles/bit; assign arrival rates round-robin
        rates = [0.001, 0.01, 0.1, 0.001, 0.01]
        for i in range(user_num):
            lambda_n[i] = rates[i % 5]
GPD1_array = [4 * pow(10, 6) for _ in range(user_num)]
GPD2_array = [0.3 for _ in range(user_num)]
# init wolf agent
wolf_agent_array = []
for i in range(user_num):
wolf_agent_array.append(WoLFAgent(alpha=0.1, actions=actions, high_delta=0.004, low_delta=0.002))
queue_relay_array = []
for i in range(user_num):
queue_relay_array.append(QueueRelay(lambda_n[i], GPD1_array[i], GPD2_array[i]))
        # set reward function
        # reward = Reward()
reward_history = []
# init_Queue_relay
Q_array_histroy = [[10] for i in range(user_num)] ## TLIU
for episode in range(nb_episode):
Q_array = []
Qx_array = []
Qy_array = []
Qz_array = []
M1_array = []
M2_array = []
for i in range(user_num):
Q_array.append(queue_relay_array[i].Q)
Qx_array.append(queue_relay_array[i].Qx)
Qy_array.append(queue_relay_array[i].Qy)
Qz_array.append(queue_relay_array[i].Qz)
M1_array.append(queue_relay_array[i].M1)
M2_array.append(queue_relay_array[i].M2)
for i in range(user_num):
Q_array_histroy[i].append(Q_array[i])
if episode % 50 == 0 and episode != 0:
for i in range(user_num):
aa = GPD()
data = Q_array_histroy[i]
# data = [10000000000000 for i in range(200) ]
# res = aa.gpd( data , 3.96*pow(10,5) )
res = aa.gpd(data, 3.96 * pow(10, 6))
if res:
queue_relay_array[i].GPD1 = res[0][0]
queue_relay_array[i].GPD2 = res[0][1]
queue_relay_array[i].updateM1()
queue_relay_array[i].updateM2()
iteration_actions = []
for i in range(user_num):
iteration_actions.append(wolf_agent_array[i].act())
game = MatrixGame_mec(actions=iteration_actions, Q=Q_array,
Qx=Qx_array, Qy=Qy_array, Qz=Qz_array,
M1=M1_array,
M2=M2_array, BW=bw1)
#print('Q value :' + str(Q_array) + str(Qx_array) + str(Qy_array) + str(Qz_array))
reward, bn, lumbda, rff = game.step(actions=iteration_actions)
for i in range(user_num):
# wolf agent act
# update_Queue_relay
queue_relay_array[i].lumbda = lumbda[i]
queue_relay_array[i].updateQ(bn[i], actions_set[iteration_actions[i]][0], rff[i])
queue_relay_array[i].updateQx()
queue_relay_array[i].updateQy()
queue_relay_array[i].updateQz()
# reward step
reward_history.append(sum(reward))
for i in range(user_num):
wolf_agent_array[i].observe(reward=reward[i])
# for i in range(user_num):
# print(wolf_agent_array[i].pi_average)
# plt.plot(np.arange(len(reward_history)), reward_history, label="")
# plt.show()
return reward_history[-1]
def run_for_only_local(self,bw2,un2):
nb_episode = 700
actions_set = [
[0, 5 * pow(10, 6), 0],
[0, 10 * pow(10, 6), 0],
[0, 20 * pow(10, 6), 0],
[0, 30 * pow(10, 6), 0]]
actions = np.arange(len(actions_set))
user_num = un2
lambda_n = np.zeros(user_num)
        # Cycles required per bit: 70~800 cycles/bit; assign arrival rates round-robin
        rates = [0.001, 0.01, 0.1, 0.001, 0.01]
        for i in range(user_num):
            lambda_n[i] = rates[i % 5]
GPD1_array = [4 * pow(10, 6) for _ in range(user_num)]
GPD2_array = [0.3 for _ in range(user_num)]
# init wolf agent
wolf_agent_array = []
for i in range(user_num):
wolf_agent_array.append(WoLFAgent(alpha=0.1, actions=actions, high_delta=0.004, low_delta=0.002))
queue_relay_array = []
for i in range(user_num):
queue_relay_array.append(QueueRelay(lambda_n[i], GPD1_array[i], GPD2_array[i]))
        # set reward function
        # reward = Reward()
reward_history = []
# init_Queue_relay
Q_array_histroy = [[10] for i in range(user_num)] ## TLIU
for episode in range(nb_episode):
Q_array = []
Qx_array = []
Qy_array = []
Qz_array = []
M1_array = []
M2_array = []
for i in range(user_num):
Q_array.append(queue_relay_array[i].Q)
Qx_array.append(queue_relay_array[i].Qx)
Qy_array.append(queue_relay_array[i].Qy)
Qz_array.append(queue_relay_array[i].Qz)
M1_array.append(queue_relay_array[i].M1)
M2_array.append(queue_relay_array[i].M2)
for i in range(user_num):
Q_array_histroy[i].append(Q_array[i])
if episode % 50 == 0 and episode != 0:
for i in range(user_num):
aa = GPD()
data = Q_array_histroy[i]
# data = [10000000000000 for i in range(200) ]
# res = aa.gpd( data , 3.96*pow(10,5) )
res = aa.gpd(data, 3.96 * pow(10, 6))
if res:
queue_relay_array[i].GPD1 = res[0][0]
queue_relay_array[i].GPD2 = res[0][1]
queue_relay_array[i].updateM1()
queue_relay_array[i].updateM2()
iteration_actions = []
for i in range(user_num):
iteration_actions.append(wolf_agent_array[i].act())
game = MatrixGame_local(actions=iteration_actions, Q=Q_array,
Qx=Qx_array, Qy=Qy_array, Qz=Qz_array,
M1=M1_array,
M2=M2_array, BW=bw2)
reward, bn, lumbda, rff = game.step(actions=iteration_actions)
for i in range(user_num):
# wolf agent act
# update_Queue_relay
queue_relay_array[i].lumbda = lumbda[i]
queue_relay_array[i].updateQ(bn[i], actions_set[iteration_actions[i]][0], rff[i])
queue_relay_array[i].updateQx()
queue_relay_array[i].updateQy()
queue_relay_array[i].updateQz()
# reward step
reward_history.append(sum(reward))
for i in range(user_num):
wolf_agent_array[i].observe(reward=reward[i])
# for i in range(user_num):
# print(wolf_agent_array[i].pi_average)
# plt.plot(np.arange(len(reward_history)), reward_history, label="")
# plt.show()
return reward_history[-1]
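The three `run_for_*` methods above are nearly identical and differ only in the game class and action set. A hedged refactor sketch shows how both could be injected as parameters; a toy game class stands in here, since `MatrixGame` and its variants are not importable in isolation:

```python
class ToyGame:
    """Toy stand-in for MatrixGame / MatrixGame_mec / MatrixGame_local."""
    def __init__(self, bw):
        self.bw = bw

    def step(self, actions):
        # Trivial reward: sum of action indices scaled by bandwidth
        return sum(actions) * self.bw

def run_mode(game_cls, actions, bw):
    """Run one step with an injected game class instead of duplicating the loop per mode."""
    game = game_cls(bw)
    return game.step(actions)

reward = run_mode(ToyGame, actions=[1, 2], bw=2.0)  # 6.0
```

With the real classes, the shared episode loop (queue bookkeeping, GPD refit, WoLF updates) would live in one function, and only `game_cls` and `actions_set` would vary per mode.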
if __name__ == '__main__':
#bandwidth = np.array([2*pow(10,6),4*pow(10,6),6*pow(10,6),8*pow(10,6),10*pow(10,6),12*pow(10,6),14*pow(10,6),16*pow(10,6)])
usernumber = np.array([10,15,20,25,30,35,40,45])
draw = draw_picture()
cost_of_all = []
cost_of_mec = []
cost_of_local = []
cost_of_all_6mhz = []
cost_of_all_8mhz = []
cost_of_all_12mhz = []
for i in range(8):
cost_of_all.append(draw.run_for_all_mode(bw=10*pow(10,6), un=usernumber[i]))
cost_of_mec.append(draw.run_for_only_mec(bw1=10*pow(10,6), un1=usernumber[i]))
cost_of_local.append(draw.run_for_only_local(bw2=10*pow(10,6), un2=usernumber[i]))
cost_of_all_6mhz.append(draw.run_for_all_mode(bw=6 * pow(10, 6), un=usernumber[i]))
cost_of_all_8mhz.append(draw.run_for_all_mode(bw=8 * pow(10, 6), un=usernumber[i]))
cost_of_all_12mhz.append(draw.run_for_all_mode(bw=12 * pow(10, 6), un=usernumber[i]))
    plt.plot(usernumber, cost_of_all, '^-', linewidth=0.4, label='all selection')
    plt.plot(usernumber, cost_of_local, '<-', linewidth=0.4, label='only local selection')
    plt.plot(usernumber, cost_of_mec, '>-', linewidth=0.4, label='only MEC selection')
    plt.plot(usernumber, cost_of_all_6mhz, '<-', linewidth=0.2, label='all selection at 6 MHz')
    plt.plot(usernumber, cost_of_all_8mhz, '<-', linewidth=0.2, label='all selection at 8 MHz')
    plt.plot(usernumber, cost_of_all_12mhz, '<-', linewidth=0.2, label='all selection at 12 MHz')
    plt.grid(True)  # show grid
    plt.xlabel('The number of UE')
    plt.ylabel('Sum Cost')
    plt.legend(loc='upper left')  # place the legend in the upper left
plt.show()
data = DTE("./picture/pic2/all") ## TLIU
print(cost_of_all)
data.write(cost_of_all)
data = DTE("./picture/pic2/mec") ## TLIU
print(cost_of_mec)
data.write(cost_of_mec)
data = DTE("./picture/pic2/local") ## TLIU
print(cost_of_local)
data.write(cost_of_local)
data = DTE("./picture/pic2/all_6MHZ") ## TLIU
print(cost_of_all_6mhz)
data.write(cost_of_all_6mhz)
data = DTE("./picture/pic2/all_8MHZ") ## TLIU
print(cost_of_all_8mhz)
data.write(cost_of_all_8mhz)
data = DTE("./picture/pic2/all_12MHZ") ## TLIU
print(cost_of_all_12mhz)
data.write(cost_of_all_12mhz)
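The `__main__` block above repeats the same sweep for each mode. The underlying pattern, running one mode over a grid of user counts and collecting the final cost, can be sketched generically; the toy cost function below is an assumption standing in for the real `run_for_*` methods:

```python
def sweep(run_fn, user_counts, bw):
    """Evaluate run_fn(bw, n) for each user count n and collect the results."""
    return [run_fn(bw, n) for n in user_counts]

# Toy stand-in for run_for_all_mode: per-user cost falls as bandwidth is fixed
# and expressed relative to the number of users (illustrative only).
toy_cost = lambda bw, n: bw / (n * 1e6)

costs = sweep(toy_cost, [10, 20, 40], bw=10e6)  # [1.0, 0.5, 0.25]
```

Each of the six result lists in `__main__` could then be produced by one `sweep(...)` call, which keeps the grid definition in a single place.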
# File: maskrcnn_benchmark/modeling/rpn/__init__.py | Repo: p517332051/face_benchmark | License: MIT
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
# from .rpn import build_rpn
from .face_head import build_face_head
from .rpn import build_rpn
# File: boto3_type_annotations_with_docs/boto3_type_annotations/appmesh/client.py | Repo: cowboygneox/boto3_type_annotations | License: MIT
from typing import Optional
from botocore.client import BaseClient
from typing import Dict
from botocore.paginate import Paginator
from botocore.waiter import Waiter
from typing import Union
from typing import List
class Client(BaseClient):
def can_paginate(self, operation_name: str = None):
"""
Check if an operation can be paginated.
:type operation_name: string
:param operation_name: The operation name. This is the same name
as the method name on the client. For example, if the
method name is ``create_foo``, and you\'d normally invoke the
operation as ``client.create_foo(**kwargs)``, if the
``create_foo`` operation can be paginated, you can use the
call ``client.get_paginator(\"create_foo\")``.
:return: ``True`` if the operation can be paginated,
``False`` otherwise.
"""
pass
def create_mesh(self, meshName: str, clientToken: str = None, spec: Dict = None, tags: List = None) -> Dict:
"""
Creates a service mesh. A service mesh is a logical boundary for network traffic between the services that reside within it.
After you create your service mesh, you can create virtual services, virtual nodes, virtual routers, and routes to distribute traffic between the applications in your mesh.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/CreateMesh>`_
**Request Syntax**
::
response = client.create_mesh(
clientToken='string',
meshName='string',
spec={
'egressFilter': {
'type': 'ALLOW_ALL'|'DROP_ALL'
}
},
tags=[
{
'key': 'string',
'value': 'string'
},
]
)
**Response Syntax**
::
{
'mesh': {
'meshName': 'string',
'metadata': {
'arn': 'string',
'createdAt': datetime(2015, 1, 1),
'lastUpdatedAt': datetime(2015, 1, 1),
'uid': 'string',
'version': 123
},
'spec': {
'egressFilter': {
'type': 'ALLOW_ALL'|'DROP_ALL'
}
},
'status': {
'status': 'ACTIVE'|'DELETED'|'INACTIVE'
}
}
}
**Response Structure**
- *(dict) --*
- **mesh** *(dict) --*
The full description of your service mesh following the create call.
- **meshName** *(string) --*
The name of the service mesh.
- **metadata** *(dict) --*
The associated metadata for the service mesh.
- **arn** *(string) --*
The full Amazon Resource Name (ARN) for the resource.
- **createdAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was created.
- **lastUpdatedAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was last updated.
- **uid** *(string) --*
The unique identifier for the resource.
- **version** *(integer) --*
The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
- **spec** *(dict) --*
The associated specification for the service mesh.
- **egressFilter** *(dict) --*
The egress filter rules for the service mesh.
- **type** *(string) --*
The egress filter type. By default, the type is ``DROP_ALL`` , which allows egress only from virtual nodes to other defined resources in the service mesh (and any traffic to ``*.amazonaws.com`` for AWS API calls). You can set the egress filter type to ``ALLOW_ALL`` to allow egress to any endpoint inside or outside of the service mesh.
- **status** *(dict) --*
The status of the service mesh.
- **status** *(string) --*
The current mesh status.
:type clientToken: string
:param clientToken:
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 36 letters, numbers, hyphens, and underscores are allowed.
This field is autopopulated if not provided.
:type meshName: string
:param meshName: **[REQUIRED]**
The name to use for the service mesh.
:type spec: dict
:param spec:
The service mesh specification to apply.
- **egressFilter** *(dict) --*
The egress filter rules for the service mesh.
- **type** *(string) --* **[REQUIRED]**
The egress filter type. By default, the type is ``DROP_ALL`` , which allows egress only from virtual nodes to other defined resources in the service mesh (and any traffic to ``*.amazonaws.com`` for AWS API calls). You can set the egress filter type to ``ALLOW_ALL`` to allow egress to any endpoint inside or outside of the service mesh.
:type tags: list
:param tags:
Optional metadata that you can apply to the service mesh to assist with categorization and organization. Each tag consists of a key and an optional value, both of which you define. Tag keys can have a maximum character length of 128 characters, and tag values can have a maximum length of 256 characters.
- *(dict) --*
Optional metadata that you apply to a resource to assist with categorization and organization. Each tag consists of a key and an optional value, both of which you define. Tag keys can have a maximum character length of 128 characters, and tag values can have a maximum length of 256 characters.
- **key** *(string) --* **[REQUIRED]**
One part of a key-value pair that make up a tag. A ``key`` is a general label that acts like a category for more specific tag values.
- **value** *(string) --*
The optional part of a key-value pair that make up a tag. A ``value`` acts as a descriptor within a tag category (key).
:rtype: dict
:returns:
"""
pass
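As a local sanity check of the request shape documented above, the `CreateMesh` payload can be assembled as a plain dict before handing it to a real client. The mesh name and tag values are illustrative assumptions, and no AWS call is made here:

```python
# Assemble a CreateMesh request body following the Request Syntax above.
request = {
    "meshName": "example-mesh",  # assumed name; meshName is the only required field
    "spec": {"egressFilter": {"type": "DROP_ALL"}},
    "tags": [{"key": "team", "value": "platform"}],
}

# Basic shape checks mirroring the documented constraints
assert request["spec"]["egressFilter"]["type"] in ("ALLOW_ALL", "DROP_ALL")
assert all(len(t["key"]) <= 128 and len(t["value"]) <= 256 for t in request["tags"])
```

With credentials configured, the same dict would be splatted into the call as `client.create_mesh(**request)`.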
def create_route(self, meshName: str, routeName: str, spec: Dict, virtualRouterName: str, clientToken: str = None, tags: List = None) -> Dict:
"""
Creates a route that is associated with a virtual router.
You can use the ``prefix`` parameter in your route specification for path-based routing of requests. For example, if your virtual service name is ``my-service.local`` and you want the route to match requests to ``my-service.local/metrics`` , your prefix should be ``/metrics`` .
If your route matches a request, you can distribute traffic to one or more target virtual nodes with relative weighting.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/CreateRoute>`_
**Request Syntax**
::
response = client.create_route(
clientToken='string',
meshName='string',
routeName='string',
spec={
'httpRoute': {
'action': {
'weightedTargets': [
{
'virtualNode': 'string',
'weight': 123
},
]
},
'match': {
'prefix': 'string'
}
},
'tcpRoute': {
'action': {
'weightedTargets': [
{
'virtualNode': 'string',
'weight': 123
},
]
}
}
},
tags=[
{
'key': 'string',
'value': 'string'
},
],
virtualRouterName='string'
)
**Response Syntax**
::
{
'route': {
'meshName': 'string',
'metadata': {
'arn': 'string',
'createdAt': datetime(2015, 1, 1),
'lastUpdatedAt': datetime(2015, 1, 1),
'uid': 'string',
'version': 123
},
'routeName': 'string',
'spec': {
'httpRoute': {
'action': {
'weightedTargets': [
{
'virtualNode': 'string',
'weight': 123
},
]
},
'match': {
'prefix': 'string'
}
},
'tcpRoute': {
'action': {
'weightedTargets': [
{
'virtualNode': 'string',
'weight': 123
},
]
}
}
},
'status': {
'status': 'ACTIVE'|'DELETED'|'INACTIVE'
},
'virtualRouterName': 'string'
}
}
**Response Structure**
- *(dict) --*
- **route** *(dict) --*
The full description of your mesh following the create call.
- **meshName** *(string) --*
The name of the service mesh that the route resides in.
- **metadata** *(dict) --*
The associated metadata for the route.
- **arn** *(string) --*
The full Amazon Resource Name (ARN) for the resource.
- **createdAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was created.
- **lastUpdatedAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was last updated.
- **uid** *(string) --*
The unique identifier for the resource.
- **version** *(integer) --*
The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
- **routeName** *(string) --*
The name of the route.
- **spec** *(dict) --*
The specifications of the route.
- **httpRoute** *(dict) --*
The HTTP routing information for the route.
- **action** *(dict) --*
The action to take if a match is determined.
- **weightedTargets** *(list) --*
The targets that traffic is routed to when a request matches the route. You can specify one or more targets and their relative weights to distribute traffic with.
- *(dict) --*
An object representing a target and its relative weight. Traffic is distributed across targets according to their relative weight. For example, a weighted target with a relative weight of 50 receives five times as much traffic as one with a relative weight of 10.
- **virtualNode** *(string) --*
The virtual node to associate with the weighted target.
- **weight** *(integer) --*
The relative weight of the weighted target.
- **match** *(dict) --*
The criteria for determining an HTTP request match.
- **prefix** *(string) --*
Specifies the path to match requests with. This parameter must always start with ``/`` , which by itself matches all requests to the virtual service name. You can also match for path-based routing of requests. For example, if your virtual service name is ``my-service.local`` and you want the route to match requests to ``my-service.local/metrics`` , your prefix should be ``/metrics`` .
- **tcpRoute** *(dict) --*
The TCP routing information for the route.
- **action** *(dict) --*
The action to take if a match is determined.
- **weightedTargets** *(list) --*
The targets that traffic is routed to when a request matches the route. You can specify one or more targets and their relative weights to distribute traffic with.
- *(dict) --*
An object representing a target and its relative weight. Traffic is distributed across targets according to their relative weight. For example, a weighted target with a relative weight of 50 receives five times as much traffic as one with a relative weight of 10.
- **virtualNode** *(string) --*
The virtual node to associate with the weighted target.
- **weight** *(integer) --*
The relative weight of the weighted target.
- **status** *(dict) --*
The status of the route.
- **status** *(string) --*
The current status for the route.
- **virtualRouterName** *(string) --*
The virtual router that the route is associated with.
:type clientToken: string
:param clientToken:
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 36 letters, numbers, hyphens, and underscores are allowed.
This field is autopopulated if not provided.
:type meshName: string
:param meshName: **[REQUIRED]**
The name of the service mesh to create the route in.
:type routeName: string
:param routeName: **[REQUIRED]**
The name to use for the route.
:type spec: dict
:param spec: **[REQUIRED]**
The route specification to apply.
- **httpRoute** *(dict) --*
The HTTP routing information for the route.
- **action** *(dict) --* **[REQUIRED]**
The action to take if a match is determined.
- **weightedTargets** *(list) --* **[REQUIRED]**
The targets that traffic is routed to when a request matches the route. You can specify one or more targets and their relative weights to distribute traffic with.
- *(dict) --*
An object representing a target and its relative weight. Traffic is distributed across targets according to their relative weight. For example, a weighted target with a relative weight of 50 receives five times as much traffic as one with a relative weight of 10.
- **virtualNode** *(string) --* **[REQUIRED]**
The virtual node to associate with the weighted target.
- **weight** *(integer) --* **[REQUIRED]**
The relative weight of the weighted target.
- **match** *(dict) --* **[REQUIRED]**
The criteria for determining an HTTP request match.
- **prefix** *(string) --* **[REQUIRED]**
Specifies the path to match requests with. This parameter must always start with ``/`` , which by itself matches all requests to the virtual service name. You can also match for path-based routing of requests. For example, if your virtual service name is ``my-service.local`` and you want the route to match requests to ``my-service.local/metrics`` , your prefix should be ``/metrics`` .
- **tcpRoute** *(dict) --*
The TCP routing information for the route.
- **action** *(dict) --* **[REQUIRED]**
The action to take if a match is determined.
- **weightedTargets** *(list) --* **[REQUIRED]**
The targets that traffic is routed to when a request matches the route. You can specify one or more targets and their relative weights to distribute traffic with.
- *(dict) --*
An object representing a target and its relative weight. Traffic is distributed across targets according to their relative weight. For example, a weighted target with a relative weight of 50 receives five times as much traffic as one with a relative weight of 10.
- **virtualNode** *(string) --* **[REQUIRED]**
The virtual node to associate with the weighted target.
- **weight** *(integer) --* **[REQUIRED]**
The relative weight of the weighted target.
:type tags: list
:param tags:
Optional metadata that you can apply to the route to assist with categorization and organization. Each tag consists of a key and an optional value, both of which you define. Tag keys can have a maximum character length of 128 characters, and tag values can have a maximum length of 256 characters.
- *(dict) --*
Optional metadata that you apply to a resource to assist with categorization and organization. Each tag consists of a key and an optional value, both of which you define. Tag keys can have a maximum character length of 128 characters, and tag values can have a maximum length of 256 characters.
- **key** *(string) --* **[REQUIRED]**
One part of a key-value pair that make up a tag. A ``key`` is a general label that acts like a category for more specific tag values.
- **value** *(string) --*
The optional part of a key-value pair that make up a tag. A ``value`` acts as a descriptor within a tag category (key).
:type virtualRouterName: string
:param virtualRouterName: **[REQUIRED]**
The name of the virtual router in which to create the route.
:rtype: dict
:returns:
"""
pass
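The docstring above notes that a weighted target with relative weight 50 receives five times the traffic of one with weight 10. A minimal sketch of that proportional selection follows; it illustrates the routing semantics only and is not App Mesh's actual implementation:

```python
import random

def pick_target(weighted_targets, rng):
    """Pick a virtualNode with probability proportional to its relative weight."""
    nodes = [t["virtualNode"] for t in weighted_targets]
    weights = [t["weight"] for t in weighted_targets]
    return rng.choices(nodes, weights=weights, k=1)[0]

targets = [{"virtualNode": "node-a", "weight": 50},
           {"virtualNode": "node-b", "weight": 10}]
rng = random.Random(0)  # seeded for reproducibility
picks = [pick_target(targets, rng) for _ in range(600)]
share_a = picks.count("node-a") / len(picks)  # roughly 5/6 of requests go to node-a
```

Only the ratio of weights matters, so `[50, 10]` and `[5, 1]` describe the same traffic split.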
def create_virtual_node(self, meshName: str, spec: Dict, virtualNodeName: str, clientToken: str = None, tags: List = None) -> Dict:
"""
Creates a virtual node within a service mesh.
A virtual node acts as a logical pointer to a particular task group, such as an Amazon ECS service or a Kubernetes deployment. When you create a virtual node, you must specify the DNS service discovery hostname for your task group.
Any inbound traffic that your virtual node expects should be specified as a ``listener`` . Any outbound traffic that your virtual node expects to reach should be specified as a ``backend`` .
The response metadata for your new virtual node contains the ``arn`` that is associated with the virtual node. Set this value (either the full ARN or the truncated resource name: for example, ``mesh/default/virtualNode/simpleapp`` ) as the ``APPMESH_VIRTUAL_NODE_NAME`` environment variable for your task group's Envoy proxy container in your task definition or pod spec. This is then mapped to the ``node.id`` and ``node.cluster`` Envoy parameters.
.. note::
If you require your Envoy stats or tracing to use a different name, you can override the ``node.cluster`` value that is set by ``APPMESH_VIRTUAL_NODE_NAME`` with the ``APPMESH_VIRTUAL_NODE_CLUSTER`` environment variable.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/CreateVirtualNode>`_
**Request Syntax**
::
response = client.create_virtual_node(
clientToken='string',
meshName='string',
spec={
'backends': [
{
'virtualService': {
'virtualServiceName': 'string'
}
},
],
'listeners': [
{
'healthCheck': {
'healthyThreshold': 123,
'intervalMillis': 123,
'path': 'string',
'port': 123,
'protocol': 'http'|'tcp',
'timeoutMillis': 123,
'unhealthyThreshold': 123
},
'portMapping': {
'port': 123,
'protocol': 'http'|'tcp'
}
},
],
'logging': {
'accessLog': {
'file': {
'path': 'string'
}
}
},
'serviceDiscovery': {
'dns': {
'hostname': 'string'
}
}
},
tags=[
{
'key': 'string',
'value': 'string'
},
],
virtualNodeName='string'
)
**Response Syntax**
::
{
'virtualNode': {
'meshName': 'string',
'metadata': {
'arn': 'string',
'createdAt': datetime(2015, 1, 1),
'lastUpdatedAt': datetime(2015, 1, 1),
'uid': 'string',
'version': 123
},
'spec': {
'backends': [
{
'virtualService': {
'virtualServiceName': 'string'
}
},
],
'listeners': [
{
'healthCheck': {
'healthyThreshold': 123,
'intervalMillis': 123,
'path': 'string',
'port': 123,
'protocol': 'http'|'tcp',
'timeoutMillis': 123,
'unhealthyThreshold': 123
},
'portMapping': {
'port': 123,
'protocol': 'http'|'tcp'
}
},
],
'logging': {
'accessLog': {
'file': {
'path': 'string'
}
}
},
'serviceDiscovery': {
'dns': {
'hostname': 'string'
}
}
},
'status': {
'status': 'ACTIVE'|'DELETED'|'INACTIVE'
},
'virtualNodeName': 'string'
}
}
**Response Structure**
- *(dict) --*
- **virtualNode** *(dict) --*
The full description of your virtual node following the create call.
- **meshName** *(string) --*
The name of the service mesh that the virtual node resides in.
- **metadata** *(dict) --*
The associated metadata for the virtual node.
- **arn** *(string) --*
The full Amazon Resource Name (ARN) for the resource.
- **createdAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was created.
- **lastUpdatedAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was last updated.
- **uid** *(string) --*
The unique identifier for the resource.
- **version** *(integer) --*
The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
- **spec** *(dict) --*
The specifications of the virtual node.
- **backends** *(list) --*
The backends that the virtual node is expected to send outbound traffic to.
- *(dict) --*
An object representing the backends that a virtual node is expected to send outbound traffic to.
- **virtualService** *(dict) --*
Specifies a virtual service to use as a backend for a virtual node.
- **virtualServiceName** *(string) --*
The name of the virtual service that is acting as a virtual node backend.
- **listeners** *(list) --*
The listeners that the virtual node is expected to receive inbound traffic from. Currently only one listener is supported per virtual node.
- *(dict) --*
An object representing a listener for a virtual node.
- **healthCheck** *(dict) --*
The health check information for the listener.
- **healthyThreshold** *(integer) --*
The number of consecutive successful health checks that must occur before declaring listener healthy.
- **intervalMillis** *(integer) --*
The time period in milliseconds between each health check execution.
- **path** *(string) --*
The destination path for the health check request. This is required only if the specified protocol is HTTP. If the protocol is TCP, this parameter is ignored.
- **port** *(integer) --*
The destination port for the health check request. This port must match the port defined in the PortMapping for the listener.
- **protocol** *(string) --*
The protocol for the health check request.
- **timeoutMillis** *(integer) --*
The amount of time to wait when receiving a response from the health check, in milliseconds.
- **unhealthyThreshold** *(integer) --*
The number of consecutive failed health checks that must occur before declaring a virtual node unhealthy.
- **portMapping** *(dict) --*
The port mapping information for the listener.
- **port** *(integer) --*
The port used for the port mapping.
- **protocol** *(string) --*
The protocol used for the port mapping.
- **logging** *(dict) --*
The inbound and outbound access logging information for the virtual node.
- **accessLog** *(dict) --*
The access log configuration for a virtual node.
- **file** *(dict) --*
The file object to send virtual node access logs to.
- **path** *(string) --*
The file path to write access logs to. You can use ``/dev/stdout`` to send access logs to standard out and configure your Envoy container to use a log driver, such as ``awslogs`` , to export the access logs to a log storage service such as Amazon CloudWatch Logs. You can also specify a path in the Envoy container's file system to write the files to disk.
.. note::
The Envoy process must have write permissions to the path that you specify here. Otherwise, Envoy fails to bootstrap properly.
- **serviceDiscovery** *(dict) --*
The service discovery information for the virtual node. If your virtual node does not expect ingress traffic, you can omit this parameter.
- **dns** *(dict) --*
Specifies the DNS information for the virtual node.
- **hostname** *(string) --*
Specifies the DNS service discovery hostname for the virtual node.
- **status** *(dict) --*
The current status for the virtual node.
- **status** *(string) --*
The current status of the virtual node.
- **virtualNodeName** *(string) --*
The name of the virtual node.
:type clientToken: string
:param clientToken:
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 36 letters, numbers, hyphens, and underscores are allowed.
This field is autopopulated if not provided.
:type meshName: string
:param meshName: **[REQUIRED]**
The name of the service mesh to create the virtual node in.
:type spec: dict
:param spec: **[REQUIRED]**
The virtual node specification to apply.
- **backends** *(list) --*
The backends that the virtual node is expected to send outbound traffic to.
- *(dict) --*
An object representing the backends that a virtual node is expected to send outbound traffic to.
- **virtualService** *(dict) --*
Specifies a virtual service to use as a backend for a virtual node.
- **virtualServiceName** *(string) --* **[REQUIRED]**
The name of the virtual service that is acting as a virtual node backend.
- **listeners** *(list) --*
The listeners that the virtual node is expected to receive inbound traffic from. Currently only one listener is supported per virtual node.
- *(dict) --*
An object representing a listener for a virtual node.
- **healthCheck** *(dict) --*
The health check information for the listener.
- **healthyThreshold** *(integer) --* **[REQUIRED]**
The number of consecutive successful health checks that must occur before declaring the listener healthy.
- **intervalMillis** *(integer) --* **[REQUIRED]**
The time period in milliseconds between each health check execution.
- **path** *(string) --*
The destination path for the health check request. This is required only if the specified protocol is HTTP. If the protocol is TCP, this parameter is ignored.
- **port** *(integer) --*
The destination port for the health check request. This port must match the port defined in the PortMapping for the listener.
- **protocol** *(string) --* **[REQUIRED]**
The protocol for the health check request.
- **timeoutMillis** *(integer) --* **[REQUIRED]**
The amount of time to wait when receiving a response from the health check, in milliseconds.
- **unhealthyThreshold** *(integer) --* **[REQUIRED]**
The number of consecutive failed health checks that must occur before declaring a virtual node unhealthy.
- **portMapping** *(dict) --* **[REQUIRED]**
The port mapping information for the listener.
- **port** *(integer) --* **[REQUIRED]**
The port used for the port mapping.
- **protocol** *(string) --* **[REQUIRED]**
The protocol used for the port mapping.
- **logging** *(dict) --*
The inbound and outbound access logging information for the virtual node.
- **accessLog** *(dict) --*
The access log configuration for a virtual node.
- **file** *(dict) --*
The file object to send virtual node access logs to.
- **path** *(string) --* **[REQUIRED]**
The file path to write access logs to. You can use ``/dev/stdout`` to send access logs to standard out and configure your Envoy container to use a log driver, such as ``awslogs`` , to export the access logs to a log storage service such as Amazon CloudWatch Logs. You can also specify a path in the Envoy container's file system to write the files to disk.
.. note::
The Envoy process must have write permissions to the path that you specify here. Otherwise, Envoy fails to bootstrap properly.
- **serviceDiscovery** *(dict) --*
The service discovery information for the virtual node. If your virtual node does not expect ingress traffic, you can omit this parameter.
- **dns** *(dict) --*
Specifies the DNS information for the virtual node.
- **hostname** *(string) --* **[REQUIRED]**
Specifies the DNS service discovery hostname for the virtual node.
:type tags: list
:param tags:
Optional metadata that you can apply to the virtual node to assist with categorization and organization. Each tag consists of a key and an optional value, both of which you define. Tag keys can have a maximum length of 128 characters, and tag values can have a maximum length of 256 characters.
- *(dict) --*
Optional metadata that you apply to a resource to assist with categorization and organization. Each tag consists of a key and an optional value, both of which you define. Tag keys can have a maximum length of 128 characters, and tag values can have a maximum length of 256 characters.
- **key** *(string) --* **[REQUIRED]**
One part of a key-value pair that makes up a tag. A ``key`` is a general label that acts like a category for more specific tag values.
- **value** *(string) --*
The optional part of a key-value pair that makes up a tag. A ``value`` acts as a descriptor within a tag category (key).
:type virtualNodeName: string
:param virtualNodeName: **[REQUIRED]**
The name to use for the virtual node.
:rtype: dict
:returns:
"""
pass
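The request syntax above can be sketched as a plain dict built locally before any client call. The mesh, node, and hostname values below are hypothetical; in real use the dict would be passed to an App Mesh client, e.g. ``boto3.client('appmesh').create_virtual_node(**params)``. Note the cross-field constraints stated in the docs: the health check ``path`` applies only to HTTP, and the health check ``port`` must match the listener's port mapping.

```python
# A minimal, self-contained sketch of create_virtual_node parameters.
# No AWS call is made here; all names are illustrative.
params = {
    "meshName": "my-mesh",                       # hypothetical mesh name
    "virtualNodeName": "serviceB-node",          # hypothetical node name
    "spec": {
        "listeners": [                           # only one listener is supported
            {
                "portMapping": {"port": 8080, "protocol": "http"},
                "healthCheck": {
                    "protocol": "http",
                    "path": "/ping",             # used because protocol is HTTP
                    "port": 8080,                # must match the portMapping port
                    "healthyThreshold": 2,
                    "unhealthyThreshold": 2,
                    "intervalMillis": 5000,
                    "timeoutMillis": 2000,
                },
            }
        ],
        "serviceDiscovery": {"dns": {"hostname": "serviceb.local"}},
        "backends": [{"virtualService": {"virtualServiceName": "servicea.local"}}],
        "logging": {"accessLog": {"file": {"path": "/dev/stdout"}}},
    },
}

# Local check mirroring the port-matching rule described above.
listener = params["spec"]["listeners"][0]
assert listener["portMapping"]["port"] == listener["healthCheck"]["port"]
```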
def create_virtual_router(self, meshName: str, spec: Dict, virtualRouterName: str, clientToken: str = None, tags: List = None) -> Dict:
"""
Creates a virtual router within a service mesh.
Any inbound traffic that your virtual router expects should be specified as a ``listener`` .
Virtual routers handle traffic for one or more virtual services within your mesh. After you create your virtual router, create and associate routes for your virtual router that direct incoming requests to different virtual nodes.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/CreateVirtualRouter>`_
**Request Syntax**
::
response = client.create_virtual_router(
clientToken='string',
meshName='string',
spec={
'listeners': [
{
'portMapping': {
'port': 123,
'protocol': 'http'|'tcp'
}
},
]
},
tags=[
{
'key': 'string',
'value': 'string'
},
],
virtualRouterName='string'
)
**Response Syntax**
::
{
'virtualRouter': {
'meshName': 'string',
'metadata': {
'arn': 'string',
'createdAt': datetime(2015, 1, 1),
'lastUpdatedAt': datetime(2015, 1, 1),
'uid': 'string',
'version': 123
},
'spec': {
'listeners': [
{
'portMapping': {
'port': 123,
'protocol': 'http'|'tcp'
}
},
]
},
'status': {
'status': 'ACTIVE'|'DELETED'|'INACTIVE'
},
'virtualRouterName': 'string'
}
}
**Response Structure**
- *(dict) --*
- **virtualRouter** *(dict) --*
The full description of your virtual router following the create call.
- **meshName** *(string) --*
The name of the service mesh that the virtual router resides in.
- **metadata** *(dict) --*
The associated metadata for the virtual router.
- **arn** *(string) --*
The full Amazon Resource Name (ARN) for the resource.
- **createdAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was created.
- **lastUpdatedAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was last updated.
- **uid** *(string) --*
The unique identifier for the resource.
- **version** *(integer) --*
The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
- **spec** *(dict) --*
The specifications of the virtual router.
- **listeners** *(list) --*
The listeners that the virtual router is expected to receive inbound traffic from. Currently only one listener is supported per virtual router.
- *(dict) --*
An object representing a virtual router listener.
- **portMapping** *(dict) --*
An object representing a virtual node or virtual router listener port mapping.
- **port** *(integer) --*
The port used for the port mapping.
- **protocol** *(string) --*
The protocol used for the port mapping.
- **status** *(dict) --*
The current status of the virtual router.
- **status** *(string) --*
The current status of the virtual router.
- **virtualRouterName** *(string) --*
The name of the virtual router.
:type clientToken: string
:param clientToken:
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 36 letters, numbers, hyphens, and underscores are allowed.
This field is autopopulated if not provided.
:type meshName: string
:param meshName: **[REQUIRED]**
The name of the service mesh to create the virtual router in.
:type spec: dict
:param spec: **[REQUIRED]**
The virtual router specification to apply.
- **listeners** *(list) --* **[REQUIRED]**
The listeners that the virtual router is expected to receive inbound traffic from. Currently only one listener is supported per virtual router.
- *(dict) --*
An object representing a virtual router listener.
- **portMapping** *(dict) --* **[REQUIRED]**
An object representing a virtual node or virtual router listener port mapping.
- **port** *(integer) --* **[REQUIRED]**
The port used for the port mapping.
- **protocol** *(string) --* **[REQUIRED]**
The protocol used for the port mapping.
:type tags: list
:param tags:
Optional metadata that you can apply to the virtual router to assist with categorization and organization. Each tag consists of a key and an optional value, both of which you define. Tag keys can have a maximum length of 128 characters, and tag values can have a maximum length of 256 characters.
- *(dict) --*
Optional metadata that you apply to a resource to assist with categorization and organization. Each tag consists of a key and an optional value, both of which you define. Tag keys can have a maximum length of 128 characters, and tag values can have a maximum length of 256 characters.
- **key** *(string) --* **[REQUIRED]**
One part of a key-value pair that makes up a tag. A ``key`` is a general label that acts like a category for more specific tag values.
- **value** *(string) --*
The optional part of a key-value pair that makes up a tag. A ``value`` acts as a descriptor within a tag category (key).
:type virtualRouterName: string
:param virtualRouterName: **[REQUIRED]**
The name to use for the virtual router.
:rtype: dict
:returns:
"""
pass
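A router spec carries only a listener port mapping, so building the request reduces to the sketch below. The helper name and validation are assumptions for illustration; the dict shape matches the request syntax above and would be passed as ``client.create_virtual_router(**params)``.

```python
# Hypothetical helper that builds create_virtual_router keyword arguments
# and rejects values the service would refuse anyway.
def make_router_params(mesh_name, router_name, port, protocol):
    if protocol not in ("http", "tcp"):
        raise ValueError("protocol must be 'http' or 'tcp'")
    if not 1 <= port <= 65535:
        raise ValueError("port out of range")
    return {
        "meshName": mesh_name,
        "virtualRouterName": router_name,
        "spec": {
            # Currently only one listener is supported per virtual router.
            "listeners": [{"portMapping": {"port": port, "protocol": protocol}}]
        },
    }

params = make_router_params("my-mesh", "serviceB-router", 8080, "http")
```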
def create_virtual_service(self, meshName: str, spec: Dict, virtualServiceName: str, clientToken: str = None, tags: List = None) -> Dict:
"""
Creates a virtual service within a service mesh.
A virtual service is an abstraction of a real service that is provided by a virtual node directly or indirectly by means of a virtual router. Dependent services call your virtual service by its ``virtualServiceName`` , and those requests are routed to the virtual node or virtual router that is specified as the provider for the virtual service.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/CreateVirtualService>`_
**Request Syntax**
::
response = client.create_virtual_service(
clientToken='string',
meshName='string',
spec={
'provider': {
'virtualNode': {
'virtualNodeName': 'string'
},
'virtualRouter': {
'virtualRouterName': 'string'
}
}
},
tags=[
{
'key': 'string',
'value': 'string'
},
],
virtualServiceName='string'
)
**Response Syntax**
::
{
'virtualService': {
'meshName': 'string',
'metadata': {
'arn': 'string',
'createdAt': datetime(2015, 1, 1),
'lastUpdatedAt': datetime(2015, 1, 1),
'uid': 'string',
'version': 123
},
'spec': {
'provider': {
'virtualNode': {
'virtualNodeName': 'string'
},
'virtualRouter': {
'virtualRouterName': 'string'
}
}
},
'status': {
'status': 'ACTIVE'|'DELETED'|'INACTIVE'
},
'virtualServiceName': 'string'
}
}
**Response Structure**
- *(dict) --*
- **virtualService** *(dict) --*
The full description of your virtual service following the create call.
- **meshName** *(string) --*
The name of the service mesh that the virtual service resides in.
- **metadata** *(dict) --*
An object representing metadata for a resource.
- **arn** *(string) --*
The full Amazon Resource Name (ARN) for the resource.
- **createdAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was created.
- **lastUpdatedAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was last updated.
- **uid** *(string) --*
The unique identifier for the resource.
- **version** *(integer) --*
The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
- **spec** *(dict) --*
The specifications of the virtual service.
- **provider** *(dict) --*
The App Mesh object that is acting as the provider for a virtual service. You can specify a single virtual node or virtual router.
- **virtualNode** *(dict) --*
The virtual node associated with a virtual service.
- **virtualNodeName** *(string) --*
The name of the virtual node that is acting as a service provider.
- **virtualRouter** *(dict) --*
The virtual router associated with a virtual service.
- **virtualRouterName** *(string) --*
The name of the virtual router that is acting as a service provider.
- **status** *(dict) --*
The current status of the virtual service.
- **status** *(string) --*
The current status of the virtual service.
- **virtualServiceName** *(string) --*
The name of the virtual service.
:type clientToken: string
:param clientToken:
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 36 letters, numbers, hyphens, and underscores are allowed.
This field is autopopulated if not provided.
:type meshName: string
:param meshName: **[REQUIRED]**
The name of the service mesh to create the virtual service in.
:type spec: dict
:param spec: **[REQUIRED]**
The virtual service specification to apply.
- **provider** *(dict) --*
The App Mesh object that is acting as the provider for a virtual service. You can specify a single virtual node or virtual router.
- **virtualNode** *(dict) --*
The virtual node associated with a virtual service.
- **virtualNodeName** *(string) --* **[REQUIRED]**
The name of the virtual node that is acting as a service provider.
- **virtualRouter** *(dict) --*
The virtual router associated with a virtual service.
- **virtualRouterName** *(string) --* **[REQUIRED]**
The name of the virtual router that is acting as a service provider.
:type tags: list
:param tags:
Optional metadata that you can apply to the virtual service to assist with categorization and organization. Each tag consists of a key and an optional value, both of which you define. Tag keys can have a maximum length of 128 characters, and tag values can have a maximum length of 256 characters.
- *(dict) --*
Optional metadata that you apply to a resource to assist with categorization and organization. Each tag consists of a key and an optional value, both of which you define. Tag keys can have a maximum length of 128 characters, and tag values can have a maximum length of 256 characters.
- **key** *(string) --* **[REQUIRED]**
One part of a key-value pair that makes up a tag. A ``key`` is a general label that acts like a category for more specific tag values.
- **value** *(string) --*
The optional part of a key-value pair that makes up a tag. A ``value`` acts as a descriptor within a tag category (key).
:type virtualServiceName: string
:param virtualServiceName: **[REQUIRED]**
The name to use for the virtual service.
:rtype: dict
:returns:
"""
pass
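Because a virtual service's provider is a single virtual node or a single virtual router, a spec builder can enforce that exclusivity locally before the call. The helper below is a hypothetical sketch; the returned dict matches the ``spec`` shape in the request syntax above.

```python
# Build a virtual service spec with exactly one provider, mirroring the
# "single virtual node or virtual router" rule described above.
def make_service_spec(virtual_node=None, virtual_router=None):
    if (virtual_node is None) == (virtual_router is None):
        raise ValueError("specify exactly one of virtual_node or virtual_router")
    if virtual_node is not None:
        return {"provider": {"virtualNode": {"virtualNodeName": virtual_node}}}
    return {"provider": {"virtualRouter": {"virtualRouterName": virtual_router}}}

spec = make_service_spec(virtual_router="serviceB-router")  # hypothetical name
```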
def delete_mesh(self, meshName: str) -> Dict:
"""
Deletes an existing service mesh.
You must delete all resources (virtual services, routes, virtual routers, and virtual nodes) in the service mesh before you can delete the mesh itself.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/DeleteMesh>`_
**Request Syntax**
::
response = client.delete_mesh(
meshName='string'
)
**Response Syntax**
::
{
'mesh': {
'meshName': 'string',
'metadata': {
'arn': 'string',
'createdAt': datetime(2015, 1, 1),
'lastUpdatedAt': datetime(2015, 1, 1),
'uid': 'string',
'version': 123
},
'spec': {
'egressFilter': {
'type': 'ALLOW_ALL'|'DROP_ALL'
}
},
'status': {
'status': 'ACTIVE'|'DELETED'|'INACTIVE'
}
}
}
**Response Structure**
- *(dict) --*
- **mesh** *(dict) --*
The service mesh that was deleted.
- **meshName** *(string) --*
The name of the service mesh.
- **metadata** *(dict) --*
The associated metadata for the service mesh.
- **arn** *(string) --*
The full Amazon Resource Name (ARN) for the resource.
- **createdAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was created.
- **lastUpdatedAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was last updated.
- **uid** *(string) --*
The unique identifier for the resource.
- **version** *(integer) --*
The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
- **spec** *(dict) --*
The associated specification for the service mesh.
- **egressFilter** *(dict) --*
The egress filter rules for the service mesh.
- **type** *(string) --*
The egress filter type. By default, the type is ``DROP_ALL`` , which allows egress only from virtual nodes to other defined resources in the service mesh (and any traffic to ``*.amazonaws.com`` for AWS API calls). You can set the egress filter type to ``ALLOW_ALL`` to allow egress to any endpoint inside or outside of the service mesh.
- **status** *(dict) --*
The status of the service mesh.
- **status** *(string) --*
The current mesh status.
:type meshName: string
:param meshName: **[REQUIRED]**
The name of the service mesh to delete.
:rtype: dict
:returns:
"""
pass
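Since a mesh can only be deleted once it is empty, teardown scripts generally work leaf-first. One plausible ordering, derived from the dependency notes in these delete methods (routes belong to routers, and services reference nodes or routers as providers), is sketched as plain data below; each entry names the client method that would run at that step.

```python
# A plausible leaf-first teardown order for a mesh. This ordering is an
# assumption based on the documented dependencies, not a prescribed sequence.
TEARDOWN_ORDER = [
    "delete_route",            # routes must go before their virtual routers
    "delete_virtual_service",  # services reference node/router providers
    "delete_virtual_router",   # routers can go once their routes are gone
    "delete_virtual_node",     # nodes can go once no service lists them
    "delete_mesh",             # the mesh itself goes last, once empty
]
```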
def delete_route(self, meshName: str, routeName: str, virtualRouterName: str) -> Dict:
"""
Deletes an existing route.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/DeleteRoute>`_
**Request Syntax**
::
response = client.delete_route(
meshName='string',
routeName='string',
virtualRouterName='string'
)
**Response Syntax**
::
{
'route': {
'meshName': 'string',
'metadata': {
'arn': 'string',
'createdAt': datetime(2015, 1, 1),
'lastUpdatedAt': datetime(2015, 1, 1),
'uid': 'string',
'version': 123
},
'routeName': 'string',
'spec': {
'httpRoute': {
'action': {
'weightedTargets': [
{
'virtualNode': 'string',
'weight': 123
},
]
},
'match': {
'prefix': 'string'
}
},
'tcpRoute': {
'action': {
'weightedTargets': [
{
'virtualNode': 'string',
'weight': 123
},
]
}
}
},
'status': {
'status': 'ACTIVE'|'DELETED'|'INACTIVE'
},
'virtualRouterName': 'string'
}
}
**Response Structure**
- *(dict) --*
- **route** *(dict) --*
The route that was deleted.
- **meshName** *(string) --*
The name of the service mesh that the route resides in.
- **metadata** *(dict) --*
The associated metadata for the route.
- **arn** *(string) --*
The full Amazon Resource Name (ARN) for the resource.
- **createdAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was created.
- **lastUpdatedAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was last updated.
- **uid** *(string) --*
The unique identifier for the resource.
- **version** *(integer) --*
The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
- **routeName** *(string) --*
The name of the route.
- **spec** *(dict) --*
The specifications of the route.
- **httpRoute** *(dict) --*
The HTTP routing information for the route.
- **action** *(dict) --*
The action to take if a match is determined.
- **weightedTargets** *(list) --*
The targets that traffic is routed to when a request matches the route. You can specify one or more targets and their relative weights to distribute traffic with.
- *(dict) --*
An object representing a target and its relative weight. Traffic is distributed across targets according to their relative weight. For example, a weighted target with a relative weight of 50 receives five times as much traffic as one with a relative weight of 10.
- **virtualNode** *(string) --*
The virtual node to associate with the weighted target.
- **weight** *(integer) --*
The relative weight of the weighted target.
- **match** *(dict) --*
The criteria for determining an HTTP request match.
- **prefix** *(string) --*
Specifies the path to match requests with. This parameter must always start with ``/`` , which by itself matches all requests to the virtual service name. You can also match for path-based routing of requests. For example, if your virtual service name is ``my-service.local`` and you want the route to match requests to ``my-service.local/metrics`` , your prefix should be ``/metrics`` .
- **tcpRoute** *(dict) --*
The TCP routing information for the route.
- **action** *(dict) --*
The action to take if a match is determined.
- **weightedTargets** *(list) --*
The targets that traffic is routed to when a request matches the route. You can specify one or more targets and their relative weights to distribute traffic with.
- *(dict) --*
An object representing a target and its relative weight. Traffic is distributed across targets according to their relative weight. For example, a weighted target with a relative weight of 50 receives five times as much traffic as one with a relative weight of 10.
- **virtualNode** *(string) --*
The virtual node to associate with the weighted target.
- **weight** *(integer) --*
The relative weight of the weighted target.
- **status** *(dict) --*
The status of the route.
- **status** *(string) --*
The current status for the route.
- **virtualRouterName** *(string) --*
The virtual router that the route is associated with.
:type meshName: string
:param meshName: **[REQUIRED]**
The name of the service mesh to delete the route in.
:type routeName: string
:param routeName: **[REQUIRED]**
The name of the route to delete.
:type virtualRouterName: string
:param virtualRouterName: **[REQUIRED]**
The name of the virtual router to delete the route in.
:rtype: dict
:returns:
"""
pass
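The weighted-target semantics described above (a target with relative weight 50 receives five times the traffic of one with weight 10) come down to each target receiving ``weight / sum(weights)`` of the traffic. A small sketch of that arithmetic, using hypothetical node names:

```python
# Compute each target's expected traffic share from its relative weight.
def traffic_shares(weighted_targets):
    total = sum(t["weight"] for t in weighted_targets)
    return {t["virtualNode"]: t["weight"] / total for t in weighted_targets}

targets = [
    {"virtualNode": "serviceB-v1", "weight": 50},
    {"virtualNode": "serviceB-v2", "weight": 10},
]
shares = traffic_shares(targets)  # serviceB-v1 gets 5x serviceB-v2's traffic
```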
def delete_virtual_node(self, meshName: str, virtualNodeName: str) -> Dict:
"""
Deletes an existing virtual node.
You must delete any virtual services that list a virtual node as a service provider before you can delete the virtual node itself.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/DeleteVirtualNode>`_
**Request Syntax**
::
response = client.delete_virtual_node(
meshName='string',
virtualNodeName='string'
)
**Response Syntax**
::
{
'virtualNode': {
'meshName': 'string',
'metadata': {
'arn': 'string',
'createdAt': datetime(2015, 1, 1),
'lastUpdatedAt': datetime(2015, 1, 1),
'uid': 'string',
'version': 123
},
'spec': {
'backends': [
{
'virtualService': {
'virtualServiceName': 'string'
}
},
],
'listeners': [
{
'healthCheck': {
'healthyThreshold': 123,
'intervalMillis': 123,
'path': 'string',
'port': 123,
'protocol': 'http'|'tcp',
'timeoutMillis': 123,
'unhealthyThreshold': 123
},
'portMapping': {
'port': 123,
'protocol': 'http'|'tcp'
}
},
],
'logging': {
'accessLog': {
'file': {
'path': 'string'
}
}
},
'serviceDiscovery': {
'dns': {
'hostname': 'string'
}
}
},
'status': {
'status': 'ACTIVE'|'DELETED'|'INACTIVE'
},
'virtualNodeName': 'string'
}
}
**Response Structure**
- *(dict) --*
- **virtualNode** *(dict) --*
The virtual node that was deleted.
- **meshName** *(string) --*
The name of the service mesh that the virtual node resides in.
- **metadata** *(dict) --*
The associated metadata for the virtual node.
- **arn** *(string) --*
The full Amazon Resource Name (ARN) for the resource.
- **createdAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was created.
- **lastUpdatedAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was last updated.
- **uid** *(string) --*
The unique identifier for the resource.
- **version** *(integer) --*
The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
- **spec** *(dict) --*
The specifications of the virtual node.
- **backends** *(list) --*
The backends that the virtual node is expected to send outbound traffic to.
- *(dict) --*
An object representing the backends that a virtual node is expected to send outbound traffic to.
- **virtualService** *(dict) --*
Specifies a virtual service to use as a backend for a virtual node.
- **virtualServiceName** *(string) --*
The name of the virtual service that is acting as a virtual node backend.
- **listeners** *(list) --*
The listeners that the virtual node is expected to receive inbound traffic from. Currently only one listener is supported per virtual node.
- *(dict) --*
An object representing a listener for a virtual node.
- **healthCheck** *(dict) --*
The health check information for the listener.
- **healthyThreshold** *(integer) --*
The number of consecutive successful health checks that must occur before declaring the listener healthy.
- **intervalMillis** *(integer) --*
The time period in milliseconds between each health check execution.
- **path** *(string) --*
The destination path for the health check request. This is required only if the specified protocol is HTTP. If the protocol is TCP, this parameter is ignored.
- **port** *(integer) --*
The destination port for the health check request. This port must match the port defined in the PortMapping for the listener.
- **protocol** *(string) --*
The protocol for the health check request.
- **timeoutMillis** *(integer) --*
The amount of time to wait when receiving a response from the health check, in milliseconds.
- **unhealthyThreshold** *(integer) --*
The number of consecutive failed health checks that must occur before declaring a virtual node unhealthy.
- **portMapping** *(dict) --*
The port mapping information for the listener.
- **port** *(integer) --*
The port used for the port mapping.
- **protocol** *(string) --*
The protocol used for the port mapping.
- **logging** *(dict) --*
The inbound and outbound access logging information for the virtual node.
- **accessLog** *(dict) --*
The access log configuration for a virtual node.
- **file** *(dict) --*
The file object to send virtual node access logs to.
- **path** *(string) --*
The file path to write access logs to. You can use ``/dev/stdout`` to send access logs to standard out and configure your Envoy container to use a log driver, such as ``awslogs`` , to export the access logs to a log storage service such as Amazon CloudWatch Logs. You can also specify a path in the Envoy container's file system to write the files to disk.
.. note::
The Envoy process must have write permissions to the path that you specify here. Otherwise, Envoy fails to bootstrap properly.
- **serviceDiscovery** *(dict) --*
The service discovery information for the virtual node. If your virtual node does not expect ingress traffic, you can omit this parameter.
- **dns** *(dict) --*
Specifies the DNS information for the virtual node.
- **hostname** *(string) --*
Specifies the DNS service discovery hostname for the virtual node.
- **status** *(dict) --*
The current status for the virtual node.
- **status** *(string) --*
The current status of the virtual node.
- **virtualNodeName** *(string) --*
The name of the virtual node.
:type meshName: string
:param meshName: **[REQUIRED]**
The name of the service mesh to delete the virtual node in.
:type virtualNodeName: string
:param virtualNodeName: **[REQUIRED]**
The name of the virtual node to delete.
:rtype: dict
:returns:
"""
pass
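Because this call fails while any virtual service still lists the node as its provider, a teardown script typically looks for dependent services first. The sketch below runs against plain dicts shaped like the virtual service ``spec`` structure above; no AWS call is made, and all names are hypothetical.

```python
# Find virtual services whose provider is the given virtual node, i.e. the
# services that must be deleted before delete_virtual_node can succeed.
def services_backed_by_node(service_specs, node_name):
    dependents = []
    for name, spec in service_specs.items():
        provider = spec.get("provider", {})
        node = provider.get("virtualNode", {}).get("virtualNodeName")
        if node == node_name:
            dependents.append(name)
    return dependents

specs = {
    "servicea.local": {"provider": {"virtualNode": {"virtualNodeName": "serviceA-node"}}},
    "serviceb.local": {"provider": {"virtualRouter": {"virtualRouterName": "serviceB-router"}}},
}
blockers = services_backed_by_node(specs, "serviceA-node")  # ["servicea.local"]
```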
def delete_virtual_router(self, meshName: str, virtualRouterName: str) -> Dict:
"""
Deletes an existing virtual router.
You must delete any routes associated with the virtual router before you can delete the router itself.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/DeleteVirtualRouter>`_
**Request Syntax**
::
response = client.delete_virtual_router(
meshName='string',
virtualRouterName='string'
)
**Response Syntax**
::
{
'virtualRouter': {
'meshName': 'string',
'metadata': {
'arn': 'string',
'createdAt': datetime(2015, 1, 1),
'lastUpdatedAt': datetime(2015, 1, 1),
'uid': 'string',
'version': 123
},
'spec': {
'listeners': [
{
'portMapping': {
'port': 123,
'protocol': 'http'|'tcp'
}
},
]
},
'status': {
'status': 'ACTIVE'|'DELETED'|'INACTIVE'
},
'virtualRouterName': 'string'
}
}
**Response Structure**
- *(dict) --*
- **virtualRouter** *(dict) --*
The virtual router that was deleted.
- **meshName** *(string) --*
The name of the service mesh that the virtual router resides in.
- **metadata** *(dict) --*
The associated metadata for the virtual router.
- **arn** *(string) --*
The full Amazon Resource Name (ARN) for the resource.
- **createdAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was created.
- **lastUpdatedAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was last updated.
- **uid** *(string) --*
The unique identifier for the resource.
- **version** *(integer) --*
The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
- **spec** *(dict) --*
The specifications of the virtual router.
- **listeners** *(list) --*
The listeners that the virtual router is expected to receive inbound traffic from. Currently only one listener is supported per virtual router.
- *(dict) --*
An object representing a virtual router listener.
- **portMapping** *(dict) --*
An object representing a virtual node or virtual router listener port mapping.
- **port** *(integer) --*
The port used for the port mapping.
- **protocol** *(string) --*
The protocol used for the port mapping.
- **status** *(dict) --*
The current status of the virtual router.
- **status** *(string) --*
The current status of the virtual router.
- **virtualRouterName** *(string) --*
The name of the virtual router.
:type meshName: string
:param meshName: **[REQUIRED]**
The name of the service mesh to delete the virtual router in.
:type virtualRouterName: string
:param virtualRouterName: **[REQUIRED]**
The name of the virtual router to delete.
:rtype: dict
:returns:
"""
pass
def delete_virtual_service(self, meshName: str, virtualServiceName: str) -> Dict:
"""
Deletes an existing virtual service.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/DeleteVirtualService>`_
**Request Syntax**
::
response = client.delete_virtual_service(
meshName='string',
virtualServiceName='string'
)
**Response Syntax**
::
{
'virtualService': {
'meshName': 'string',
'metadata': {
'arn': 'string',
'createdAt': datetime(2015, 1, 1),
'lastUpdatedAt': datetime(2015, 1, 1),
'uid': 'string',
'version': 123
},
'spec': {
'provider': {
'virtualNode': {
'virtualNodeName': 'string'
},
'virtualRouter': {
'virtualRouterName': 'string'
}
}
},
'status': {
'status': 'ACTIVE'|'DELETED'|'INACTIVE'
},
'virtualServiceName': 'string'
}
}
**Response Structure**
- *(dict) --*
- **virtualService** *(dict) --*
The virtual service that was deleted.
- **meshName** *(string) --*
The name of the service mesh that the virtual service resides in.
- **metadata** *(dict) --*
An object representing metadata for a resource.
- **arn** *(string) --*
The full Amazon Resource Name (ARN) for the resource.
- **createdAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was created.
- **lastUpdatedAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was last updated.
- **uid** *(string) --*
The unique identifier for the resource.
- **version** *(integer) --*
The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
- **spec** *(dict) --*
The specifications of the virtual service.
- **provider** *(dict) --*
The App Mesh object that is acting as the provider for a virtual service. You can specify a single virtual node or virtual router.
- **virtualNode** *(dict) --*
The virtual node associated with a virtual service.
- **virtualNodeName** *(string) --*
The name of the virtual node that is acting as a service provider.
- **virtualRouter** *(dict) --*
The virtual router associated with a virtual service.
- **virtualRouterName** *(string) --*
The name of the virtual router that is acting as a service provider.
- **status** *(dict) --*
The current status of the virtual service.
- **status** *(string) --*
The current status of the virtual service.
- **virtualServiceName** *(string) --*
The name of the virtual service.
:type meshName: string
:param meshName: **[REQUIRED]**
The name of the service mesh to delete the virtual service in.
:type virtualServiceName: string
:param virtualServiceName: **[REQUIRED]**
The name of the virtual service to delete.
:rtype: dict
:returns:
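As a quick sanity check after a delete call, the deleted object's status can be inspected. The sketch below is offline and illustrative: ``sample`` is hand-built to match the Response Syntax above, and the mesh and service names are hypothetical.

```python
# Hypothetical helper: confirm a delete_virtual_service response reports
# the DELETED status defined in the Response Structure above.
def deletion_confirmed(response):
    return response['virtualService']['status']['status'] == 'DELETED'

# Hand-built sample shaped like the Response Syntax (no API call is made).
sample = {
    'virtualService': {
        'meshName': 'demo-mesh',
        'status': {'status': 'DELETED'},
        'virtualServiceName': 'svc.demo.local',
    }
}

print(deletion_confirmed(sample))  # True
```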
"""
pass
def describe_mesh(self, meshName: str) -> Dict:
"""
Describes an existing service mesh.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/DescribeMesh>`_
**Request Syntax**
::
response = client.describe_mesh(
meshName='string'
)
**Response Syntax**
::
{
'mesh': {
'meshName': 'string',
'metadata': {
'arn': 'string',
'createdAt': datetime(2015, 1, 1),
'lastUpdatedAt': datetime(2015, 1, 1),
'uid': 'string',
'version': 123
},
'spec': {
'egressFilter': {
'type': 'ALLOW_ALL'|'DROP_ALL'
}
},
'status': {
'status': 'ACTIVE'|'DELETED'|'INACTIVE'
}
}
}
**Response Structure**
- *(dict) --*
- **mesh** *(dict) --*
The full description of your service mesh.
- **meshName** *(string) --*
The name of the service mesh.
- **metadata** *(dict) --*
The associated metadata for the service mesh.
- **arn** *(string) --*
The full Amazon Resource Name (ARN) for the resource.
- **createdAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was created.
- **lastUpdatedAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was last updated.
- **uid** *(string) --*
The unique identifier for the resource.
- **version** *(integer) --*
The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
- **spec** *(dict) --*
The associated specification for the service mesh.
- **egressFilter** *(dict) --*
The egress filter rules for the service mesh.
- **type** *(string) --*
The egress filter type. By default, the type is ``DROP_ALL`` , which allows egress only from virtual nodes to other defined resources in the service mesh (and any traffic to ``*.amazonaws.com`` for AWS API calls). You can set the egress filter type to ``ALLOW_ALL`` to allow egress to any endpoint inside or outside of the service mesh.
- **status** *(dict) --*
The status of the service mesh.
- **status** *(string) --*
The current mesh status.
:type meshName: string
:param meshName: **[REQUIRED]**
The name of the service mesh to describe.
:rtype: dict
:returns:
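The ``egressFilter`` rules above can be checked programmatically. This is an offline sketch against hand-built response dicts; treating an absent filter as ``DROP_ALL`` follows the default stated in the ``type`` field description.

```python
# Hypothetical helper: report whether a described mesh allows egress to
# arbitrary endpoints. Per the field description above, the egress filter
# type defaults to DROP_ALL when unset.
def egress_allowed(response):
    egress = response['mesh'].get('spec', {}).get('egressFilter', {})
    return egress.get('type', 'DROP_ALL') == 'ALLOW_ALL'

open_mesh = {'mesh': {'spec': {'egressFilter': {'type': 'ALLOW_ALL'}}}}
locked_mesh = {'mesh': {'spec': {}}}

print(egress_allowed(open_mesh), egress_allowed(locked_mesh))  # True False
```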
"""
pass
def describe_route(self, meshName: str, routeName: str, virtualRouterName: str) -> Dict:
"""
Describes an existing route.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/DescribeRoute>`_
**Request Syntax**
::
response = client.describe_route(
meshName='string',
routeName='string',
virtualRouterName='string'
)
**Response Syntax**
::
{
'route': {
'meshName': 'string',
'metadata': {
'arn': 'string',
'createdAt': datetime(2015, 1, 1),
'lastUpdatedAt': datetime(2015, 1, 1),
'uid': 'string',
'version': 123
},
'routeName': 'string',
'spec': {
'httpRoute': {
'action': {
'weightedTargets': [
{
'virtualNode': 'string',
'weight': 123
},
]
},
'match': {
'prefix': 'string'
}
},
'tcpRoute': {
'action': {
'weightedTargets': [
{
'virtualNode': 'string',
'weight': 123
},
]
}
}
},
'status': {
'status': 'ACTIVE'|'DELETED'|'INACTIVE'
},
'virtualRouterName': 'string'
}
}
**Response Structure**
- *(dict) --*
- **route** *(dict) --*
The full description of your route.
- **meshName** *(string) --*
The name of the service mesh that the route resides in.
- **metadata** *(dict) --*
The associated metadata for the route.
- **arn** *(string) --*
The full Amazon Resource Name (ARN) for the resource.
- **createdAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was created.
- **lastUpdatedAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was last updated.
- **uid** *(string) --*
The unique identifier for the resource.
- **version** *(integer) --*
The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
- **routeName** *(string) --*
The name of the route.
- **spec** *(dict) --*
The specifications of the route.
- **httpRoute** *(dict) --*
The HTTP routing information for the route.
- **action** *(dict) --*
The action to take if a match is determined.
- **weightedTargets** *(list) --*
The targets that traffic is routed to when a request matches the route. You can specify one or more targets and their relative weights to distribute traffic with.
- *(dict) --*
An object representing a target and its relative weight. Traffic is distributed across targets according to their relative weight. For example, a weighted target with a relative weight of 50 receives five times as much traffic as one with a relative weight of 10.
- **virtualNode** *(string) --*
The virtual node to associate with the weighted target.
- **weight** *(integer) --*
The relative weight of the weighted target.
- **match** *(dict) --*
The criteria for determining an HTTP request match.
- **prefix** *(string) --*
Specifies the path to match requests with. This parameter must always start with ``/`` , which by itself matches all requests to the virtual service name. You can also match for path-based routing of requests. For example, if your virtual service name is ``my-service.local`` and you want the route to match requests to ``my-service.local/metrics`` , your prefix should be ``/metrics`` .
- **tcpRoute** *(dict) --*
The TCP routing information for the route.
- **action** *(dict) --*
The action to take if a match is determined.
- **weightedTargets** *(list) --*
The targets that traffic is routed to when a request matches the route. You can specify one or more targets and their relative weights to distribute traffic with.
- *(dict) --*
An object representing a target and its relative weight. Traffic is distributed across targets according to their relative weight. For example, a weighted target with a relative weight of 50 receives five times as much traffic as one with a relative weight of 10.
- **virtualNode** *(string) --*
The virtual node to associate with the weighted target.
- **weight** *(integer) --*
The relative weight of the weighted target.
- **status** *(dict) --*
The status of the route.
- **status** *(string) --*
The current status for the route.
- **virtualRouterName** *(string) --*
The virtual router that the route is associated with.
:type meshName: string
:param meshName: **[REQUIRED]**
The name of the service mesh that the route resides in.
:type routeName: string
:param routeName: **[REQUIRED]**
The name of the route to describe.
:type virtualRouterName: string
:param virtualRouterName: **[REQUIRED]**
The name of the virtual router that the route is associated with.
:rtype: dict
:returns:
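The weighted-target description above ("a relative weight of 50 receives five times as much traffic as one with a relative weight of 10") can be made concrete by normalizing the weights. An offline sketch, using hypothetical node names:

```python
# Convert a weightedTargets list (as in the httpRoute/tcpRoute spec above)
# into fractional traffic shares by normalizing the relative weights.
def traffic_shares(weighted_targets):
    total = sum(t['weight'] for t in weighted_targets)
    return {t['virtualNode']: t['weight'] / total for t in weighted_targets}

targets = [
    {'virtualNode': 'node-a', 'weight': 50},
    {'virtualNode': 'node-b', 'weight': 10},
]
shares = traffic_shares(targets)
# node-a's share is five times node-b's, matching the description above.
```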
"""
pass
def describe_virtual_node(self, meshName: str, virtualNodeName: str) -> Dict:
"""
Describes an existing virtual node.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/DescribeVirtualNode>`_
**Request Syntax**
::
response = client.describe_virtual_node(
meshName='string',
virtualNodeName='string'
)
**Response Syntax**
::
{
'virtualNode': {
'meshName': 'string',
'metadata': {
'arn': 'string',
'createdAt': datetime(2015, 1, 1),
'lastUpdatedAt': datetime(2015, 1, 1),
'uid': 'string',
'version': 123
},
'spec': {
'backends': [
{
'virtualService': {
'virtualServiceName': 'string'
}
},
],
'listeners': [
{
'healthCheck': {
'healthyThreshold': 123,
'intervalMillis': 123,
'path': 'string',
'port': 123,
'protocol': 'http'|'tcp',
'timeoutMillis': 123,
'unhealthyThreshold': 123
},
'portMapping': {
'port': 123,
'protocol': 'http'|'tcp'
}
},
],
'logging': {
'accessLog': {
'file': {
'path': 'string'
}
}
},
'serviceDiscovery': {
'dns': {
'hostname': 'string'
}
}
},
'status': {
'status': 'ACTIVE'|'DELETED'|'INACTIVE'
},
'virtualNodeName': 'string'
}
}
**Response Structure**
- *(dict) --*
- **virtualNode** *(dict) --*
The full description of your virtual node.
- **meshName** *(string) --*
The name of the service mesh that the virtual node resides in.
- **metadata** *(dict) --*
The associated metadata for the virtual node.
- **arn** *(string) --*
The full Amazon Resource Name (ARN) for the resource.
- **createdAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was created.
- **lastUpdatedAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was last updated.
- **uid** *(string) --*
The unique identifier for the resource.
- **version** *(integer) --*
The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
- **spec** *(dict) --*
The specifications of the virtual node.
- **backends** *(list) --*
The backends that the virtual node is expected to send outbound traffic to.
- *(dict) --*
An object representing the backends that a virtual node is expected to send outbound traffic to.
- **virtualService** *(dict) --*
Specifies a virtual service to use as a backend for a virtual node.
- **virtualServiceName** *(string) --*
The name of the virtual service that is acting as a virtual node backend.
- **listeners** *(list) --*
The listeners that the virtual node is expected to receive inbound traffic from. Currently only one listener is supported per virtual node.
- *(dict) --*
An object representing a listener for a virtual node.
- **healthCheck** *(dict) --*
The health check information for the listener.
- **healthyThreshold** *(integer) --*
The number of consecutive successful health checks that must occur before declaring the listener healthy.
- **intervalMillis** *(integer) --*
The time period in milliseconds between each health check execution.
- **path** *(string) --*
The destination path for the health check request. This is required only if the specified protocol is HTTP. If the protocol is TCP, this parameter is ignored.
- **port** *(integer) --*
The destination port for the health check request. This port must match the port defined in the PortMapping for the listener.
- **protocol** *(string) --*
The protocol for the health check request.
- **timeoutMillis** *(integer) --*
The amount of time to wait when receiving a response from the health check, in milliseconds.
- **unhealthyThreshold** *(integer) --*
The number of consecutive failed health checks that must occur before declaring a virtual node unhealthy.
- **portMapping** *(dict) --*
The port mapping information for the listener.
- **port** *(integer) --*
The port used for the port mapping.
- **protocol** *(string) --*
The protocol used for the port mapping.
- **logging** *(dict) --*
The inbound and outbound access logging information for the virtual node.
- **accessLog** *(dict) --*
The access log configuration for a virtual node.
- **file** *(dict) --*
The file object to send virtual node access logs to.
- **path** *(string) --*
The file path to write access logs to. You can use ``/dev/stdout`` to send access logs to standard out and configure your Envoy container to use a log driver, such as ``awslogs`` , to export the access logs to a log storage service such as Amazon CloudWatch Logs. You can also specify a path in the Envoy container's file system to write the files to disk.
.. note::
The Envoy process must have write permissions to the path that you specify here. Otherwise, Envoy fails to bootstrap properly.
- **serviceDiscovery** *(dict) --*
The service discovery information for the virtual node. If your virtual node does not expect ingress traffic, you can omit this parameter.
- **dns** *(dict) --*
Specifies the DNS information for the virtual node.
- **hostname** *(string) --*
Specifies the DNS service discovery hostname for the virtual node.
- **status** *(dict) --*
The current status for the virtual node.
- **status** *(string) --*
The current status of the virtual node.
- **virtualNodeName** *(string) --*
The name of the virtual node.
:type meshName: string
:param meshName: **[REQUIRED]**
The name of the service mesh that the virtual node resides in.
:type virtualNodeName: string
:param virtualNodeName: **[REQUIRED]**
The name of the virtual node to describe.
:rtype: dict
:returns:
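The ``healthCheck`` port above must match the port in the listener's ``portMapping``. A small offline validator for that constraint, run against hand-built listener dicts shaped like the Response Syntax:

```python
# Validate the constraint stated above: when a health check specifies a
# port, it must equal the port in the listener's portMapping.
def health_check_port_matches(listener):
    health_check = listener.get('healthCheck')
    if not health_check or 'port' not in health_check:
        return True  # no explicit health-check port to conflict with
    return health_check['port'] == listener['portMapping']['port']

ok_listener = {
    'healthCheck': {'port': 8080, 'protocol': 'http'},
    'portMapping': {'port': 8080, 'protocol': 'http'},
}
bad_listener = {
    'healthCheck': {'port': 9090, 'protocol': 'http'},
    'portMapping': {'port': 8080, 'protocol': 'http'},
}
print(health_check_port_matches(ok_listener))   # True
print(health_check_port_matches(bad_listener))  # False
```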
"""
pass
def describe_virtual_router(self, meshName: str, virtualRouterName: str) -> Dict:
"""
Describes an existing virtual router.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/DescribeVirtualRouter>`_
**Request Syntax**
::
response = client.describe_virtual_router(
meshName='string',
virtualRouterName='string'
)
**Response Syntax**
::
{
'virtualRouter': {
'meshName': 'string',
'metadata': {
'arn': 'string',
'createdAt': datetime(2015, 1, 1),
'lastUpdatedAt': datetime(2015, 1, 1),
'uid': 'string',
'version': 123
},
'spec': {
'listeners': [
{
'portMapping': {
'port': 123,
'protocol': 'http'|'tcp'
}
},
]
},
'status': {
'status': 'ACTIVE'|'DELETED'|'INACTIVE'
},
'virtualRouterName': 'string'
}
}
**Response Structure**
- *(dict) --*
- **virtualRouter** *(dict) --*
The full description of your virtual router.
- **meshName** *(string) --*
The name of the service mesh that the virtual router resides in.
- **metadata** *(dict) --*
The associated metadata for the virtual router.
- **arn** *(string) --*
The full Amazon Resource Name (ARN) for the resource.
- **createdAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was created.
- **lastUpdatedAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was last updated.
- **uid** *(string) --*
The unique identifier for the resource.
- **version** *(integer) --*
The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
- **spec** *(dict) --*
The specifications of the virtual router.
- **listeners** *(list) --*
The listeners that the virtual router is expected to receive inbound traffic from. Currently only one listener is supported per virtual router.
- *(dict) --*
An object representing a virtual router listener.
- **portMapping** *(dict) --*
An object representing a virtual node or virtual router listener port mapping.
- **port** *(integer) --*
The port used for the port mapping.
- **protocol** *(string) --*
The protocol used for the port mapping.
- **status** *(dict) --*
The current status of the virtual router.
- **status** *(string) --*
The current status of the virtual router.
- **virtualRouterName** *(string) --*
The name of the virtual router.
:type meshName: string
:param meshName: **[REQUIRED]**
The name of the service mesh that the virtual router resides in.
:type virtualRouterName: string
:param virtualRouterName: **[REQUIRED]**
The name of the virtual router to describe.
:rtype: dict
:returns:
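Because only one listener is currently supported per virtual router, the listener port can be read directly from the spec. An offline sketch against a hand-built ``virtualRouter`` dict:

```python
# Return the port of the router's sole listener, or None when no listener
# is configured. Only one listener per virtual router is supported today.
def router_listener_port(virtual_router):
    listeners = virtual_router['spec'].get('listeners', [])
    if not listeners:
        return None
    return listeners[0]['portMapping']['port']

router = {
    'spec': {'listeners': [{'portMapping': {'port': 8080, 'protocol': 'http'}}]}
}
print(router_listener_port(router))  # 8080
```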
"""
pass
def describe_virtual_service(self, meshName: str, virtualServiceName: str) -> Dict:
"""
Describes an existing virtual service.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/DescribeVirtualService>`_
**Request Syntax**
::
response = client.describe_virtual_service(
meshName='string',
virtualServiceName='string'
)
**Response Syntax**
::
{
'virtualService': {
'meshName': 'string',
'metadata': {
'arn': 'string',
'createdAt': datetime(2015, 1, 1),
'lastUpdatedAt': datetime(2015, 1, 1),
'uid': 'string',
'version': 123
},
'spec': {
'provider': {
'virtualNode': {
'virtualNodeName': 'string'
},
'virtualRouter': {
'virtualRouterName': 'string'
}
}
},
'status': {
'status': 'ACTIVE'|'DELETED'|'INACTIVE'
},
'virtualServiceName': 'string'
}
}
**Response Structure**
- *(dict) --*
- **virtualService** *(dict) --*
The full description of your virtual service.
- **meshName** *(string) --*
The name of the service mesh that the virtual service resides in.
- **metadata** *(dict) --*
An object representing metadata for a resource.
- **arn** *(string) --*
The full Amazon Resource Name (ARN) for the resource.
- **createdAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was created.
- **lastUpdatedAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was last updated.
- **uid** *(string) --*
The unique identifier for the resource.
- **version** *(integer) --*
The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
- **spec** *(dict) --*
The specifications of the virtual service.
- **provider** *(dict) --*
The App Mesh object that is acting as the provider for a virtual service. You can specify a single virtual node or virtual router.
- **virtualNode** *(dict) --*
The virtual node associated with a virtual service.
- **virtualNodeName** *(string) --*
The name of the virtual node that is acting as a service provider.
- **virtualRouter** *(dict) --*
The virtual router associated with a virtual service.
- **virtualRouterName** *(string) --*
The name of the virtual router that is acting as a service provider.
- **status** *(dict) --*
The current status of the virtual service.
- **status** *(string) --*
The current status of the virtual service.
- **virtualServiceName** *(string) --*
The name of the virtual service.
:type meshName: string
:param meshName: **[REQUIRED]**
The name of the service mesh that the virtual service resides in.
:type virtualServiceName: string
:param virtualServiceName: **[REQUIRED]**
The name of the virtual service to describe.
:rtype: dict
:returns:
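The provider above is a single virtual node or virtual router, so a response can be dispatched on whichever key is present. An offline sketch with hypothetical names:

```python
# Report which App Mesh object provides a virtual service, per the
# provider structure above (a single virtual node or virtual router).
def provider_of(virtual_service):
    provider = virtual_service['spec'].get('provider', {})
    if 'virtualNode' in provider:
        return 'virtualNode', provider['virtualNode']['virtualNodeName']
    if 'virtualRouter' in provider:
        return 'virtualRouter', provider['virtualRouter']['virtualRouterName']
    return None, None

node_backed = {'spec': {'provider': {'virtualNode': {'virtualNodeName': 'web-node'}}}}
router_backed = {'spec': {'provider': {'virtualRouter': {'virtualRouterName': 'web-router'}}}}
print(provider_of(node_backed))    # ('virtualNode', 'web-node')
print(provider_of(router_backed))  # ('virtualRouter', 'web-router')
```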
"""
pass
def generate_presigned_url(self, ClientMethod: str = None, Params: Dict = None, ExpiresIn: int = None, HttpMethod: str = None):
"""
Generate a presigned url given a client, its method, and arguments
:type ClientMethod: string
:param ClientMethod: The client method to presign for
:type Params: dict
:param Params: The parameters normally passed to
``ClientMethod``.
:type ExpiresIn: int
:param ExpiresIn: The number of seconds the presigned url is valid
for. By default it expires in an hour (3600 seconds)
:type HttpMethod: string
:param HttpMethod: The http method to use on the generated url. By
default, the http method is whatever is used in the method's model.
:returns: The presigned url
"""
pass
def get_paginator(self, operation_name: str = None) -> Paginator:
"""
Create a paginator for an operation.
:type operation_name: string
:param operation_name: The operation name. This is the same name
as the method name on the client. For example, if the
method name is ``create_foo``, and you'd normally invoke the
operation as ``client.create_foo(**kwargs)``, if the
``create_foo`` operation can be paginated, you can use the
call ``client.get_paginator("create_foo")``.
:raise OperationNotPageableError: Raised if the operation is not
pageable. You can use the ``client.can_paginate`` method to
check if an operation is pageable.
:rtype: L{botocore.paginate.Paginator}
:return: A paginator object.
"""
pass
def get_waiter(self, waiter_name: str = None) -> Waiter:
"""
Returns an object that can wait for some condition.
:type waiter_name: str
:param waiter_name: The name of the waiter to get. See the waiters
section of the service docs for a list of available waiters.
:returns: The specified waiter object.
:rtype: botocore.waiter.Waiter
"""
pass
def list_meshes(self, limit: int = None, nextToken: str = None) -> Dict:
"""
Returns a list of existing service meshes.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/ListMeshes>`_
**Request Syntax**
::
response = client.list_meshes(
limit=123,
nextToken='string'
)
**Response Syntax**
::
{
'meshes': [
{
'arn': 'string',
'meshName': 'string'
},
],
'nextToken': 'string'
}
**Response Structure**
- *(dict) --*
- **meshes** *(list) --*
The list of existing service meshes.
- *(dict) --*
An object representing a service mesh returned by a list operation.
- **arn** *(string) --*
The full Amazon Resource Name (ARN) of the service mesh.
- **meshName** *(string) --*
The name of the service mesh.
- **nextToken** *(string) --*
The ``nextToken`` value to include in a future ``ListMeshes`` request. When the results of a ``ListMeshes`` request exceed ``limit`` , you can use this value to retrieve the next page of results. This value is ``null`` when there are no more results to return.
:type limit: integer
:param limit:
The maximum number of results returned by ``ListMeshes`` in paginated output. When you use this parameter, ``ListMeshes`` returns only ``limit`` results in a single page along with a ``nextToken`` response element. You can see the remaining results of the initial request by sending another ``ListMeshes`` request with the returned ``nextToken`` value. This value can be between 1 and 100. If you don't use this parameter, ``ListMeshes`` returns up to 100 results and a ``nextToken`` value if applicable.
:type nextToken: string
:param nextToken:
The ``nextToken`` value returned from a previous paginated ``ListMeshes`` request where ``limit`` was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the ``nextToken`` value.
.. note::
This token should be treated as an opaque identifier that is used only to retrieve the next items in a list and not for other programmatic purposes.
:rtype: dict
:returns:
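The ``nextToken`` contract above (an opaque token, ``null`` on the last page) leads to the usual drain loop. The sketch below substitutes a tiny in-memory stand-in for the client so it runs without AWS access; in real use the same loop would call ``client.list_meshes(...)``.

```python
# In-memory stand-in that serves two pages the way list_meshes does.
class FakeMeshClient:
    _pages = {
        None: {'meshes': [{'arn': 'arn:1', 'meshName': 'mesh-a'}],
               'nextToken': 'page-2'},
        'page-2': {'meshes': [{'arn': 'arn:2', 'meshName': 'mesh-b'}],
                   'nextToken': None},
    }

    def list_meshes(self, limit=None, nextToken=None):
        return self._pages[nextToken]

def all_mesh_names(client):
    """Drain every page, following nextToken until it comes back empty."""
    names, token = [], None
    while True:
        kwargs = {'nextToken': token} if token else {}
        page = client.list_meshes(**kwargs)
        names.extend(m['meshName'] for m in page['meshes'])
        token = page.get('nextToken')
        if not token:  # null nextToken means no more results
            return names

print(all_mesh_names(FakeMeshClient()))  # ['mesh-a', 'mesh-b']
```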
"""
pass
def list_routes(self, meshName: str, virtualRouterName: str, limit: int = None, nextToken: str = None) -> Dict:
"""
Returns a list of existing routes in a service mesh.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/ListRoutes>`_
**Request Syntax**
::
response = client.list_routes(
limit=123,
meshName='string',
nextToken='string',
virtualRouterName='string'
)
**Response Syntax**
::
{
'nextToken': 'string',
'routes': [
{
'arn': 'string',
'meshName': 'string',
'routeName': 'string',
'virtualRouterName': 'string'
},
]
}
**Response Structure**
- *(dict) --*
- **nextToken** *(string) --*
The ``nextToken`` value to include in a future ``ListRoutes`` request. When the results of a ``ListRoutes`` request exceed ``limit`` , you can use this value to retrieve the next page of results. This value is ``null`` when there are no more results to return.
- **routes** *(list) --*
The list of existing routes for the specified service mesh and virtual router.
- *(dict) --*
An object representing a route returned by a list operation.
- **arn** *(string) --*
The full Amazon Resource Name (ARN) for the route.
- **meshName** *(string) --*
The name of the service mesh that the route resides in.
- **routeName** *(string) --*
The name of the route.
- **virtualRouterName** *(string) --*
The virtual router that the route is associated with.
:type limit: integer
:param limit:
The maximum number of results returned by ``ListRoutes`` in paginated output. When you use this parameter, ``ListRoutes`` returns only ``limit`` results in a single page along with a ``nextToken`` response element. You can see the remaining results of the initial request by sending another ``ListRoutes`` request with the returned ``nextToken`` value. This value can be between 1 and 100. If you don't use this parameter, ``ListRoutes`` returns up to 100 results and a ``nextToken`` value if applicable.
:type meshName: string
:param meshName: **[REQUIRED]**
The name of the service mesh to list routes in.
:type nextToken: string
:param nextToken:
The ``nextToken`` value returned from a previous paginated ``ListRoutes`` request where ``limit`` was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the ``nextToken`` value.
:type virtualRouterName: string
:param virtualRouterName: **[REQUIRED]**
The name of the virtual router to list routes in.
:rtype: dict
:returns:
"""
pass
def list_tags_for_resource(self, resourceArn: str, limit: int = None, nextToken: str = None) -> Dict:
"""
List the tags for an App Mesh resource.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/ListTagsForResource>`_
**Request Syntax**
::
response = client.list_tags_for_resource(
limit=123,
nextToken='string',
resourceArn='string'
)
**Response Syntax**
::
{
'nextToken': 'string',
'tags': [
{
'key': 'string',
'value': 'string'
},
]
}
**Response Structure**
- *(dict) --*
- **nextToken** *(string) --*
The ``nextToken`` value to include in a future ``ListTagsForResource`` request. When the results of a ``ListTagsForResource`` request exceed ``limit`` , you can use this value to retrieve the next page of results. This value is ``null`` when there are no more results to return.
- **tags** *(list) --*
The tags for the resource.
- *(dict) --*
Optional metadata that you apply to a resource to assist with categorization and organization. Each tag consists of a key and an optional value, both of which you define. Tag keys can be up to 128 characters long, and tag values can be up to 256 characters long.
- **key** *(string) --*
One part of a key-value pair that makes up a tag. A ``key`` is a general label that acts like a category for more specific tag values.
- **value** *(string) --*
The optional part of a key-value pair that makes up a tag. A ``value`` acts as a descriptor within a tag category (key).
:type limit: integer
:param limit:
The maximum number of tag results returned by ``ListTagsForResource`` in paginated output. When this parameter is used, ``ListTagsForResource`` returns only ``limit`` results in a single page along with a ``nextToken`` response element. You can see the remaining results of the initial request by sending another ``ListTagsForResource`` request with the returned ``nextToken`` value. This value can be between 1 and 100. If you don't use this parameter, ``ListTagsForResource`` returns up to 100 results and a ``nextToken`` value if applicable.
:type nextToken: string
:param nextToken:
The ``nextToken`` value returned from a previous paginated ``ListTagsForResource`` request where ``limit`` was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the ``nextToken`` value.
:type resourceArn: string
:param resourceArn: **[REQUIRED]**
The Amazon Resource Name (ARN) that identifies the resource to list the tags for.
:rtype: dict
:returns:
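Tags come back as a list of ``{'key': ..., 'value': ...}`` dicts; for lookups it is convenient to collapse them into a plain mapping. An offline sketch (the tag names are hypothetical):

```python
# Collapse the tag-list shape above into a dict. 'value' is optional per
# the structure above, so missing values map to None here (an
# illustrative choice, not API behavior).
def tags_as_dict(tag_list):
    return {tag['key']: tag.get('value') for tag in tag_list}

tags = [{'key': 'team', 'value': 'payments'}, {'key': 'temporary'}]
print(tags_as_dict(tags))  # {'team': 'payments', 'temporary': None}
```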
"""
pass
def list_virtual_nodes(self, meshName: str, limit: int = None, nextToken: str = None) -> Dict:
"""
Returns a list of existing virtual nodes.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/ListVirtualNodes>`_
**Request Syntax**
::
response = client.list_virtual_nodes(
limit=123,
meshName='string',
nextToken='string'
)
**Response Syntax**
::
{
'nextToken': 'string',
'virtualNodes': [
{
'arn': 'string',
'meshName': 'string',
'virtualNodeName': 'string'
},
]
}
**Response Structure**
- *(dict) --*
- **nextToken** *(string) --*
The ``nextToken`` value to include in a future ``ListVirtualNodes`` request. When the results of a ``ListVirtualNodes`` request exceed ``limit`` , you can use this value to retrieve the next page of results. This value is ``null`` when there are no more results to return.
- **virtualNodes** *(list) --*
The list of existing virtual nodes for the specified service mesh.
- *(dict) --*
An object representing a virtual node returned by a list operation.
- **arn** *(string) --*
The full Amazon Resource Name (ARN) for the virtual node.
- **meshName** *(string) --*
The name of the service mesh that the virtual node resides in.
- **virtualNodeName** *(string) --*
The name of the virtual node.
:type limit: integer
:param limit:
The maximum number of results returned by ``ListVirtualNodes`` in paginated output. When you use this parameter, ``ListVirtualNodes`` returns only ``limit`` results in a single page along with a ``nextToken`` response element. You can see the remaining results of the initial request by sending another ``ListVirtualNodes`` request with the returned ``nextToken`` value. This value can be between 1 and 100. If you don't use this parameter, ``ListVirtualNodes`` returns up to 100 results and a ``nextToken`` value if applicable.
:type meshName: string
:param meshName: **[REQUIRED]**
The name of the service mesh to list virtual nodes in.
:type nextToken: string
:param nextToken:
The ``nextToken`` value returned from a previous paginated ``ListVirtualNodes`` request where ``limit`` was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the ``nextToken`` value.
:rtype: dict
:returns:
"""
pass
def list_virtual_routers(self, meshName: str, limit: int = None, nextToken: str = None) -> Dict:
"""
Returns a list of existing virtual routers in a service mesh.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/ListVirtualRouters>`_
**Request Syntax**
::
response = client.list_virtual_routers(
limit=123,
meshName='string',
nextToken='string'
)
**Response Syntax**
::
{
'nextToken': 'string',
'virtualRouters': [
{
'arn': 'string',
'meshName': 'string',
'virtualRouterName': 'string'
},
]
}
**Response Structure**
- *(dict) --*
- **nextToken** *(string) --*
The ``nextToken`` value to include in a future ``ListVirtualRouters`` request. When the results of a ``ListVirtualRouters`` request exceed ``limit`` , you can use this value to retrieve the next page of results. This value is ``null`` when there are no more results to return.
- **virtualRouters** *(list) --*
The list of existing virtual routers for the specified service mesh.
- *(dict) --*
An object representing a virtual router returned by a list operation.
- **arn** *(string) --*
The full Amazon Resource Name (ARN) for the virtual router.
- **meshName** *(string) --*
The name of the service mesh that the virtual router resides in.
- **virtualRouterName** *(string) --*
The name of the virtual router.
:type limit: integer
:param limit:
The maximum number of results returned by ``ListVirtualRouters`` in paginated output. When you use this parameter, ``ListVirtualRouters`` returns only ``limit`` results in a single page along with a ``nextToken`` response element. You can see the remaining results of the initial request by sending another ``ListVirtualRouters`` request with the returned ``nextToken`` value. This value can be between 1 and 100. If you don't use this parameter, ``ListVirtualRouters`` returns up to 100 results and a ``nextToken`` value if applicable.
:type meshName: string
:param meshName: **[REQUIRED]**
The name of the service mesh to list virtual routers in.
:type nextToken: string
:param nextToken:
The ``nextToken`` value returned from a previous paginated ``ListVirtualRouters`` request where ``limit`` was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the ``nextToken`` value.
:rtype: dict
:returns:
"""
pass
def list_virtual_services(self, meshName: str, limit: int = None, nextToken: str = None) -> Dict:
"""
Returns a list of existing virtual services in a service mesh.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/ListVirtualServices>`_
**Request Syntax**
::
response = client.list_virtual_services(
limit=123,
meshName='string',
nextToken='string'
)
**Response Syntax**
::
{
'nextToken': 'string',
'virtualServices': [
{
'arn': 'string',
'meshName': 'string',
'virtualServiceName': 'string'
},
]
}
**Response Structure**
- *(dict) --*
- **nextToken** *(string) --*
The ``nextToken`` value to include in a future ``ListVirtualServices`` request. When the results of a ``ListVirtualServices`` request exceed ``limit`` , you can use this value to retrieve the next page of results. This value is ``null`` when there are no more results to return.
- **virtualServices** *(list) --*
The list of existing virtual services for the specified service mesh.
- *(dict) --*
An object representing a virtual service returned by a list operation.
- **arn** *(string) --*
The full Amazon Resource Name (ARN) for the virtual service.
- **meshName** *(string) --*
The name of the service mesh that the virtual service resides in.
- **virtualServiceName** *(string) --*
The name of the virtual service.
:type limit: integer
:param limit:
The maximum number of results returned by ``ListVirtualServices`` in paginated output. When you use this parameter, ``ListVirtualServices`` returns only ``limit`` results in a single page along with a ``nextToken`` response element. You can see the remaining results of the initial request by sending another ``ListVirtualServices`` request with the returned ``nextToken`` value. This value can be between 1 and 100. If you don't use this parameter, ``ListVirtualServices`` returns up to 100 results and a ``nextToken`` value if applicable.
:type meshName: string
:param meshName: **[REQUIRED]**
The name of the service mesh to list virtual services in.
:type nextToken: string
:param nextToken:
The ``nextToken`` value returned from a previous paginated ``ListVirtualServices`` request where ``limit`` was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the ``nextToken`` value.
:rtype: dict
:returns:
"""
pass
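The three list operations above share the same ``limit``/``nextToken`` pagination contract: a page is complete when the response carries no ``nextToken``. A minimal, hypothetical helper that drains all pages is sketched below; ``fake_list`` is a stand-in for a real client call such as ``client.list_virtual_services`` and is not part of this API.

```python
def paginate_all(list_fn, result_key, **kwargs):
    """Collect every item from a limit/nextToken-paginated list call."""
    items, token = [], None
    while True:
        if token is not None:
            kwargs['nextToken'] = token
        page = list_fn(**kwargs)
        items.extend(page[result_key])
        token = page.get('nextToken')
        if token is None:  # a missing/null token means there are no more pages
            return items

# Hypothetical stand-in for a real AppMesh list call, returning two pages.
def fake_list(meshName, limit=None, nextToken=None):
    if nextToken is None:
        return {'virtualServices': ['a.local', 'b.local'], 'nextToken': 'p2'}
    return {'virtualServices': ['c.local']}

print(paginate_all(fake_list, 'virtualServices', meshName='my-mesh'))
# ['a.local', 'b.local', 'c.local']
```

The same helper works for ``ListVirtualNodes`` and ``ListVirtualRouters`` by changing ``result_key``.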
def tag_resource(self, resourceArn: str, tags: List) -> Dict:
"""
Associates the specified tags to a resource with the specified ``resourceArn`` . If existing tags on a resource aren't specified in the request parameters, they aren't changed. When a resource is deleted, the tags associated with that resource are also deleted.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/TagResource>`_
**Request Syntax**
::
response = client.tag_resource(
resourceArn='string',
tags=[
{
'key': 'string',
'value': 'string'
},
]
)
**Response Syntax**
::
{}
**Response Structure**
- *(dict) --*
:type resourceArn: string
:param resourceArn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the resource to add tags to.
:type tags: list
:param tags: **[REQUIRED]**
The tags to add to the resource. A tag is an array of key-value pairs. Tag keys can have a maximum character length of 128 characters, and tag values can have a maximum length of 256 characters.
- *(dict) --*
Optional metadata that you apply to a resource to assist with categorization and organization. Each tag consists of a key and an optional value, both of which you define. Tag keys can have a maximum character length of 128 characters, and tag values can have a maximum length of 256 characters.
- **key** *(string) --* **[REQUIRED]**
One part of a key-value pair that makes up a tag. A ``key`` is a general label that acts like a category for more specific tag values.
- **value** *(string) --*
The optional part of a key-value pair that makes up a tag. A ``value`` acts as a descriptor within a tag category (key).
:rtype: dict
:returns:
"""
pass
def untag_resource(self, resourceArn: str, tagKeys: List) -> Dict:
"""
Deletes specified tags from a resource.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/UntagResource>`_
**Request Syntax**
::
response = client.untag_resource(
resourceArn='string',
tagKeys=[
'string',
]
)
**Response Syntax**
::
{}
**Response Structure**
- *(dict) --*
:type resourceArn: string
:param resourceArn: **[REQUIRED]**
The Amazon Resource Name (ARN) of the resource to delete tags from.
:type tagKeys: list
:param tagKeys: **[REQUIRED]**
The keys of the tags to be removed.
- *(string) --*
:rtype: dict
:returns:
"""
pass
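The documented limits for ``tag_resource`` are 128 characters per tag key and 256 per tag value, with the value optional. A hypothetical pre-flight validator (not part of the client) that mirrors those limits before making the API call:

```python
def validate_tags(tags):
    """Check a tags list against the documented App Mesh limits:
    key required and <= 128 chars, value optional and <= 256 chars."""
    for tag in tags:
        key = tag.get('key')
        if not key or len(key) > 128:
            raise ValueError(f"invalid tag key: {key!r}")
        if len(tag.get('value', '')) > 256:
            raise ValueError(f"tag value too long for key {key!r}")
    return True

validate_tags([{'key': 'env', 'value': 'prod'}])  # passes
```

Running this locally before ``tag_resource`` turns a server-side validation error into an immediate, descriptive exception.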
def update_mesh(self, meshName: str, clientToken: str = None, spec: Dict = None) -> Dict:
"""
Updates an existing service mesh.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/UpdateMesh>`_
**Request Syntax**
::
response = client.update_mesh(
clientToken='string',
meshName='string',
spec={
'egressFilter': {
'type': 'ALLOW_ALL'|'DROP_ALL'
}
}
)
**Response Syntax**
::
{
'mesh': {
'meshName': 'string',
'metadata': {
'arn': 'string',
'createdAt': datetime(2015, 1, 1),
'lastUpdatedAt': datetime(2015, 1, 1),
'uid': 'string',
'version': 123
},
'spec': {
'egressFilter': {
'type': 'ALLOW_ALL'|'DROP_ALL'
}
},
'status': {
'status': 'ACTIVE'|'DELETED'|'INACTIVE'
}
}
}
**Response Structure**
- *(dict) --*
- **mesh** *(dict) --*
An object representing a service mesh returned by a describe operation.
- **meshName** *(string) --*
The name of the service mesh.
- **metadata** *(dict) --*
The associated metadata for the service mesh.
- **arn** *(string) --*
The full Amazon Resource Name (ARN) for the resource.
- **createdAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was created.
- **lastUpdatedAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was last updated.
- **uid** *(string) --*
The unique identifier for the resource.
- **version** *(integer) --*
The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
- **spec** *(dict) --*
The associated specification for the service mesh.
- **egressFilter** *(dict) --*
The egress filter rules for the service mesh.
- **type** *(string) --*
The egress filter type. By default, the type is ``DROP_ALL`` , which allows egress only from virtual nodes to other defined resources in the service mesh (and any traffic to ``*.amazonaws.com`` for AWS API calls). You can set the egress filter type to ``ALLOW_ALL`` to allow egress to any endpoint inside or outside of the service mesh.
- **status** *(dict) --*
The status of the service mesh.
- **status** *(string) --*
The current mesh status.
:type clientToken: string
:param clientToken:
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 36 letters, numbers, hyphens, and underscores are allowed.
This field is autopopulated if not provided.
:type meshName: string
:param meshName: **[REQUIRED]**
The name of the service mesh to update.
:type spec: dict
:param spec:
The service mesh specification to apply.
- **egressFilter** *(dict) --*
The egress filter rules for the service mesh.
- **type** *(string) --* **[REQUIRED]**
The egress filter type. By default, the type is ``DROP_ALL`` , which allows egress only from virtual nodes to other defined resources in the service mesh (and any traffic to ``*.amazonaws.com`` for AWS API calls). You can set the egress filter type to ``ALLOW_ALL`` to allow egress to any endpoint inside or outside of the service mesh.
:rtype: dict
:returns:
"""
pass
def update_route(self, meshName: str, routeName: str, spec: Dict, virtualRouterName: str, clientToken: str = None) -> Dict:
"""
Updates an existing route for a specified service mesh and virtual router.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/UpdateRoute>`_
**Request Syntax**
::
response = client.update_route(
clientToken='string',
meshName='string',
routeName='string',
spec={
'httpRoute': {
'action': {
'weightedTargets': [
{
'virtualNode': 'string',
'weight': 123
},
]
},
'match': {
'prefix': 'string'
}
},
'tcpRoute': {
'action': {
'weightedTargets': [
{
'virtualNode': 'string',
'weight': 123
},
]
}
}
},
virtualRouterName='string'
)
**Response Syntax**
::
{
'route': {
'meshName': 'string',
'metadata': {
'arn': 'string',
'createdAt': datetime(2015, 1, 1),
'lastUpdatedAt': datetime(2015, 1, 1),
'uid': 'string',
'version': 123
},
'routeName': 'string',
'spec': {
'httpRoute': {
'action': {
'weightedTargets': [
{
'virtualNode': 'string',
'weight': 123
},
]
},
'match': {
'prefix': 'string'
}
},
'tcpRoute': {
'action': {
'weightedTargets': [
{
'virtualNode': 'string',
'weight': 123
},
]
}
}
},
'status': {
'status': 'ACTIVE'|'DELETED'|'INACTIVE'
},
'virtualRouterName': 'string'
}
}
**Response Structure**
- *(dict) --*
- **route** *(dict) --*
A full description of the route that was updated.
- **meshName** *(string) --*
The name of the service mesh that the route resides in.
- **metadata** *(dict) --*
The associated metadata for the route.
- **arn** *(string) --*
The full Amazon Resource Name (ARN) for the resource.
- **createdAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was created.
- **lastUpdatedAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was last updated.
- **uid** *(string) --*
The unique identifier for the resource.
- **version** *(integer) --*
The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
- **routeName** *(string) --*
The name of the route.
- **spec** *(dict) --*
The specifications of the route.
- **httpRoute** *(dict) --*
The HTTP routing information for the route.
- **action** *(dict) --*
The action to take if a match is determined.
- **weightedTargets** *(list) --*
The targets that traffic is routed to when a request matches the route. You can specify one or more targets and their relative weights to distribute traffic with.
- *(dict) --*
An object representing a target and its relative weight. Traffic is distributed across targets according to their relative weight. For example, a weighted target with a relative weight of 50 receives five times as much traffic as one with a relative weight of 10.
- **virtualNode** *(string) --*
The virtual node to associate with the weighted target.
- **weight** *(integer) --*
The relative weight of the weighted target.
- **match** *(dict) --*
The criteria for determining an HTTP request match.
- **prefix** *(string) --*
Specifies the path to match requests with. This parameter must always start with ``/`` , which by itself matches all requests to the virtual service name. You can also match for path-based routing of requests. For example, if your virtual service name is ``my-service.local`` and you want the route to match requests to ``my-service.local/metrics`` , your prefix should be ``/metrics`` .
- **tcpRoute** *(dict) --*
The TCP routing information for the route.
- **action** *(dict) --*
The action to take if a match is determined.
- **weightedTargets** *(list) --*
The targets that traffic is routed to when a request matches the route. You can specify one or more targets and their relative weights to distribute traffic with.
- *(dict) --*
An object representing a target and its relative weight. Traffic is distributed across targets according to their relative weight. For example, a weighted target with a relative weight of 50 receives five times as much traffic as one with a relative weight of 10.
- **virtualNode** *(string) --*
The virtual node to associate with the weighted target.
- **weight** *(integer) --*
The relative weight of the weighted target.
- **status** *(dict) --*
The status of the route.
- **status** *(string) --*
The current status for the route.
- **virtualRouterName** *(string) --*
The virtual router that the route is associated with.
:type clientToken: string
:param clientToken:
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 36 letters, numbers, hyphens, and underscores are allowed.
This field is autopopulated if not provided.
:type meshName: string
:param meshName: **[REQUIRED]**
The name of the service mesh that the route resides in.
:type routeName: string
:param routeName: **[REQUIRED]**
The name of the route to update.
:type spec: dict
:param spec: **[REQUIRED]**
The new route specification to apply. This overwrites the existing data.
- **httpRoute** *(dict) --*
The HTTP routing information for the route.
- **action** *(dict) --* **[REQUIRED]**
The action to take if a match is determined.
- **weightedTargets** *(list) --* **[REQUIRED]**
The targets that traffic is routed to when a request matches the route. You can specify one or more targets and their relative weights to distribute traffic with.
- *(dict) --*
An object representing a target and its relative weight. Traffic is distributed across targets according to their relative weight. For example, a weighted target with a relative weight of 50 receives five times as much traffic as one with a relative weight of 10.
- **virtualNode** *(string) --* **[REQUIRED]**
The virtual node to associate with the weighted target.
- **weight** *(integer) --* **[REQUIRED]**
The relative weight of the weighted target.
- **match** *(dict) --* **[REQUIRED]**
The criteria for determining an HTTP request match.
- **prefix** *(string) --* **[REQUIRED]**
Specifies the path to match requests with. This parameter must always start with ``/`` , which by itself matches all requests to the virtual service name. You can also match for path-based routing of requests. For example, if your virtual service name is ``my-service.local`` and you want the route to match requests to ``my-service.local/metrics`` , your prefix should be ``/metrics`` .
- **tcpRoute** *(dict) --*
The TCP routing information for the route.
- **action** *(dict) --* **[REQUIRED]**
The action to take if a match is determined.
- **weightedTargets** *(list) --* **[REQUIRED]**
The targets that traffic is routed to when a request matches the route. You can specify one or more targets and their relative weights to distribute traffic with.
- *(dict) --*
An object representing a target and its relative weight. Traffic is distributed across targets according to their relative weight. For example, a weighted target with a relative weight of 50 receives five times as much traffic as one with a relative weight of 10.
- **virtualNode** *(string) --* **[REQUIRED]**
The virtual node to associate with the weighted target.
- **weight** *(integer) --* **[REQUIRED]**
The relative weight of the weighted target.
:type virtualRouterName: string
:param virtualRouterName: **[REQUIRED]**
The name of the virtual router that the route is associated with.
:rtype: dict
:returns:
"""
pass
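Route weights are relative, not percentages: as the docstring notes, a target with weight 50 receives five times the traffic of one with weight 10. A small illustrative helper (not part of the client) that converts a ``weightedTargets`` list into traffic fractions:

```python
def traffic_shares(weighted_targets):
    """Map each virtual node to its fraction of route traffic,
    derived from the relative weights in weightedTargets."""
    total = sum(t['weight'] for t in weighted_targets)
    return {t['virtualNode']: t['weight'] / total for t in weighted_targets}

shares = traffic_shares([
    {'virtualNode': 'v1', 'weight': 50},
    {'virtualNode': 'v2', 'weight': 10},
])
# v1 receives five times the traffic of v2, matching the docstring example.
```

This is handy for sanity-checking a spec before passing it to ``update_route``.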
def update_virtual_node(self, meshName: str, spec: Dict, virtualNodeName: str, clientToken: str = None) -> Dict:
"""
Updates an existing virtual node in a specified service mesh.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/UpdateVirtualNode>`_
**Request Syntax**
::
response = client.update_virtual_node(
clientToken='string',
meshName='string',
spec={
'backends': [
{
'virtualService': {
'virtualServiceName': 'string'
}
},
],
'listeners': [
{
'healthCheck': {
'healthyThreshold': 123,
'intervalMillis': 123,
'path': 'string',
'port': 123,
'protocol': 'http'|'tcp',
'timeoutMillis': 123,
'unhealthyThreshold': 123
},
'portMapping': {
'port': 123,
'protocol': 'http'|'tcp'
}
},
],
'logging': {
'accessLog': {
'file': {
'path': 'string'
}
}
},
'serviceDiscovery': {
'dns': {
'hostname': 'string'
}
}
},
virtualNodeName='string'
)
**Response Syntax**
::
{
'virtualNode': {
'meshName': 'string',
'metadata': {
'arn': 'string',
'createdAt': datetime(2015, 1, 1),
'lastUpdatedAt': datetime(2015, 1, 1),
'uid': 'string',
'version': 123
},
'spec': {
'backends': [
{
'virtualService': {
'virtualServiceName': 'string'
}
},
],
'listeners': [
{
'healthCheck': {
'healthyThreshold': 123,
'intervalMillis': 123,
'path': 'string',
'port': 123,
'protocol': 'http'|'tcp',
'timeoutMillis': 123,
'unhealthyThreshold': 123
},
'portMapping': {
'port': 123,
'protocol': 'http'|'tcp'
}
},
],
'logging': {
'accessLog': {
'file': {
'path': 'string'
}
}
},
'serviceDiscovery': {
'dns': {
'hostname': 'string'
}
}
},
'status': {
'status': 'ACTIVE'|'DELETED'|'INACTIVE'
},
'virtualNodeName': 'string'
}
}
**Response Structure**
- *(dict) --*
- **virtualNode** *(dict) --*
A full description of the virtual node that was updated.
- **meshName** *(string) --*
The name of the service mesh that the virtual node resides in.
- **metadata** *(dict) --*
The associated metadata for the virtual node.
- **arn** *(string) --*
The full Amazon Resource Name (ARN) for the resource.
- **createdAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was created.
- **lastUpdatedAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was last updated.
- **uid** *(string) --*
The unique identifier for the resource.
- **version** *(integer) --*
The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
- **spec** *(dict) --*
The specifications of the virtual node.
- **backends** *(list) --*
The backends that the virtual node is expected to send outbound traffic to.
- *(dict) --*
An object representing the backends that a virtual node is expected to send outbound traffic to.
- **virtualService** *(dict) --*
Specifies a virtual service to use as a backend for a virtual node.
- **virtualServiceName** *(string) --*
The name of the virtual service that is acting as a virtual node backend.
- **listeners** *(list) --*
The listeners that the virtual node is expected to receive inbound traffic from. Currently only one listener is supported per virtual node.
- *(dict) --*
An object representing a listener for a virtual node.
- **healthCheck** *(dict) --*
The health check information for the listener.
- **healthyThreshold** *(integer) --*
The number of consecutive successful health checks that must occur before declaring the listener healthy.
- **intervalMillis** *(integer) --*
The time period in milliseconds between each health check execution.
- **path** *(string) --*
The destination path for the health check request. This is required only if the specified protocol is HTTP. If the protocol is TCP, this parameter is ignored.
- **port** *(integer) --*
The destination port for the health check request. This port must match the port defined in the PortMapping for the listener.
- **protocol** *(string) --*
The protocol for the health check request.
- **timeoutMillis** *(integer) --*
The amount of time, in milliseconds, to wait for a response from the health check.
- **unhealthyThreshold** *(integer) --*
The number of consecutive failed health checks that must occur before declaring a virtual node unhealthy.
- **portMapping** *(dict) --*
The port mapping information for the listener.
- **port** *(integer) --*
The port used for the port mapping.
- **protocol** *(string) --*
The protocol used for the port mapping.
- **logging** *(dict) --*
The inbound and outbound access logging information for the virtual node.
- **accessLog** *(dict) --*
The access log configuration for a virtual node.
- **file** *(dict) --*
The file object to send virtual node access logs to.
- **path** *(string) --*
The file path to write access logs to. You can use ``/dev/stdout`` to send access logs to standard out and configure your Envoy container to use a log driver, such as ``awslogs`` , to export the access logs to a log storage service such as Amazon CloudWatch Logs. You can also specify a path in the Envoy container's file system to write the files to disk.
.. note::
The Envoy process must have write permissions to the path that you specify here. Otherwise, Envoy fails to bootstrap properly.
- **serviceDiscovery** *(dict) --*
The service discovery information for the virtual node. If your virtual node does not expect ingress traffic, you can omit this parameter.
- **dns** *(dict) --*
Specifies the DNS information for the virtual node.
- **hostname** *(string) --*
Specifies the DNS service discovery hostname for the virtual node.
- **status** *(dict) --*
The current status for the virtual node.
- **status** *(string) --*
The current status of the virtual node.
- **virtualNodeName** *(string) --*
The name of the virtual node.
:type clientToken: string
:param clientToken:
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 36 letters, numbers, hyphens, and underscores are allowed.
This field is autopopulated if not provided.
:type meshName: string
:param meshName: **[REQUIRED]**
The name of the service mesh that the virtual node resides in.
:type spec: dict
:param spec: **[REQUIRED]**
The new virtual node specification to apply. This overwrites the existing data.
- **backends** *(list) --*
The backends that the virtual node is expected to send outbound traffic to.
- *(dict) --*
An object representing the backends that a virtual node is expected to send outbound traffic to.
- **virtualService** *(dict) --*
Specifies a virtual service to use as a backend for a virtual node.
- **virtualServiceName** *(string) --* **[REQUIRED]**
The name of the virtual service that is acting as a virtual node backend.
- **listeners** *(list) --*
The listeners that the virtual node is expected to receive inbound traffic from. Currently only one listener is supported per virtual node.
- *(dict) --*
An object representing a listener for a virtual node.
- **healthCheck** *(dict) --*
The health check information for the listener.
- **healthyThreshold** *(integer) --* **[REQUIRED]**
The number of consecutive successful health checks that must occur before declaring the listener healthy.
- **intervalMillis** *(integer) --* **[REQUIRED]**
The time period in milliseconds between each health check execution.
- **path** *(string) --*
The destination path for the health check request. This is required only if the specified protocol is HTTP. If the protocol is TCP, this parameter is ignored.
- **port** *(integer) --*
The destination port for the health check request. This port must match the port defined in the PortMapping for the listener.
- **protocol** *(string) --* **[REQUIRED]**
The protocol for the health check request.
- **timeoutMillis** *(integer) --* **[REQUIRED]**
The amount of time, in milliseconds, to wait for a response from the health check.
- **unhealthyThreshold** *(integer) --* **[REQUIRED]**
The number of consecutive failed health checks that must occur before declaring a virtual node unhealthy.
- **portMapping** *(dict) --* **[REQUIRED]**
The port mapping information for the listener.
- **port** *(integer) --* **[REQUIRED]**
The port used for the port mapping.
- **protocol** *(string) --* **[REQUIRED]**
The protocol used for the port mapping.
- **logging** *(dict) --*
The inbound and outbound access logging information for the virtual node.
- **accessLog** *(dict) --*
The access log configuration for a virtual node.
- **file** *(dict) --*
The file object to send virtual node access logs to.
- **path** *(string) --* **[REQUIRED]**
The file path to write access logs to. You can use ``/dev/stdout`` to send access logs to standard out and configure your Envoy container to use a log driver, such as ``awslogs`` , to export the access logs to a log storage service such as Amazon CloudWatch Logs. You can also specify a path in the Envoy container's file system to write the files to disk.
.. note::
The Envoy process must have write permissions to the path that you specify here. Otherwise, Envoy fails to bootstrap properly.
- **serviceDiscovery** *(dict) --*
The service discovery information for the virtual node. If your virtual node does not expect ingress traffic, you can omit this parameter.
- **dns** *(dict) --*
Specifies the DNS information for the virtual node.
- **hostname** *(string) --* **[REQUIRED]**
Specifies the DNS service discovery hostname for the virtual node.
:type virtualNodeName: string
:param virtualNodeName: **[REQUIRED]**
The name of the virtual node to update.
:rtype: dict
:returns:
"""
pass
def update_virtual_router(self, meshName: str, spec: Dict, virtualRouterName: str, clientToken: str = None) -> Dict:
"""
Updates an existing virtual router in a specified service mesh.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/UpdateVirtualRouter>`_
**Request Syntax**
::
response = client.update_virtual_router(
clientToken='string',
meshName='string',
spec={
'listeners': [
{
'portMapping': {
'port': 123,
'protocol': 'http'|'tcp'
}
},
]
},
virtualRouterName='string'
)
**Response Syntax**
::
{
'virtualRouter': {
'meshName': 'string',
'metadata': {
'arn': 'string',
'createdAt': datetime(2015, 1, 1),
'lastUpdatedAt': datetime(2015, 1, 1),
'uid': 'string',
'version': 123
},
'spec': {
'listeners': [
{
'portMapping': {
'port': 123,
'protocol': 'http'|'tcp'
}
},
]
},
'status': {
'status': 'ACTIVE'|'DELETED'|'INACTIVE'
},
'virtualRouterName': 'string'
}
}
**Response Structure**
- *(dict) --*
- **virtualRouter** *(dict) --*
A full description of the virtual router that was updated.
- **meshName** *(string) --*
The name of the service mesh that the virtual router resides in.
- **metadata** *(dict) --*
The associated metadata for the virtual router.
- **arn** *(string) --*
The full Amazon Resource Name (ARN) for the resource.
- **createdAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was created.
- **lastUpdatedAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was last updated.
- **uid** *(string) --*
The unique identifier for the resource.
- **version** *(integer) --*
The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
- **spec** *(dict) --*
The specifications of the virtual router.
- **listeners** *(list) --*
The listeners that the virtual router is expected to receive inbound traffic from. Currently only one listener is supported per virtual router.
- *(dict) --*
An object representing a virtual router listener.
- **portMapping** *(dict) --*
An object representing a virtual node or virtual router listener port mapping.
- **port** *(integer) --*
The port used for the port mapping.
- **protocol** *(string) --*
The protocol used for the port mapping.
- **status** *(dict) --*
The current status of the virtual router.
- **status** *(string) --*
The current status of the virtual router.
- **virtualRouterName** *(string) --*
The name of the virtual router.
:type clientToken: string
:param clientToken:
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 36 letters, numbers, hyphens, and underscores are allowed.
This field is autopopulated if not provided.
:type meshName: string
:param meshName: **[REQUIRED]**
The name of the service mesh that the virtual router resides in.
:type spec: dict
:param spec: **[REQUIRED]**
The new virtual router specification to apply. This overwrites the existing data.
- **listeners** *(list) --* **[REQUIRED]**
The listeners that the virtual router is expected to receive inbound traffic from. Currently only one listener is supported per virtual router.
- *(dict) --*
An object representing a virtual router listener.
- **portMapping** *(dict) --* **[REQUIRED]**
An object representing a virtual node or virtual router listener port mapping.
- **port** *(integer) --* **[REQUIRED]**
The port used for the port mapping.
- **protocol** *(string) --* **[REQUIRED]**
The protocol used for the port mapping.
:type virtualRouterName: string
:param virtualRouterName: **[REQUIRED]**
The name of the virtual router to update.
:rtype: dict
:returns:
"""
pass
def update_virtual_service(self, meshName: str, spec: Dict, virtualServiceName: str, clientToken: str = None) -> Dict:
"""
Updates an existing virtual service in a specified service mesh.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/appmesh-2019-01-25/UpdateVirtualService>`_
**Request Syntax**
::
response = client.update_virtual_service(
clientToken='string',
meshName='string',
spec={
'provider': {
'virtualNode': {
'virtualNodeName': 'string'
},
'virtualRouter': {
'virtualRouterName': 'string'
}
}
},
virtualServiceName='string'
)
**Response Syntax**
::
{
'virtualService': {
'meshName': 'string',
'metadata': {
'arn': 'string',
'createdAt': datetime(2015, 1, 1),
'lastUpdatedAt': datetime(2015, 1, 1),
'uid': 'string',
'version': 123
},
'spec': {
'provider': {
'virtualNode': {
'virtualNodeName': 'string'
},
'virtualRouter': {
'virtualRouterName': 'string'
}
}
},
'status': {
'status': 'ACTIVE'|'DELETED'|'INACTIVE'
},
'virtualServiceName': 'string'
}
}
**Response Structure**
- *(dict) --*
- **virtualService** *(dict) --*
A full description of the virtual service that was updated.
- **meshName** *(string) --*
The name of the service mesh that the virtual service resides in.
- **metadata** *(dict) --*
An object representing metadata for a resource.
- **arn** *(string) --*
The full Amazon Resource Name (ARN) for the resource.
- **createdAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was created.
- **lastUpdatedAt** *(datetime) --*
The Unix epoch timestamp in seconds for when the resource was last updated.
- **uid** *(string) --*
The unique identifier for the resource.
- **version** *(integer) --*
The version of the resource. Resources are created at version 1, and this version is incremented each time that they're updated.
- **spec** *(dict) --*
The specifications of the virtual service.
- **provider** *(dict) --*
The App Mesh object that is acting as the provider for a virtual service. You can specify a single virtual node or virtual router.
- **virtualNode** *(dict) --*
The virtual node associated with a virtual service.
- **virtualNodeName** *(string) --*
The name of the virtual node that is acting as a service provider.
- **virtualRouter** *(dict) --*
The virtual router associated with a virtual service.
- **virtualRouterName** *(string) --*
The name of the virtual router that is acting as a service provider.
- **status** *(dict) --*
The current status of the virtual service.
- **status** *(string) --*
The current status of the virtual service.
- **virtualServiceName** *(string) --*
The name of the virtual service.
:type clientToken: string
:param clientToken:
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. Up to 36 letters, numbers, hyphens, and underscores are allowed.
This field is autopopulated if not provided.
:type meshName: string
:param meshName: **[REQUIRED]**
The name of the service mesh that the virtual service resides in.
:type spec: dict
:param spec: **[REQUIRED]**
The new virtual service specification to apply. This overwrites the existing data.
- **provider** *(dict) --*
The App Mesh object that is acting as the provider for a virtual service. You can specify a single virtual node or virtual router.
- **virtualNode** *(dict) --*
The virtual node associated with a virtual service.
- **virtualNodeName** *(string) --* **[REQUIRED]**
The name of the virtual node that is acting as a service provider.
- **virtualRouter** *(dict) --*
The virtual router associated with a virtual service.
- **virtualRouterName** *(string) --* **[REQUIRED]**
The name of the virtual router that is acting as a service provider.
:type virtualServiceName: string
:param virtualServiceName: **[REQUIRED]**
The name of the virtual service to update.
:rtype: dict
:returns:
"""
pass
| 53.426658 | 555 | 0.494688 | 15,774 | 167,546 | 5.243248 | 0.033092 | 0.026358 | 0.011317 | 0.015089 | 0.932642 | 0.916875 | 0.900758 | 0.892524 | 0.880603 | 0.871462 | 0 | 0.009319 | 0.413319 | 167,546 | 3,135 | 556 | 53.4437 | 0.832089 | 0.83102 | 0 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.444444 | false | 0.444444 | 0.097222 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 10 |
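The parameter documentation above (required `meshName`, `spec` with a single `virtualNode` or `virtualRouter` provider, required `virtualServiceName`, optional idempotency `clientToken`) can be assembled into a request payload before calling boto3. A minimal sketch; the helper name and the mesh/service/node names are placeholders of our own choosing, not real resources:

```python
# Sketch of assembling an UpdateVirtualService request matching the
# parameter docs above. Names below are illustrative placeholders.

def build_update_virtual_service_request(mesh_name, virtual_service_name,
                                         virtual_node_name=None,
                                         virtual_router_name=None,
                                         client_token=None):
    """Build kwargs for appmesh.update_virtual_service().

    Exactly one provider (virtual node or virtual router) may be set.
    """
    if (virtual_node_name is None) == (virtual_router_name is None):
        raise ValueError("specify exactly one of virtual node / virtual router")
    if virtual_node_name is not None:
        provider = {'virtualNode': {'virtualNodeName': virtual_node_name}}
    else:
        provider = {'virtualRouter': {'virtualRouterName': virtual_router_name}}
    request = {
        'meshName': mesh_name,                       # [REQUIRED]
        'spec': {'provider': provider},              # [REQUIRED] overwrites existing data
        'virtualServiceName': virtual_service_name,  # [REQUIRED]
    }
    if client_token is not None:                     # optional; autopopulated if omitted
        request['clientToken'] = client_token
    return request

req = build_update_virtual_service_request(
    'demo-mesh', 'svc-a.demo.local', virtual_node_name='svc-a-node')
# The dict would then be passed as: client.update_virtual_service(**req)
```

Building the payload separately makes the "exactly one provider" constraint explicit before the network call.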
5fb9a70d1227d97cfee21a0859c864326ebd39cf | 1,164 | py | Python | ChkUsrInputX.py | Epikarsios/MFC_Master | 663a80446649eeee3dc29386b2a96cfef812d188 | [
"MIT"
] | null | null | null | ChkUsrInputX.py | Epikarsios/MFC_Master | 663a80446649eeee3dc29386b2a96cfef812d188 | [
"MIT"
] | null | null | null | ChkUsrInputX.py | Epikarsios/MFC_Master | 663a80446649eeee3dc29386b2a96cfef812d188 | [
"MIT"
] | null | null | null |
def chkUsrNumSetPoint(str_Cmd):
    allowed_Chars = set('0123456789.')
    if set(str_Cmd).issubset(allowed_Chars) and str_Cmd:
        numDecPoint = str_Cmd.count('.')
        if numDecPoint > 1:
            print('Too many decimal points.')
            return False
        else:
            if (float(str_Cmd) > 0.0009 and float(str_Cmd) < 10) or float(str_Cmd) == 0:
                print('Valid number')
                return True
            else:
                print('Out Of Range')
    else:
        print('Not a valid number')
    return False


def chkUsrNumMole(str_Cmd):
    allowed_Chars = set('0123456789.')
    if set(str_Cmd).issubset(allowed_Chars) and str_Cmd:
        numDecPoint = str_Cmd.count('.')
        if numDecPoint > 1:
            print('Too many decimal points.')
            return False
        else:
            if (float(str_Cmd) > 0.001 and float(str_Cmd) < 1000) or float(str_Cmd) == 0:
                print('Valid number')
                return True
            else:
                print('Out Of Range')
    else:
        print('Not a valid number')
    return False
| 24.25 | 94 | 0.519759 | 134 | 1,164 | 4.380597 | 0.283582 | 0.143101 | 0.112436 | 0.081772 | 0.868825 | 0.868825 | 0.868825 | 0.868825 | 0.868825 | 0.868825 | 0 | 0.054393 | 0.384021 | 1,164 | 47 | 95 | 24.765957 | 0.764296 | 0 | 0 | 0.875 | 0 | 0 | 0.134251 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0 | 0 | 0.25 | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
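The two validators above share one rule set: a digit/dot character whitelist, at most one decimal point, and a numeric range (or exactly zero). A compact re-sketch of the setpoint variant; the function name here is illustrative, and a guard is added for a lone `.`, which the charset check alone would let through to `float()`:

```python
# Compact re-sketch of the setpoint validator: digits and at most one '.',
# value either 0 or strictly inside (0.0009, 10).

def is_valid_setpoint(text):
    if not text or not set(text).issubset(set('0123456789.')):
        return False              # empty, or contains a disallowed character
    if text.count('.') > 1 or text == '.':
        return False              # not a parseable number
    value = float(text)
    return value == 0 or 0.0009 < value < 10

print(is_valid_setpoint('3.5'))    # True
print(is_valid_setpoint('1.2.3'))  # False
print(is_valid_setpoint('12'))     # False
```

Returning a boolean instead of printing inline messages keeps the range rule reusable for the mole variant by swapping the bounds.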
395de470c994af57ab51c35a0e2ad5e25892ed5b | 115 | py | Python | soane/items/__init__.py | spheten/soane | b5517275b8b3fd3b2b5a19b031c98cfd45d42292 | [
"BSD-3-Clause"
] | 1 | 2021-10-03T07:13:55.000Z | 2021-10-03T07:13:55.000Z | soane/items/__init__.py | spheten/soane | b5517275b8b3fd3b2b5a19b031c98cfd45d42292 | [
"BSD-3-Clause"
] | 14 | 2021-10-03T07:10:10.000Z | 2021-10-06T09:07:41.000Z | soane/items/__init__.py | spheten/soane | b5517275b8b3fd3b2b5a19b031c98cfd45d42292 | [
"BSD-3-Clause"
] | null | null | null | '''
Package definition for 'soane.items'.
'''
from soane.items.book import Book
from soane.items.note import Note
| 16.428571 | 37 | 0.747826 | 17 | 115 | 5.058824 | 0.529412 | 0.348837 | 0.325581 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 115 | 6 | 38 | 19.166667 | 0.86 | 0.321739 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
396eda635a05fced7862c22c71a8d525ab8c9b84 | 9,450 | py | Python | pm/models/coach_mobility_standing.py | aosojnik/pipeline-manager-features-models | e5232f1c1b2073253a1c505dc9fff0d3d839dd6c | [
"MIT"
] | null | null | null | pm/models/coach_mobility_standing.py | aosojnik/pipeline-manager-features-models | e5232f1c1b2073253a1c505dc9fff0d3d839dd6c | [
"MIT"
] | null | null | null | pm/models/coach_mobility_standing.py | aosojnik/pipeline-manager-features-models | e5232f1c1b2073253a1c505dc9fff0d3d839dd6c | [
"MIT"
] | null | null | null | ##################################################################################
##########--this is an autogenerated python model definition for proDEX--#########
##--original file: coach_mobility_standing_v04_forprodex.dxi --##
##################################################################################
from .lib.proDEX import *
coach_mobility_standing = Node()
Relative_change = Node()
situ_mobility_standing = Atrib()
situ_mobility_standing_predicted = Atrib()
coach_mobility_standing.setName('coach_mobility_standing')
Relative_change.setName('Relative_change')
situ_mobility_standing.setName('situ_mobility_standing')
situ_mobility_standing_predicted.setName('situ_mobility_standing_predicted')
coach_mobility_standing.setValues(['negative_message_to_PU_or_SU', 'positive_message_to_PU_or_SU', 'no_action'])
Relative_change.setValues(['big_drop', 'medium_drop', 'small_drop', 'no_change', 'small_improvement', 'medium_improvement', 'big_improvement'])
situ_mobility_standing.setValues(['very_low', 'low', 'medium', 'high', 'very_high'])
situ_mobility_standing_predicted.setValues(['very_low', 'low', 'medium', 'high', 'very_high'])
coach_mobility_standing.addChild(situ_mobility_standing)
situ_mobility_standing.setParent(coach_mobility_standing)
coach_mobility_standing.addChild(Relative_change)
Relative_change.setParent(coach_mobility_standing)
Relative_change.addChild(situ_mobility_standing)
situ_mobility_standing.setParent(Relative_change)
Relative_change.addChild(situ_mobility_standing_predicted)
situ_mobility_standing_predicted.setParent(Relative_change)
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'very_low', Relative_change:'big_drop'}, 'negative_message_to_PU_or_SU'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'very_low', Relative_change:'medium_drop'}, 'negative_message_to_PU_or_SU'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'very_low', Relative_change:'small_drop'}, 'negative_message_to_PU_or_SU'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'very_low', Relative_change:'no_change'}, 'no_action'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'very_low', Relative_change:'small_improvement'}, 'no_action'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'very_low', Relative_change:'medium_improvement'}, 'no_action'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'very_low', Relative_change:'big_improvement'}, 'no_action'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'low', Relative_change:'big_drop'}, 'negative_message_to_PU_or_SU'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'low', Relative_change:'medium_drop'}, 'negative_message_to_PU_or_SU'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'low', Relative_change:'small_drop'}, 'negative_message_to_PU_or_SU'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'low', Relative_change:'no_change'}, 'no_action'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'low', Relative_change:'small_improvement'}, 'positive_message_to_PU_or_SU'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'low', Relative_change:'medium_improvement'}, 'positive_message_to_PU_or_SU'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'low', Relative_change:'big_improvement'}, 'positive_message_to_PU_or_SU'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'medium', Relative_change:'big_drop'}, 'negative_message_to_PU_or_SU'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'medium', Relative_change:'medium_drop'}, 'negative_message_to_PU_or_SU'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'medium', Relative_change:'small_drop'}, 'negative_message_to_PU_or_SU'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'medium', Relative_change:'no_change'}, 'no_action'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'medium', Relative_change:'small_improvement'}, 'positive_message_to_PU_or_SU'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'medium', Relative_change:'medium_improvement'}, 'positive_message_to_PU_or_SU'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'medium', Relative_change:'big_improvement'}, 'positive_message_to_PU_or_SU'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'high', Relative_change:'big_drop'}, 'negative_message_to_PU_or_SU'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'high', Relative_change:'medium_drop'}, 'negative_message_to_PU_or_SU'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'high', Relative_change:'small_drop'}, 'negative_message_to_PU_or_SU'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'high', Relative_change:'no_change'}, 'no_action'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'high', Relative_change:'small_improvement'}, 'positive_message_to_PU_or_SU'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'high', Relative_change:'medium_improvement'}, 'positive_message_to_PU_or_SU'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'high', Relative_change:'big_improvement'}, 'positive_message_to_PU_or_SU'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'very_high', Relative_change:'big_drop'}, 'no_action'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'very_high', Relative_change:'medium_drop'}, 'no_action'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'very_high', Relative_change:'small_drop'}, 'no_action'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'very_high', Relative_change:'no_change'}, 'no_action'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'very_high', Relative_change:'small_improvement'}, 'no_action'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'very_high', Relative_change:'medium_improvement'}, 'positive_message_to_PU_or_SU'])
coach_mobility_standing.addFunctionRow([{situ_mobility_standing:'very_high', Relative_change:'big_improvement'}, 'positive_message_to_PU_or_SU'])
Relative_change.addFunctionRow([{situ_mobility_standing:'very_low', situ_mobility_standing_predicted:'very_low'}, 'no_change'])
Relative_change.addFunctionRow([{situ_mobility_standing:'very_low', situ_mobility_standing_predicted:'low'}, 'small_drop'])
Relative_change.addFunctionRow([{situ_mobility_standing:'very_low', situ_mobility_standing_predicted:'medium'}, 'big_drop'])
Relative_change.addFunctionRow([{situ_mobility_standing:'very_low', situ_mobility_standing_predicted:'high'}, 'big_drop'])
Relative_change.addFunctionRow([{situ_mobility_standing:'very_low', situ_mobility_standing_predicted:'very_high'}, 'big_drop'])
Relative_change.addFunctionRow([{situ_mobility_standing:'low', situ_mobility_standing_predicted:'very_low'}, 'small_improvement'])
Relative_change.addFunctionRow([{situ_mobility_standing:'low', situ_mobility_standing_predicted:'low'}, 'no_change'])
Relative_change.addFunctionRow([{situ_mobility_standing:'low', situ_mobility_standing_predicted:'medium'}, 'small_drop'])
Relative_change.addFunctionRow([{situ_mobility_standing:'low', situ_mobility_standing_predicted:'high'}, 'big_drop'])
Relative_change.addFunctionRow([{situ_mobility_standing:'low', situ_mobility_standing_predicted:'very_high'}, 'big_drop'])
Relative_change.addFunctionRow([{situ_mobility_standing:'medium', situ_mobility_standing_predicted:'very_low'}, 'big_improvement'])
Relative_change.addFunctionRow([{situ_mobility_standing:'medium', situ_mobility_standing_predicted:'low'}, 'small_improvement'])
Relative_change.addFunctionRow([{situ_mobility_standing:'medium', situ_mobility_standing_predicted:'medium'}, 'no_change'])
Relative_change.addFunctionRow([{situ_mobility_standing:'medium', situ_mobility_standing_predicted:'high'}, 'small_drop'])
Relative_change.addFunctionRow([{situ_mobility_standing:'medium', situ_mobility_standing_predicted:'very_high'}, 'big_drop'])
Relative_change.addFunctionRow([{situ_mobility_standing:'high', situ_mobility_standing_predicted:'very_low'}, 'big_improvement'])
Relative_change.addFunctionRow([{situ_mobility_standing:'high', situ_mobility_standing_predicted:'low'}, 'big_improvement'])
Relative_change.addFunctionRow([{situ_mobility_standing:'high', situ_mobility_standing_predicted:'medium'}, 'small_improvement'])
Relative_change.addFunctionRow([{situ_mobility_standing:'high', situ_mobility_standing_predicted:'high'}, 'no_change'])
Relative_change.addFunctionRow([{situ_mobility_standing:'high', situ_mobility_standing_predicted:'very_high'}, 'small_drop'])
Relative_change.addFunctionRow([{situ_mobility_standing:'very_high', situ_mobility_standing_predicted:'very_low'}, 'big_improvement'])
Relative_change.addFunctionRow([{situ_mobility_standing:'very_high', situ_mobility_standing_predicted:'low'}, 'big_improvement'])
Relative_change.addFunctionRow([{situ_mobility_standing:'very_high', situ_mobility_standing_predicted:'medium'}, 'small_improvement'])
Relative_change.addFunctionRow([{situ_mobility_standing:'very_high', situ_mobility_standing_predicted:'high'}, 'small_improvement'])
Relative_change.addFunctionRow([{situ_mobility_standing:'very_high', situ_mobility_standing_predicted:'very_high'}, 'no_change'])
| 100.531915 | 148 | 0.83037 | 1,141 | 9,450 | 6.323401 | 0.042068 | 0.317117 | 0.274428 | 0.282744 | 0.903812 | 0.895218 | 0.86833 | 0.867775 | 0.838808 | 0.828274 | 0 | 0.000217 | 0.026561 | 9,450 | 93 | 149 | 101.612903 | 0.784107 | 0.013122 | 0 | 0 | 1 | 0 | 0.251696 | 0.08503 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.012346 | 0 | 0.012346 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
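The two decision tables above compose: `(situ_mobility_standing, predicted)` is first mapped to a `Relative_change`, which together with the current level selects the coaching action. A dependency-free sketch of that two-stage lookup, reproducing only a four-row excerpt of each table straight from the rows above (the full tables live in the proDEX model, not here):

```python
# Excerpt of the Relative_change table: (current level, predicted level).
RELATIVE_CHANGE = {
    ('medium', 'medium'): 'no_change',
    ('medium', 'high'): 'small_drop',
    ('medium', 'low'): 'small_improvement',
    ('low', 'high'): 'big_drop',
}

# Excerpt of the coach_mobility_standing table: (current level, change).
COACH_ACTION = {
    ('medium', 'no_change'): 'no_action',
    ('medium', 'small_drop'): 'negative_message_to_PU_or_SU',
    ('medium', 'small_improvement'): 'positive_message_to_PU_or_SU',
    ('low', 'big_drop'): 'negative_message_to_PU_or_SU',
}

def coach(situ, predicted):
    """Two-stage lookup mirroring the model's parent/child structure."""
    change = RELATIVE_CHANGE[(situ, predicted)]
    return COACH_ACTION[(situ, change)]

print(coach('medium', 'high'))  # negative_message_to_PU_or_SU
```

Note the full tables are not a pure function of the ordinal difference between levels (e.g. a two-step gap maps to different changes depending on the current level), which is why the model enumerates every row explicitly.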
39ec6604cd7e9f6141d01e4dae04e56601492418 | 2,052 | py | Python | visualization_tools/test_wrap2pi.py | rafaelbarretorb/trajectory_tracking_control | 6203577568e5c7b128b36604ccc5e279069ad5e2 | [
"MIT"
] | 11 | 2020-05-22T04:44:16.000Z | 2022-03-29T10:49:07.000Z | visualization_tools/test_wrap2pi.py | rafaelbarretorb/trajectory_tracking_control | 6203577568e5c7b128b36604ccc5e279069ad5e2 | [
"MIT"
] | 15 | 2021-11-26T23:37:56.000Z | 2021-11-28T20:48:07.000Z | visualization_tools/test_wrap2pi.py | rafaelbarretorb/trajectory_tracking_control | 6203577568e5c7b128b36604ccc5e279069ad5e2 | [
"MIT"
] | 2 | 2021-08-16T01:16:36.000Z | 2021-09-22T08:13:06.000Z |
import unittest
import numpy as np
from linear_controller import wrapToPi, wrapToPi2
PI = np.pi
class TestWrap2Pi(unittest.TestCase):
    def test1(self):
        self.assertAlmostEqual(wrapToPi(np.radians(360)), np.radians(0.0), places=2)  # x axis positive
        self.assertAlmostEqual(wrapToPi(np.radians(-270)), np.radians(90.0), places=2)  # y axis positive
        self.assertAlmostEqual(wrapToPi(np.radians(-540)), np.radians(-180.0), places=2)  # x axis negative
        self.assertAlmostEqual(wrapToPi(np.radians(270)), np.radians(-90.0), places=2)  # y axis negative
        self.assertAlmostEqual(wrapToPi(np.radians(-270 - 45)), np.radians(45), places=2)  # 1st quadrant.
        self.assertAlmostEqual(wrapToPi(np.radians(-180 - 45)), np.radians(90 + 45), places=2)  # 2nd quadrant.
        self.assertAlmostEqual(wrapToPi(np.radians(180 + 45)), np.radians(-180 + 45), places=2)  # 3rd quadrant.
        self.assertAlmostEqual(wrapToPi(np.radians(270 + 45)), np.radians(-90 + 45), places=2)  # 4th quadrant.

    def test2(self):
        self.assertAlmostEqual(wrapToPi2(np.radians(270)), np.radians(-90.0), places=2)
        self.assertAlmostEqual(wrapToPi2(np.radians(360)), np.radians(0.0), places=2)  # x axis positive
        self.assertAlmostEqual(wrapToPi2(np.radians(-270)), np.radians(90.0), places=2)  # y axis positive
        self.assertAlmostEqual(wrapToPi2(np.radians(-540)), np.radians(-180.0), places=2)  # x axis negative
        self.assertAlmostEqual(wrapToPi2(np.radians(270)), np.radians(-90.0), places=2)  # y axis negative
        self.assertAlmostEqual(wrapToPi2(np.radians(-270 - 45)), np.radians(45), places=2)  # 1st quadrant.
        self.assertAlmostEqual(wrapToPi2(np.radians(-180 - 45)), np.radians(90 + 45), places=2)  # 2nd quadrant.
        self.assertAlmostEqual(wrapToPi2(np.radians(180 + 45)), np.radians(-180 + 45), places=2)  # 3rd quadrant.
        self.assertAlmostEqual(wrapToPi2(np.radians(270 + 45)), np.radians(-90 + 45), places=2)  # 4th quadrant.


if __name__ == '__main__':
    unittest.main()
| 58.628571 | 110 | 0.68616 | 280 | 2,052 | 4.996429 | 0.15 | 0.218728 | 0.051465 | 0.205861 | 0.888492 | 0.846319 | 0.846319 | 0.834167 | 0.834167 | 0.809864 | 0 | 0.093588 | 0.156433 | 2,052 | 34 | 111 | 60.352941 | 0.714616 | 0.116472 | 0 | 0.076923 | 0 | 0 | 0.004459 | 0 | 0 | 0 | 0 | 0 | 0.653846 | 1 | 0.076923 | false | 0 | 0.115385 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
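Taken together, the expected values in these tests pin down the wrapping contract: any angle in radians is mapped into [-pi, pi). A common modulo formula reproduces every case listed above; the actual `linear_controller` implementation may differ, so this is only a sketch of a function satisfying the same tests:

```python
import math

def wrap_to_pi(angle):
    """Map any angle in radians into the interval [-pi, pi)."""
    # Shift by pi, wrap into [0, 2*pi) with Python's floored modulo, shift back.
    return (angle + math.pi) % (2 * math.pi) - math.pi

print(round(math.degrees(wrap_to_pi(math.radians(-270))), 1))  # 90.0
print(round(math.degrees(wrap_to_pi(math.radians(270))), 1))   # -90.0
```

Python's `%` returns a non-negative result for a positive modulus, which is what makes the single expression handle negative inputs like -540 degrees correctly.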
844e3ba99659d75528532b6e17bc37aa61216569 | 238 | py | Python | python/__init__.py | dhohn/vbfcprw | 6378b1ae9d968ec4049361b17f192daebb2a7577 | [
"MIT"
] | null | null | null | python/__init__.py | dhohn/vbfcprw | 6378b1ae9d968ec4049361b17f192daebb2a7577 | [
"MIT"
] | null | null | null | python/__init__.py | dhohn/vbfcprw | 6378b1ae9d968ec4049361b17f192daebb2a7577 | [
"MIT"
] | null | null | null | # make class and functions available in this namespace
#class
from ROOT import OptObsEventStore
#functions
from ROOT.HLeptonsCPRW import getOptObs
from ROOT.HLeptonsCPRW import getReweight
from ROOT.HLeptonsCPRW import getWeightsDtilde
| 23.8 | 54 | 0.852941 | 29 | 238 | 7 | 0.551724 | 0.157635 | 0.295567 | 0.384236 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121849 | 238 | 9 | 55 | 26.444444 | 0.971292 | 0.277311 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
ffd11cc41504c9d074e99f4f76f1758bbedaf92f | 314 | py | Python | word_embedding_loader/__init__.py | Emekaborisama/word_embedding_loader | 5b0fd435360d335341dc111bbc52869bb2731422 | [
"MIT"
] | 4 | 2017-11-17T22:03:37.000Z | 2018-06-26T08:50:27.000Z | word_embedding_loader/__init__.py | Emekaborisama/word_embedding_loader | 5b0fd435360d335341dc111bbc52869bb2731422 | [
"MIT"
] | 7 | 2017-08-09T12:51:00.000Z | 2018-06-29T19:12:11.000Z | word_embedding_loader/__init__.py | Emekaborisama/word_embedding_loader | 5b0fd435360d335341dc111bbc52869bb2731422 | [
"MIT"
] | 2 | 2018-06-26T09:06:31.000Z | 2020-06-13T16:05:05.000Z | # -*- coding: utf-8 -*-
from __future__ import absolute_import, division, print_function, \
unicode_literals
from word_embedding_loader._version import __version__
from word_embedding_loader.exceptions import ParseError, ParseWarning, parse_warn
from word_embedding_loader.word_embedding import WordEmbedding
| 39.25 | 81 | 0.840764 | 38 | 314 | 6.421053 | 0.578947 | 0.213115 | 0.209016 | 0.282787 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003546 | 0.101911 | 314 | 7 | 82 | 44.857143 | 0.861702 | 0.066879 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.8 | 0 | 0.8 | 0.2 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
081a17d8af79628412cfb1bdc63d657e169bd27f | 8,865 | py | Python | Dockerized_Apps/taxii-client/src/taxii/easy_taxii_print.py | CanadianInstituteForCybersecurity/cic-exchange-model | 1bb4f3ed51252cf152c2279cbeae75ca607b6fca | [
"MIT"
] | null | null | null | Dockerized_Apps/taxii-client/src/taxii/easy_taxii_print.py | CanadianInstituteForCybersecurity/cic-exchange-model | 1bb4f3ed51252cf152c2279cbeae75ca607b6fca | [
"MIT"
] | null | null | null | Dockerized_Apps/taxii-client/src/taxii/easy_taxii_print.py | CanadianInstituteForCybersecurity/cic-exchange-model | 1bb4f3ed51252cf152c2279cbeae75ca607b6fca | [
"MIT"
] | null | null | null | import pprint
from taxii2client.v21 import Server, ApiRoot, Collection, as_pages
def print_api_roots(path, username=None, user_password=None):
    """Prints all api roots in the server
    username and password requirements are dependent on the server"""
    if not (username and user_password):
        server1 = Server(url=path)
    else:
        server1 = Server(url=path, user=username, password=user_password)
    print("Server Title: {}".format(server1.title))
    print("Server Description: {}".format(server1.description))
    print("Server Contact: {}".format(server1.contact))
    print("Server Default API: {}\n".format(server1.default.url))
    for api_root in server1.api_roots:
        print("API Title: " + api_root.title)
        print("API Description: {}".format(api_root.description))
        print("API Versions: {}".format(api_root.versions))
        print("API required length: {}".format(api_root.max_content_length))
        print("API URL: {} \n".format(api_root.url))
def print_single_api_root(api_path, username=None, user_password=None):
    """Prints an api root in the server
    username and password requirements are dependent on the server"""
    if not (username and user_password):
        api_root = ApiRoot(url=api_path)
    else:
        api_root = ApiRoot(url=api_path, user=username, password=user_password)
    print("API Title: " + api_root.title)
    print("API Description: {}".format(api_root.description))
    print("API Versions: {}".format(api_root.versions))
    print("API required length: {}".format(api_root.max_content_length))
    print("API URL: {} \n".format(api_root.url))
def print_collections_per_api_root(path, username=None, user_password=None):
    """Prints all collections within all api roots in the server
    username and password requirements are dependent on the server"""
    if not (username and user_password):
        server1 = Server(url=path)
    else:
        server1 = Server(url=path, user=username, password=user_password)
    for api_root in server1.api_roots:
        print("API Root: {}".format(api_root.title))
        for collection in api_root.collections:
            print("\tTitle: " + collection.title)
            print("\tID: " + collection.id)
            print("\tDescription: {}".format(collection.description))
            print("\tReadable: {}".format(collection.can_read))
            print("\tWriteable: {}".format(collection.can_write))
            print("\tMedia Types: {}".format(collection.media_types))
            print("\tURL: {}".format(collection.url))
            for key, values in collection.custom_properties.items():
                print("\t{}: {}".format(key, values))
            print("\n")
def print_single_collection(collection_path, username=None, user_password=None):
    """Prints a collection within an api root in the server
    username and password requirements are dependent on the server"""
    if not (username and user_password):
        collection = Collection(url=collection_path)
    else:
        collection = Collection(url=collection_path, user=username, password=user_password)
    print("\tTitle: " + collection.title)
    print("\tID: " + collection.id)
    print("\tDescription: {}".format(collection.description))
    print("\tReadable: {}".format(collection.can_read))
    print("\tWriteable: {}".format(collection.can_write))
    print("\tMedia Types: {}".format(collection.media_types))
    print("\tURL: {}\n".format(collection.url))
def print_bundles_per_collections_per_api_root(path, username=None, user_password=None):
    """Prints all bundles within each collection within each api root in the server"""
    if not (username and user_password):
        server1 = Server(url=path)
    else:
        server1 = Server(url=path, user=username, password=user_password)
    for api_root in server1.api_roots:
        print("API Root: {}".format(api_root.title))
        for collection in api_root.collections:
            print("\tTitle: " + collection.title)
            print("\tID: " + collection.id)
            if collection.can_read:
                bundle = collection.get_objects()
                pprint.pprint(bundle)
            else:
                print("\tCannot access")
def print_bundles_per_single_collection(collection_path, username=None, user_password=None):
    """Prints all bundles within a collection in the server
    username and password requirements are dependent on the server"""
    if not (username and user_password):
        collection = Collection(url=collection_path)
    else:
        collection = Collection(url=collection_path, user=username, password=user_password)
    print("\tTitle: " + collection.title)
    print("\tID: " + collection.id)
    print("\tDescription: {}".format(collection.description))
    print("\tReadable: {}".format(collection.can_read))
    print("\tWriteable: {}".format(collection.can_write))
    print("\tMedia Types: {}".format(collection.media_types))
    print("\tURL: {}\n".format(collection.url))
    if collection.can_read:
        bundle = collection.get_objects()
        pprint.pprint(bundle)
    else:
        print("\tCannot access")
def print_single_object(collection_path, object_id, username=None, user_password=None):
    """Prints a single object in a collection
    username and password requirements are dependent on the server"""
    if not (username and user_password):
        collection = Collection(url=collection_path)
    else:
        collection = Collection(url=collection_path, user=username, password=user_password)
    bundle = collection.get_object(object_id)
    print("\tTitle: " + collection.title)
    print("\tID: " + collection.id)
    print("\t===============Object=============")
    pprint.pprint(bundle)
    object_list = bundle["objects"]
    for obj in object_list:
        print(obj)
def print_manifest_per_collections_per_api_root(path, username=None, user_password=None):
    """Prints all manifests within each collection within each api root in the server
    username and password requirements are dependent on the server"""
    if not (username and user_password):
        server1 = Server(url=path)
    else:
        server1 = Server(url=path, user=username, password=user_password)
    for api_root in server1.api_roots:
        print("API Root: {}".format(api_root.title))
        for collection in api_root.collections:
            print("\tTitle: " + collection.title)
            print("\tID: " + collection.id)
            if collection.can_read:
                try:
                    manifest_resource = collection.get_manifest()
                    pprint.pprint(manifest_resource)
                except Exception:  # some collections expose no manifest
                    pass
            else:
                print("\tCannot access")
def print_manifest_per_single_collection(collection_path, username=None, user_password=None):
    """Prints all manifests within a collection
    username and password requirements are dependent on the server"""
    if not (username and user_password):
        collection = Collection(url=collection_path)
    else:
        collection = Collection(url=collection_path, user=username, password=user_password)
    print("\tTitle: " + collection.title)
    print("\tID: " + collection.id)
    print("\tDescription: {}".format(collection.description))
    print("\tReadable: {}".format(collection.can_read))
    print("\tWriteable: {}".format(collection.can_write))
    print("\tMedia Types: {}".format(collection.media_types))
    print("\tURL: {}\n".format(collection.url))
    if collection.can_read:
        bundle = collection.get_manifest()
        pprint.pprint(bundle)
    else:
        print("\tCannot access")
def copy_object_to_collection(from_collection_path, to_collection_path, object_id,
                              from_username=None, from_user_password=None,
                              to_username=None, to_user_password=None):
    """copies bundle from one collection to another collection
    username and password requirements are dependent on the server"""
    if not (from_username and from_user_password):
        from_collection = Collection(url=from_collection_path)
    else:
        from_collection = Collection(url=from_collection_path, user=from_username,
                                     password=from_user_password)
    if not (to_username and to_user_password):
        to_collection = Collection(url=to_collection_path)
    else:
        to_collection = Collection(url=to_collection_path, user=to_username,
                                   password=to_user_password)
    bundle = from_collection.get_object(object_id)
    to_collection.add_objects(bundle)
    print("========Successful===========")
    to_collection.get_object(object_id)  # fetch back by id to confirm the copy
    # try:
    #     bundle = from_collection.get_object(object_id)
    #     to_collection.add_objects(bundle)
    #     print("========Successful===========")
    #     to_collection.get_object(bundle)
    # except:
# print("Addition Failed") | 43.033981 | 169 | 0.679075 | 1,063 | 8,865 | 5.489182 | 0.09031 | 0.06581 | 0.047301 | 0.023993 | 0.832562 | 0.819709 | 0.798286 | 0.7491 | 0.735219 | 0.721851 | 0 | 0.002683 | 0.201241 | 8,865 | 206 | 170 | 43.033981 | 0.821353 | 0.147095 | 0 | 0.713333 | 0 | 0 | 0.110204 | 0.008704 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0.213333 | 0.013333 | 0 | 0.08 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 9 |
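The copy helper above boils down to: read a bundle out of one collection, add it to another, then read it back. A dependency-free sketch of that flow using stand-in collections so it runs without a live TAXII server; `FakeCollection` is an illustrative stub keyed by STIX object id, not part of taxii2client:

```python
# Stand-in for taxii2client.Collection, modeling only the two calls the
# copy helper uses: get_object(id) and add_objects(bundle).
class FakeCollection:
    def __init__(self):
        self._objects = {}

    def get_object(self, object_id):
        # Real servers return an envelope/bundle wrapping the object.
        return {'objects': [self._objects[object_id]]}

    def add_objects(self, bundle):
        for obj in bundle['objects']:
            self._objects[obj['id']] = obj

src, dst = FakeCollection(), FakeCollection()
src.add_objects({'objects': [{'id': 'indicator--1', 'type': 'indicator'}]})

bundle = src.get_object('indicator--1')   # read from the source collection
dst.add_objects(bundle)                   # write into the destination
print(dst.get_object('indicator--1')['objects'][0]['type'])  # indicator
```

The same three-call shape is what `copy_object_to_collection` performs against real endpoints, with authentication handled by the `Collection` constructor.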
08289650a721b2511b611234286658166242db65 | 206 | py | Python | xsd2tkform/core/list.py | Dolgalad/xsd2tkform | 27f0d5bd1d9b6816982c18c45323ff7d1191efca | [
"MIT"
] | null | null | null | xsd2tkform/core/list.py | Dolgalad/xsd2tkform | 27f0d5bd1d9b6816982c18c45323ff7d1191efca | [
"MIT"
] | null | null | null | xsd2tkform/core/list.py | Dolgalad/xsd2tkform | 27f0d5bd1d9b6816982c18c45323ff7d1191efca | [
"MIT"
] | null | null | null | """Simple type value list definition
"""
class List:
def __init__(self, item_type=None):
self.item_type = None
def __str__(self):
return "List(item_type={})".format(self.item_type)
| 22.888889 | 58 | 0.65534 | 28 | 206 | 4.392857 | 0.5 | 0.260163 | 0.292683 | 0.260163 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.208738 | 206 | 8 | 59 | 25.75 | 0.754601 | 0.160194 | 0 | 0 | 0 | 0 | 0.108434 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 7 |
f27a763c0e260694798be75fea4d592cfd53d55c | 38,379 | py | Python | nova/tests/unit/api/openstack/compute/test_multiple_create.py | bopopescu/nova-token | ec98f69dea7b3e2b9013b27fd55a2c1a1ac6bfb2 | [
"Apache-2.0"
] | null | null | null | nova/tests/unit/api/openstack/compute/test_multiple_create.py | bopopescu/nova-token | ec98f69dea7b3e2b9013b27fd55a2c1a1ac6bfb2 | [
"Apache-2.0"
] | null | null | null | nova/tests/unit/api/openstack/compute/test_multiple_create.py | bopopescu/nova-token | ec98f69dea7b3e2b9013b27fd55a2c1a1ac6bfb2 | [
"Apache-2.0"
] | 2 | 2017-07-20T17:31:34.000Z | 2020-07-24T02:42:19.000Z | begin_unit
# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import datetime

import webob

from nova.api.openstack.compute import block_device_mapping as \
    block_device_mapping_v21
from nova.api.openstack.compute import extension_info
from nova.api.openstack.compute import multiple_create as multiple_create_v21
from nova.api.openstack.compute import servers as servers_v21
from nova.compute import api as compute_api
from nova.compute import flavors
import nova.conf
from nova import exception
from nova.network import manager
from nova import test
from nova.tests.unit.api.openstack import fakes
from nova.tests.unit import fake_instance
from nova.tests.unit.image import fake

CONF = nova.conf.CONF


def return_security_group(context, instance_id, security_group_id):
    pass


class MultiCreateExtensionTestV21(test.TestCase):
    validation_error = exception.ValidationError

    def setUp(self):
        """Shared implementation for tests below that create instance."""
        super(MultiCreateExtensionTestV21, self).setUp()

        self.flags(verbose=True,
                   enable_instance_password=True)
        self.instance_cache_num = 0
        self.instance_cache_by_id = {}
        self.instance_cache_by_uuid = {}

        ext_info = extension_info.LoadedExtensionInfo()
        self.controller = servers_v21.ServersController(
            extension_info=ext_info)
        CONF.set_override('extensions_blacklist', 'os-multiple-create',
                          'osapi_v21')
        self.no_mult_create_controller = servers_v21.ServersController(
            extension_info=ext_info)

        def instance_create(context, inst):
            inst_type = flavors.get_flavor_by_flavor_id(3)
            image_uuid = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6'
            def_image_ref = 'http://localhost/images/%s' % image_uuid
            self.instance_cache_num += 1
            instance = fake_instance.fake_db_instance(**{
                'id': self.instance_cache_num,
                'display_name': inst['display_name'] or 'test',
                'uuid': inst['uuid'],
                'instance_type': inst_type,
                'access_ip_v4': '1.2.3.4',
                'access_ip_v6': 'fead::1234',
                'image_ref': inst.get('image_ref', def_image_ref),
                'user_id': 'fake',
                'project_id': 'fake',
                'reservation_id': inst['reservation_id'],
                "created_at": datetime.datetime(2010, 10, 10, 12, 0, 0),
                "updated_at": datetime.datetime(2010, 11, 11, 11, 0, 0),
                "progress": 0,
                "fixed_ips": [],
                "task_state": "",
                "vm_state": "",
                "security_groups": inst['security_groups'],
            })

            self.instance_cache_by_id[instance['id']] = instance
            self.instance_cache_by_uuid[instance['uuid']] = instance
            return instance

        def instance_get(context, instance_id):
            """Stub for compute/api create() pulling in instance after
            scheduling
            """
            return self.instance_cache_by_id[instance_id]

        def instance_update(context, uuid, values):
            instance = self.instance_cache_by_uuid[uuid]
            instance.update(values)
            return instance

        def server_update(context, instance_uuid, params,
                          columns_to_join=None):
            inst = self.instance_cache_by_uuid[instance_uuid]
            inst.update(params)
            return (inst, inst)

        def fake_method(*args, **kwargs):
            pass

        def project_get_networks(context, user_id):
            return dict(id='1', host='localhost')

        fakes.stub_out_rate_limiting(self.stubs)
        fakes.stub_out_key_pair_funcs(self.stubs)
        fake.stub_out_image_service(self)
        fakes.stub_out_nw_api(self)
        self.stub_out('nova.db.instance_add_security_group',
                      return_security_group)
        self.stub_out('nova.db.project_get_networks', project_get_networks)
        self.stub_out('nova.db.instance_create', instance_create)
        self.stub_out('nova.db.instance_system_metadata_update', fake_method)
        self.stub_out('nova.db.instance_get', instance_get)
        self.stub_out('nova.db.instance_update', instance_update)
        self.stub_out('nova.db.instance_update_and_get_original',
                      server_update)
        self.stubs.Set(manager.VlanManager, 'allocate_fixed_ip',
                       fake_method)
        self.req = fakes.HTTPRequest.blank('')

    def _test_create_extra(self, params, no_image=False,
                           override_controller=None):
        image_uuid = 'c905cedb-7281-47e4-8a62-f26bc5fc4c77'
        server = dict(name='server_test', imageRef=image_uuid, flavorRef=2)
        if no_image:
            server.pop('imageRef', None)
        server.update(params)
        body = dict(server=server)
        if override_controller:
            server = override_controller.create(self.req,
                                                body=body).obj['server']
        else:
            server = self.controller.create(self.req,
                                            body=body).obj['server']

    def _check_multiple_create_extension_disabled(self, **kwargs):
        # NOTE: on v2.1 API, "create a server" API doesn't add the following
        # attributes into kwargs when non-loading multiple_create extension.
        # However, v2.0 API adds them as values "1" instead. So we need to
        # define checking methods for each API here.
        self.assertNotIn('min_count', kwargs)
        self.assertNotIn('max_count', kwargs)

    def test_create_instance_with_multiple_create_disabled(self):
        min_count = 2
        max_count = 3
        params = {
            multiple_create_v21.MIN_ATTRIBUTE_NAME: min_count,
            multiple_create_v21.MAX_ATTRIBUTE_NAME: max_count,
        }
        old_create = compute_api.API.create

        def create(*args, **kwargs):
            self._check_multiple_create_extension_disabled(**kwargs)
            return old_create(*args, **kwargs)

        self.stubs.Set(compute_api.API, 'create', create)
        self._test_create_extra(
            params,
            override_controller=self.no_mult_create_controller)

    def test_multiple_create_with_string_type_min_and_max(self):
        min_count = '2'
        max_count = '3'
        params = {
            multiple_create_v21.MIN_ATTRIBUTE_NAME: min_count,
            multiple_create_v21.MAX_ATTRIBUTE_NAME: max_count,
        }
        old_create = compute_api.API.create

        def create(*args, **kwargs):
            self.assertIsInstance(kwargs['min_count'], int)
            self.assertIsInstance(kwargs['max_count'], int)
            self.assertEqual(kwargs['min_count'], 2)
            self.assertEqual(kwargs['max_count'], 3)
            return old_create(*args, **kwargs)

        self.stubs.Set(compute_api.API, 'create', create)
        self._test_create_extra(params)

    def test_create_instance_with_multiple_create_enabled(self):
        min_count = 2
        max_count = 3
        params = {
            multiple_create_v21.MIN_ATTRIBUTE_NAME: min_count,
            multiple_create_v21.MAX_ATTRIBUTE_NAME: max_count,
        }
        old_create = compute_api.API.create

        def create(*args, **kwargs):
            self.assertEqual(kwargs['min_count'], 2)
            self.assertEqual(kwargs['max_count'], 3)
            return old_create(*args, **kwargs)

        self.stubs.Set(compute_api.API, 'create', create)
        self._test_create_extra(params)

    def test_create_instance_invalid_negative_min(self):
        image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6'
        flavor_ref = 'http://localhost/123/flavors/3'

        body = {
            'server': {
                multiple_create_v21.MIN_ATTRIBUTE_NAME: -1,
                'name': 'server_test',
                'imageRef': image_href,
                'flavorRef': flavor_ref,
            }
        }
        self.assertRaises(self.validation_error,
                          self.controller.create,
                          self.req,
                          body=body)

    def test_create_instance_invalid_negative_max(self):
        image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6'
        flavor_ref = 'http://localhost/123/flavors/3'

        body = {
            'server': {
                multiple_create_v21.MAX_ATTRIBUTE_NAME: -1,
                'name': 'server_test',
                'imageRef': image_href,
                'flavorRef': flavor_ref,
            }
        }
        self.assertRaises(self.validation_error,
                          self.controller.create,
                          self.req,
                          body=body)

    def test_create_instance_with_blank_min(self):
        image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6'
        flavor_ref = 'http://localhost/123/flavors/3'

        body = {
            'server': {
                multiple_create_v21.MIN_ATTRIBUTE_NAME: '',
                'name': 'server_test',
                'image_ref': image_href,
                'flavor_ref': flavor_ref,
            }
        }
        self.assertRaises(self.validation_error,
                          self.controller.create,
                          self.req,
                          body=body)

    def test_create_instance_with_blank_max(self):
        image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6'
        flavor_ref = 'http://localhost/123/flavors/3'

        body = {
            'server': {
                multiple_create_v21.MAX_ATTRIBUTE_NAME: '',
                'name': 'server_test',
                'image_ref': image_href,
                'flavor_ref': flavor_ref,
            }
        }
        self.assertRaises(self.validation_error,
                          self.controller.create,
                          self.req,
                          body=body)

    def test_create_instance_invalid_min_greater_than_max(self):
        image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6'
        flavor_ref = 'http://localhost/123/flavors/3'

        body = {
            'server': {
                multiple_create_v21.MIN_ATTRIBUTE_NAME: 4,
                multiple_create_v21.MAX_ATTRIBUTE_NAME: 2,
                'name': 'server_test',
                'imageRef': image_href,
                'flavorRef': flavor_ref,
            }
        }
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.create,
                          self.req,
                          body=body)

    def test_create_instance_invalid_alpha_min(self):
        image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6'
        flavor_ref = 'http://localhost/123/flavors/3'

        body = {
            'server': {
                multiple_create_v21.MIN_ATTRIBUTE_NAME: 'abcd',
                'name': 'server_test',
                'imageRef': image_href,
                'flavorRef': flavor_ref,
            }
        }
        self.assertRaises(self.validation_error,
                          self.controller.create,
                          self.req,
                          body=body)

    def test_create_instance_invalid_alpha_max(self):
        image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6'
        flavor_ref = 'http://localhost/123/flavors/3'

        body = {
            'server': {
                multiple_create_v21.MAX_ATTRIBUTE_NAME: 'abcd',
                'name': 'server_test',
                'imageRef': image_href,
                'flavorRef': flavor_ref,
            }
        }
        self.assertRaises(self.validation_error,
                          self.controller.create,
                          self.req,
                          body=body)

    def test_create_multiple_instances(self):
        """Test creating multiple instances but not asking for
        reservation_id
        """
        image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6'
        flavor_ref = 'http://localhost/123/flavors/3'
        body = {
            'server': {
                multiple_create_v21.MIN_ATTRIBUTE_NAME: 2,
                'name': 'server_test',
                'imageRef': image_href,
                'flavorRef': flavor_ref,
                'metadata': {'hello': 'world',
                             'open': 'stack'},
            }
        }

        res = self.controller.create(self.req, body=body).obj

        instance_uuids = self.instance_cache_by_uuid.keys()
        self.assertIn(res["server"]["id"], instance_uuids)
        self._check_admin_password_len(res["server"])

    def test_create_multiple_instances_pass_disabled(self):
        """Test creating multiple instances but not asking for
        reservation_id
        """
        self.flags(enable_instance_password=False)
        image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6'
        flavor_ref = 'http://localhost/123/flavors/3'
        body = {
            'server': {
                multiple_create_v21.MIN_ATTRIBUTE_NAME: 2,
                'name': 'server_test',
                'imageRef': image_href,
                'flavorRef': flavor_ref,
                'metadata': {'hello': 'world',
                             'open': 'stack'},
            }
        }

        res = self.controller.create(self.req, body=body).obj

        instance_uuids = self.instance_cache_by_uuid.keys()
        self.assertIn(res["server"]["id"], instance_uuids)
        self._check_admin_password_missing(res["server"])

    def _check_admin_password_len(self, server_dict):
        """utility function - check server_dict for admin_password length."""
        self.assertEqual(CONF.password_length,
                         len(server_dict["adminPass"]))

    def _check_admin_password_missing(self, server_dict):
        """utility function - check server_dict for admin_password absence."""
        self.assertNotIn("admin_password", server_dict)

    def _create_multiple_instances_resv_id_return(self, resv_id_return):
        """Test creating multiple instances with asking for
        reservation_id
        """
        image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6'
        flavor_ref = 'http://localhost/123/flavors/3'
        body = {
            'server': {
                multiple_create_v21.MIN_ATTRIBUTE_NAME: 2,
                'name': 'server_test',
                'imageRef': image_href,
                'flavorRef': flavor_ref,
                'metadata': {'hello': 'world',
                             'open': 'stack'},
                multiple_create_v21.RRID_ATTRIBUTE_NAME: resv_id_return
            }
        }

        res = self.controller.create(self.req, body=body)
        reservation_id = res.obj['reservation_id']
        self.assertNotEqual(reservation_id, "")
        self.assertIsNotNone(reservation_id)
        self.assertTrue(len(reservation_id) > 1)

    def test_create_multiple_instances_with_resv_id_return(self):
        self._create_multiple_instances_resv_id_return(True)

    def test_create_multiple_instances_with_string_resv_id_return(self):
        self._create_multiple_instances_resv_id_return("True")

    def test_create_multiple_instances_with_multiple_volume_bdm(self):
        """Test that a BadRequest is raised if multiple instances
        are requested with a list of block device mappings for volumes.
        """
        min_count = 2
        bdm = [{'source_type': 'volume', 'uuid': 'vol-xxxx'},
               {'source_type': 'volume', 'uuid': 'vol-yyyy'}
               ]
        params = {
            block_device_mapping_v21.ATTRIBUTE_NAME: bdm,
            multiple_create_v21.MIN_ATTRIBUTE_NAME: min_count
        }
        old_create = compute_api.API.create

        def create(*args, **kwargs):
            self.assertEqual(kwargs['min_count'], 2)
            self.assertEqual(len(kwargs['block_device_mapping']), 2)
            return old_create(*args, **kwargs)

        self.stubs.Set(compute_api.API, 'create', create)
        exc = self.assertRaises(webob.exc.HTTPBadRequest,
                                self._test_create_extra, params, no_image=True)
        self.assertEqual("Cannot attach one or more volumes to multiple "
                         "instances", exc.explanation)

    def test_create_multiple_instances_with_single_volume_bdm(self):
        """Test that a BadRequest is raised if multiple instances
        are requested to boot from a single volume.
        """
        min_count = 2
        bdm = [{'source_type': 'volume', 'uuid': 'vol-xxxx'}]
        params = {
            block_device_mapping_v21.ATTRIBUTE_NAME: bdm,
            multiple_create_v21.MIN_ATTRIBUTE_NAME: min_count
        }
        old_create = compute_api.API.create

        def create(*args, **kwargs):
            self.assertEqual(kwargs['min_count'], 2)
            self.assertEqual(kwargs['block_device_mapping'][0]['volume_id'],
                             'vol-xxxx')
            return old_create(*args, **kwargs)

        self.stubs.Set(compute_api.API, 'create', create)
        exc
op|'='
name|'self'
op|'.'
name|'assertRaises'
op|'('
name|'webob'
op|'.'
name|'exc'
op|'.'
name|'HTTPBadRequest'
op|','
nl|'\n'
name|'self'
op|'.'
name|'_test_create_extra'
op|','
name|'params'
op|','
name|'no_image'
op|'='
name|'True'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'assertEqual'
op|'('
string|'"Cannot attach one or more volumes to multiple "'
nl|'\n'
string|'"instances"'
op|','
name|'exc'
op|'.'
name|'explanation'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_create_multiple_instance_with_non_integer_max_count
dedent|''
name|'def'
name|'test_create_multiple_instance_with_non_integer_max_count'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'image_href'
op|'='
string|"'76fa36fc-c930-4bf3-8c8a-ea2a2420deb6'"
newline|'\n'
name|'flavor_ref'
op|'='
string|"'http://localhost/123/flavors/3'"
newline|'\n'
name|'body'
op|'='
op|'{'
nl|'\n'
string|"'server'"
op|':'
op|'{'
nl|'\n'
name|'multiple_create_v21'
op|'.'
name|'MAX_ATTRIBUTE_NAME'
op|':'
number|'2.5'
op|','
nl|'\n'
string|"'name'"
op|':'
string|"'server_test'"
op|','
nl|'\n'
string|"'imageRef'"
op|':'
name|'image_href'
op|','
nl|'\n'
string|"'flavorRef'"
op|':'
name|'flavor_ref'
op|','
nl|'\n'
string|"'metadata'"
op|':'
op|'{'
string|"'hello'"
op|':'
string|"'world'"
op|','
nl|'\n'
string|"'open'"
op|':'
string|"'stack'"
op|'}'
op|','
nl|'\n'
op|'}'
nl|'\n'
op|'}'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertRaises'
op|'('
name|'self'
op|'.'
name|'validation_error'
op|','
nl|'\n'
name|'self'
op|'.'
name|'controller'
op|'.'
name|'create'
op|','
name|'self'
op|'.'
name|'req'
op|','
name|'body'
op|'='
name|'body'
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_create_multiple_instance_with_non_integer_min_count
dedent|''
name|'def'
name|'test_create_multiple_instance_with_non_integer_min_count'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'image_href'
op|'='
string|"'76fa36fc-c930-4bf3-8c8a-ea2a2420deb6'"
newline|'\n'
name|'flavor_ref'
op|'='
string|"'http://localhost/123/flavors/3'"
newline|'\n'
name|'body'
op|'='
op|'{'
nl|'\n'
string|"'server'"
op|':'
op|'{'
nl|'\n'
name|'multiple_create_v21'
op|'.'
name|'MIN_ATTRIBUTE_NAME'
op|':'
number|'2.5'
op|','
nl|'\n'
string|"'name'"
op|':'
string|"'server_test'"
op|','
nl|'\n'
string|"'imageRef'"
op|':'
name|'image_href'
op|','
nl|'\n'
string|"'flavorRef'"
op|':'
name|'flavor_ref'
op|','
nl|'\n'
string|"'metadata'"
op|':'
op|'{'
string|"'hello'"
op|':'
string|"'world'"
op|','
nl|'\n'
string|"'open'"
op|':'
string|"'stack'"
op|'}'
op|','
nl|'\n'
op|'}'
nl|'\n'
op|'}'
newline|'\n'
nl|'\n'
name|'self'
op|'.'
name|'assertRaises'
op|'('
name|'self'
op|'.'
name|'validation_error'
op|','
nl|'\n'
name|'self'
op|'.'
name|'controller'
op|'.'
name|'create'
op|','
name|'self'
op|'.'
name|'req'
op|','
name|'body'
op|'='
name|'body'
op|')'
newline|'\n'
dedent|''
dedent|''
endmarker|''
end_unit
| 13.107582 | 152 | 0.615232 | 5,732 | 38,379 | 3.989532 | 0.05792 | 0.133287 | 0.036733 | 0.069792 | 0.872311 | 0.825564 | 0.796222 | 0.766792 | 0.732552 | 0.685674 | 0 | 0.012526 | 0.095156 | 38,379 | 2,927 | 153 | 13.11206 | 0.645982 | 0 | 0 | 0.94602 | 0 | 0.000683 | 0.389718 | 0.063759 | 0 | 0 | 0 | 0 | 0.010933 | 0 | null | null | 0.005808 | 0.005125 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
f28c4e9784618dee268d5f8588a3f4d202c01e57 | 1,344 | py | Python | ocradmin/documents/models.py | mikesname/ocropodium | a3e379cca38dc1999349bf4e9b5608e81dc54b10 | [
"Apache-2.0"
] | 1 | 2018-04-18T20:39:02.000Z | 2018-04-18T20:39:02.000Z | ocradmin/documents/models.py | mikesname/ocropodium | a3e379cca38dc1999349bf4e9b5608e81dc54b10 | [
"Apache-2.0"
] | null | null | null | ocradmin/documents/models.py | mikesname/ocropodium | a3e379cca38dc1999349bf4e9b5608e81dc54b10 | [
"Apache-2.0"
] | null | null | null | from django.db import models
from ocradmin.projects.models import Project
from ocradmin import storage
from django.conf import settings
#class DocumentBase(object):
# """Document model abstract class. Each storage
# backend implements its own version of this."""
# def __init__(self, label):
# """Initialise the Document with an image path/handle."""
# self._label = label
#
# @property
# def label(self):
# raise NotImplementedError
#
# def __unicode__(self):
# """Unicode representation."""
# return self.label
#
# def save(self):
# """Save objects, settings dates if necessary
# and writing all cached datastreams to storage."""
# raise NotImplementedError
#
# def set_image_content(self, content):
# """Set image content."""
# raise NotImplementedError
#
# def set_image_mimetype(self, mimetype):
# """Set image mimetype."""
# raise NotImplementedError
#
# def set_image_label(self, label):
# """Set image label."""
# raise NotImplementedError
#
# def set_label(self, label):
# """Set document label."""
# raise NotImplementedError
#
# def set_metadata(self, attr, value):
# """Set arbitrary document metadata."""
# raise NotImplementedError
| 26.352941 | 74 | 0.627976 | 138 | 1,344 | 5.992754 | 0.434783 | 0.203144 | 0.195889 | 0.181378 | 0.211608 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.260417 | 1,344 | 50 | 75 | 26.88 | 0.831992 | 0.835565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
f2ab73ebd7c9f579f7e3cee6ff1e83814bff2cf1 | 101 | py | Python | DSA/bit_magic/divisibility.py | RohanMiraje/DSAwithPython | ea4884afcac9d6cc2817a93e918c829dd10cef5d | [
"MIT"
] | 2 | 2020-02-12T03:00:03.000Z | 2020-07-06T17:27:03.000Z | DSA/bit_magic/divisibility.py | RohanMiraje/DSAwithPython | ea4884afcac9d6cc2817a93e918c829dd10cef5d | [
"MIT"
] | null | null | null | DSA/bit_magic/divisibility.py | RohanMiraje/DSAwithPython | ea4884afcac9d6cc2817a93e918c829dd10cef5d | [
"MIT"
] | null | null | null | def check_if_no_is_div_by_9(n):
pass
if __name__ == '__main__':
check_if_no_is_div_by_9(9)
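# A quick, self-contained sanity sketch of the bit trick this module is after
# (the helper name `div_by_9` is illustrative, not part of the module): since
# 8 ≡ -1 (mod 9), writing n = 8q + r gives n ≡ r - q (mod 9), so the test can
# recurse on q - r using only shifts and masks.

```python
def div_by_9(n):
    # n = 8q + r with q = n >> 3, r = n & 7; since 8 ≡ -1 (mod 9),
    # n ≡ r - q (mod 9), so n is divisible by 9 exactly when q - r is.
    if n < 10:  # base case also absorbs the small negative q - r values
        return n in (0, 9)
    return div_by_9((n >> 3) - (n & 7))

# Cross-check the bit trick against plain modulo arithmetic.
print(all(div_by_9(i) == (i % 9 == 0) for i in range(1000)))  # → True
```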
| 14.428571 | 31 | 0.722772 | 21 | 101 | 2.52381 | 0.571429 | 0.264151 | 0.339623 | 0.415094 | 0.641509 | 0.641509 | 0.641509 | 0 | 0 | 0 | 0 | 0.036145 | 0.178218 | 101 | 6 | 32 | 16.833333 | 0.60241 | 0 | 0 | 0 | 0 | 0 | 0.079208 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.25 | 0 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
4b80c16aa52029cb5daf60aaf5c85d5e67b38206 | 11,964 | py | Python | tests/test_apis_APIBuilder.py | josiah-wolf-oberholtzer/uqbar | 96f86eb6264b0677a9e2931a527769640e5658b6 | [
"MIT"
] | 7 | 2018-12-02T05:59:54.000Z | 2021-12-28T22:40:18.000Z | tests/test_apis_APIBuilder.py | josiah-wolf-oberholtzer/uqbar | 96f86eb6264b0677a9e2931a527769640e5658b6 | [
"MIT"
] | 16 | 2017-12-28T22:08:09.000Z | 2022-02-26T14:47:23.000Z | tests/test_apis_APIBuilder.py | josiah-wolf-oberholtzer/uqbar | 96f86eb6264b0677a9e2931a527769640e5658b6 | [
"MIT"
] | 5 | 2020-03-28T14:57:47.000Z | 2022-02-01T10:02:18.000Z | import pathlib
import shutil
import sys
import pytest
import uqbar.apis
from uqbar.strings import normalize
@pytest.fixture
def test_path():
test_path = pathlib.Path(__file__).parent
docs_path = test_path / "docs"
if str(test_path) not in sys.path:
sys.path.insert(0, str(test_path))
if docs_path.exists():
shutil.rmtree(str(docs_path))
yield test_path
if docs_path.exists():
shutil.rmtree(str(docs_path))
def test_collection_01(test_path):
builder = uqbar.apis.APIBuilder([test_path / "fake_package"], test_path / "docs")
source_paths = uqbar.apis.collect_source_paths(builder._initial_source_paths)
node_tree = builder.build_node_tree(source_paths)
assert normalize(str(node_tree)) == normalize(
"""
None/
fake_package/
fake_package.empty_module
fake_package.empty_package/
fake_package.empty_package.empty
fake_package.enums
fake_package.module
fake_package.multi/
fake_package.multi.one
fake_package.multi.two
"""
)
documenters = list(builder.collect_module_documenters(node_tree))
assert isinstance(documenters[0], uqbar.apis.RootDocumenter)
assert [documenter.package_path for documenter in documenters[1:]] == [
"fake_package",
"fake_package.empty_module",
"fake_package.empty_package",
"fake_package.empty_package.empty",
"fake_package.enums",
"fake_package.module",
"fake_package.multi",
"fake_package.multi.one",
"fake_package.multi.two",
]
def test_collection_02(test_path):
builder = uqbar.apis.APIBuilder(
[test_path / "fake_package"], test_path / "docs", document_private_modules=True
)
source_paths = uqbar.apis.collect_source_paths(builder._initial_source_paths)
node_tree = builder.build_node_tree(source_paths)
assert normalize(str(node_tree)) == normalize(
"""
None/
fake_package/
fake_package._private/
fake_package._private.nested
fake_package.empty_module
fake_package.empty_package/
fake_package.empty_package.empty
fake_package.enums
fake_package.module
fake_package.multi/
fake_package.multi.one
fake_package.multi.two
"""
)
documenters = list(builder.collect_module_documenters(node_tree))
assert isinstance(documenters[0], uqbar.apis.RootDocumenter)
assert [documenter.package_path for documenter in documenters[1:]] == [
"fake_package",
"fake_package._private",
"fake_package._private.nested",
"fake_package.empty_module",
"fake_package.empty_package",
"fake_package.empty_package.empty",
"fake_package.enums",
"fake_package.module",
"fake_package.multi",
"fake_package.multi.one",
"fake_package.multi.two",
]
def test_collection_03(test_path):
builder = uqbar.apis.APIBuilder(
[test_path / "fake_package" / "multi"], test_path / "docs"
)
source_paths = uqbar.apis.collect_source_paths(builder._initial_source_paths)
node_tree = builder.build_node_tree(source_paths)
documenters = list(builder.collect_module_documenters(node_tree))
assert isinstance(documenters[0], uqbar.apis.RootDocumenter)
assert [documenter.package_path for documenter in documenters[1:]] == [
"fake_package",
"fake_package.multi",
"fake_package.multi.one",
"fake_package.multi.two",
]
def test_collection_04(test_path):
builder = uqbar.apis.APIBuilder(
[test_path / "fake_package"], test_path / "docs", document_empty_modules=False
)
source_paths = uqbar.apis.collect_source_paths(builder._initial_source_paths)
node_tree = builder.build_node_tree(source_paths)
assert normalize(str(node_tree)) == normalize(
"""
None/
fake_package/
fake_package.enums
fake_package.module
fake_package.multi/
fake_package.multi.one
fake_package.multi.two
"""
)
documenters = list(builder.collect_module_documenters(node_tree))
assert isinstance(documenters[0], uqbar.apis.RootDocumenter)
assert [documenter.package_path for documenter in documenters[1:]] == [
"fake_package",
"fake_package.enums",
"fake_package.module",
"fake_package.multi",
"fake_package.multi.one",
"fake_package.multi.two",
]
def test_output_01(test_path):
builder = uqbar.apis.APIBuilder([test_path / "fake_package"], test_path / "docs")
builder()
paths = sorted((test_path / "docs").rglob("*"))
paths = [str(path.relative_to(test_path)) for path in paths]
assert paths == [
"docs/fake_package",
"docs/fake_package/empty_module.rst",
"docs/fake_package/empty_package",
"docs/fake_package/empty_package/empty.rst",
"docs/fake_package/empty_package/index.rst",
"docs/fake_package/enums.rst",
"docs/fake_package/index.rst",
"docs/fake_package/module.rst",
"docs/fake_package/multi",
"docs/fake_package/multi/index.rst",
"docs/fake_package/multi/one.rst",
"docs/fake_package/multi/two.rst",
"docs/index.rst",
]
base_path = test_path / "docs" / "fake_package"
with (base_path / ".." / "index.rst").open() as file_pointer:
assert normalize(file_pointer.read()) == normalize(
"""
API
===
.. toctree::
fake_package/index
"""
)
with (base_path / "index.rst").open() as file_pointer:
assert normalize(file_pointer.read()) == normalize(
"""
.. _fake-package:
fake_package
============
.. automodule:: fake_package
.. currentmodule:: fake_package
.. toctree::
empty_module
empty_package/index
enums
module
multi/index
"""
)
with (base_path / "module.rst").open() as file_pointer:
assert normalize(file_pointer.read()) == normalize(
"""
.. _fake-package--module:
module
======
.. automodule:: fake_package.module
.. currentmodule:: fake_package.module
.. autoclass:: ChildClass
:members:
:undoc-members:
.. autoclass:: PublicClass
:members:
:undoc-members:
.. autofunction:: public_function
"""
)
def test_output_02(test_path):
builder = uqbar.apis.APIBuilder(
[test_path / "fake_package"], test_path / "docs", document_private_modules=True
)
builder()
paths = sorted((test_path / "docs").rglob("*"))
paths = [str(path.relative_to(test_path)) for path in paths]
assert paths == [
"docs/fake_package",
"docs/fake_package/_private",
"docs/fake_package/_private/index.rst",
"docs/fake_package/_private/nested.rst",
"docs/fake_package/empty_module.rst",
"docs/fake_package/empty_package",
"docs/fake_package/empty_package/empty.rst",
"docs/fake_package/empty_package/index.rst",
"docs/fake_package/enums.rst",
"docs/fake_package/index.rst",
"docs/fake_package/module.rst",
"docs/fake_package/multi",
"docs/fake_package/multi/index.rst",
"docs/fake_package/multi/one.rst",
"docs/fake_package/multi/two.rst",
"docs/index.rst",
]
base_path = test_path / "docs" / "fake_package"
with (base_path / ".." / "index.rst").open() as file_pointer:
assert normalize(file_pointer.read()) == normalize(
"""
API
===
.. toctree::
fake_package/index
"""
)
with (base_path / "index.rst").open() as file_pointer:
assert normalize(file_pointer.read()) == normalize(
"""
.. _fake-package:
fake_package
============
.. automodule:: fake_package
.. currentmodule:: fake_package
.. toctree::
_private/index
empty_module
empty_package/index
enums
module
multi/index
"""
)
with (base_path / "module.rst").open() as file_pointer:
assert normalize(file_pointer.read()) == normalize(
"""
.. _fake-package--module:
module
======
.. automodule:: fake_package.module
.. currentmodule:: fake_package.module
.. autoclass:: ChildClass
:members:
:undoc-members:
.. autoclass:: PublicClass
:members:
:undoc-members:
.. autofunction:: public_function
"""
)
def test_output_03(test_path):
builder = uqbar.apis.APIBuilder(
[test_path / "fake_package"],
test_path / "docs",
document_private_members=True,
document_private_modules=True,
)
builder()
paths = sorted((test_path / "docs").rglob("*"))
paths = [str(path.relative_to(test_path)) for path in paths]
assert paths == [
"docs/fake_package",
"docs/fake_package/_private",
"docs/fake_package/_private/index.rst",
"docs/fake_package/_private/nested.rst",
"docs/fake_package/empty_module.rst",
"docs/fake_package/empty_package",
"docs/fake_package/empty_package/empty.rst",
"docs/fake_package/empty_package/index.rst",
"docs/fake_package/enums.rst",
"docs/fake_package/index.rst",
"docs/fake_package/module.rst",
"docs/fake_package/multi",
"docs/fake_package/multi/index.rst",
"docs/fake_package/multi/one.rst",
"docs/fake_package/multi/two.rst",
"docs/index.rst",
]
base_path = test_path / "docs" / "fake_package"
with (base_path / ".." / "index.rst").open() as file_pointer:
assert normalize(file_pointer.read()) == normalize(
"""
API
===
.. toctree::
fake_package/index
"""
)
with (base_path / "index.rst").open() as file_pointer:
assert normalize(file_pointer.read()) == normalize(
"""
.. _fake-package:
fake_package
============
.. automodule:: fake_package
.. currentmodule:: fake_package
.. toctree::
_private/index
empty_module
empty_package/index
enums
module
multi/index
"""
)
with (base_path / "module.rst").open() as file_pointer:
assert normalize(file_pointer.read()) == normalize(
"""
.. _fake-package--module:
module
======
.. automodule:: fake_package.module
.. currentmodule:: fake_package.module
.. autoclass:: ChildClass
:members:
:undoc-members:
.. autoclass:: PublicClass
:members:
:undoc-members:
.. autoclass:: _PrivateClass
:members:
:undoc-members:
.. autofunction:: _private_function
.. autofunction:: public_function
"""
)
| 30.365482 | 87 | 0.56996 | 1,208 | 11,964 | 5.362583 | 0.073676 | 0.224143 | 0.104199 | 0.077802 | 0.943964 | 0.943964 | 0.943964 | 0.943964 | 0.943964 | 0.943964 | 0 | 0.002798 | 0.312939 | 11,964 | 393 | 88 | 30.442748 | 0.78528 | 0 | 0 | 0.731959 | 0 | 0 | 0.279724 | 0.204645 | 0 | 0 | 0 | 0 | 0.118557 | 1 | 0.041237 | false | 0 | 0.030928 | 0 | 0.072165 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
4ba874689ddef0d70402f40b7202be1dafa315d9 | 4,836 | py | Python | imagemodel/experimental/reference_tracking/dataset_providers/rt_transformer.py | tenkeyless/imagemodel | 360c672117b5ccb1bfb3d6771b0720fa1a1f513c | [
"MIT"
] | null | null | null | imagemodel/experimental/reference_tracking/dataset_providers/rt_transformer.py | tenkeyless/imagemodel | 360c672117b5ccb1bfb3d6771b0720fa1a1f513c | [
"MIT"
] | null | null | null | imagemodel/experimental/reference_tracking/dataset_providers/rt_transformer.py | tenkeyless/imagemodel | 360c672117b5ccb1bfb3d6771b0720fa1a1f513c | [
"MIT"
] | null | null | null | from abc import ABCMeta, abstractmethod
from typing import Tuple
import tensorflow as tf
from imagemodel.common.dataset_providers.transformer import TransformerP, TransformerT
class RTTransformerT(TransformerT, metaclass=ABCMeta):
@property
@abstractmethod
def bin_size(self) -> int:
pass
def __augment(
self,
main_img: tf.Tensor,
ref_img: tf.Tensor,
main_label: tf.Tensor,
ref_label: tf.Tensor,
main_bw_label: tf.Tensor,
ref_bw_label: tf.Tensor) -> Tuple[tf.Tensor, tf.Tensor, tf.Tensor, tf.Tensor, tf.Tensor, tf.Tensor]:
pass
def __resize(
self,
main_img: tf.Tensor,
ref_img: tf.Tensor,
main_label: tf.Tensor,
ref_label: tf.Tensor,
main_bw_label: tf.Tensor,
ref_bw_label: tf.Tensor) -> Tuple[tf.Tensor, tf.Tensor, tf.Tensor, tf.Tensor, tf.Tensor, tf.Tensor]:
pass
def __color_extract(
self,
main_img: tf.Tensor,
ref_img: tf.Tensor,
main_label: tf.Tensor,
ref_label: tf.Tensor,
main_bw_label: tf.Tensor,
ref_bw_label: tf.Tensor) -> \
Tuple[tf.Tensor, tf.Tensor, tf.Tensor, Tuple[tf.Tensor, tf.Tensor], tf.Tensor, tf.Tensor, tf.Tensor]:
pass
def __color_to_bin(
self,
main_img: tf.Tensor,
ref_img: tf.Tensor,
main_label: tf.Tensor,
ref_label_color_map: tf.Tensor,
ref_label: tf.Tensor,
main_bw_label: tf.Tensor,
ref_bw_label: tf.Tensor) -> Tuple[tf.Tensor, tf.Tensor, tf.Tensor, tf.Tensor, tf.Tensor, tf.Tensor]:
pass
def __apply_filter(
self,
main_img: tf.Tensor,
ref_img: tf.Tensor,
bin_main_label: tf.Tensor,
bin_ref_label: tf.Tensor,
main_bw_label: tf.Tensor,
ref_bw_label: tf.Tensor) -> Tuple[tf.Tensor, tf.Tensor, tf.Tensor, tf.Tensor, tf.Tensor, tf.Tensor]:
pass
def __norm_data(
self,
main_img: tf.Tensor,
ref_img: tf.Tensor,
bin_main_label: tf.Tensor,
bin_ref_label: tf.Tensor,
main_bw_label: tf.Tensor,
ref_bw_label: tf.Tensor) -> Tuple[tf.Tensor, tf.Tensor, tf.Tensor, tf.Tensor, tf.Tensor, tf.Tensor]:
pass
def __zip_dataset(
self,
main_img: tf.Tensor,
ref_img: tf.Tensor,
bin_main_label: tf.Tensor,
bin_ref_label: tf.Tensor,
main_bw_label: tf.Tensor,
ref_bw_label: tf.Tensor) -> \
Tuple[Tuple[tf.Tensor, tf.Tensor, tf.Tensor], Tuple[tf.Tensor, tf.Tensor, tf.Tensor]]:
pass
class RTTransformerP(TransformerP, metaclass=ABCMeta):
@property
@abstractmethod
def bin_size(self) -> int:
pass
def __resize(self, filename: tf.Tensor, main_img: tf.Tensor, ref_img: tf.Tensor, ref_label: tf.Tensor) -> \
Tuple[tf.Tensor, tf.Tensor, tf.Tensor, tf.Tensor]:
pass
def __color_extract(self, filename: tf.Tensor, main_img: tf.Tensor, ref_img: tf.Tensor, ref_label: tf.Tensor) -> \
Tuple[tf.Tensor, tf.Tensor, tf.Tensor, tf.Tensor, Tuple[tf.Tensor, tf.Tensor]]:
pass
def __color_to_bin(
self,
filename: tf.Tensor,
main_img: tf.Tensor,
ref_img: tf.Tensor,
ref_label: tf.Tensor,
ref_label_color_map: Tuple[tf.Tensor, tf.Tensor]) -> \
Tuple[tf.Tensor, tf.Tensor, tf.Tensor, tf.Tensor, Tuple[tf.Tensor, tf.Tensor]]:
pass
def __apply_filter(
self,
filename: tf.Tensor,
main_img: tf.Tensor,
ref_img: tf.Tensor,
bin_ref_label: tf.Tensor,
ref_label_color_map: Tuple[tf.Tensor, tf.Tensor]) -> \
Tuple[tf.Tensor, tf.Tensor, tf.Tensor, tf.Tensor, Tuple[tf.Tensor, tf.Tensor]]:
pass
def __norm_data(
self,
filename: tf.Tensor,
main_img: tf.Tensor,
ref_img: tf.Tensor,
bin_ref_label: tf.Tensor,
ref_label_color_map: Tuple[tf.Tensor, tf.Tensor]) -> \
Tuple[tf.Tensor, tf.Tensor, tf.Tensor, tf.Tensor, Tuple[tf.Tensor, tf.Tensor]]:
pass
def __zip_dataset(
self,
filename: tf.Tensor,
main_img: tf.Tensor,
ref_img: tf.Tensor,
bin_ref_label: tf.Tensor,
ref_label_color_map: Tuple[tf.Tensor, tf.Tensor]) -> \
Tuple[Tuple[tf.Tensor, tf.Tensor, tf.Tensor], Tuple[tf.Tensor, Tuple[tf.Tensor, tf.Tensor]]]:
pass
| 34.056338 | 118 | 0.56493 | 614 | 4,836 | 4.223127 | 0.074919 | 0.47204 | 0.235249 | 0.376398 | 0.910143 | 0.910143 | 0.910143 | 0.905515 | 0.895874 | 0.873506 | 0 | 0 | 0.327543 | 4,836 | 141 | 119 | 34.297872 | 0.797355 | 0 | 0 | 0.868852 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.122951 | false | 0.122951 | 0.032787 | 0 | 0.172131 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 11 |
4bc2106f1b1852ec4a2e3e054f318f2eec8fce24 | 3,732 | py | Python | 20180806_HTML.py | DahyeLee0403/HTML_- | adab04b24181c1e4fbeb14bbeca4dd960c49e3a9 | [
"MIT"
] | null | null | null | 20180806_HTML.py | DahyeLee0403/HTML_- | adab04b24181c1e4fbeb14bbeca4dd960c49e3a9 | [
"MIT"
] | null | null | null | 20180806_HTML.py | DahyeLee0403/HTML_- | adab04b24181c1e4fbeb14bbeca4dd960c49e3a9 | [
"MIT"
] | null | null | null | #20180806_코딩야학_4일차
#부모 자식과 목차
<ul>: unordered list/ 다른 목록 들과 구분할 경계
<ol>: ordered list (1. 2. 3. 이렇게 숫자 부여)
<li>: list
-----------------------------------------------
<ol>
<li>1. HTML</li>
<li>2. CSS</li>
<li>3. JavaScript</li>
</ol>
<h1>HTML</h1>
<p>Hypertext Markup Language (HTML) is the standard markup language for <strong>creating <u>web</u> pages</strong> and web applications.Web browsers receive HTML documents from a web server or from local storage and render them into multimedia web pages. HTML describes the structure of a web page semantically and originally included cues for the appearance of the document.
</p><p style="margin-top:45px;">HTML elements are the building blocks of HTML pages. With HTML constructs, images and other objects, such as interactive forms, may be embedded into the rendered page. It provides a means to create structured documents by denoting structural semantics for text such as headings, paragraphs, lists, links, quotes and other items. HTML elements are delineated by tags, written using angle brackets.
</p>
-----------------------------------------------
#Structure of a document
Specifying the title
<title> : the most important element when a search engine indexes the web page
Why non-English characters appear garbled:
the character 'encoding' the page was saved with does not match
the way the web browser 'interprets' the page does not match
<meta charset="utf-8"> : read the page as 'utf-8'
<!doctype html> : declares that the page is written in HTML.
-----------------------------------------------
<html>
<head>
<title>WEB1 - html</title>
<meta charset="utf-8">
</head>
<body>
<ol>
<li>HTML</li>
<li>CSS</li>
<li>JavaScript</li>
</ol>
<h1>HTML</h1>
<p>Hypertext Markup Language (HTML) is the standard markup language for <strong>creating <u>web</u> pages</strong> and web applications.Web browsers receive HTML documents from a web server or from local storage and render them into multimedia web pages. HTML describes the structure of a web page semantically and originally included cues for the appearance of the document.
</p><p style="margin-top:45px;">HTML elements are the building blocks of HTML pages. With HTML constructs, images and other objects, such as interactive forms, may be embedded into the rendered page. It provides a means to create structured documents by denoting structural semantics for text such as headings, paragraphs, lists, links, quotes and other items. HTML elements are delineated by tags, written using angle brackets.
</p>
-----------------------------------------------
#The king of HTML tags
href : hypertext reference (an attribute of <a>, not a tag itself)
<a> : anchor
target="_blank" : makes the linked page open in a new window/tab when clicked
title : shows a tooltip describing what the link contains
-----------------------------------------------
<!doctype html>
<html>
<head>
<title>WEB1 - html</title>
<meta charset="utf-8">
</head>
<body>
<ol>
<li>HTML</li>
<li>CSS</li>
<li>JavaScript</li>
</ol>
<h1>HTML</h1>
<p><a href="https://www.w3.org/TR/html5/" target="_blank" title="html5 specification">Hypertext Markup Language (HTML)</a> is the standard markup language for <strong>creating <u>web</u> pages</strong> and web applications.Web browsers receive HTML documents from a web server or from local storage and render them into multimedia web pages. HTML describes the structure of a web page semantically and originally included cues for the appearance of the document.
<img src="coding.jpg" width="100%">
</p><p style="margin-top:45px;">HTML elements are the building blocks of HTML pages. With HTML constructs, images and other objects, such as interactive forms, may be embedded into the rendered page. It provides a means to create structured documents by denoting structural semantics for text such as headings, paragraphs, lists, links, quotes and other items. HTML elements are delineated by tags, written using angle brackets.
</p>
</body>
</html>
| 47.846154 | 464 | 0.693462 | 575 | 3,732 | 4.493913 | 0.32 | 0.009288 | 0.03483 | 0.018576 | 0.816176 | 0.816176 | 0.816176 | 0.816176 | 0.816176 | 0.816176 | 0 | 0.012318 | 0.151661 | 3,732 | 77 | 465 | 48.467532 | 0.803853 | 0.011522 | 0 | 0.634921 | 0 | 0 | 0.04152 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
29a99923af94efeab6ab8092d6f592745aea08cb | 106 | py | Python | app/models.py | OrigamiCranes/PrintingPortal | e25f9f683dca3a0dcf4c90ae50515d7693447cb8 | [
"MIT",
"Unlicense"
] | null | null | null | app/models.py | OrigamiCranes/PrintingPortal | e25f9f683dca3a0dcf4c90ae50515d7693447cb8 | [
"MIT",
"Unlicense"
] | null | null | null | app/models.py | OrigamiCranes/PrintingPortal | e25f9f683dca3a0dcf4c90ae50515d7693447cb8 | [
"MIT",
"Unlicense"
] | null | null | null | from app import db
from app.blueprints.printing.models import *
from app.blueprints.auth.models import *
| 21.2 | 44 | 0.801887 | 16 | 106 | 5.3125 | 0.5 | 0.247059 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122642 | 106 | 4 | 45 | 26.5 | 0.913978 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.666667 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 7 |
29e2f48e5a36d3aaac84e02312e8a2ca8448c94b | 35,089 | py | Python | 0_joan_stark/lamp.py | wang0618/ascii-art | 7ce6f152541716034bf0a22d341a898b17e2865f | [
"MIT"
] | 1 | 2021-08-29T09:52:06.000Z | 2021-08-29T09:52:06.000Z | 0_joan_stark/lamp.py | wang0618/ascii-art | 7ce6f152541716034bf0a22d341a898b17e2865f | [
"MIT"
] | null | null | null | 0_joan_stark/lamp.py | wang0618/ascii-art | 7ce6f152541716034bf0a22d341a898b17e2865f | [
"MIT"
] | null | null | null | # Genie in a Lamp
# https://web.archive.org/web/20000229125939/http://www.geocities.com/SoHo/Gallery/6446/mlamp.htm
duration = 200
name = "Aladdin"
frames = [
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" ()\n" +
" <^^>\n" +
" .-\"\"-. \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" ()\n" +
" <^^>\n" +
" .-\"\"-. \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" ()\n" +
" <^^> * \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" ()\n" +
" <^^> * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" () ) *\n" +
" <^^> * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" `-._`-._ *\n" +
" * `~~~-, \n" +
" () ) *\n" +
" <^^> * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" `-~~~\\ .\n" +
" . `-._`-._ *\n" +
" * `~~~-, \n" +
" () ) *\n" +
" <^^> * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" * \\ . \\ *\n" +
" `-~~~\\ .\n" +
" . `-._`-._ *\n" +
" * `~~~-, \n" +
" () ) *\n" +
" <^^> * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" `\\' \\'\\`'----'\n" +
" * \\ . \\ *\n" +
" `-~~~\\ .\n" +
" . `-._`-._ *\n" +
" * `~~~-, \n" +
" () ) *\n" +
" <^^> * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" `-._ `~` `-,./_<\n" +
" `\\' \\'\\`'----'\n" +
" * \\ . \\ *\n" +
" `-~~~\\ .\n" +
" . `-._`-._ *\n" +
" * `~~~-, \n" +
" () ) *\n" +
" <^^> * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \\ \\_.-./=\\.-._ __\n" +
" `-._ `~` `-,./_<\n" +
" `\\' \\'\\`'----'\n" +
" * \\ . \\ *\n" +
" `-~~~\\ .\n" +
" . `-._`-._ *\n" +
" * `~~~-, \n" +
" () ) *\n" +
" <^^> * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" / / \\ c /O\n" +
" \\ \\_.-./=\\.-._ __\n" +
" `-._ `~` `-,./_<\n" +
" `\\' \\'\\`'----'\n" +
" * \\ . \\ *\n" +
" `-~~~\\ .\n" +
" . `-._`-._ *\n" +
" * `~~~-, \n" +
" () ) *\n" +
" <^^> * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" \n" +
" \n" +
" \n" +
" / _<( ^.^ )\n" +
" / / \\ c /O\n" +
" \\ \\_.-./=\\.-._ __\n" +
" `-._ `~` `-,./_<\n" +
" `\\' \\'\\`'----'\n" +
" * \\ . \\ *\n" +
" `-~~~\\ .\n" +
" . `-._`-._ *\n" +
" * `~~~-, \n" +
" () ) *\n" +
" <^^> * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" \n" +
" \n" +
" ___\\__/__/\n" +
" / _<( ^.^ )\n" +
" / / \\ c /O\n" +
" \\ \\_.-./=\\.-._ __\n" +
" `-._ `~` `-,./_<\n" +
" `\\' \\'\\`'----'\n" +
" * \\ . \\ *\n" +
" `-~~~\\ .\n" +
" . `-._`-._ *\n" +
" * `~~~-, \n" +
" () ) *\n" +
" <^^> * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" \n" +
" / ! )\\\n" +
" ___\\__/__/\n" +
" / _<( ^.^ )\n" +
" / / \\ c /O\n" +
" \\ \\_.-./=\\.-._ __\n" +
" `-._ `~` `-,./_<\n" +
" `\\' \\'\\`'----'\n" +
" * \\ . \\ *\n" +
" `-~~~\\ .\n" +
" . `-._`-._ *\n" +
" * `~~~-, \n" +
" () ) *\n" +
" <^^> * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" .-=-.\n" +
" / ! )\\\n" +
" ___\\__/__/\n" +
" / _<( ^.^ )\n" +
" / / \\ c /O\n" +
" \\ \\_.-./=\\.-._ __\n" +
" `-._ `~` `-,./_<\n" +
" `\\' \\'\\`'----'\n" +
" * \\ . \\ *\n" +
" `-~~~\\ .\n" +
" . `-._`-._ *\n" +
" * `~~~-, \n" +
" () ) *\n" +
" <^^> * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" .-=-.\n" +
" / ! )\\\n" +
" ___\\__/__/\n" +
" MAKE A WISH! / _<( o.o )\n" +
" / / \\ c /O\n" +
" \\ \\_.-./=\\.-._ __\n" +
" `-._ `~` `-,./_<\n" +
" `\\' \\'\\`'----'\n" +
" * \\ . \\ *\n" +
" `-~~~\\ .\n" +
" . `-._`-._ *\n" +
" * `~~~-, \n" +
" () ) *\n" +
" <^^> * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" .-=-.\n" +
" / ! )\\\n" +
" ___\\__/__/\n" +
" MAKE A WISH! / _<( ^.^ )\n" +
" / / \\ = /O\n" +
" \\ \\_.-./=\\.-._ __\n" +
" . `-._ `~` `-,./_< *\n" +
" `\\' \\'\\`'----'\n" +
" * \\ . \\ *\n" +
" `-~~~\\ .\n" +
" . `-._`-._ *\n" +
" * `~~~-, \n" +
" () ) *\n" +
" <^^> * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" .-=-. *\n" +
" / ! )\\\n" +
" ___\\__/__/\n" +
" MAKE A WISH! / _<( o.o )\n" +
" / / \\ = /O\n" +
" \\ \\_.-./=\\.-._ __ *\n" +
" * `-._ `~` `-,./_<\n" +
" `\\' \\'\\`'----'\n" +
" * \\ . \\ *\n" +
" `-~~~\\ .\n" +
" . `-._`-._ *\n" +
" * `~~~-, \n" +
" () ) *\n" +
" <^^> * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" .-=-.\n" +
" / ! )\\\n" +
" ___\\__/__/\n" +
" MAKE A WISH! / _<( ^.^ ) *\n" +
" / / \\ c /O\n" +
" \\ \\_.-./=\\.-._ __\n" +
" `-._ `~` `-,./_<\n" +
" . `\\' \\'\\`'----'\n" +
" * \\ . \\ *\n" +
              "               `-~~~\\ .\n" +
" . `-._`-._ *\n" +
" * `~~~-, \n" +
" () ) *\n" +
" <^^> * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" .-=-.\n" +
" / ! )\\\n" +
" ___\\__/__/\n" +
" MAKE A WISH! / _<( ^.^ )\n" +
" / / \\ = /O\n" +
" \\ \\_.-./=\\.-._ __ *\n" +
" `-._ `~` `-,./_<\n" +
" `\\' \\'\\`'----'\n" +
" * \\ . \\ *\n" +
" `-~~~\\ .\n" +
" . `-._`-._ *\n" +
" * `~~~-, \n" +
" () ) *\n" +
" <^^> * * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" .-=-.\n" +
" / ! )\\\n" +
" ___\\__/__/\n" +
" MAKE A WISH! / _<( o.o )\n" +
" / / \\ = /O\n" +
" \\ \\_.-./=\\.-._ __\n" +
" * `-._ `~` `-,./_<\n" +
" `\\' \\'\\`'----'\n" +
" * \\ . \\ *\n" +
" `-~~~\\ . *\n" +
" . `-._`-._ *\n" +
" * `~~~-, \n" +
" () ) *\n" +
" <^^> * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" .-=-.\n" +
" / ! )\\\n" +
" ___\\__/__/\n" +
" MAKE A WISH! / _<( ^.^ ) *\n" +
" / / \\ c /O\n" +
" \\ \\_.-./=\\.-._ __\n" +
" `-._ `~` `-,./_<\n" +
" `\\' \\'\\`'----'\n" +
" * \\ . \\ *\n" +
" `-~~~\\ .\n" +
" . `-._`-._ *\n" +
" * `~~~-, \n" +
" () ) *\n" +
" <^^> * * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" .(, ).\n" +
" (* / !*)\\\n" +
" ___\\__/__/\n" +
" ' / _<( ^( * ) *\n" +
" / / (*, )/O\n" +
" ( \\ *_.)./=\\(. )_ __\n" +
" `-._ ` `-,./_<\n" +
" `POOF !(' )---'\n" +
" *(. ) . \\ *\n" +
" `-* (' ) .\n" +
" . `-(*, )._ *\n" +
" * `~~~-, \n" +
" () ) *\n" +
" <^^> * * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" .(, ).\n" +
" (* (, ) *) \n" +
" _\\__/__/\n" +
" ' /(,* )( * ) *\n" +
" ' (* / (*, )/ \n" +
" ( \\ *_.)./ (. )_ __\n" +
" `( *,) ` `-,./_<\n" +
" (. * `POOF !(' )---'\n" +
" *(. ) . ( * ) *\n" +
" (, *)* (' ) .\n" +
" . `-(*, )._ *\n" +
" * ( `)*~~-, \n" +
" () ) *\n" +
" <^^> * * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" .(, ).\n" +
" (* (, ) *) \n" +
" ( .* ) \n" +
" ' (,* )( * ) *\n" +
" ' (* (*, ) \n" +
" ( * .). (. ) \n" +
" `( *,) *` * (, )* \n" +
" (. * `POOF! (' )\n" +
" *(. ) . ( * ) *\n" +
" (, *)* (' ) .\n" +
" . . (*, )._ *\n" +
" * ( `)* ~-, \n" +
" () ) *\n" +
" <^^> * * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" \n" +
" \n" +
" ( .* ) \n" +
" ' (,* )( * ) *\n" +
" ' (* (*, ) \n" +
" ( * .). (. ) \n" +
" `( *,) *` * (, )* \n" +
" (. * `POOF! (' )\n" +
" *(. ) . ( * ) *\n" +
" (, *)* (' ) .\n" +
" . . (*, )._ *\n" +
" * ( `)* ~-, \n" +
" () ) *\n" +
" <^^> * * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" ' (* (*, ) \n" +
" ( * .). (. ) \n" +
" `( *,) *` * (, )* \n" +
" (. * `POOF! (' )\n" +
" *(. ) . ( * ) *\n" +
" (, *)* (' ) .\n" +
" . . (*, )._ *\n" +
" * ( `)* ~-, \n" +
" () ) *\n" +
" <^^> * * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" (. * `POOF! (' )\n" +
" *(. ) . ( * ) *\n" +
" (, *)* (' ) .\n" +
" . . (*, )._ *\n" +
" * ( `)* ~-, \n" +
" () ) *\n" +
" <^^> * * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" . . (*, ) *\n" +
" * ( `)* ~-, \n" +
" () ) *\n" +
" <^^> * * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" ()\n" +
" <^^> * * ( \n" +
" .-\"\"-. )* \n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" ()\n" +
" <^^>\n" +
" .-\"\"-.\n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" ()\n" +
" <^^>\n" +
" .-\"\"-.\n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" ()\n" +
" <^^>\n" +
" .-\"\"-.\n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" ",
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" \n" +
" ()\n" +
" <^^>\n" +
" .-\"\"-.\n" +
" .---. .\"-....-\"-._ _...---''`/\n" +
" ( (`\\ \\ .' ``-'' _.-\"'`\n" +
" \\ \\ \\ : :. .-'\n" +
" `\\`.\\: `:. _.'\n" +
" ( .'`.` _.'\n" +
" `` `-..______.-'\n" +
" ):. ( \n" +
" .\"-....-\".\n" +
" jgs .':. `. \n" +
" \"-..______..-\" "
]
# test/unit/samplot_test.py (from mchowdh200/samplot, MIT license)
import unittest
import sys
sys.path.append('../../src/')
import samplot
bam_1 = '../data/NA12878_restricted.bam'
bam_2 = '../data/NA12889_restricted.bam'
bam_3 = '../data/NA12890_restricted.bam'
bams=[bam_1, bam_2, bam_3]
sv_chrm = 'chr4'
sv_start = 115928730
sv_end = 115931875
sv_type = 'DEL'
#{{{ class Test_set_plot_dimensions(unittest.TestCase):
class Test_set_plot_dimensions(unittest.TestCase):
    #{{{ def test_set_plot_dimensions(self):
    def test_set_plot_dimensions(self):
        '''
        def set_plot_dimensions(sv,
                                sv_type,
                                arg_plot_height,
                                arg_plot_width,
                                bams,
                                annotation_files,
                                transcript_file,
                                arg_window,
                                zoom):
        '''
        plot_height = None
        plot_width = None
        annotation_files = None
        transcript_file = None
        zoom = None
        window = None

        sv = [samplot.genome_interval(sv_chrm,sv_start,sv_end)]

        # Test basic function where window is set to be proportional to SV size
        r_plot_height, r_plot_width, r_window, r_ranges = \
                samplot.set_plot_dimensions(sv,
                                            sv_type,
                                            plot_height,
                                            plot_width,
                                            bams,
                                            annotation_files,
                                            transcript_file,
                                            window,
                                            zoom)

        self.assertEqual(r_plot_height, 5)
        self.assertEqual(r_plot_width, 8)

        this_window = int((sv_end - sv_start)/2)
        self.assertEqual( r_window, this_window)

        self.assertEqual( r_ranges[0],
                          samplot.genome_interval(sv_chrm,
                                                  sv_start - this_window,
                                                  sv_end + this_window))

        # Test to see if zoom is ignored when it is larger than window
        zoom = 10000
        r_plot_height, r_plot_width, r_window, r_ranges = \
                samplot.set_plot_dimensions(sv,
                                            sv_type,
                                            plot_height,
                                            plot_width,
                                            bams,
                                            annotation_files,
                                            transcript_file,
                                            window,
                                            zoom)

        self.assertEqual( r_ranges[0],
                          samplot.genome_interval(sv_chrm,
                                                  sv_start - this_window,
                                                  sv_end + this_window))

        # Test to see if zoom creates two ranges
        zoom = 100
        r_plot_height, r_plot_width, r_window, r_ranges = \
                samplot.set_plot_dimensions(sv,
                                            sv_type,
                                            plot_height,
                                            plot_width,
                                            bams,
                                            annotation_files,
                                            transcript_file,
                                            window,
                                            zoom)

        self.assertEqual( r_window, zoom)
        self.assertEqual( len(r_ranges), 2)
        self.assertEqual( r_ranges[0],
                          samplot.genome_interval(sv_chrm,
                                                  sv_start - zoom,
                                                  sv_start + zoom,))
        self.assertEqual( r_ranges[1],
                          samplot.genome_interval(sv_chrm,
                                                  sv_end - zoom,
                                                  sv_end + zoom) )

        # Test with multiple sv regions
        window = None
        zoom = None
        sv = [samplot.genome_interval(sv_chrm,sv_start,sv_start),
              samplot.genome_interval(sv_chrm,sv_end,sv_end)]
        r_plot_height, r_plot_width, r_window, r_ranges = \
                samplot.set_plot_dimensions(sv,
                                            sv_type,
                                            plot_height,
                                            plot_width,
                                            bams,
                                            annotation_files,
                                            transcript_file,
                                            window,
                                            zoom)

        self.assertEqual( len(r_ranges), 2)
        self.assertEqual( r_ranges[0],
                          samplot.genome_interval(sv_chrm,
                                                  sv_start-1000,
                                                  sv_start+1000) )
        self.assertEqual( r_ranges[1],
                          samplot.genome_interval(sv_chrm,
                                                  sv_end-1000,
                                                  sv_end+1000) )
    #}}}

    #{{{def test_get_read_data(self):
    def test_get_read_data(self):
        '''
        read_data,max_coverage = get_read_data(ranges,
                                               options.bams,
                                               options.reference,
                                               options.min_mqual,
                                               options.coverage_only,
                                               options.long_read,
                                               options.same_yaxis_scales,
                                               options.max_depth,
                                               options.z)
        '''
        plot_height = None
        plot_width = None
        annotation_files = None
        transcript_file = None
        zoom = None
        window = None

        sv = [samplot.genome_interval(sv_chrm,sv_start,sv_end)]

        # Test basic function where window is set to be proportional to SV size
        r_plot_height, r_plot_width, r_window, r_ranges = \
                samplot.set_plot_dimensions(sv,
                                            sv_type,
                                            plot_height,
                                            plot_width,
                                            bams,
                                            annotation_files,
                                            transcript_file,
                                            window,
                                            zoom)

        reference = None
        min_mqual = None
        coverage_only = None
        long_read = 1000
        long_even_size = 100
        same_yaxis_scales = None
        max_depth = 100
        z = 4

        read_data,max_coverage = samplot.get_read_data(r_ranges,
                                                       bams,
                                                       reference,
                                                       min_mqual,
                                                       coverage_only,
                                                       long_read,
                                                       long_even_size,
                                                       same_yaxis_scales,
                                                       max_depth,
                                                       z)
    #}}}
#}}}
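# The range planning pinned down by the tests above can be sketched without
# samplot itself. This is a hypothetical standalone illustration (plan_ranges
# is not a samplot function): by default the window is half the SV span, a
# zoom larger than that window is ignored, and a smaller zoom splits the plot
# into one range per breakpoint.

```python
def plan_ranges(chrm, start, end, zoom=None):
    """Sketch of samplot-style window/zoom planning (illustrative only)."""
    window = int((end - start) / 2)
    if zoom is not None and zoom < window:
        # Zoom in on each breakpoint separately: two ranges.
        return zoom, [(chrm, start - zoom, start + zoom),
                      (chrm, end - zoom, end + zoom)]
    # One range covering the whole SV plus the window on each side.
    return window, [(chrm, start - window, end + window)]
```

# With the fixture SV (chr4:115928730-115931875), zoom=10000 is ignored while
# zoom=100 yields two breakpoint-centered ranges, mirroring the assertions above.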
#{{{ class Test_genome_interval(unittest.TestCase):
class Test_genome_interval(unittest.TestCase):
    #{{{ def test_init(self):
    def test_init(self):
        gi = samplot.genome_interval('chr1', 1, 1000)
        self.assertEqual(gi.chrm, 'chr1')
        self.assertEqual(gi.start, 1)
        self.assertEqual(gi.end, 1000)
    #}}}

    #{{{ def test_intersect(self):
    def test_intersect(self):
        gi = samplot.genome_interval('chr8', 500, 1000)
        self.assertEqual(-1,
                         gi.intersect(samplot.genome_interval('chr7',
                                                              500,
                                                              1000)))
        self.assertEqual(1,
                         gi.intersect(samplot.genome_interval('chr9',
                                                              500,
                                                              1000)))
        self.assertEqual(-1,
                         gi.intersect(samplot.genome_interval('chr8',
                                                              100,
                                                              499)))
        self.assertEqual(1,
                         gi.intersect(samplot.genome_interval('chr8',
                                                              1001,
                                                              2000)))
        self.assertEqual(0,
                         gi.intersect(samplot.genome_interval('chr8',
                                                              1,
                                                              500)))
        self.assertEqual(0,
                         gi.intersect(samplot.genome_interval('chr8',
                                                              500,
                                                              501)))
        self.assertEqual(0,
                         gi.intersect(samplot.genome_interval('chr8',
                                                              1000,
                                                              2000)))
    #}}}

    #{{{ def test_get_range_hit(self):
    def test_get_range_hit(self):
        gi_0 = samplot.genome_interval('chr8', 500, 1000)
        ranges = [gi_0]
        self.assertEqual(0, samplot.get_range_hit(ranges, 'chr8', 500))

        gi_1 = samplot.genome_interval('chr8', 2000, 3000)
        ranges = [gi_0, gi_1]
        self.assertEqual(0, samplot.get_range_hit(ranges, 'chr8', 500))
        self.assertEqual(1, samplot.get_range_hit(ranges, 'chr8', 2500))
        self.assertEqual(None, samplot.get_range_hit(ranges, 'chr7', 2500))
        self.assertEqual(None, samplot.get_range_hit(ranges, 'chr8', 100))
        self.assertEqual(None, samplot.get_range_hit(ranges, 'chr8', 10000))
    #}}}

    #{{{ def test_map_genome_point_to_range_points(self):
    def test_map_genome_point_to_range_points(self):
        gi_0 = samplot.genome_interval('chr8', 100, 200)
        ranges = [gi_0]
        self.assertEqual(None,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr8',
                                                                  10))
        self.assertEqual(0.0,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr8',
                                                                  100))
        self.assertEqual(0.25,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr8',
                                                                  125))
        self.assertEqual(0.5,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr8',
                                                                  150))
        self.assertEqual(0.75,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr8',
                                                                  175))
        self.assertEqual(1.0,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr8',
                                                                  200))
        self.assertEqual(None,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr8',
                                                                  201))

        gi_1 = samplot.genome_interval('chr8', 300, 400)
        ranges = [gi_0, gi_1]
        self.assertEqual(None,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr8',
                                                                  10))
        self.assertEqual(0.0,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr8',
                                                                  100))
        self.assertEqual(0.25/2,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr8',
                                                                  125))
        self.assertEqual(0.5/2,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr8',
                                                                  150))
        self.assertEqual(0.75/2,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr8',
                                                                  175))
        self.assertEqual(1.0/2,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr8',
                                                                  200))
        self.assertEqual(None,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr8',
                                                                  201))
        self.assertEqual(0.5,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr8',
                                                                  300))
        self.assertEqual(0.5+0.25/2,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr8',
                                                                  325))
        self.assertEqual(0.5+0.5/2,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr8',
                                                                  350))
        self.assertEqual(0.5+0.75/2,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr8',
                                                                  375))
        self.assertEqual(1.0,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr8',
                                                                  400))

        gi_0 = samplot.genome_interval('chr8', 100, 200)
        gi_1 = samplot.genome_interval('chr9', 300, 400)
        ranges = [gi_0, gi_1]
        self.assertEqual(None,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr8',
                                                                  10))
        self.assertEqual(0.0,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr8',
                                                                  100))
        self.assertEqual(0.25/2,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr8',
                                                                  125))
        self.assertEqual(0.5/2,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr8',
                                                                  150))
        self.assertEqual(0.75/2,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr8',
                                                                  175))
        self.assertEqual(1.0/2,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr8',
                                                                  200))
        self.assertEqual(None,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr8',
                                                                  201))
        self.assertEqual(0.5,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr9',
                                                                  300))
        self.assertEqual(0.5+0.25/2,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr9',
                                                                  325))
        self.assertEqual(0.5+0.5/2,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr9',
                                                                  350))
        self.assertEqual(0.5+0.75/2,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr9',
                                                                  375))
        self.assertEqual(1.0,
                         samplot.map_genome_point_to_range_points(ranges,
                                                                  'chr9',
                                                                  400))
    #}}}
#}}}
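# The coordinate mapping exercised by test_map_genome_point_to_range_points
# can be sketched in a few lines. This is an assumed standalone illustration
# (map_point is hypothetical, not samplot's implementation): each range gets
# an equal share of the [0, 1] axis, and a point maps to its fractional
# position within its range's share.

```python
def map_point(ranges, chrm, point):
    """Map a genome point to [0, 1] across equal-width ranges (sketch).

    ranges: list of (chrm, start, end) tuples.
    Returns None when the point falls outside every range.
    """
    n = len(ranges)
    for i, (c, start, end) in enumerate(ranges):
        if c == chrm and start <= point <= end:
            frac = (point - start) / (end - start)
            return (i + frac) / n
    return None
```

# With ranges [('chr8', 100, 200), ('chr9', 300, 400)], chr9:325 lands a
# quarter of the way into the second half of the axis, matching the
# 0.5 + 0.25/2 expectation in the tests above.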
#{{{ class Test_long_read_plan(unittest.TestCase):
class Test_long_read_plan(unittest.TestCase):
#{{{ def test_init(self):
def test_add_align_step(self):
alignment = samplot.Alignment('chr8', 100, 500, True, 0)
# both are in the same range
gi_0 = samplot.genome_interval('chr8', 100, 1000)
ranges = [gi_0]
steps = []
samplot.add_align_step(alignment, steps, ranges)
self.assertEqual(1, len(steps))
self.assertEqual('chr8', steps[0].start_pos.chrm)
self.assertEqual(100, steps[0].start_pos.start)
self.assertEqual(100, steps[0].start_pos.end)
self.assertEqual('chr8', steps[0].end_pos.chrm)
self.assertEqual(500, steps[0].end_pos.start)
self.assertEqual(500, steps[0].end_pos.end)
self.assertEqual('Align', steps[0].info['TYPE'])
# in different ranges
gi_0 = samplot.genome_interval('chr8', 100, 200)
gi_1 = samplot.genome_interval('chr8', 300, 1000)
ranges = [gi_0, gi_1]
steps = []
samplot.add_align_step(alignment, steps, ranges)
self.assertEqual(2, len(steps))
#start
self.assertEqual('chr8', steps[0].start_pos.chrm)
self.assertEqual(100, steps[0].start_pos.start)
self.assertEqual(100, steps[0].start_pos.end)
#end
self.assertEqual('chr8', steps[0].end_pos.chrm)
self.assertEqual(200, steps[0].end_pos.start)
self.assertEqual(200, steps[0].end_pos.end)
#event
self.assertEqual('Align', steps[0].info['TYPE'])
#start
self.assertEqual('chr8', steps[1].start_pos.chrm)
self.assertEqual(300, steps[1].start_pos.start)
self.assertEqual(300, steps[1].start_pos.end)
#end
self.assertEqual('chr8', steps[1].end_pos.chrm)
self.assertEqual(500, steps[1].end_pos.start)
self.assertEqual(500, steps[1].end_pos.end)
#event
self.assertEqual('Align', steps[1].info['TYPE'])
# start is not in range, use end hit
gi_0 = samplot.genome_interval('chr8', 10, 20)
gi_1 = samplot.genome_interval('chr8', 300, 1000)
ranges = [gi_0, gi_1]
steps = []
samplot.add_align_step(alignment, steps, ranges)
self.assertEqual(1, len(steps))
#start
self.assertEqual('chr8', steps[0].start_pos.chrm)
self.assertEqual(300, steps[0].start_pos.start)
self.assertEqual(300, steps[0].start_pos.end)
#end
self.assertEqual('chr8', steps[0].end_pos.chrm)
self.assertEqual(500, steps[0].end_pos.start)
self.assertEqual(500, steps[0].end_pos.end)
#event
self.assertEqual('Align', steps[0].info['TYPE'])
# end is not in range, use start hit
gi_0 = samplot.genome_interval('chr8', 100, 200)
gi_1 = samplot.genome_interval('chr8', 3000, 4000)
ranges = [gi_0, gi_1]
steps = []
samplot.add_align_step(alignment, steps, ranges)
#start
self.assertEqual(1, len(steps))
self.assertEqual('chr8', steps[0].start_pos.chrm)
self.assertEqual(100, steps[0].start_pos.start)
self.assertEqual(100, steps[0].start_pos.end)
#end
self.assertEqual('chr8', steps[0].end_pos.chrm)
self.assertEqual(200, steps[0].end_pos.end)
self.assertEqual(200, steps[0].end_pos.start)
#event
self.assertEqual('Align', steps[0].info['TYPE'])
# neither end is in range, add nothing
gi_0 = samplot.genome_interval('chr8', 10, 20)
gi_1 = samplot.genome_interval('chr8', 3000, 4000)
ranges = [gi_0, gi_1]
steps = []
samplot.add_align_step(alignment, steps, ranges)
self.assertEqual(0, len(steps))
#}}}
#{{{def test_get_alignments_from_cigar(self):
def test_get_alignments_from_cigar(self):
'''
alignments = get_alignments_from_cigar(
bam_file.get_reference_name(read.reference_id),
read.pos,
not read.is_reverse,
read.cigartuples)
'''
CIGAR_MAP = { 'M' : 0,
'I' : 1,
'D' : 2,
'N' : 3,
'S' : 4,
'H' : 5,
'P' : 6,
'=' : 7,
'X' : 8,
'B' : 9 }
cigar = [(CIGAR_MAP['M'], 100),
(CIGAR_MAP['D'], 100),
(CIGAR_MAP['M'], 100)]
alignments = samplot.get_alignments_from_cigar('chr8',
100,
True,
cigar)
self.assertEqual(2,len(alignments))
self.assertEqual('chr8', alignments[0].pos.chrm)
self.assertEqual(100, alignments[0].pos.start)
self.assertEqual(200, alignments[0].pos.end)
self.assertEqual(True, alignments[0].strand)
self.assertEqual(0, alignments[0].query_position)
self.assertEqual('chr8', alignments[1].pos.chrm)
self.assertEqual(300, alignments[1].pos.start)
self.assertEqual(400, alignments[1].pos.end)
self.assertEqual(True, alignments[1].strand)
self.assertEqual(100, alignments[1].query_position)
#}}}
#{{{def test_get_long_read_plan(self):
def test_get_long_read_plan(self):
gi_0 = samplot.genome_interval('chr8', 100, 250)
gi_1 = samplot.genome_interval('chr8', 300, 400)
ranges = [gi_0, gi_1]
long_reads = {}
read_name = 'Test'
alignments = [samplot.Alignment('chr8', 100, 200, True, 0)]
long_reads[read_name] = [ samplot.LongRead(alignments) ]
max_gap, steps = samplot.get_long_read_plan(read_name,
long_reads,
ranges)
self.assertEqual(0, max_gap)
self.assertEqual(1, len(steps))
self.assertEqual('chr8', steps[0].start_pos.chrm)
self.assertEqual(100, steps[0].start_pos.start)
self.assertEqual(100, steps[0].start_pos.end)
self.assertEqual('chr8', steps[0].end_pos.chrm)
self.assertEqual(200, steps[0].end_pos.start)
self.assertEqual(200, steps[0].end_pos.end)
self.assertEqual('LONGREAD', steps[0].event)
self.assertEqual('Align', steps[0].info['TYPE'])
alignments = [samplot.Alignment('chr8', 100, 299, True, 0)]
long_reads[read_name] = [ samplot.LongRead(alignments) ]
max_gap, steps = samplot.get_long_read_plan(read_name,
long_reads,
ranges)
self.assertEqual(0, max_gap)
self.assertEqual(1, len(steps))
self.assertEqual('chr8', steps[0].start_pos.chrm)
self.assertEqual(100, steps[0].start_pos.start)
self.assertEqual(100, steps[0].start_pos.end)
self.assertEqual('chr8', steps[0].end_pos.chrm)
self.assertEqual(250, steps[0].end_pos.start)
self.assertEqual(250, steps[0].end_pos.end)
self.assertEqual('Align', steps[0].info['TYPE'])
alignments = [samplot.Alignment('chr8', 100, 350, True, 0)]
long_reads[read_name] = [ samplot.LongRead(alignments) ]
max_gap, steps = samplot.get_long_read_plan(read_name,
long_reads,
ranges)
self.assertEqual(0, max_gap)
self.assertEqual(2, len(steps))
self.assertEqual('chr8', steps[0].start_pos.chrm)
self.assertEqual(100, steps[0].start_pos.start)
self.assertEqual(100, steps[0].start_pos.end)
self.assertEqual('chr8', steps[0].end_pos.chrm)
self.assertEqual(250, steps[0].end_pos.start)
self.assertEqual(250, steps[0].end_pos.end)
self.assertEqual('Align', steps[0].info['TYPE'])
self.assertEqual('chr8', steps[1].start_pos.chrm)
self.assertEqual(300, steps[1].start_pos.start)
self.assertEqual(300, steps[1].start_pos.end)
self.assertEqual('chr8', steps[1].end_pos.chrm)
self.assertEqual(350, steps[1].end_pos.start)
self.assertEqual(350, steps[1].end_pos.end)
self.assertEqual('Align', steps[1].info['TYPE'])
alignments = [samplot.Alignment('chr8', 100, 250, True, 0),
samplot.Alignment('chr8', 300, 350, True, 150)]
long_reads[read_name] = [ samplot.LongRead(alignments) ]
max_gap, steps = samplot.get_long_read_plan(read_name,
long_reads,
ranges)
self.assertEqual(50, max_gap)
self.assertEqual(3, len(steps))
self.assertEqual('chr8', steps[0].start_pos.chrm)
self.assertEqual(100, steps[0].start_pos.start)
self.assertEqual(100, steps[0].start_pos.end)
self.assertEqual('chr8', steps[0].end_pos.chrm)
self.assertEqual(250, steps[0].end_pos.start)
self.assertEqual(250, steps[0].end_pos.end)
self.assertEqual('Align', steps[0].info['TYPE'])
self.assertEqual('chr8', steps[1].start_pos.chrm)
self.assertEqual(250, steps[1].start_pos.start)
self.assertEqual(250, steps[1].start_pos.end)
self.assertEqual('chr8', steps[1].end_pos.chrm)
self.assertEqual(300, steps[1].end_pos.start)
self.assertEqual(300, steps[1].end_pos.end)
self.assertEqual('Deletion', steps[1].info['TYPE'])
self.assertEqual('chr8', steps[2].start_pos.chrm)
self.assertEqual(300, steps[2].start_pos.start)
self.assertEqual(300, steps[2].start_pos.end)
self.assertEqual('chr8', steps[2].end_pos.chrm)
self.assertEqual(350, steps[2].end_pos.start)
self.assertEqual(350, steps[2].end_pos.end)
self.assertEqual('Align', steps[2].info['TYPE'])
gi_0 = samplot.genome_interval('chr8', 100, 250)
gi_1 = samplot.genome_interval('chr9', 300, 400)
ranges = [gi_0, gi_1]
alignments = [samplot.Alignment('chr8', 100, 250, True, 0),
samplot.Alignment('chr9', 300, 350, True, 150)]
long_reads[read_name] = [ samplot.LongRead(alignments) ]
max_gap, steps = samplot.get_long_read_plan(read_name,
long_reads,
ranges)
self.assertEqual(5000, max_gap)
self.assertEqual(3, len(steps))
self.assertEqual('chr8', steps[0].start_pos.chrm)
self.assertEqual(100, steps[0].start_pos.start)
self.assertEqual(100, steps[0].start_pos.end)
self.assertEqual('chr8', steps[0].end_pos.chrm)
self.assertEqual(250, steps[0].end_pos.start)
self.assertEqual(250, steps[0].end_pos.end)
self.assertEqual('Align', steps[0].info['TYPE'])
self.assertEqual('chr8', steps[1].start_pos.chrm)
self.assertEqual(250, steps[1].start_pos.start)
self.assertEqual(250, steps[1].start_pos.end)
self.assertEqual('chr9', steps[1].end_pos.chrm)
self.assertEqual(300, steps[1].end_pos.start)
self.assertEqual(300, steps[1].end_pos.end)
self.assertEqual('InterChrm', steps[1].info['TYPE'])
self.assertEqual('chr9', steps[2].start_pos.chrm)
self.assertEqual(300, steps[2].start_pos.start)
self.assertEqual(300, steps[2].start_pos.end)
self.assertEqual('chr9', steps[2].end_pos.chrm)
self.assertEqual(350, steps[2].end_pos.start)
self.assertEqual(350, steps[2].end_pos.end)
self.assertEqual('Align', steps[2].info['TYPE'])
#}}}
#}}}
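For orientation, the Align/Deletion/Align structure asserted above can be sketched in plain Python. This is a hypothetical illustration of the coordinate arithmetic only, not samplot's actual implementation:

```python
# Hypothetical sketch (assumed, not samplot's API): two alignments of one long
# read, separated by a reference gap, yield three plan steps whose boundaries
# share coordinates: Align, Deletion, Align.
alignments = [(100, 250), (300, 350)]  # (start, end) on the same chromosome

steps = [
    ('Align', alignments[0][0], alignments[0][1]),
    ('Deletion', alignments[0][1], alignments[1][0]),
    ('Align', alignments[1][0], alignments[1][1]),
]
# The largest inter-alignment gap (here 300 - 250 = 50) drives the plot height.
max_gap = alignments[1][0] - alignments[0][1]

print(max_gap, [s[0] for s in steps])
```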
#{{{class Test_annotation_plan(unittest.TestCase):
class Test_annotation_plan(unittest.TestCase):
#{{{def test_get_interval_range_plan_start_end(self):
def test_get_interval_range_plan_start_end(self):
gi_1 = samplot.genome_interval('chr8', 100, 200)
gi_2 = samplot.genome_interval('chr8', 300, 400)
ranges = [gi_1, gi_2]
i = samplot.genome_interval('chr8', 110, 120)
s, e = samplot.get_interval_range_plan_start_end(ranges, i)
self.assertEqual('chr8',s.chrm)
self.assertEqual(110,s.start)
self.assertEqual(110,s.end)
self.assertEqual('chr8',e.chrm)
self.assertEqual(120,e.start)
self.assertEqual(120,e.end)
i = samplot.genome_interval('chr8', 110, 220)
s, e = samplot.get_interval_range_plan_start_end(ranges, i)
self.assertEqual('chr8',s.chrm)
self.assertEqual(110,s.start)
self.assertEqual(110,s.end)
self.assertEqual('chr8',e.chrm)
self.assertEqual(200,e.start)
self.assertEqual(200,e.end)
i = samplot.genome_interval('chr8', 220, 320)
s, e = samplot.get_interval_range_plan_start_end(ranges, i)
self.assertEqual('chr8',s.chrm)
self.assertEqual(300,s.start)
self.assertEqual(300,s.end)
self.assertEqual('chr8',e.chrm)
self.assertEqual(320,e.start)
self.assertEqual(320,e.end)
i = samplot.genome_interval('chr8', 120, 320)
s, e = samplot.get_interval_range_plan_start_end(ranges, i)
self.assertEqual('chr8',s.chrm)
self.assertEqual(120,s.start)
self.assertEqual(120,s.end)
self.assertEqual('chr8',e.chrm)
self.assertEqual(320,e.start)
self.assertEqual(320,e.end)
i = samplot.genome_interval('chr8', 320, 520)
s, e = samplot.get_interval_range_plan_start_end(ranges, i)
self.assertEqual('chr8',s.chrm)
self.assertEqual(320,s.start)
self.assertEqual(320,s.end)
self.assertEqual('chr8',e.chrm)
self.assertEqual(400,e.start)
self.assertEqual(400,e.end)
i = samplot.genome_interval('chr8', 30, 50)
s, e = samplot.get_interval_range_plan_start_end(ranges, i)
self.assertEqual(None, s)
self.assertEqual(None, e)
i = samplot.genome_interval('chr8', 3000, 5000)
s, e = samplot.get_interval_range_plan_start_end(ranges, i)
self.assertEqual(None, s)
self.assertEqual(None, e)
#}}}
#}}}
#{{{class Test_splits(unittest.TestCase):
class Test_splits(unittest.TestCase):
#{{{def test_get_split_plan(self):
def test_get_split_plan(self):
splits = {}
hp = 0
splits[hp] = {}
read_name_1 = 'Test1'
ranges = [samplot.genome_interval('chr8', 100, 200),
samplot.genome_interval('chr8', 600, 800) ]
#both in same range
#Deletion
splits[hp][read_name_1] = [\
samplot.SplitRead('chr8', 100, 150, True, 0, False, False),
samplot.SplitRead('chr8', 170, 180, True, 50, False, False)]
plan = samplot.get_split_plan(ranges, splits[hp][read_name_1])
max_gap, steps = plan
self.assertEqual(20, max_gap)
self.assertEqual(1, len(steps))
self.assertEqual('SPLITREAD', steps[0].event)
self.assertEqual('Deletion', steps[0].info['TYPE'])
self.assertEqual('chr8', steps[0].start_pos.chrm)
self.assertEqual(150, steps[0].start_pos.start)
self.assertEqual(150, steps[0].start_pos.end)
self.assertEqual('chr8', steps[0].end_pos.chrm)
self.assertEqual(170, steps[0].end_pos.start)
self.assertEqual(170, steps[0].end_pos.end)
#Duplication
splits[hp][read_name_1] = [\
samplot.SplitRead('chr8', 100, 150, True, 0, False, False),
samplot.SplitRead('chr8', 130, 180, True, 50, False, False)]
plan = samplot.get_split_plan(ranges, splits[hp][read_name_1])
max_gap, steps = plan
self.assertEqual(20, max_gap)
self.assertEqual(1, len(steps))
self.assertEqual('SPLITREAD', steps[0].event)
self.assertEqual('Duplication', steps[0].info['TYPE'])
self.assertEqual('chr8', steps[0].start_pos.chrm)
self.assertEqual(150, steps[0].start_pos.start)
self.assertEqual(150, steps[0].start_pos.end)
self.assertEqual('chr8', steps[0].end_pos.chrm)
self.assertEqual(130, steps[0].end_pos.start)
self.assertEqual(130, steps[0].end_pos.end)
#Inversion
splits[hp][read_name_1] = [\
samplot.SplitRead('chr8', 100, 150, True, 0, False, False),
samplot.SplitRead('chr8', 151, 180, False, 50, False, False)]
plan = samplot.get_split_plan(ranges, splits[hp][read_name_1])
max_gap, steps = plan
self.assertEqual(30, max_gap)
self.assertEqual(1, len(steps))
self.assertEqual('SPLITREAD', steps[0].event)
self.assertEqual('Inversion', steps[0].info['TYPE'])
self.assertEqual('chr8', steps[0].start_pos.chrm)
self.assertEqual(150, steps[0].start_pos.start)
self.assertEqual(150, steps[0].start_pos.end)
self.assertEqual('chr8', steps[0].end_pos.chrm)
self.assertEqual(151, steps[0].end_pos.start)
self.assertEqual(151, steps[0].end_pos.end)
#only one split, so no plan
splits[hp][read_name_1] = [\
samplot.SplitRead('chr8', 100, 150, True, 0, False, False)]
plan = samplot.get_split_plan(ranges, splits[hp][read_name_1])
self.assertEqual(None, plan)
#both in same range
splits[hp][read_name_1] = [\
samplot.SplitRead('chr8', 550, 650, True, 0, False, False),
samplot.SplitRead('chr8', 700, 750, True, 50, False, False)]
plan = samplot.get_split_plan(ranges, splits[hp][read_name_1])
max_gap, steps = plan
self.assertEqual(50, max_gap)
self.assertEqual(1, len(steps))
self.assertEqual('SPLITREAD', steps[0].event)
self.assertEqual('Deletion', steps[0].info['TYPE'])
self.assertEqual('chr8', steps[0].start_pos.chrm)
self.assertEqual(650, steps[0].start_pos.start)
self.assertEqual(650, steps[0].start_pos.end)
self.assertEqual('chr8', steps[0].end_pos.chrm)
self.assertEqual(700, steps[0].end_pos.start)
self.assertEqual(700, steps[0].end_pos.end)
#one split in each range
splits[hp][read_name_1] = [\
samplot.SplitRead('chr8', 150, 175, True, 0, False, False),
samplot.SplitRead('chr8', 650, 675, True, 50, False, False)]
plan = samplot.get_split_plan(ranges, splits[hp][read_name_1])
max_gap, steps = plan
self.assertEqual(475, max_gap)
self.assertEqual(1, len(steps))
self.assertEqual('SPLITREAD', steps[0].event)
self.assertEqual('Deletion', steps[0].info['TYPE'])
self.assertEqual('chr8', steps[0].start_pos.chrm)
self.assertEqual(175, steps[0].start_pos.start)
self.assertEqual(175, steps[0].start_pos.end)
self.assertEqual('chr8', steps[0].end_pos.chrm)
self.assertEqual(650, steps[0].end_pos.start)
self.assertEqual(650, steps[0].end_pos.end)
#inter chrom
ranges = [samplot.genome_interval('chr8', 100, 200),
samplot.genome_interval('chr9', 600, 800) ]
splits[hp][read_name_1] = [\
samplot.SplitRead('chr8', 150, 175, True, 0, False, False),
samplot.SplitRead('chr9', 650, 675, True, 50, False, False)]
plan = samplot.get_split_plan(ranges, splits[hp][read_name_1])
max_gap, steps = plan
self.assertEqual(samplot.INTERCHROM_YAXIS, max_gap)
self.assertEqual(1, len(steps))
self.assertEqual('SPLITREAD', steps[0].event)
self.assertEqual('InterChrm', steps[0].info['TYPE'])
self.assertEqual('chr8', steps[0].start_pos.chrm)
self.assertEqual(175, steps[0].start_pos.start)
self.assertEqual(175, steps[0].start_pos.end)
self.assertEqual('chr9', steps[0].end_pos.chrm)
self.assertEqual(650, steps[0].end_pos.start)
self.assertEqual(650, steps[0].end_pos.end)
splits[hp][read_name_1] = [\
samplot.SplitRead('chr8', 150, 175, True, 0, False, False),
samplot.SplitRead('chr9', 650, 675, False, 50, False, False)]
plan = samplot.get_split_plan(ranges, splits[hp][read_name_1])
max_gap, steps = plan
self.assertEqual(samplot.INTERCHROM_YAXIS, max_gap)
self.assertEqual(1, len(steps))
self.assertEqual('SPLITREAD', steps[0].event)
self.assertEqual('InterChrmInversion', steps[0].info['TYPE'])
self.assertEqual('chr8', steps[0].start_pos.chrm)
self.assertEqual(175, steps[0].start_pos.start)
self.assertEqual(175, steps[0].start_pos.end)
self.assertEqual('chr9', steps[0].end_pos.chrm)
self.assertEqual(650, steps[0].end_pos.start)
self.assertEqual(650, steps[0].end_pos.end)
#}}}
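The event types asserted above (Deletion, Duplication, Inversion) follow from the orientation and order of the two split alignments. A minimal sketch of that classification, assumed for illustration and not samplot's actual implementation (which also handles inter-chromosomal cases):

```python
# Hypothetical sketch (assumed, not samplot's code) of split-read event typing.
def classify_split(first, second):
    """first/second: (start, end, is_forward), with `first` leftmost in the read."""
    if first[2] != second[2]:
        return 'Inversion'    # opposite strands
    if second[0] < first[1]:
        return 'Duplication'  # second alignment starts before the first ends
    return 'Deletion'         # a reference gap separates the alignments

print(classify_split((100, 150, True), (170, 180, True)))   # matches the cases above
print(classify_split((100, 150, True), (130, 180, True)))
print(classify_split((100, 150, True), (151, 180, False)))
```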
#{{{def test_get_splits_plan(self):
def test_get_splits_plan(self):
splits = {}
hp = 0
splits[hp] = {}
ranges = [samplot.genome_interval('chr8', 100, 200),
samplot.genome_interval('chr9', 600, 800) ]
#Deletion
splits[hp]['del'] = [\
samplot.SplitRead('chr8', 100, 150, True, 0, False, False),
samplot.SplitRead('chr8', 170, 180, True, 50, False, False)]
#Duplication
splits[hp]['dup'] = [\
samplot.SplitRead('chr8', 100, 150, True, 0, False, False),
samplot.SplitRead('chr8', 130, 180, True, 50, False, False)]
#Inversion
splits[hp]['inv'] = [\
samplot.SplitRead('chr8', 100, 150, True, 0, False, False),
samplot.SplitRead('chr8', 151, 180, False, 50, False, False)]
#Bad split
splits[hp]['bad'] = [\
samplot.SplitRead('chr8', 100, 150, True, 0, False, False)]
#InterChrm
splits[hp]['interchm'] = [\
samplot.SplitRead('chr8', 150, 175, True, 0, False, False),
samplot.SplitRead('chr9', 650, 675, True, 50, False, False)]
#InterChrmInversion
splits[hp]['interchminv'] = [\
samplot.SplitRead('chr8', 150, 175, True, 0, False, False),
samplot.SplitRead('chr9', 650, 675, False, 50, False, False)]
plan = samplot.get_splits_plan(ranges, splits[hp])
max_gap, steps = plan
self.assertEqual(samplot.INTERCHROM_YAXIS, max_gap)
self.assertEqual(5, len(steps))
#}}}
#{{{def test_get_split_insert_size(self):
def test_get_split_insert_size(self):
splits = {}
hp = 0
splits[hp] = {}
read_name_1 = 'Test1'
#both in same range
splits[hp][read_name_1] = [\
samplot.SplitRead('chr8', 100, 150, True, 0, False, False),
samplot.SplitRead('chr8', 170, 180, True, 50, False, False)]
read_name_2 = 'Test2'
splits[hp][read_name_2] = [\
samplot.SplitRead('chr8', 100, 150, True, 0, False, False),
samplot.SplitRead('chr8', 170, 180, True, 50, False, False)]
ranges = [samplot.genome_interval('chr8', 100, 200),
samplot.genome_interval('chr8', 600, 800) ]
split_insert_sizes = samplot.get_splits_insert_sizes(ranges, splits)
self.assertEqual(2, len(split_insert_sizes))
self.assertEqual(20, split_insert_sizes[0])
self.assertEqual(20, split_insert_sizes[1])
#one starts in range, ends out of range
splits[hp][read_name_1] = [\
samplot.SplitRead('chr8', 100, 350, True, 0, False, False),
samplot.SplitRead('chr8', 650, 700, True, 250, False, False)]
split_insert_sizes = samplot.get_splits_insert_sizes(ranges, splits)
self.assertEqual(2, len(split_insert_sizes))
self.assertEqual(300, split_insert_sizes[0])
self.assertEqual(20, split_insert_sizes[1])
#one out of range
splits[hp][read_name_1] = [\
samplot.SplitRead('chr8', 10, 35, True, 0, False, False),
samplot.SplitRead('chr8', 650, 700, True, 25, False, False)]
split_insert_sizes = samplot.get_splits_insert_sizes(ranges, splits)
self.assertEqual(1, len(split_insert_sizes))
self.assertEqual(20, split_insert_sizes[0])
#DUP
splits[hp][read_name_1] = [\
samplot.SplitRead('chr8', 125, 150, True, 0, False, False),
samplot.SplitRead('chr8', 130, 155, True, 25, False, False)]
split_insert_sizes = samplot.get_splits_insert_sizes(ranges, splits)
self.assertEqual(2, len(split_insert_sizes))
self.assertEqual(20, split_insert_sizes[0])
self.assertEqual(20, split_insert_sizes[1])
#INV
splits[hp][read_name_1] = [\
samplot.SplitRead('chr8', 125, 150, True, 0, False, False),
samplot.SplitRead('chr8', 151, 175, False, 25, False, False)]
split_insert_sizes = samplot.get_splits_insert_sizes(ranges, splits)
self.assertEqual(2, len(split_insert_sizes))
self.assertEqual(25, split_insert_sizes[0])
self.assertEqual(20, split_insert_sizes[1])
#interchrm
ranges = [samplot.genome_interval('chr8', 100, 200),
samplot.genome_interval('chr9', 600, 800) ]
splits[hp][read_name_1] = [\
samplot.SplitRead('chr8', 125, 150, True, 0, False, False),
samplot.SplitRead('chr9', 650, 675, True, 25, False, False)]
split_insert_sizes = samplot.get_splits_insert_sizes(ranges, splits)
self.assertEqual(2, len(split_insert_sizes))
self.assertEqual(samplot.INTERCHROM_YAXIS, split_insert_sizes[0])
self.assertEqual(20, split_insert_sizes[1])
#}}}
#}}}
#{{{ class Test_pairs(unittest.TestCase):
class Test_pairs(unittest.TestCase):
#{{{ def test_get_pair_insert_size(self):
def test_get_pair_insert_size(self):
ranges = [samplot.genome_interval('chr8', 100, 200),
samplot.genome_interval('chr8', 600, 800) ]
pairs = {}
hp = 0
pairs[hp] = {}
read_name_1 = 'Test1'
#both in same range
pairs[hp][read_name_1] = [\
samplot.PairedEnd('chr8', 100, 150, True, False, False),
samplot.PairedEnd('chr8', 170, 180, False, False, False)]
read_name_2 = 'Test2'
pairs[hp][read_name_2] = [\
samplot.PairedEnd('chr8', 100, 150, True, False, False),
samplot.PairedEnd('chr8', 170, 180, False, False, False)]
pair_insert_sizes = samplot.get_pairs_insert_sizes(ranges, pairs)
self.assertEqual(2, len(pair_insert_sizes))
self.assertEqual(80, pair_insert_sizes[0])
self.assertEqual(80, pair_insert_sizes[1])
#one starts in range, ends out of range
pairs[hp][read_name_1] = [\
samplot.PairedEnd('chr8', 100, 150, True, False, False),
samplot.PairedEnd('chr8', 190, 240, False, False, False)]
pair_insert_sizes = samplot.get_pairs_insert_sizes(ranges, pairs)
self.assertEqual(2, len(pair_insert_sizes))
self.assertEqual(140, pair_insert_sizes[0])
self.assertEqual(80, pair_insert_sizes[1])
#one out of range
pairs[hp][read_name_1] = [\
samplot.PairedEnd('chr9', 100, 150, True, False, False),
samplot.PairedEnd('chr8', 190, 240, False, False, False)]
pair_insert_sizes = samplot.get_pairs_insert_sizes(ranges, pairs)
self.assertEqual(1, len(pair_insert_sizes))
self.assertEqual(80, pair_insert_sizes[0])
#DUP
pairs[hp][read_name_1] = [\
samplot.PairedEnd('chr8', 125, 150, True, False, False),
samplot.PairedEnd('chr8', 175, 200, False, False, False)]
pair_insert_sizes = samplot.get_pairs_insert_sizes(ranges, pairs)
self.assertEqual(2, len(pair_insert_sizes))
self.assertEqual(75, pair_insert_sizes[0])
self.assertEqual(80, pair_insert_sizes[1])
#INV
pairs[hp][read_name_1] = [\
samplot.PairedEnd('chr8', 125, 150, True, False, False),
samplot.PairedEnd('chr8', 175, 200, True, False, False)]
pair_insert_sizes = samplot.get_pairs_insert_sizes(ranges, pairs)
self.assertEqual(2, len(pair_insert_sizes))
self.assertEqual(75, pair_insert_sizes[0])
self.assertEqual(80, pair_insert_sizes[1])
#interchrm
ranges = [samplot.genome_interval('chr8', 100, 200),
samplot.genome_interval('chr9', 600, 800) ]
pairs[hp][read_name_1] = [\
samplot.PairedEnd('chr8', 125, 150, True, False, False),
samplot.PairedEnd('chr9', 675, 700, True, False, False)]
pair_insert_sizes = samplot.get_pairs_insert_sizes(ranges, pairs)
self.assertEqual(2, len(pair_insert_sizes))
self.assertEqual(samplot.INTERCHROM_YAXIS, pair_insert_sizes[0])
self.assertEqual(80, pair_insert_sizes[1])
#}}}
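The insert-size values asserted above (80, 140, 75) are the span from the leftmost start to the rightmost end of the two mates. A sketch of that arithmetic, assumed for illustration rather than taken from samplot:

```python
# Hypothetical sketch (assumed, not samplot's code): a pair's plotted insert
# size spans from the leftmost mate start to the rightmost mate end.
def pair_insert_size(first, second):
    """first/second: (start, end) of the two mates on the same chromosome."""
    return max(first[1], second[1]) - min(first[0], second[0])

print(pair_insert_size((100, 150), (170, 180)))  # the same-range case above
print(pair_insert_size((100, 150), (190, 240)))  # the out-of-range case above
```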
#{{{ def test_get_pair_plan(self):
def test_get_pair_plan(self):
ranges = [samplot.genome_interval('chr8', 100, 200),
samplot.genome_interval('chr8', 600, 800) ]
pairs = {}
hp = 0
pairs[hp] = {}
read_name_1 = 'Test1'
#both in same range
pairs[hp][read_name_1] = [\
samplot.PairedEnd('chr8', 100, 150, False, False, False),
samplot.PairedEnd('chr8', 170, 180, True, False, False)]
read_name_2 = 'Test2'
pairs[hp][read_name_2] = [\
samplot.PairedEnd('chr8', 100, 150, False, False, False),
samplot.PairedEnd('chr8', 170, 180, True, False, False)]
max_event, steps = samplot.get_pairs_plan(ranges, pairs[hp])
self.assertEqual(80, max_event)
self.assertEqual(2, len(steps))
#}}}
#}}}
#{{{ class Test_linked(unittest.TestCase):
class Test_linked(unittest.TestCase):
#{{{def test_get_linked_plan(self):
def test_get_linked_plan(self):
ranges = [samplot.genome_interval('chr8', 100, 200),
samplot.genome_interval('chr8', 600, 800) ]
pairs = {}
hp = 0
pairs[hp] = {}
pairs[hp]['PE_1'] = [\
samplot.PairedEnd('chr8', 100, 150, False, False, False),
samplot.PairedEnd('chr8', 170, 180, True, False, False)]
pairs[hp]['PE_2'] = [\
samplot.PairedEnd('chr8', 110, 160, False, False, False),
samplot.PairedEnd('chr8', 680, 690, True, False, False)]
splits = {}
splits[hp] = {}
splits[hp]['SR_1'] = [\
samplot.SplitRead('chr8', 155, 160, True, 0, False, False),
samplot.SplitRead('chr8', 670, 675, True, 50, False, False)]
linked_reads = {}
linked_reads[hp] = {}
MI = 5
linked_reads[hp][MI] = [[],[]]
linked_reads[hp][MI][0].append('PE_1')
linked_reads[hp][MI][0].append('PE_2')
linked_reads[hp][MI][1].append('SR_1')
max_event, steps = samplot.get_linked_plan(ranges,
pairs[hp],
splits[hp],
linked_reads[hp],
MI)
self.assertEqual(580, max_event)
self.assertEqual(2, len(steps))
self.assertEqual(2, len(steps[0].info['PAIR_STEPS']))
self.assertEqual(1, len(steps[0].info['SPLIT_STEPS']))
self.assertEqual(100, steps[0].start_pos.start)
self.assertEqual(100, steps[0].start_pos.end)
self.assertEqual(ranges[0].end, steps[0].end_pos.start)
self.assertEqual(ranges[0].end, steps[0].end_pos.end)
self.assertEqual(ranges[1].start, steps[1].start_pos.start)
self.assertEqual(ranges[1].start, steps[1].start_pos.end)
self.assertEqual(690, steps[1].end_pos.start)
self.assertEqual(690, steps[1].end_pos.end)
self.assertEqual(100,steps[0].info['PAIR_STEPS'][0].start_pos.start)
self.assertEqual(100,steps[0].info['PAIR_STEPS'][0].start_pos.end)
self.assertEqual(180,steps[0].info['PAIR_STEPS'][0].end_pos.start)
self.assertEqual(180,steps[0].info['PAIR_STEPS'][0].end_pos.end)
self.assertEqual(110,steps[0].info['PAIR_STEPS'][1].start_pos.start)
self.assertEqual(110,steps[0].info['PAIR_STEPS'][1].start_pos.end)
self.assertEqual(690,steps[0].info['PAIR_STEPS'][1].end_pos.start)
self.assertEqual(690,steps[0].info['PAIR_STEPS'][1].end_pos.end)
self.assertEqual(160,steps[0].info['SPLIT_STEPS'][0].start_pos.start)
self.assertEqual(160,steps[0].info['SPLIT_STEPS'][0].start_pos.end)
self.assertEqual(670,steps[0].info['SPLIT_STEPS'][0].end_pos.start)
self.assertEqual(670,steps[0].info['SPLIT_STEPS'][0].end_pos.end)
#}}}
#}}}
if __name__ == '__main__':
unittest.main()
# ===========================================================================
# File: tf_quant_finance/experimental/instruments/overnight_index_linked_futures_test.py
# Repo: slowy07/tf-quant-finance (Apache-2.0)
# ===========================================================================
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for overnight_index_linked_futures.py."""
from absl.testing import parameterized
import numpy as np
import tensorflow.compat.v2 as tf
import tf_quant_finance as tff
from tensorflow.python.framework import test_util # pylint: disable=g-direct-tensorflow-import
dates = tff.datetime
instruments = tff.experimental.instruments
@test_util.run_all_in_graph_and_eager_modes
class OvernightIndexLinkedFuturesTest(tf.test.TestCase,
parameterized.TestCase):
@parameterized.named_parameters(
('DoublePrecision', np.float64),
)
def test_fut_compounded(self, dtype):
cal = dates.create_holiday_calendar(weekend_mask=dates.WeekendMask.NONE)
start_date = dates.convert_to_date_tensor([(2020, 5, 1)])
end_date = dates.convert_to_date_tensor([(2020, 5, 31)])
valuation_date = dates.convert_to_date_tensor([(2020, 2, 8)])
indexfuture = instruments.OvernightIndexLinkedFutures(
start_date,
end_date,
holiday_calendar=cal,
averaging_type=instruments.AverageType.COMPOUNDING,
dtype=dtype)
curve_dates = valuation_date + dates.months([1, 2, 6])
reference_curve = instruments.RateCurve(
curve_dates,
np.array([0.02, 0.025, 0.015], dtype=dtype),
valuation_date=valuation_date,
dtype=dtype)
market = tff.experimental.instruments.InterestRateMarket(
reference_curve=reference_curve, discount_curve=None)
price = self.evaluate(indexfuture.price(valuation_date, market))
np.testing.assert_allclose(price, 98.64101997, atol=1e-6)
@parameterized.named_parameters(
('DoublePrecision', np.float64),
)
def test_fut_averaged(self, dtype):
cal = dates.create_holiday_calendar(weekend_mask=dates.WeekendMask.NONE)
start_date = dates.convert_to_date_tensor([(2020, 5, 1)])
end_date = dates.convert_to_date_tensor([(2020, 5, 31)])
valuation_date = dates.convert_to_date_tensor([(2020, 2, 8)])
indexfuture = instruments.OvernightIndexLinkedFutures(
start_date,
end_date,
averaging_type=instruments.AverageType.ARITHMETIC_AVERAGE,
holiday_calendar=cal,
dtype=dtype)
curve_dates = valuation_date + dates.months([1, 2, 6])
reference_curve = instruments.RateCurve(
curve_dates,
np.array([0.02, 0.025, 0.015], dtype=dtype),
valuation_date=valuation_date,
dtype=dtype)
market = tff.experimental.instruments.InterestRateMarket(
reference_curve=reference_curve, discount_curve=None)
price = self.evaluate(indexfuture.price(valuation_date, market))
np.testing.assert_allclose(price, 98.6417886, atol=1e-6)
@parameterized.named_parameters(
('DoublePrecision', np.float64),
)
def test_fut_compounded_calendar(self, dtype):
cal = dates.create_holiday_calendar(
weekend_mask=dates.WeekendMask.SATURDAY_SUNDAY)
start_date = dates.convert_to_date_tensor([(2020, 5, 1)])
end_date = dates.convert_to_date_tensor([(2020, 5, 31)])
valuation_date = dates.convert_to_date_tensor([(2020, 2, 8)])
indexfuture = instruments.OvernightIndexLinkedFutures(
start_date,
end_date,
holiday_calendar=cal,
averaging_type=instruments.AverageType.COMPOUNDING,
dtype=dtype)
curve_dates = valuation_date + dates.months([1, 2, 6])
reference_curve = instruments.RateCurve(
curve_dates,
np.array([0.02, 0.025, 0.015], dtype=dtype),
valuation_date=valuation_date,
dtype=dtype)
market = tff.experimental.instruments.InterestRateMarket(
reference_curve=reference_curve, discount_curve=None)
price = self.evaluate(indexfuture.price(valuation_date, market))
np.testing.assert_allclose(price, 98.6332129, atol=1e-6)
@parameterized.named_parameters(
('DoublePrecision', np.float64),
)
def test_fut_averaged_calendar(self, dtype):
cal = dates.create_holiday_calendar(
weekend_mask=dates.WeekendMask.SATURDAY_SUNDAY)
start_date = dates.convert_to_date_tensor([(2020, 5, 1)])
end_date = dates.convert_to_date_tensor([(2020, 5, 31)])
valuation_date = dates.convert_to_date_tensor([(2020, 2, 8)])
indexfuture = instruments.OvernightIndexLinkedFutures(
start_date,
end_date,
averaging_type=instruments.AverageType.ARITHMETIC_AVERAGE,
holiday_calendar=cal,
dtype=dtype)
curve_dates = valuation_date + dates.months([1, 2, 6])
reference_curve = instruments.RateCurve(
curve_dates,
np.array([0.02, 0.025, 0.015], dtype=dtype),
valuation_date=valuation_date,
dtype=dtype)
market = tff.experimental.instruments.InterestRateMarket(
reference_curve=reference_curve, discount_curve=None)
price = self.evaluate(indexfuture.price(valuation_date, market))
np.testing.assert_allclose(price, 98.63396465, atol=1e-6)
@parameterized.named_parameters(
('DoublePrecision', np.float64),
)
def test_fut_many(self, dtype):
cal = dates.create_holiday_calendar(weekend_mask=dates.WeekendMask.NONE)
start_date = dates.convert_to_date_tensor([(2020, 5, 1), (2020, 5, 1)])
end_date = dates.convert_to_date_tensor([(2020, 5, 31), (2020, 5, 31)])
valuation_date = dates.convert_to_date_tensor([(2020, 2, 8)])
indexfuture = instruments.OvernightIndexLinkedFutures(
start_date,
end_date,
holiday_calendar=cal,
averaging_type=instruments.AverageType.COMPOUNDING,
dtype=dtype)
curve_dates = valuation_date + dates.months([1, 2, 6])
reference_curve = instruments.RateCurve(
curve_dates,
np.array([0.02, 0.025, 0.015], dtype=dtype),
valuation_date=valuation_date,
dtype=dtype)
market = tff.experimental.instruments.InterestRateMarket(
reference_curve=reference_curve, discount_curve=None)
price = self.evaluate(indexfuture.price(valuation_date, market))
np.testing.assert_allclose(price, [98.64101997, 98.64101997], atol=1e-6)
if __name__ == '__main__':
tf.test.main()
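The tests above price the same futures contract under COMPOUNDING and ARITHMETIC_AVERAGE conventions and get slightly different values (98.64102 vs. 98.64179). A hypothetical illustration of why, not taken from tf-quant-finance (the flat rate, day count, and accrual period are all assumptions):

```python
# Hypothetical illustration: with a flat overnight rate, daily compounding
# settles slightly above the arithmetic average, which is why the COMPOUNDING
# price is slightly below the ARITHMETIC_AVERAGE price
# (price = 100 * (1 - settlement_rate)).
daily_rate = 0.0136   # assumed flat overnight rate
n_days = 30           # assumed accrual days in the reference period
tau = 1.0 / 360.0     # assumed ACT/360 daily accrual fraction

averaged = daily_rate  # the arithmetic average of a flat rate is the rate itself
compounded = ((1.0 + daily_rate * tau) ** n_days - 1.0) / (n_days * tau)

print(averaged, compounded)
```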
# ===========================================================================
# File: opendr/__init__.py
# Repo: foamliu/opendr (MIT)
# ===========================================================================
def test():
from os.path import split
import unittest
test_loader = unittest.TestLoader()
test_loader = test_loader.discover(split(__file__)[0])
test_runner = unittest.TextTestRunner()
test_runner.run( test_loader )
demos = {}
demos['texture'] = """
# Create renderer
import chumpy as ch
from opendr.renderer import TexturedRenderer
rn = TexturedRenderer()
# Assign attributes to renderer
from opendr.util_tests import get_earthmesh
m = get_earthmesh(trans=ch.array([0,0,4]), rotation=ch.zeros(3))
w, h = (320, 240)
from opendr.camera import ProjectPoints
rn.camera = ProjectPoints(v=m.v, rt=ch.zeros(3), t=ch.zeros(3), f=ch.array([w,w])/2., c=ch.array([w,h])/2., k=ch.zeros(5))
rn.frustum = {'near': 1., 'far': 10., 'width': w, 'height': h}
rn.set(v=m.v, f=m.f, vc=m.vc, texture_image=m.texture_image, ft=m.ft, vt=m.vt)
# Show it
import matplotlib.pyplot as plt
plt.ion()
plt.imshow(rn.r)
plt.show()
dr = rn.dr_wrt(rn.v) # or rn.vc, or rn.camera.rt, rn.camera.t, rn.camera.f, rn.camera.c, etc
"""
demos['moments'] = """
from opendr.util_tests import get_earthmesh
from opendr.simple import *
import numpy as np
w, h = 320, 240
m = get_earthmesh(trans=ch.array([0,0,4]), rotation=ch.zeros(3))
# Create V, A, U, f: geometry, brightness, camera, renderer
V = ch.array(m.v)
A = SphericalHarmonics(vn=VertNormals(v=V, f=m.f),
components=[3.,1.,0.,0.,0.,0.,0.,0.,0.],
light_color=ch.ones(3))
U = ProjectPoints(v=V, f=[300,300.], c=[w/2.,h/2.], k=ch.zeros(5),
t=ch.zeros(3), rt=ch.zeros(3))
rn = TexturedRenderer(vc=A, camera=U, f=m.f, bgcolor=[0.,0.,0.],
texture_image=m.texture_image, vt=m.vt, ft=m.ft,
frustum={'width':w, 'height':h, 'near':1,'far':20})
i, j = ch.array([2.]), ch.array([1.])
xs, ys = ch.meshgrid(range(rn.shape[1]), range(rn.shape[0]))
ysp = ys ** j
xsp = xs ** i
rn_bw = ch.sum(rn, axis=2)
moment = ch.sum((rn_bw * ysp * xsp).ravel())
# Print our numerical result
print(moment)
# Note that opencv produces the same result for 'm21',
# and that other moments can be created by changing "i" and "j" above
import cv2
print(cv2.moments(rn_bw.r)['m21'])
# Derivatives wrt vertices and lighting
print(moment.dr_wrt(V))
print(moment.dr_wrt(A.components))
"""
demos['per_face_normals'] = """
# Create renderer
import chumpy as ch
import numpy as np
from opendr.renderer import ColoredRenderer
from opendr.lighting import LambertianPointLight
rn = ColoredRenderer()
# Assign attributes to renderer
from opendr.util_tests import get_earthmesh
m = get_earthmesh(trans=ch.array([0,0,4]), rotation=ch.zeros(3))
w, h = (320, 240)
# THESE ARE THE 3 CRITICAL LINES
m.v = m.v[m.f.ravel()]
m.vc = m.vc[m.f.ravel()]
m.f = np.arange(m.f.size).reshape((-1,3))
from opendr.camera import ProjectPoints
rn.camera = ProjectPoints(v=m.v, rt=ch.zeros(3), t=ch.zeros(3), f=ch.array([w,w])/2., c=ch.array([w,h])/2., k=ch.zeros(5))
rn.frustum = {'near': 1., 'far': 10., 'width': w, 'height': h}
rn.set(v=m.v, f=m.f, bgcolor=ch.zeros(3))
# Construct point light source
rn.vc = LambertianPointLight(
f=m.f,
v=rn.v,
num_verts=len(m.v),
light_pos=ch.array([-1000,-1000,-1000]),
vc=m.vc,
light_color=ch.array([1., 1., 1.]))
# Show it
import matplotlib.pyplot as plt
plt.ion()
plt.imshow(rn.r)
plt.show()
dr = rn.dr_wrt(rn.v) # or rn.vc, or rn.camera.rt, rn.camera.t, rn.camera.f, rn.camera.c, etc
"""
demos['silhouette'] = """
# Create renderer
import chumpy as ch
from opendr.renderer import ColoredRenderer
rn = ColoredRenderer()
# Assign attributes to renderer
from opendr.util_tests import get_earthmesh
m = get_earthmesh(trans=ch.array([0,0,4]), rotation=ch.zeros(3))
w, h = (320, 240)
from opendr.camera import ProjectPoints
rn.camera = ProjectPoints(v=m.v, rt=ch.zeros(3), t=ch.zeros(3), f=ch.array([w,w])/2., c=ch.array([w,h])/2., k=ch.zeros(5))
rn.frustum = {'near': 1., 'far': 10., 'width': w, 'height': h}
rn.set(v=m.v, f=m.f, vc=m.vc*0+1, bgcolor=ch.zeros(3))
# Show it
import matplotlib.pyplot as plt
plt.ion()
plt.imshow(rn.r)
plt.show()
dr = rn.dr_wrt(rn.v) # or rn.vc, or rn.camera.rt, rn.camera.t, rn.camera.f, rn.camera.c, etc
"""
demos['boundary'] = """
# Create renderer
import chumpy as ch
from opendr.renderer import BoundaryRenderer
rn = BoundaryRenderer()
# Assign attributes to renderer
from opendr.util_tests import get_earthmesh
m = get_earthmesh(trans=ch.array([0,0,4]), rotation=ch.zeros(3))
w, h = (320, 240)
from opendr.camera import ProjectPoints
rn.camera = ProjectPoints(v=m.v, rt=ch.zeros(3), t=ch.zeros(3), f=ch.array([w,w])/2., c=ch.array([w,h])/2., k=ch.zeros(5))
rn.frustum = {'near': 1., 'far': 10., 'width': w, 'height': h}
rn.set(v=m.v, f=m.f, vc=m.vc*0+1, bgcolor=ch.zeros(3), num_channels=3)
# Show it
import matplotlib.pyplot as plt
plt.ion()
plt.imshow(rn.r)
plt.show()
dr = rn.dr_wrt(rn.v) # or rn.vc, or rn.camera.rt, rn.camera.t, rn.camera.f, rn.camera.c, etc
"""
demos['point_light'] = """
# Create renderer
import chumpy as ch
from opendr.renderer import ColoredRenderer
from opendr.lighting import LambertianPointLight
rn = ColoredRenderer()
# Assign attributes to renderer
from opendr.util_tests import get_earthmesh
m = get_earthmesh(trans=ch.array([0,0,4]), rotation=ch.zeros(3))
w, h = (320, 240)
from opendr.camera import ProjectPoints
rn.camera = ProjectPoints(v=m.v, rt=ch.zeros(3), t=ch.zeros(3), f=ch.array([w,w])/2., c=ch.array([w,h])/2., k=ch.zeros(5))
rn.frustum = {'near': 1., 'far': 10., 'width': w, 'height': h}
rn.set(v=m.v, f=m.f, bgcolor=ch.zeros(3))
# Construct point light source
rn.vc = LambertianPointLight(
    f=m.f,
    v=rn.v,
    num_verts=len(m.v),
    light_pos=ch.array([-1000,-1000,-1000]),
    vc=m.vc,
    light_color=ch.array([1., 1., 1.]))
# Show it
import matplotlib.pyplot as plt
plt.ion()
plt.imshow(rn.r)
plt.show()
dr = rn.dr_wrt(rn.v) # or rn.vc, or rn.camera.rt, rn.camera.t, rn.camera.f, rn.camera.c, etc
"""
demos['spherical_harmonics'] = """
# Create renderer
import chumpy as ch
from opendr.renderer import ColoredRenderer
from opendr.lighting import SphericalHarmonics
from opendr.geometry import VertNormals
rn = ColoredRenderer()
# Assign attributes to renderer
from opendr.util_tests import get_earthmesh
m = get_earthmesh(trans=ch.array([0,0,4]), rotation=ch.zeros(3))
w, h = (320, 240)
from opendr.camera import ProjectPoints
rn.camera = ProjectPoints(v=m.v, rt=ch.zeros(3), t=ch.zeros(3), f=ch.array([w,w])/2., c=ch.array([w,h])/2., k=ch.zeros(5))
rn.frustum = {'near': 1., 'far': 10., 'width': w, 'height': h}
rn.set(v=m.v, f=m.f, bgcolor=ch.zeros(3))
vn = VertNormals(v=rn.v, f=rn.f)
sh_red = SphericalHarmonics(vn=vn, light_color=ch.array([1,0,0]), components=ch.random.randn(9))
sh_green = SphericalHarmonics(vn=vn, light_color=ch.array([0,1,0]), components=ch.random.randn(9))
sh_blue = SphericalHarmonics(vn=vn, light_color=ch.array([0,0,1]), components=ch.random.randn(9))
rn.vc = sh_red + sh_green + sh_blue
# Show it
import matplotlib.pyplot as plt
plt.ion()
plt.imshow(rn.r)
plt.show()
dr = rn.dr_wrt(rn.v) # or rn.vc, or rn.camera.rt, rn.camera.t, rn.camera.f, rn.camera.c, etc
"""
demos['optimization'] = """
from opendr.simple import *
import numpy as np
import matplotlib.pyplot as plt
w, h = 320, 240
try:
    m = load_mesh('earth.obj')
except:
    from opendr.util_tests import get_earthmesh
    m = get_earthmesh(trans=ch.array([0,0,0]), rotation=ch.zeros(3))
# Create V, A, U, f: geometry, brightness, camera, renderer
V = ch.array(m.v)
A = SphericalHarmonics(vn=VertNormals(v=V, f=m.f),
                       components=[3.,2.,0.,0.,0.,0.,0.,0.,0.],
                       light_color=ch.ones(3))
U = ProjectPoints(v=V, f=[w,w], c=[w/2.,h/2.], k=ch.zeros(5),
                  t=ch.zeros(3), rt=ch.zeros(3))
f = TexturedRenderer(vc=A, camera=U, f=m.f, bgcolor=[0.,0.,0.],
                     texture_image=m.texture_image, vt=m.vt, ft=m.ft,
                     frustum={'width':w, 'height':h, 'near':1,'far':20})
# Parameterize the vertices
translation, rotation = ch.array([0,0,8]), ch.zeros(3)
f.v = translation + V.dot(Rodrigues(rotation))
observed = f.r
np.random.seed(1)
translation[:] = translation.r + np.random.rand(3)
rotation[:] = rotation.r + np.random.rand(3) *.2
A.components[1:] = 0
# Create the energy
E_raw = f - observed
E_pyr = gaussian_pyramid(E_raw, n_levels=6, normalization='size')
def cb(_):
    import cv2
    global E_raw
    cv2.imshow('Absolute difference', np.abs(E_raw.r))
    cv2.waitKey(1)
print('OPTIMIZING TRANSLATION, ROTATION, AND LIGHT PARAMS')
free_variables = [translation, rotation, A.components]
ch.minimize({'pyr': E_pyr}, x0=free_variables, callback=cb)
ch.minimize({'raw': E_raw}, x0=free_variables, callback=cb)
"""
demos['optimization_cpl'] = """
from opendr.simple import *
import numpy as np
import matplotlib.pyplot as plt
w, h = 320, 240
try:
    m = load_mesh('earth.obj')
except:
    from opendr.util_tests import get_earthmesh
    m = get_earthmesh(trans=ch.array([0,0,0]), rotation=ch.zeros(3))
# Create V, A, U, f: geometry, brightness, camera, renderer
V = ch.array(m.v)
A = SphericalHarmonics(vn=VertNormals(v=V, f=m.f),
                       components=[3.,2.,0.,0.,0.,0.,0.,0.,0.],
                       light_color=ch.ones(3))
U = ProjectPoints(v=V, f=[w,w], c=[w/2.,h/2.], k=ch.zeros(5),
                  t=ch.zeros(3), rt=ch.zeros(3))
f = TexturedRenderer(vc=A, camera=U, f=m.f, bgcolor=[0.,0.,0.],
                     texture_image=m.texture_image, vt=m.vt, ft=m.ft,
                     frustum={'width':w, 'height':h, 'near':1,'far':20})
# Parameterize the vertices
translation, rotation = ch.array([0,0,8]), ch.zeros(3)
model_v = translation + ch.array(V.r).dot(Rodrigues(rotation))
# Simulate an observed image
V[:] = model_v.r
observed = f.r
np.random.seed(1)
translation[:] = translation.r + np.random.rand(3)
rotation[:] = rotation.r + np.random.rand(3) *.2
V[:] = model_v.r
A.components[1:] = 0
# Create the energy
E_raw = f - observed
E_pyr = gaussian_pyramid(E_raw, n_levels=6, normalization='size')
def cb(_):
    import cv2
    global E_raw
    cv2.imshow('Absolute difference', np.abs(E_raw.r))
    cv2.waitKey(1)
print('OPTIMIZING TRANSLATION, ROTATION, AND LIGHT PARAMS')
free_variables = [translation, rotation, A.components, V]
ch.minimize({'pyr': E_pyr, 'cpl': (V - model_v)*1e-4}, x0=free_variables, callback=cb)
ch.minimize({'raw': E_raw, 'cpl': V - model_v}, x0=free_variables, callback=cb)
"""
def demo(which=None):
    import re
    if which not in demos:
        print('Please indicate which demo you want, as follows:')
        for key in demos:
            print("\tdemo('%s')" % (key,))
        return
    print('- - - - - - - - - - - <CODE> - - - - - - - - - - - -')
    print(re.sub('global.*\n', '', demos[which]))
    print('- - - - - - - - - - - </CODE> - - - - - - - - - - - -\n')
    exec('global np\n' + demos[which], globals(), locals())
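# demo() runs the snippet string with exec against the module's own globals. A
# general Python idiom (not specific to opendr) for keeping exec'd code from
# leaking names into the caller is to pass a dedicated namespace dict:

```python
# Run a code string in an isolated namespace instead of globals()/locals();
# everything the snippet defines lands in ns rather than in this module.
snippet = "result = 2 + 3"
ns = {}
exec(snippet, ns)
```

# ns['result'] now holds the snippet's output while the caller's namespace is
# untouched.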
# test_interaction.py
#
# Copyright 2017 Daniel Mende <mail@c0decafe.de>
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of the nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from unittest import TestCase, main
from dizzy.interaction import Interaction
from dizzy.objects.field import Field
from dizzy.functions.length import length
from dizzy.dizz import Dizz
from dizzy.functions.checksum import checksum
from dizzy.functions import BOTH
from dizzy.value import Value
class TestInteraction(TestCase):
    def __init__(self, arg):
        self.maxDiff = None
        TestCase.__init__(self, arg)

    def test_init(self):
        objects = [Field("test0", b"\x11\x11", fuzz="std"), Field("test1", b"\x22", fuzz="std"),
                   Field("test2", b"\x33\x33", slice(9, 17), fuzz="std")]
        d0 = Dizz("test0", objects, fuzz="std")
        objects = [Field("test0", b"\xff", fuzz="full"), Field("test1", b"\xaa", 10, fuzz="std")]
        d1 = Dizz("test1", objects, fuzz="std")
        act = Interaction("Test", [d0, d1])
        self.assertEqual(act.name, "Test")
        self.assertEqual(act.objects, [d0, d1])

    def test_iter(self):
        expected = [Value(b'\x00""w3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\x00\x00w3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\x00\x02w3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\x00\x04w3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\x00\x06w3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\x00\x08w3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x01\xff\xf6w3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x01\xff\xf8w3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x01\xff\xfaw3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x01\xff\xfcw3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x01\xff\xfew3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\xff\xf8w3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\xff\xfaw3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\xff\xfcw3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\xff\xfew3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\x02\x00w3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\x04\x00w3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\x06\x00w3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\x08\x00w3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""33\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""33\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""73\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""73\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00"";3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00"#\xf73\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00"#\xfb3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00"#\xfb3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00"#\xff3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00"#\xff3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""\xfb3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""\xfb3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""\xff3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""\xff3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""D\x00\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""D\x01\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""D\x02\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""D\x03\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""D\x04\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""E\xfb\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""E\xfc\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""E\xfd\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""E\xfe\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""E\xff\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""D\xfc\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""D\xfd\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""D\xfe\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""D\xff\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""E\x00\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""F\x00\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""G\x00\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""D\x00\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00DD\x88\x00\x00"', 50), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00DD\x88\x01\x00"', 50), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00DD\x88\x02\x00"', 50), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00DD\x88\x03\x00"', 50), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00DD\x88\x04\x00"', 50), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00DD\x8b\xfb\x00"', 50), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00DD\x8b\xfc\x00"', 50), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00DD\x8b\xfd\x00"', 50), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00DD\x8b\xfe\x00"', 50), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00DD\x8b\xff\x00"', 50), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00DD\x89\xfc\x00"', 50), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00DD\x89\xfd\x00"', 50), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00DD\x89\xfe\x00"', 50), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00DD\x89\xff\x00"', 50), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00DD\x89\x00\x00"', 50), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00DD\x8a\x00\x00"', 50), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00DD\x8b\x00\x00"', 50), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00DD\x8c\x00\x00"', 50), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\x88\x89\x10\x00\x00#', 51), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\x88\x89\x10\x01\x00#', 51), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\x88\x89\x10\x02\x00#', 51), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\x88\x89\x10\x03\x00#', 51), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\x88\x89\x10\x04\x00#', 51), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\x88\x89\x17\xfb\x00#', 51), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\x88\x89\x17\xfc\x00#', 51), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\x88\x89\x17\xfd\x00#', 51), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\x88\x89\x17\xfe\x00#', 51), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\x88\x89\x17\xff\x00#', 51), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\x88\x89\x13\xfc\x00#', 51), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\x88\x89\x13\xfd\x00#', 51), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\x88\x89\x13\xfe\x00#', 51), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\x88\x89\x13\xff\x00#', 51), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\x88\x89\x11\x00\x00#', 51), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\x88\x89\x12\x00\x00#', 51), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\x88\x89\x13\x00\x00#', 51), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00\x88\x89\x14\x00\x00#', 51), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x01\x11\x12 \x00\x00$', 52), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x01\x11\x12 \x01\x00$', 52), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x01\x11\x12 \x02\x00$', 52), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x01\x11\x12 \x03\x00$', 52), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x01\x11\x12 \x04\x00$', 52), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x01\x11\x12/\xfb\x00$', 52), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x01\x11\x12/\xfc\x00$', 52), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x01\x11\x12/\xfd\x00$', 52), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x01\x11\x12/\xfe\x00$', 52), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x01\x11\x12/\xff\x00$', 52), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b"\x01\x11\x12'\xfc\x00$", 52), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b"\x01\x11\x12'\xfd\x00$", 52), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b"\x01\x11\x12'\xfe\x00$", 52), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b"\x01\x11\x12'\xff\x00$", 52), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x01\x11\x12!\x00\x00$', 52), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x01\x11\x12"\x00\x00$', 52), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x01\x11\x12#\x00\x00$', 52), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x01\x11\x12$\x00\x00$', 52), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x02"$@\x00\x00%', 53), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x02"$@\x01\x00%', 53), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x02"$@\x02\x00%', 53), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x02"$@\x03\x00%', 53), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x02"$@\x04\x00%', 53), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x02"$_\xfb\x00%', 53), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x02"$_\xfc\x00%', 53), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x02"$_\xfd\x00%', 53), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x02"$_\xfe\x00%', 53), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x02"$_\xff\x00%', 53), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x02"$O\xfc\x00%', 53), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x02"$O\xfd\x00%', 53), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x02"$O\xfe\x00%', 53), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x02"$O\xff\x00%', 53), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x02"$A\x00\x00%', 53), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x02"$B\x00\x00%', 53), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x02"$C\x00\x00%', 53), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x02"$D\x00\x00%', 53), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x04DH\x80\x00\x00&', 54), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x04DH\x80\x01\x00&', 54), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x04DH\x80\x02\x00&', 54), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x04DH\x80\x03\x00&', 54), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x04DH\x80\x04\x00&', 54), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x04DH\xbf\xfb\x00&', 54), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x04DH\xbf\xfc\x00&', 54), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x04DH\xbf\xfd\x00&', 54), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x04DH\xbf\xfe\x00&', 54), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x04DH\xbf\xff\x00&', 54), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x04DH\x9f\xfc\x00&', 54), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x04DH\x9f\xfd\x00&', 54), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x04DH\x9f\xfe\x00&', 54), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x04DH\x9f\xff\x00&', 54), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x04DH\x81\x00\x00&', 54), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x04DH\x82\x00\x00&', 54), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x04DH\x83\x00\x00&', 54), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x04DH\x84\x00\x00&', 54), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b"\x08\x88\x91\x00\x00\x00'", 55), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b"\x08\x88\x91\x00\x01\x00'", 55), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b"\x08\x88\x91\x00\x02\x00'", 55), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b"\x08\x88\x91\x00\x03\x00'", 55), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b"\x08\x88\x91\x00\x04\x00'", 55), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b"\x08\x88\x91\x7f\xfb\x00'", 55), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b"\x08\x88\x91\x7f\xfc\x00'", 55), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b"\x08\x88\x91\x7f\xfd\x00'", 55), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b"\x08\x88\x91\x7f\xfe\x00'", 55), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b"\x08\x88\x91\x7f\xff\x00'", 55), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b"\x08\x88\x91?\xfc\x00'", 55), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b"\x08\x88\x91?\xfd\x00'", 55), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b"\x08\x88\x91?\xfe\x00'", 55), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b"\x08\x88\x91?\xff\x00'", 55), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b"\x08\x88\x91\x01\x00\x00'", 55), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b"\x08\x88\x91\x02\x00\x00'", 55), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b"\x08\x88\x91\x03\x00\x00'", 55), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b"\x08\x88\x91\x04\x00\x00'", 55), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x11\x11"\x00\x00\x00(', 56), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x11\x11"\x00\x01\x00(', 56), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x11\x11"\x00\x02\x00(', 56), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x11\x11"\x00\x03\x00(', 56), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x11\x11"\x00\x04\x00(', 56), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x11\x11"\xff\xfb\x00(', 56), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x11\x11"\xff\xfc\x00(', 56), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x11\x11"\xff\xfd\x00(', 56), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x11\x11"\xff\xfe\x00(', 56), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x11\x11"\xff\xff\x00(', 56), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x11\x11"\x7f\xfc\x00(', 56), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x11\x11"\x7f\xfd\x00(', 56), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x11\x11"\x7f\xfe\x00(', 56), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x11\x11"\x7f\xff\x00(', 56), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x11\x11"\x01\x00\x00(', 56), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x11\x11"\x02\x00\x00(', 56), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x11\x11"\x03\x00\x00(', 56), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x11\x11"\x04\x00\x00(', 56), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""w3\x00\x00', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""w3\x00\x01', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""w3\x00\x02', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""w3\x00\x03', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""w3\x00\x04', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""w3\xff\xfb', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""w3\xff\xfc', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""w3\xff\xfd', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""w3\xff\xfe', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""w3\xff\xff', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""w3\x7f\xfc', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""w3\x7f\xfd', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""w3\x7f\xfe', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""w3\x7f\xff', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""w3\x01\x00', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""w3\x02\x00', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""w3\x03\x00', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""w3\x04\x00', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\x00\xaa\x00\x01\xc2\xe9', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\x04\xaa\x00\x01\x16\x1d', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\x08\xaa\x00\x01(\xee', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\x0c\xaa\x00\x01v\x10', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\x10\xaa\x00\x01\x1f\x89', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\x14\xaa\x00\x01\xa6&', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\x18\xaa\x00\x01\xf9o', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\x1c\xaa\x00\x01\x9a\xe4', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00 \xaa\x00\x01\xd1\xa2', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00$\xaa\x00\x01M\xf0', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00(\xaa\x00\x01\xfe\x1e', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00,\xaa\x00\x01j\xf2', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x000\xaa\x00\x01\x8aH', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x004\xaa\x00\x01\x06[', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x008\xaa\x00\x01\xc0\x06', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00<\xaa\x00\x01\xa6;', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00@\xaa\x00\x01%c', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00D\xaa\x00\x01@\xa8', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00H\xaa\x00\x01\xfe*', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00L\xaa\x00\x01\x06\xdf', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00P\xaa\x00\x01\x07G', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00T\xaa\x00\x01\xb5\x16', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00X\xaa\x00\x01\xe1\x87', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\\\xaa\x00\x01?\xdd', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00`\xaa\x00\x012\x80', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00d\xaa\x00\x01l\xb2', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00h\xaa\x00\x01S\xfc', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00l\xaa\x00\x01\xd1J', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00p\xaa\x00\x01d\x12', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00t\xaa\x00\x01\xe3\xe6', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00x\xaa\x00\x01\xdf\x00', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00|\xaa\x00\x01i\xd7', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\x80\xaa\x00\x01\xef\xeb', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\x84\xaa\x00\x01\xc1\xce', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\x88\xaa\x00\x01\x1e\xdb', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\x8c\xaa\x00\x01\x0e\x87', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\x90\xaa\x00\x01\xab7', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\x94\xaa\x00\x01\xe9(', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\x98\xaa\x00\x01F\x1e', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\x9c\xaa\x00\x01\x16}', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\xa0\xaa\x00\x01%\xd6', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\xa4\xaa\x00\x01]\xfd', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\xa8\xaa\x00\x01\x151', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b"\x00\xac\xaa\x00\x01\xea'", 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\xb0\xaa\x00\x01:\xdc', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\xb4\xaa\x00\x01\xf5\xba', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\xb8\xaa\x00\x01\x9e\xec', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\xbc\xaa\x00\x013\xb0', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\xc0\xaa\x00\x01\x9f\xa3', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\xc4\xaa\x00\x01q\x1f', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\xc8\xaa\x00\x01\xcaa', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\xcc\xaa\x00\x01d\xc1', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\xd0\xaa\x00\x01U\x14', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\xd4\xaa\x00\x01<\xef', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\xd8\xaa\x00\x01\x158', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\xdc\xaa\x00\x01\x82\xc2', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\xe0\xaa\x00\x018!', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\xe4\xaa\x00\x01\xd9S', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\xe8\xaa\x00\x01\x9fe', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\xec\xaa\x00\x01\xec\x1d', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\xf0\xaa\x00\x01\x1cH', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\xf4\xaa\x00\x01\xd7\xbb', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\xf8\xaa\x00\x01\xee[', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x00\xfc\xaa\x00\x01B`', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\x00\xaa\x00\x01\xc3q', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\x04\xaa\x00\x01\xae\x1c', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\x08\xaa\x00\x01\xe6I', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\x0c\xaa\x00\x01U=', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\x10\xaa\x00\x01\x01\xec', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\x14\xaa\x00\x01\xd3\xce', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\x18\xaa\x00\x01j\x15', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\x1c\xaa\x00\x01\xd1v', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01 \xaa\x00\x01\xdf\x8b', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01$\xaa\x00\x01\x14\x1a', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01(\xaa\x00\x01\xc7\x8e', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b"\x01,\xaa\x00\x01\xda'", 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x010\xaa\x00\x01\x9a\xc6', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x014\xaa\x00\x01\xdd\x18', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x018\xaa\x00\x01%A', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01<\xaa\x00\x01\xa5\x86', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01@\xaa\x00\x01\x10\xd3', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01D\xaa\x00\x01XE', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01H\xaa\x00\x01\x99\x07', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01L\xaa\x00\x013\x89', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01P\xaa\x00\x01\xd2\x8f', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01T\xaa\x00\x01A\xc1', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01X\xaa\x00\x01\xfa\xec', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\\\xaa\x00\x01@\x17', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01`\xaa\x00\x01\xfb[', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01d\xaa\x00\x01P\xbe', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01h\xaa\x00\x01\xcf\x89', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01l\xaa\x00\x01pg', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01p\xaa\x00\x019\x12', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01t\xaa\x00\x01\nT', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01x\xaa\x00\x01~g', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01|\xaa\x00\x01n\xc0', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\x80\xaa\x00\x01\xd9\x9a', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\x84\xaa\x00\x01q\x8a', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\x88\xaa\x00\x01\xe9\xde', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\x8c\xaa\x00\x01\xb6\xcb', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\x90\xaa\x00\x01\xd7\xfd', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\x94\xaa\x00\x01\x9b\xcf', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\x98\xaa\x00\x01$\x8e', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\x9c\xaa\x00\x01\x0e\x9e', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\xa0\xaa\x00\x01\xbf\xca', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\xa4\xaa\x00\x01\xd5\xe7', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\xa8\xaa\x00\x01A\x02', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\xac\xaa\x00\x01\xd5\x96', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\xb0\xaa\x00\x01\xfdf', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\xb4\xaa\x00\x01S\x9a', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\xb8\xaa\x00\x01O\x87', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\xbc\xaa\x00\x01XM', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\xc0\xaa\x00\x01v\xd3', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\xc4\xaa\x00\x01ki', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\xc8\xaa\x00\x01\xcd\x85', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\xcc\xaa\x00\x01\xff~', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\xd0\xaa\x00\x01x\x12', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\xd4\xaa\x00\x01g\xa6', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\xd8\xaa\x00\x01\x170', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\xdc\xaa\x00\x01\xf4\xc7', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\xe0\xaa\x00\x01\x85\xd4', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\xe4\xaa\x00\x01\xf7\xde', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\xe8\xaa\x00\x01\xd4\xee', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\xec\xaa\x00\x01c\xd3', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\xf0\xaa\x00\x01w\xd4', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\xf4\xaa\x00\x01%\x91', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\xf8\xaa\x00\x01\xdbg', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x01\xfc\xaa\x00\x01\x1a\xd4', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\x00\xaa\x00\x01t\xc5', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\x04\xaa\x00\x01\xfbw', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\x08\xaa\x00\x01\xea\x92', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\x0c\xaa\x00\x01\xc5d', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\x10\xaa\x00\x01\x14#', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\x14\xaa\x00\x01\x95N', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\x18\xaa\x00\x01\xeb\xb9', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\x1c\xaa\x00\x01\x97\x83', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02 \xaa\x00\x015\x95', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02$\xaa\x00\x01\x1fV', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02(\xaa\x00\x01T\x81', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02,\xaa\x00\x01M\x03', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x020\xaa\x00\x01y\x02', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x024\xaa\x00\x01\x80\xcd', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x028\xaa\x00\x015d', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02<\xaa\x00\x01\xff\xcf', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02@\xaa\x00\x01\x08O', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02D\xaa\x00\x01\x85V', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02H\xaa\x00\x01$H', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02L\xaa\x00\x01g\x90', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02P\xaa\x00\x01\tR', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02T\xaa\x00\x01\xae\xb4', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02X\xaa\x00\x01\xabY', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\\\xaa\x00\x01b\x17', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02`\xaa\x00\x01\xd4\xf2', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02d\xaa\x00\x01di', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02h\xaa\x00\x01\xce+', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02l\xaa\x00\x01\x90\xa6', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02p\xaa\x00\x01\x89\x07', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02t\xaa\x00\x01\x11\xee', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02x\xaa\x00\x01\x9e\xbe', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02|\xaa\x00\x01X\xf0', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\x80\xaa\x00\x01h\x0e', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\x84\xaa\x00\x01H\x1a', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\x88\xaa\x00\x01\xfb\xc8', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\x8c\xaa\x00\x01\xa4A', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\x90\xaa\x00\x01\x0f7', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\x94\xaa\x00\x01$?', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\x98\xaa\x00\x01+\x18', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\x9c\xaa\x00\x01\ny', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\xa0\xaa\x00\x01\xd2\x85', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\xa4\xaa\x00\x01\xd0\xfb', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\xa8\xaa\x00\x01\xdbh', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\xac\xaa\x00\x01D\xd1', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\xb0\xaa\x00\x01\x91\xe4', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\xb4\xaa\x00\x01\x1a\x90', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\xb8\xaa\x00\x01\x81\x12', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\xbc\xaa\x00\x01\n\x94', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\xc0\xaa\x00\x01\xe0\xea', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\xc4\xaa\x00\x01,C', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\xc8\xaa\x00\x01\xd5t', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\xcc\xaa\x00\x01\xb6\x12', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\xd0\xaa\x00\x01-\x0c', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\xd4\xaa\x00\x01C\xe5', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\xd8\xaa\x00\x01\x0e\xe8', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\xdc\xaa\x00\x01U2', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\xe0\xaa\x00\x010\xba', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\xe4\xaa\x00\x01\xf2>', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\xe8\xaa\x00\x01\xfc6', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\xec\xaa\x00\x01\x1b.', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\xf0\xaa\x00\x01S,', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\xf4\xaa\x00\x01\x82\xce', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\xf8\xaa\x00\x01\x1ea', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x02\xfc\xaa\x00\x01\xe6;', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\x00\xaa\x00\x01"\x86', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\x04\xaa\x00\x01\x97L', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\x08\xaa\x00\x01G\xa9', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\x0c\xaa\x00\x01\x9e\x1a', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b"\x03\x10\xaa\x00\x01\xb4'", 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\x14\xaa\x00\x01\x16\xbc', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\x18\xaa\x00\x01Y\xf5', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\x1c\xaa\x00\x01^\xdf', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03 \xaa\x00\x01fl', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03$\xaa\x00\x01YA', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03(\xaa\x00\x01\xd5\xf4', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03,\xaa\x00\x01\x83(', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x030\xaa\x00\x01\x84\xc6', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x034\xaa\x00\x01y\xa0', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x038\xaa\x00\x01\x06\x93', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03<\xaa\x00\x01\x99:', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03@\xaa\x00\x01\xabT', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03D\xaa\x00\x01\xd9\xf0', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03H\xaa\x00\x01d\xdd', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03L\xaa\x00\x01\xfd~', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03P\xaa\x00\x01\x10\x7f', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03T\xaa\x00\x01qd', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03X\xaa\x00\x01\xf9\xed', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\\\xaa\x00\x01\x9a\x13', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03`\xaa\x00\x01p}', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03d\xaa\x00\x01`+', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03h\xaa\x00\x01.Q', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03l\xaa\x00\x01\x9f\xab', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03p\xaa\x00\x01.\\', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03t\xaa\x00\x01\x8d\xaf', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03x\xaa\x00\x01\xaf\xcd', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03|\xaa\x00\x01\xcby', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\x80\xaa\x00\x01\x19H', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\x84\xaa\x00\x01V\x86', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\x88\xaa\x00\x01J\xce', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\x8c\xaa\x00\x01\xd6\x82', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\x90\xaa\x00\x01\x89@', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\x94\xaa\x00\x01\x9f#', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\x98\xaa\x00\x01\xc7+', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\x9c\xaa\x00\x01Zr', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xa0\xaa\x00\x01\xdb\xf1', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xa4\xaa\x00\x01\x1b\x1b', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xa8\xaa\x00\x0103', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xac\xaa\x00\x01\xc6\x92', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xb0\xaa\x00\x01\xa3\x96', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xb4\xaa\x00\x01\xd4\xd6', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xb8\xaa\x00\x01\x1c\x1a', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xbc\xaa\x00\x01l<', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xc0\xaa\x00\x01;\x19', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xc4\xaa\x00\x01V\xf4', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xc8\xaa\x00\x01L\xd9', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xcc\xaa\x00\x01\x07 ', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xd0\xaa\x00\x01\xb6$', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xd4\xaa\x00\x01%~', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xd8\xaa\x00\x01\xd9\xc7', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xdc\xaa\x00\x01?\xe2', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xe0\xaa\x00\x01\xdf\x12', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xe4\xaa\x00\x01\xb1\xa8', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xe8\xaa\x00\x01P\x10', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xec\xaa\x00\x01\x19\x8b', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xf0\xaa\x00\x01\r\xc8', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xf4\xaa\x00\x01.2', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xf8\xaa\x00\x01V\xab', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xfc\xaa\x00\x018}', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xfc\x00\x00\x01i{', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xfc\x01\x00\x01\xc8\x98', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xfc\x02\x00\x01\xe8,', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xfc\x03\x00\x01\xf1\xf2', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xfc\x04\x00\x01\xd2\xaa', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xff\xfb\x00\x01a\x93', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xff\xfc\x00\x01]\xf3', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xff\xfd\x00\x01\x9bE', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xff\xfe\x00\x01\xd9\x1b', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xff\xff\x00\x01\xe8R', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xfd\xfc\x00\x01\xa1E', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xfd\xfd\x00\x014\x1b', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xfd\xfe\x00\x01\xac\xcd', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xfd\xff\x00\x01.\xe0', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xfd\x00\x00\x01\xb4d', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xfe\x00\x00\x01)\x1d', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xff\x00\x00\x01\xbd\x08', 50), None,
Value(b'\x00""w3\x00!', 49), Value(b'\x03\xfc\x00\x00\x01i{', 50)]
        objects = [Field("test0", b"\x11\x11", fuzz="std"), Field("test1", b"\x22", fuzz="std"),
                   Field("test2", b"\x33\x33", slice(9, 17), fuzz="std"), Field("length", b"\x00\x00", fuzz="std")]
        functions = [length("length", "test0", "test2")]
        d0 = Dizz("test0", objects, functions, fuzz="std")
        objects = [Field("test0", b"\xff", fuzz="full"), Field("test1", b"\xaa", 10, fuzz="std"),
                   Field("test2", b"\x00\x00"), Field("checksum", b"\x00\x00")]
        functions = [checksum("checksum", "test0", "test2", "sha1")]
        d1 = Dizz("test1", objects, functions, fuzz="std")

        def inc(interaction_iterator, dizzy_iterator, response):
            i = int.from_bytes(dizzy_iterator["test2"].byte, "big") + 1
            dizzy_iterator["test2"] = i.to_bytes(2, "big")

        act = Interaction("Test", [d0, d1], {1: [inc]})
        self.assertEqual([i for i in act], expected)
    def test_length_std(self):
        objects = [Field("test0", b"\x11\x11", fuzz="std"), Field("test1", b"\x22", fuzz="std"),
                   Field("test2", b"\x33\x33", slice(9, 17), fuzz="std"), Field("length", b"\x00\x00", fuzz="std")]
        functions = [length("length", "test0", "test2")]
        d0 = Dizz("test0", objects, functions, fuzz="std")
        objects = [Field("test0", b"\xff", fuzz="full"), Field("test1", b"\xaa", 10, fuzz="std"),
                   Field("test2", b"\x00\x00"), Field("checksum", b"\x00\x00")]
        functions = [checksum("checksum", "test0", "test2", "sha1")]
        d1 = Dizz("test1", objects, functions, fuzz="std")
        objects = [Field("test0", b"\xff", fuzz="full"), Field("test1", b"\xaa", 10, fuzz="std"),
                   Field("test2", b"\x00\x00"), Field("checksum", b"\x00\x00")]
        functions = [checksum("checksum", "test0", "test2", "sha1")]
        d2 = Dizz("test2", objects, functions, fuzz="std")

        def inc(interaction_iterator, dizzy_iterator, response):
            i = int.from_bytes(dizzy_iterator["test2"].byte, "big") + 1
            dizzy_iterator["test2"] = Value(i.to_bytes(2, "big"))

        act = Interaction("Test", [d0, d1, d2], {1: [inc]}, fuzz="std")
        self.assertEqual(len([i for i in act]), act.length())
    def test_iterations_std(self):
        objects = [Field("test0", b"\x11\x11", fuzz="std"), Field("test1", b"\x22", fuzz="std"),
                   Field("test2", b"\x33\x33", slice(9, 17), fuzz="std"), Field("length", b"\x00\x00", fuzz="std")]
        functions = [length("length", "test0", "test2")]
        d0 = Dizz("test0", objects, functions, fuzz="std")
        objects = [Field("test0", b"\xff", fuzz="full"), Field("test1", b"\xaa", 10, fuzz="std"),
                   Field("test2", b"\x00\x00"), Field("checksum", b"\x00\x00")]
        functions = [checksum("checksum", "test0", "test2", "sha1")]
        d1 = Dizz("test1", objects, functions, fuzz="std")
        objects = [Field("test0", b"\xff", fuzz="full"), Field("test1", b"\xaa", 10, fuzz="std"),
                   Field("test2", b"\x00\x00"), Field("checksum", b"\x00\x00")]
        functions = [checksum("checksum", "test0", "test2", "sha1")]
        d2 = Dizz("test2", objects, functions, fuzz="std")

        def inc(_, dizzy_iterator, __):
            i = int.from_bytes(dizzy_iterator["test2"].byte, "big") + 1
            dizzy_iterator["test2"] = Value(i.to_bytes(2, "big"))

        act = Interaction("Test", [d0, d1, d2], {1: [inc]}, fuzz="std")
        iterations_1 = 1
        iterations_2 = 0
        for obj in act:
            if obj is None:
                iterations_1 += 1
            else:
                iterations_2 += 1
        self.assertEqual(iterations_1, act.iterations())
        self.assertEqual(iterations_2 // len(act.objects), act.iterations())
    def test_length_full(self):
        objects = [Field("test0", b"\x11\x11", fuzz="std"), Field("test1", b"\x22", fuzz="std"),
                   Field("test2", b"\x33\x33", slice(9, 17), fuzz="std"), Field("length", b"\x00\x00", fuzz="std")]
        functions = [length("length", "test0", "test2")]
        d0 = Dizz("test0", objects, functions, fuzz="std")
        objects = [Field("test0", b"\xff", fuzz="full"), Field("test1", b"\xaa", 10, fuzz="std"),
                   Field("test2", b"\x00\x00"), Field("checksum", b"\x00\x00")]
        functions = [checksum("checksum", "test0", "test2", "sha1")]
        d1 = Dizz("test1", objects, functions, fuzz="std")

        def inc(interaction_iterator, dizzy_iterator, response):
            i = int.from_bytes(dizzy_iterator["test2"].byte, "big") + 1
            dizzy_iterator["test2"] = Value(i.to_bytes(2, "big"))

        act = Interaction("Test", [d0, d1], {1: [inc]}, fuzz="full")
        self.assertEqual(len([i for i in act]), act.length())
    def test_iterations_full(self):
        objects = [Field("test0", b"\x11\x11", fuzz="std"), Field("test1", b"\x22", fuzz="std"),
                   Field("test2", b"\x33\x33", slice(9, 17), fuzz="std"), Field("length", b"\x00\x00", fuzz="std")]
        functions = [length("length", "test0", "test2")]
        d0 = Dizz("test0", objects, functions, fuzz="std")
        objects = [Field("test0", b"\xff", fuzz="full"), Field("test1", b"\xaa", 10, fuzz="std"),
                   Field("test2", b"\x00\x00"), Field("checksum", b"\x00\x00")]
        functions = [checksum("checksum", "test0", "test2", "sha1")]
        d1 = Dizz("test1", objects, functions, fuzz="std")

        def inc(_, dizzy_iterator, __):
            i = int.from_bytes(dizzy_iterator["test2"].byte, "big") + 1
            dizzy_iterator["test2"] = Value(i.to_bytes(2, "big"))

        act = Interaction("Test", [d0, d1], {1: [inc]}, fuzz="full")
        iterations_1 = 1
        iterations_2 = 0
        for obj in act:
            if obj is None:
                iterations_1 += 1
            else:
                iterations_2 += 1
        self.assertEqual(iterations_1, act.iterations())
        self.assertEqual(iterations_2 // len(act.objects), act.iterations())
    def test_start_at_std(self):
        objects = [Field("test0", b"\x11\x11", fuzz="std"), Field("test1", b"\x22", fuzz="std"),
                   Field("test2", b"\x33\x33", slice(9, 11), fuzz="std"), Field("length", b"\x00\x00")]
        functions = [length("length", "test0", "test2")]
        d0 = Dizz("test0", objects, functions, fuzz="std")
        objects = [Field("test0", b"\xff"), Field("test1", b"\xaa", 9, fuzz="std"), Field("test2", b"\x00\x00")]
        functions = list()
        d1 = Dizz("test1", objects, functions, fuzz="std")
        dizz_objects = [d0, d1]
        act = Interaction("Test", dizz_objects, {}, fuzz="std")
        expected = list(act)
        for i in range(act.iterations()):
            got = list(Interaction("Test", dizz_objects, {}, fuzz="std", start_at=i))
            self.assertListEqual(expected[(i * (len(dizz_objects) + 1)):], got)
    def test_start_at_full(self):
        objects = [Field("test0", b"\x11\x11", fuzz="std"), Field("test1", b"\x22", fuzz="std"),
                   Field("test2", b"\x33\x33", slice(9, 11), fuzz="std"), Field("length", b"\x00\x00")]
        functions = [length("length", "test0", "test2")]
        d0 = Dizz("test0", objects, functions, fuzz="std")
        objects = [Field("test0", b"\xff"), Field("test1", b"\xaa", 9, fuzz="std"), Field("test2", b"\x00\x00")]
        functions = list()
        d1 = Dizz("test1", objects, functions, fuzz="std")
        dizz_objects = [d0, d1]
        act = Interaction("Test", dizz_objects, {}, fuzz="full")
        expected = list(act)
        for i in range(act.iterations()):
            got = list(Interaction("Test", dizz_objects, {}, fuzz="full", start_at=i))
            self.assertListEqual(expected[(i * (len(dizz_objects) + 1)):], got)

if __name__ == '__main__':
    main()
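The two `test_iterations_*` cases above rely on the shape of the stream an `Interaction` yields: one item per Dizz object for every fuzzing iteration, with a `None` marker separating consecutive iterations (and no marker after the last one, which is why `iterations_1` is seeded at 1). A minimal, dependency-free sketch of that invariant, with a hypothetical `FakeInteraction` standing in for the real dizzy class:

```python
class FakeInteraction:
    """Hypothetical stand-in: yields each object once per iteration,
    with a None separator *between* iterations (not after the last)."""

    def __init__(self, objects, iterations):
        self.objects = objects
        self._iterations = iterations

    def iterations(self):
        return self._iterations

    def __iter__(self):
        for n in range(self._iterations):
            if n:
                yield None              # separator before every iteration but the first
            for obj in self.objects:
                yield obj


act = FakeInteraction(["d0", "d1", "d2"], iterations=5)
iterations_1 = 1                        # seeded at 1: one fewer None than iterations
iterations_2 = 0
for obj in act:
    if obj is None:
        iterations_1 += 1
    else:
        iterations_2 += 1

assert iterations_1 == act.iterations()
assert iterations_2 // len(act.objects) == act.iterations()
```

This also explains the `i * (len(dizz_objects) + 1)` slice in the `start_at` tests: every completed iteration before index `i` contributes `len(dizz_objects)` items plus one separator.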
8a6219b36f724f9522aacf6afc0fff48900928ed | 13,260 | py | Python | lib/model/rpn/proposal_layer.py | miyamotost/faster-rcnn.pytorch | 374396d87c85ffe2373d2ae13dc002ea6bb08734 | ["MIT"] | null | null | null | lib/model/rpn/proposal_layer.py | miyamotost/faster-rcnn.pytorch | 374396d87c85ffe2373d2ae13dc002ea6bb08734 | ["MIT"] | null | null | null | lib/model/rpn/proposal_layer.py | miyamotost/faster-rcnn.pytorch | 374396d87c85ffe2373d2ae13dc002ea6bb08734 | ["MIT"] | null | null | null
# --------------------------------------------------------
# Faster R-CNN
# Copyright (c) 2015 Microsoft
# Licensed under The MIT License [see LICENSE for details]
# Written by Ross Girshick and Sean Bell
# --------------------------------------------------------
# --------------------------------------------------------
# Reorganized and modified by Jianwei Yang and Jiasen Lu
# --------------------------------------------------------
import torch
import torch.nn as nn
import numpy as np
import math
import yaml
from model.utils.config import cfg
from .generate_anchors import generate_anchors
from .bbox_transform import bbox_transform_inv, clip_boxes, clip_boxes_batch
from model.nms.nms_wrapper import nms
import pdb
DEBUG = False


class _ProposalLayer(nn.Module):
    """
    Outputs object detection proposals by applying estimated bounding-box
    transformations to a set of regular boxes (called "anchors").
    """

    def __init__(self, feat_stride, scales, ratios):
        super(_ProposalLayer, self).__init__()

        self._feat_stride = feat_stride
        self._anchors = torch.from_numpy(generate_anchors(scales=np.array(scales),
                                                          ratios=np.array(ratios))).float()
        self._num_anchors = self._anchors.size(0)

        # rois blob: holds R regions of interest, each is a 5-tuple
        # (n, x1, y1, x2, y2) specifying an image batch index n and a
        # rectangle (x1, y1, x2, y2)
        # top[0].reshape(1, 5)
        #
        # # scores blob: holds scores for R regions of interest
        # if len(top) > 1:
        #     top[1].reshape(1, 1, 1, 1)

    def forward(self, input):

        # Algorithm:
        #
        # for each (H, W) location i
        #     generate A anchor boxes centered on cell i
        #     apply predicted bbox deltas at cell i to each of the A anchors
        # clip predicted boxes to image
        # remove predicted boxes with either height or width < threshold
        # sort all (proposal, score) pairs by score from highest to lowest
        # take top pre_nms_topN proposals before NMS
        # apply NMS with threshold 0.7 to remaining proposals
        # take after_nms_topN proposals after NMS
        # return the top proposals (-> RoIs top, scores top)

        # the first set of _num_anchors channels are bg probs
        # the second set are the fg probs
        scores = input[0][:, self._num_anchors:, :, :]
        bbox_deltas = input[1]
        im_info = input[2]
        cfg_key = input[3]

        pre_nms_topN = cfg[cfg_key].RPN_PRE_NMS_TOP_N
        post_nms_topN = cfg[cfg_key].RPN_POST_NMS_TOP_N
        nms_thresh = cfg[cfg_key].RPN_NMS_THRESH
        min_size = cfg[cfg_key].RPN_MIN_SIZE

        batch_size = bbox_deltas.size(0)

        feat_height, feat_width = scores.size(2), scores.size(3)
        shift_x = np.arange(0, feat_width) * self._feat_stride
        shift_y = np.arange(0, feat_height) * self._feat_stride
        shift_x, shift_y = np.meshgrid(shift_x, shift_y)
        shifts = torch.from_numpy(np.vstack((shift_x.ravel(), shift_y.ravel(),
                                             shift_x.ravel(), shift_y.ravel())).transpose())
        shifts = shifts.contiguous().type_as(scores).float()

        A = self._num_anchors
        K = shifts.size(0)

        self._anchors = self._anchors.type_as(scores)
        # anchors = self._anchors.view(1, A, 4) + shifts.view(1, K, 4).permute(1, 0, 2).contiguous()
        anchors = self._anchors.view(1, A, 4) + shifts.view(K, 1, 4)
        anchors = anchors.view(1, K * A, 4).expand(batch_size, K * A, 4)

        # Transpose and reshape predicted bbox transformations to get them
        # into the same order as the anchors:
        bbox_deltas = bbox_deltas.permute(0, 2, 3, 1).contiguous()
        bbox_deltas = bbox_deltas.view(batch_size, -1, 4)

        # Same story for the scores:
        scores = scores.permute(0, 2, 3, 1).contiguous()
        scores = scores.view(batch_size, -1)

        # Convert anchors into proposals via bbox transformations
        proposals = bbox_transform_inv(anchors, bbox_deltas, batch_size)

        # 2. clip predicted boxes to image
        proposals = clip_boxes(proposals, im_info, batch_size)
        # proposals = clip_boxes_batch(proposals, im_info, batch_size)

        # assign the score to 0 if it's not kept.
        # keep = self._filter_boxes(proposals, min_size * im_info[:, 2])

        # trim keep index to make it equal over batch
        # keep_idx = torch.cat(tuple(keep_idx), 0)

        # scores_keep = scores.view(-1)[keep_idx].view(batch_size, trim_size)
        # proposals_keep = proposals.view(-1, 4)[keep_idx, :].contiguous().view(batch_size, trim_size, 4)
        # _, order = torch.sort(scores_keep, 1, True)

        scores_keep = scores
        proposals_keep = proposals
        _, order = torch.sort(scores_keep, 1, True)

        output = scores.new(batch_size, post_nms_topN, 5).zero_()
        for i in range(batch_size):
            # # 3. remove predicted boxes with either height or width < threshold
            # # (NOTE: convert min_size to input image scale stored in im_info[2])
            proposals_single = proposals_keep[i]
            scores_single = scores_keep[i]

            # # 4. sort all (proposal, score) pairs by score from highest to lowest
            # # 5. take top pre_nms_topN (e.g. 6000)
            order_single = order[i]

            if pre_nms_topN > 0 and pre_nms_topN < scores_keep.numel():
                order_single = order_single[:pre_nms_topN]

            proposals_single = proposals_single[order_single, :]
            scores_single = scores_single[order_single].view(-1, 1)

            # 6. apply nms (e.g. threshold = 0.7)
            # 7. take after_nms_topN (e.g. 300)
            # 8. return the top proposals (-> RoIs top)
            keep_idx_i = nms(torch.cat((proposals_single, scores_single), 1), nms_thresh, force_cpu=not cfg.USE_GPU_NMS)
            keep_idx_i = keep_idx_i.long().view(-1)

            if post_nms_topN > 0:
                keep_idx_i = keep_idx_i[:post_nms_topN]
            proposals_single = proposals_single[keep_idx_i, :]
            scores_single = scores_single[keep_idx_i, :]

            # padding 0 at the end.
            num_proposal = proposals_single.size(0)
            output[i, :, 0] = i
            output[i, :num_proposal, 1:] = proposals_single

        return output

    def backward(self, top, propagate_down, bottom):
        """This layer does not propagate gradients."""
        pass

    def reshape(self, bottom, top):
        """Reshaping happens during the call to forward."""
        pass

    def _filter_boxes(self, boxes, min_size):
        """Remove all boxes with any side smaller than min_size."""
        ws = boxes[:, :, 2] - boxes[:, :, 0] + 1
        hs = boxes[:, :, 3] - boxes[:, :, 1] + 1
        keep = ((ws >= min_size.view(-1, 1).expand_as(ws)) & (hs >= min_size.view(-1, 1).expand_as(hs)))
        return keep
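Steps 4–8 of the `forward` pipeline (sort proposals by score, keep `pre_nms_topN`, run greedy non-maximum suppression, keep `post_nms_topN`) can be sketched without torch. The `iou` helper and the literal thresholds below are illustrative stand-ins for the compiled `nms(...)` call and the `cfg[cfg_key].RPN_*` values — this is not the layer's actual implementation:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter)


def propose(boxes, scores, pre_nms_topN, post_nms_topN, nms_thresh):
    # 4./5. sort by score (descending) and keep the top pre_nms_topN indices
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])[:pre_nms_topN]
    keep = []
    # 6. greedy NMS: keep a box only if it overlaps no already-kept box too much
    for i in order:
        if all(iou(boxes[i], boxes[j]) < nms_thresh for j in keep):
            keep.append(i)
    # 7./8. truncate to post_nms_topN and return the surviving proposals
    return [boxes[i] for i in keep[:post_nms_topN]]


boxes = [(0, 0, 10, 10), (0, 1, 10, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
props = propose(boxes, scores, pre_nms_topN=3, post_nms_topN=2, nms_thresh=0.7)
assert props == [(0, 0, 10, 10), (20, 20, 30, 30)]  # near-duplicate of box 0 suppressed
```

The real layer additionally zero-pads each batch entry to exactly `post_nms_topN` rows and prefixes the batch index, so downstream RoI pooling sees a fixed-shape `(batch, post_nms_topN, 5)` tensor.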


class _ProposalLayer2(nn.Module):
    """
    Outputs object detection proposals by applying estimated bounding-box
    transformations to a set of regular boxes (called "anchors").
    """

    def __init__(self, feat_stride, scales, ratios):
        super(_ProposalLayer2, self).__init__()

        self._feat_stride = feat_stride
        self._anchors = torch.from_numpy(generate_anchors(scales=np.array(scales),
                                                          ratios=np.array(ratios))).float()
        self._num_anchors = self._anchors.size(0)

        # rois blob: holds R regions of interest, each is a 5-tuple
        # (n, x1, y1, x2, y2) specifying an image batch index n and a
        # rectangle (x1, y1, x2, y2)
        # top[0].reshape(1, 5)
        #
        # # scores blob: holds scores for R regions of interest
        # if len(top) > 1:
        #     top[1].reshape(1, 1, 1, 1)

    def forward(self, input0, input1, input2):

        # Algorithm:
        #
        # for each (H, W) location i
        #     generate A anchor boxes centered on cell i
        #     apply predicted bbox deltas at cell i to each of the A anchors
        # clip predicted boxes to image
        # remove predicted boxes with either height or width < threshold
        # sort all (proposal, score) pairs by score from highest to lowest
        # take top pre_nms_topN proposals before NMS
        # apply NMS with threshold 0.7 to remaining proposals
        # take after_nms_topN proposals after NMS
        # return the top proposals (-> RoIs top, scores top)

        # the first set of _num_anchors channels are bg probs
        # the second set are the fg probs
        scores = input0[:, self._num_anchors:, :, :]
        bbox_deltas = input1
        im_info = input2
        cfg_key = "TEST"

        pre_nms_topN = cfg[cfg_key].RPN_PRE_NMS_TOP_N
        post_nms_topN = cfg[cfg_key].RPN_POST_NMS_TOP_N
        nms_thresh = cfg[cfg_key].RPN_NMS_THRESH
        min_size = cfg[cfg_key].RPN_MIN_SIZE

        batch_size = bbox_deltas.size(0)

        feat_height, feat_width = scores.size(2), scores.size(3)
        shift_x = np.arange(0, feat_width) * self._feat_stride
        shift_y = np.arange(0, feat_height) * self._feat_stride
        shift_x, shift_y = np.meshgrid(shift_x, shift_y)
        shifts = torch.from_numpy(np.vstack((shift_x.ravel(), shift_y.ravel(),
                                             shift_x.ravel(), shift_y.ravel())).transpose())
        shifts = shifts.contiguous().type_as(scores).float()

        A = self._num_anchors
        K = shifts.size(0)

        self._anchors = self._anchors.type_as(scores)
        # anchors = self._anchors.view(1, A, 4) + shifts.view(1, K, 4).permute(1, 0, 2).contiguous()
        anchors = self._anchors.view(1, A, 4) + shifts.view(K, 1, 4)
        anchors = anchors.view(1, K * A, 4).expand(batch_size, K * A, 4)

        # Transpose and reshape predicted bbox transformations to get them
        # into the same order as the anchors:
        bbox_deltas = bbox_deltas.permute(0, 2, 3, 1).contiguous()
        bbox_deltas = bbox_deltas.view(batch_size, -1, 4)

        # Same story for the scores:
        scores = scores.permute(0, 2, 3, 1).contiguous()
        scores = scores.view(batch_size, -1)

        # Convert anchors into proposals via bbox transformations
        proposals = bbox_transform_inv(anchors, bbox_deltas, batch_size)

        # 2. clip predicted boxes to image
        proposals = clip_boxes(proposals, im_info, batch_size)
        # proposals = clip_boxes_batch(proposals, im_info, batch_size)

        # assign the score to 0 if it's not kept.
        # keep = self._filter_boxes(proposals, min_size * im_info[:, 2])

        # trim keep index to make it equal over batch
        # keep_idx = torch.cat(tuple(keep_idx), 0)

        # scores_keep = scores.view(-1)[keep_idx].view(batch_size, trim_size)
        # proposals_keep = proposals.view(-1, 4)[keep_idx, :].contiguous().view(batch_size, trim_size, 4)
        # _, order = torch.sort(scores_keep, 1, True)

        scores_keep = scores
        proposals_keep = proposals
        _, order = torch.sort(scores_keep, 1, True)

        output = scores.new(batch_size, post_nms_topN, 5).zero_()
        for i in range(batch_size):
            # # 3. remove predicted boxes with either height or width < threshold
            # # (NOTE: convert min_size to input image scale stored in im_info[2])
            proposals_single = proposals_keep[i]
            scores_single = scores_keep[i]

            # # 4. sort all (proposal, score) pairs by score from highest to lowest
            # # 5. take top pre_nms_topN (e.g. 6000)
            order_single = order[i]

            if pre_nms_topN > 0 and pre_nms_topN < scores_keep.numel():
                order_single = order_single[:pre_nms_topN]

            proposals_single = proposals_single[order_single, :]
            scores_single = scores_single[order_single].view(-1, 1)

            # 6. apply nms (e.g. threshold = 0.7)
            # 7. take after_nms_topN (e.g. 300)
            # 8. return the top proposals (-> RoIs top)
            keep_idx_i = nms(torch.cat((proposals_single, scores_single), 1), nms_thresh, force_cpu=not cfg.USE_GPU_NMS)
            keep_idx_i = keep_idx_i.long().view(-1)

            if post_nms_topN > 0:
                keep_idx_i = keep_idx_i[:post_nms_topN]
            proposals_single = proposals_single[keep_idx_i, :]
            scores_single = scores_single[keep_idx_i, :]

            # padding 0 at the end.
            num_proposal = proposals_single.size(0)
            output[i, :, 0] = i
            output[i, :num_proposal, 1:] = proposals_single

        return output

    def backward(self, top, propagate_down, bottom):
        """This layer does not propagate gradients."""
        pass

    def reshape(self, bottom, top):
        """Reshaping happens during the call to forward."""
        pass

    def _filter_boxes(self, boxes, min_size):
        """Remove all boxes with any side smaller than min_size."""
        ws = boxes[:, :, 2] - boxes[:, :, 0] + 1
        hs = boxes[:, :, 3] - boxes[:, :, 1] + 1
        keep = ((ws >= min_size.view(-1, 1).expand_as(ws)) & (hs >= min_size.view(-1, 1).expand_as(hs)))
        return keep
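The broadcast `self._anchors.view(1, A, 4) + shifts.view(K, 1, 4)` used in both classes is equivalent to a nested loop over the K feature-map cells and the A base anchors. A dependency-free sketch of that enumeration (plain tuples instead of tensors; the base anchors and grid sizes are illustrative, not taken from any real config):

```python
def enumerate_anchors(base_anchors, feat_height, feat_width, feat_stride):
    """Plain-Python equivalent of the K*A anchor grid built via tensor broadcasting."""
    anchors = []
    for y in range(feat_height):        # shift_y = np.arange(0, feat_height) * feat_stride
        for x in range(feat_width):     # shift_x = np.arange(0, feat_width) * feat_stride
            sx, sy = x * feat_stride, y * feat_stride
            for (x1, y1, x2, y2) in base_anchors:   # A base anchors per cell
                anchors.append((x1 + sx, y1 + sy, x2 + sx, y2 + sy))
    return anchors                      # len == K * A, with K = feat_height * feat_width


base = [(-3, -3, 4, 4), (-7, -7, 8, 8)]             # illustrative base anchors (A = 2)
grid = enumerate_anchors(base, feat_height=2, feat_width=3, feat_stride=16)
assert len(grid) == 2 * 3 * 2                       # K * A = 12
assert grid[0] == (-3, -3, 4, 4)                    # cell (0, 0), first anchor
assert grid[2] == (13, -3, 20, 4)                   # cell (0, 1): shifted by feat_stride in x
```

The y-outer/x-inner loop order mirrors how `np.meshgrid(shift_x, shift_y)` followed by `ravel()` enumerates cells row by row, so the flattened index matches the `(H, W, A)` ordering the score and delta tensors are permuted into.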
8a7c242242e10ebbc9a164675a985b97d08a5c4d | 140,469 | py | Python | lookerapi/apis/project_api.py | jcarah/python_sdk | 3bff34d04a828c940c3f93055e10b6a0095c2327 | ["MIT"] | null | null | null | lookerapi/apis/project_api.py | jcarah/python_sdk | 3bff34d04a828c940c3f93055e10b6a0095c2327 | ["MIT"] | null | null | null | lookerapi/apis/project_api.py | jcarah/python_sdk | 3bff34d04a828c940c3f93055e10b6a0095c2327 | ["MIT"] | null | null | null
"""
Looker API 3.1 Reference
### Authorization The Looker API uses Looker **API3** credentials for authorization and access control. Looker admins can create API3 credentials on Looker's **Admin/Users** page. Pass API3 credentials to the **/login** endpoint to obtain a temporary access_token. Include that access_token in the Authorization header of Looker API requests. For details, see [Looker API Authorization](https://looker.com/docs/r/api/authorization) ### Client SDKs The Looker API is a RESTful system that should be usable by any programming language capable of making HTTPS requests. Client SDKs for a variety of programming languages can be generated from the Looker API's Swagger JSON metadata to streamline use of the Looker API in your applications. A client SDK for Ruby is available as an example. For more information, see [Looker API Client SDKs](https://looker.com/docs/r/api/client_sdks) ### Try It Out! The 'api-docs' page served by the Looker instance includes 'Try It Out!' buttons for each API method. After logging in with API3 credentials, you can use the \"Try It Out!\" buttons to call the API directly from the documentation page to interactively explore API features and responses. Note! With great power comes great responsibility: The \"Try It Out!\" button makes API calls to your live Looker instance. Be especially careful with destructive API operations such as `delete_user` or similar. There is no \"undo\" for API operations. ### Versioning Future releases of Looker will expand this API release-by-release to securely expose more and more of the core power of Looker to API client applications. API endpoints marked as \"beta\" may receive breaking changes without warning (but we will try to avoid doing that). Stable (non-beta) API endpoints should not receive breaking changes in future releases. For more information, see [Looker API Versioning](https://looker.com/docs/r/api/versioning) This **API 3.1** is in active development. 
This is where support for new Looker features will appear as non-breaking additions - new functions, new optional parameters on existing functions, or new optional properties in existing types. Additive changes should not impact your existing application code that calls the Looker API. Your existing application code will not be aware of any new Looker API functionality until you choose to upgrade your app to use a newer Looker API client SDK release. The following are a few examples of noteworthy items that have changed between API 3.0 and API 3.1. For more comprehensive coverage of API changes, please see the release notes for your Looker release. ### Examples of new things added in API 3.1: * Dashboard construction APIs * Themes and custom color collections APIs * Create and run SQL_runner queries * Create and run merged results queries * Create and modify dashboard filters * Create and modify password requirements ### Deprecated in API 3.0 The following functions and properties have been deprecated in API 3.0. They continue to exist and work in API 3.0 for the next several Looker releases but they have not been carried forward to API 3.1: * Dashboard Prefetch functions * User access_filter functions * User API 1.0 credentials functions * Space.is_root and Space.is_user_root properties. Use Space.is_shared_root and Space.is_users_root instead. ### Semantic changes in API 3.1: * `all_looks` no longer includes soft-deleted looks, matching `all_dashboards` behavior. You can find soft-deleted looks using `search_looks` with the `deleted` param set to True. * `all_spaces` no longer includes duplicate items * `search_users` no longer accepts Y,y,1,0,N,n for Boolean params, only \"true\" and \"false\". * For greater client and network compatibility, `render_task_results` now returns HTTP status ***202 Accepted*** instead of HTTP status ***102 Processing*** * `all_running_queries` and `kill_query` functions have moved into the `Query` function group. 
If you have application code which relies on the old behavior of the APIs above, you may continue using the API 3.0 functions in this Looker release. We strongly suggest you update your code to use API 3.1 analogs as soon as possible.
OpenAPI spec version: 3.1.0
Contact: support@looker.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import

import sys
import os
import re

# python 2 and python 3 compatibility library
from six import iteritems

from ..configuration import Configuration
from ..api_client import ApiClient


class ProjectApi(object):
    """
    NOTE: This class is auto generated by the swagger code generator program.
    Do not edit the class manually.
    Ref: https://github.com/swagger-api/swagger-codegen
    """

    def __init__(self, api_client=None):
        config = Configuration()
        if api_client:
            self.api_client = api_client
        else:
            if not config.api_client:
                config.api_client = ApiClient()
            self.api_client = config.api_client
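The constructor above either takes an explicit `ApiClient` or lazily creates one shared default on first use. A minimal toy sketch of that pattern, with stand-in classes (the real `Configuration` is a singleton, which a class attribute approximates here):

```python
class ApiClient(object):
    """Stand-in for the generated ApiClient (hypothetical toy class)."""
    pass

class Configuration(object):
    # In the real SDK Configuration is a singleton, so this slot is shared;
    # a class attribute gives the toy the same behavior.
    api_client = None

class ProjectApi(object):
    def __init__(self, api_client=None):
        if api_client:
            self.api_client = api_client
        else:
            # Lazily create one shared default client on first use.
            if not Configuration.api_client:
                Configuration.api_client = ApiClient()
            self.api_client = Configuration.api_client

a = ProjectApi()
b = ProjectApi()          # reuses the same default client as `a`
custom = ApiClient()
c = ProjectApi(custom)    # an explicitly supplied client wins
```

This lets many API classes share one HTTP client (connection pool, auth state) unless a caller injects its own.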
    def all_git_branches(self, project_id, **kwargs):
        """
        Get All Git Branches
        ### Get All Git Branches Returns a list of git branches in the project repository
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.all_git_branches(project_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str project_id: Project Id (required)
        :return: list[GitBranch]
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.all_git_branches_with_http_info(project_id, **kwargs)
        else:
            (data) = self.all_git_branches_with_http_info(project_id, **kwargs)
            return data
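Every public method in this class follows the same dispatch: with no `callback` it blocks and returns the response data; with a `callback` it returns a thread and the callback receives the data. A hedged toy sketch of that contract (the real work happens inside `ApiClient.call_api`, which spawns the thread; `ToyApi` and its canned branch names are illustrative only):

```python
from threading import Thread

class ToyApi(object):
    """Hypothetical stand-in showing the generated sync/async dispatch."""

    def all_git_branches(self, project_id, **kwargs):
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            # async: caller gets a thread, the callback receives the data
            return self.all_git_branches_with_http_info(project_id, **kwargs)
        else:
            data = self.all_git_branches_with_http_info(project_id, **kwargs)
            return data

    def all_git_branches_with_http_info(self, project_id, **kwargs):
        result = ['master', 'dev-branch']  # canned response for the sketch
        callback = kwargs.get('callback')
        if callback:
            t = Thread(target=callback, args=(result,))
            t.start()
            return t
        return result

api = ToyApi()
branches = api.all_git_branches('my_project')            # synchronous call

received = []
thread = api.all_git_branches('my_project', callback=received.append)
thread.join()                                            # wait for the callback
```

Callers who need headers and status codes as well as the body call the `*_with_http_info` variant directly.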
    def all_git_branches_with_http_info(self, project_id, **kwargs):
        """
        Get All Git Branches
        ### Get All Git Branches Returns a list of git branches in the project repository
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.all_git_branches_with_http_info(project_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str project_id: Project Id (required)
        :return: list[GitBranch]
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['project_id']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method all_git_branches" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'project_id' is set
        if ('project_id' not in params) or (params['project_id'] is None):
            raise ValueError("Missing the required parameter `project_id` when calling `all_git_branches`")

        collection_formats = {}

        resource_path = '/projects/{project_id}/git_branches'.replace('{format}', 'json')
        path_params = {}
        if 'project_id' in params:
            path_params['project_id'] = params['project_id']

        query_params = {}

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api(resource_path, 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='list[GitBranch]',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
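Each `*_with_http_info` method starts by rejecting unknown keyword arguments against `all_params` plus the four framework kwargs. That validation step can be factored out as a small sketch (`check_kwargs` is a hypothetical helper, not part of the generated code; the generated code iterates with `six.iteritems` for Python 2 compatibility, where plain dict iteration is the Python 3 equivalent):

```python
def check_kwargs(method_name, all_params, kwargs):
    """Reject unexpected keyword arguments, as each generated
    *_with_http_info method does before building the request."""
    allowed = set(all_params) | {
        'callback', '_return_http_data_only',
        '_preload_content', '_request_timeout',
    }
    for key in kwargs:
        if key not in allowed:
            raise TypeError(
                "Got an unexpected keyword argument '%s'"
                " to method %s" % (key, method_name)
            )

# Framework kwargs are always accepted...
check_kwargs('all_git_branches', ['project_id'], {'_request_timeout': 5})

# ...while anything unrecognized fails fast with a TypeError.
message = ''
try:
    check_kwargs('all_git_branches', ['project_id'], {'bogus': 1})
except TypeError as exc:
    message = str(exc)
```

Failing fast here means a typo in a parameter name surfaces locally rather than as a silently ignored query param.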
    def all_git_connection_tests(self, project_id, **kwargs):
        """
        Get All Git Connection Tests
        ### Get All Git Connection Tests Returns a list of tests which can be run against a project's (or the dependency project for the provided remote_url) git connection. Call [Run Git Connection Test](#!/Project/run_git_connection_test) to execute each test in sequence. Tests are ordered by increasing specificity. Tests should be run in the order returned because later tests require functionality tested by tests earlier in the test list. For example, a late-stage test for write access is meaningless if connecting to the git server (an early test) is failing.
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.all_git_connection_tests(project_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str project_id: Project Id (required)
        :param str remote_url: (Optional: leave blank for root project) The remote url for remote dependency to test.
        :return: list[GitConnectionTest]
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.all_git_connection_tests_with_http_info(project_id, **kwargs)
        else:
            (data) = self.all_git_connection_tests_with_http_info(project_id, **kwargs)
            return data
    def all_git_connection_tests_with_http_info(self, project_id, **kwargs):
        """
        Get All Git Connection Tests
        ### Get All Git Connection Tests Returns a list of tests which can be run against a project's (or the dependency project for the provided remote_url) git connection. Call [Run Git Connection Test](#!/Project/run_git_connection_test) to execute each test in sequence. Tests are ordered by increasing specificity. Tests should be run in the order returned because later tests require functionality tested by tests earlier in the test list. For example, a late-stage test for write access is meaningless if connecting to the git server (an early test) is failing.
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.all_git_connection_tests_with_http_info(project_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str project_id: Project Id (required)
        :param str remote_url: (Optional: leave blank for root project) The remote url for remote dependency to test.
        :return: list[GitConnectionTest]
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['project_id', 'remote_url']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method all_git_connection_tests" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'project_id' is set
        if ('project_id' not in params) or (params['project_id'] is None):
            raise ValueError("Missing the required parameter `project_id` when calling `all_git_connection_tests`")

        collection_formats = {}

        resource_path = '/projects/{project_id}/git_connection_tests'.replace('{format}', 'json')
        path_params = {}
        if 'project_id' in params:
            path_params['project_id'] = params['project_id']

        query_params = {}
        if 'remote_url' in params:
            query_params['remote_url'] = params['remote_url']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api(resource_path, 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='list[GitConnectionTest]',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
    def all_project_files(self, project_id, **kwargs):
        """
        Get All Project Files
        ### Get All Project Files Returns a list of the files in the project
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.all_project_files(project_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str project_id: Project Id (required)
        :param str fields: Requested fields
        :return: list[ProjectFile]
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.all_project_files_with_http_info(project_id, **kwargs)
        else:
            (data) = self.all_project_files_with_http_info(project_id, **kwargs)
            return data
    def all_project_files_with_http_info(self, project_id, **kwargs):
        """
        Get All Project Files
        ### Get All Project Files Returns a list of the files in the project
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.all_project_files_with_http_info(project_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str project_id: Project Id (required)
        :param str fields: Requested fields
        :return: list[ProjectFile]
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['project_id', 'fields']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method all_project_files" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'project_id' is set
        if ('project_id' not in params) or (params['project_id'] is None):
            raise ValueError("Missing the required parameter `project_id` when calling `all_project_files`")

        collection_formats = {}

        resource_path = '/projects/{project_id}/files'.replace('{format}', 'json')
        path_params = {}
        if 'project_id' in params:
            path_params['project_id'] = params['project_id']

        query_params = {}
        if 'fields' in params:
            query_params['fields'] = params['fields']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api(resource_path, 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='list[ProjectFile]',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
    def all_projects(self, **kwargs):
        """
        Get All Projects
        ### Get All Projects Returns all projects visible to the current user
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.all_projects(callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str fields: Requested fields
        :return: list[Project]
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.all_projects_with_http_info(**kwargs)
        else:
            (data) = self.all_projects_with_http_info(**kwargs)
            return data
    def all_projects_with_http_info(self, **kwargs):
        """
        Get All Projects
        ### Get All Projects Returns all projects visible to the current user
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.all_projects_with_http_info(callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str fields: Requested fields
        :return: list[Project]
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['fields']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method all_projects" % key
                )
            params[key] = val
        del params['kwargs']

        collection_formats = {}

        resource_path = '/projects'.replace('{format}', 'json')
        path_params = {}

        query_params = {}
        if 'fields' in params:
            query_params['fields'] = params['fields']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api(resource_path, 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='list[Project]',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
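The resource paths above are URL templates: the generated methods only strip the legacy `{format}` placeholder, while the remaining `{project_id}`-style placeholders are filled in later by `ApiClient.call_api` from `path_params`. A hedged sketch of that substitution (`build_resource_path` is a hypothetical helper; the real client also URL-encodes the values, which this sketch omits):

```python
def build_resource_path(template, path_params):
    """Fill a swagger-style URL template from a path_params dict
    (simplified: no URL-encoding of the substituted values)."""
    path = template.replace('{format}', 'json')
    for name, value in path_params.items():
        path = path.replace('{%s}' % name, str(value))
    return path

path = build_resource_path('/projects/{project_id}/git_branches',
                           {'project_id': 'my_project'})
# -> '/projects/my_project/git_branches'
```

Keeping the template and the params separate until the last moment is what lets the generated code build `path_params` conditionally, one `if 'x' in params` at a time.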
    def create_git_branch(self, project_id, **kwargs):
        """
        Checkout New Git Branch
        ### Create and Checkout a Git Branch Creates and checks out a new branch in the given project repository Only allowed in development mode - Call `update_session` to select the 'dev' workspace. Optionally specify a branch name, tag name or commit SHA as the start point in the ref field. If no ref is specified, HEAD of the current branch will be used as the start point for the new branch.
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.create_git_branch(project_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str project_id: Project Id (required)
        :param GitBranch body: Git Branch
        :return: GitBranch
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.create_git_branch_with_http_info(project_id, **kwargs)
        else:
            (data) = self.create_git_branch_with_http_info(project_id, **kwargs)
            return data
    def create_git_branch_with_http_info(self, project_id, **kwargs):
        """
        Checkout New Git Branch
        ### Create and Checkout a Git Branch Creates and checks out a new branch in the given project repository Only allowed in development mode - Call `update_session` to select the 'dev' workspace. Optionally specify a branch name, tag name or commit SHA as the start point in the ref field. If no ref is specified, HEAD of the current branch will be used as the start point for the new branch.
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.create_git_branch_with_http_info(project_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str project_id: Project Id (required)
        :param GitBranch body: Git Branch
        :return: GitBranch
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['project_id', 'body']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method create_git_branch" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'project_id' is set
        if ('project_id' not in params) or (params['project_id'] is None):
            raise ValueError("Missing the required parameter `project_id` when calling `create_git_branch`")

        collection_formats = {}

        resource_path = '/projects/{project_id}/git_branch'.replace('{format}', 'json')
        path_params = {}
        if 'project_id' in params:
            path_params['project_id'] = params['project_id']

        query_params = {}

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api(resource_path, 'POST',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='GitBranch',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
    def create_git_deploy_key(self, project_id, **kwargs):
        """
        Create Deploy Key
        ### Create Git Deploy Key Create a public/private key pair for authenticating ssh git requests from Looker to a remote git repository for a particular Looker project. Returns the public key of the generated ssh key pair. Copy this public key to your remote git repository's ssh keys configuration so that the remote git service can validate and accept git requests from the Looker server.
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.create_git_deploy_key(project_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str project_id: Project Id (required)
        :return: str
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.create_git_deploy_key_with_http_info(project_id, **kwargs)
        else:
            (data) = self.create_git_deploy_key_with_http_info(project_id, **kwargs)
            return data
    def create_git_deploy_key_with_http_info(self, project_id, **kwargs):
        """
        Create Deploy Key
        ### Create Git Deploy Key Create a public/private key pair for authenticating ssh git requests from Looker to a remote git repository for a particular Looker project. Returns the public key of the generated ssh key pair. Copy this public key to your remote git repository's ssh keys configuration so that the remote git service can validate and accept git requests from the Looker server.
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.create_git_deploy_key_with_http_info(project_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str project_id: Project Id (required)
        :return: str
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['project_id']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method create_git_deploy_key" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'project_id' is set
        if ('project_id' not in params) or (params['project_id'] is None):
            raise ValueError("Missing the required parameter `project_id` when calling `create_git_deploy_key`")

        collection_formats = {}

        resource_path = '/projects/{project_id}/git/deploy_key'.replace('{format}', 'json')
        path_params = {}
        if 'project_id' in params:
            path_params['project_id'] = params['project_id']

        query_params = {}

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['text/plain'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api(resource_path, 'POST',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='str',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
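Note that `create_git_deploy_key` is the one endpoint here that accepts `text/plain` (the response is the raw public key string) rather than `application/json`. A hedged, simplified sketch of how an Accept header value can be built from such a list (`select_header_accept` below is a toy approximation of the `ApiClient` helper of the same name, not its actual implementation):

```python
def select_header_accept(accepts):
    """Join acceptable MIME types into one Accept header value
    (simplified toy version of ApiClient.select_header_accept)."""
    if not accepts:
        return None
    return ', '.join(a.lower() for a in accepts)

json_accept = select_header_accept(['application/json'])  # most endpoints here
text_accept = select_header_accept(['text/plain'])        # the deploy-key endpoint
```

Declaring the narrower `text/plain` Accept lets the server return the key verbatim instead of wrapping it in a JSON document.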
    def create_project(self, **kwargs):
        """
        Create Project
        ### Create A Project dev mode required. - Call `update_session` to select the 'dev' workspace. `name` is required. `git_remote_url` is not allowed. To configure Git for the newly created project, follow the instructions in `update_project`.
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.create_project(callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param Project body: Project
        :return: Project
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.create_project_with_http_info(**kwargs)
        else:
            (data) = self.create_project_with_http_info(**kwargs)
            return data
    def create_project_with_http_info(self, **kwargs):
        """
        Create Project
        ### Create A Project dev mode required. - Call `update_session` to select the 'dev' workspace. `name` is required. `git_remote_url` is not allowed. To configure Git for the newly created project, follow the instructions in `update_project`.
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.create_project_with_http_info(callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param Project body: Project
        :return: Project
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['body']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method create_project" % key
                )
            params[key] = val
        del params['kwargs']

        collection_formats = {}

        resource_path = '/projects'.replace('{format}', 'json')
        path_params = {}

        query_params = {}

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api(resource_path, 'POST',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='Project',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
    def delete_git_branch(self, project_id, branch_name, **kwargs):
        """
        Delete a Git Branch
        ### Delete the specified Git Branch Delete git branch specified in branch_name path param from local and remote of specified project repository
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.delete_git_branch(project_id, branch_name, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str project_id: Project Id (required)
        :param str branch_name: Branch Name (required)
        :return: str
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.delete_git_branch_with_http_info(project_id, branch_name, **kwargs)
        else:
            (data) = self.delete_git_branch_with_http_info(project_id, branch_name, **kwargs)
            return data
    def delete_git_branch_with_http_info(self, project_id, branch_name, **kwargs):
        """
        Delete a Git Branch
        ### Delete the specified Git Branch Delete git branch specified in branch_name path param from local and remote of specified project repository
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.delete_git_branch_with_http_info(project_id, branch_name, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str project_id: Project Id (required)
        :param str branch_name: Branch Name (required)
        :return: str
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['project_id', 'branch_name']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_git_branch" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'project_id' is set
        if ('project_id' not in params) or (params['project_id'] is None):
            raise ValueError("Missing the required parameter `project_id` when calling `delete_git_branch`")
        # verify the required parameter 'branch_name' is set
        if ('branch_name' not in params) or (params['branch_name'] is None):
            raise ValueError("Missing the required parameter `branch_name` when calling `delete_git_branch`")

        collection_formats = {}

        resource_path = '/projects/{project_id}/git_branch/{branch_name}'.replace('{format}', 'json')
        path_params = {}
        if 'project_id' in params:
            path_params['project_id'] = params['project_id']
        if 'branch_name' in params:
            path_params['branch_name'] = params['branch_name']

        query_params = {}

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api(resource_path, 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='str',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
    def delete_repository_credential(self, root_project_id, credential_id, **kwargs):
        """
        Delete Repository Credential
        ### Delete a Repository Credential for a remote dependency

        Admin required. `root_project_id` is required. `credential_id` is required.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.delete_repository_credential(root_project_id, credential_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str root_project_id: Root Project Id (required)
        :param str credential_id: Credential Id (required)
        :return: str
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.delete_repository_credential_with_http_info(root_project_id, credential_id, **kwargs)
        else:
            (data) = self.delete_repository_credential_with_http_info(root_project_id, credential_id, **kwargs)
            return data

    def delete_repository_credential_with_http_info(self, root_project_id, credential_id, **kwargs):
        """
        Delete Repository Credential
        ### Delete a Repository Credential for a remote dependency

        Admin required. `root_project_id` is required. `credential_id` is required.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.delete_repository_credential_with_http_info(root_project_id, credential_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str root_project_id: Root Project Id (required)
        :param str credential_id: Credential Id (required)
        :return: str
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['root_project_id', 'credential_id']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_repository_credential" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'root_project_id' is set
        if ('root_project_id' not in params) or (params['root_project_id'] is None):
            raise ValueError("Missing the required parameter `root_project_id` when calling `delete_repository_credential`")
        # verify the required parameter 'credential_id' is set
        if ('credential_id' not in params) or (params['credential_id'] is None):
            raise ValueError("Missing the required parameter `credential_id` when calling `delete_repository_credential`")

        collection_formats = {}

        resource_path = '/projects/{root_project_id}/credential/{credential_id}'.replace('{format}', 'json')
        path_params = {}
        if 'root_project_id' in params:
            path_params['root_project_id'] = params['root_project_id']
        if 'credential_id' in params:
            path_params['credential_id'] = params['credential_id']

        query_params = {}

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api(resource_path, 'DELETE',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='str',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

    def deploy_to_production(self, project_id, **kwargs):
        """
        Deploy To Production
        ### Deploy LookML from this Development Mode Project to Production

        Git must have been configured, the project must be in dev mode, and
        deploy permission is required.

        Deploy is a two/three step process:

        1. Push commits in the current branch of the dev mode project to the
           production branch (origin/master).
           Note a: This step is skipped in read-only projects.
           Note b: If this step is unsuccessful for any reason (e.g. rejected
           non-fastforward because the production branch has commits not in
           the current branch), subsequent steps will be skipped.
        2. If this is the first deploy of this project, create the production
           project with its git repository.
        3. Pull the production branch into the production project.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.deploy_to_production(project_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str project_id: Id of project (required)
        :return: str
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.deploy_to_production_with_http_info(project_id, **kwargs)
        else:
            (data) = self.deploy_to_production_with_http_info(project_id, **kwargs)
            return data

    def deploy_to_production_with_http_info(self, project_id, **kwargs):
        """
        Deploy To Production
        ### Deploy LookML from this Development Mode Project to Production

        Git must have been configured, the project must be in dev mode, and
        deploy permission is required.

        Deploy is a two/three step process:

        1. Push commits in the current branch of the dev mode project to the
           production branch (origin/master).
           Note a: This step is skipped in read-only projects.
           Note b: If this step is unsuccessful for any reason (e.g. rejected
           non-fastforward because the production branch has commits not in
           the current branch), subsequent steps will be skipped.
        2. If this is the first deploy of this project, create the production
           project with its git repository.
        3. Pull the production branch into the production project.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.deploy_to_production_with_http_info(project_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str project_id: Id of project (required)
        :return: str
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['project_id']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method deploy_to_production" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'project_id' is set
        if ('project_id' not in params) or (params['project_id'] is None):
            raise ValueError("Missing the required parameter `project_id` when calling `deploy_to_production`")

        collection_formats = {}

        resource_path = '/projects/{project_id}/deploy_to_production'.replace('{format}', 'json')
        path_params = {}
        if 'project_id' in params:
            path_params['project_id'] = params['project_id']

        query_params = {}

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api(resource_path, 'POST',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='str',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

    def find_git_branch(self, project_id, branch_name, **kwargs):
        """
        Find a Git Branch
        ### Get the specified Git Branch

        Returns the git branch specified in the `branch_name` path param, if it
        exists in the given project repository.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.find_git_branch(project_id, branch_name, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str project_id: Project Id (required)
        :param str branch_name: Branch Name (required)
        :return: GitBranch
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.find_git_branch_with_http_info(project_id, branch_name, **kwargs)
        else:
            (data) = self.find_git_branch_with_http_info(project_id, branch_name, **kwargs)
            return data

    def find_git_branch_with_http_info(self, project_id, branch_name, **kwargs):
        """
        Find a Git Branch
        ### Get the specified Git Branch

        Returns the git branch specified in the `branch_name` path param, if it
        exists in the given project repository.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.find_git_branch_with_http_info(project_id, branch_name, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str project_id: Project Id (required)
        :param str branch_name: Branch Name (required)
        :return: GitBranch
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['project_id', 'branch_name']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method find_git_branch" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'project_id' is set
        if ('project_id' not in params) or (params['project_id'] is None):
            raise ValueError("Missing the required parameter `project_id` when calling `find_git_branch`")
        # verify the required parameter 'branch_name' is set
        if ('branch_name' not in params) or (params['branch_name'] is None):
            raise ValueError("Missing the required parameter `branch_name` when calling `find_git_branch`")

        collection_formats = {}

        resource_path = '/projects/{project_id}/git_branch/{branch_name}'.replace('{format}', 'json')
        path_params = {}
        if 'project_id' in params:
            path_params['project_id'] = params['project_id']
        if 'branch_name' in params:
            path_params['branch_name'] = params['branch_name']

        query_params = {}

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api(resource_path, 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='GitBranch',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

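Each `*_with_http_info` method opens with the same validation: any keyword argument outside the method's whitelist raises `TypeError`, and a missing (or `None`) required parameter raises `ValueError`. A hypothetical, standalone sketch of that check, where `validate_params` and `raises` are illustrative helpers rather than SDK functions (the real code validates positional parameters via `locals()`, merged here into one dict for brevity):

```python
def validate_params(method_name, required, allowed, kwargs):
    # The generated helpers always accept these pass-through options.
    allowed = set(allowed) | {'callback', '_return_http_data_only',
                              '_preload_content', '_request_timeout'}
    for key in kwargs:
        if key not in allowed:
            raise TypeError(
                "Got an unexpected keyword argument '%s'"
                " to method %s" % (key, method_name)
            )
    for name in required:
        if kwargs.get(name) is None:
            raise ValueError(
                "Missing the required parameter `%s` when calling `%s`"
                % (name, method_name)
            )

def raises(exc, fn, *args):
    # Small helper for exercising the error paths.
    try:
        fn(*args)
    except exc:
        return True
    return False
```

A valid call passes silently; an unknown kwarg or an absent required parameter fails fast before any HTTP request is built.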
    def get_all_repository_credentials(self, root_project_id, **kwargs):
        """
        Get All Repository Credentials
        ### Get all Repository Credentials for a project

        `root_project_id` is required.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.get_all_repository_credentials(root_project_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str root_project_id: Root Project Id (required)
        :return: list[RepositoryCredential]
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.get_all_repository_credentials_with_http_info(root_project_id, **kwargs)
        else:
            (data) = self.get_all_repository_credentials_with_http_info(root_project_id, **kwargs)
            return data

    def get_all_repository_credentials_with_http_info(self, root_project_id, **kwargs):
        """
        Get All Repository Credentials
        ### Get all Repository Credentials for a project

        `root_project_id` is required.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.get_all_repository_credentials_with_http_info(root_project_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str root_project_id: Root Project Id (required)
        :return: list[RepositoryCredential]
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['root_project_id']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_all_repository_credentials" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'root_project_id' is set
        if ('root_project_id' not in params) or (params['root_project_id'] is None):
            raise ValueError("Missing the required parameter `root_project_id` when calling `get_all_repository_credentials`")

        collection_formats = {}

        resource_path = '/projects/{root_project_id}/credentials'.replace('{format}', 'json')
        path_params = {}
        if 'root_project_id' in params:
            path_params['root_project_id'] = params['root_project_id']

        query_params = {}

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api(resource_path, 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='list[RepositoryCredential]',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

    def git_branch(self, project_id, **kwargs):
        """
        Get Active Git Branch
        ### Get the Current Git Branch

        Returns the git branch currently checked out in the given project
        repository.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.git_branch(project_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str project_id: Project Id (required)
        :return: GitBranch
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.git_branch_with_http_info(project_id, **kwargs)
        else:
            (data) = self.git_branch_with_http_info(project_id, **kwargs)
            return data

    def git_branch_with_http_info(self, project_id, **kwargs):
        """
        Get Active Git Branch
        ### Get the Current Git Branch

        Returns the git branch currently checked out in the given project
        repository.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.git_branch_with_http_info(project_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str project_id: Project Id (required)
        :return: GitBranch
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['project_id']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method git_branch" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'project_id' is set
        if ('project_id' not in params) or (params['project_id'] is None):
            raise ValueError("Missing the required parameter `project_id` when calling `git_branch`")

        collection_formats = {}

        resource_path = '/projects/{project_id}/git_branch'.replace('{format}', 'json')
        path_params = {}
        if 'project_id' in params:
            path_params['project_id'] = params['project_id']

        query_params = {}

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api(resource_path, 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='GitBranch',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

    def git_deploy_key(self, project_id, **kwargs):
        """
        Git Deploy Key
        ### Git Deploy Key

        Returns the ssh public key previously created for a project's git
        repository.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.git_deploy_key(project_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str project_id: Project Id (required)
        :return: str
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.git_deploy_key_with_http_info(project_id, **kwargs)
        else:
            (data) = self.git_deploy_key_with_http_info(project_id, **kwargs)
            return data

    def git_deploy_key_with_http_info(self, project_id, **kwargs):
        """
        Git Deploy Key
        ### Git Deploy Key

        Returns the ssh public key previously created for a project's git
        repository.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.git_deploy_key_with_http_info(project_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str project_id: Project Id (required)
        :return: str
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['project_id']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method git_deploy_key" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'project_id' is set
        if ('project_id' not in params) or (params['project_id'] is None):
            raise ValueError("Missing the required parameter `project_id` when calling `git_deploy_key`")

        collection_formats = {}

        resource_path = '/projects/{project_id}/git/deploy_key'.replace('{format}', 'json')
        path_params = {}
        if 'project_id' in params:
            path_params['project_id'] = params['project_id']

        query_params = {}

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept` (the deploy key is returned as plain text)
        header_params['Accept'] = self.api_client.\
            select_header_accept(['text/plain'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api(resource_path, 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='str',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

    def manifest(self, project_id, **kwargs):
        """
        Get Manifest
        ### Get a Project's Manifest object

        Returns the manifest for the project with the given project id.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.manifest(project_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str project_id: Project Id (required)
        :return: Manifest
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.manifest_with_http_info(project_id, **kwargs)
        else:
            (data) = self.manifest_with_http_info(project_id, **kwargs)
            return data

    def manifest_with_http_info(self, project_id, **kwargs):
        """
        Get Manifest
        ### Get a Project's Manifest object

        Returns the manifest for the project with the given project id.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.manifest_with_http_info(project_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str project_id: Project Id (required)
        :return: Manifest
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['project_id']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method manifest" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'project_id' is set
        if ('project_id' not in params) or (params['project_id'] is None):
            raise ValueError("Missing the required parameter `project_id` when calling `manifest`")

        collection_formats = {}

        resource_path = '/projects/{project_id}/manifest'.replace('{format}', 'json')
        path_params = {}
        if 'project_id' in params:
            path_params['project_id'] = params['project_id']

        query_params = {}

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api(resource_path, 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='Manifest',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

    def project(self, project_id, **kwargs):
        """
        Get Project
        ### Get a Project

        Returns the project with the given project id.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.project(project_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str project_id: Project Id (required)
        :param str fields: Requested fields
        :return: Project
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.project_with_http_info(project_id, **kwargs)
        else:
            (data) = self.project_with_http_info(project_id, **kwargs)
            return data

    def project_with_http_info(self, project_id, **kwargs):
        """
        Get Project
        ### Get a Project

        Returns the project with the given project id.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.project_with_http_info(project_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str project_id: Project Id (required)
        :param str fields: Requested fields
        :return: Project
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['project_id', 'fields']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method project" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'project_id' is set
        if ('project_id' not in params) or (params['project_id'] is None):
            raise ValueError("Missing the required parameter `project_id` when calling `project`")

        collection_formats = {}

        resource_path = '/projects/{project_id}'.replace('{format}', 'json')
        path_params = {}
        if 'project_id' in params:
            path_params['project_id'] = params['project_id']

        query_params = {}
        if 'fields' in params:
            query_params['fields'] = params['fields']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api(resource_path, 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='Project',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

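The request itself is assembled the same way in every method: the resource path template is filled from `path_params`, and optional query parameters such as `fields` are included only when supplied. A minimal, hypothetical sketch of that assembly (`build_request` is illustrative, not an SDK function; `ApiClient.call_api` performs the real work, including URL-encoding of values):

```python
def build_request(template, path_params, query_params):
    # Fill the `{format}` placeholder first, as resource_path.replace(...)
    # does above, then substitute each path parameter into the template.
    path = template.replace('{format}', 'json')
    for name, value in path_params.items():
        path = path.replace('{%s}' % name, str(value))
    # Optional query params such as `fields` are only sent when provided.
    query = {k: v for k, v in query_params.items() if v is not None}
    return path, query
```

For example, `build_request('/projects/{project_id}', {'project_id': 'p1'}, {'fields': 'id,name'})` yields the concrete path and the non-empty query dict that `call_api` would send.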
def project_file(self, project_id, file_id, **kwargs):
"""
Get Project File
### Get Project File Info Returns information about a file in the project
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.project_file(project_id, file_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str project_id: Project Id (required)
:param str file_id: File Id (required)
:param str fields: Requested fields
:return: ProjectFile
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.project_file_with_http_info(project_id, file_id, **kwargs)
else:
(data) = self.project_file_with_http_info(project_id, file_id, **kwargs)
return data
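Every convenience wrapper above follows the same dispatch shape: it delegates to its `_with_http_info` counterpart, returning deserialized data for synchronous calls and the request thread when a `callback` is supplied. A minimal, self-contained sketch of that pattern (the `fake_*` names are illustrative stand-ins, not part of the SDK):

```python
# Minimal sketch of the sync/async dispatch used by the convenience wrappers.
# `fake_with_http_info` stands in for a *_with_http_info method; the real SDK
# issues the HTTP request here (or schedules it on a thread and returns it).

def fake_with_http_info(project_id, **kwargs):
    callback = kwargs.get('callback')
    if callback:
        # Asynchronous path: the real client would run the request on a
        # worker thread and invoke `callback(response)` when it completes.
        callback({'id': project_id})
        return 'request-thread'
    # Synchronous path: return the deserialized response body.
    return {'id': project_id}

def fake_convenience(project_id, **kwargs):
    # Mirrors the generated wrappers: force data-only responses, then
    # branch on whether a callback was supplied.
    kwargs['_return_http_data_only'] = True
    if kwargs.get('callback'):
        return fake_with_http_info(project_id, **kwargs)
    (data) = fake_with_http_info(project_id, **kwargs)
    return data
```

With no callback the wrapper returns the response body directly; with one, the body is delivered to the callback and the caller receives the thread handle instead.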
def project_file_with_http_info(self, project_id, file_id, **kwargs):
"""
Get Project File
### Get Project File Info Returns information about a file in the project
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.project_file_with_http_info(project_id, file_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str project_id: Project Id (required)
:param str file_id: File Id (required)
:param str fields: Requested fields
:return: ProjectFile
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id', 'file_id', 'fields']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method project_file" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params) or (params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `project_file`")
# verify the required parameter 'file_id' is set
if ('file_id' not in params) or (params['file_id'] is None):
raise ValueError("Missing the required parameter `file_id` when calling `project_file`")
collection_formats = {}
resource_path = '/projects/{project_id}/files/file'.replace('{format}', 'json')
path_params = {}
if 'project_id' in params:
path_params['project_id'] = params['project_id']
query_params = {}
if 'file_id' in params:
query_params['file_id'] = params['file_id']
if 'fields' in params:
query_params['fields'] = params['fields']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = []
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ProjectFile',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def project_validation_results(self, project_id, **kwargs):
"""
Cached Project Validation Results
### Get Cached Project Validation Results Returns the cached results of a previous project validation calculation, if any. Returns HTTP status 204 No Content if no validation results exist. Validating the content of all the files in a project can be computationally intensive for large projects. Use this API to fetch the results of the most recent project validation rather than revalidating the entire project from scratch. A value of `\"stale\": true` in the response indicates that the project has changed since the cached validation results were computed. The cached validation results may no longer reflect the current state of the project.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.project_validation_results(project_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str project_id: Project Id (required)
:param str fields: Requested fields
:return: ProjectValidationCache
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.project_validation_results_with_http_info(project_id, **kwargs)
else:
(data) = self.project_validation_results_with_http_info(project_id, **kwargs)
return data
def project_validation_results_with_http_info(self, project_id, **kwargs):
"""
Cached Project Validation Results
### Get Cached Project Validation Results Returns the cached results of a previous project validation calculation, if any. Returns HTTP status 204 No Content if no validation results exist. Validating the content of all the files in a project can be computationally intensive for large projects. Use this API to fetch the results of the most recent project validation rather than revalidating the entire project from scratch. A value of `\"stale\": true` in the response indicates that the project has changed since the cached validation results were computed. The cached validation results may no longer reflect the current state of the project.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.project_validation_results_with_http_info(project_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str project_id: Project Id (required)
:param str fields: Requested fields
:return: ProjectValidationCache
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id', 'fields']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method project_validation_results" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params) or (params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `project_validation_results`")
collection_formats = {}
resource_path = '/projects/{project_id}/validate'.replace('{format}', 'json')
path_params = {}
if 'project_id' in params:
path_params['project_id'] = params['project_id']
query_params = {}
if 'fields' in params:
query_params['fields'] = params['fields']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = []
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ProjectValidationCache',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
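Each `_with_http_info` method opens with the same validation boilerplate: keyword arguments are checked against an `all_params` whitelist (unknown keywords raise `TypeError`) and required parameters are checked for presence (missing ones raise `ValueError`). A standalone sketch of that check, with an illustrative function name:

```python
def validate_params(project_id=None, **kwargs):
    # Whitelist mirrors the `all_params` list built in each generated method.
    all_params = ['project_id', 'fields', 'callback',
                  '_return_http_data_only', '_preload_content',
                  '_request_timeout']
    for key in kwargs:
        if key not in all_params:
            # Unknown keyword arguments are rejected up front.
            raise TypeError(
                "Got an unexpected keyword argument '%s'" % key
            )
    # Required-parameter check mirrors the generated "verify" blocks.
    if project_id is None:
        raise ValueError(
            "Missing the required parameter `project_id`"
        )
    return dict(kwargs, project_id=project_id)
```

Rejecting unexpected keywords early keeps typos (e.g. `feilds=` instead of `fields=`) from being silently dropped before the request is built.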
def project_workspace(self, project_id, **kwargs):
"""
Get Project Workspace
### Get Project Workspace Returns information about the state of the project files in the currently selected workspace
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.project_workspace(project_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str project_id: Project Id (required)
:param str fields: Requested fields
:return: ProjectWorkspace
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.project_workspace_with_http_info(project_id, **kwargs)
else:
(data) = self.project_workspace_with_http_info(project_id, **kwargs)
return data
def project_workspace_with_http_info(self, project_id, **kwargs):
"""
Get Project Workspace
### Get Project Workspace Returns information about the state of the project files in the currently selected workspace
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.project_workspace_with_http_info(project_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str project_id: Project Id (required)
:param str fields: Requested fields
:return: ProjectWorkspace
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id', 'fields']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method project_workspace" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params) or (params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `project_workspace`")
collection_formats = {}
resource_path = '/projects/{project_id}/current_workspace'.replace('{format}', 'json')
path_params = {}
if 'project_id' in params:
path_params['project_id'] = params['project_id']
query_params = {}
if 'fields' in params:
query_params['fields'] = params['fields']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = []
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ProjectWorkspace',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def reset_project_to_production(self, project_id, **kwargs):
"""
Reset To Production
### Reset a project to the revision of the project that is in production. **DANGER** this will delete any changes that have not been pushed to a remote repository.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.reset_project_to_production(project_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str project_id: Id of project (required)
:return: str
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.reset_project_to_production_with_http_info(project_id, **kwargs)
else:
(data) = self.reset_project_to_production_with_http_info(project_id, **kwargs)
return data
def reset_project_to_production_with_http_info(self, project_id, **kwargs):
"""
Reset To Production
### Reset a project to the revision of the project that is in production. **DANGER** this will delete any changes that have not been pushed to a remote repository.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.reset_project_to_production_with_http_info(project_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str project_id: Id of project (required)
:return: str
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method reset_project_to_production" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params) or (params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `reset_project_to_production`")
collection_formats = {}
resource_path = '/projects/{project_id}/reset_to_production'.replace('{format}', 'json')
path_params = {}
if 'project_id' in params:
path_params['project_id'] = params['project_id']
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = []
return self.api_client.call_api(resource_path, 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='str',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def reset_project_to_remote(self, project_id, **kwargs):
"""
Reset To Remote
### Reset a project development branch to the revision of the project that is on the remote. **DANGER** this will delete any changes that have not been pushed to a remote repository.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.reset_project_to_remote(project_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str project_id: Id of project (required)
:return: str
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.reset_project_to_remote_with_http_info(project_id, **kwargs)
else:
(data) = self.reset_project_to_remote_with_http_info(project_id, **kwargs)
return data
def reset_project_to_remote_with_http_info(self, project_id, **kwargs):
"""
Reset To Remote
### Reset a project development branch to the revision of the project that is on the remote. **DANGER** this will delete any changes that have not been pushed to a remote repository.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.reset_project_to_remote_with_http_info(project_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str project_id: Id of project (required)
:return: str
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method reset_project_to_remote" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params) or (params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `reset_project_to_remote`")
collection_formats = {}
resource_path = '/projects/{project_id}/reset_to_remote'.replace('{format}', 'json')
path_params = {}
if 'project_id' in params:
path_params['project_id'] = params['project_id']
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = []
return self.api_client.call_api(resource_path, 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='str',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def run_git_connection_test(self, project_id, test_id, **kwargs):
"""
Run Git Connection Test
### Run a git connection test Run the named test on the git service used by this project (or the dependency project for the provided remote_url) and return the result. This is intended to help debug git connections when things do not work properly, to give more helpful information about why a git url is not working with Looker. Tests should be run in the order they are returned by [Get All Git Connection Tests](#!/Project/all_git_connection_tests).
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.run_git_connection_test(project_id, test_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str project_id: Project Id (required)
:param str test_id: Test Id (required)
:param str remote_url: (Optional: leave blank for root project) The remote url for remote dependency to test.
:return: GitConnectionTestResult
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.run_git_connection_test_with_http_info(project_id, test_id, **kwargs)
else:
(data) = self.run_git_connection_test_with_http_info(project_id, test_id, **kwargs)
return data
def run_git_connection_test_with_http_info(self, project_id, test_id, **kwargs):
"""
Run Git Connection Test
### Run a git connection test Run the named test on the git service used by this project (or the dependency project for the provided remote_url) and return the result. This is intended to help debug git connections when things do not work properly, to give more helpful information about why a git url is not working with Looker. Tests should be run in the order they are returned by [Get All Git Connection Tests](#!/Project/all_git_connection_tests).
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.run_git_connection_test_with_http_info(project_id, test_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str project_id: Project Id (required)
:param str test_id: Test Id (required)
:param str remote_url: (Optional: leave blank for root project) The remote url for remote dependency to test.
:return: GitConnectionTestResult
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id', 'test_id', 'remote_url']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method run_git_connection_test" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params) or (params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `run_git_connection_test`")
# verify the required parameter 'test_id' is set
if ('test_id' not in params) or (params['test_id'] is None):
raise ValueError("Missing the required parameter `test_id` when calling `run_git_connection_test`")
collection_formats = {}
resource_path = '/projects/{project_id}/git_connection_tests/{test_id}'.replace('{format}', 'json')
path_params = {}
if 'project_id' in params:
path_params['project_id'] = params['project_id']
if 'test_id' in params:
path_params['test_id'] = params['test_id']
query_params = {}
if 'remote_url' in params:
query_params['remote_url'] = params['remote_url']
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = []
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='GitConnectionTestResult',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
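The `resource_path` strings above are URL templates: the generated code strips the legacy `{format}` placeholder itself, then hands the template and `path_params` dict to the api_client, which substitutes each `{name}` segment. A small sketch of that substitution step (the helper name is hypothetical; the real work happens inside `api_client.call_api`):

```python
def build_path(template, path_params):
    # Drop the legacy `{format}` placeholder, as the generated code does.
    path = template.replace('{format}', 'json')
    # Substitute each `{name}` segment from the path_params dict.
    for name, value in path_params.items():
        path = path.replace('{%s}' % name, str(value))
    return path
```

For the git connection test endpoint, both `project_id` and `test_id` are path segments, while `remote_url` travels separately as a query parameter.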
def update_git_branch(self, project_id, body, **kwargs):
"""
Update Project Git Branch
### Checkout and/or reset --hard an existing Git Branch Only allowed in development mode - Call `update_session` to select the 'dev' workspace. Checkout an existing branch if name field is different from the name of the currently checked out branch. Optionally specify a branch name, tag name or commit SHA to which the branch should be reset. **DANGER** hard reset will be force pushed to the remote. Unsaved changes and commits may be permanently lost.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.update_git_branch(project_id, body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str project_id: Project Id (required)
:param GitBranch body: Git Branch (required)
:return: GitBranch
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.update_git_branch_with_http_info(project_id, body, **kwargs)
else:
(data) = self.update_git_branch_with_http_info(project_id, body, **kwargs)
return data
def update_git_branch_with_http_info(self, project_id, body, **kwargs):
"""
Update Project Git Branch
### Checkout and/or reset --hard an existing Git Branch Only allowed in development mode - Call `update_session` to select the 'dev' workspace. Checkout an existing branch if name field is different from the name of the currently checked out branch. Optionally specify a branch name, tag name or commit SHA to which the branch should be reset. **DANGER** hard reset will be force pushed to the remote. Unsaved changes and commits may be permanently lost.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.update_git_branch_with_http_info(project_id, body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str project_id: Project Id (required)
:param GitBranch body: Git Branch (required)
:return: GitBranch
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id', 'body']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method update_git_branch" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params) or (params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `update_git_branch`")
# verify the required parameter 'body' is set
if ('body' not in params) or (params['body'] is None):
raise ValueError("Missing the required parameter `body` when calling `update_git_branch`")
collection_formats = {}
resource_path = '/projects/{project_id}/git_branch'.replace('{format}', 'json')
path_params = {}
if 'project_id' in params:
path_params['project_id'] = params['project_id']
query_params = {}
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = []
return self.api_client.call_api(resource_path, 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='GitBranch',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def update_project(self, project_id, body, **kwargs):
"""
Update Project
### Update Project Configuration Apply changes to a project's configuration. #### Configuring Git for a Project To set up a Looker project with a remote git repository, follow these steps: 1. Call `update_session` to select the 'dev' workspace. 1. Call `create_git_deploy_key` to create a new deploy key for the project. 1. Copy the deploy key text into the remote git repository's ssh key configuration. 1. Call `update_project` to set the project's `git_remote_url` (and `git_service_name`, if necessary). When you modify a project's `git_remote_url`, Looker connects to the remote repository to fetch metadata. The remote git repository MUST be configured with the Looker-generated deploy key for this project prior to setting the project's `git_remote_url`. To set up a Looker project with a git repository residing on the Looker server (a 'bare' git repo): 1. Call `update_session` to select the 'dev' workspace. 1. Call `update_project` setting `git_remote_url` to nil and `git_service_name` to \"bare\".
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.update_project(project_id, body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str project_id: Project Id (required)
:param Project body: Project (required)
:param str fields: Requested fields
:return: Project
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.update_project_with_http_info(project_id, body, **kwargs)
else:
(data) = self.update_project_with_http_info(project_id, body, **kwargs)
return data
def update_project_with_http_info(self, project_id, body, **kwargs):
"""
Update Project
### Update Project Configuration Apply changes to a project's configuration. #### Configuring Git for a Project To set up a Looker project with a remote git repository, follow these steps: 1. Call `update_session` to select the 'dev' workspace. 1. Call `create_git_deploy_key` to create a new deploy key for the project. 1. Copy the deploy key text into the remote git repository's ssh key configuration. 1. Call `update_project` to set the project's `git_remote_url` (and `git_service_name`, if necessary). When you modify a project's `git_remote_url`, Looker connects to the remote repository to fetch metadata. The remote git repository MUST be configured with the Looker-generated deploy key for this project prior to setting the project's `git_remote_url`. To set up a Looker project with a git repository residing on the Looker server (a 'bare' git repo): 1. Call `update_session` to select the 'dev' workspace. 1. Call `update_project` setting `git_remote_url` to nil and `git_service_name` to \"bare\".
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.update_project_with_http_info(project_id, body, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str project_id: Project Id (required)
:param Project body: Project (required)
:param str fields: Requested fields
:return: Project
If the method is called asynchronously,
returns the request thread.
"""
        all_params = ['project_id', 'body', 'fields']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method update_project" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'project_id' is set
        if ('project_id' not in params) or (params['project_id'] is None):
            raise ValueError("Missing the required parameter `project_id` when calling `update_project`")
        # verify the required parameter 'body' is set
        if ('body' not in params) or (params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `update_project`")

        collection_formats = {}

        resource_path = '/projects/{project_id}'.replace('{format}', 'json')
        path_params = {}
        if 'project_id' in params:
            path_params['project_id'] = params['project_id']

        query_params = {}
        if 'fields' in params:
            query_params['fields'] = params['fields']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api(resource_path, 'PATCH',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='Project',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
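
    # Illustrative sketch of the git-configuration flow described in the
    # docstring above (assumptions: `api` is an authenticated client exposing
    # these methods and `Project` is the corresponding model; the project id,
    # URL, and session payload are made up):
    #
    #   api.update_session(body={'workspace_id': 'dev'})
    #   key = api.create_git_deploy_key('my_project')  # paste into the git host's deploy keys
    #   api.update_project('my_project',
    #                      Project(git_remote_url='git@github.com:acme/analytics.git'))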
    def update_repository_credential(self, root_project_id, credential_id, body, **kwargs):
        """
        Create Repository Credential

        ### Configure Repository Credential for a remote dependency

        Admin required.

        `root_project_id` is required.
        `credential_id` is required.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.update_repository_credential(root_project_id, credential_id, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str root_project_id: Root Project Id (required)
        :param str credential_id: Credential Id (required)
        :param RepositoryCredential body: Remote Project Information (required)
        :return: RepositoryCredential
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.update_repository_credential_with_http_info(root_project_id, credential_id, body, **kwargs)
        else:
            (data) = self.update_repository_credential_with_http_info(root_project_id, credential_id, body, **kwargs)
            return data

    def update_repository_credential_with_http_info(self, root_project_id, credential_id, body, **kwargs):
        """
        Create Repository Credential

        ### Configure Repository Credential for a remote dependency

        Admin required.

        `root_project_id` is required.
        `credential_id` is required.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.update_repository_credential_with_http_info(root_project_id, credential_id, body, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str root_project_id: Root Project Id (required)
        :param str credential_id: Credential Id (required)
        :param RepositoryCredential body: Remote Project Information (required)
        :return: RepositoryCredential
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['root_project_id', 'credential_id', 'body']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method update_repository_credential" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'root_project_id' is set
        if ('root_project_id' not in params) or (params['root_project_id'] is None):
            raise ValueError("Missing the required parameter `root_project_id` when calling `update_repository_credential`")
        # verify the required parameter 'credential_id' is set
        if ('credential_id' not in params) or (params['credential_id'] is None):
            raise ValueError("Missing the required parameter `credential_id` when calling `update_repository_credential`")
        # verify the required parameter 'body' is set
        if ('body' not in params) or (params['body'] is None):
            raise ValueError("Missing the required parameter `body` when calling `update_repository_credential`")

        collection_formats = {}

        resource_path = '/projects/{root_project_id}/credential/{credential_id}'.replace('{format}', 'json')
        path_params = {}
        if 'root_project_id' in params:
            path_params['root_project_id'] = params['root_project_id']
        if 'credential_id' in params:
            path_params['credential_id'] = params['credential_id']

        query_params = {}

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']

        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api(resource_path, 'PUT',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='RepositoryCredential',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
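
    # Usage sketch (assumptions: `api` is an authenticated client with admin
    # privileges and `RepositoryCredential` is the generated model; the ids
    # below are made up):
    #
    #   cred = RepositoryCredential()  # populate with the remote repository's auth details
    #   api.update_repository_credential('root_proj', 'dep_cred_id', cred)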
    def validate_project(self, project_id, **kwargs):
        """
        Validate Project

        ### Validate Project

        Performs lint validation of all lookml files in the project.
        Returns a list of errors found, if any.

        Validating the content of all the files in a project can be computationally
        intensive for large projects. For best performance, call
        `validate_project(project_id)` only when you really want to recompute project
        validation. To quickly display the results of the most recent project
        validation (without recomputing), use `project_validation_results(project_id)`.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.validate_project(project_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str project_id: Project Id (required)
        :param str fields: Requested fields
        :return: ProjectValidation
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.validate_project_with_http_info(project_id, **kwargs)
        else:
            (data) = self.validate_project_with_http_info(project_id, **kwargs)
            return data

    def validate_project_with_http_info(self, project_id, **kwargs):
        """
        Validate Project

        ### Validate Project

        Performs lint validation of all lookml files in the project.
        Returns a list of errors found, if any.

        Validating the content of all the files in a project can be computationally
        intensive for large projects. For best performance, call
        `validate_project(project_id)` only when you really want to recompute project
        validation. To quickly display the results of the most recent project
        validation (without recomputing), use `project_validation_results(project_id)`.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.validate_project_with_http_info(project_id, callback=callback_function)

        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str project_id: Project Id (required)
        :param str fields: Requested fields
        :return: ProjectValidation
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['project_id', 'fields']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method validate_project" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'project_id' is set
        if ('project_id' not in params) or (params['project_id'] is None):
            raise ValueError("Missing the required parameter `project_id` when calling `validate_project`")

        collection_formats = {}

        resource_path = '/projects/{project_id}/validate'.replace('{format}', 'json')
        path_params = {}
        if 'project_id' in params:
            path_params['project_id'] = params['project_id']

        query_params = {}
        if 'fields' in params:
            query_params['fields'] = params['fields']

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json'])

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type(['application/json'])

        # Authentication setting
        auth_settings = []

        return self.api_client.call_api(resource_path, 'POST',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='ProjectValidation',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
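
    # Usage sketch (assumption: `api` is an authenticated client; the project id
    # is made up):
    #
    #   validation = api.validate_project('my_project')        # full (expensive) recompute
    #   cached = api.project_validation_results('my_project')  # last cached result, cheap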


# File: processor/tests/conftest.py (repo: soote1/bettingtool, license: MIT)

import pytest

def processor_input_sample():
    return '{"game_id": "https://sports.caliente.mx/es_MX/La-Liga/20200626/Sevilla-vs-Valladolid", "game_type": "correct_score", "odds": [[" 1-0 ", "4/1", "5.00", "+400"], [" 0-0 ", "8/1", "9.00", "+800"], [" 0-1 ", "16/1", "17.00", "+1600"], [" 2-0 ", "4/1", "5.00", "+400"], [" 1-1 ", "8/1", "9.00", "+800"], [" 0-2 ", "55/1", "56.00", "+5500"], [" 2-1 ", "8/1", "9.00", "+800"], [" 2-2 ", "30/1", "31.00", "+3000"], [" 1-2 ", "30/1", "31.00", "+3000"], [" 3-0 ", "7/1", "8.00", "+700"], [" 3-3 ", "125/1", "126.00", "+12500"], [" 1-3 ", "125/1", "126.00", "+12500"], [" 3-1 ", "12/1", "13.00", "+1200"], [" 2-3 ", "125/1", "126.00", "+12500"], [" 3-2 ", "45/1", "46.00", "+4500"], [" 4-0 ", "14/1", "15.00", "+1400"], [" 4-1 ", "25/1", "26.00", "+2500"], [" 4-2 ", "75/1", "76.00", "+7500"], [" 5-0 ", "28/1", "29.00", "+2800"], [" 5-1 ", "45/1", "46.00", "+4500"], [" 5-2 ", "100/1", "101.00", "+10000"], [" 6-0 ", "66/1", "67.00", "+6600"], [" 6-1 ", "90/1", "91.00", "+9000"]], "crawled_at": "2020-06-26 19:05:24.788499"}'
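
# Sketch of how a test might consume the sample above (hypothetical usage; the
# real consumers live elsewhere in the processor package):
#
#   import json
#   payload = json.loads(processor_input_sample())
#   assert payload['game_type'] == 'correct_score'
#   assert len(payload['odds'][0]) == 4  # [score, fractional, decimal, american odds]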


# File: sol/sol_count_hi.py (repo: igamberdievhasan/codingbat-notebooks, license: Apache-2.0)

def count_hi(str):
    return str.count("hi")
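
# Examples (counts non-overlapping occurrences of "hi"):
#   count_hi('abc hi ho')  -> 1
#   count_hi('ABChi hi')   -> 2
#   count_hi('hihi')       -> 2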


# File: looker_client_31/api/look_api.py (repo: ContrastingSounds/looker_sdk_31, license: MIT)

"""
Experimental Looker API 3.1 Preview
This API 3.1 is in active development. Breaking changes are likely to occur to some API functions in future Looker releases until API 3.1 is officially launched and upgraded to beta status. If you have time and interest to experiment with new or modified services exposed in this embryonic API 3.1, we welcome your participation and feedback! For large development efforts or critical line-of-business projects, we strongly recommend you stick with the API 3.0 while API 3.1 is under construction. # noqa: E501
OpenAPI spec version: 3.1.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from looker_client_31.api_client import ApiClient
class LookApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
    def all_looks(self, **kwargs):  # noqa: E501
        """Get All Looks  # noqa: E501

        ### Get all the looks.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.all_looks(async=True)
        >>> result = thread.get()

        :param async bool
        :param str fields: Requested fields.
        :return: list[Look]
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.all_looks_with_http_info(**kwargs)  # noqa: E501
        else:
            (data) = self.all_looks_with_http_info(**kwargs)  # noqa: E501
            return data

    def all_looks_with_http_info(self, **kwargs):  # noqa: E501
        """Get All Looks  # noqa: E501

        ### Get all the looks.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.all_looks_with_http_info(async=True)
        >>> result = thread.get()

        :param async bool
        :param str fields: Requested fields.
        :return: list[Look]
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['fields']  # noqa: E501
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method all_looks" % key
                )
            params[key] = val
        del params['kwargs']

        collection_formats = {}

        path_params = {}

        query_params = []
        if 'fields' in params:
            query_params.append(('fields', params['fields']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/looks', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='list[Look]',  # noqa: E501
            auth_settings=auth_settings,
            async=params.get('async'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
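
    # Usage sketch (assumption: `api` is an authenticated LookApi instance):
    #
    #   looks = api.all_looks(fields='id,title')  # synchronous
    #   thread = api.all_looks(async=True)        # asynchronous; returns the request thread
    #   looks = thread.get()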
    def create_look(self, **kwargs):  # noqa: E501
        """Create Look  # noqa: E501

        ### Create a Look with specified information.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.create_look(async=True)
        >>> result = thread.get()

        :param async bool
        :param LookWithQuery body: Look
        :param str fields: Requested fields.
        :return: LookWithQuery
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.create_look_with_http_info(**kwargs)  # noqa: E501
        else:
            (data) = self.create_look_with_http_info(**kwargs)  # noqa: E501
            return data

    def create_look_with_http_info(self, **kwargs):  # noqa: E501
        """Create Look  # noqa: E501

        ### Create a Look with specified information.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.create_look_with_http_info(async=True)
        >>> result = thread.get()

        :param async bool
        :param LookWithQuery body: Look
        :param str fields: Requested fields.
        :return: LookWithQuery
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['body', 'fields']  # noqa: E501
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method create_look" % key
                )
            params[key] = val
        del params['kwargs']

        collection_formats = {}

        path_params = {}

        query_params = []
        if 'fields' in params:
            query_params.append(('fields', params['fields']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'body' in params:
            body_params = params['body']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/looks', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='LookWithQuery',  # noqa: E501
            auth_settings=auth_settings,
            async=params.get('async'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
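
    # Usage sketch (assumptions: `api` is an authenticated LookApi instance,
    # `LookWithQuery` is the generated model, and the title/ids are made up;
    # a saved query to attach must already exist):
    #
    #   look = LookWithQuery(title='Weekly Orders', query_id=123)
    #   created = api.create_look(body=look)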
    def delete_look(self, look_id, **kwargs):  # noqa: E501
        """Delete Look  # noqa: E501

        ### Delete the look with a specific id.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.delete_look(look_id, async=True)
        >>> result = thread.get()

        :param async bool
        :param int look_id: Id of look (required)
        :return: str
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.delete_look_with_http_info(look_id, **kwargs)  # noqa: E501
        else:
            (data) = self.delete_look_with_http_info(look_id, **kwargs)  # noqa: E501
            return data

    def delete_look_with_http_info(self, look_id, **kwargs):  # noqa: E501
        """Delete Look  # noqa: E501

        ### Delete the look with a specific id.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.delete_look_with_http_info(look_id, async=True)
        >>> result = thread.get()

        :param async bool
        :param int look_id: Id of look (required)
        :return: str
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['look_id']  # noqa: E501
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_look" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'look_id' is set
        if ('look_id' not in params or
                params['look_id'] is None):
            raise ValueError("Missing the required parameter `look_id` when calling `delete_look`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'look_id' in params:
            path_params['look_id'] = params['look_id']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/looks/{look_id}', 'DELETE',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='str',  # noqa: E501
            auth_settings=auth_settings,
            async=params.get('async'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
    def look(self, look_id, **kwargs):  # noqa: E501
        """Get Look  # noqa: E501

        ### Get a Look. Return detailed information about the Look and its associated Query.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.look(look_id, async=True)
        >>> result = thread.get()

        :param async bool
        :param int look_id: Id of look (required)
        :param str fields: Requested fields.
        :return: LookWithQuery
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.look_with_http_info(look_id, **kwargs)  # noqa: E501
        else:
            (data) = self.look_with_http_info(look_id, **kwargs)  # noqa: E501
            return data

    def look_with_http_info(self, look_id, **kwargs):  # noqa: E501
        """Get Look  # noqa: E501

        ### Get a Look. Return detailed information about the Look and its associated Query.  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.look_with_http_info(look_id, async=True)
        >>> result = thread.get()

        :param async bool
        :param int look_id: Id of look (required)
        :param str fields: Requested fields.
        :return: LookWithQuery
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['look_id', 'fields']  # noqa: E501
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method look" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'look_id' is set
        if ('look_id' not in params or
                params['look_id'] is None):
            raise ValueError("Missing the required parameter `look_id` when calling `look`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'look_id' in params:
            path_params['look_id'] = params['look_id']  # noqa: E501

        query_params = []
        if 'fields' in params:
            query_params.append(('fields', params['fields']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/looks/{look_id}', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='LookWithQuery',  # noqa: E501
            auth_settings=auth_settings,
            async=params.get('async'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
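
    # Usage sketch (assumption: `api` is an authenticated LookApi instance; the
    # look id is made up):
    #
    #   look = api.look(42, fields='id,title,query')  # fetch the Look and its query
    #   api.delete_look(42)                           # delete by id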
    def run_look(self, look_id, result_format, **kwargs):  # noqa: E501
        """Run Look  # noqa: E501

        ### Run a Look.  # noqa: E501

        Runs a given look's query and returns the results in the requested format.

        Supported formats:

        | result_format | Description
        | :-----------: | :--- |
        | json | Plain json
        | json_detail | Row data plus metadata describing the fields, pivots, table calcs, and other aspects of the query  # noqa: E501
        | csv | Comma separated values with a header
        | txt | Tab separated values with a header
        | html | Simple html
        | md | Simple markdown
        | xlsx | MS Excel spreadsheet
        | sql | Returns the generated SQL rather than running the query
        | png | A PNG image of the visualization of the query
        | jpg | A JPG image of the visualization of the query

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.run_look(look_id, result_format, async=True)
        >>> result = thread.get()

        :param async bool
        :param int look_id: Id of look (required)
        :param str result_format: Format of result (required)
        :param int limit: Row limit (may override the limit in the saved query).
        :param bool apply_formatting: Apply model-specified formatting to each result.
        :param bool apply_vis: Apply visualization options to results.
        :param bool cache: Get results from cache if available.
        :param int image_width: Render width for image formats.
        :param int image_height: Render height for image formats.
        :param bool generate_drill_links: Generate drill links (only applicable to 'json_detail' format).
        :param bool force_production: Force use of production models even if the user is in development mode.
        :param bool cache_only: Retrieve any results from cache even if the results have expired.
        :param str path_prefix: Prefix to use for drill links (url encoded).
        :param bool rebuild_pdts: Rebuild PDTS used in query.
        :param bool server_table_calcs: Perform table calculations on query results.
        :return: str
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async'):
            return self.run_look_with_http_info(look_id, result_format, **kwargs)  # noqa: E501
        else:
            (data) = self.run_look_with_http_info(look_id, result_format, **kwargs)  # noqa: E501
            return data
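
    # Usage sketch (assumption: `api` is an authenticated LookApi instance; the
    # look id is made up):
    #
    #   csv_data = api.run_look(42, 'csv', limit=500)
    #   image = api.run_look(42, 'png', image_width=800, image_height=600)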
    def run_look_with_http_info(self, look_id, result_format, **kwargs):  # noqa: E501
        """Run Look  # noqa: E501

        ### Run a Look.  # noqa: E501

        Runs a given look's query and returns the results in the requested format.

        Supported formats:

        | result_format | Description
        | :-----------: | :--- |
        | json | Plain json
        | json_detail | Row data plus metadata describing the fields, pivots, table calcs, and other aspects of the query  # noqa: E501
        | csv | Comma separated values with a header
        | txt | Tab separated values with a header
        | html | Simple html
        | md | Simple markdown
        | xlsx | MS Excel spreadsheet
        | sql | Returns the generated SQL rather than running the query
        | png | A PNG image of the visualization of the query
        | jpg | A JPG image of the visualization of the query

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async=True
        >>> thread = api.run_look_with_http_info(look_id, result_format, async=True)
        >>> result = thread.get()

        :param async bool
        :param int look_id: Id of look (required)
        :param str result_format: Format of result (required)
        :param int limit: Row limit (may override the limit in the saved query).
        :param bool apply_formatting: Apply model-specified formatting to each result.
        :param bool apply_vis: Apply visualization options to results.
        :param bool cache: Get results from cache if available.
        :param int image_width: Render width for image formats.
        :param int image_height: Render height for image formats.
        :param bool generate_drill_links: Generate drill links (only applicable to 'json_detail' format).
        :param bool force_production: Force use of production models even if the user is in development mode.
        :param bool cache_only: Retrieve any results from cache even if the results have expired.
        :param str path_prefix: Prefix to use for drill links (url encoded).
        :param bool rebuild_pdts: Rebuild PDTS used in query.
        :param bool server_table_calcs: Perform table calculations on query results.
        :return: str
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['look_id', 'result_format', 'limit', 'apply_formatting', 'apply_vis', 'cache', 'image_width', 'image_height', 'generate_drill_links', 'force_production', 'cache_only', 'path_prefix', 'rebuild_pdts', 'server_table_calcs']  # noqa: E501
        all_params.append('async')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in six.iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method run_look" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'look_id' is set
        if ('look_id' not in params or
                params['look_id'] is None):
            raise ValueError("Missing the required parameter `look_id` when calling `run_look`")  # noqa: E501
        # verify the required parameter 'result_format' is set
        if ('result_format' not in params or
                params['result_format'] is None):
            raise ValueError("Missing the required parameter `result_format` when calling `run_look`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'look_id' in params:
            path_params['look_id'] = params['look_id']  # noqa: E501
        if 'result_format' in params:
            path_params['result_format'] = params['result_format']  # noqa: E501

        query_params = []
        if 'limit' in params:
            query_params.append(('limit', params['limit']))  # noqa: E501
        if 'apply_formatting' in params:
            query_params.append(('apply_formatting', params['apply_formatting']))  # noqa: E501
        if 'apply_vis' in params:
            query_params.append(('apply_vis', params['apply_vis']))  # noqa: E501
        if 'cache' in params:
            query_params.append(('cache', params['cache']))  # noqa: E501
        if 'image_width' in params:
            query_params.append(('image_width', params['image_width']))  # noqa: E501
        if 'image_height' in params:
            query_params.append(('image_height', params['image_height']))  # noqa: E501
        if 'generate_drill_links' in params:
            query_params.append(('generate_drill_links', params['generate_drill_links']))  # noqa: E501
        if 'force_production' in params:
            query_params.append(('force_production', params['force_production']))  # noqa: E501
        if 'cache_only' in params:
            query_params.append(('cache_only', params['cache_only']))  # noqa: E501
        if 'path_prefix' in params:
            query_params.append(('path_prefix', params['path_prefix']))  # noqa: E501
        if 'rebuild_pdts' in params:
            query_params.append(('rebuild_pdts', params['rebuild_pdts']))  # noqa: E501
        if 'server_table_calcs' in params:
            query_params.append(('server_table_calcs', params['server_table_calcs']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['text', 'application/json', 'image/png', 'image/jpg'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        return self.api_client.call_api(
            '/looks/{look_id}/run/{result_format}', 'GET',
            path_params,
            query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='str', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
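Every optional argument in `run_look` is forwarded as a URL query parameter only if the caller actually passed it. A minimal sketch of that assembly pattern (`build_query_params` is an illustrative helper, not part of the generated client):

```python
def build_query_params(kwargs, allowed):
    # Explicitly-passed, allowed kwargs become (name, value) pairs, mirroring
    # the repeated "if 'x' in params: query_params.append(...)" blocks above.
    # (The generated client raises TypeError on unknown names; this sketch
    # simply ignores them.)
    return [(name, kwargs[name]) for name in allowed if name in kwargs]

params = build_query_params(
    {"limit": 500, "cache": True},
    ["limit", "apply_formatting", "cache"],
)
# params == [("limit", 500), ("cache", True)]; apply_formatting was never
# passed, so it does not reach the query string.
```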
def search_looks(self, **kwargs): # noqa: E501
"""Search Looks # noqa: E501
Search looks. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.search_looks(async=True)
>>> result = thread.get()
:param bool async: Execute the request asynchronously.
:param str fields: Requested fields.
:param int page: Requested page.
:param int per_page: Results per page.
:param int limit: Number of results to return. (used with offset and takes priority over page and per_page)
:param int offset: Number of results to skip before returning any. (used with limit and takes priority over page and per_page)
:param str sorts: Fields to sort by.
:param str title: Match Look title.
:param str description: Match Look description.
:param int content_favorite_id: Match content favorite id.
:param str space_id: Filter on a particular space.
:param str user_id: Filter on Looks created by a particular user.
:param str view_count: Filter on a particular value of view_count.
:return: list[Look]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.search_looks_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.search_looks_with_http_info(**kwargs) # noqa: E501
return data
def search_looks_with_http_info(self, **kwargs): # noqa: E501
"""Search Looks # noqa: E501
Search looks. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.search_looks_with_http_info(async=True)
>>> result = thread.get()
:param bool async: Execute the request asynchronously.
:param str fields: Requested fields.
:param int page: Requested page.
:param int per_page: Results per page.
:param int limit: Number of results to return. (used with offset and takes priority over page and per_page)
:param int offset: Number of results to skip before returning any. (used with limit and takes priority over page and per_page)
:param str sorts: Fields to sort by.
:param str title: Match Look title.
:param str description: Match Look description.
:param int content_favorite_id: Match content favorite id.
:param str space_id: Filter on a particular space.
:param str user_id: Filter on Looks created by a particular user.
:param str view_count: Filter on a particular value of view_count.
:return: list[Look]
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['fields', 'page', 'per_page', 'limit', 'offset', 'sorts', 'title', 'description', 'content_favorite_id', 'space_id', 'user_id', 'view_count'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method search_looks" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'fields' in params:
query_params.append(('fields', params['fields'])) # noqa: E501
if 'page' in params:
query_params.append(('page', params['page'])) # noqa: E501
if 'per_page' in params:
query_params.append(('per_page', params['per_page'])) # noqa: E501
if 'limit' in params:
query_params.append(('limit', params['limit'])) # noqa: E501
if 'offset' in params:
query_params.append(('offset', params['offset'])) # noqa: E501
if 'sorts' in params:
query_params.append(('sorts', params['sorts'])) # noqa: E501
if 'title' in params:
query_params.append(('title', params['title'])) # noqa: E501
if 'description' in params:
query_params.append(('description', params['description'])) # noqa: E501
if 'content_favorite_id' in params:
query_params.append(('content_favorite_id', params['content_favorite_id'])) # noqa: E501
if 'space_id' in params:
query_params.append(('space_id', params['space_id'])) # noqa: E501
if 'user_id' in params:
query_params.append(('user_id', params['user_id'])) # noqa: E501
if 'view_count' in params:
query_params.append(('view_count', params['view_count'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/looks/search', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[Look]', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
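As the docstring notes, `limit`/`offset` take priority over `page`/`per_page`. One way to sketch that precedence, assuming a hypothetical default page size of 25 (`resolve_paging` is illustrative; the Looker server applies its own defaults):

```python
def resolve_paging(page=None, per_page=None, limit=None, offset=None,
                   default_size=25):
    # limit/offset win outright when either is supplied.
    if limit is not None or offset is not None:
        return {"limit": limit if limit is not None else default_size,
                "offset": offset if offset is not None else 0}
    # Otherwise fall back to 1-based page / per_page arithmetic.
    if page is not None or per_page is not None:
        size = per_page if per_page is not None else default_size
        return {"limit": size, "offset": ((page or 1) - 1) * size}
    return {}
```

For example, `resolve_paging(limit=5, page=3, per_page=50)` ignores the page arguments entirely, matching the documented priority.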
def update_look(self, look_id, body, **kwargs): # noqa: E501
"""Update Look # noqa: E501
### Update the Look with a specific id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.update_look(look_id, body, async=True)
>>> result = thread.get()
:param bool async: Execute the request asynchronously.
:param int look_id: Id of look (required)
:param LookWithQuery body: Look (required)
:param str fields: Requested fields.
:return: LookWithQuery
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.update_look_with_http_info(look_id, body, **kwargs) # noqa: E501
else:
(data) = self.update_look_with_http_info(look_id, body, **kwargs) # noqa: E501
return data
def update_look_with_http_info(self, look_id, body, **kwargs): # noqa: E501
"""Update Look # noqa: E501
### Update the Look with a specific id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.update_look_with_http_info(look_id, body, async=True)
>>> result = thread.get()
:param bool async: Execute the request asynchronously.
:param int look_id: Id of look (required)
:param LookWithQuery body: Look (required)
:param str fields: Requested fields.
:return: LookWithQuery
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['look_id', 'body', 'fields'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method update_look" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'look_id' is set
if ('look_id' not in params or
params['look_id'] is None):
raise ValueError("Missing the required parameter `look_id` when calling `update_look`") # noqa: E501
# verify the required parameter 'body' is set
if ('body' not in params or
params['body'] is None):
raise ValueError("Missing the required parameter `body` when calling `update_look`") # noqa: E501
collection_formats = {}
path_params = {}
if 'look_id' in params:
path_params['look_id'] = params['look_id'] # noqa: E501
query_params = []
if 'fields' in params:
query_params.append(('fields', params['fields'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/looks/{look_id}', 'PATCH',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='LookWithQuery', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
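`update_look_with_http_info` guards its required parameters with the same pattern `run_look` uses; the check in isolation (`require` is an illustrative name, not part of the client):

```python
def require(params, names, method):
    # Mirror the generated "verify the required parameter ... is set" blocks.
    for name in names:
        if params.get(name) is None:
            raise ValueError(
                "Missing the required parameter `%s` when calling `%s`"
                % (name, method))

try:
    require({"look_id": 7, "body": None}, ["look_id", "body"], "update_look")
except ValueError:
    pass  # body is None, so the call is rejected before any HTTP request
```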
# --- lifelines/__init__.py (fmfn/lifelines) ---
# -*- coding: utf-8 -*-
from .estimation import KaplanMeierFitter, NelsonAalenFitter, \
AalenAdditiveFitter, BreslowFlemingHarringtonFitter, CoxPHFitter, \
WeibullFitter, ExponentialFitter, SBGSurvival
import lifelines.datasets
from .version import __version__
__all__ = ['KaplanMeierFitter', 'NelsonAalenFitter', 'AalenAdditiveFitter',
'BreslowFlemingHarringtonFitter', 'CoxPHFitter', 'WeibullFitter',
'ExponentialFitter', 'SBGSurvival']
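The fitters re-exported above implement standard survival estimators. As a dependency-free point of reference, here is a sketch of the Kaplan-Meier product-limit estimate that `KaplanMeierFitter` computes (`kaplan_meier` is illustrative, not the lifelines API):

```python
def kaplan_meier(durations, events):
    """Product-limit estimate: S(t) is the product over event times t_i <= t
    of (1 - d_i / n_i); censored subjects (event == 0) only shrink the
    risk set, they contribute no factor."""
    pairs = sorted(zip(durations, events))
    n_at_risk = len(pairs)
    surv = 1.0
    curve = {}
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = removed = 0
        while i < len(pairs) and pairs[i][0] == t:
            deaths += pairs[i][1]
            removed += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / n_at_risk
        curve[t] = surv
        n_at_risk -= removed
    return curve

km = kaplan_meier([1, 2, 3, 4, 5], [1, 0, 1, 1, 0])
# km[1] == 0.8; the censored times 2 and 5 leave the curve flat.
```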
# --- eeauditor/auditors/aws/AWS_IAM_Auditor.py (kbhagi/ElectricEye) ---
#This file is part of ElectricEye.
#SPDX-License-Identifier: Apache-2.0
#Licensed to the Apache Software Foundation (ASF) under one
#or more contributor license agreements. See the NOTICE file
#distributed with this work for additional information
#regarding copyright ownership. The ASF licenses this file
#to you under the Apache License, Version 2.0 (the
#"License"); you may not use this file except in compliance
#with the License. You may obtain a copy of the License at
#http://www.apache.org/licenses/LICENSE-2.0
#Unless required by applicable law or agreed to in writing,
#software distributed under the License is distributed on an
#"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
#KIND, either express or implied. See the License for the
#specific language governing permissions and limitations
#under the License.
import boto3
import datetime
from check_register import CheckRegister
import json
registry = CheckRegister()
# import boto3 clients
iam = boto3.client("iam")
# loop through IAM users
def list_users(cache):
response = cache.get("list_users")
if response:
return response
cache["list_users"] = iam.list_users(MaxItems=1000)
return cache["list_users"]
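`list_users` memoizes the API response so every check in this auditor shares a single `iam.list_users` call. The pattern demonstrated with a stand-in client (`FakeIAM` is hypothetical, purely for illustration):

```python
class FakeIAM:
    """Stand-in for the boto3 IAM client; not part of the auditor."""
    def __init__(self):
        self.calls = 0
    def list_users(self, MaxItems=1000):
        self.calls += 1
        return {"Users": [{"UserName": "alice",
                           "Arn": "arn:aws:iam::111122223333:user/alice"}]}

def list_users_cached(cache, iam):
    # Same shape as list_users above: one API call, then every subsequent
    # check reads the cached response.
    response = cache.get("list_users")
    if response:
        return response
    cache["list_users"] = iam.list_users(MaxItems=1000)
    return cache["list_users"]

iam_client, cache = FakeIAM(), {}
first = list_users_cached(cache, iam_client)
second = list_users_cached(cache, iam_client)
# iam_client.calls == 1: the second lookup is served from the cache.
```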
@registry.register_check("iam")
def iam_access_key_age_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str) -> dict:
"""[IAM.1] IAM Access Keys should be rotated every 90 days"""
user = list_users(cache=cache)
for users in user["Users"]:
userName = str(users["UserName"])
userArn = str(users["Arn"])
try:
response = iam.list_access_keys(UserName=userName)
for keys in response["AccessKeyMetadata"]:
keyUserName = str(keys["UserName"])
keyId = str(keys["AccessKeyId"])
keyStatus = str(keys["Status"])
# ISO Time
iso8601Time = (
datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat()
)
if keyStatus == "Active":
keyCreateDate = keys["CreateDate"]
todaysDatetime = datetime.datetime.now(datetime.timezone.utc)
keyAgeFinder = todaysDatetime - keyCreateDate
if keyAgeFinder <= datetime.timedelta(days=90):
# this is a passing check
finding = {
"SchemaVersion": "2018-10-08",
"Id": keyUserName + keyId + "/iam-access-key-age-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": userArn + keyId,
"AwsAccountId": awsAccountId,
"Types": [
"Software and Configuration Checks/AWS Security Best Practices"
],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "INFORMATIONAL"},
"Confidence": 99,
"Title": "[IAM.1] IAM Access Keys should be rotated every 90 days",
"Description": "IAM access key "
+ keyId
+ " for user "
+ keyUserName
+ " is not over 90 days old.",
"Remediation": {
"Recommendation": {
"Text": "For information on IAM access key rotation refer to the Rotating Access Keys section of the AWS IAM User Guide",
"Url": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_RotateAccessKey",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsIamAccessKey",
"Id": userArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {
"AwsIamAccessKey": {
"PrincipalId": keyId,
"PrincipalName": keyUserName,
"Status": keyStatus,
}
},
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [
"NIST CSF PR.AC-1",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-2",
"NIST SP 800-53 IA-1",
"NIST SP 800-53 IA-2",
"NIST SP 800-53 IA-3",
"NIST SP 800-53 IA-4",
"NIST SP 800-53 IA-5",
"NIST SP 800-53 IA-6",
"NIST SP 800-53 IA-7",
"NIST SP 800-53 IA-8",
"NIST SP 800-53 IA-9",
"NIST SP 800-53 IA-10",
"NIST SP 800-53 IA-11",
"AICPA TSC CC6.1",
"AICPA TSC CC6.2",
"ISO 27001:2013 A.9.2.1",
"ISO 27001:2013 A.9.2.2",
"ISO 27001:2013 A.9.2.3",
"ISO 27001:2013 A.9.2.4",
"ISO 27001:2013 A.9.2.6",
"ISO 27001:2013 A.9.3.1",
"ISO 27001:2013 A.9.4.2",
"ISO 27001:2013 A.9.4.3",
],
},
"Workflow": {"Status": "RESOLVED"},
"RecordState": "ARCHIVED",
}
yield finding
else:
finding = {
"SchemaVersion": "2018-10-08",
"Id": keyUserName + keyId + "/iam-access-key-age-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": userArn + keyId,
"AwsAccountId": awsAccountId,
"Types": [
"Software and Configuration Checks/AWS Security Best Practices"
],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "MEDIUM"},
"Confidence": 99,
"Title": "[IAM.1] IAM Access Keys should be rotated every 90 days",
"Description": "IAM access key "
+ keyId
+ " for user "
+ keyUserName
+ " is over 90 days old. As a security best practice, AWS recommends that you regularly rotate (change) IAM user access keys. If your administrator granted you the necessary permissions, you can rotate your own access keys. Refer to the remediation section to remediate this behavior.",
"Remediation": {
"Recommendation": {
"Text": "For information on IAM access key rotation refer to the Rotating Access Keys section of the AWS IAM User Guide",
"Url": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_RotateAccessKey",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsIamAccessKey",
"Id": userArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {
"AwsIamAccessKey": {
"PrincipalId": keyId,
"PrincipalName": keyUserName,
"Status": keyStatus,
}
},
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF PR.AC-1",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-2",
"NIST SP 800-53 IA-1",
"NIST SP 800-53 IA-2",
"NIST SP 800-53 IA-3",
"NIST SP 800-53 IA-4",
"NIST SP 800-53 IA-5",
"NIST SP 800-53 IA-6",
"NIST SP 800-53 IA-7",
"NIST SP 800-53 IA-8",
"NIST SP 800-53 IA-9",
"NIST SP 800-53 IA-10",
"NIST SP 800-53 IA-11",
"AICPA TSC CC6.1",
"AICPA TSC CC6.2",
"ISO 27001:2013 A.9.2.1",
"ISO 27001:2013 A.9.2.2",
"ISO 27001:2013 A.9.2.3",
"ISO 27001:2013 A.9.2.4",
"ISO 27001:2013 A.9.2.6",
"ISO 27001:2013 A.9.3.1",
"ISO 27001:2013 A.9.4.2",
"ISO 27001:2013 A.9.4.3",
],
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE",
}
yield finding
else:
pass
except Exception as e:
print(e)
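The pass/fail boundary above is a 90-day key age; the comparison in isolation (`key_is_stale` is an illustrative helper, not part of the auditor):

```python
import datetime

def key_is_stale(create_date, max_age_days=90, now=None):
    # An Active access key older than max_age_days fails the check above;
    # a key exactly max_age_days old still passes (the auditor uses <=).
    now = now or datetime.datetime.now(datetime.timezone.utc)
    return (now - create_date) > datetime.timedelta(days=max_age_days)

now = datetime.datetime.now(datetime.timezone.utc)
old_key = now - datetime.timedelta(days=91)
fresh_key = now - datetime.timedelta(days=10)
```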
@registry.register_check("iam")
def user_permission_boundary_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str) -> dict:
"""[IAM.2] IAM users should have permissions boundaries attached"""
user = list_users(cache=cache)
for users in user["Users"]:
userName = str(users["UserName"])
userArn = str(users["Arn"])
# ISO Time
iso8601Time = datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat()
try:
permBoundaryArn = str(users["PermissionsBoundary"]["PermissionsBoundaryArn"])
# this is a passing check
finding = {
"SchemaVersion": "2018-10-08",
"Id": userArn + "/iam-user-permissions-boundary-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": userArn,
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "INFORMATIONAL"},
"Confidence": 99,
"Title": "[IAM.2] IAM users should have permissions boundaries attached",
"Description": "IAM user " + userName + " has a permissions boundary attached.",
"Remediation": {
"Recommendation": {
"Text": "For information on permissions boundaries refer to the Permissions Boundaries for IAM Entities section of the AWS IAM User Guide",
"Url": "https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsIamUser",
"Id": userArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {
"Other": {
"PrincipalName": userName,
"permissionsBoundaryArn": permBoundaryArn,
}
},
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [
"NIST CSF PR.AC-4",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-2",
"NIST SP 800-53 AC-3",
"NIST SP 800-53 AC-5",
"NIST SP 800-53 AC-6",
"NIST SP 800-53 AC-14",
"NIST SP 800-53 AC-16",
"NIST SP 800-53 AC-24",
"AICPA TSC CC6.3",
"ISO 27001:2013 A.6.1.2",
"ISO 27001:2013 A.9.1.2",
"ISO 27001:2013 A.9.2.3",
"ISO 27001:2013 A.9.4.1",
"ISO 27001:2013 A.9.4.4",
"ISO 27001:2013 A.9.4.5",
],
},
"Workflow": {"Status": "RESOLVED"},
"RecordState": "ARCHIVED",
}
yield finding
except Exception as e:
if str(e) == "'PermissionsBoundary'":
finding = {
"SchemaVersion": "2018-10-08",
"Id": userArn + "/iam-user-permissions-boundary-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": userArn,
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "MEDIUM"},
"Confidence": 99,
"Title": "[IAM.2] IAM users should have permissions boundaries attached",
"Description": "IAM user "
+ userName
+ " does not have a permissions boundary attached. A permissions boundary is an advanced feature for using a managed policy to set the maximum permissions that an identity-based policy can grant to an IAM entity. A permissions boundary allows it to perform only the actions that are allowed by both its identity-based policies and its permissions boundaries. Refer to the remediation section to remediate this behavior.",
"Remediation": {
"Recommendation": {
"Text": "For information on permissions boundaries refer to the Permissions Boundaries for IAM Entities section of the AWS IAM User Guide",
"Url": "https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsIamUser",
"Id": userArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {"Other": {"PrincipalName": userName}},
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF PR.AC-4",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-2",
"NIST SP 800-53 AC-3",
"NIST SP 800-53 AC-5",
"NIST SP 800-53 AC-6",
"NIST SP 800-53 AC-14",
"NIST SP 800-53 AC-16",
"NIST SP 800-53 AC-24",
"AICPA TSC CC6.3",
"ISO 27001:2013 A.6.1.2",
"ISO 27001:2013 A.9.1.2",
"ISO 27001:2013 A.9.2.3",
"ISO 27001:2013 A.9.4.1",
"ISO 27001:2013 A.9.4.4",
"ISO 27001:2013 A.9.4.5",
],
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE",
}
yield finding
else:
print(e)
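The except branch above distinguishes "no permissions boundary" from real failures by string-matching the KeyError raised when the `PermissionsBoundary` key is absent; the mechanics in isolation:

```python
user = {"UserName": "alice"}  # a user record with no PermissionsBoundary key

try:
    user["PermissionsBoundary"]["PermissionsBoundaryArn"]
except Exception as e:
    # A missing dict key raises KeyError, which stringifies to the quoted
    # key name, exactly the value the auditor's except branch compares
    # against before emitting a FAILED finding.
    message = str(e)
```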
@registry.register_check("iam")
def user_mfa_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str) -> dict:
"""[IAM.3] IAM users should have Multi-Factor Authentication (MFA) enabled"""
user = list_users(cache=cache)
for users in user["Users"]:
userName = str(users["UserName"])
userArn = str(users["Arn"])
# ISO Time
iso8601Time = datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat()
try:
response = iam.list_mfa_devices(UserName=userName)
if not response["MFADevices"]:
finding = {
"SchemaVersion": "2018-10-08",
"Id": userArn + "/iam-user-mfa-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": userArn,
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "MEDIUM"},
"Confidence": 99,
"Title": "[IAM.3] IAM users should have Multi-Factor Authentication (MFA) enabled",
"Description": "IAM user "
+ userName
+ " does not have MFA enabled. For increased security, AWS recommends that you configure multi-factor authentication (MFA) to help protect your AWS resources. Refer to the remediation section to remediate this behavior.",
"Remediation": {
"Recommendation": {
"Text": "For information on MFA refer to the Using Multi-Factor Authentication (MFA) in AWS section of the AWS IAM User Guide",
"Url": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa.html",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsIamUser",
"Id": userArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {"Other": {"PrincipalName": userName}},
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF PR.AC-1",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-2",
"NIST SP 800-53 IA-1",
"NIST SP 800-53 IA-2",
"NIST SP 800-53 IA-3",
"NIST SP 800-53 IA-4",
"NIST SP 800-53 IA-5",
"NIST SP 800-53 IA-6",
"NIST SP 800-53 IA-7",
"NIST SP 800-53 IA-8",
"NIST SP 800-53 IA-9",
"NIST SP 800-53 IA-10",
"NIST SP 800-53 IA-11",
"AICPA TSC CC6.1",
"AICPA TSC CC6.2",
"ISO 27001:2013 A.9.2.1",
"ISO 27001:2013 A.9.2.2",
"ISO 27001:2013 A.9.2.3",
"ISO 27001:2013 A.9.2.4",
"ISO 27001:2013 A.9.2.6",
"ISO 27001:2013 A.9.3.1",
"ISO 27001:2013 A.9.4.2",
"ISO 27001:2013 A.9.4.3",
],
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE",
}
yield finding
else:
finding = {
"SchemaVersion": "2018-10-08",
"Id": userArn + "/iam-user-mfa-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": userArn,
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "INFORMATIONAL"},
"Confidence": 99,
"Title": "[IAM.3] IAM users should have Multi-Factor Authentication (MFA) enabled",
"Description": "IAM user " + userName + " has MFA enabled.",
"Remediation": {
"Recommendation": {
"Text": "For information on MFA refer to the Using Multi-Factor Authentication (MFA) in AWS section of the AWS IAM User Guide",
"Url": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa.html",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsIamUser",
"Id": userArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {"Other": {"PrincipalName": userName}},
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [
"NIST CSF PR.AC-1",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-2",
"NIST SP 800-53 IA-1",
"NIST SP 800-53 IA-2",
"NIST SP 800-53 IA-3",
"NIST SP 800-53 IA-4",
"NIST SP 800-53 IA-5",
"NIST SP 800-53 IA-6",
"NIST SP 800-53 IA-7",
"NIST SP 800-53 IA-8",
"NIST SP 800-53 IA-9",
"NIST SP 800-53 IA-10",
"NIST SP 800-53 IA-11",
"AICPA TSC CC6.1",
"AICPA TSC CC6.2",
"ISO 27001:2013 A.9.2.1",
"ISO 27001:2013 A.9.2.2",
"ISO 27001:2013 A.9.2.3",
"ISO 27001:2013 A.9.2.4",
"ISO 27001:2013 A.9.2.6",
"ISO 27001:2013 A.9.3.1",
"ISO 27001:2013 A.9.4.2",
"ISO 27001:2013 A.9.4.3",
],
},
"Workflow": {"Status": "RESOLVED"},
"RecordState": "ARCHIVED",
}
yield finding
except Exception as e:
print(e)
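Every finding in this auditor stamps FirstObservedAt/CreatedAt/UpdatedAt with the same UTC ISO-8601 timestamp; the expression in isolation:

```python
import datetime

# Naive utcnow() is made timezone-aware before formatting, so the
# resulting string carries an explicit +00:00 UTC offset.
iso8601Time = (
    datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat()
)
```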
@registry.register_check("iam")
def user_inline_policy_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str) -> dict:
"""[IAM.4] IAM users should not have attached in-line policies"""
user = list_users(cache=cache)
allUsers = user["Users"]
for users in allUsers:
userName = str(users["UserName"])
userArn = str(users["Arn"])
# ISO Time
iso8601Time = datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat()
try:
response = iam.list_user_policies(UserName=userName)
if response["PolicyNames"]:
finding = {
"SchemaVersion": "2018-10-08",
"Id": userArn + "/iam-user-attach-inline-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": userArn,
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "LOW"},
"Confidence": 99,
"Title": "[IAM.4] IAM users should not have attached in-line policies",
"Description": "IAM user "
+ userName
+ " has an in-line policy attached. It is recommended that IAM policies be applied directly to groups and roles but not users. Refer to the remediation section to remediate this behavior.",
"Remediation": {
"Recommendation": {
"Text": "For information on user attached policies refer to the Managed Policies and Inline Policies section of the AWS IAM User Guide",
"Url": "https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsIamUser",
"Id": userArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {"Other": {"PrincipalName": userName}},
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF PR.AC-1",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-2",
"NIST SP 800-53 IA-1",
"NIST SP 800-53 IA-2",
"NIST SP 800-53 IA-3",
"NIST SP 800-53 IA-4",
"NIST SP 800-53 IA-5",
"NIST SP 800-53 IA-6",
"NIST SP 800-53 IA-7",
"NIST SP 800-53 IA-8",
"NIST SP 800-53 IA-9",
"NIST SP 800-53 IA-10",
"NIST SP 800-53 IA-11",
"AICPA TSC CC6.1",
"AICPA TSC CC6.2",
"ISO 27001:2013 A.9.2.1",
"ISO 27001:2013 A.9.2.2",
"ISO 27001:2013 A.9.2.3",
"ISO 27001:2013 A.9.2.4",
"ISO 27001:2013 A.9.2.6",
"ISO 27001:2013 A.9.3.1",
"ISO 27001:2013 A.9.4.2",
"ISO 27001:2013 A.9.4.3",
],
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE",
}
yield finding
else:
finding = {
"SchemaVersion": "2018-10-08",
"Id": userArn + "/iam-user-attach-inline-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": userArn,
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "INFORMATIONAL"},
"Confidence": 99,
"Title": "[IAM.4] IAM users should not have attached in-line policies",
"Description": "IAM user "
+ userName
+ " does not have an in-line policy attached.",
"Remediation": {
"Recommendation": {
"Text": "For information on user attached policies refer to the Managed Policies and Inline Policies section of the AWS IAM User Guide",
"Url": "https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsIamUser",
"Id": userArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {"Other": {"PrincipalName": userName}},
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [
"NIST CSF PR.AC-1",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-2",
"NIST SP 800-53 IA-1",
"NIST SP 800-53 IA-2",
"NIST SP 800-53 IA-3",
"NIST SP 800-53 IA-4",
"NIST SP 800-53 IA-5",
"NIST SP 800-53 IA-6",
"NIST SP 800-53 IA-7",
"NIST SP 800-53 IA-8",
"NIST SP 800-53 IA-9",
"NIST SP 800-53 IA-10",
"NIST SP 800-53 IA-11",
"AICPA TSC CC6.1",
"AICPA TSC CC6.2",
"ISO 27001:2013 A.9.2.1",
"ISO 27001:2013 A.9.2.2",
"ISO 27001:2013 A.9.2.3",
"ISO 27001:2013 A.9.2.4",
"ISO 27001:2013 A.9.2.6",
"ISO 27001:2013 A.9.3.1",
"ISO 27001:2013 A.9.4.2",
"ISO 27001:2013 A.9.4.3",
],
},
"Workflow": {"Status": "RESOLVED"},
"RecordState": "ARCHIVED",
}
yield finding
except Exception as e:
print(e)
@registry.register_check("iam")
def user_direct_attached_policy_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str) -> dict:
"""[IAM.5] IAM users should not have attached managed policies"""
user = list_users(cache=cache)
allUsers = user["Users"]
for users in allUsers:
userName = str(users["UserName"])
userArn = str(users["Arn"])
# ISO Time
iso8601Time = datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat()
try:
response = iam.list_attached_user_policies(UserName=userName)
if str(response["AttachedPolicies"]) != "[]":
finding = {
"SchemaVersion": "2018-10-08",
"Id": userArn + "/iam-user-attach-managed-policy-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": userArn,
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "LOW"},
"Confidence": 99,
"Title": "[IAM.5] IAM users should not have attached managed policies",
"Description": "IAM user "
+ userName
+ " has a managed policy attached. It is recommended that IAM policies be applied directly to groups and roles but not users. Refer to the remediation section to remediate this behavior.",
"Remediation": {
"Recommendation": {
"Text": "For information on user attached policies refer to the Managed Policies and Inline Policies section of the AWS IAM User Guide",
"Url": "https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsIamUser",
"Id": userArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {"Other": {"PrincipalName": userName}},
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF PR.AC-1",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-2",
"NIST SP 800-53 IA-1",
"NIST SP 800-53 IA-2",
"NIST SP 800-53 IA-3",
"NIST SP 800-53 IA-4",
"NIST SP 800-53 IA-5",
"NIST SP 800-53 IA-6",
"NIST SP 800-53 IA-7",
"NIST SP 800-53 IA-8",
"NIST SP 800-53 IA-9",
"NIST SP 800-53 IA-10",
"NIST SP 800-53 IA-11",
"AICPA TSC CC6.1",
"AICPA TSC CC6.2",
"ISO 27001:2013 A.9.2.1",
"ISO 27001:2013 A.9.2.2",
"ISO 27001:2013 A.9.2.3",
"ISO 27001:2013 A.9.2.4",
"ISO 27001:2013 A.9.2.6",
"ISO 27001:2013 A.9.3.1",
"ISO 27001:2013 A.9.4.2",
"ISO 27001:2013 A.9.4.3",
],
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE",
}
yield finding
else:
finding = {
"SchemaVersion": "2018-10-08",
"Id": userArn + "/iam-user-attach-managed-policy-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": userArn,
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "INFORMATIONAL"},
"Confidence": 99,
"Title": "[IAM.5] IAM users should not have attached managed policies",
"Description": "IAM user "
+ userName
+ " does not have a managed policy attached.",
"Remediation": {
"Recommendation": {
"Text": "For information on user attached policies refer to the Managed Policies and Inline Policies section of the AWS IAM User Guide",
"Url": "https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsIamUser",
"Id": userArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {"Other": {"PrincipalName": userName}},
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [
"NIST CSF PR.AC-1",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-2",
"NIST SP 800-53 IA-1",
"NIST SP 800-53 IA-2",
"NIST SP 800-53 IA-3",
"NIST SP 800-53 IA-4",
"NIST SP 800-53 IA-5",
"NIST SP 800-53 IA-6",
"NIST SP 800-53 IA-7",
"NIST SP 800-53 IA-8",
"NIST SP 800-53 IA-9",
"NIST SP 800-53 IA-10",
"NIST SP 800-53 IA-11",
"AICPA TSC CC6.1",
"AICPA TSC CC6.2",
"ISO 27001:2013 A.9.2.1",
"ISO 27001:2013 A.9.2.2",
"ISO 27001:2013 A.9.2.3",
"ISO 27001:2013 A.9.2.4",
"ISO 27001:2013 A.9.2.6",
"ISO 27001:2013 A.9.3.1",
"ISO 27001:2013 A.9.4.2",
"ISO 27001:2013 A.9.4.3",
],
},
"Workflow": {"Status": "RESOLVED"},
"RecordState": "ARCHIVED",
}
yield finding
except Exception as e:
print(e)
@registry.register_check("iam")
def cis_aws_foundation_benchmark_pw_policy_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str) -> dict:
"""[IAM.6] The IAM password policy should meet or exceed the AWS CIS Foundations Benchmark standard"""
try:
# TODO: if no policy is found, this will throw an exception in
# which case we need to create an ACTIVE finding
response = iam.get_account_password_policy()
pwPolicy = response["PasswordPolicy"]
minPwLength = int(pwPolicy["MinimumPasswordLength"])
symbolReq = str(pwPolicy["RequireSymbols"])
numberReq = str(pwPolicy["RequireNumbers"])
uppercaseReq = str(pwPolicy["RequireUppercaseCharacters"])
lowercaseReq = str(pwPolicy["RequireLowercaseCharacters"])
maxPwAge = int(pwPolicy["MaxPasswordAge"])
pwReuse = int(pwPolicy["PasswordReusePrevention"])
# ISO Time
iso8601Time = datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat()
if (
minPwLength >= 14
and maxPwAge <= 90
and pwReuse >= 24
and symbolReq == "True"
and numberReq == "True"
and uppercaseReq == "True"
and lowercaseReq == "True"
):
finding = {
"SchemaVersion": "2018-10-08",
"Id": awsAccountId + "/cis-aws-foundations-benchmark-pw-policy-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": awsAccountId + "iam-password-policy",
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "INFORMATIONAL"},
"Confidence": 99,
"Title": "[IAM.6] The IAM password policy should meet or exceed the AWS CIS Foundations Benchmark standard",
"Description": "The IAM password policy for account "
+ awsAccountId
+ " meets or exceeds the AWS CIS Foundations Benchmark standard.",
"Remediation": {
"Recommendation": {
"Text": "For information on the CIS AWS Foundations Benchmark standard for the password policy refer to the linked Standard",
"Url": "https://d1.awsstatic.com/whitepapers/compliance/AWS_CIS_Foundations_Benchmark.pdf",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsAccount",
"Id": f"{awsPartition.upper()}::::Account:{awsAccountId}",
"Partition": awsPartition,
"Region": awsRegion,
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [
"NIST CSF PR.AC-1",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-2",
"NIST SP 800-53 IA-1",
"NIST SP 800-53 IA-2",
"NIST SP 800-53 IA-3",
"NIST SP 800-53 IA-4",
"NIST SP 800-53 IA-5",
"NIST SP 800-53 IA-6",
"NIST SP 800-53 IA-7",
"NIST SP 800-53 IA-8",
"NIST SP 800-53 IA-9",
"NIST SP 800-53 IA-10",
"NIST SP 800-53 IA-11",
"AICPA TSC CC6.1",
"AICPA TSC CC6.2",
"ISO 27001:2013 A.9.2.1",
"ISO 27001:2013 A.9.2.2",
"ISO 27001:2013 A.9.2.3",
"ISO 27001:2013 A.9.2.4",
"ISO 27001:2013 A.9.2.6",
"ISO 27001:2013 A.9.3.1",
"ISO 27001:2013 A.9.4.2",
"ISO 27001:2013 A.9.4.3",
],
},
"Workflow": {"Status": "RESOLVED"},
"RecordState": "ARCHIVED",
}
yield finding
else:
finding = {
"SchemaVersion": "2018-10-08",
"Id": awsAccountId + "/cis-aws-foundations-benchmark-pw-policy-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": awsAccountId + "iam-password-policy",
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "MEDIUM"},
"Confidence": 99,
"Title": "[IAM.6] The IAM password policy should meet or exceed the AWS CIS Foundations Benchmark standard",
"Description": "The IAM password policy for account "
+ awsAccountId
+ " does not meet the AWS CIS Foundations Benchmark standard. Refer to the remediation instructions if this configuration is not intended.",
"Remediation": {
"Recommendation": {
"Text": "For information on the CIS AWS Foundations Benchmark standard for the password policy refer to the linked Standard",
"Url": "https://d1.awsstatic.com/whitepapers/compliance/AWS_CIS_Foundations_Benchmark.pdf",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsAccount",
"Id": f"{awsPartition.upper()}::::Account:{awsAccountId}",
"Partition": awsPartition,
"Region": awsRegion,
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF PR.AC-1",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-2",
"NIST SP 800-53 IA-1",
"NIST SP 800-53 IA-2",
"NIST SP 800-53 IA-3",
"NIST SP 800-53 IA-4",
"NIST SP 800-53 IA-5",
"NIST SP 800-53 IA-6",
"NIST SP 800-53 IA-7",
"NIST SP 800-53 IA-8",
"NIST SP 800-53 IA-9",
"NIST SP 800-53 IA-10",
"NIST SP 800-53 IA-11",
"AICPA TSC CC6.1",
"AICPA TSC CC6.2",
"ISO 27001:2013 A.9.2.1",
"ISO 27001:2013 A.9.2.2",
"ISO 27001:2013 A.9.2.3",
"ISO 27001:2013 A.9.2.4",
"ISO 27001:2013 A.9.2.6",
"ISO 27001:2013 A.9.3.1",
"ISO 27001:2013 A.9.4.2",
"ISO 27001:2013 A.9.4.3",
],
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE",
}
yield finding
except Exception as e:
print(e)
@registry.register_check("iam")
def server_certs_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str) -> dict:
"""[IAM.7] There should not be any server certificates stored in AWS IAM"""
try:
response = iam.list_server_certificates()
# ISO Time
iso8601Time = datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat()
if str(response["ServerCertificateMetadataList"]) != "[]":
finding = {
"SchemaVersion": "2018-10-08",
"Id": awsAccountId + "/server-x509-certs-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": awsAccountId + "server-cert",
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "MEDIUM"},
"Confidence": 99,
"Title": "[IAM.7] There should not be any server certificates stored in AWS IAM",
"Description": "There are server certificates stored in AWS IAM for the account "
+ awsAccountId
+ ". ACM is the preferred tool to provision, manage, and deploy your server certificates. With ACM you can request a certificate or deploy an existing ACM or external certificate to AWS resources. Certificates provided by ACM are free and automatically renew. Refer to the remediation instructions if this configuration is not intended.",
"Remediation": {
"Recommendation": {
"Text": "For information on server certificates refer to the Working with Server Certificates section of the AWS IAM User Guide",
"Url": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsAccount",
"Id": f"{awsPartition.upper()}::::Account:{awsAccountId}",
"Partition": awsPartition,
"Region": awsRegion,
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF PR.AC-1",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-2",
"NIST SP 800-53 IA-1",
"NIST SP 800-53 IA-2",
"NIST SP 800-53 IA-3",
"NIST SP 800-53 IA-4",
"NIST SP 800-53 IA-5",
"NIST SP 800-53 IA-6",
"NIST SP 800-53 IA-7",
"NIST SP 800-53 IA-8",
"NIST SP 800-53 IA-9",
"NIST SP 800-53 IA-10",
"NIST SP 800-53 IA-11",
"AICPA TSC CC6.1",
"AICPA TSC CC6.2",
"ISO 27001:2013 A.9.2.1",
"ISO 27001:2013 A.9.2.2",
"ISO 27001:2013 A.9.2.3",
"ISO 27001:2013 A.9.2.4",
"ISO 27001:2013 A.9.2.6",
"ISO 27001:2013 A.9.3.1",
"ISO 27001:2013 A.9.4.2",
"ISO 27001:2013 A.9.4.3",
],
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE",
}
yield finding
else:
finding = {
"SchemaVersion": "2018-10-08",
"Id": awsAccountId + "/server-x509-certs-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": awsAccountId + "server-cert",
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "INFORMATIONAL"},
"Confidence": 99,
"Title": "[IAM.7] There should not be any server certificates stored in AWS IAM",
"Description": "There are not server certificates stored in AWS IAM for the account "
+ awsAccountId
+ ".",
"Remediation": {
"Recommendation": {
"Text": "For information on server certificates refer to the Working with Server Certificates section of the AWS IAM User Guide",
"Url": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsAccount",
"Id": f"{awsPartition.upper()}::::Account:{awsAccountId}",
"Partition": awsPartition,
"Region": awsRegion,
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [
"NIST CSF PR.AC-1",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-2",
"NIST SP 800-53 IA-1",
"NIST SP 800-53 IA-2",
"NIST SP 800-53 IA-3",
"NIST SP 800-53 IA-4",
"NIST SP 800-53 IA-5",
"NIST SP 800-53 IA-6",
"NIST SP 800-53 IA-7",
"NIST SP 800-53 IA-8",
"NIST SP 800-53 IA-9",
"NIST SP 800-53 IA-10",
"NIST SP 800-53 IA-11",
"AICPA TSC CC6.1",
"AICPA TSC CC6.2",
"ISO 27001:2013 A.9.2.1",
"ISO 27001:2013 A.9.2.2",
"ISO 27001:2013 A.9.2.3",
"ISO 27001:2013 A.9.2.4",
"ISO 27001:2013 A.9.2.6",
"ISO 27001:2013 A.9.3.1",
"ISO 27001:2013 A.9.4.2",
"ISO 27001:2013 A.9.4.3",
],
},
"Workflow": {"Status": "RESOLVED"},
"RecordState": "ARCHIVED",
}
yield finding
except Exception as e:
print(e)
@registry.register_check("iam")
def iam_mngd_policy_least_priv_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str) -> dict:
"""[IAM.8] Managed policies should follow least privilege principles"""
try:
policies = iam.list_policies(Scope='Local')
for mngd_policy in policies['Policies']:
policy_arn = mngd_policy['Arn']
version_id = mngd_policy['DefaultVersionId']
policy_doc = iam.get_policy_version(
PolicyArn=policy_arn,
VersionId=version_id
)['PolicyVersion']['Document']
# Handle policy documents returned as JSON strings
if isinstance(policy_doc, str):
policy_doc = json.loads(policy_doc)
least_priv_rating = 'passing'
for statement in policy_doc['Statement']:
if statement["Effect"] == 'Allow':
if statement.get('Condition') is None:
# The Action element may be a single string or a list of strings
if isinstance(statement['Action'], list):
if any(":*" in x or x == '*' for x in statement['Action']):
if statement['Resource'] == '*':
least_priv_rating = 'failed_high'
# Stop evaluating statements so this high-severity failure is not overwritten by a lower rating later
break
elif isinstance(statement['Resource'], list):
least_priv_rating = 'failed_low'
# Single action in a statement
elif isinstance(statement['Action'], str):
if ":*" in statement['Action'] or statement['Action'] == '*':
if statement['Resource'] == '*':
least_priv_rating = 'failed_high'
# Stop evaluating statements so this high-severity failure is not overwritten by a lower rating later
break
elif isinstance(statement['Resource'], list):
least_priv_rating = 'failed_low'
iso8601Time = datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat()
if least_priv_rating == 'passing':
finding = {
"SchemaVersion": "2018-10-08",
"Id": policy_arn + "/mngd_policy_least_priv",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": policy_arn + "mngd_policy_least_priv",
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "INFORMATIONAL"},
"Confidence": 99,
"Title": "[IAM.8] Managed policies should follow least privilege principles",
"Description": f"The customer managed policy {policy_arn} is following least privilege principles.",
"Remediation": {
"Recommendation": {
"Text": "For information on IAM least privilege refer to the Controlling access section of the AWS IAM User Guide",
"Url": "https://docs.aws.amazon.com/IAM/latest/UserGuide/access_controlling.html",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsIamPolicy",
"Id": policy_arn,
"Partition": awsPartition,
"Region": awsRegion
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [
"NIST CSF PR.AC-3",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-17",
"NIST SP 800-53 AC-19",
"NIST SP 800-53 AC-20",
"NIST SP 800-53 SC-15",
"AICPA TSC CC6.6",
"ISO 27001:2013 A.6.2.1",
"ISO 27001:2013 A.6.2.2",
"ISO 27001:2013 A.11.2.6",
"ISO 27001:2013 A.13.1.1",
"ISO 27001:2013 A.13.2.1"
],
},
"Workflow": {"Status": "RESOLVED"},
"RecordState": "ARCHIVED",
}
yield finding
elif least_priv_rating == 'failed_low':
finding = {
"SchemaVersion": "2018-10-08",
"Id": policy_arn + "/mngd_policy_least_priv",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": policy_arn + "mngd_policy_least_priv",
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "LOW"},
"Confidence": 99,
"Title": "[IAM.8] Managed policies should follow least privilege principles",
"Description": f"The customer managed policy {policy_arn} is not following least privilege principles and has been rated: {least_priv_rating}.",
"Remediation": {
"Recommendation": {
"Text": "For information on IAM least privilege refer to the Controlling access section of the AWS IAM User Guide",
"Url": "https://docs.aws.amazon.com/IAM/latest/UserGuide/access_controlling.html",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsIamPolicy",
"Id": policy_arn,
"Partition": awsPartition,
"Region": awsRegion
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF PR.AC-3",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-17",
"NIST SP 800-53 AC-19",
"NIST SP 800-53 AC-20",
"NIST SP 800-53 SC-15",
"AICPA TSC CC6.6",
"ISO 27001:2013 A.6.2.1",
"ISO 27001:2013 A.6.2.2",
"ISO 27001:2013 A.11.2.6",
"ISO 27001:2013 A.13.1.1",
"ISO 27001:2013 A.13.2.1"
],
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE",
}
yield finding
elif least_priv_rating == 'failed_high':
finding = {
"SchemaVersion": "2018-10-08",
"Id": policy_arn + "/mngd_policy_least_priv",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": policy_arn + "mngd_policy_least_priv",
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "HIGH"},
"Confidence": 99,
"Title": "[IAM.8] Managed policies should follow least privilege principles",
"Description": f"The customer managed policy {policy_arn} is not following least privilege principles and has been rated: {least_priv_rating}.",
"Remediation": {
"Recommendation": {
"Text": "For information on IAM least privilege refer to the Controlling access section of the AWS IAM User Guide",
"Url": "https://docs.aws.amazon.com/IAM/latest/UserGuide/access_controlling.html",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsIamPolicy",
"Id": policy_arn,
"Partition": awsPartition,
"Region": awsRegion
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF PR.AC-3",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-17",
"NIST SP 800-53 AC-19",
"NIST SP 800-53 AC-20",
"NIST SP 800-53 SC-15",
"AICPA TSC CC6.6",
"ISO 27001:2013 A.6.2.1",
"ISO 27001:2013 A.6.2.2",
"ISO 27001:2013 A.11.2.6",
"ISO 27001:2013 A.13.1.1",
"ISO 27001:2013 A.13.2.1"
],
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE",
}
yield finding
except Exception as e:
print(e)
@registry.register_check("iam")
def iam_user_policy_least_priv_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str) -> dict:
"""[IAM.9] User inline policies should follow least privilege principles"""
try:
Users = iam.list_users()
for user in Users['Users']:
user_arn = user['Arn']
UserName = user['UserName']
policy_names = iam.list_user_policies(
UserName=UserName
)['PolicyNames']
for policy_name in policy_names:
policy_doc = iam.get_user_policy(
UserName=UserName,
PolicyName=policy_name
)['PolicyDocument']
# Handle policy documents returned as JSON strings
if isinstance(policy_doc, str):
policy_doc = json.loads(policy_doc)
least_priv_rating = 'passing'
for statement in policy_doc['Statement']:
if statement["Effect"] == 'Allow':
if statement.get('Condition') is None:
# The Action element may be a single string or a list of strings
if isinstance(statement['Action'], list):
if any(":*" in x or x == '*' for x in statement['Action']):
if statement['Resource'] == '*':
least_priv_rating = 'failed_high'
# Stop evaluating statements so this high-severity failure is not overwritten by a lower rating later
break
elif isinstance(statement['Resource'], list):
least_priv_rating = 'failed_low'
# Single action in a statement
elif isinstance(statement['Action'], str):
if ":*" in statement['Action'] or statement['Action'] == '*':
if statement['Resource'] == '*':
least_priv_rating = 'failed_high'
# Stop evaluating statements so this high-severity failure is not overwritten by a lower rating later
break
elif isinstance(statement['Resource'], list):
least_priv_rating = 'failed_low'
iso8601Time = datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat()
if least_priv_rating == 'passing':
finding = {
"SchemaVersion": "2018-10-08",
"Id": user_arn + "/user_policy_least_priv",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": user_arn + "user_policy_least_priv",
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "INFORMATIONAL"},
"Confidence": 99,
"Title": "[IAM.9] User inline policies should follow least privilege principles",
"Description": f"The user {user_arn} inline policy {policy_name} is following least privilege principles.",
"Remediation": {
"Recommendation": {
"Text": "For information on IAM least privilege refer to the inline policy section of the AWS IAM User Guide",
"Url": "https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#inline-policies",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsIamUser",
"Id": user_arn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {
"Other": {
"PrincipalName": UserName
}
},
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [
"NIST CSF PR.AC-3",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-17",
"NIST SP 800-53 AC-19",
"NIST SP 800-53 AC-20",
"NIST SP 800-53 SC-15",
"AICPA TSC CC6.6",
"ISO 27001:2013 A.6.2.1",
"ISO 27001:2013 A.6.2.2",
"ISO 27001:2013 A.11.2.6",
"ISO 27001:2013 A.13.1.1",
"ISO 27001:2013 A.13.2.1"
],
},
"Workflow": {"Status": "RESOLVED"},
"RecordState": "ARCHIVED",
}
yield finding
elif least_priv_rating == 'failed_low':
finding = {
"SchemaVersion": "2018-10-08",
"Id": user_arn + "/user_policy_least_priv",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": user_arn + "user_policy_least_priv",
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "LOW"},
"Confidence": 99,
"Title": "[IAM.9] User inline policies should follow least privilege principles",
"Description": f"The user {user_arn} inline policy {policy_name} is not following least privilege principles.",
"Remediation": {
"Recommendation": {
"Text": "For information on IAM least privilege refer to the inline policy section of the AWS IAM User Guide",
"Url": "https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#inline-policies",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsIamUser",
"Id": user_arn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {
"Other": {
"PrincipalName": UserName
}
},
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF PR.AC-3",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-17",
"NIST SP 800-53 AC-19",
"NIST SP 800-53 AC-20",
"NIST SP 800-53 SC-15",
"AICPA TSC CC6.6",
"ISO 27001:2013 A.6.2.1",
"ISO 27001:2013 A.6.2.2",
"ISO 27001:2013 A.11.2.6",
"ISO 27001:2013 A.13.1.1",
"ISO 27001:2013 A.13.2.1"
],
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE",
}
yield finding
elif least_priv_rating == 'failed_high':
finding = {
"SchemaVersion": "2018-10-08",
"Id": user_arn + "/user_policy_least_priv",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": user_arn + "user_policy_least_priv",
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "HIGH"},
"Confidence": 99,
"Title": "[IAM.9] User inline policies should follow least privilege principles",
"Description": f"The user {user_arn} inline policy {policy_name} is not following least privilege principles.",
"Remediation": {
"Recommendation": {
"Text": "For information on IAM least privilege refer to the inline policy section of the AWS IAM User Guide",
"Url": "https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#inline-policies",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsIamUser",
"Id": user_arn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {
"Other": {
"PrincipalName": UserName
}
}
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF PR.AC-3",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-17",
"NIST SP 800-53 AC-19",
"NIST SP 800-53 AC-20",
"NIST SP 800-53 SC-15",
"AICPA TSC CC6.6",
"ISO 27001:2013 A.6.2.1",
"ISO 27001:2013 A.6.2.2",
"ISO 27001:2013 A.11.2.6",
"ISO 27001:2013 A.13.1.1",
"ISO 27001:2013 A.13.2.1"
],
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE",
}
yield finding
except Exception as e:
print(e)
@registry.register_check("iam")
def iam_group_policy_least_priv_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str) -> dict:
"""[IAM.10] Group inline policies should follow least privilege principles"""
try:
Groups = iam.list_groups()
for group in Groups['Groups']:
group_arn = group['Arn']
GroupName = group['GroupName']
policy_names = iam.list_group_policies(
GroupName=GroupName
)['PolicyNames']
for policy_name in policy_names:
policy_doc = iam.get_group_policy(
GroupName=GroupName,
PolicyName=policy_name
)['PolicyDocument']
# Handle policy documents returned as JSON strings
if isinstance(policy_doc, str):
policy_doc = json.loads(policy_doc)
least_priv_rating = 'passing'
for statement in policy_doc['Statement']:
if statement["Effect"] == 'Allow':
if statement.get('Condition') is None:
# The Action element may be a single string or a list of strings
if isinstance(statement['Action'], list):
if any(":*" in x or x == '*' for x in statement['Action']):
if statement['Resource'] == '*':
least_priv_rating = 'failed_high'
# Stop evaluating statements so this high-severity failure is not overwritten by a lower rating later
break
elif isinstance(statement['Resource'], list):
least_priv_rating = 'failed_low'
# Single action in a statement
elif isinstance(statement['Action'], str):
if ":*" in statement['Action'] or statement['Action'] == '*':
if statement['Resource'] == '*':
least_priv_rating = 'failed_high'
# Stop evaluating statements so this high-severity failure is not overwritten by a lower rating later
break
elif isinstance(statement['Resource'], list):
least_priv_rating = 'failed_low'
iso8601Time = datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat()
if least_priv_rating == 'passing':
finding = {
"SchemaVersion": "2018-10-08",
"Id": group_arn + "/group_policy_least_priv",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": group_arn + "group_policy_least_priv",
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "INFORMATIONAL"},
"Confidence": 99,
"Title": "[IAM.10] Group inline policies should follow least privilege principles",
"Description": f"The group {group_arn} inline policy {policy_name} is following least privilege principles.",
"Remediation": {
"Recommendation": {
"Text": "For information on IAM least privilege refer to the inline policy section of the AWS IAM User Guide",
"Url": "https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#inline-policies",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsIamGroup",
"Id": group_arn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {"Other": {"PolicyName": policy_name}},
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [
"NIST CSF PR.AC-3",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-17",
"NIST SP 800-53 AC-19",
"NIST SP 800-53 AC-20",
"NIST SP 800-53 SC-15",
"AICPA TSC CC6.6",
"ISO 27001:2013 A.6.2.1",
"ISO 27001:2013 A.6.2.2",
"ISO 27001:2013 A.11.2.6",
"ISO 27001:2013 A.13.1.1",
"ISO 27001:2013 A.13.2.1"
],
},
"Workflow": {"Status": "RESOLVED"},
"RecordState": "ARCHIVED",
}
yield finding
elif least_priv_rating == 'failed_low':
finding = {
"SchemaVersion": "2018-10-08",
"Id": group_arn + "/group_policy_least_priv",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": group_arn + "group_policy_least_priv",
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "LOW"},
"Confidence": 99,
"Title": "[IAM.10] Group inline policies should follow least privilege principles",
"Description": f"The group {group_arn} inline policy {policy_name} is not following least privilege principles.",
"Remediation": {
"Recommendation": {
"Text": "For information on IAM least privilege refer to the inline policy section of the AWS IAM User Guide",
"Url": "https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#inline-policies",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsIamGroup",
"Id": group_arn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {"Other": {"PolicyName": policy_name}},
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF PR.AC-3",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-17",
"NIST SP 800-53 AC-19",
"NIST SP 800-53 AC-20",
"NIST SP 800-53 SC-15",
"AICPA TSC CC6.6",
"ISO 27001:2013 A.6.2.1",
"ISO 27001:2013 A.6.2.2",
"ISO 27001:2013 A.11.2.6",
"ISO 27001:2013 A.13.1.1",
"ISO 27001:2013 A.13.2.1"
],
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE",
}
yield finding
elif least_priv_rating == 'failed_high':
finding = {
"SchemaVersion": "2018-10-08",
"Id": group_arn + "/group_policy_least_priv",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": group_arn + "group_policy_least_priv",
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "HIGH"},
"Confidence": 99,
"Title": "[IAM.10] Group inline policies should follow least privilege principles",
"Description": f"The group {group_arn} inline policy {policy_name} is not following least privilege principles.",
"Remediation": {
"Recommendation": {
"Text": "For information on IAM least privilege refer to the inline policy section of the AWS IAM User Guide",
"Url": "https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#inline-policies",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsIamGroup",
"Id": group_arn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {"Other": {"PolicyName": policy_name}},
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF PR.AC-3",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-17",
"NIST SP 800-53 AC-19",
"NIST SP 800-53 AC-20",
"NIST SP 800-53 SC-15",
"AICPA TSC CC6.6",
"ISO 27001:2013 A.6.2.1",
"ISO 27001:2013 A.6.2.2",
"ISO 27001:2013 A.11.2.6",
"ISO 27001:2013 A.13.1.1",
"ISO 27001:2013 A.13.2.1"
],
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE",
}
yield finding
    except Exception as e:
        print(e)
@registry.register_check("iam")
def iam_role_policy_least_priv_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str) -> dict:
"""[IAM.11] Role inline policies should follow least privilege principles"""
try:
Roles = iam.list_roles()
for role in Roles['Roles']:
role_arn = role['Arn']
RoleName = role['RoleName']
policy_names = iam.list_role_policies(
RoleName=RoleName
)['PolicyNames']
for policy_name in policy_names:
policy_doc = iam.get_role_policy(
RoleName=RoleName,
PolicyName=policy_name
)['PolicyDocument']
                # Handle policy documents returned as JSON strings
                if isinstance(policy_doc, str):
                    policy_doc = json.loads(policy_doc)
                least_priv_rating = 'passing'
                for statement in policy_doc['Statement']:
                    if statement["Effect"] == 'Allow':
                        if statement.get('Condition') is None:
                            # The Action element can be either a string or a list
                            if isinstance(statement['Action'], list):
                                if any(":*" in x or x == '*' for x in statement['Action']):
                                    if isinstance(statement['Resource'], str) and statement['Resource'] == '*':
                                        least_priv_rating = 'failed_high'
                                        # Stop evaluating so a later, lower-severity match cannot overwrite this failure
                                        break
                                    elif isinstance(statement['Resource'], list):
                                        least_priv_rating = 'failed_low'
                            # Single action in a statement
                            elif isinstance(statement['Action'], str):
                                if ":*" in statement['Action'] or statement['Action'] == '*':
                                    if isinstance(statement['Resource'], str) and statement['Resource'] == '*':
                                        least_priv_rating = 'failed_high'
                                        # Stop evaluating so a later, lower-severity match cannot overwrite this failure
                                        break
                                    elif isinstance(statement['Resource'], list):
                                        least_priv_rating = 'failed_low'
                iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
if least_priv_rating == 'passing':
finding = {
"SchemaVersion": "2018-10-08",
"Id": role_arn + "/role_policy_least_priv",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": role_arn + "role_policy_least_priv",
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "INFORMATIONAL"},
"Confidence": 99,
"Title": "[IAM.11] Role inline policies should follow least privilege principles",
"Description": f"The role {role_arn} inline policy {policy_name} is following least privilege principles.",
"Remediation": {
"Recommendation": {
"Text": "For information on IAM least privilege refer to the inline policy section of the AWS IAM User Guide",
"Url": "https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#inline-policies",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsIamRole",
"Id": role_arn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {"Other": {
"PolicyName": policy_name}},
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [
"NIST CSF PR.AC-3",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-17",
"NIST SP 800-53 AC-19",
"NIST SP 800-53 AC-20",
"NIST SP 800-53 SC-15",
"AICPA TSC CC6.6",
"ISO 27001:2013 A.6.2.1",
"ISO 27001:2013 A.6.2.2",
"ISO 27001:2013 A.11.2.6",
"ISO 27001:2013 A.13.1.1",
"ISO 27001:2013 A.13.2.1"
],
},
"Workflow": {"Status": "RESOLVED"},
"RecordState": "ARCHIVED",
}
yield finding
elif least_priv_rating == 'failed_low':
finding = {
"SchemaVersion": "2018-10-08",
"Id": role_arn + "/role_policy_least_priv",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": role_arn + "role_policy_least_priv",
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "LOW"},
"Confidence": 99,
"Title": "[IAM.11] Role inline policies should follow least privilege principles",
"Description": f"The role {role_arn} inline policy {policy_name} is not following least privilege principles.",
"Remediation": {
"Recommendation": {
"Text": "For information on IAM least privilege refer to the inline policy section of the AWS IAM User Guide",
"Url": "https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#inline-policies",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsIamRole",
"Id": role_arn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {"Other": {
"PolicyName": policy_name}},
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF PR.AC-3",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-17",
"NIST SP 800-53 AC-19",
"NIST SP 800-53 AC-20",
"NIST SP 800-53 SC-15",
"AICPA TSC CC6.6",
"ISO 27001:2013 A.6.2.1",
"ISO 27001:2013 A.6.2.2",
"ISO 27001:2013 A.11.2.6",
"ISO 27001:2013 A.13.1.1",
"ISO 27001:2013 A.13.2.1"
],
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE",
}
yield finding
elif least_priv_rating == 'failed_high':
finding = {
"SchemaVersion": "2018-10-08",
"Id": role_arn + "/role_policy_least_priv",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": role_arn + "role_policy_least_priv",
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "HIGH"},
"Confidence": 99,
"Title": "[IAM.11] Role inline policies should follow least privilege principles",
"Description": f"The role {role_arn} inline policy {policy_name} is not following least privilege principles.",
"Remediation": {
"Recommendation": {
"Text": "For information on IAM least privilege refer to the inline policy section of the AWS IAM User Guide",
"Url": "https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#inline-policies",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsIamRole",
"Id": role_arn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {"Other": {
"PolicyName": policy_name}},
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF PR.AC-3",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-17",
"NIST SP 800-53 AC-19",
"NIST SP 800-53 AC-20",
"NIST SP 800-53 SC-15",
"AICPA TSC CC6.6",
"ISO 27001:2013 A.6.2.1",
"ISO 27001:2013 A.6.2.2",
"ISO 27001:2013 A.11.2.6",
"ISO 27001:2013 A.13.1.1",
"ISO 27001:2013 A.13.2.1"
],
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE",
}
yield finding
except:
pass | 53.510162 | 441 | 0.411507 | 8,569 | 102,686 | 4.88225 | 0.052515 | 0.033273 | 0.049909 | 0.061 | 0.913519 | 0.911583 | 0.907926 | 0.90544 | 0.902118 | 0.900875 | 0 | 0.082128 | 0.486921 | 102,686 | 1,919 | 442 | 53.510162 | 0.711935 | 0.027793 | 0 | 0.842857 | 0 | 0.017582 | 0.353919 | 0.036198 | 0 | 0 | 0 | 0.000521 | 0 | 1 | 0.006593 | false | 0.02033 | 0.002198 | 0 | 0.00989 | 0.003846 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
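The wildcard-rating walk used by the inline-policy checks above (an `Action` that may be a string or a list, a `Resource` of `'*'` versus a list of ARNs) is easiest to see in isolation. Below is a minimal standalone sketch of that logic, assuming the same rating semantics as the auditors; the helper name `rate_statement` is hypothetical and not part of ElectricEye.

```python
def rate_statement(statement: dict) -> str:
    """Rate one IAM policy statement: 'passing', 'failed_low', or 'failed_high'.

    Hypothetical helper mirroring the inline-policy least-privilege checks:
    only unconditional Allow statements with a wildcard action can fail.
    """
    if statement.get("Effect") != "Allow" or statement.get("Condition") is not None:
        return "passing"
    actions = statement.get("Action", [])
    if isinstance(actions, str):
        actions = [actions]
    # A wildcard action is either "*" or a service-wide "service:*"
    if not any(a == "*" or ":*" in a for a in actions):
        return "passing"
    resource = statement.get("Resource")
    if isinstance(resource, str) and resource == "*":
        return "failed_high"  # wildcard action on every resource
    if isinstance(resource, list):
        return "failed_low"   # wildcard action scoped to named resources
    return "passing"
```

Note the asymmetry this preserves from the original: a string `Resource` that is not `'*'` passes, while any list of resources downgrades the finding to low severity rather than clearing it.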
# -*- coding: utf-8 -*-
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import mock
import grpc
from grpc.experimental import aio
import math
import pytest
from proto.marshal.rules.dates import DurationRule, TimestampRule
from google.api_core import client_options
from google.api_core import exceptions as core_exceptions
from google.api_core import future
from google.api_core import gapic_v1
from google.api_core import grpc_helpers
from google.api_core import grpc_helpers_async
from google.api_core import operation_async # type: ignore
from google.api_core import operations_v1
from google.api_core import path_template
from google.auth import credentials as ga_credentials
from google.auth.exceptions import MutualTLSChannelError
from google.cloud.aiplatform_v1.services.job_service import JobServiceAsyncClient
from google.cloud.aiplatform_v1.services.job_service import JobServiceClient
from google.cloud.aiplatform_v1.services.job_service import pagers
from google.cloud.aiplatform_v1.services.job_service import transports
from google.cloud.aiplatform_v1.types import accelerator_type
from google.cloud.aiplatform_v1.types import batch_prediction_job
from google.cloud.aiplatform_v1.types import (
batch_prediction_job as gca_batch_prediction_job,
)
from google.cloud.aiplatform_v1.types import completion_stats
from google.cloud.aiplatform_v1.types import custom_job
from google.cloud.aiplatform_v1.types import custom_job as gca_custom_job
from google.cloud.aiplatform_v1.types import data_labeling_job
from google.cloud.aiplatform_v1.types import data_labeling_job as gca_data_labeling_job
from google.cloud.aiplatform_v1.types import encryption_spec
from google.cloud.aiplatform_v1.types import env_var
from google.cloud.aiplatform_v1.types import explanation
from google.cloud.aiplatform_v1.types import explanation_metadata
from google.cloud.aiplatform_v1.types import hyperparameter_tuning_job
from google.cloud.aiplatform_v1.types import (
hyperparameter_tuning_job as gca_hyperparameter_tuning_job,
)
from google.cloud.aiplatform_v1.types import io
from google.cloud.aiplatform_v1.types import job_service
from google.cloud.aiplatform_v1.types import job_state
from google.cloud.aiplatform_v1.types import machine_resources
from google.cloud.aiplatform_v1.types import manual_batch_tuning_parameters
from google.cloud.aiplatform_v1.types import model
from google.cloud.aiplatform_v1.types import model_deployment_monitoring_job
from google.cloud.aiplatform_v1.types import (
model_deployment_monitoring_job as gca_model_deployment_monitoring_job,
)
from google.cloud.aiplatform_v1.types import model_monitoring
from google.cloud.aiplatform_v1.types import operation as gca_operation
from google.cloud.aiplatform_v1.types import study
from google.cloud.aiplatform_v1.types import unmanaged_container_model
from google.longrunning import operations_pb2
from google.oauth2 import service_account
from google.protobuf import any_pb2 # type: ignore
from google.protobuf import duration_pb2 # type: ignore
from google.protobuf import field_mask_pb2 # type: ignore
from google.protobuf import struct_pb2 # type: ignore
from google.protobuf import timestamp_pb2 # type: ignore
from google.rpc import status_pb2 # type: ignore
from google.type import money_pb2 # type: ignore
import google.auth
def client_cert_source_callback():
return b"cert bytes", b"key bytes"
# If default endpoint is localhost, then default mtls endpoint will be the same.
# This method modifies the default endpoint so the client can produce a different
# mtls endpoint for endpoint testing purposes.
def modify_default_endpoint(client):
return (
"foo.googleapis.com"
if ("localhost" in client.DEFAULT_ENDPOINT)
else client.DEFAULT_ENDPOINT
)
def test__get_default_mtls_endpoint():
api_endpoint = "example.googleapis.com"
api_mtls_endpoint = "example.mtls.googleapis.com"
sandbox_endpoint = "example.sandbox.googleapis.com"
sandbox_mtls_endpoint = "example.mtls.sandbox.googleapis.com"
non_googleapi = "api.example.com"
assert JobServiceClient._get_default_mtls_endpoint(None) is None
assert (
JobServiceClient._get_default_mtls_endpoint(api_endpoint) == api_mtls_endpoint
)
assert (
JobServiceClient._get_default_mtls_endpoint(api_mtls_endpoint)
== api_mtls_endpoint
)
assert (
JobServiceClient._get_default_mtls_endpoint(sandbox_endpoint)
== sandbox_mtls_endpoint
)
assert (
JobServiceClient._get_default_mtls_endpoint(sandbox_mtls_endpoint)
== sandbox_mtls_endpoint
)
assert JobServiceClient._get_default_mtls_endpoint(non_googleapi) == non_googleapi
@pytest.mark.parametrize("client_class", [JobServiceClient, JobServiceAsyncClient,])
def test_job_service_client_from_service_account_info(client_class):
creds = ga_credentials.AnonymousCredentials()
with mock.patch.object(
service_account.Credentials, "from_service_account_info"
) as factory:
factory.return_value = creds
info = {"valid": True}
client = client_class.from_service_account_info(info)
assert client.transport._credentials == creds
assert isinstance(client, client_class)
assert client.transport._host == "aiplatform.googleapis.com:443"
@pytest.mark.parametrize(
"transport_class,transport_name",
[
(transports.JobServiceGrpcTransport, "grpc"),
(transports.JobServiceGrpcAsyncIOTransport, "grpc_asyncio"),
],
)
def test_job_service_client_service_account_always_use_jwt(
transport_class, transport_name
):
with mock.patch.object(
service_account.Credentials, "with_always_use_jwt_access", create=True
) as use_jwt:
creds = service_account.Credentials(None, None, None)
transport = transport_class(credentials=creds, always_use_jwt_access=True)
use_jwt.assert_called_once_with(True)
with mock.patch.object(
service_account.Credentials, "with_always_use_jwt_access", create=True
) as use_jwt:
creds = service_account.Credentials(None, None, None)
transport = transport_class(credentials=creds, always_use_jwt_access=False)
use_jwt.assert_not_called()
@pytest.mark.parametrize("client_class", [JobServiceClient, JobServiceAsyncClient,])
def test_job_service_client_from_service_account_file(client_class):
creds = ga_credentials.AnonymousCredentials()
with mock.patch.object(
service_account.Credentials, "from_service_account_file"
) as factory:
factory.return_value = creds
client = client_class.from_service_account_file("dummy/file/path.json")
assert client.transport._credentials == creds
assert isinstance(client, client_class)
client = client_class.from_service_account_json("dummy/file/path.json")
assert client.transport._credentials == creds
assert isinstance(client, client_class)
assert client.transport._host == "aiplatform.googleapis.com:443"
def test_job_service_client_get_transport_class():
transport = JobServiceClient.get_transport_class()
available_transports = [
transports.JobServiceGrpcTransport,
]
assert transport in available_transports
transport = JobServiceClient.get_transport_class("grpc")
assert transport == transports.JobServiceGrpcTransport
@pytest.mark.parametrize(
"client_class,transport_class,transport_name",
[
(JobServiceClient, transports.JobServiceGrpcTransport, "grpc"),
(
JobServiceAsyncClient,
transports.JobServiceGrpcAsyncIOTransport,
"grpc_asyncio",
),
],
)
@mock.patch.object(
JobServiceClient, "DEFAULT_ENDPOINT", modify_default_endpoint(JobServiceClient)
)
@mock.patch.object(
JobServiceAsyncClient,
"DEFAULT_ENDPOINT",
modify_default_endpoint(JobServiceAsyncClient),
)
def test_job_service_client_client_options(
client_class, transport_class, transport_name
):
# Check that if channel is provided we won't create a new one.
with mock.patch.object(JobServiceClient, "get_transport_class") as gtc:
transport = transport_class(credentials=ga_credentials.AnonymousCredentials())
client = client_class(transport=transport)
gtc.assert_not_called()
# Check that if channel is provided via str we will create a new one.
with mock.patch.object(JobServiceClient, "get_transport_class") as gtc:
client = client_class(transport=transport_name)
gtc.assert_called()
# Check the case api_endpoint is provided.
options = client_options.ClientOptions(api_endpoint="squid.clam.whelk")
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(transport=transport_name, client_options=options)
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host="squid.clam.whelk",
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
# Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is
# "never".
with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "never"}):
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(transport=transport_name)
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_ENDPOINT,
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
# Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is
# "always".
with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "always"}):
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(transport=transport_name)
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_MTLS_ENDPOINT,
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
# Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT has
# unsupported value.
with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "Unsupported"}):
with pytest.raises(MutualTLSChannelError):
client = client_class()
# Check the case GOOGLE_API_USE_CLIENT_CERTIFICATE has unsupported value.
with mock.patch.dict(
os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "Unsupported"}
):
with pytest.raises(ValueError):
client = client_class()
# Check the case quota_project_id is provided
options = client_options.ClientOptions(quota_project_id="octopus")
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(transport=transport_name, client_options=options)
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_ENDPOINT,
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id="octopus",
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
@pytest.mark.parametrize(
"client_class,transport_class,transport_name,use_client_cert_env",
[
(JobServiceClient, transports.JobServiceGrpcTransport, "grpc", "true"),
(
JobServiceAsyncClient,
transports.JobServiceGrpcAsyncIOTransport,
"grpc_asyncio",
"true",
),
(JobServiceClient, transports.JobServiceGrpcTransport, "grpc", "false"),
(
JobServiceAsyncClient,
transports.JobServiceGrpcAsyncIOTransport,
"grpc_asyncio",
"false",
),
],
)
@mock.patch.object(
JobServiceClient, "DEFAULT_ENDPOINT", modify_default_endpoint(JobServiceClient)
)
@mock.patch.object(
JobServiceAsyncClient,
"DEFAULT_ENDPOINT",
modify_default_endpoint(JobServiceAsyncClient),
)
@mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "auto"})
def test_job_service_client_mtls_env_auto(
client_class, transport_class, transport_name, use_client_cert_env
):
# This tests the endpoint autoswitch behavior. Endpoint is autoswitched to the default
# mtls endpoint, if GOOGLE_API_USE_CLIENT_CERTIFICATE is "true" and client cert exists.
# Check the case client_cert_source is provided. Whether client cert is used depends on
# GOOGLE_API_USE_CLIENT_CERTIFICATE value.
with mock.patch.dict(
os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env}
):
options = client_options.ClientOptions(
client_cert_source=client_cert_source_callback
)
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(transport=transport_name, client_options=options)
if use_client_cert_env == "false":
expected_client_cert_source = None
expected_host = client.DEFAULT_ENDPOINT
else:
expected_client_cert_source = client_cert_source_callback
expected_host = client.DEFAULT_MTLS_ENDPOINT
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=expected_host,
scopes=None,
client_cert_source_for_mtls=expected_client_cert_source,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
# Check the case ADC client cert is provided. Whether client cert is used depends on
# GOOGLE_API_USE_CLIENT_CERTIFICATE value.
with mock.patch.dict(
os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env}
):
with mock.patch.object(transport_class, "__init__") as patched:
with mock.patch(
"google.auth.transport.mtls.has_default_client_cert_source",
return_value=True,
):
with mock.patch(
"google.auth.transport.mtls.default_client_cert_source",
return_value=client_cert_source_callback,
):
if use_client_cert_env == "false":
expected_host = client.DEFAULT_ENDPOINT
expected_client_cert_source = None
else:
expected_host = client.DEFAULT_MTLS_ENDPOINT
expected_client_cert_source = client_cert_source_callback
patched.return_value = None
client = client_class(transport=transport_name)
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=expected_host,
scopes=None,
client_cert_source_for_mtls=expected_client_cert_source,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
# Check the case client_cert_source and ADC client cert are not provided.
with mock.patch.dict(
os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env}
):
with mock.patch.object(transport_class, "__init__") as patched:
with mock.patch(
"google.auth.transport.mtls.has_default_client_cert_source",
return_value=False,
):
patched.return_value = None
client = client_class(transport=transport_name)
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_ENDPOINT,
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
@pytest.mark.parametrize(
"client_class,transport_class,transport_name",
[
(JobServiceClient, transports.JobServiceGrpcTransport, "grpc"),
(
JobServiceAsyncClient,
transports.JobServiceGrpcAsyncIOTransport,
"grpc_asyncio",
),
],
)
def test_job_service_client_client_options_scopes(
client_class, transport_class, transport_name
):
# Check the case scopes are provided.
options = client_options.ClientOptions(scopes=["1", "2"],)
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(transport=transport_name, client_options=options)
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_ENDPOINT,
scopes=["1", "2"],
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
@pytest.mark.parametrize(
"client_class,transport_class,transport_name",
[
(JobServiceClient, transports.JobServiceGrpcTransport, "grpc"),
(
JobServiceAsyncClient,
transports.JobServiceGrpcAsyncIOTransport,
"grpc_asyncio",
),
],
)
def test_job_service_client_client_options_credentials_file(
client_class, transport_class, transport_name
):
# Check the case credentials file is provided.
options = client_options.ClientOptions(credentials_file="credentials.json")
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(transport=transport_name, client_options=options)
patched.assert_called_once_with(
credentials=None,
credentials_file="credentials.json",
host=client.DEFAULT_ENDPOINT,
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
def test_job_service_client_client_options_from_dict():
with mock.patch(
"google.cloud.aiplatform_v1.services.job_service.transports.JobServiceGrpcTransport.__init__"
) as grpc_transport:
grpc_transport.return_value = None
client = JobServiceClient(client_options={"api_endpoint": "squid.clam.whelk"})
grpc_transport.assert_called_once_with(
credentials=None,
credentials_file=None,
host="squid.clam.whelk",
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
def test_create_custom_job(
transport: str = "grpc", request_type=job_service.CreateCustomJobRequest
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_custom_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = gca_custom_job.CustomJob(
name="name_value",
display_name="display_name_value",
state=job_state.JobState.JOB_STATE_QUEUED,
)
response = client.create_custom_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.CreateCustomJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, gca_custom_job.CustomJob)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.state == job_state.JobState.JOB_STATE_QUEUED
def test_create_custom_job_from_dict():
test_create_custom_job(request_type=dict)
def test_create_custom_job_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_custom_job), "__call__"
) as call:
client.create_custom_job()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.CreateCustomJobRequest()
@pytest.mark.asyncio
async def test_create_custom_job_async(
transport: str = "grpc_asyncio", request_type=job_service.CreateCustomJobRequest
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_custom_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
gca_custom_job.CustomJob(
name="name_value",
display_name="display_name_value",
state=job_state.JobState.JOB_STATE_QUEUED,
)
)
response = await client.create_custom_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.CreateCustomJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, gca_custom_job.CustomJob)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.state == job_state.JobState.JOB_STATE_QUEUED
@pytest.mark.asyncio
async def test_create_custom_job_async_from_dict():
await test_create_custom_job_async(request_type=dict)
def test_create_custom_job_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.CreateCustomJobRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_custom_job), "__call__"
) as call:
call.return_value = gca_custom_job.CustomJob()
client.create_custom_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_create_custom_job_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.CreateCustomJobRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_custom_job), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
gca_custom_job.CustomJob()
)
await client.create_custom_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_create_custom_job_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_custom_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = gca_custom_job.CustomJob()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.create_custom_job(
parent="parent_value",
custom_job=gca_custom_job.CustomJob(name="name_value"),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
arg = args[0].custom_job
mock_val = gca_custom_job.CustomJob(name="name_value")
assert arg == mock_val
def test_create_custom_job_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.create_custom_job(
job_service.CreateCustomJobRequest(),
parent="parent_value",
custom_job=gca_custom_job.CustomJob(name="name_value"),
)
@pytest.mark.asyncio
async def test_create_custom_job_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_custom_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            gca_custom_job.CustomJob()
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.create_custom_job(
parent="parent_value",
custom_job=gca_custom_job.CustomJob(name="name_value"),
)
# Establish that the underlying call was made with the expected
# request object values.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
arg = args[0].custom_job
mock_val = gca_custom_job.CustomJob(name="name_value")
assert arg == mock_val
@pytest.mark.asyncio
async def test_create_custom_job_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.create_custom_job(
job_service.CreateCustomJobRequest(),
parent="parent_value",
custom_job=gca_custom_job.CustomJob(name="name_value"),
)
def test_get_custom_job(
transport: str = "grpc", request_type=job_service.GetCustomJobRequest
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_custom_job), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = custom_job.CustomJob(
name="name_value",
display_name="display_name_value",
state=job_state.JobState.JOB_STATE_QUEUED,
)
response = client.get_custom_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.GetCustomJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, custom_job.CustomJob)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.state == job_state.JobState.JOB_STATE_QUEUED
def test_get_custom_job_from_dict():
test_get_custom_job(request_type=dict)
def test_get_custom_job_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_custom_job), "__call__") as call:
client.get_custom_job()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.GetCustomJobRequest()
@pytest.mark.asyncio
async def test_get_custom_job_async(
transport: str = "grpc_asyncio", request_type=job_service.GetCustomJobRequest
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_custom_job), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
custom_job.CustomJob(
name="name_value",
display_name="display_name_value",
state=job_state.JobState.JOB_STATE_QUEUED,
)
)
response = await client.get_custom_job(request)
# Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.GetCustomJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, custom_job.CustomJob)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.state == job_state.JobState.JOB_STATE_QUEUED
@pytest.mark.asyncio
async def test_get_custom_job_async_from_dict():
await test_get_custom_job_async(request_type=dict)
def test_get_custom_job_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.GetCustomJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_custom_job), "__call__") as call:
call.return_value = custom_job.CustomJob()
client.get_custom_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_get_custom_job_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.GetCustomJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_custom_job), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
custom_job.CustomJob()
)
await client.get_custom_job(request)
# Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_get_custom_job_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_custom_job), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = custom_job.CustomJob()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.get_custom_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
def test_get_custom_job_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.get_custom_job(
job_service.GetCustomJobRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_get_custom_job_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_custom_job), "__call__") as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            custom_job.CustomJob()
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.get_custom_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_get_custom_job_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.get_custom_job(
job_service.GetCustomJobRequest(), name="name_value",
)
def test_list_custom_jobs(
transport: str = "grpc", request_type=job_service.ListCustomJobsRequest
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_custom_jobs), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = job_service.ListCustomJobsResponse(
next_page_token="next_page_token_value",
)
response = client.list_custom_jobs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.ListCustomJobsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListCustomJobsPager)
assert response.next_page_token == "next_page_token_value"
def test_list_custom_jobs_from_dict():
test_list_custom_jobs(request_type=dict)
def test_list_custom_jobs_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_custom_jobs), "__call__") as call:
client.list_custom_jobs()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.ListCustomJobsRequest()
@pytest.mark.asyncio
async def test_list_custom_jobs_async(
transport: str = "grpc_asyncio", request_type=job_service.ListCustomJobsRequest
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_custom_jobs), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
job_service.ListCustomJobsResponse(next_page_token="next_page_token_value",)
)
response = await client.list_custom_jobs(request)
# Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.ListCustomJobsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListCustomJobsAsyncPager)
assert response.next_page_token == "next_page_token_value"
@pytest.mark.asyncio
async def test_list_custom_jobs_async_from_dict():
await test_list_custom_jobs_async(request_type=dict)
def test_list_custom_jobs_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.ListCustomJobsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_custom_jobs), "__call__") as call:
call.return_value = job_service.ListCustomJobsResponse()
client.list_custom_jobs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_list_custom_jobs_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.ListCustomJobsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_custom_jobs), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
job_service.ListCustomJobsResponse()
)
await client.list_custom_jobs(request)
# Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_list_custom_jobs_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_custom_jobs), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = job_service.ListCustomJobsResponse()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.list_custom_jobs(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
def test_list_custom_jobs_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.list_custom_jobs(
job_service.ListCustomJobsRequest(), parent="parent_value",
)
@pytest.mark.asyncio
async def test_list_custom_jobs_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_custom_jobs), "__call__") as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            job_service.ListCustomJobsResponse()
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.list_custom_jobs(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_list_custom_jobs_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.list_custom_jobs(
job_service.ListCustomJobsRequest(), parent="parent_value",
)
def test_list_custom_jobs_pager():
    client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_custom_jobs), "__call__") as call:
# Set the response to a series of pages.
call.side_effect = (
job_service.ListCustomJobsResponse(
custom_jobs=[
custom_job.CustomJob(),
custom_job.CustomJob(),
custom_job.CustomJob(),
],
next_page_token="abc",
),
job_service.ListCustomJobsResponse(custom_jobs=[], next_page_token="def",),
job_service.ListCustomJobsResponse(
custom_jobs=[custom_job.CustomJob(),], next_page_token="ghi",
),
job_service.ListCustomJobsResponse(
custom_jobs=[custom_job.CustomJob(), custom_job.CustomJob(),],
),
RuntimeError,
)
        metadata = (
            gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)),
        )
pager = client.list_custom_jobs(request={})
assert pager._metadata == metadata
        results = list(pager)
assert len(results) == 6
assert all(isinstance(i, custom_job.CustomJob) for i in results)
def test_list_custom_jobs_pages():
    client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_custom_jobs), "__call__") as call:
# Set the response to a series of pages.
call.side_effect = (
job_service.ListCustomJobsResponse(
custom_jobs=[
custom_job.CustomJob(),
custom_job.CustomJob(),
custom_job.CustomJob(),
],
next_page_token="abc",
),
job_service.ListCustomJobsResponse(custom_jobs=[], next_page_token="def",),
job_service.ListCustomJobsResponse(
custom_jobs=[custom_job.CustomJob(),], next_page_token="ghi",
),
job_service.ListCustomJobsResponse(
custom_jobs=[custom_job.CustomJob(), custom_job.CustomJob(),],
),
RuntimeError,
)
pages = list(client.list_custom_jobs(request={}).pages)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
@pytest.mark.asyncio
async def test_list_custom_jobs_async_pager():
    client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_custom_jobs), "__call__", new_callable=mock.AsyncMock
) as call:
# Set the response to a series of pages.
call.side_effect = (
job_service.ListCustomJobsResponse(
custom_jobs=[
custom_job.CustomJob(),
custom_job.CustomJob(),
custom_job.CustomJob(),
],
next_page_token="abc",
),
job_service.ListCustomJobsResponse(custom_jobs=[], next_page_token="def",),
job_service.ListCustomJobsResponse(
custom_jobs=[custom_job.CustomJob(),], next_page_token="ghi",
),
job_service.ListCustomJobsResponse(
custom_jobs=[custom_job.CustomJob(), custom_job.CustomJob(),],
),
RuntimeError,
)
async_pager = await client.list_custom_jobs(request={},)
assert async_pager.next_page_token == "abc"
        responses = [response async for response in async_pager]
assert len(responses) == 6
assert all(isinstance(i, custom_job.CustomJob) for i in responses)
@pytest.mark.asyncio
async def test_list_custom_jobs_async_pages():
    client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_custom_jobs), "__call__", new_callable=mock.AsyncMock
) as call:
# Set the response to a series of pages.
call.side_effect = (
job_service.ListCustomJobsResponse(
custom_jobs=[
custom_job.CustomJob(),
custom_job.CustomJob(),
custom_job.CustomJob(),
],
next_page_token="abc",
),
job_service.ListCustomJobsResponse(custom_jobs=[], next_page_token="def",),
job_service.ListCustomJobsResponse(
custom_jobs=[custom_job.CustomJob(),], next_page_token="ghi",
),
job_service.ListCustomJobsResponse(
custom_jobs=[custom_job.CustomJob(), custom_job.CustomJob(),],
),
RuntimeError,
)
pages = []
async for page_ in (await client.list_custom_jobs(request={})).pages:
pages.append(page_)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
def test_delete_custom_job(
transport: str = "grpc", request_type=job_service.DeleteCustomJobRequest
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_custom_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/spam")
response = client.delete_custom_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.DeleteCustomJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
def test_delete_custom_job_from_dict():
test_delete_custom_job(request_type=dict)
def test_delete_custom_job_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_custom_job), "__call__"
) as call:
client.delete_custom_job()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.DeleteCustomJobRequest()
@pytest.mark.asyncio
async def test_delete_custom_job_async(
transport: str = "grpc_asyncio", request_type=job_service.DeleteCustomJobRequest
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_custom_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
response = await client.delete_custom_job(request)
# Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.DeleteCustomJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
@pytest.mark.asyncio
async def test_delete_custom_job_async_from_dict():
await test_delete_custom_job_async(request_type=dict)
def test_delete_custom_job_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.DeleteCustomJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_custom_job), "__call__"
) as call:
call.return_value = operations_pb2.Operation(name="operations/op")
client.delete_custom_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_delete_custom_job_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.DeleteCustomJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_custom_job), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/op")
)
await client.delete_custom_job(request)
# Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_delete_custom_job_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_custom_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/op")
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.delete_custom_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
def test_delete_custom_job_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.delete_custom_job(
job_service.DeleteCustomJobRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_delete_custom_job_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_custom_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            operations_pb2.Operation(name="operations/spam")
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.delete_custom_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_delete_custom_job_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.delete_custom_job(
job_service.DeleteCustomJobRequest(), name="name_value",
)
def test_cancel_custom_job(
transport: str = "grpc", request_type=job_service.CancelCustomJobRequest
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_custom_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = None
response = client.cancel_custom_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.CancelCustomJobRequest()
# Establish that the response is the type that we expect.
assert response is None
def test_cancel_custom_job_from_dict():
test_cancel_custom_job(request_type=dict)
def test_cancel_custom_job_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_custom_job), "__call__"
) as call:
client.cancel_custom_job()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.CancelCustomJobRequest()
@pytest.mark.asyncio
async def test_cancel_custom_job_async(
transport: str = "grpc_asyncio", request_type=job_service.CancelCustomJobRequest
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_custom_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
response = await client.cancel_custom_job(request)
# Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.CancelCustomJobRequest()
# Establish that the response is the type that we expect.
assert response is None
@pytest.mark.asyncio
async def test_cancel_custom_job_async_from_dict():
await test_cancel_custom_job_async(request_type=dict)
def test_cancel_custom_job_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.CancelCustomJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_custom_job), "__call__"
) as call:
call.return_value = None
client.cancel_custom_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]


@pytest.mark.asyncio
async def test_cancel_custom_job_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.CancelCustomJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_custom_job), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
await client.cancel_custom_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]


def test_cancel_custom_job_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_custom_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = None
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.cancel_custom_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val


def test_cancel_custom_job_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.cancel_custom_job(
job_service.CancelCustomJobRequest(), name="name_value",
)


@pytest.mark.asyncio
async def test_cancel_custom_job_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_custom_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.cancel_custom_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val


@pytest.mark.asyncio
async def test_cancel_custom_job_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.cancel_custom_job(
job_service.CancelCustomJobRequest(), name="name_value",
)


def test_create_data_labeling_job(
transport: str = "grpc", request_type=job_service.CreateDataLabelingJobRequest
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_data_labeling_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = gca_data_labeling_job.DataLabelingJob(
name="name_value",
display_name="display_name_value",
datasets=["datasets_value"],
labeler_count=1375,
instruction_uri="instruction_uri_value",
inputs_schema_uri="inputs_schema_uri_value",
state=job_state.JobState.JOB_STATE_QUEUED,
labeling_progress=1810,
specialist_pools=["specialist_pools_value"],
)
response = client.create_data_labeling_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.CreateDataLabelingJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, gca_data_labeling_job.DataLabelingJob)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.datasets == ["datasets_value"]
assert response.labeler_count == 1375
assert response.instruction_uri == "instruction_uri_value"
assert response.inputs_schema_uri == "inputs_schema_uri_value"
assert response.state == job_state.JobState.JOB_STATE_QUEUED
assert response.labeling_progress == 1810
assert response.specialist_pools == ["specialist_pools_value"]


def test_create_data_labeling_job_from_dict():
test_create_data_labeling_job(request_type=dict)


def test_create_data_labeling_job_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
    # i.e. request is None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_data_labeling_job), "__call__"
) as call:
client.create_data_labeling_job()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.CreateDataLabelingJobRequest()


@pytest.mark.asyncio
async def test_create_data_labeling_job_async(
transport: str = "grpc_asyncio",
request_type=job_service.CreateDataLabelingJobRequest,
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_data_labeling_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
gca_data_labeling_job.DataLabelingJob(
name="name_value",
display_name="display_name_value",
datasets=["datasets_value"],
labeler_count=1375,
instruction_uri="instruction_uri_value",
inputs_schema_uri="inputs_schema_uri_value",
state=job_state.JobState.JOB_STATE_QUEUED,
labeling_progress=1810,
specialist_pools=["specialist_pools_value"],
)
)
response = await client.create_data_labeling_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.CreateDataLabelingJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, gca_data_labeling_job.DataLabelingJob)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.datasets == ["datasets_value"]
assert response.labeler_count == 1375
assert response.instruction_uri == "instruction_uri_value"
assert response.inputs_schema_uri == "inputs_schema_uri_value"
assert response.state == job_state.JobState.JOB_STATE_QUEUED
assert response.labeling_progress == 1810
assert response.specialist_pools == ["specialist_pools_value"]


@pytest.mark.asyncio
async def test_create_data_labeling_job_async_from_dict():
await test_create_data_labeling_job_async(request_type=dict)


def test_create_data_labeling_job_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.CreateDataLabelingJobRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_data_labeling_job), "__call__"
) as call:
call.return_value = gca_data_labeling_job.DataLabelingJob()
client.create_data_labeling_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]


@pytest.mark.asyncio
async def test_create_data_labeling_job_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.CreateDataLabelingJobRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_data_labeling_job), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
gca_data_labeling_job.DataLabelingJob()
)
await client.create_data_labeling_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]


def test_create_data_labeling_job_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_data_labeling_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = gca_data_labeling_job.DataLabelingJob()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.create_data_labeling_job(
parent="parent_value",
data_labeling_job=gca_data_labeling_job.DataLabelingJob(name="name_value"),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
arg = args[0].data_labeling_job
mock_val = gca_data_labeling_job.DataLabelingJob(name="name_value")
assert arg == mock_val


def test_create_data_labeling_job_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.create_data_labeling_job(
job_service.CreateDataLabelingJobRequest(),
parent="parent_value",
data_labeling_job=gca_data_labeling_job.DataLabelingJob(name="name_value"),
)


@pytest.mark.asyncio
async def test_create_data_labeling_job_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_data_labeling_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            gca_data_labeling_job.DataLabelingJob()
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.create_data_labeling_job(
parent="parent_value",
data_labeling_job=gca_data_labeling_job.DataLabelingJob(name="name_value"),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
arg = args[0].data_labeling_job
mock_val = gca_data_labeling_job.DataLabelingJob(name="name_value")
assert arg == mock_val


@pytest.mark.asyncio
async def test_create_data_labeling_job_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.create_data_labeling_job(
job_service.CreateDataLabelingJobRequest(),
parent="parent_value",
data_labeling_job=gca_data_labeling_job.DataLabelingJob(name="name_value"),
)


def test_get_data_labeling_job(
transport: str = "grpc", request_type=job_service.GetDataLabelingJobRequest
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_data_labeling_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = data_labeling_job.DataLabelingJob(
name="name_value",
display_name="display_name_value",
datasets=["datasets_value"],
labeler_count=1375,
instruction_uri="instruction_uri_value",
inputs_schema_uri="inputs_schema_uri_value",
state=job_state.JobState.JOB_STATE_QUEUED,
labeling_progress=1810,
specialist_pools=["specialist_pools_value"],
)
response = client.get_data_labeling_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.GetDataLabelingJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, data_labeling_job.DataLabelingJob)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.datasets == ["datasets_value"]
assert response.labeler_count == 1375
assert response.instruction_uri == "instruction_uri_value"
assert response.inputs_schema_uri == "inputs_schema_uri_value"
assert response.state == job_state.JobState.JOB_STATE_QUEUED
assert response.labeling_progress == 1810
assert response.specialist_pools == ["specialist_pools_value"]


def test_get_data_labeling_job_from_dict():
test_get_data_labeling_job(request_type=dict)


def test_get_data_labeling_job_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
    # i.e. request is None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_data_labeling_job), "__call__"
) as call:
client.get_data_labeling_job()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.GetDataLabelingJobRequest()


@pytest.mark.asyncio
async def test_get_data_labeling_job_async(
transport: str = "grpc_asyncio", request_type=job_service.GetDataLabelingJobRequest
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_data_labeling_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
data_labeling_job.DataLabelingJob(
name="name_value",
display_name="display_name_value",
datasets=["datasets_value"],
labeler_count=1375,
instruction_uri="instruction_uri_value",
inputs_schema_uri="inputs_schema_uri_value",
state=job_state.JobState.JOB_STATE_QUEUED,
labeling_progress=1810,
specialist_pools=["specialist_pools_value"],
)
)
response = await client.get_data_labeling_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.GetDataLabelingJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, data_labeling_job.DataLabelingJob)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.datasets == ["datasets_value"]
assert response.labeler_count == 1375
assert response.instruction_uri == "instruction_uri_value"
assert response.inputs_schema_uri == "inputs_schema_uri_value"
assert response.state == job_state.JobState.JOB_STATE_QUEUED
assert response.labeling_progress == 1810
assert response.specialist_pools == ["specialist_pools_value"]


@pytest.mark.asyncio
async def test_get_data_labeling_job_async_from_dict():
await test_get_data_labeling_job_async(request_type=dict)


def test_get_data_labeling_job_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.GetDataLabelingJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_data_labeling_job), "__call__"
) as call:
call.return_value = data_labeling_job.DataLabelingJob()
client.get_data_labeling_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]


@pytest.mark.asyncio
async def test_get_data_labeling_job_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.GetDataLabelingJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_data_labeling_job), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
data_labeling_job.DataLabelingJob()
)
await client.get_data_labeling_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]


def test_get_data_labeling_job_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_data_labeling_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = data_labeling_job.DataLabelingJob()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.get_data_labeling_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val


def test_get_data_labeling_job_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.get_data_labeling_job(
job_service.GetDataLabelingJobRequest(), name="name_value",
)


@pytest.mark.asyncio
async def test_get_data_labeling_job_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_data_labeling_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            data_labeling_job.DataLabelingJob()
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.get_data_labeling_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val


@pytest.mark.asyncio
async def test_get_data_labeling_job_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.get_data_labeling_job(
job_service.GetDataLabelingJobRequest(), name="name_value",
)


def test_list_data_labeling_jobs(
transport: str = "grpc", request_type=job_service.ListDataLabelingJobsRequest
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_data_labeling_jobs), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = job_service.ListDataLabelingJobsResponse(
next_page_token="next_page_token_value",
)
response = client.list_data_labeling_jobs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.ListDataLabelingJobsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListDataLabelingJobsPager)
assert response.next_page_token == "next_page_token_value"


def test_list_data_labeling_jobs_from_dict():
test_list_data_labeling_jobs(request_type=dict)


def test_list_data_labeling_jobs_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
    # i.e. request is None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_data_labeling_jobs), "__call__"
) as call:
client.list_data_labeling_jobs()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.ListDataLabelingJobsRequest()


@pytest.mark.asyncio
async def test_list_data_labeling_jobs_async(
transport: str = "grpc_asyncio",
request_type=job_service.ListDataLabelingJobsRequest,
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_data_labeling_jobs), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
job_service.ListDataLabelingJobsResponse(
next_page_token="next_page_token_value",
)
)
response = await client.list_data_labeling_jobs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.ListDataLabelingJobsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListDataLabelingJobsAsyncPager)
assert response.next_page_token == "next_page_token_value"


@pytest.mark.asyncio
async def test_list_data_labeling_jobs_async_from_dict():
await test_list_data_labeling_jobs_async(request_type=dict)


def test_list_data_labeling_jobs_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.ListDataLabelingJobsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_data_labeling_jobs), "__call__"
) as call:
call.return_value = job_service.ListDataLabelingJobsResponse()
client.list_data_labeling_jobs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]


@pytest.mark.asyncio
async def test_list_data_labeling_jobs_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.ListDataLabelingJobsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_data_labeling_jobs), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
job_service.ListDataLabelingJobsResponse()
)
await client.list_data_labeling_jobs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]


def test_list_data_labeling_jobs_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_data_labeling_jobs), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = job_service.ListDataLabelingJobsResponse()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.list_data_labeling_jobs(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val


def test_list_data_labeling_jobs_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.list_data_labeling_jobs(
job_service.ListDataLabelingJobsRequest(), parent="parent_value",
)


@pytest.mark.asyncio
async def test_list_data_labeling_jobs_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_data_labeling_jobs), "__call__"
) as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            job_service.ListDataLabelingJobsResponse()
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.list_data_labeling_jobs(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val


@pytest.mark.asyncio
async def test_list_data_labeling_jobs_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.list_data_labeling_jobs(
job_service.ListDataLabelingJobsRequest(), parent="parent_value",
)


def test_list_data_labeling_jobs_pager():
    client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_data_labeling_jobs), "__call__"
) as call:
# Set the response to a series of pages.
call.side_effect = (
job_service.ListDataLabelingJobsResponse(
data_labeling_jobs=[
data_labeling_job.DataLabelingJob(),
data_labeling_job.DataLabelingJob(),
data_labeling_job.DataLabelingJob(),
],
next_page_token="abc",
),
job_service.ListDataLabelingJobsResponse(
data_labeling_jobs=[], next_page_token="def",
),
job_service.ListDataLabelingJobsResponse(
data_labeling_jobs=[data_labeling_job.DataLabelingJob(),],
next_page_token="ghi",
),
job_service.ListDataLabelingJobsResponse(
data_labeling_jobs=[
data_labeling_job.DataLabelingJob(),
data_labeling_job.DataLabelingJob(),
],
),
RuntimeError,
)
        metadata = (
            gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)),
        )
pager = client.list_data_labeling_jobs(request={})
assert pager._metadata == metadata
        results = list(pager)
assert len(results) == 6
assert all(isinstance(i, data_labeling_job.DataLabelingJob) for i in results)


def test_list_data_labeling_jobs_pages():
    client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_data_labeling_jobs), "__call__"
) as call:
# Set the response to a series of pages.
call.side_effect = (
job_service.ListDataLabelingJobsResponse(
data_labeling_jobs=[
data_labeling_job.DataLabelingJob(),
data_labeling_job.DataLabelingJob(),
data_labeling_job.DataLabelingJob(),
],
next_page_token="abc",
),
job_service.ListDataLabelingJobsResponse(
data_labeling_jobs=[], next_page_token="def",
),
job_service.ListDataLabelingJobsResponse(
data_labeling_jobs=[data_labeling_job.DataLabelingJob(),],
next_page_token="ghi",
),
job_service.ListDataLabelingJobsResponse(
data_labeling_jobs=[
data_labeling_job.DataLabelingJob(),
data_labeling_job.DataLabelingJob(),
],
),
RuntimeError,
)
pages = list(client.list_data_labeling_jobs(request={}).pages)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token


@pytest.mark.asyncio
async def test_list_data_labeling_jobs_async_pager():
    client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_data_labeling_jobs),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
job_service.ListDataLabelingJobsResponse(
data_labeling_jobs=[
data_labeling_job.DataLabelingJob(),
data_labeling_job.DataLabelingJob(),
data_labeling_job.DataLabelingJob(),
],
next_page_token="abc",
),
job_service.ListDataLabelingJobsResponse(
data_labeling_jobs=[], next_page_token="def",
),
job_service.ListDataLabelingJobsResponse(
data_labeling_jobs=[data_labeling_job.DataLabelingJob(),],
next_page_token="ghi",
),
job_service.ListDataLabelingJobsResponse(
data_labeling_jobs=[
data_labeling_job.DataLabelingJob(),
data_labeling_job.DataLabelingJob(),
],
),
RuntimeError,
)
async_pager = await client.list_data_labeling_jobs(request={},)
assert async_pager.next_page_token == "abc"
responses = []
async for response in async_pager:
responses.append(response)
assert len(responses) == 6
assert all(isinstance(i, data_labeling_job.DataLabelingJob) for i in responses)
@pytest.mark.asyncio
async def test_list_data_labeling_jobs_async_pages():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_data_labeling_jobs),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
job_service.ListDataLabelingJobsResponse(
data_labeling_jobs=[
data_labeling_job.DataLabelingJob(),
data_labeling_job.DataLabelingJob(),
data_labeling_job.DataLabelingJob(),
],
next_page_token="abc",
),
job_service.ListDataLabelingJobsResponse(
data_labeling_jobs=[], next_page_token="def",
),
job_service.ListDataLabelingJobsResponse(
data_labeling_jobs=[data_labeling_job.DataLabelingJob(),],
next_page_token="ghi",
),
job_service.ListDataLabelingJobsResponse(
data_labeling_jobs=[
data_labeling_job.DataLabelingJob(),
data_labeling_job.DataLabelingJob(),
],
),
RuntimeError,
)
pages = []
async for page_ in (await client.list_data_labeling_jobs(request={})).pages:
pages.append(page_)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
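The async variant consumes the same page stream with `async for` after first awaiting the paged call. A minimal, self-contained sketch of that consumption pattern (the `fake_async_pages`/`collect` names are hypothetical, not the client's API):

```python
import asyncio


async def fake_async_pages(pages):
    # Simulate the awaitable page stream exposed as `.pages` on the pager.
    for page in pages:
        await asyncio.sleep(0)  # yield control, as a real RPC round-trip would
        yield page
        if not page["token"]:
            break


async def collect(pages):
    out = []
    async for page in fake_async_pages(pages):
        out.append(page)
    return out


result = asyncio.run(collect([{"token": "abc"}, {"token": ""}]))
assert [p["token"] for p in result] == ["abc", ""]
```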
def test_delete_data_labeling_job(
transport: str = "grpc", request_type=job_service.DeleteDataLabelingJobRequest
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_data_labeling_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/spam")
response = client.delete_data_labeling_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.DeleteDataLabelingJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
def test_delete_data_labeling_job_from_dict():
test_delete_data_labeling_job(request_type=dict)
def test_delete_data_labeling_job_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_data_labeling_job), "__call__"
) as call:
client.delete_data_labeling_job()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.DeleteDataLabelingJobRequest()
@pytest.mark.asyncio
async def test_delete_data_labeling_job_async(
transport: str = "grpc_asyncio",
request_type=job_service.DeleteDataLabelingJobRequest,
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_data_labeling_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
response = await client.delete_data_labeling_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.DeleteDataLabelingJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
@pytest.mark.asyncio
async def test_delete_data_labeling_job_async_from_dict():
await test_delete_data_labeling_job_async(request_type=dict)
def test_delete_data_labeling_job_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.DeleteDataLabelingJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_data_labeling_job), "__call__"
) as call:
call.return_value = operations_pb2.Operation(name="operations/op")
client.delete_data_labeling_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_delete_data_labeling_job_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.DeleteDataLabelingJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_data_labeling_job), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/op")
)
await client.delete_data_labeling_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_delete_data_labeling_job_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_data_labeling_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/op")
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.delete_data_labeling_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
def test_delete_data_labeling_job_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.delete_data_labeling_job(
job_service.DeleteDataLabelingJobRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_delete_data_labeling_job_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_data_labeling_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.delete_data_labeling_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_delete_data_labeling_job_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.delete_data_labeling_job(
job_service.DeleteDataLabelingJobRequest(), name="name_value",
)
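The `flattened_error` tests assert that mixing a populated request object with flattened keyword fields raises `ValueError`. A minimal guard implementing that mutual-exclusivity rule looks like the sketch below; `build_request` is a hypothetical illustration, not the generated client's code:

```python
def build_request(request=None, **flattened):
    # Reject calls that supply both a request object and flattened fields.
    has_flattened = any(v is not None for v in flattened.values())
    if request is not None and has_flattened:
        raise ValueError(
            "If the `request` argument is set, then none of "
            "the individual field arguments should be set."
        )
    return request or dict(flattened)


try:
    build_request(request={"name": "n"}, name="name_value")
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError")

assert build_request(name="name_value") == {"name": "name_value"}
```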
def test_cancel_data_labeling_job(
transport: str = "grpc", request_type=job_service.CancelDataLabelingJobRequest
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_data_labeling_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = None
response = client.cancel_data_labeling_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.CancelDataLabelingJobRequest()
# Establish that the response is the type that we expect.
assert response is None
def test_cancel_data_labeling_job_from_dict():
test_cancel_data_labeling_job(request_type=dict)
def test_cancel_data_labeling_job_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_data_labeling_job), "__call__"
) as call:
client.cancel_data_labeling_job()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.CancelDataLabelingJobRequest()
@pytest.mark.asyncio
async def test_cancel_data_labeling_job_async(
transport: str = "grpc_asyncio",
request_type=job_service.CancelDataLabelingJobRequest,
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_data_labeling_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
response = await client.cancel_data_labeling_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.CancelDataLabelingJobRequest()
# Establish that the response is the type that we expect.
assert response is None
@pytest.mark.asyncio
async def test_cancel_data_labeling_job_async_from_dict():
await test_cancel_data_labeling_job_async(request_type=dict)
def test_cancel_data_labeling_job_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.CancelDataLabelingJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_data_labeling_job), "__call__"
) as call:
call.return_value = None
client.cancel_data_labeling_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_cancel_data_labeling_job_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.CancelDataLabelingJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_data_labeling_job), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
await client.cancel_data_labeling_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_cancel_data_labeling_job_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_data_labeling_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = None
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.cancel_data_labeling_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
def test_cancel_data_labeling_job_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.cancel_data_labeling_job(
job_service.CancelDataLabelingJobRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_cancel_data_labeling_job_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_data_labeling_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.cancel_data_labeling_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_cancel_data_labeling_job_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.cancel_data_labeling_job(
job_service.CancelDataLabelingJobRequest(), name="name_value",
)
def test_create_hyperparameter_tuning_job(
transport: str = "grpc",
request_type=job_service.CreateHyperparameterTuningJobRequest,
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_hyperparameter_tuning_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = gca_hyperparameter_tuning_job.HyperparameterTuningJob(
name="name_value",
display_name="display_name_value",
max_trial_count=1609,
parallel_trial_count=2128,
max_failed_trial_count=2317,
state=job_state.JobState.JOB_STATE_QUEUED,
)
response = client.create_hyperparameter_tuning_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.CreateHyperparameterTuningJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, gca_hyperparameter_tuning_job.HyperparameterTuningJob)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.max_trial_count == 1609
assert response.parallel_trial_count == 2128
assert response.max_failed_trial_count == 2317
assert response.state == job_state.JobState.JOB_STATE_QUEUED
def test_create_hyperparameter_tuning_job_from_dict():
test_create_hyperparameter_tuning_job(request_type=dict)
def test_create_hyperparameter_tuning_job_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_hyperparameter_tuning_job), "__call__"
) as call:
client.create_hyperparameter_tuning_job()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.CreateHyperparameterTuningJobRequest()
@pytest.mark.asyncio
async def test_create_hyperparameter_tuning_job_async(
transport: str = "grpc_asyncio",
request_type=job_service.CreateHyperparameterTuningJobRequest,
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_hyperparameter_tuning_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
gca_hyperparameter_tuning_job.HyperparameterTuningJob(
name="name_value",
display_name="display_name_value",
max_trial_count=1609,
parallel_trial_count=2128,
max_failed_trial_count=2317,
state=job_state.JobState.JOB_STATE_QUEUED,
)
)
response = await client.create_hyperparameter_tuning_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.CreateHyperparameterTuningJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, gca_hyperparameter_tuning_job.HyperparameterTuningJob)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.max_trial_count == 1609
assert response.parallel_trial_count == 2128
assert response.max_failed_trial_count == 2317
assert response.state == job_state.JobState.JOB_STATE_QUEUED
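The async tests stub the transport with `grpc_helpers_async.FakeUnaryUnaryCall`, whose essential behavior is just "an awaitable that resolves to a canned response". That shape can be reproduced in a few lines; `FakeCall` below is an illustrative stand-in, not the real helper:

```python
import asyncio


class FakeCall:
    def __init__(self, response):
        self._response = response

    def __await__(self):
        # The unreachable yield makes this a generator-based awaitable.
        if False:  # pragma: no cover
            yield
        return self._response


async def invoke():
    # Awaiting the fake call yields the canned response immediately.
    return await FakeCall({"name": "operations/spam"})


assert asyncio.run(invoke()) == {"name": "operations/spam"}
```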
@pytest.mark.asyncio
async def test_create_hyperparameter_tuning_job_async_from_dict():
await test_create_hyperparameter_tuning_job_async(request_type=dict)
def test_create_hyperparameter_tuning_job_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.CreateHyperparameterTuningJobRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_hyperparameter_tuning_job), "__call__"
) as call:
call.return_value = gca_hyperparameter_tuning_job.HyperparameterTuningJob()
client.create_hyperparameter_tuning_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_create_hyperparameter_tuning_job_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.CreateHyperparameterTuningJobRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_hyperparameter_tuning_job), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
gca_hyperparameter_tuning_job.HyperparameterTuningJob()
)
await client.create_hyperparameter_tuning_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_create_hyperparameter_tuning_job_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_hyperparameter_tuning_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = gca_hyperparameter_tuning_job.HyperparameterTuningJob()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.create_hyperparameter_tuning_job(
parent="parent_value",
hyperparameter_tuning_job=gca_hyperparameter_tuning_job.HyperparameterTuningJob(
name="name_value"
),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
arg = args[0].hyperparameter_tuning_job
mock_val = gca_hyperparameter_tuning_job.HyperparameterTuningJob(
name="name_value"
)
assert arg == mock_val
def test_create_hyperparameter_tuning_job_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.create_hyperparameter_tuning_job(
job_service.CreateHyperparameterTuningJobRequest(),
parent="parent_value",
hyperparameter_tuning_job=gca_hyperparameter_tuning_job.HyperparameterTuningJob(
name="name_value"
),
)
@pytest.mark.asyncio
async def test_create_hyperparameter_tuning_job_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_hyperparameter_tuning_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
gca_hyperparameter_tuning_job.HyperparameterTuningJob()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.create_hyperparameter_tuning_job(
parent="parent_value",
hyperparameter_tuning_job=gca_hyperparameter_tuning_job.HyperparameterTuningJob(
name="name_value"
),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
arg = args[0].hyperparameter_tuning_job
mock_val = gca_hyperparameter_tuning_job.HyperparameterTuningJob(
name="name_value"
)
assert arg == mock_val
@pytest.mark.asyncio
async def test_create_hyperparameter_tuning_job_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.create_hyperparameter_tuning_job(
job_service.CreateHyperparameterTuningJobRequest(),
parent="parent_value",
hyperparameter_tuning_job=gca_hyperparameter_tuning_job.HyperparameterTuningJob(
name="name_value"
),
)
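Every test here follows the same mocking pattern: patch the transport stub's `__call__` on its type, invoke the client, then unpack `call.mock_calls[0]` to inspect the recorded request. The pattern in isolation, with a hypothetical `Transport` class standing in for the real transport:

```python
from unittest import mock


class Transport:
    def send(self, request):
        raise RuntimeError("would hit the network")


t = Transport()
with mock.patch.object(type(t), "send") as call:
    call.return_value = "ok"
    assert t.send({"parent": "parent_value"}) == "ok"

# The mock records each call; unpack (name, args, kwargs) as the tests do.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == {"parent": "parent_value"}
```

Patching on `type(t)` rather than the instance is what lets the tests intercept attribute lookups made through the class, which is why they patch `type(client.transport.method)`'s `__call__`.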
def test_get_hyperparameter_tuning_job(
transport: str = "grpc", request_type=job_service.GetHyperparameterTuningJobRequest
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_hyperparameter_tuning_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = hyperparameter_tuning_job.HyperparameterTuningJob(
name="name_value",
display_name="display_name_value",
max_trial_count=1609,
parallel_trial_count=2128,
max_failed_trial_count=2317,
state=job_state.JobState.JOB_STATE_QUEUED,
)
response = client.get_hyperparameter_tuning_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.GetHyperparameterTuningJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, hyperparameter_tuning_job.HyperparameterTuningJob)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.max_trial_count == 1609
assert response.parallel_trial_count == 2128
assert response.max_failed_trial_count == 2317
assert response.state == job_state.JobState.JOB_STATE_QUEUED
def test_get_hyperparameter_tuning_job_from_dict():
test_get_hyperparameter_tuning_job(request_type=dict)
def test_get_hyperparameter_tuning_job_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_hyperparameter_tuning_job), "__call__"
) as call:
client.get_hyperparameter_tuning_job()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.GetHyperparameterTuningJobRequest()
@pytest.mark.asyncio
async def test_get_hyperparameter_tuning_job_async(
transport: str = "grpc_asyncio",
request_type=job_service.GetHyperparameterTuningJobRequest,
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_hyperparameter_tuning_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
hyperparameter_tuning_job.HyperparameterTuningJob(
name="name_value",
display_name="display_name_value",
max_trial_count=1609,
parallel_trial_count=2128,
max_failed_trial_count=2317,
state=job_state.JobState.JOB_STATE_QUEUED,
)
)
response = await client.get_hyperparameter_tuning_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.GetHyperparameterTuningJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, hyperparameter_tuning_job.HyperparameterTuningJob)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.max_trial_count == 1609
assert response.parallel_trial_count == 2128
assert response.max_failed_trial_count == 2317
assert response.state == job_state.JobState.JOB_STATE_QUEUED
@pytest.mark.asyncio
async def test_get_hyperparameter_tuning_job_async_from_dict():
await test_get_hyperparameter_tuning_job_async(request_type=dict)
def test_get_hyperparameter_tuning_job_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.GetHyperparameterTuningJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_hyperparameter_tuning_job), "__call__"
) as call:
call.return_value = hyperparameter_tuning_job.HyperparameterTuningJob()
client.get_hyperparameter_tuning_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_get_hyperparameter_tuning_job_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.GetHyperparameterTuningJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_hyperparameter_tuning_job), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
hyperparameter_tuning_job.HyperparameterTuningJob()
)
await client.get_hyperparameter_tuning_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_get_hyperparameter_tuning_job_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_hyperparameter_tuning_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = hyperparameter_tuning_job.HyperparameterTuningJob()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.get_hyperparameter_tuning_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
def test_get_hyperparameter_tuning_job_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.get_hyperparameter_tuning_job(
job_service.GetHyperparameterTuningJobRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_get_hyperparameter_tuning_job_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_hyperparameter_tuning_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
hyperparameter_tuning_job.HyperparameterTuningJob()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.get_hyperparameter_tuning_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_get_hyperparameter_tuning_job_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.get_hyperparameter_tuning_job(
job_service.GetHyperparameterTuningJobRequest(), name="name_value",
)
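The flattened-error tests above encode the rule that a prebuilt request object and flattened keyword fields are mutually exclusive. A hypothetical sketch of that validation (this is not the real client code, just the shape of the check the tests exercise):

```python
def build_request(request=None, *, name=None):
    # Mutually exclusive: callers pass either a prebuilt request object
    # or flattened field arguments, never both.
    if request is not None and name is not None:
        raise ValueError(
            "If the `request` argument is set, then none of "
            "the individual field arguments should be set."
        )
    return request or {"name": name}

ok = build_request(name="name_value")
assert ok == {"name": "name_value"}

failed = False
try:
    build_request({"name": "x"}, name="name_value")
except ValueError:
    failed = True
assert failed
```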
def test_list_hyperparameter_tuning_jobs(
transport: str = "grpc",
request_type=job_service.ListHyperparameterTuningJobsRequest,
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_hyperparameter_tuning_jobs), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = job_service.ListHyperparameterTuningJobsResponse(
next_page_token="next_page_token_value",
)
response = client.list_hyperparameter_tuning_jobs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.ListHyperparameterTuningJobsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListHyperparameterTuningJobsPager)
assert response.next_page_token == "next_page_token_value"
def test_list_hyperparameter_tuning_jobs_from_dict():
test_list_hyperparameter_tuning_jobs(request_type=dict)
def test_list_hyperparameter_tuning_jobs_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_hyperparameter_tuning_jobs), "__call__"
) as call:
client.list_hyperparameter_tuning_jobs()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.ListHyperparameterTuningJobsRequest()
@pytest.mark.asyncio
async def test_list_hyperparameter_tuning_jobs_async(
transport: str = "grpc_asyncio",
request_type=job_service.ListHyperparameterTuningJobsRequest,
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_hyperparameter_tuning_jobs), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
job_service.ListHyperparameterTuningJobsResponse(
next_page_token="next_page_token_value",
)
)
response = await client.list_hyperparameter_tuning_jobs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.ListHyperparameterTuningJobsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListHyperparameterTuningJobsAsyncPager)
assert response.next_page_token == "next_page_token_value"
@pytest.mark.asyncio
async def test_list_hyperparameter_tuning_jobs_async_from_dict():
await test_list_hyperparameter_tuning_jobs_async(request_type=dict)
def test_list_hyperparameter_tuning_jobs_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.ListHyperparameterTuningJobsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_hyperparameter_tuning_jobs), "__call__"
) as call:
call.return_value = job_service.ListHyperparameterTuningJobsResponse()
client.list_hyperparameter_tuning_jobs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_list_hyperparameter_tuning_jobs_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.ListHyperparameterTuningJobsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_hyperparameter_tuning_jobs), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
job_service.ListHyperparameterTuningJobsResponse()
)
await client.list_hyperparameter_tuning_jobs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_list_hyperparameter_tuning_jobs_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_hyperparameter_tuning_jobs), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = job_service.ListHyperparameterTuningJobsResponse()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.list_hyperparameter_tuning_jobs(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
def test_list_hyperparameter_tuning_jobs_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.list_hyperparameter_tuning_jobs(
job_service.ListHyperparameterTuningJobsRequest(), parent="parent_value",
)
@pytest.mark.asyncio
async def test_list_hyperparameter_tuning_jobs_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_hyperparameter_tuning_jobs), "__call__"
) as call:
# Designate an appropriate return value for the call.

call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
job_service.ListHyperparameterTuningJobsResponse()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.list_hyperparameter_tuning_jobs(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_list_hyperparameter_tuning_jobs_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.list_hyperparameter_tuning_jobs(
job_service.ListHyperparameterTuningJobsRequest(), parent="parent_value",
)
def test_list_hyperparameter_tuning_jobs_pager():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_hyperparameter_tuning_jobs), "__call__"
) as call:
# Set the response to a series of pages.
call.side_effect = (
job_service.ListHyperparameterTuningJobsResponse(
hyperparameter_tuning_jobs=[
hyperparameter_tuning_job.HyperparameterTuningJob(),
hyperparameter_tuning_job.HyperparameterTuningJob(),
hyperparameter_tuning_job.HyperparameterTuningJob(),
],
next_page_token="abc",
),
job_service.ListHyperparameterTuningJobsResponse(
hyperparameter_tuning_jobs=[], next_page_token="def",
),
job_service.ListHyperparameterTuningJobsResponse(
hyperparameter_tuning_jobs=[
hyperparameter_tuning_job.HyperparameterTuningJob(),
],
next_page_token="ghi",
),
job_service.ListHyperparameterTuningJobsResponse(
hyperparameter_tuning_jobs=[
hyperparameter_tuning_job.HyperparameterTuningJob(),
hyperparameter_tuning_job.HyperparameterTuningJob(),
],
),
RuntimeError,
)
metadata = ()
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)),
)
pager = client.list_hyperparameter_tuning_jobs(request={})
assert pager._metadata == metadata
results = [i for i in pager]
assert len(results) == 6
assert all(
isinstance(i, hyperparameter_tuning_job.HyperparameterTuningJob)
for i in results
)
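The pager assertion above (6 results across 4 mocked pages) follows from the pager flattening each page's items during iteration. A dependency-free sketch of that behavior (`FakePager` is a hypothetical stand-in for the generated pager classes):

```python
class FakePager:
    """Iterates items across a sequence of pages, like the generated pagers."""

    def __init__(self, pages):
        self._pages = pages

    def __iter__(self):
        # Yield every item from every page, in page order.
        for page in self._pages:
            yield from page

# 3 + 0 + 1 + 2 items across four pages, mirroring the mocked responses above.
pager = FakePager([["a", "b", "c"], [], ["d"], ["e", "f"]])
results = list(pager)
assert len(results) == 6
```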
def test_list_hyperparameter_tuning_jobs_pages():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_hyperparameter_tuning_jobs), "__call__"
) as call:
# Set the response to a series of pages.
call.side_effect = (
job_service.ListHyperparameterTuningJobsResponse(
hyperparameter_tuning_jobs=[
hyperparameter_tuning_job.HyperparameterTuningJob(),
hyperparameter_tuning_job.HyperparameterTuningJob(),
hyperparameter_tuning_job.HyperparameterTuningJob(),
],
next_page_token="abc",
),
job_service.ListHyperparameterTuningJobsResponse(
hyperparameter_tuning_jobs=[], next_page_token="def",
),
job_service.ListHyperparameterTuningJobsResponse(
hyperparameter_tuning_jobs=[
hyperparameter_tuning_job.HyperparameterTuningJob(),
],
next_page_token="ghi",
),
job_service.ListHyperparameterTuningJobsResponse(
hyperparameter_tuning_jobs=[
hyperparameter_tuning_job.HyperparameterTuningJob(),
hyperparameter_tuning_job.HyperparameterTuningJob(),
],
),
RuntimeError,
)
pages = list(client.list_hyperparameter_tuning_jobs(request={}).pages)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
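The `call.side_effect` sequences in the paging tests yield one response per invocation, and the trailing `RuntimeError` class is raised if the pager requests a page past the mocked responses. A small sketch of that `unittest.mock` behavior:

```python
from unittest import mock

# Consecutive calls consume the side_effect sequence in order; an exception
# class in the sequence is raised instead of returned when reached.
stub = mock.Mock(side_effect=["page-abc", "page-def", RuntimeError])
first = stub()
second = stub()
assert first == "page-abc"
assert second == "page-def"

raised = False
try:
    stub()  # third call hits the RuntimeError sentinel
except RuntimeError:
    raised = True
assert raised
```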
@pytest.mark.asyncio
async def test_list_hyperparameter_tuning_jobs_async_pager():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_hyperparameter_tuning_jobs),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
job_service.ListHyperparameterTuningJobsResponse(
hyperparameter_tuning_jobs=[
hyperparameter_tuning_job.HyperparameterTuningJob(),
hyperparameter_tuning_job.HyperparameterTuningJob(),
hyperparameter_tuning_job.HyperparameterTuningJob(),
],
next_page_token="abc",
),
job_service.ListHyperparameterTuningJobsResponse(
hyperparameter_tuning_jobs=[], next_page_token="def",
),
job_service.ListHyperparameterTuningJobsResponse(
hyperparameter_tuning_jobs=[
hyperparameter_tuning_job.HyperparameterTuningJob(),
],
next_page_token="ghi",
),
job_service.ListHyperparameterTuningJobsResponse(
hyperparameter_tuning_jobs=[
hyperparameter_tuning_job.HyperparameterTuningJob(),
hyperparameter_tuning_job.HyperparameterTuningJob(),
],
),
RuntimeError,
)
async_pager = await client.list_hyperparameter_tuning_jobs(request={},)
assert async_pager.next_page_token == "abc"
responses = []
async for response in async_pager:
responses.append(response)
assert len(responses) == 6
assert all(
isinstance(i, hyperparameter_tuning_job.HyperparameterTuningJob)
for i in responses
)
@pytest.mark.asyncio
async def test_list_hyperparameter_tuning_jobs_async_pages():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_hyperparameter_tuning_jobs),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
job_service.ListHyperparameterTuningJobsResponse(
hyperparameter_tuning_jobs=[
hyperparameter_tuning_job.HyperparameterTuningJob(),
hyperparameter_tuning_job.HyperparameterTuningJob(),
hyperparameter_tuning_job.HyperparameterTuningJob(),
],
next_page_token="abc",
),
job_service.ListHyperparameterTuningJobsResponse(
hyperparameter_tuning_jobs=[], next_page_token="def",
),
job_service.ListHyperparameterTuningJobsResponse(
hyperparameter_tuning_jobs=[
hyperparameter_tuning_job.HyperparameterTuningJob(),
],
next_page_token="ghi",
),
job_service.ListHyperparameterTuningJobsResponse(
hyperparameter_tuning_jobs=[
hyperparameter_tuning_job.HyperparameterTuningJob(),
hyperparameter_tuning_job.HyperparameterTuningJob(),
],
),
RuntimeError,
)
pages = []
async for page_ in (
await client.list_hyperparameter_tuning_jobs(request={})
).pages:
pages.append(page_)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
def test_delete_hyperparameter_tuning_job(
transport: str = "grpc",
request_type=job_service.DeleteHyperparameterTuningJobRequest,
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_hyperparameter_tuning_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/spam")
response = client.delete_hyperparameter_tuning_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.DeleteHyperparameterTuningJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
def test_delete_hyperparameter_tuning_job_from_dict():
test_delete_hyperparameter_tuning_job(request_type=dict)
def test_delete_hyperparameter_tuning_job_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_hyperparameter_tuning_job), "__call__"
) as call:
client.delete_hyperparameter_tuning_job()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.DeleteHyperparameterTuningJobRequest()
@pytest.mark.asyncio
async def test_delete_hyperparameter_tuning_job_async(
transport: str = "grpc_asyncio",
request_type=job_service.DeleteHyperparameterTuningJobRequest,
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_hyperparameter_tuning_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
response = await client.delete_hyperparameter_tuning_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.DeleteHyperparameterTuningJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
@pytest.mark.asyncio
async def test_delete_hyperparameter_tuning_job_async_from_dict():
await test_delete_hyperparameter_tuning_job_async(request_type=dict)
def test_delete_hyperparameter_tuning_job_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.DeleteHyperparameterTuningJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_hyperparameter_tuning_job), "__call__"
) as call:
call.return_value = operations_pb2.Operation(name="operations/op")
client.delete_hyperparameter_tuning_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_delete_hyperparameter_tuning_job_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.DeleteHyperparameterTuningJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_hyperparameter_tuning_job), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/op")
)
await client.delete_hyperparameter_tuning_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_delete_hyperparameter_tuning_job_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_hyperparameter_tuning_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/op")
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.delete_hyperparameter_tuning_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
def test_delete_hyperparameter_tuning_job_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.delete_hyperparameter_tuning_job(
job_service.DeleteHyperparameterTuningJobRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_delete_hyperparameter_tuning_job_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_hyperparameter_tuning_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.delete_hyperparameter_tuning_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_delete_hyperparameter_tuning_job_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.delete_hyperparameter_tuning_job(
job_service.DeleteHyperparameterTuningJobRequest(), name="name_value",
)
def test_cancel_hyperparameter_tuning_job(
transport: str = "grpc",
request_type=job_service.CancelHyperparameterTuningJobRequest,
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_hyperparameter_tuning_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = None
response = client.cancel_hyperparameter_tuning_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.CancelHyperparameterTuningJobRequest()
# Establish that the response is the type that we expect.
assert response is None
def test_cancel_hyperparameter_tuning_job_from_dict():
test_cancel_hyperparameter_tuning_job(request_type=dict)
def test_cancel_hyperparameter_tuning_job_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_hyperparameter_tuning_job), "__call__"
) as call:
client.cancel_hyperparameter_tuning_job()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.CancelHyperparameterTuningJobRequest()
@pytest.mark.asyncio
async def test_cancel_hyperparameter_tuning_job_async(
transport: str = "grpc_asyncio",
request_type=job_service.CancelHyperparameterTuningJobRequest,
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_hyperparameter_tuning_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
response = await client.cancel_hyperparameter_tuning_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.CancelHyperparameterTuningJobRequest()
# Establish that the response is the type that we expect.
assert response is None
@pytest.mark.asyncio
async def test_cancel_hyperparameter_tuning_job_async_from_dict():
await test_cancel_hyperparameter_tuning_job_async(request_type=dict)
def test_cancel_hyperparameter_tuning_job_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.CancelHyperparameterTuningJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_hyperparameter_tuning_job), "__call__"
) as call:
call.return_value = None
client.cancel_hyperparameter_tuning_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_cancel_hyperparameter_tuning_job_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.CancelHyperparameterTuningJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_hyperparameter_tuning_job), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
await client.cancel_hyperparameter_tuning_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_cancel_hyperparameter_tuning_job_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_hyperparameter_tuning_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = None
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.cancel_hyperparameter_tuning_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
def test_cancel_hyperparameter_tuning_job_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.cancel_hyperparameter_tuning_job(
job_service.CancelHyperparameterTuningJobRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_cancel_hyperparameter_tuning_job_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_hyperparameter_tuning_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.cancel_hyperparameter_tuning_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_cancel_hyperparameter_tuning_job_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.cancel_hyperparameter_tuning_job(
job_service.CancelHyperparameterTuningJobRequest(), name="name_value",
)
def test_create_batch_prediction_job(
transport: str = "grpc", request_type=job_service.CreateBatchPredictionJobRequest
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_batch_prediction_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = gca_batch_prediction_job.BatchPredictionJob(
name="name_value",
display_name="display_name_value",
model="model_value",
generate_explanation=True,
state=job_state.JobState.JOB_STATE_QUEUED,
)
response = client.create_batch_prediction_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.CreateBatchPredictionJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, gca_batch_prediction_job.BatchPredictionJob)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.model == "model_value"
assert response.generate_explanation is True
assert response.state == job_state.JobState.JOB_STATE_QUEUED
def test_create_batch_prediction_job_from_dict():
test_create_batch_prediction_job(request_type=dict)
def test_create_batch_prediction_job_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_batch_prediction_job), "__call__"
) as call:
client.create_batch_prediction_job()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.CreateBatchPredictionJobRequest()
@pytest.mark.asyncio
async def test_create_batch_prediction_job_async(
transport: str = "grpc_asyncio",
request_type=job_service.CreateBatchPredictionJobRequest,
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_batch_prediction_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
gca_batch_prediction_job.BatchPredictionJob(
name="name_value",
display_name="display_name_value",
model="model_value",
generate_explanation=True,
state=job_state.JobState.JOB_STATE_QUEUED,
)
)
response = await client.create_batch_prediction_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.CreateBatchPredictionJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, gca_batch_prediction_job.BatchPredictionJob)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.model == "model_value"
assert response.generate_explanation is True
assert response.state == job_state.JobState.JOB_STATE_QUEUED
@pytest.mark.asyncio
async def test_create_batch_prediction_job_async_from_dict():
await test_create_batch_prediction_job_async(request_type=dict)
def test_create_batch_prediction_job_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.CreateBatchPredictionJobRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_batch_prediction_job), "__call__"
) as call:
call.return_value = gca_batch_prediction_job.BatchPredictionJob()
client.create_batch_prediction_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_create_batch_prediction_job_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.CreateBatchPredictionJobRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_batch_prediction_job), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
gca_batch_prediction_job.BatchPredictionJob()
)
await client.create_batch_prediction_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_create_batch_prediction_job_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_batch_prediction_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = gca_batch_prediction_job.BatchPredictionJob()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.create_batch_prediction_job(
parent="parent_value",
batch_prediction_job=gca_batch_prediction_job.BatchPredictionJob(
name="name_value"
),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
arg = args[0].batch_prediction_job
mock_val = gca_batch_prediction_job.BatchPredictionJob(name="name_value")
assert arg == mock_val
def test_create_batch_prediction_job_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.create_batch_prediction_job(
job_service.CreateBatchPredictionJobRequest(),
parent="parent_value",
batch_prediction_job=gca_batch_prediction_job.BatchPredictionJob(
name="name_value"
),
)
@pytest.mark.asyncio
async def test_create_batch_prediction_job_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_batch_prediction_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
gca_batch_prediction_job.BatchPredictionJob()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.create_batch_prediction_job(
parent="parent_value",
batch_prediction_job=gca_batch_prediction_job.BatchPredictionJob(
name="name_value"
),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
arg = args[0].batch_prediction_job
mock_val = gca_batch_prediction_job.BatchPredictionJob(name="name_value")
assert arg == mock_val
@pytest.mark.asyncio
async def test_create_batch_prediction_job_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.create_batch_prediction_job(
job_service.CreateBatchPredictionJobRequest(),
parent="parent_value",
batch_prediction_job=gca_batch_prediction_job.BatchPredictionJob(
name="name_value"
),
)
def test_get_batch_prediction_job(
transport: str = "grpc", request_type=job_service.GetBatchPredictionJobRequest
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_batch_prediction_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = batch_prediction_job.BatchPredictionJob(
name="name_value",
display_name="display_name_value",
model="model_value",
generate_explanation=True,
state=job_state.JobState.JOB_STATE_QUEUED,
)
response = client.get_batch_prediction_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.GetBatchPredictionJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, batch_prediction_job.BatchPredictionJob)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.model == "model_value"
assert response.generate_explanation is True
assert response.state == job_state.JobState.JOB_STATE_QUEUED
def test_get_batch_prediction_job_from_dict():
test_get_batch_prediction_job(request_type=dict)
def test_get_batch_prediction_job_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_batch_prediction_job), "__call__"
) as call:
client.get_batch_prediction_job()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.GetBatchPredictionJobRequest()
@pytest.mark.asyncio
async def test_get_batch_prediction_job_async(
transport: str = "grpc_asyncio",
request_type=job_service.GetBatchPredictionJobRequest,
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_batch_prediction_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
batch_prediction_job.BatchPredictionJob(
name="name_value",
display_name="display_name_value",
model="model_value",
generate_explanation=True,
state=job_state.JobState.JOB_STATE_QUEUED,
)
)
response = await client.get_batch_prediction_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.GetBatchPredictionJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, batch_prediction_job.BatchPredictionJob)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.model == "model_value"
assert response.generate_explanation is True
assert response.state == job_state.JobState.JOB_STATE_QUEUED
@pytest.mark.asyncio
async def test_get_batch_prediction_job_async_from_dict():
await test_get_batch_prediction_job_async(request_type=dict)
def test_get_batch_prediction_job_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.GetBatchPredictionJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_batch_prediction_job), "__call__"
) as call:
call.return_value = batch_prediction_job.BatchPredictionJob()
client.get_batch_prediction_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_get_batch_prediction_job_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.GetBatchPredictionJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_batch_prediction_job), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
batch_prediction_job.BatchPredictionJob()
)
await client.get_batch_prediction_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_get_batch_prediction_job_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_batch_prediction_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = batch_prediction_job.BatchPredictionJob()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.get_batch_prediction_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
def test_get_batch_prediction_job_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.get_batch_prediction_job(
job_service.GetBatchPredictionJobRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_get_batch_prediction_job_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_batch_prediction_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
batch_prediction_job.BatchPredictionJob()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.get_batch_prediction_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_get_batch_prediction_job_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.get_batch_prediction_job(
job_service.GetBatchPredictionJobRequest(), name="name_value",
)
def test_list_batch_prediction_jobs(
transport: str = "grpc", request_type=job_service.ListBatchPredictionJobsRequest
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_batch_prediction_jobs), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = job_service.ListBatchPredictionJobsResponse(
next_page_token="next_page_token_value",
)
response = client.list_batch_prediction_jobs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.ListBatchPredictionJobsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListBatchPredictionJobsPager)
assert response.next_page_token == "next_page_token_value"
def test_list_batch_prediction_jobs_from_dict():
test_list_batch_prediction_jobs(request_type=dict)
def test_list_batch_prediction_jobs_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_batch_prediction_jobs), "__call__"
) as call:
client.list_batch_prediction_jobs()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.ListBatchPredictionJobsRequest()
@pytest.mark.asyncio
async def test_list_batch_prediction_jobs_async(
transport: str = "grpc_asyncio",
request_type=job_service.ListBatchPredictionJobsRequest,
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_batch_prediction_jobs), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
job_service.ListBatchPredictionJobsResponse(
next_page_token="next_page_token_value",
)
)
response = await client.list_batch_prediction_jobs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.ListBatchPredictionJobsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListBatchPredictionJobsAsyncPager)
assert response.next_page_token == "next_page_token_value"
@pytest.mark.asyncio
async def test_list_batch_prediction_jobs_async_from_dict():
await test_list_batch_prediction_jobs_async(request_type=dict)
def test_list_batch_prediction_jobs_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.ListBatchPredictionJobsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_batch_prediction_jobs), "__call__"
) as call:
call.return_value = job_service.ListBatchPredictionJobsResponse()
client.list_batch_prediction_jobs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_list_batch_prediction_jobs_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.ListBatchPredictionJobsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_batch_prediction_jobs), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
job_service.ListBatchPredictionJobsResponse()
)
await client.list_batch_prediction_jobs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_list_batch_prediction_jobs_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_batch_prediction_jobs), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = job_service.ListBatchPredictionJobsResponse()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.list_batch_prediction_jobs(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
def test_list_batch_prediction_jobs_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.list_batch_prediction_jobs(
job_service.ListBatchPredictionJobsRequest(), parent="parent_value",
)
@pytest.mark.asyncio
async def test_list_batch_prediction_jobs_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_batch_prediction_jobs), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
job_service.ListBatchPredictionJobsResponse()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.list_batch_prediction_jobs(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_list_batch_prediction_jobs_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.list_batch_prediction_jobs(
job_service.ListBatchPredictionJobsRequest(), parent="parent_value",
)
def test_list_batch_prediction_jobs_pager():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_batch_prediction_jobs), "__call__"
) as call:
# Set the response to a series of pages.
call.side_effect = (
job_service.ListBatchPredictionJobsResponse(
batch_prediction_jobs=[
batch_prediction_job.BatchPredictionJob(),
batch_prediction_job.BatchPredictionJob(),
batch_prediction_job.BatchPredictionJob(),
],
next_page_token="abc",
),
job_service.ListBatchPredictionJobsResponse(
batch_prediction_jobs=[], next_page_token="def",
),
job_service.ListBatchPredictionJobsResponse(
batch_prediction_jobs=[batch_prediction_job.BatchPredictionJob(),],
next_page_token="ghi",
),
job_service.ListBatchPredictionJobsResponse(
batch_prediction_jobs=[
batch_prediction_job.BatchPredictionJob(),
batch_prediction_job.BatchPredictionJob(),
],
),
RuntimeError,
)
metadata = (
gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)),
)
pager = client.list_batch_prediction_jobs(request={})
assert pager._metadata == metadata
results = list(pager)
assert len(results) == 6
assert all(
isinstance(i, batch_prediction_job.BatchPredictionJob) for i in results
)
def test_list_batch_prediction_jobs_pages():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_batch_prediction_jobs), "__call__"
) as call:
# Set the response to a series of pages.
call.side_effect = (
job_service.ListBatchPredictionJobsResponse(
batch_prediction_jobs=[
batch_prediction_job.BatchPredictionJob(),
batch_prediction_job.BatchPredictionJob(),
batch_prediction_job.BatchPredictionJob(),
],
next_page_token="abc",
),
job_service.ListBatchPredictionJobsResponse(
batch_prediction_jobs=[], next_page_token="def",
),
job_service.ListBatchPredictionJobsResponse(
batch_prediction_jobs=[batch_prediction_job.BatchPredictionJob(),],
next_page_token="ghi",
),
job_service.ListBatchPredictionJobsResponse(
batch_prediction_jobs=[
batch_prediction_job.BatchPredictionJob(),
batch_prediction_job.BatchPredictionJob(),
],
),
RuntimeError,
)
pages = list(client.list_batch_prediction_jobs(request={}).pages)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
@pytest.mark.asyncio
async def test_list_batch_prediction_jobs_async_pager():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_batch_prediction_jobs),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
job_service.ListBatchPredictionJobsResponse(
batch_prediction_jobs=[
batch_prediction_job.BatchPredictionJob(),
batch_prediction_job.BatchPredictionJob(),
batch_prediction_job.BatchPredictionJob(),
],
next_page_token="abc",
),
job_service.ListBatchPredictionJobsResponse(
batch_prediction_jobs=[], next_page_token="def",
),
job_service.ListBatchPredictionJobsResponse(
batch_prediction_jobs=[batch_prediction_job.BatchPredictionJob(),],
next_page_token="ghi",
),
job_service.ListBatchPredictionJobsResponse(
batch_prediction_jobs=[
batch_prediction_job.BatchPredictionJob(),
batch_prediction_job.BatchPredictionJob(),
],
),
RuntimeError,
)
async_pager = await client.list_batch_prediction_jobs(request={},)
assert async_pager.next_page_token == "abc"
responses = [response async for response in async_pager]
assert len(responses) == 6
assert all(
isinstance(i, batch_prediction_job.BatchPredictionJob) for i in responses
)
@pytest.mark.asyncio
async def test_list_batch_prediction_jobs_async_pages():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_batch_prediction_jobs),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
job_service.ListBatchPredictionJobsResponse(
batch_prediction_jobs=[
batch_prediction_job.BatchPredictionJob(),
batch_prediction_job.BatchPredictionJob(),
batch_prediction_job.BatchPredictionJob(),
],
next_page_token="abc",
),
job_service.ListBatchPredictionJobsResponse(
batch_prediction_jobs=[], next_page_token="def",
),
job_service.ListBatchPredictionJobsResponse(
batch_prediction_jobs=[batch_prediction_job.BatchPredictionJob(),],
next_page_token="ghi",
),
job_service.ListBatchPredictionJobsResponse(
batch_prediction_jobs=[
batch_prediction_job.BatchPredictionJob(),
batch_prediction_job.BatchPredictionJob(),
],
),
RuntimeError,
)
pages = []
async for page_ in (await client.list_batch_prediction_jobs(request={})).pages:
pages.append(page_)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
def test_delete_batch_prediction_job(
transport: str = "grpc", request_type=job_service.DeleteBatchPredictionJobRequest
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_batch_prediction_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/spam")
response = client.delete_batch_prediction_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.DeleteBatchPredictionJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
def test_delete_batch_prediction_job_from_dict():
test_delete_batch_prediction_job(request_type=dict)
def test_delete_batch_prediction_job_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_batch_prediction_job), "__call__"
) as call:
client.delete_batch_prediction_job()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.DeleteBatchPredictionJobRequest()
@pytest.mark.asyncio
async def test_delete_batch_prediction_job_async(
transport: str = "grpc_asyncio",
request_type=job_service.DeleteBatchPredictionJobRequest,
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_batch_prediction_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
response = await client.delete_batch_prediction_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.DeleteBatchPredictionJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
@pytest.mark.asyncio
async def test_delete_batch_prediction_job_async_from_dict():
await test_delete_batch_prediction_job_async(request_type=dict)
def test_delete_batch_prediction_job_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.DeleteBatchPredictionJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_batch_prediction_job), "__call__"
) as call:
call.return_value = operations_pb2.Operation(name="operations/op")
client.delete_batch_prediction_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_delete_batch_prediction_job_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.DeleteBatchPredictionJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_batch_prediction_job), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/op")
)
await client.delete_batch_prediction_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_delete_batch_prediction_job_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_batch_prediction_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/op")
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.delete_batch_prediction_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
def test_delete_batch_prediction_job_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.delete_batch_prediction_job(
job_service.DeleteBatchPredictionJobRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_delete_batch_prediction_job_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_batch_prediction_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.delete_batch_prediction_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_delete_batch_prediction_job_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.delete_batch_prediction_job(
job_service.DeleteBatchPredictionJobRequest(), name="name_value",
)
def test_cancel_batch_prediction_job(
transport: str = "grpc", request_type=job_service.CancelBatchPredictionJobRequest
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_batch_prediction_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = None
response = client.cancel_batch_prediction_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.CancelBatchPredictionJobRequest()
# Establish that the response is the type that we expect.
assert response is None
def test_cancel_batch_prediction_job_from_dict():
test_cancel_batch_prediction_job(request_type=dict)
def test_cancel_batch_prediction_job_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_batch_prediction_job), "__call__"
) as call:
client.cancel_batch_prediction_job()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.CancelBatchPredictionJobRequest()
@pytest.mark.asyncio
async def test_cancel_batch_prediction_job_async(
transport: str = "grpc_asyncio",
request_type=job_service.CancelBatchPredictionJobRequest,
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_batch_prediction_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
response = await client.cancel_batch_prediction_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.CancelBatchPredictionJobRequest()
# Establish that the response is the type that we expect.
assert response is None
@pytest.mark.asyncio
async def test_cancel_batch_prediction_job_async_from_dict():
await test_cancel_batch_prediction_job_async(request_type=dict)
def test_cancel_batch_prediction_job_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.CancelBatchPredictionJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_batch_prediction_job), "__call__"
) as call:
call.return_value = None
client.cancel_batch_prediction_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_cancel_batch_prediction_job_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.CancelBatchPredictionJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_batch_prediction_job), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
await client.cancel_batch_prediction_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_cancel_batch_prediction_job_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_batch_prediction_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = None
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.cancel_batch_prediction_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
def test_cancel_batch_prediction_job_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.cancel_batch_prediction_job(
job_service.CancelBatchPredictionJobRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_cancel_batch_prediction_job_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.cancel_batch_prediction_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.cancel_batch_prediction_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_cancel_batch_prediction_job_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.cancel_batch_prediction_job(
job_service.CancelBatchPredictionJobRequest(), name="name_value",
)
def test_create_model_deployment_monitoring_job(
transport: str = "grpc",
request_type=job_service.CreateModelDeploymentMonitoringJobRequest,
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_model_deployment_monitoring_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = gca_model_deployment_monitoring_job.ModelDeploymentMonitoringJob(
name="name_value",
display_name="display_name_value",
endpoint="endpoint_value",
state=job_state.JobState.JOB_STATE_QUEUED,
schedule_state=gca_model_deployment_monitoring_job.ModelDeploymentMonitoringJob.MonitoringScheduleState.PENDING,
predict_instance_schema_uri="predict_instance_schema_uri_value",
analysis_instance_schema_uri="analysis_instance_schema_uri_value",
enable_monitoring_pipeline_logs=True,
)
response = client.create_model_deployment_monitoring_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.CreateModelDeploymentMonitoringJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(
response, gca_model_deployment_monitoring_job.ModelDeploymentMonitoringJob
)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.endpoint == "endpoint_value"
assert response.state == job_state.JobState.JOB_STATE_QUEUED
assert (
response.schedule_state
== gca_model_deployment_monitoring_job.ModelDeploymentMonitoringJob.MonitoringScheduleState.PENDING
)
assert response.predict_instance_schema_uri == "predict_instance_schema_uri_value"
assert response.analysis_instance_schema_uri == "analysis_instance_schema_uri_value"
assert response.enable_monitoring_pipeline_logs is True
def test_create_model_deployment_monitoring_job_from_dict():
test_create_model_deployment_monitoring_job(request_type=dict)
def test_create_model_deployment_monitoring_job_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_model_deployment_monitoring_job), "__call__"
) as call:
client.create_model_deployment_monitoring_job()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.CreateModelDeploymentMonitoringJobRequest()
@pytest.mark.asyncio
async def test_create_model_deployment_monitoring_job_async(
transport: str = "grpc_asyncio",
request_type=job_service.CreateModelDeploymentMonitoringJobRequest,
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_model_deployment_monitoring_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
gca_model_deployment_monitoring_job.ModelDeploymentMonitoringJob(
name="name_value",
display_name="display_name_value",
endpoint="endpoint_value",
state=job_state.JobState.JOB_STATE_QUEUED,
schedule_state=gca_model_deployment_monitoring_job.ModelDeploymentMonitoringJob.MonitoringScheduleState.PENDING,
predict_instance_schema_uri="predict_instance_schema_uri_value",
analysis_instance_schema_uri="analysis_instance_schema_uri_value",
enable_monitoring_pipeline_logs=True,
)
)
response = await client.create_model_deployment_monitoring_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.CreateModelDeploymentMonitoringJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(
response, gca_model_deployment_monitoring_job.ModelDeploymentMonitoringJob
)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.endpoint == "endpoint_value"
assert response.state == job_state.JobState.JOB_STATE_QUEUED
assert (
response.schedule_state
== gca_model_deployment_monitoring_job.ModelDeploymentMonitoringJob.MonitoringScheduleState.PENDING
)
assert response.predict_instance_schema_uri == "predict_instance_schema_uri_value"
assert response.analysis_instance_schema_uri == "analysis_instance_schema_uri_value"
assert response.enable_monitoring_pipeline_logs is True
@pytest.mark.asyncio
async def test_create_model_deployment_monitoring_job_async_from_dict():
await test_create_model_deployment_monitoring_job_async(request_type=dict)
def test_create_model_deployment_monitoring_job_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.CreateModelDeploymentMonitoringJobRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_model_deployment_monitoring_job), "__call__"
) as call:
call.return_value = (
gca_model_deployment_monitoring_job.ModelDeploymentMonitoringJob()
)
client.create_model_deployment_monitoring_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_create_model_deployment_monitoring_job_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.CreateModelDeploymentMonitoringJobRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_model_deployment_monitoring_job), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
gca_model_deployment_monitoring_job.ModelDeploymentMonitoringJob()
)
await client.create_model_deployment_monitoring_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_create_model_deployment_monitoring_job_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_model_deployment_monitoring_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = (
gca_model_deployment_monitoring_job.ModelDeploymentMonitoringJob()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.create_model_deployment_monitoring_job(
parent="parent_value",
model_deployment_monitoring_job=gca_model_deployment_monitoring_job.ModelDeploymentMonitoringJob(
name="name_value"
),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
arg = args[0].model_deployment_monitoring_job
mock_val = gca_model_deployment_monitoring_job.ModelDeploymentMonitoringJob(
name="name_value"
)
assert arg == mock_val
def test_create_model_deployment_monitoring_job_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.create_model_deployment_monitoring_job(
job_service.CreateModelDeploymentMonitoringJobRequest(),
parent="parent_value",
model_deployment_monitoring_job=gca_model_deployment_monitoring_job.ModelDeploymentMonitoringJob(
name="name_value"
),
)
@pytest.mark.asyncio
async def test_create_model_deployment_monitoring_job_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.create_model_deployment_monitoring_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
gca_model_deployment_monitoring_job.ModelDeploymentMonitoringJob()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.create_model_deployment_monitoring_job(
parent="parent_value",
model_deployment_monitoring_job=gca_model_deployment_monitoring_job.ModelDeploymentMonitoringJob(
name="name_value"
),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
arg = args[0].model_deployment_monitoring_job
mock_val = gca_model_deployment_monitoring_job.ModelDeploymentMonitoringJob(
name="name_value"
)
assert arg == mock_val
@pytest.mark.asyncio
async def test_create_model_deployment_monitoring_job_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.create_model_deployment_monitoring_job(
job_service.CreateModelDeploymentMonitoringJobRequest(),
parent="parent_value",
model_deployment_monitoring_job=gca_model_deployment_monitoring_job.ModelDeploymentMonitoringJob(
name="name_value"
),
)
def test_search_model_deployment_monitoring_stats_anomalies(
transport: str = "grpc",
request_type=job_service.SearchModelDeploymentMonitoringStatsAnomaliesRequest,
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.search_model_deployment_monitoring_stats_anomalies),
"__call__",
) as call:
# Designate an appropriate return value for the call.
call.return_value = job_service.SearchModelDeploymentMonitoringStatsAnomaliesResponse(
next_page_token="next_page_token_value",
)
response = client.search_model_deployment_monitoring_stats_anomalies(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert (
args[0]
== job_service.SearchModelDeploymentMonitoringStatsAnomaliesRequest()
)
# Establish that the response is the type that we expect.
assert isinstance(
response, pagers.SearchModelDeploymentMonitoringStatsAnomaliesPager
)
assert response.next_page_token == "next_page_token_value"
def test_search_model_deployment_monitoring_stats_anomalies_from_dict():
test_search_model_deployment_monitoring_stats_anomalies(request_type=dict)
def test_search_model_deployment_monitoring_stats_anomalies_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.search_model_deployment_monitoring_stats_anomalies),
"__call__",
) as call:
client.search_model_deployment_monitoring_stats_anomalies()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert (
args[0]
== job_service.SearchModelDeploymentMonitoringStatsAnomaliesRequest()
)
@pytest.mark.asyncio
async def test_search_model_deployment_monitoring_stats_anomalies_async(
transport: str = "grpc_asyncio",
request_type=job_service.SearchModelDeploymentMonitoringStatsAnomaliesRequest,
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.search_model_deployment_monitoring_stats_anomalies),
"__call__",
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
job_service.SearchModelDeploymentMonitoringStatsAnomaliesResponse(
next_page_token="next_page_token_value",
)
)
response = await client.search_model_deployment_monitoring_stats_anomalies(
request
)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert (
args[0]
== job_service.SearchModelDeploymentMonitoringStatsAnomaliesRequest()
)
# Establish that the response is the type that we expect.
assert isinstance(
response, pagers.SearchModelDeploymentMonitoringStatsAnomaliesAsyncPager
)
assert response.next_page_token == "next_page_token_value"
@pytest.mark.asyncio
async def test_search_model_deployment_monitoring_stats_anomalies_async_from_dict():
await test_search_model_deployment_monitoring_stats_anomalies_async(
request_type=dict
)
def test_search_model_deployment_monitoring_stats_anomalies_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.SearchModelDeploymentMonitoringStatsAnomaliesRequest()
request.model_deployment_monitoring_job = "model_deployment_monitoring_job/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.search_model_deployment_monitoring_stats_anomalies),
"__call__",
) as call:
call.return_value = (
job_service.SearchModelDeploymentMonitoringStatsAnomaliesResponse()
)
client.search_model_deployment_monitoring_stats_anomalies(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
"x-goog-request-params",
"model_deployment_monitoring_job=model_deployment_monitoring_job/value",
) in kw["metadata"]
@pytest.mark.asyncio
async def test_search_model_deployment_monitoring_stats_anomalies_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.SearchModelDeploymentMonitoringStatsAnomaliesRequest()
request.model_deployment_monitoring_job = "model_deployment_monitoring_job/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.search_model_deployment_monitoring_stats_anomalies),
"__call__",
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
job_service.SearchModelDeploymentMonitoringStatsAnomaliesResponse()
)
await client.search_model_deployment_monitoring_stats_anomalies(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
"x-goog-request-params",
"model_deployment_monitoring_job=model_deployment_monitoring_job/value",
) in kw["metadata"]
def test_search_model_deployment_monitoring_stats_anomalies_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.search_model_deployment_monitoring_stats_anomalies),
"__call__",
) as call:
# Designate an appropriate return value for the call.
call.return_value = (
job_service.SearchModelDeploymentMonitoringStatsAnomaliesResponse()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.search_model_deployment_monitoring_stats_anomalies(
model_deployment_monitoring_job="model_deployment_monitoring_job_value",
deployed_model_id="deployed_model_id_value",
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].model_deployment_monitoring_job
mock_val = "model_deployment_monitoring_job_value"
assert arg == mock_val
arg = args[0].deployed_model_id
mock_val = "deployed_model_id_value"
assert arg == mock_val
def test_search_model_deployment_monitoring_stats_anomalies_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.search_model_deployment_monitoring_stats_anomalies(
job_service.SearchModelDeploymentMonitoringStatsAnomaliesRequest(),
model_deployment_monitoring_job="model_deployment_monitoring_job_value",
deployed_model_id="deployed_model_id_value",
)
@pytest.mark.asyncio
async def test_search_model_deployment_monitoring_stats_anomalies_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.search_model_deployment_monitoring_stats_anomalies),
"__call__",
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
job_service.SearchModelDeploymentMonitoringStatsAnomaliesResponse()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.search_model_deployment_monitoring_stats_anomalies(
model_deployment_monitoring_job="model_deployment_monitoring_job_value",
deployed_model_id="deployed_model_id_value",
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].model_deployment_monitoring_job
mock_val = "model_deployment_monitoring_job_value"
assert arg == mock_val
arg = args[0].deployed_model_id
mock_val = "deployed_model_id_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_search_model_deployment_monitoring_stats_anomalies_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.search_model_deployment_monitoring_stats_anomalies(
job_service.SearchModelDeploymentMonitoringStatsAnomaliesRequest(),
model_deployment_monitoring_job="model_deployment_monitoring_job_value",
deployed_model_id="deployed_model_id_value",
)
def test_search_model_deployment_monitoring_stats_anomalies_pager():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.search_model_deployment_monitoring_stats_anomalies),
"__call__",
) as call:
# Set the response to a series of pages.
call.side_effect = (
job_service.SearchModelDeploymentMonitoringStatsAnomaliesResponse(
monitoring_stats=[
gca_model_deployment_monitoring_job.ModelMonitoringStatsAnomalies(),
gca_model_deployment_monitoring_job.ModelMonitoringStatsAnomalies(),
gca_model_deployment_monitoring_job.ModelMonitoringStatsAnomalies(),
],
next_page_token="abc",
),
job_service.SearchModelDeploymentMonitoringStatsAnomaliesResponse(
monitoring_stats=[], next_page_token="def",
),
job_service.SearchModelDeploymentMonitoringStatsAnomaliesResponse(
monitoring_stats=[
gca_model_deployment_monitoring_job.ModelMonitoringStatsAnomalies(),
],
next_page_token="ghi",
),
job_service.SearchModelDeploymentMonitoringStatsAnomaliesResponse(
monitoring_stats=[
gca_model_deployment_monitoring_job.ModelMonitoringStatsAnomalies(),
gca_model_deployment_monitoring_job.ModelMonitoringStatsAnomalies(),
],
),
RuntimeError,
)
metadata = ()
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata(
(("model_deployment_monitoring_job", ""),)
),
)
pager = client.search_model_deployment_monitoring_stats_anomalies(request={})
assert pager._metadata == metadata
        results = list(pager)
assert len(results) == 6
assert all(
isinstance(
i, gca_model_deployment_monitoring_job.ModelMonitoringStatsAnomalies
)
for i in results
)
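The pager tests above feed four canned pages through `call.side_effect` and expect plain iteration over the pager to flatten them into six results, stopping at the page with an empty `next_page_token`. A minimal, self-contained sketch of that flattening contract (`FakePage`/`FakePager` are hypothetical names, not the real `google.api_core` pager API):

```python
class FakePage:
    def __init__(self, items, next_page_token=""):
        self.items = items
        self.next_page_token = next_page_token


class FakePager:
    def __init__(self, pages):
        # Pre-built responses, playing the role of call.side_effect above.
        self._pages = pages

    @property
    def pages(self):
        # Yield pages until one carries an empty next_page_token.
        for page in self._pages:
            yield page
            if not page.next_page_token:
                break

    def __iter__(self):
        # Flatten: yield every item of every page, in order.
        for page in self.pages:
            yield from page.items


pager = FakePager(
    [
        FakePage([1, 2, 3], "abc"),
        FakePage([], "def"),
        FakePage([4], "ghi"),
        FakePage([5, 6]),  # no token: final page
    ]
)
results = list(pager)
```

Iterating `pager.pages` gives the per-page view the `_pages`-style tests check (`abc`, `def`, `ghi`, then the empty token), while iterating the pager itself gives the flattened six-item view.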
def test_search_model_deployment_monitoring_stats_anomalies_pages():
    client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.search_model_deployment_monitoring_stats_anomalies),
"__call__",
) as call:
# Set the response to a series of pages.
call.side_effect = (
job_service.SearchModelDeploymentMonitoringStatsAnomaliesResponse(
monitoring_stats=[
gca_model_deployment_monitoring_job.ModelMonitoringStatsAnomalies(),
gca_model_deployment_monitoring_job.ModelMonitoringStatsAnomalies(),
gca_model_deployment_monitoring_job.ModelMonitoringStatsAnomalies(),
],
next_page_token="abc",
),
job_service.SearchModelDeploymentMonitoringStatsAnomaliesResponse(
monitoring_stats=[], next_page_token="def",
),
job_service.SearchModelDeploymentMonitoringStatsAnomaliesResponse(
monitoring_stats=[
gca_model_deployment_monitoring_job.ModelMonitoringStatsAnomalies(),
],
next_page_token="ghi",
),
job_service.SearchModelDeploymentMonitoringStatsAnomaliesResponse(
monitoring_stats=[
gca_model_deployment_monitoring_job.ModelMonitoringStatsAnomalies(),
gca_model_deployment_monitoring_job.ModelMonitoringStatsAnomalies(),
],
),
RuntimeError,
)
pages = list(
client.search_model_deployment_monitoring_stats_anomalies(request={}).pages
)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
@pytest.mark.asyncio
async def test_search_model_deployment_monitoring_stats_anomalies_async_pager():
    client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.search_model_deployment_monitoring_stats_anomalies),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
job_service.SearchModelDeploymentMonitoringStatsAnomaliesResponse(
monitoring_stats=[
gca_model_deployment_monitoring_job.ModelMonitoringStatsAnomalies(),
gca_model_deployment_monitoring_job.ModelMonitoringStatsAnomalies(),
gca_model_deployment_monitoring_job.ModelMonitoringStatsAnomalies(),
],
next_page_token="abc",
),
job_service.SearchModelDeploymentMonitoringStatsAnomaliesResponse(
monitoring_stats=[], next_page_token="def",
),
job_service.SearchModelDeploymentMonitoringStatsAnomaliesResponse(
monitoring_stats=[
gca_model_deployment_monitoring_job.ModelMonitoringStatsAnomalies(),
],
next_page_token="ghi",
),
job_service.SearchModelDeploymentMonitoringStatsAnomaliesResponse(
monitoring_stats=[
gca_model_deployment_monitoring_job.ModelMonitoringStatsAnomalies(),
gca_model_deployment_monitoring_job.ModelMonitoringStatsAnomalies(),
],
),
RuntimeError,
)
async_pager = await client.search_model_deployment_monitoring_stats_anomalies(
request={},
)
assert async_pager.next_page_token == "abc"
responses = []
async for response in async_pager:
responses.append(response)
assert len(responses) == 6
assert all(
isinstance(
i, gca_model_deployment_monitoring_job.ModelMonitoringStatsAnomalies
)
for i in responses
)
@pytest.mark.asyncio
async def test_search_model_deployment_monitoring_stats_anomalies_async_pages():
    client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.search_model_deployment_monitoring_stats_anomalies),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
job_service.SearchModelDeploymentMonitoringStatsAnomaliesResponse(
monitoring_stats=[
gca_model_deployment_monitoring_job.ModelMonitoringStatsAnomalies(),
gca_model_deployment_monitoring_job.ModelMonitoringStatsAnomalies(),
gca_model_deployment_monitoring_job.ModelMonitoringStatsAnomalies(),
],
next_page_token="abc",
),
job_service.SearchModelDeploymentMonitoringStatsAnomaliesResponse(
monitoring_stats=[], next_page_token="def",
),
job_service.SearchModelDeploymentMonitoringStatsAnomaliesResponse(
monitoring_stats=[
gca_model_deployment_monitoring_job.ModelMonitoringStatsAnomalies(),
],
next_page_token="ghi",
),
job_service.SearchModelDeploymentMonitoringStatsAnomaliesResponse(
monitoring_stats=[
gca_model_deployment_monitoring_job.ModelMonitoringStatsAnomalies(),
gca_model_deployment_monitoring_job.ModelMonitoringStatsAnomalies(),
],
),
RuntimeError,
)
pages = []
async for page_ in (
await client.search_model_deployment_monitoring_stats_anomalies(request={})
).pages:
pages.append(page_)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
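The async variants above exercise the same flattening contract with `async for`: iterating the async pager yields individual items across all pages, in order. A self-contained sketch of that iteration shape (`FakeAsyncPager` is hypothetical, not the real `google.api_core` async pager):

```python
import asyncio


class FakeAsyncPager:
    def __init__(self, pages):
        # Pre-built page item lists, like the call.side_effect tuples above.
        self._pages = pages

    async def __aiter__(self):
        # An async generator as __aiter__: "async for" over the pager
        # yields every item of every page, flattened in order.
        for page in self._pages:
            for item in page:
                yield item


async def _collect():
    pager = FakeAsyncPager([[1, 2, 3], [], [4], [5, 6]])
    return [item async for item in pager]


responses = asyncio.run(_collect())
```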
def test_get_model_deployment_monitoring_job(
transport: str = "grpc",
request_type=job_service.GetModelDeploymentMonitoringJobRequest,
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_model_deployment_monitoring_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = model_deployment_monitoring_job.ModelDeploymentMonitoringJob(
name="name_value",
display_name="display_name_value",
endpoint="endpoint_value",
state=job_state.JobState.JOB_STATE_QUEUED,
schedule_state=model_deployment_monitoring_job.ModelDeploymentMonitoringJob.MonitoringScheduleState.PENDING,
predict_instance_schema_uri="predict_instance_schema_uri_value",
analysis_instance_schema_uri="analysis_instance_schema_uri_value",
enable_monitoring_pipeline_logs=True,
)
response = client.get_model_deployment_monitoring_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.GetModelDeploymentMonitoringJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(
response, model_deployment_monitoring_job.ModelDeploymentMonitoringJob
)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.endpoint == "endpoint_value"
assert response.state == job_state.JobState.JOB_STATE_QUEUED
assert (
response.schedule_state
== model_deployment_monitoring_job.ModelDeploymentMonitoringJob.MonitoringScheduleState.PENDING
)
assert response.predict_instance_schema_uri == "predict_instance_schema_uri_value"
assert response.analysis_instance_schema_uri == "analysis_instance_schema_uri_value"
assert response.enable_monitoring_pipeline_logs is True
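The test above mocks the transport by patching `__call__` on the stub's *type*, so invoking the stub records the request and returns a canned response instead of touching the network. A self-contained sketch of the same pattern using only the stdlib (`_FakeStub`/`_FakeTransport` are illustrative stand-ins, not the real google-cloud classes):

```python
from unittest import mock


class _FakeStub:
    """Stands in for a gRPC stub method; illustrative only."""

    def __call__(self, request):
        raise RuntimeError("would perform a real network call")


class _FakeTransport:
    def __init__(self):
        self.get_job = _FakeStub()


transport = _FakeTransport()
request = {"name": "jobs/123"}

# Patch __call__ on the stub's type, exactly as the tests do with
# type(client.transport.<method>), so calling the instance hits the mock.
with mock.patch.object(type(transport.get_job), "__call__") as call:
    call.return_value = {"name": "name_value"}
    response = transport.get_job(request)

# The mock recorded the request as the sole positional argument.
_, args, _ = call.mock_calls[0]
```

This is why the tests can unpack `_, args, _ = call.mock_calls[0]` and compare `args[0]` against the expected request object.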
def test_get_model_deployment_monitoring_job_from_dict():
test_get_model_deployment_monitoring_job(request_type=dict)
def test_get_model_deployment_monitoring_job_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_model_deployment_monitoring_job), "__call__"
) as call:
client.get_model_deployment_monitoring_job()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.GetModelDeploymentMonitoringJobRequest()
@pytest.mark.asyncio
async def test_get_model_deployment_monitoring_job_async(
transport: str = "grpc_asyncio",
request_type=job_service.GetModelDeploymentMonitoringJobRequest,
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_model_deployment_monitoring_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
model_deployment_monitoring_job.ModelDeploymentMonitoringJob(
name="name_value",
display_name="display_name_value",
endpoint="endpoint_value",
state=job_state.JobState.JOB_STATE_QUEUED,
schedule_state=model_deployment_monitoring_job.ModelDeploymentMonitoringJob.MonitoringScheduleState.PENDING,
predict_instance_schema_uri="predict_instance_schema_uri_value",
analysis_instance_schema_uri="analysis_instance_schema_uri_value",
enable_monitoring_pipeline_logs=True,
)
)
response = await client.get_model_deployment_monitoring_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.GetModelDeploymentMonitoringJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(
response, model_deployment_monitoring_job.ModelDeploymentMonitoringJob
)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.endpoint == "endpoint_value"
assert response.state == job_state.JobState.JOB_STATE_QUEUED
assert (
response.schedule_state
== model_deployment_monitoring_job.ModelDeploymentMonitoringJob.MonitoringScheduleState.PENDING
)
assert response.predict_instance_schema_uri == "predict_instance_schema_uri_value"
assert response.analysis_instance_schema_uri == "analysis_instance_schema_uri_value"
assert response.enable_monitoring_pipeline_logs is True
@pytest.mark.asyncio
async def test_get_model_deployment_monitoring_job_async_from_dict():
await test_get_model_deployment_monitoring_job_async(request_type=dict)
def test_get_model_deployment_monitoring_job_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.GetModelDeploymentMonitoringJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_model_deployment_monitoring_job), "__call__"
) as call:
call.return_value = (
model_deployment_monitoring_job.ModelDeploymentMonitoringJob()
)
client.get_model_deployment_monitoring_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_get_model_deployment_monitoring_job_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.GetModelDeploymentMonitoringJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_model_deployment_monitoring_job), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
model_deployment_monitoring_job.ModelDeploymentMonitoringJob()
)
await client.get_model_deployment_monitoring_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
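The field-header tests assert that request fields bound to the URI are echoed back as a single `x-goog-request-params` metadata entry. A simplified sketch of what a routing-header helper along the lines of `gapic_v1.routing_header.to_grpc_metadata` produces; this toy version only URL-encodes values (leaving `/` unescaped, as the assertions above expect), and is not the real `google.api_core` implementation:

```python
from urllib.parse import quote


def to_grpc_metadata(params):
    # Join (key, value) pairs into one "k1=v1&k2=v2" routing header value.
    # quote() leaves "/" unescaped by default, so "name/value" survives intact.
    value = "&".join(f"{key}={quote(str(val))}" for key, val in params)
    return ("x-goog-request-params", value)


header = to_grpc_metadata((("name", "name/value"),))
```

The resulting tuple is what the tests look for inside `kw["metadata"]`.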
def test_get_model_deployment_monitoring_job_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_model_deployment_monitoring_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = (
model_deployment_monitoring_job.ModelDeploymentMonitoringJob()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.get_model_deployment_monitoring_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
def test_get_model_deployment_monitoring_job_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.get_model_deployment_monitoring_job(
job_service.GetModelDeploymentMonitoringJobRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_get_model_deployment_monitoring_job_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_model_deployment_monitoring_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
model_deployment_monitoring_job.ModelDeploymentMonitoringJob()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.get_model_deployment_monitoring_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_get_model_deployment_monitoring_job_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.get_model_deployment_monitoring_job(
job_service.GetModelDeploymentMonitoringJobRequest(), name="name_value",
)
def test_list_model_deployment_monitoring_jobs(
transport: str = "grpc",
request_type=job_service.ListModelDeploymentMonitoringJobsRequest,
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_model_deployment_monitoring_jobs), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = job_service.ListModelDeploymentMonitoringJobsResponse(
next_page_token="next_page_token_value",
)
response = client.list_model_deployment_monitoring_jobs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.ListModelDeploymentMonitoringJobsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListModelDeploymentMonitoringJobsPager)
assert response.next_page_token == "next_page_token_value"
def test_list_model_deployment_monitoring_jobs_from_dict():
test_list_model_deployment_monitoring_jobs(request_type=dict)
def test_list_model_deployment_monitoring_jobs_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_model_deployment_monitoring_jobs), "__call__"
) as call:
client.list_model_deployment_monitoring_jobs()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.ListModelDeploymentMonitoringJobsRequest()
@pytest.mark.asyncio
async def test_list_model_deployment_monitoring_jobs_async(
transport: str = "grpc_asyncio",
request_type=job_service.ListModelDeploymentMonitoringJobsRequest,
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_model_deployment_monitoring_jobs), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
job_service.ListModelDeploymentMonitoringJobsResponse(
next_page_token="next_page_token_value",
)
)
response = await client.list_model_deployment_monitoring_jobs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.ListModelDeploymentMonitoringJobsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListModelDeploymentMonitoringJobsAsyncPager)
assert response.next_page_token == "next_page_token_value"
@pytest.mark.asyncio
async def test_list_model_deployment_monitoring_jobs_async_from_dict():
await test_list_model_deployment_monitoring_jobs_async(request_type=dict)
def test_list_model_deployment_monitoring_jobs_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.ListModelDeploymentMonitoringJobsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_model_deployment_monitoring_jobs), "__call__"
) as call:
call.return_value = job_service.ListModelDeploymentMonitoringJobsResponse()
client.list_model_deployment_monitoring_jobs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_list_model_deployment_monitoring_jobs_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.ListModelDeploymentMonitoringJobsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_model_deployment_monitoring_jobs), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
job_service.ListModelDeploymentMonitoringJobsResponse()
)
await client.list_model_deployment_monitoring_jobs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_list_model_deployment_monitoring_jobs_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_model_deployment_monitoring_jobs), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = job_service.ListModelDeploymentMonitoringJobsResponse()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.list_model_deployment_monitoring_jobs(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
def test_list_model_deployment_monitoring_jobs_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.list_model_deployment_monitoring_jobs(
job_service.ListModelDeploymentMonitoringJobsRequest(),
parent="parent_value",
)
@pytest.mark.asyncio
async def test_list_model_deployment_monitoring_jobs_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_model_deployment_monitoring_jobs), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
job_service.ListModelDeploymentMonitoringJobsResponse()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.list_model_deployment_monitoring_jobs(
parent="parent_value",
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_list_model_deployment_monitoring_jobs_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.list_model_deployment_monitoring_jobs(
job_service.ListModelDeploymentMonitoringJobsRequest(),
parent="parent_value",
)
def test_list_model_deployment_monitoring_jobs_pager():
    client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_model_deployment_monitoring_jobs), "__call__"
) as call:
# Set the response to a series of pages.
call.side_effect = (
job_service.ListModelDeploymentMonitoringJobsResponse(
model_deployment_monitoring_jobs=[
model_deployment_monitoring_job.ModelDeploymentMonitoringJob(),
model_deployment_monitoring_job.ModelDeploymentMonitoringJob(),
model_deployment_monitoring_job.ModelDeploymentMonitoringJob(),
],
next_page_token="abc",
),
job_service.ListModelDeploymentMonitoringJobsResponse(
model_deployment_monitoring_jobs=[], next_page_token="def",
),
job_service.ListModelDeploymentMonitoringJobsResponse(
model_deployment_monitoring_jobs=[
model_deployment_monitoring_job.ModelDeploymentMonitoringJob(),
],
next_page_token="ghi",
),
job_service.ListModelDeploymentMonitoringJobsResponse(
model_deployment_monitoring_jobs=[
model_deployment_monitoring_job.ModelDeploymentMonitoringJob(),
model_deployment_monitoring_job.ModelDeploymentMonitoringJob(),
],
),
RuntimeError,
)
metadata = ()
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)),
)
pager = client.list_model_deployment_monitoring_jobs(request={})
assert pager._metadata == metadata
        results = list(pager)
assert len(results) == 6
assert all(
isinstance(i, model_deployment_monitoring_job.ModelDeploymentMonitoringJob)
for i in results
)
def test_list_model_deployment_monitoring_jobs_pages():
    client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_model_deployment_monitoring_jobs), "__call__"
) as call:
# Set the response to a series of pages.
call.side_effect = (
job_service.ListModelDeploymentMonitoringJobsResponse(
model_deployment_monitoring_jobs=[
model_deployment_monitoring_job.ModelDeploymentMonitoringJob(),
model_deployment_monitoring_job.ModelDeploymentMonitoringJob(),
model_deployment_monitoring_job.ModelDeploymentMonitoringJob(),
],
next_page_token="abc",
),
job_service.ListModelDeploymentMonitoringJobsResponse(
model_deployment_monitoring_jobs=[], next_page_token="def",
),
job_service.ListModelDeploymentMonitoringJobsResponse(
model_deployment_monitoring_jobs=[
model_deployment_monitoring_job.ModelDeploymentMonitoringJob(),
],
next_page_token="ghi",
),
job_service.ListModelDeploymentMonitoringJobsResponse(
model_deployment_monitoring_jobs=[
model_deployment_monitoring_job.ModelDeploymentMonitoringJob(),
model_deployment_monitoring_job.ModelDeploymentMonitoringJob(),
],
),
RuntimeError,
)
pages = list(client.list_model_deployment_monitoring_jobs(request={}).pages)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
@pytest.mark.asyncio
async def test_list_model_deployment_monitoring_jobs_async_pager():
    client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_model_deployment_monitoring_jobs),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
job_service.ListModelDeploymentMonitoringJobsResponse(
model_deployment_monitoring_jobs=[
model_deployment_monitoring_job.ModelDeploymentMonitoringJob(),
model_deployment_monitoring_job.ModelDeploymentMonitoringJob(),
model_deployment_monitoring_job.ModelDeploymentMonitoringJob(),
],
next_page_token="abc",
),
job_service.ListModelDeploymentMonitoringJobsResponse(
model_deployment_monitoring_jobs=[], next_page_token="def",
),
job_service.ListModelDeploymentMonitoringJobsResponse(
model_deployment_monitoring_jobs=[
model_deployment_monitoring_job.ModelDeploymentMonitoringJob(),
],
next_page_token="ghi",
),
job_service.ListModelDeploymentMonitoringJobsResponse(
model_deployment_monitoring_jobs=[
model_deployment_monitoring_job.ModelDeploymentMonitoringJob(),
model_deployment_monitoring_job.ModelDeploymentMonitoringJob(),
],
),
RuntimeError,
)
async_pager = await client.list_model_deployment_monitoring_jobs(request={},)
assert async_pager.next_page_token == "abc"
responses = []
async for response in async_pager:
responses.append(response)
assert len(responses) == 6
assert all(
isinstance(i, model_deployment_monitoring_job.ModelDeploymentMonitoringJob)
for i in responses
)
@pytest.mark.asyncio
async def test_list_model_deployment_monitoring_jobs_async_pages():
    client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_model_deployment_monitoring_jobs),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
job_service.ListModelDeploymentMonitoringJobsResponse(
model_deployment_monitoring_jobs=[
model_deployment_monitoring_job.ModelDeploymentMonitoringJob(),
model_deployment_monitoring_job.ModelDeploymentMonitoringJob(),
model_deployment_monitoring_job.ModelDeploymentMonitoringJob(),
],
next_page_token="abc",
),
job_service.ListModelDeploymentMonitoringJobsResponse(
model_deployment_monitoring_jobs=[], next_page_token="def",
),
job_service.ListModelDeploymentMonitoringJobsResponse(
model_deployment_monitoring_jobs=[
model_deployment_monitoring_job.ModelDeploymentMonitoringJob(),
],
next_page_token="ghi",
),
job_service.ListModelDeploymentMonitoringJobsResponse(
model_deployment_monitoring_jobs=[
model_deployment_monitoring_job.ModelDeploymentMonitoringJob(),
model_deployment_monitoring_job.ModelDeploymentMonitoringJob(),
],
),
RuntimeError,
)
pages = []
async for page_ in (
await client.list_model_deployment_monitoring_jobs(request={})
).pages:
pages.append(page_)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
def test_update_model_deployment_monitoring_job(
transport: str = "grpc",
request_type=job_service.UpdateModelDeploymentMonitoringJobRequest,
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_model_deployment_monitoring_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/spam")
response = client.update_model_deployment_monitoring_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.UpdateModelDeploymentMonitoringJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
def test_update_model_deployment_monitoring_job_from_dict():
test_update_model_deployment_monitoring_job(request_type=dict)
def test_update_model_deployment_monitoring_job_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_model_deployment_monitoring_job), "__call__"
) as call:
client.update_model_deployment_monitoring_job()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.UpdateModelDeploymentMonitoringJobRequest()
@pytest.mark.asyncio
async def test_update_model_deployment_monitoring_job_async(
transport: str = "grpc_asyncio",
request_type=job_service.UpdateModelDeploymentMonitoringJobRequest,
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_model_deployment_monitoring_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
response = await client.update_model_deployment_monitoring_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.UpdateModelDeploymentMonitoringJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
@pytest.mark.asyncio
async def test_update_model_deployment_monitoring_job_async_from_dict():
await test_update_model_deployment_monitoring_job_async(request_type=dict)
def test_update_model_deployment_monitoring_job_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.UpdateModelDeploymentMonitoringJobRequest()
request.model_deployment_monitoring_job.name = (
"model_deployment_monitoring_job.name/value"
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_model_deployment_monitoring_job), "__call__"
) as call:
call.return_value = operations_pb2.Operation(name="operations/op")
client.update_model_deployment_monitoring_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
"x-goog-request-params",
"model_deployment_monitoring_job.name=model_deployment_monitoring_job.name/value",
) in kw["metadata"]
@pytest.mark.asyncio
async def test_update_model_deployment_monitoring_job_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.UpdateModelDeploymentMonitoringJobRequest()
request.model_deployment_monitoring_job.name = (
"model_deployment_monitoring_job.name/value"
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_model_deployment_monitoring_job), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/op")
)
await client.update_model_deployment_monitoring_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert (
"x-goog-request-params",
"model_deployment_monitoring_job.name=model_deployment_monitoring_job.name/value",
) in kw["metadata"]
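# The routing-header check above recurs verbatim in every *_field_headers*
# test. A hypothetical helper (not part of this module; `assert_routing_header`
# and the bare Mock stub below are illustrative only) could centralize it:
from unittest import mock


def assert_routing_header(call, expected_params):
    """Assert the first recorded call carried the routing metadata tuple."""
    _, _, kw = call.mock_calls[0]
    assert ("x-goog-request-params", expected_params) in kw["metadata"]


# Minimal usage sketch against a bare Mock standing in for the gRPC stub.
stub = mock.Mock()
stub(request=object(), metadata=[("x-goog-request-params", "name=name/value")])
assert_routing_header(stub, "name=name/value")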
def test_update_model_deployment_monitoring_job_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_model_deployment_monitoring_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/op")
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.update_model_deployment_monitoring_job(
model_deployment_monitoring_job=gca_model_deployment_monitoring_job.ModelDeploymentMonitoringJob(
name="name_value"
),
update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].model_deployment_monitoring_job
mock_val = gca_model_deployment_monitoring_job.ModelDeploymentMonitoringJob(
name="name_value"
)
assert arg == mock_val
arg = args[0].update_mask
mock_val = field_mask_pb2.FieldMask(paths=["paths_value"])
assert arg == mock_val
def test_update_model_deployment_monitoring_job_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.update_model_deployment_monitoring_job(
job_service.UpdateModelDeploymentMonitoringJobRequest(),
model_deployment_monitoring_job=gca_model_deployment_monitoring_job.ModelDeploymentMonitoringJob(
name="name_value"
),
update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]),
)
@pytest.mark.asyncio
async def test_update_model_deployment_monitoring_job_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_model_deployment_monitoring_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.update_model_deployment_monitoring_job(
model_deployment_monitoring_job=gca_model_deployment_monitoring_job.ModelDeploymentMonitoringJob(
name="name_value"
),
update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].model_deployment_monitoring_job
mock_val = gca_model_deployment_monitoring_job.ModelDeploymentMonitoringJob(
name="name_value"
)
assert arg == mock_val
arg = args[0].update_mask
mock_val = field_mask_pb2.FieldMask(paths=["paths_value"])
assert arg == mock_val
@pytest.mark.asyncio
async def test_update_model_deployment_monitoring_job_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.update_model_deployment_monitoring_job(
job_service.UpdateModelDeploymentMonitoringJobRequest(),
model_deployment_monitoring_job=gca_model_deployment_monitoring_job.ModelDeploymentMonitoringJob(
name="name_value"
),
update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]),
)
def test_delete_model_deployment_monitoring_job(
transport: str = "grpc",
request_type=job_service.DeleteModelDeploymentMonitoringJobRequest,
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_model_deployment_monitoring_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/spam")
response = client.delete_model_deployment_monitoring_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.DeleteModelDeploymentMonitoringJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
def test_delete_model_deployment_monitoring_job_from_dict():
test_delete_model_deployment_monitoring_job(request_type=dict)
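# The *_from_dict variants above rely on the generated clients accepting a
# plain dict wherever a request proto is expected. A toy sketch of that
# coercion; Request and delete_job are hypothetical stand-ins, not the real
# proto-plus machinery:
class Request:
    def __init__(self, name=""):
        self.name = name

    def __eq__(self, other):
        return isinstance(other, Request) and self.name == other.name


def delete_job(request):
    # Accept either a Request instance or a plain dict, as the generated
    # clients do via their proto-plus request constructors.
    if isinstance(request, dict):
        request = Request(**request)
    return request


assert delete_job({"name": "jobs/123"}) == Request(name="jobs/123")
assert delete_job(Request(name="jobs/123")).name == "jobs/123"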
def test_delete_model_deployment_monitoring_job_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_model_deployment_monitoring_job), "__call__"
) as call:
client.delete_model_deployment_monitoring_job()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.DeleteModelDeploymentMonitoringJobRequest()
@pytest.mark.asyncio
async def test_delete_model_deployment_monitoring_job_async(
transport: str = "grpc_asyncio",
request_type=job_service.DeleteModelDeploymentMonitoringJobRequest,
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_model_deployment_monitoring_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
response = await client.delete_model_deployment_monitoring_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.DeleteModelDeploymentMonitoringJobRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
@pytest.mark.asyncio
async def test_delete_model_deployment_monitoring_job_async_from_dict():
await test_delete_model_deployment_monitoring_job_async(request_type=dict)
def test_delete_model_deployment_monitoring_job_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.DeleteModelDeploymentMonitoringJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_model_deployment_monitoring_job), "__call__"
) as call:
call.return_value = operations_pb2.Operation(name="operations/op")
client.delete_model_deployment_monitoring_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_delete_model_deployment_monitoring_job_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.DeleteModelDeploymentMonitoringJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_model_deployment_monitoring_job), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/op")
)
await client.delete_model_deployment_monitoring_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_delete_model_deployment_monitoring_job_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_model_deployment_monitoring_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/op")
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.delete_model_deployment_monitoring_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
def test_delete_model_deployment_monitoring_job_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.delete_model_deployment_monitoring_job(
job_service.DeleteModelDeploymentMonitoringJobRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_delete_model_deployment_monitoring_job_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.delete_model_deployment_monitoring_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.delete_model_deployment_monitoring_job(
name="name_value",
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_delete_model_deployment_monitoring_job_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.delete_model_deployment_monitoring_job(
job_service.DeleteModelDeploymentMonitoringJobRequest(), name="name_value",
)
def test_pause_model_deployment_monitoring_job(
transport: str = "grpc",
request_type=job_service.PauseModelDeploymentMonitoringJobRequest,
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.pause_model_deployment_monitoring_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = None
response = client.pause_model_deployment_monitoring_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.PauseModelDeploymentMonitoringJobRequest()
# Establish that the response is the type that we expect.
assert response is None
def test_pause_model_deployment_monitoring_job_from_dict():
test_pause_model_deployment_monitoring_job(request_type=dict)
def test_pause_model_deployment_monitoring_job_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.pause_model_deployment_monitoring_job), "__call__"
) as call:
client.pause_model_deployment_monitoring_job()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.PauseModelDeploymentMonitoringJobRequest()
@pytest.mark.asyncio
async def test_pause_model_deployment_monitoring_job_async(
transport: str = "grpc_asyncio",
request_type=job_service.PauseModelDeploymentMonitoringJobRequest,
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.pause_model_deployment_monitoring_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
response = await client.pause_model_deployment_monitoring_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.PauseModelDeploymentMonitoringJobRequest()
# Establish that the response is the type that we expect.
assert response is None
@pytest.mark.asyncio
async def test_pause_model_deployment_monitoring_job_async_from_dict():
await test_pause_model_deployment_monitoring_job_async(request_type=dict)
def test_pause_model_deployment_monitoring_job_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.PauseModelDeploymentMonitoringJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.pause_model_deployment_monitoring_job), "__call__"
) as call:
call.return_value = None
client.pause_model_deployment_monitoring_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_pause_model_deployment_monitoring_job_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.PauseModelDeploymentMonitoringJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.pause_model_deployment_monitoring_job), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
await client.pause_model_deployment_monitoring_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_pause_model_deployment_monitoring_job_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.pause_model_deployment_monitoring_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = None
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.pause_model_deployment_monitoring_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
def test_pause_model_deployment_monitoring_job_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.pause_model_deployment_monitoring_job(
job_service.PauseModelDeploymentMonitoringJobRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_pause_model_deployment_monitoring_job_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.pause_model_deployment_monitoring_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.pause_model_deployment_monitoring_job(
name="name_value",
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_pause_model_deployment_monitoring_job_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.pause_model_deployment_monitoring_job(
job_service.PauseModelDeploymentMonitoringJobRequest(), name="name_value",
)
def test_resume_model_deployment_monitoring_job(
transport: str = "grpc",
request_type=job_service.ResumeModelDeploymentMonitoringJobRequest,
):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.resume_model_deployment_monitoring_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = None
response = client.resume_model_deployment_monitoring_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.ResumeModelDeploymentMonitoringJobRequest()
# Establish that the response is the type that we expect.
assert response is None
def test_resume_model_deployment_monitoring_job_from_dict():
test_resume_model_deployment_monitoring_job(request_type=dict)
def test_resume_model_deployment_monitoring_job_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.resume_model_deployment_monitoring_job), "__call__"
) as call:
client.resume_model_deployment_monitoring_job()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.ResumeModelDeploymentMonitoringJobRequest()
@pytest.mark.asyncio
async def test_resume_model_deployment_monitoring_job_async(
transport: str = "grpc_asyncio",
request_type=job_service.ResumeModelDeploymentMonitoringJobRequest,
):
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.resume_model_deployment_monitoring_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
response = await client.resume_model_deployment_monitoring_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == job_service.ResumeModelDeploymentMonitoringJobRequest()
# Establish that the response is the type that we expect.
assert response is None
@pytest.mark.asyncio
async def test_resume_model_deployment_monitoring_job_async_from_dict():
await test_resume_model_deployment_monitoring_job_async(request_type=dict)
def test_resume_model_deployment_monitoring_job_field_headers():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.ResumeModelDeploymentMonitoringJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.resume_model_deployment_monitoring_job), "__call__"
) as call:
call.return_value = None
client.resume_model_deployment_monitoring_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_resume_model_deployment_monitoring_job_field_headers_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = job_service.ResumeModelDeploymentMonitoringJobRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.resume_model_deployment_monitoring_job), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
await client.resume_model_deployment_monitoring_job(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_resume_model_deployment_monitoring_job_flattened():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.resume_model_deployment_monitoring_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = None
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.resume_model_deployment_monitoring_job(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
def test_resume_model_deployment_monitoring_job_flattened_error():
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.resume_model_deployment_monitoring_job(
job_service.ResumeModelDeploymentMonitoringJobRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_resume_model_deployment_monitoring_job_flattened_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.resume_model_deployment_monitoring_job), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.resume_model_deployment_monitoring_job(
name="name_value",
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_resume_model_deployment_monitoring_job_flattened_error_async():
client = JobServiceAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.resume_model_deployment_monitoring_job(
job_service.ResumeModelDeploymentMonitoringJobRequest(), name="name_value",
)
def test_credentials_transport_error():
# It is an error to provide credentials and a transport instance.
transport = transports.JobServiceGrpcTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
with pytest.raises(ValueError):
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# It is an error to provide a credentials file and a transport instance.
transport = transports.JobServiceGrpcTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
with pytest.raises(ValueError):
client = JobServiceClient(
client_options={"credentials_file": "credentials.json"},
transport=transport,
)
# It is an error to provide scopes and a transport instance.
transport = transports.JobServiceGrpcTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
with pytest.raises(ValueError):
client = JobServiceClient(
client_options={"scopes": ["1", "2"]}, transport=transport,
)
def test_transport_instance():
# A client may be instantiated with a custom transport instance.
transport = transports.JobServiceGrpcTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
client = JobServiceClient(transport=transport)
assert client.transport is transport
def test_transport_get_channel():
# A client may be instantiated with a custom transport instance.
transport = transports.JobServiceGrpcTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
channel = transport.grpc_channel
assert channel
transport = transports.JobServiceGrpcAsyncIOTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
channel = transport.grpc_channel
assert channel
@pytest.mark.parametrize(
"transport_class",
[transports.JobServiceGrpcTransport, transports.JobServiceGrpcAsyncIOTransport,],
)
def test_transport_adc(transport_class):
# Test default credentials are used if not provided.
with mock.patch.object(google.auth, "default") as adc:
adc.return_value = (ga_credentials.AnonymousCredentials(), None)
transport_class()
adc.assert_called_once()
def test_transport_grpc_default():
# A client should use the gRPC transport by default.
client = JobServiceClient(credentials=ga_credentials.AnonymousCredentials(),)
assert isinstance(client.transport, transports.JobServiceGrpcTransport,)
def test_job_service_base_transport_error():
# Passing both a credentials object and credentials_file should raise an error
with pytest.raises(core_exceptions.DuplicateCredentialArgs):
transport = transports.JobServiceTransport(
credentials=ga_credentials.AnonymousCredentials(),
credentials_file="credentials.json",
)
def test_job_service_base_transport():
# Instantiate the base transport.
with mock.patch(
"google.cloud.aiplatform_v1.services.job_service.transports.JobServiceTransport.__init__"
) as Transport:
Transport.return_value = None
transport = transports.JobServiceTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
# Every method on the transport should just blindly
# raise NotImplementedError.
methods = (
"create_custom_job",
"get_custom_job",
"list_custom_jobs",
"delete_custom_job",
"cancel_custom_job",
"create_data_labeling_job",
"get_data_labeling_job",
"list_data_labeling_jobs",
"delete_data_labeling_job",
"cancel_data_labeling_job",
"create_hyperparameter_tuning_job",
"get_hyperparameter_tuning_job",
"list_hyperparameter_tuning_jobs",
"delete_hyperparameter_tuning_job",
"cancel_hyperparameter_tuning_job",
"create_batch_prediction_job",
"get_batch_prediction_job",
"list_batch_prediction_jobs",
"delete_batch_prediction_job",
"cancel_batch_prediction_job",
"create_model_deployment_monitoring_job",
"search_model_deployment_monitoring_stats_anomalies",
"get_model_deployment_monitoring_job",
"list_model_deployment_monitoring_jobs",
"update_model_deployment_monitoring_job",
"delete_model_deployment_monitoring_job",
"pause_model_deployment_monitoring_job",
"resume_model_deployment_monitoring_job",
)
for method in methods:
with pytest.raises(NotImplementedError):
getattr(transport, method)(request=object())
with pytest.raises(NotImplementedError):
transport.close()
# Additionally, the LRO client (a property) should
# also raise NotImplementedError
with pytest.raises(NotImplementedError):
transport.operations_client
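The base-transport test above relies on a simple convention: the abstract transport stubs every RPC, `close()`, and the LRO client property as `NotImplementedError` until a concrete transport overrides them. A minimal sketch of that convention (class and method names here are illustrative stand-ins, not the real `JobServiceTransport`):

```python
# Hypothetical stand-in for the abstract-transport pattern exercised above
# (not the real JobServiceTransport): every RPC method and the LRO client
# property raise NotImplementedError until a concrete transport overrides them.
class BaseTransport:
    def create_custom_job(self, request):
        raise NotImplementedError()

    def cancel_custom_job(self, request):
        raise NotImplementedError()

    def close(self):
        raise NotImplementedError()

    @property
    def operations_client(self):
        raise NotImplementedError()
```

A test like the one above can then iterate over method names with `getattr` and assert that each call raises.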
def test_job_service_base_transport_with_credentials_file():
# Instantiate the base transport with a credentials file
with mock.patch.object(
google.auth, "load_credentials_from_file", autospec=True
) as load_creds, mock.patch(
"google.cloud.aiplatform_v1.services.job_service.transports.JobServiceTransport._prep_wrapped_messages"
) as Transport:
Transport.return_value = None
load_creds.return_value = (ga_credentials.AnonymousCredentials(), None)
transport = transports.JobServiceTransport(
credentials_file="credentials.json", quota_project_id="octopus",
)
load_creds.assert_called_once_with(
"credentials.json",
scopes=None,
default_scopes=("https://www.googleapis.com/auth/cloud-platform",),
quota_project_id="octopus",
)
def test_job_service_base_transport_with_adc():
# Test the default credentials are used if credentials and credentials_file are None.
with mock.patch.object(google.auth, "default", autospec=True) as adc, mock.patch(
"google.cloud.aiplatform_v1.services.job_service.transports.JobServiceTransport._prep_wrapped_messages"
) as Transport:
Transport.return_value = None
adc.return_value = (ga_credentials.AnonymousCredentials(), None)
transport = transports.JobServiceTransport()
adc.assert_called_once()
def test_job_service_auth_adc():
# If no credentials are provided, we should use ADC credentials.
with mock.patch.object(google.auth, "default", autospec=True) as adc:
adc.return_value = (ga_credentials.AnonymousCredentials(), None)
JobServiceClient()
adc.assert_called_once_with(
scopes=None,
default_scopes=("https://www.googleapis.com/auth/cloud-platform",),
quota_project_id=None,
)
@pytest.mark.parametrize(
"transport_class",
[transports.JobServiceGrpcTransport, transports.JobServiceGrpcAsyncIOTransport,],
)
def test_job_service_transport_auth_adc(transport_class):
# If credentials and host are not provided, the transport class should use
# ADC credentials.
with mock.patch.object(google.auth, "default", autospec=True) as adc:
adc.return_value = (ga_credentials.AnonymousCredentials(), None)
transport_class(quota_project_id="octopus", scopes=["1", "2"])
adc.assert_called_once_with(
scopes=["1", "2"],
default_scopes=("https://www.googleapis.com/auth/cloud-platform",),
quota_project_id="octopus",
)
@pytest.mark.parametrize(
"transport_class,grpc_helpers",
[
(transports.JobServiceGrpcTransport, grpc_helpers),
(transports.JobServiceGrpcAsyncIOTransport, grpc_helpers_async),
],
)
def test_job_service_transport_create_channel(transport_class, grpc_helpers):
# If credentials and host are not provided, the transport class should use
# ADC credentials.
with mock.patch.object(
google.auth, "default", autospec=True
) as adc, mock.patch.object(
grpc_helpers, "create_channel", autospec=True
) as create_channel:
creds = ga_credentials.AnonymousCredentials()
adc.return_value = (creds, None)
transport_class(quota_project_id="octopus", scopes=["1", "2"])
create_channel.assert_called_with(
"aiplatform.googleapis.com:443",
credentials=creds,
credentials_file=None,
quota_project_id="octopus",
default_scopes=("https://www.googleapis.com/auth/cloud-platform",),
scopes=["1", "2"],
default_host="aiplatform.googleapis.com",
ssl_credentials=None,
options=[
("grpc.max_send_message_length", -1),
("grpc.max_receive_message_length", -1),
],
)
@pytest.mark.parametrize(
"transport_class",
[transports.JobServiceGrpcTransport, transports.JobServiceGrpcAsyncIOTransport],
)
def test_job_service_grpc_transport_client_cert_source_for_mtls(transport_class):
cred = ga_credentials.AnonymousCredentials()
# Check ssl_channel_credentials is used if provided.
with mock.patch.object(transport_class, "create_channel") as mock_create_channel:
mock_ssl_channel_creds = mock.Mock()
transport_class(
host="squid.clam.whelk",
credentials=cred,
ssl_channel_credentials=mock_ssl_channel_creds,
)
mock_create_channel.assert_called_once_with(
"squid.clam.whelk:443",
credentials=cred,
credentials_file=None,
scopes=None,
ssl_credentials=mock_ssl_channel_creds,
quota_project_id=None,
options=[
("grpc.max_send_message_length", -1),
("grpc.max_receive_message_length", -1),
],
)
# Check if ssl_channel_credentials is not provided, then client_cert_source_for_mtls
# is used.
with mock.patch.object(transport_class, "create_channel", return_value=mock.Mock()):
with mock.patch("grpc.ssl_channel_credentials") as mock_ssl_cred:
transport_class(
credentials=cred,
client_cert_source_for_mtls=client_cert_source_callback,
)
expected_cert, expected_key = client_cert_source_callback()
mock_ssl_cred.assert_called_once_with(
certificate_chain=expected_cert, private_key=expected_key
)
def test_job_service_host_no_port():
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
client_options=client_options.ClientOptions(
api_endpoint="aiplatform.googleapis.com"
),
)
assert client.transport._host == "aiplatform.googleapis.com:443"
def test_job_service_host_with_port():
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(),
client_options=client_options.ClientOptions(
api_endpoint="aiplatform.googleapis.com:8000"
),
)
assert client.transport._host == "aiplatform.googleapis.com:8000"
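The two host tests above encode one rule: the default port 443 is appended only when `api_endpoint` carries no explicit port. A hypothetical helper capturing that rule (the real logic lives inside the generated transport; this is only a sketch):

```python
# Hypothetical sketch of the endpoint normalization checked by the two host
# tests above; the real behavior is implemented inside the generated transport.
def normalize_host(api_endpoint, default_port=443):
    if ":" in api_endpoint:
        return api_endpoint  # an explicit port is kept as-is
    return "%s:%d" % (api_endpoint, default_port)
```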
def test_job_service_grpc_transport_channel():
channel = grpc.secure_channel("http://localhost/", grpc.local_channel_credentials())
# Check that channel is used if provided.
transport = transports.JobServiceGrpcTransport(
host="squid.clam.whelk", channel=channel,
)
assert transport.grpc_channel == channel
assert transport._host == "squid.clam.whelk:443"
    assert transport._ssl_channel_credentials is None
def test_job_service_grpc_asyncio_transport_channel():
channel = aio.secure_channel("http://localhost/", grpc.local_channel_credentials())
# Check that channel is used if provided.
transport = transports.JobServiceGrpcAsyncIOTransport(
host="squid.clam.whelk", channel=channel,
)
assert transport.grpc_channel == channel
assert transport._host == "squid.clam.whelk:443"
    assert transport._ssl_channel_credentials is None
# Remove this test when deprecated arguments (api_mtls_endpoint, client_cert_source) are
# removed from grpc/grpc_asyncio transport constructor.
@pytest.mark.parametrize(
"transport_class",
[transports.JobServiceGrpcTransport, transports.JobServiceGrpcAsyncIOTransport],
)
def test_job_service_transport_channel_mtls_with_client_cert_source(transport_class):
with mock.patch(
"grpc.ssl_channel_credentials", autospec=True
) as grpc_ssl_channel_cred:
with mock.patch.object(
transport_class, "create_channel"
) as grpc_create_channel:
mock_ssl_cred = mock.Mock()
grpc_ssl_channel_cred.return_value = mock_ssl_cred
mock_grpc_channel = mock.Mock()
grpc_create_channel.return_value = mock_grpc_channel
cred = ga_credentials.AnonymousCredentials()
with pytest.warns(DeprecationWarning):
with mock.patch.object(google.auth, "default") as adc:
adc.return_value = (cred, None)
transport = transport_class(
host="squid.clam.whelk",
api_mtls_endpoint="mtls.squid.clam.whelk",
client_cert_source=client_cert_source_callback,
)
adc.assert_called_once()
grpc_ssl_channel_cred.assert_called_once_with(
certificate_chain=b"cert bytes", private_key=b"key bytes"
)
grpc_create_channel.assert_called_once_with(
"mtls.squid.clam.whelk:443",
credentials=cred,
credentials_file=None,
scopes=None,
ssl_credentials=mock_ssl_cred,
quota_project_id=None,
options=[
("grpc.max_send_message_length", -1),
("grpc.max_receive_message_length", -1),
],
)
assert transport.grpc_channel == mock_grpc_channel
assert transport._ssl_channel_credentials == mock_ssl_cred
# Remove this test when deprecated arguments (api_mtls_endpoint, client_cert_source) are
# removed from grpc/grpc_asyncio transport constructor.
@pytest.mark.parametrize(
"transport_class",
[transports.JobServiceGrpcTransport, transports.JobServiceGrpcAsyncIOTransport],
)
def test_job_service_transport_channel_mtls_with_adc(transport_class):
mock_ssl_cred = mock.Mock()
with mock.patch.multiple(
"google.auth.transport.grpc.SslCredentials",
__init__=mock.Mock(return_value=None),
ssl_credentials=mock.PropertyMock(return_value=mock_ssl_cred),
):
with mock.patch.object(
transport_class, "create_channel"
) as grpc_create_channel:
mock_grpc_channel = mock.Mock()
grpc_create_channel.return_value = mock_grpc_channel
mock_cred = mock.Mock()
with pytest.warns(DeprecationWarning):
transport = transport_class(
host="squid.clam.whelk",
credentials=mock_cred,
api_mtls_endpoint="mtls.squid.clam.whelk",
client_cert_source=None,
)
grpc_create_channel.assert_called_once_with(
"mtls.squid.clam.whelk:443",
credentials=mock_cred,
credentials_file=None,
scopes=None,
ssl_credentials=mock_ssl_cred,
quota_project_id=None,
options=[
("grpc.max_send_message_length", -1),
("grpc.max_receive_message_length", -1),
],
)
assert transport.grpc_channel == mock_grpc_channel
def test_job_service_grpc_lro_client():
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
transport = client.transport
    # Ensure that we have an api-core operations client.
assert isinstance(transport.operations_client, operations_v1.OperationsClient,)
# Ensure that subsequent calls to the property send the exact same object.
assert transport.operations_client is transport.operations_client
def test_job_service_grpc_lro_async_client():
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc_asyncio",
)
transport = client.transport
    # Ensure that we have an api-core operations client.
assert isinstance(transport.operations_client, operations_v1.OperationsAsyncClient,)
# Ensure that subsequent calls to the property send the exact same object.
assert transport.operations_client is transport.operations_client
def test_batch_prediction_job_path():
project = "squid"
location = "clam"
batch_prediction_job = "whelk"
expected = "projects/{project}/locations/{location}/batchPredictionJobs/{batch_prediction_job}".format(
project=project, location=location, batch_prediction_job=batch_prediction_job,
)
actual = JobServiceClient.batch_prediction_job_path(
project, location, batch_prediction_job
)
assert expected == actual
def test_parse_batch_prediction_job_path():
expected = {
"project": "octopus",
"location": "oyster",
"batch_prediction_job": "nudibranch",
}
path = JobServiceClient.batch_prediction_job_path(**expected)
# Check that the path construction is reversible.
actual = JobServiceClient.parse_batch_prediction_job_path(path)
assert expected == actual
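The path tests above assert that building and parsing a resource name are inverse operations. A sketch of how such a reversible pair can be implemented with a format template and a matching regex (an illustrative re-implementation, not the actual `JobServiceClient` classmethods):

```python
import re

# Illustrative re-implementation of the reversible path-helper pair tested
# above; the real helpers are classmethods on JobServiceClient.
_TEMPLATE = (
    "projects/{project}/locations/{location}"
    "/batchPredictionJobs/{batch_prediction_job}"
)
_PATTERN = re.compile(
    r"^projects/(?P<project>.+?)/locations/(?P<location>.+?)"
    r"/batchPredictionJobs/(?P<batch_prediction_job>.+?)$"
)

def batch_prediction_job_path(project, location, batch_prediction_job):
    # Forward direction: fill the template with the resource IDs.
    return _TEMPLATE.format(
        project=project,
        location=location,
        batch_prediction_job=batch_prediction_job,
    )

def parse_batch_prediction_job_path(path):
    # Reverse direction: recover the IDs via named regex groups.
    m = _PATTERN.match(path)
    return m.groupdict() if m else {}
```

The round-trip property checked by `test_parse_batch_prediction_job_path` holds for any IDs that do not themselves contain `/`.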
def test_custom_job_path():
project = "cuttlefish"
location = "mussel"
custom_job = "winkle"
expected = "projects/{project}/locations/{location}/customJobs/{custom_job}".format(
project=project, location=location, custom_job=custom_job,
)
actual = JobServiceClient.custom_job_path(project, location, custom_job)
assert expected == actual
def test_parse_custom_job_path():
expected = {
"project": "nautilus",
"location": "scallop",
"custom_job": "abalone",
}
path = JobServiceClient.custom_job_path(**expected)
# Check that the path construction is reversible.
actual = JobServiceClient.parse_custom_job_path(path)
assert expected == actual
def test_data_labeling_job_path():
project = "squid"
location = "clam"
data_labeling_job = "whelk"
expected = "projects/{project}/locations/{location}/dataLabelingJobs/{data_labeling_job}".format(
project=project, location=location, data_labeling_job=data_labeling_job,
)
actual = JobServiceClient.data_labeling_job_path(
project, location, data_labeling_job
)
assert expected == actual
def test_parse_data_labeling_job_path():
expected = {
"project": "octopus",
"location": "oyster",
"data_labeling_job": "nudibranch",
}
path = JobServiceClient.data_labeling_job_path(**expected)
# Check that the path construction is reversible.
actual = JobServiceClient.parse_data_labeling_job_path(path)
assert expected == actual
def test_dataset_path():
project = "cuttlefish"
location = "mussel"
dataset = "winkle"
expected = "projects/{project}/locations/{location}/datasets/{dataset}".format(
project=project, location=location, dataset=dataset,
)
actual = JobServiceClient.dataset_path(project, location, dataset)
assert expected == actual
def test_parse_dataset_path():
expected = {
"project": "nautilus",
"location": "scallop",
"dataset": "abalone",
}
path = JobServiceClient.dataset_path(**expected)
# Check that the path construction is reversible.
actual = JobServiceClient.parse_dataset_path(path)
assert expected == actual
def test_endpoint_path():
project = "squid"
location = "clam"
endpoint = "whelk"
expected = "projects/{project}/locations/{location}/endpoints/{endpoint}".format(
project=project, location=location, endpoint=endpoint,
)
actual = JobServiceClient.endpoint_path(project, location, endpoint)
assert expected == actual
def test_parse_endpoint_path():
expected = {
"project": "octopus",
"location": "oyster",
"endpoint": "nudibranch",
}
path = JobServiceClient.endpoint_path(**expected)
# Check that the path construction is reversible.
actual = JobServiceClient.parse_endpoint_path(path)
assert expected == actual
def test_hyperparameter_tuning_job_path():
project = "cuttlefish"
location = "mussel"
hyperparameter_tuning_job = "winkle"
expected = "projects/{project}/locations/{location}/hyperparameterTuningJobs/{hyperparameter_tuning_job}".format(
project=project,
location=location,
hyperparameter_tuning_job=hyperparameter_tuning_job,
)
actual = JobServiceClient.hyperparameter_tuning_job_path(
project, location, hyperparameter_tuning_job
)
assert expected == actual
def test_parse_hyperparameter_tuning_job_path():
expected = {
"project": "nautilus",
"location": "scallop",
"hyperparameter_tuning_job": "abalone",
}
path = JobServiceClient.hyperparameter_tuning_job_path(**expected)
# Check that the path construction is reversible.
actual = JobServiceClient.parse_hyperparameter_tuning_job_path(path)
assert expected == actual
def test_model_path():
project = "squid"
location = "clam"
model = "whelk"
expected = "projects/{project}/locations/{location}/models/{model}".format(
project=project, location=location, model=model,
)
actual = JobServiceClient.model_path(project, location, model)
assert expected == actual
def test_parse_model_path():
expected = {
"project": "octopus",
"location": "oyster",
"model": "nudibranch",
}
path = JobServiceClient.model_path(**expected)
# Check that the path construction is reversible.
actual = JobServiceClient.parse_model_path(path)
assert expected == actual
def test_model_deployment_monitoring_job_path():
project = "cuttlefish"
location = "mussel"
model_deployment_monitoring_job = "winkle"
expected = "projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}".format(
project=project,
location=location,
model_deployment_monitoring_job=model_deployment_monitoring_job,
)
actual = JobServiceClient.model_deployment_monitoring_job_path(
project, location, model_deployment_monitoring_job
)
assert expected == actual
def test_parse_model_deployment_monitoring_job_path():
expected = {
"project": "nautilus",
"location": "scallop",
"model_deployment_monitoring_job": "abalone",
}
path = JobServiceClient.model_deployment_monitoring_job_path(**expected)
# Check that the path construction is reversible.
actual = JobServiceClient.parse_model_deployment_monitoring_job_path(path)
assert expected == actual
def test_network_path():
project = "squid"
network = "clam"
expected = "projects/{project}/global/networks/{network}".format(
project=project, network=network,
)
actual = JobServiceClient.network_path(project, network)
assert expected == actual
def test_parse_network_path():
expected = {
"project": "whelk",
"network": "octopus",
}
path = JobServiceClient.network_path(**expected)
# Check that the path construction is reversible.
actual = JobServiceClient.parse_network_path(path)
assert expected == actual
def test_tensorboard_path():
project = "oyster"
location = "nudibranch"
tensorboard = "cuttlefish"
expected = "projects/{project}/locations/{location}/tensorboards/{tensorboard}".format(
project=project, location=location, tensorboard=tensorboard,
)
actual = JobServiceClient.tensorboard_path(project, location, tensorboard)
assert expected == actual
def test_parse_tensorboard_path():
expected = {
"project": "mussel",
"location": "winkle",
"tensorboard": "nautilus",
}
path = JobServiceClient.tensorboard_path(**expected)
# Check that the path construction is reversible.
actual = JobServiceClient.parse_tensorboard_path(path)
assert expected == actual
def test_trial_path():
project = "scallop"
location = "abalone"
study = "squid"
trial = "clam"
expected = "projects/{project}/locations/{location}/studies/{study}/trials/{trial}".format(
project=project, location=location, study=study, trial=trial,
)
actual = JobServiceClient.trial_path(project, location, study, trial)
assert expected == actual
def test_parse_trial_path():
expected = {
"project": "whelk",
"location": "octopus",
"study": "oyster",
"trial": "nudibranch",
}
path = JobServiceClient.trial_path(**expected)
# Check that the path construction is reversible.
actual = JobServiceClient.parse_trial_path(path)
assert expected == actual
def test_common_billing_account_path():
billing_account = "cuttlefish"
expected = "billingAccounts/{billing_account}".format(
billing_account=billing_account,
)
actual = JobServiceClient.common_billing_account_path(billing_account)
assert expected == actual
def test_parse_common_billing_account_path():
expected = {
"billing_account": "mussel",
}
path = JobServiceClient.common_billing_account_path(**expected)
# Check that the path construction is reversible.
actual = JobServiceClient.parse_common_billing_account_path(path)
assert expected == actual
def test_common_folder_path():
folder = "winkle"
expected = "folders/{folder}".format(folder=folder,)
actual = JobServiceClient.common_folder_path(folder)
assert expected == actual
def test_parse_common_folder_path():
expected = {
"folder": "nautilus",
}
path = JobServiceClient.common_folder_path(**expected)
# Check that the path construction is reversible.
actual = JobServiceClient.parse_common_folder_path(path)
assert expected == actual
def test_common_organization_path():
organization = "scallop"
expected = "organizations/{organization}".format(organization=organization,)
actual = JobServiceClient.common_organization_path(organization)
assert expected == actual
def test_parse_common_organization_path():
expected = {
"organization": "abalone",
}
path = JobServiceClient.common_organization_path(**expected)
# Check that the path construction is reversible.
actual = JobServiceClient.parse_common_organization_path(path)
assert expected == actual
def test_common_project_path():
project = "squid"
expected = "projects/{project}".format(project=project,)
actual = JobServiceClient.common_project_path(project)
assert expected == actual
def test_parse_common_project_path():
expected = {
"project": "clam",
}
path = JobServiceClient.common_project_path(**expected)
# Check that the path construction is reversible.
actual = JobServiceClient.parse_common_project_path(path)
assert expected == actual
def test_common_location_path():
project = "whelk"
location = "octopus"
expected = "projects/{project}/locations/{location}".format(
project=project, location=location,
)
actual = JobServiceClient.common_location_path(project, location)
assert expected == actual
def test_parse_common_location_path():
expected = {
"project": "oyster",
"location": "nudibranch",
}
path = JobServiceClient.common_location_path(**expected)
# Check that the path construction is reversible.
actual = JobServiceClient.parse_common_location_path(path)
assert expected == actual
def test_client_withDEFAULT_CLIENT_INFO():
client_info = gapic_v1.client_info.ClientInfo()
with mock.patch.object(
transports.JobServiceTransport, "_prep_wrapped_messages"
) as prep:
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), client_info=client_info,
)
prep.assert_called_once_with(client_info)
with mock.patch.object(
transports.JobServiceTransport, "_prep_wrapped_messages"
) as prep:
transport_class = JobServiceClient.get_transport_class()
transport = transport_class(
credentials=ga_credentials.AnonymousCredentials(), client_info=client_info,
)
prep.assert_called_once_with(client_info)
@pytest.mark.asyncio
async def test_transport_close_async():
client = JobServiceAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc_asyncio",
)
with mock.patch.object(
type(getattr(client.transport, "grpc_channel")), "close"
) as close:
async with client:
close.assert_not_called()
close.assert_called_once()
def test_transport_close():
transports = {
"grpc": "_grpc_channel",
}
for transport, close_name in transports.items():
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport
)
with mock.patch.object(
type(getattr(client.transport, close_name)), "close"
) as close:
with client:
close.assert_not_called()
close.assert_called_once()
def test_client_ctx():
transports = [
"grpc",
]
for transport in transports:
client = JobServiceClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport
)
# Test client calls underlying transport.
with mock.patch.object(type(client.transport), "close") as close:
close.assert_not_called()
with client:
pass
close.assert_called()
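`test_client_ctx` above verifies that the client context manager closes its transport on exit. A stand-in sketch of that contract (stub classes that mirror the shape, not the real `JobServiceClient` API):

```python
# Minimal sketch of the context-manager behaviour asserted above: entering the
# client is a no-op, and exiting closes the underlying transport exactly once.
class StubTransport:
    def __init__(self):
        self.close_calls = 0

    def close(self):
        self.close_calls += 1

class StubClient:
    def __init__(self, transport):
        self.transport = transport

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.transport.close()
```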
# File: sdk/translation/azure-ai-translation-document/azure/ai/translation/document/_generated/operations/_document_translation_operations.py
# Repo: ankitarorabit/azure-sdk-for-python (MIT)
] | null | null | null | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
import datetime
from typing import TYPE_CHECKING
import warnings
from ..._polling import DocumentTranslationLROPollingMethod, DocumentTranslationPoller
from azure.core.exceptions import ClientAuthenticationError, HttpResponseError, ResourceExistsError, ResourceNotFoundError, map_error
from azure.core.paging import ItemPaged
from azure.core.pipeline import PipelineResponse
from azure.core.pipeline.transport import HttpRequest, HttpResponse
from azure.core.polling import NoPolling, PollingMethod
from .. import models as _models
if TYPE_CHECKING:
# pylint: disable=unused-import,ungrouped-imports
from typing import Any, Callable, Dict, Generic, Iterable, List, Optional, TypeVar, Union
T = TypeVar('T')
ClsType = Optional[Callable[[PipelineResponse[HttpRequest, HttpResponse], T, Dict[str, Any]], Any]]
class DocumentTranslationOperations(object):
"""DocumentTranslationOperations operations.
You should not instantiate this class directly. Instead, you should create a Client instance that
instantiates it for you and attaches it as an attribute.
:ivar models: Alias to model classes used in this operation group.
:type models: ~azure.ai.translation.document.models
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
"""
models = _models
def __init__(self, client, config, serializer, deserializer):
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self._config = config
def _start_translation_initial(
self,
inputs, # type: List["_models.BatchRequest"]
**kwargs # type: Any
):
# type: (...) -> None
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {
404: ResourceNotFoundError,
409: ResourceExistsError,
400: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
401: lambda response: ClientAuthenticationError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
429: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
500: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
503: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
}
error_map.update(kwargs.pop('error_map', {}))
_body = _models.StartTranslationDetails(inputs=inputs)
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self._start_translation_initial.metadata['url'] # type: ignore
path_format_arguments = {
'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(_body, 'StartTranslationDetails')
body_content_kwargs['content'] = body_content
request = self._client.post(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [202]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response)
response_headers = {}
response_headers['Operation-Location']=self._deserialize('str', response.headers.get('Operation-Location'))
if cls:
return cls(pipeline_response, None, response_headers)
_start_translation_initial.metadata = {'url': '/batches'} # type: ignore
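`_start_translation_initial` above maps HTTP status codes to exception factories so that the service's rich error model can be attached when an error response arrives. A toy version of that dispatch (the exception type and model values are placeholders, not the real `azure.core` types):

```python
# Toy version of the error_map dispatch used in _start_translation_initial;
# ApiError stands in for the azure.core exception hierarchy.
class ApiError(Exception):
    def __init__(self, status, model=None):
        super().__init__("HTTP %d" % status)
        self.status = status
        self.model = model

def map_error(status_code, error_map):
    # Look up a factory for the code; non-error codes simply pass through.
    factory = error_map.get(status_code)
    if factory is not None:
        raise factory(status_code)

error_map = {
    404: lambda status: ApiError(status, model="resource-not-found"),
    429: lambda status: ApiError(status, model="throttled"),
}
```

Using callables as values lets the deserialized error body be attached lazily, only when that status code actually occurs.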
def begin_start_translation(
self,
inputs, # type: List["_models.BatchRequest"]
**kwargs # type: Any
):
# type: (...) -> DocumentTranslationPoller[None]
"""Submit a document translation request to the Document Translation service.
Use this API to submit a bulk (batch) translation request to the Document Translation service.
Each request can contain multiple documents and must contain a source and destination container
for each document.
The prefix and suffix filter (if supplied) are used to filter folders. The prefix is applied to
the subpath after the container name.
Glossaries / Translation memory can be included in the request and are applied by the service
when the document is translated.
If the glossary is invalid or unreachable during translation, an error is indicated in the
document status.
If a file with the same name already exists at the destination, it will be overwritten. The
targetUrl for each target language must be unique.
:param inputs: The input list of documents or folders containing documents.
:type inputs: list[~azure.ai.translation.document.models.BatchRequest]
:keyword callable cls: A custom type or function that will be passed the direct response
:keyword str continuation_token: A continuation token to restart a poller from a saved state.
:keyword polling: By default, your polling method will be DocumentTranslationLROPollingMethod.
Pass in False for this operation to not poll, or pass in your own initialized polling object for a personal polling strategy.
:paramtype polling: bool or ~azure.core.polling.PollingMethod
:keyword int polling_interval: Default waiting time between two polls for LRO operations if no Retry-After header is present.
:return: An instance of DocumentTranslationPoller that returns either None or the result of cls(response)
:rtype: ~..._polling.DocumentTranslationPoller[None]
:raises ~azure.core.exceptions.HttpResponseError:
"""
polling = kwargs.pop('polling', True) # type: Union[bool, PollingMethod]
cls = kwargs.pop('cls', None) # type: ClsType[None]
lro_delay = kwargs.pop(
'polling_interval',
self._config.polling_interval
)
cont_token = kwargs.pop('continuation_token', None) # type: Optional[str]
if cont_token is None:
raw_result = self._start_translation_initial(
inputs=inputs,
cls=lambda x,y,z: x,
**kwargs
)
kwargs.pop('error_map', None)
kwargs.pop('content_type', None)
def get_long_running_output(pipeline_response):
if cls:
return cls(pipeline_response, None, {})
path_format_arguments = {
'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
}
if polling is True: polling_method = DocumentTranslationLROPollingMethod(lro_delay, lro_options={'final-state-via': 'location'}, path_format_arguments=path_format_arguments, **kwargs)
elif polling is False: polling_method = NoPolling()
else: polling_method = polling
if cont_token:
return DocumentTranslationPoller.from_continuation_token(
polling_method=polling_method,
continuation_token=cont_token,
client=self._client,
deserialization_callback=get_long_running_output
)
else:
return DocumentTranslationPoller(self._client, raw_result, get_long_running_output, polling_method)
begin_start_translation.metadata = {'url': '/batches'} # type: ignore
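`begin_start_translation` dispatches on the `polling` keyword: `True` selects the default LRO polling method, `False` disables polling, and any other value is treated as a caller-supplied polling object. A toy version of that dispatch (the classes are placeholders for `DocumentTranslationLROPollingMethod` and `azure.core.polling.NoPolling`):

```python
# Placeholder polling types standing in for the real azure.core.polling classes.
class DefaultLROPolling:
    pass

class NoPolling:
    pass

def select_polling_method(polling):
    # Mirrors the True / False / custom-object dispatch in begin_start_translation.
    if polling is True:
        return DefaultLROPolling()
    if polling is False:
        return NoPolling()
    return polling
```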
def get_translations_status(
self,
top=None, # type: Optional[int]
skip=0, # type: Optional[int]
maxpagesize=50, # type: Optional[int]
ids=None, # type: Optional[List[str]]
statuses=None, # type: Optional[List[str]]
created_date_time_utc_start=None, # type: Optional[datetime.datetime]
created_date_time_utc_end=None, # type: Optional[datetime.datetime]
order_by=None, # type: Optional[List[str]]
**kwargs # type: Any
):
# type: (...) -> Iterable["_models.TranslationsStatus"]
"""Returns a list of batch requests submitted and the status for each request.
Returns a list of batch requests submitted and the status for each request.
This list only contains batch requests submitted by the user (based on the resource).
If the number of requests exceeds our paging limit, server-side paging is used. Paginated
responses indicate a partial result and include a continuation token in the response.
The absence of a continuation token means that no additional pages are available.
$top, $skip and $maxpagesize query parameters can be used to specify a number of results to
return and an offset for the collection.
$top indicates the total number of records the user wants to be returned across all pages.
$skip indicates the number of records to skip from the list of batches based on the sorting
method specified. By default, we sort by descending start time.
$maxpagesize is the maximum items returned in a page. If more items are requested via $top (or
$top is not specified and there are more items to be returned), @nextLink will contain the link
to the next page.
$orderBy query parameter can be used to sort the returned list (ex "$orderBy=createdDateTimeUtc
asc" or "$orderBy=createdDateTimeUtc desc").
The default sorting is descending by createdDateTimeUtc.
Some query parameters can be used to filter the returned list (ex:
"status=Succeeded,Cancelled" will only return succeeded and cancelled operations).
createdDateTimeUtcStart and createdDateTimeUtcEnd can be used combined or separately to specify
a range of datetime to filter the returned list by.
The supported filtering query parameters are (status, ids, createdDateTimeUtcStart,
createdDateTimeUtcEnd).
The server honors the values specified by the client. However, clients must be prepared to
handle responses that contain a different page size or contain a continuation token.
When both $top and $skip are included, the server should first apply $skip and then $top on the
collection.
Note: If the server can't honor $top and/or $skip, the server must return an error to the
client informing about it instead of just ignoring the query options.
This reduces the risk of the client making assumptions about the data returned.
:param top: $top indicates the total number of records the user wants to be returned across all
pages.
Clients MAY use $top and $skip query parameters to specify a number of results to return and
an offset into the collection.
When both $top and $skip are given by a client, the server SHOULD first apply $skip and then
$top on the collection.
Note: If the server can't honor $top and/or $skip, the server MUST return an error to the
client informing about it instead of just ignoring the query options.
:type top: int
:param skip: $skip indicates the number of records to skip from the list of records held by the
server based on the sorting method specified. By default, we sort by descending start time.
Clients MAY use $top and $skip query parameters to specify a number of results to return and
an offset into the collection.
When both $top and $skip are given by a client, the server SHOULD first apply $skip and then
$top on the collection.
Note: If the server can't honor $top and/or $skip, the server MUST return an error to the
client informing about it instead of just ignoring the query options.
:type skip: int
:param maxpagesize: $maxpagesize is the maximum items returned in a page. If more items are
requested via $top (or $top is not specified and there are more items to be returned),
@nextLink will contain the link to the next page.
Clients MAY request server-driven paging with a specific page size by specifying a
$maxpagesize preference. The server SHOULD honor this preference if the specified page size is
smaller than the server's default page size.
:type maxpagesize: int
:param ids: Ids to use in filtering.
:type ids: list[str]
:param statuses: Statuses to use in filtering.
:type statuses: list[str]
:param created_date_time_utc_start: the start datetime to get items after.
:type created_date_time_utc_start: ~datetime.datetime
:param created_date_time_utc_end: the end datetime to get items before.
:type created_date_time_utc_end: ~datetime.datetime
:param order_by: the sorting query for the collection (ex: 'CreatedDateTimeUtc asc',
'CreatedDateTimeUtc desc').
:type order_by: list[str]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either TranslationsStatus or the result of cls(response)
:rtype: ~azure.core.paging.ItemPaged[~azure.ai.translation.document.models.TranslationsStatus]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.TranslationsStatus"]
error_map = {
404: ResourceNotFoundError,
409: ResourceExistsError,
400: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
401: lambda response: ClientAuthenticationError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
429: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
500: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
503: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.get_translations_status.metadata['url'] # type: ignore
path_format_arguments = {
'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if top is not None:
query_parameters['$top'] = self._serialize.query("top", top, 'int', maximum=2147483647, minimum=0)
if skip is not None:
query_parameters['$skip'] = self._serialize.query("skip", skip, 'int', maximum=2147483647, minimum=0)
if maxpagesize is not None:
query_parameters['$maxpagesize'] = self._serialize.query("maxpagesize", maxpagesize, 'int', maximum=100, minimum=1)
if ids is not None:
query_parameters['ids'] = self._serialize.query("ids", ids, '[str]', div=',')
if statuses is not None:
query_parameters['statuses'] = self._serialize.query("statuses", statuses, '[str]', div=',')
if created_date_time_utc_start is not None:
query_parameters['createdDateTimeUtcStart'] = self._serialize.query("created_date_time_utc_start", created_date_time_utc_start, 'iso-8601')
if created_date_time_utc_end is not None:
query_parameters['createdDateTimeUtcEnd'] = self._serialize.query("created_date_time_utc_end", created_date_time_utc_end, 'iso-8601')
if order_by is not None:
query_parameters['$orderBy'] = self._serialize.query("order_by", order_by, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
path_format_arguments = {
'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
}
url = self._client.format_url(url, **path_format_arguments)
request = self._client.get(url, query_parameters, header_parameters)
return request
def extract_data(pipeline_response):
deserialized = self._deserialize('TranslationsStatus', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.next_link or None, iter(list_of_elem)
def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response)
return pipeline_response
return ItemPaged(
get_next, extract_data
)
get_translations_status.metadata = {'url': '/batches'} # type: ignore
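get_translations_status wires prepare_request, extract_data, and get_next into an ItemPaged iterator: extract_data returns a (continuation_token, items) pair and iteration stops once the token is falsy. That contract can be sketched without the HTTP pipeline (the page data below is made up for illustration):

```python
# Standalone sketch of the ItemPaged contract used above: extract_data
# yields (continuation_token, items) and paging stops when the token is None.
PAGES = {
    None: {"value": [1, 2], "next_link": "page2"},
    "page2": {"value": [3], "next_link": None},
}

def get_next(next_link=None):
    # Stands in for prepare_request + pipeline.run: fetch one page.
    return PAGES[next_link]

def extract_data(page):
    # Mirrors the generated extract_data: (next token, iterator of items).
    return page["next_link"] or None, iter(page["value"])

def iter_items(get_next, extract_data):
    token, done = None, False
    while not done:
        token, items = extract_data(get_next(token))
        for item in items:
            yield item
        done = token is None

print(list(iter_items(get_next, extract_data)))  # [1, 2, 3]
```

In the real operation, the first call builds the URL from metadata and query parameters, while follow-up calls simply GET the @nextLink returned by the service.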
def get_document_status(
self,
id, # type: str
document_id, # type: str
**kwargs # type: Any
):
# type: (...) -> "_models.DocumentStatus"
"""Returns the status for a specific document.
Returns the translation status for a specific document based on the request Id and document Id.
:param id: Format - uuid. The batch id.
:type id: str
:param document_id: Format - uuid. The document id.
:type document_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: DocumentStatus, or the result of cls(response)
:rtype: ~azure.ai.translation.document.models.DocumentStatus
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.DocumentStatus"]
error_map = {
409: ResourceExistsError,
401: lambda response: ClientAuthenticationError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
404: lambda response: ResourceNotFoundError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
429: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
500: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
503: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
# Construct URL
url = self.get_document_status.metadata['url'] # type: ignore
path_format_arguments = {
'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
'id': self._serialize.url("id", id, 'str'),
'documentId': self._serialize.url("document_id", document_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.get(url, query_parameters, header_parameters)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response)
response_headers = {}
response_headers['Retry-After']=self._deserialize('int', response.headers.get('Retry-After'))
response_headers['ETag']=self._deserialize('str', response.headers.get('ETag'))
deserialized = self._deserialize('DocumentStatus', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, response_headers)
return deserialized
get_document_status.metadata = {'url': '/batches/{id}/documents/{documentId}'} # type: ignore
def get_translation_status(
self,
id, # type: str
**kwargs # type: Any
):
# type: (...) -> "_models.TranslationStatus"
"""Returns the status for a document translation request.
Returns the status for a document translation request.
The status includes the overall request status, as well as the status for documents that are
being translated as part of that request.
:param id: Format - uuid. The operation id.
:type id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: TranslationStatus, or the result of cls(response)
:rtype: ~azure.ai.translation.document.models.TranslationStatus
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.TranslationStatus"]
error_map = {
409: ResourceExistsError,
401: lambda response: ClientAuthenticationError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
404: lambda response: ResourceNotFoundError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
429: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
500: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
503: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
# Construct URL
url = self.get_translation_status.metadata['url'] # type: ignore
path_format_arguments = {
'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
'id': self._serialize.url("id", id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.get(url, query_parameters, header_parameters)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response)
response_headers = {}
response_headers['Retry-After']=self._deserialize('int', response.headers.get('Retry-After'))
response_headers['ETag']=self._deserialize('str', response.headers.get('ETag'))
deserialized = self._deserialize('TranslationStatus', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, response_headers)
return deserialized
get_translation_status.metadata = {'url': '/batches/{id}'} # type: ignore
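Both status operations above deserialize the Retry-After response header as an int. A polling client would typically prefer that service-provided hint over its configured default interval; a toy helper showing that fallback (not part of the generated client):

```python
def next_poll_delay(headers, default_interval=30):
    # Prefer the service-provided Retry-After value (in seconds) over the
    # configured default polling interval; fall back when it is absent.
    retry_after = headers.get("Retry-After")
    if retry_after is None:
        return default_interval
    return int(retry_after)

print(next_poll_delay({"Retry-After": "5"}))  # 5
print(next_poll_delay({}))                    # 30
```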
def cancel_translation(
self,
id, # type: str
**kwargs # type: Any
):
# type: (...) -> "_models.TranslationStatus"
"""Cancel a currently processing or queued translation.
Cancel a currently processing or queued translation.
A translation will not be cancelled if it is already completed, failed, or cancelling; a bad
request will be returned in that case.
All documents that have completed translation will not be cancelled and will be charged.
All pending documents will be cancelled if possible.
:param id: Format - uuid. The operation-id.
:type id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: TranslationStatus, or the result of cls(response)
:rtype: ~azure.ai.translation.document.models.TranslationStatus
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.TranslationStatus"]
error_map = {
409: ResourceExistsError,
401: lambda response: ClientAuthenticationError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
404: lambda response: ResourceNotFoundError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
429: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
500: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
503: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
# Construct URL
url = self.cancel_translation.metadata['url'] # type: ignore
path_format_arguments = {
'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
'id': self._serialize.url("id", id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.delete(url, query_parameters, header_parameters)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response)
deserialized = self._deserialize('TranslationStatus', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
cancel_translation.metadata = {'url': '/batches/{id}'} # type: ignore
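The list-valued filters in the status operations (ids, statuses, order_by) are serialized with the '[str]' type and div=',', i.e. one query parameter whose value is the comma-joined list. A standalone sketch of that encoding (the helper name is illustrative):

```python
def serialize_list_query(name, values, div=","):
    # Mirrors the '[str]' + div=',' serialization used above: a single
    # query parameter whose value is the div-joined list of items.
    return {name: div.join(str(v) for v in values)}

print(serialize_list_query("statuses", ["Succeeded", "Cancelled"]))
# {'statuses': 'Succeeded,Cancelled'}
```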
def get_documents_status(
self,
id, # type: str
top=None, # type: Optional[int]
skip=0, # type: Optional[int]
maxpagesize=50, # type: Optional[int]
ids=None, # type: Optional[List[str]]
statuses=None, # type: Optional[List[str]]
created_date_time_utc_start=None, # type: Optional[datetime.datetime]
created_date_time_utc_end=None, # type: Optional[datetime.datetime]
order_by=None, # type: Optional[List[str]]
**kwargs # type: Any
):
# type: (...) -> Iterable["_models.DocumentsStatus"]
"""Returns the status for all documents in a batch document translation request.
Returns the status for all documents in a batch document translation request.
If the number of documents in the response exceeds our paging limit, server-side paging is
used.
Paginated responses indicate a partial result and include a continuation token in the response.
The absence of a continuation token means that no additional pages are available.
$top, $skip and $maxpagesize query parameters can be used to specify a number of results to
return and an offset for the collection.
$top indicates the total number of records the user wants to be returned across all pages.
$skip indicates the number of records to skip from the list of document status held by the
server based on the sorting method specified. By default, we sort by descending start time.
$maxpagesize is the maximum items returned in a page. If more items are requested via $top (or
$top is not specified and there are more items to be returned), @nextLink will contain the link
to the next page.
$orderBy query parameter can be used to sort the returned list (ex "$orderBy=createdDateTimeUtc
asc" or "$orderBy=createdDateTimeUtc desc").
The default sorting is descending by createdDateTimeUtc.
Some query parameters can be used to filter the returned list (ex:
"status=Succeeded,Cancelled" will only return succeeded and cancelled documents).
createdDateTimeUtcStart and createdDateTimeUtcEnd can be used combined or separately to specify
a range of datetime to filter the returned list by.
The supported filtering query parameters are (status, ids, createdDateTimeUtcStart,
createdDateTimeUtcEnd).
When both $top and $skip are included, the server should first apply $skip and then $top on the
collection.
Note: If the server can't honor $top and/or $skip, the server must return an error to the
client informing about it instead of just ignoring the query options.
This reduces the risk of the client making assumptions about the data returned.
:param id: Format - uuid. The operation id.
:type id: str
:param top: $top indicates the total number of records the user wants to be returned across all
pages.
Clients MAY use $top and $skip query parameters to specify a number of results to return and
an offset into the collection.
When both $top and $skip are given by a client, the server SHOULD first apply $skip and then
$top on the collection.
Note: If the server can't honor $top and/or $skip, the server MUST return an error to the
client informing about it instead of just ignoring the query options.
:type top: int
:param skip: $skip indicates the number of records to skip from the list of records held by the
server based on the sorting method specified. By default, we sort by descending start time.
Clients MAY use $top and $skip query parameters to specify a number of results to return and
an offset into the collection.
When both $top and $skip are given by a client, the server SHOULD first apply $skip and then
$top on the collection.
Note: If the server can't honor $top and/or $skip, the server MUST return an error to the
client informing about it instead of just ignoring the query options.
:type skip: int
:param maxpagesize: $maxpagesize is the maximum items returned in a page. If more items are
requested via $top (or $top is not specified and there are more items to be returned),
@nextLink will contain the link to the next page.
Clients MAY request server-driven paging with a specific page size by specifying a
$maxpagesize preference. The server SHOULD honor this preference if the specified page size is
smaller than the server's default page size.
:type maxpagesize: int
:param ids: Ids to use in filtering.
:type ids: list[str]
:param statuses: Statuses to use in filtering.
:type statuses: list[str]
:param created_date_time_utc_start: the start datetime to get items after.
:type created_date_time_utc_start: ~datetime.datetime
:param created_date_time_utc_end: the end datetime to get items before.
:type created_date_time_utc_end: ~datetime.datetime
:param order_by: the sorting query for the collection (ex: 'CreatedDateTimeUtc asc',
'CreatedDateTimeUtc desc').
:type order_by: list[str]
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either DocumentsStatus or the result of cls(response)
:rtype: ~azure.core.paging.ItemPaged[~azure.ai.translation.document.models.DocumentsStatus]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.DocumentsStatus"]
error_map = {
409: ResourceExistsError,
400: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
401: lambda response: ClientAuthenticationError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
404: lambda response: ResourceNotFoundError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
429: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
500: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
503: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.get_documents_status.metadata['url'] # type: ignore
path_format_arguments = {
'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
'id': self._serialize.url("id", id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
if top is not None:
query_parameters['$top'] = self._serialize.query("top", top, 'int', maximum=2147483647, minimum=0)
if skip is not None:
query_parameters['$skip'] = self._serialize.query("skip", skip, 'int', maximum=2147483647, minimum=0)
if maxpagesize is not None:
query_parameters['$maxpagesize'] = self._serialize.query("maxpagesize", maxpagesize, 'int', maximum=100, minimum=1)
if ids is not None:
query_parameters['ids'] = self._serialize.query("ids", ids, '[str]', div=',')
if statuses is not None:
query_parameters['statuses'] = self._serialize.query("statuses", statuses, '[str]', div=',')
if created_date_time_utc_start is not None:
query_parameters['createdDateTimeUtcStart'] = self._serialize.query("created_date_time_utc_start", created_date_time_utc_start, 'iso-8601')
if created_date_time_utc_end is not None:
query_parameters['createdDateTimeUtcEnd'] = self._serialize.query("created_date_time_utc_end", created_date_time_utc_end, 'iso-8601')
if order_by is not None:
query_parameters['$orderBy'] = self._serialize.query("order_by", order_by, '[str]', div=',')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
path_format_arguments = {
'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
'id': self._serialize.url("id", id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
request = self._client.get(url, query_parameters, header_parameters)
return request
def extract_data(pipeline_response):
deserialized = self._deserialize('DocumentsStatus', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.next_link or None, iter(list_of_elem)
def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response)
return pipeline_response
return ItemPaged(
get_next, extract_data
)
get_documents_status.metadata = {'url': '/batches/{id}/documents'} # type: ignore
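The paging docstrings above repeatedly state the server contract: when both $top and $skip are supplied, $skip is applied first, then $top. On a local list that ordering looks like this (a toy illustration, not server code):

```python
def apply_paging(records, top=None, skip=0):
    # Documented server contract: apply $skip first, then $top.
    window = records[skip:]
    if top is not None:
        window = window[:top]
    return window

print(apply_paging(list(range(10)), top=3, skip=4))  # [4, 5, 6]
```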
def get_supported_document_formats(
self,
**kwargs # type: Any
):
# type: (...) -> "_models.SupportedFileFormats"
"""Returns a list of supported document formats.
The list of document formats supported by the Document Translation service.
The list includes the common file extension, as well as the content-type if using the upload
API.
:keyword callable cls: A custom type or function that will be passed the direct response
:return: SupportedFileFormats, or the result of cls(response)
:rtype: ~azure.ai.translation.document.models.SupportedFileFormats
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.SupportedFileFormats"]
error_map = {
401: ClientAuthenticationError,
404: ResourceNotFoundError,
409: ResourceExistsError,
429: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
500: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
503: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
# Construct URL
url = self.get_supported_document_formats.metadata['url'] # type: ignore
path_format_arguments = {
'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.get(url, query_parameters, header_parameters)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response)
response_headers = {}
response_headers['Retry-After']=self._deserialize('int', response.headers.get('Retry-After'))
deserialized = self._deserialize('SupportedFileFormats', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, response_headers)
return deserialized
get_supported_document_formats.metadata = {'url': '/documents/formats'} # type: ignore
def get_supported_glossary_formats(
self,
**kwargs # type: Any
):
# type: (...) -> "_models.SupportedFileFormats"
"""Returns the list of supported glossary formats.
The list of glossary formats supported by the Document Translation service.
The list includes the common file extension used.
:keyword callable cls: A custom type or function that will be passed the direct response
:return: SupportedFileFormats, or the result of cls(response)
:rtype: ~azure.ai.translation.document.models.SupportedFileFormats
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.SupportedFileFormats"]
error_map = {
401: ClientAuthenticationError,
404: ResourceNotFoundError,
409: ResourceExistsError,
429: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
500: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
503: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
}
error_map.update(kwargs.pop('error_map', {}))
accept = "application/json"
# Construct URL
url = self.get_supported_glossary_formats.metadata['url'] # type: ignore
path_format_arguments = {
'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
# Construct headers
header_parameters = {} # type: Dict[str, Any]
        header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

        request = self._client.get(url, query_parameters, header_parameters)
        pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
        response = pipeline_response.http_response

        if response.status_code not in [200]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            raise HttpResponseError(response=response)

        response_headers = {}
        response_headers['Retry-After'] = self._deserialize('int', response.headers.get('Retry-After'))
        deserialized = self._deserialize('SupportedFileFormats', pipeline_response)

        if cls:
            return cls(pipeline_response, deserialized, response_headers)

        return deserialized
    get_supported_glossary_formats.metadata = {'url': '/glossaries/formats'}  # type: ignore

    def get_supported_storage_sources(
        self,
        **kwargs  # type: Any
    ):
        # type: (...) -> "_models.SupportedStorageSources"
        """Returns a list of supported storage sources.

        Returns a list of storage sources/options supported by the Document Translation service.

        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: SupportedStorageSources, or the result of cls(response)
        :rtype: ~azure.ai.translation.document.models.SupportedStorageSources
        :raises: ~azure.core.exceptions.HttpResponseError
        """
        cls = kwargs.pop('cls', None)  # type: ClsType["_models.SupportedStorageSources"]
        error_map = {
            401: ClientAuthenticationError,
            404: ResourceNotFoundError,
            409: ResourceExistsError,
            429: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
            500: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
            503: lambda response: HttpResponseError(response=response, model=self._deserialize(_models.TranslationErrorResponse, response)),
        }
        error_map.update(kwargs.pop('error_map', {}))

        accept = "application/json"

        # Construct URL
        url = self.get_supported_storage_sources.metadata['url']  # type: ignore
        path_format_arguments = {
            'endpoint': self._serialize.url("self._config.endpoint", self._config.endpoint, 'str', skip_quote=True),
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}  # type: Dict[str, Any]

        # Construct headers
        header_parameters = {}  # type: Dict[str, Any]
        header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')

        request = self._client.get(url, query_parameters, header_parameters)
        pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
        response = pipeline_response.http_response

        if response.status_code not in [200]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            raise HttpResponseError(response=response)

        response_headers = {}
        response_headers['Retry-After'] = self._deserialize('int', response.headers.get('Retry-After'))
        deserialized = self._deserialize('SupportedStorageSources', pipeline_response)

        if cls:
            return cls(pipeline_response, deserialized, response_headers)

        return deserialized
    get_supported_storage_sources.metadata = {'url': '/storagesources'}  # type: ignore
#!/usr/bin/python3
# ******************************************************************************
# Copyright (c) Huawei Technologies Co., Ltd. 2020-2020. All rights reserved.
# licensed under the Mulan PSL v2.
# You can use this software according to the terms and conditions of the Mulan PSL v2.
# You may obtain a copy of Mulan PSL v2 at:
#     http://license.coscl.org.cn/MulanPSL2
# THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, MERCHANTABILITY OR FIT FOR A PARTICULAR
# PURPOSE.
# See the Mulan PSL v2 for more details.
# ******************************************************************************/
# -*- coding:utf-8 -*-
"""
TestCheck
"""
import os

from requests import RequestException

from test.base.basetest import TestMixin
from javcra.cli.commands.checkpart import CheckCommand

MOCK_DATA_FILE = os.path.join(os.path.abspath(os.path.dirname(__file__)), "mock_data")
EXPECT_DATA_FILE = os.path.join(os.path.abspath(os.path.dirname(__file__)), "expected_data")


class TestCheck(TestMixin):
    """
    class for test TestCheck
    """
    cmd_class = CheckCommand
    def test_check_status_success(self):
        """
        test check status success
        """
        self.expect_str = """
[INFO] successfully update status in check part.
[INFO] All issues are completed, the next step is sending repo to test platform.
[INFO] successfully to send repo info.
"""
        self.command_params = ["--giteeid=Mary", "--token=example", "--type=status", "--jenkinsuser=mary",
                               "--jenkinskey=marykey", "--ak=forexample", "--sk=forexample", "I40769"]
        resp = self.make_expect_data(200, 'checkpart.txt')
        self.mock_subprocess_check_output(return_value=b"published-everything-src")
        mock_bugfix_r = self.make_need_content('mock_bugfix_issue.txt', MOCK_DATA_FILE)
        mock_install_r = self.make_need_content('mock_install_issue.txt', MOCK_DATA_FILE)
        mock_check_r = self.make_need_content('check_status_success.txt', MOCK_DATA_FILE)
        mock_post_data = self.make_need_content('mock_post_data.txt', MOCK_DATA_FILE)
        mock_repo_list_data = self.make_need_content('repo_list_data.txt', MOCK_DATA_FILE)
        self.mock_requests_post(return_value=mock_post_data)
        self.mock_request(
            side_effect=[resp, resp, resp, resp, resp, mock_install_r, resp, resp, resp, mock_bugfix_r, mock_check_r,
                         resp, resp, mock_install_r, mock_bugfix_r, resp, resp, resp, resp, mock_repo_list_data, resp])
        self.assert_result()

    def test_check_status_failed(self):
        """
        test check status failed
        """
        self.expect_str = """
during the operation status, a failure occurred, and the cause of the error was failed to update status in check part.
"""
        self.command_params = ["--giteeid=Mary", "--token=example", "--type=status", "--jenkinsuser=mary",
                               "--jenkinskey=marykey", "--ak=forexample", "--sk=forexample", "I40769"]
        resp = self.make_expect_data(200, 'checkpart.txt')
        self.mock_request(
            side_effect=[resp, resp, RequestException])
        self.assert_result()

    def test_count_issue_status_failed(self):
        """
        test count issue status failed
        """
        self.expect_str = """
[INFO] successfully update status in check part.
during the operation status, a failure occurred, and the cause of the error was the status of the issue is not all completed, please complete first
"""
        self.command_params = ["--giteeid=Mary", "--token=example", "--type=status", "--jenkinsuser=mary",
                               "--jenkinskey=marykey", "--ak=forexample", "--sk=forexample", "I40769"]
        resp = self.make_expect_data(200, 'checkpart.txt')
        mock_bugfix_r = self.make_need_content('mock_bugfix_issue.txt', MOCK_DATA_FILE)
        mock_install_r = self.make_need_content('mock_install_issue.txt', MOCK_DATA_FILE)
        mock_abnormal_r = self.make_need_content('mock_abnormal_issue.txt', MOCK_DATA_FILE)
        mock_check_r = self.make_need_content('check_status_success.txt', MOCK_DATA_FILE)
        self.mock_request(
            side_effect=[resp, resp, resp, resp, resp, mock_install_r, resp, resp, resp, mock_bugfix_r, mock_check_r,
                         resp, resp, RequestException, mock_abnormal_r])
        self.assert_result()
    def test_send_repo_info_requests_post_failed(self):
        """
        test send repo info requests post failed
        """
        self.expect_str = """
[INFO] successfully update status in check part.
[INFO] All issues are completed, the next step is sending repo to test platform.
[ERROR] failed to send repo info.
"""
        self.command_params = ["--giteeid=Mary", "--token=example", "--type=status", "--jenkinsuser=mary",
                               "--jenkinskey=marykey", "--ak=forexample", "--sk=forexample", "I40769"]
        resp = self.make_expect_data(200, 'checkpart.txt')
        self.mock_subprocess_check_output(return_value=b"published-everything-src")
        mock_bugfix_r = self.make_need_content('mock_bugfix_issue.txt', MOCK_DATA_FILE)
        mock_install_r = self.make_need_content('mock_install_issue.txt', MOCK_DATA_FILE)
        mock_check_r = self.make_need_content('check_status_success.txt', MOCK_DATA_FILE)
        mock_repo_list_data = self.make_need_content('repo_list_data.txt', MOCK_DATA_FILE)
        self.mock_requests_post(return_value=None)
        self.mock_request(
            side_effect=[resp, resp, resp, resp, resp, mock_install_r, resp, resp, resp, mock_bugfix_r, mock_check_r,
                         resp, resp, mock_install_r, mock_bugfix_r, resp, resp, resp, resp, mock_repo_list_data, resp])
        self.assert_result()

    def test_send_repo_info_request_exception(self):
        """
        test send repo info request exception
        """
        self.expect_str = """
[INFO] successfully update status in check part.
[INFO] All issues are completed, the next step is sending repo to test platform.
[ERROR] failed to send repo info.
"""
        self.command_params = ["--giteeid=Mary", "--token=example", "--type=status", "--jenkinsuser=mary",
                               "--jenkinskey=marykey", "--ak=forexample", "--sk=forexample", "I40769"]
        resp = self.make_expect_data(200, 'checkpart.txt')
        self.mock_subprocess_check_output(return_value=b"published-everything-src")
        mock_bugfix_r = self.make_need_content('mock_bugfix_issue.txt', MOCK_DATA_FILE)
        mock_install_r = self.make_need_content('mock_install_issue.txt', MOCK_DATA_FILE)
        mock_check_r = self.make_need_content('check_status_success.txt', MOCK_DATA_FILE)
        mock_repo_list_data = self.make_need_content('repo_list_data.txt', MOCK_DATA_FILE)
        self.mock_requests_post(side_effect=[RequestException])
        self.mock_request(
            side_effect=[resp, resp, resp, resp, resp, mock_install_r, resp, resp, resp, mock_bugfix_r, mock_check_r,
                         resp, resp, mock_install_r, mock_bugfix_r, resp, resp, resp, resp, mock_repo_list_data, resp])
        self.assert_result()

    def test_send_repo_info_error_code_400(self):
        """
        test send repo info error code 400
        """
        self.expect_str = """
[INFO] successfully update status in check part.
[INFO] All issues are completed, the next step is sending repo to test platform.
[ERROR] failed to send repo info.
"""
        self.command_params = ["--giteeid=Mary", "--token=example", "--type=status", "--jenkinsuser=mary",
                               "--jenkinskey=marykey", "--ak=forexample", "--sk=forexample", "I40769"]
        resp = self.make_expect_data(200, 'checkpart.txt')
        self.mock_subprocess_check_output(return_value=b"published-everything-src")
        mock_bugfix_r = self.make_need_content('mock_bugfix_issue.txt', MOCK_DATA_FILE)
        mock_install_r = self.make_need_content('mock_install_issue.txt', MOCK_DATA_FILE)
        mock_check_r = self.make_need_content('check_status_success.txt', MOCK_DATA_FILE)
        mock_post_data = self.make_need_content('mock_error_post_data.txt', MOCK_DATA_FILE)
        mock_repo_list_data = self.make_need_content('repo_list_data.txt', MOCK_DATA_FILE)
        self.mock_requests_post(return_value=mock_post_data)
        self.mock_request(
            side_effect=[resp, resp, resp, resp, resp, mock_install_r, resp, resp, resp, mock_bugfix_r, mock_check_r,
                         resp, resp, mock_install_r, mock_bugfix_r, resp, resp, resp, resp, mock_repo_list_data, resp])
        self.assert_result()
    def test_send_repo_info_request_repo_url_failed(self):
        """
        test send repo info request repo url failed
        """
        self.expect_str = """
[INFO] successfully update status in check part.
[INFO] All issues are completed, the next step is sending repo to test platform.
[ERROR] failed to send repo info.
"""
        self.command_params = ["--giteeid=Mary", "--token=example", "--type=status", "--jenkinsuser=mary",
                               "--jenkinskey=marykey", "--ak=forexample", "--sk=forexample", "I40769"]
        resp = self.make_expect_data(200, 'checkpart.txt')
        self.mock_subprocess_check_output(return_value=b"published-everything-src")
        mock_bugfix_r = self.make_need_content('mock_bugfix_issue.txt', MOCK_DATA_FILE)
        mock_install_r = self.make_need_content('mock_install_issue.txt', MOCK_DATA_FILE)
        mock_check_r = self.make_need_content('check_status_success.txt', MOCK_DATA_FILE)
        self.mock_request(
            side_effect=[resp, resp, resp, resp, resp, mock_install_r, resp, resp, resp, mock_bugfix_r, mock_check_r,
                         resp, resp, mock_install_r, mock_bugfix_r, resp, resp, resp, resp, RequestException, resp])
        self.assert_result()

    def test_send_repo_info_get_update_list_failed(self):
        """
        test send repo info get update list failed
        """
        self.expect_str = """
[INFO] successfully update status in check part.
[INFO] All issues are completed, the next step is sending repo to test platform.
[ERROR] failed to send repo info.
"""
        self.command_params = ["--giteeid=Mary", "--token=example", "--type=status", "--jenkinsuser=mary",
                               "--jenkinskey=marykey", "--ak=forexample", "--sk=forexample", "I40769"]
        resp = self.make_expect_data(200, 'checkpart.txt')
        self.mock_subprocess_check_output(return_value=b"published-everything-src")
        mock_bugfix_r = self.make_need_content('mock_bugfix_issue.txt', MOCK_DATA_FILE)
        mock_install_r = self.make_need_content('mock_install_issue.txt', MOCK_DATA_FILE)
        mock_check_r = self.make_need_content('check_status_success.txt', MOCK_DATA_FILE)
        mock_repo_list_data = self.make_need_content('repo_list_data.txt', MOCK_DATA_FILE)
        self.mock_request(
            side_effect=[resp, resp, resp, resp, resp, mock_install_r, resp, resp, resp, mock_bugfix_r, mock_check_r,
                         resp, resp, mock_install_r, mock_bugfix_r, resp, resp, resp, resp, mock_repo_list_data,
                         RequestException])
        self.assert_result()

    def test_block_has_no_related_issues(self):
        """
        test block has no related issues
        """
        self.expect_str = """
[INFO] successfully update status in check part.
[INFO] All issues are completed, the next step is sending repo to test platform.
[ERROR] failed to send repo info.
"""
        self.command_params = ["--giteeid=Mary", "--token=example", "--type=status", "--jenkinsuser=mary",
                               "--jenkinskey=marykey", "--ak=forexample", "--sk=forexample", "I40769"]
        resp = self.make_expect_data(200, 'checkpart.txt')
        self.mock_subprocess_check_output(return_value=b"published-everything-src")
        mock_no_related_issues_r = self.make_need_content('mock_no_related_issues.txt', MOCK_DATA_FILE)
        self.mock_request(
            side_effect=[resp, resp, mock_no_related_issues_r, mock_no_related_issues_r, mock_no_related_issues_r,
                         RequestException])
        self.assert_result()
    def test_people_review_success(self):
        """
        test people review success
        """
        self.expect_str = """
[INFO] successfully operate test in check part.
"""
        self.command_params = ["--giteeid=Mary", "--token=example", "--type=test", "--jenkinsuser=mary",
                               "--jenkinskey=marykey", "--ak=forexample", "--sk=forexample", "I40769"]
        resp = self.make_expect_data(200, 'checkpart.txt')
        mock_comment_r = self.make_need_content('mock_issue_comment.txt', MOCK_DATA_FILE)
        self.mock_request(side_effect=[resp, resp, resp, mock_comment_r])
        self.assert_result()

    def test_create_issue_comment_failed(self):
        """
        test create issue comment failed
        """
        self.expect_str = """
[ERROR] failed to operate test in check part.
"""
        self.command_params = ["--giteeid=Mary", "--token=example", "--type=test", "--jenkinsuser=mary",
                               "--jenkinskey=marykey", "--ak=forexample", "--sk=forexample", "I40769"]
        resp = self.make_expect_data(200, 'checkpart.txt')
        self.mock_request(side_effect=[resp, resp, resp, RequestException])
        self.assert_result()

    def test_empty_related_personnel_information(self):
        """
        test empty related personnel information
        """
        self.expect_str = """
[ERROR] failed to operate test in check part.
"""
        self.command_params = ["--giteeid=Mary", "--token=example", "--type=test", "--jenkinsuser=mary",
                               "--jenkinskey=marykey", "--ak=forexample", "--sk=forexample", "I40769"]
        resp = self.make_expect_data(200, 'checkpart.txt')
        mock_empty_related_personnel_r = self.make_need_content('mock_empty_related_personnel.txt', MOCK_DATA_FILE)
        self.mock_request(side_effect=[resp, resp, mock_empty_related_personnel_r])
        self.assert_result()

    def test_parameter_validation_failed(self):
        """
        test parameter validation failed
        """
        self.expect_str = """
Parameter validation failed
"""
        self.command_params = ["--giteeid=Mary", "--token=example", "--type=status", "--jenkinsuser=mary",
                               "--jenkinskey=marykey", "--ak=forexample", "--sk=forexample", ""]
        self.assert_result()

    def test_no_personnel_authority(self):
        """test_no_personnel_authority"""
        self.expect_str = """
[ERROR] Failed to get the list of personnel permissions
"""
        self.command_params = ["--giteeid=Mary", "--token=example", "--type=status", "--jenkinsuser=mary",
                               "--jenkinskey=marykey", "--ak=forexample", "--sk=forexample", "I40769"]
        resp = self.make_expect_data(200, 'mock_incorrect_issue.txt')
        self.mock_request(side_effect=[resp])
        self.assert_result()
    def test_check_requires_success(self):
        """
        test check requires success
        """
        self.expect_str = self.read_file_content("requires_success.txt", folder=EXPECT_DATA_FILE, is_json=False)
        self.command_params = ["--giteeid=Mary", "--token=example", "--type=requires", "--jenkinsuser=mary",
                               "--jenkinskey=marykey", "--ak=forexample", "--sk=forexample", "I40769"]
        self.prepare_jenkins_data()
        self.prepare_obs_data()
        resp = self.make_expect_data(200, 'checkpart.txt')
        self.mock_subprocess_check_output(
            side_effect=[b"published-everything-src", b"published-everything-src", b"published-everything-src",
                         b"published-everything-src", None, b"published-Epol-src", b"published-everything-src",
                         b"published-everything-src", b"published-everything-src", b"published-everything-src", None,
                         b"published-Epol-src", b"published-everything-src", b"published-everything-src",
                         b"published-everything-src", b"published-everything-src", None, b"published-Epol-src"])
        mock_add_repo_r = self.make_need_content('mock_add_repo_success.txt', MOCK_DATA_FILE)
        mock_exist_issues = self.make_need_content('exist_issues.txt', MOCK_DATA_FILE)
        mock_create_jenkins_comment = self.make_need_content('create_jenkins_comments_success.txt', MOCK_DATA_FILE)
        mock_repo_list_data = self.make_need_content('repo_list_data.txt', MOCK_DATA_FILE)
        mock_create_install_jenkins_comment = self.make_need_content('create_install_jenkins_comments_success.txt',
                                                                     MOCK_DATA_FILE)
        mock_create_build_jenkins_comment = self.make_need_content('create_build_jenkins_comments_success.txt',
                                                                   MOCK_DATA_FILE)
        mock_create_build_issue = self.make_need_content('create_build_issue_success.txt', MOCK_DATA_FILE)
        mock_create_install_issue = self.make_need_content('create_install_issue_success.txt', MOCK_DATA_FILE)
        mock_checkpart_add_build = self.make_need_content('checkpart_add_build_success.txt', MOCK_DATA_FILE)
        mock_checkpart_add_install = self.make_need_content('checkpart_add_install_success.txt', MOCK_DATA_FILE)
        self.mock_request(
            side_effect=[resp, resp, resp, resp, resp, resp, resp, resp, resp, resp, mock_repo_list_data,
                         mock_repo_list_data, mock_create_jenkins_comment, mock_create_jenkins_comment, resp, resp,
                         resp, resp, mock_add_repo_r, mock_create_build_jenkins_comment,
                         mock_create_install_jenkins_comment, mock_create_install_jenkins_comment, resp, resp,
                         mock_exist_issues, mock_create_build_issue, resp, resp, mock_create_build_issue,
                         mock_checkpart_add_build, resp, resp, mock_exist_issues, mock_create_install_issue, resp, resp,
                         resp, mock_create_install_issue, mock_checkpart_add_install])
        self.assert_result()

    def test_get_require_delete_file_failed(self):
        """
        test get require and delete file failed
        """
        self.expect_str = self.read_file_content("get_require_delete_file_failed.txt", folder=EXPECT_DATA_FILE,
                                                 is_json=False)
        self.command_params = ["--giteeid=Mary", "--token=example", "--type=requires", "--jenkinsuser=mary",
                               "--jenkinskey=marykey", "--ak=forexample", "--sk=forexample", "I40769"]
        self.prepare_jenkins_data()
        self.prepare_obs_data(delete_status_code=400)
        resp = self.make_expect_data(200, 'checkpart.txt')
        self.mock_request(side_effect=[resp, resp, resp, resp, resp, resp])
        self.assert_result()

    def test_get_repo_in_table_failed(self):
        """
        test add repo in table failed
        """
        self.expect_str = self.read_file_content("add_repo_in_table_failed.txt", folder=EXPECT_DATA_FILE, is_json=False)
        self.command_params = ["--giteeid=Mary", "--token=example", "--type=requires", "--jenkinsuser=mary",
                               "--jenkinskey=marykey", "--ak=forexample", "--sk=forexample", "I40769"]
        self.prepare_jenkins_data()
        self.prepare_obs_data()
        mock_repo_list_data = self.make_need_content('repo_list_data.txt', MOCK_DATA_FILE)
        self.mock_subprocess_check_output(return_value=b"published-everything-src")
        mock_create_jenkins_comment = self.make_need_content('create_jenkins_comments_success.txt', MOCK_DATA_FILE)
        resp = self.make_expect_data(200, 'checkpart.txt')
        self.mock_request(
            side_effect=[resp, resp, resp, resp, resp, resp, resp, resp, resp, resp, mock_repo_list_data,
                         mock_create_jenkins_comment, resp, resp, resp, resp, RequestException])
        self.assert_result()
    def test_create_jenkins_comment_and_build_comment_and_install_comment_failed(self):
        """
        test create jenkins comment and build comment and install comment failed
        """
        self.expect_str = self.read_file_content("create_comment_failed.txt", folder=EXPECT_DATA_FILE,
                                                 is_json=False)
        self.command_params = ["--giteeid=Mary", "--token=example", "--type=requires", "--jenkinsuser=mary",
                               "--jenkinskey=marykey", "--ak=forexample", "--sk=forexample", "I40769"]
        self.prepare_jenkins_data()
        self.prepare_obs_data()
        resp = self.make_expect_data(200, 'checkpart.txt')
        mock_repo_list_data = self.make_need_content('repo_list_data.txt', MOCK_DATA_FILE)
        mock_exist_issues = self.make_need_content('exist_issues.txt', MOCK_DATA_FILE)
        mock_add_repo_r = self.make_need_content('mock_add_repo_success.txt', MOCK_DATA_FILE)
        mock_create_build_issue = self.make_need_content('create_build_issue_success.txt', MOCK_DATA_FILE)
        mock_create_install_issue = self.make_need_content('create_install_issue_success.txt', MOCK_DATA_FILE)
        mock_checkpart_add_build = self.make_need_content('checkpart_add_build_success.txt', MOCK_DATA_FILE)
        mock_checkpart_add_install = self.make_need_content('checkpart_add_install_success.txt', MOCK_DATA_FILE)
        self.mock_request(
            side_effect=[resp, resp, resp, resp, resp, resp, resp, resp, resp, resp, mock_repo_list_data,
                         RequestException, resp, resp, resp, resp, mock_add_repo_r, RequestException, RequestException,
                         resp, resp, mock_exist_issues, mock_create_build_issue, resp, resp, mock_create_build_issue,
                         mock_checkpart_add_build, resp, resp, mock_exist_issues, mock_create_install_issue, resp, resp,
                         mock_create_install_issue, mock_checkpart_add_install])
        self.assert_result()

    def test_check_requires_epol_list_failed(self):
        """
        test check requires epol list failed
        """
        self.expect_str = self.read_file_content("check_requires_epol_list_failed.txt", folder=EXPECT_DATA_FILE,
                                                 is_json=False)
        self.command_params = ["--giteeid=Mary", "--token=example", "--type=requires", "--jenkinsuser=mary",
                               "--jenkinskey=marykey", "--ak=forexample", "--sk=forexample", "I40769"]
        self.prepare_jenkins_data()
        self.prepare_obs_data()
        resp = self.make_expect_data(200, 'check_epol_list.txt', MOCK_DATA_FILE)
        mock_repo_list_data = self.make_need_content('repo_list_data.txt', MOCK_DATA_FILE)
        self.mock_subprocess_check_output(return_value=b'published-Epol-src')
        mock_create_jenkins_comment = self.make_need_content('create_jenkins_comments_success.txt', MOCK_DATA_FILE)
        self.mock_request(
            side_effect=[resp, resp, resp, resp, resp, resp, resp, resp, resp, resp, mock_repo_list_data,
                         mock_create_jenkins_comment, resp, resp, resp, resp, RequestException])
        self.assert_result()

    def test_get_update_issue_branch_and_get_update_list_failed(self):
        """
        test get update issue branch and get update list failed
        """
        self.expect_str = """
during the operation requires, a failure occurred, and the cause of the error was failed to get branch name.
"""
        self.command_params = ["--giteeid=Mary", "--token=example", "--type=requires", "--jenkinsuser=mary",
                               "--jenkinskey=marykey", "--ak=forexample", "--sk=forexample", "I40769"]
        resp = self.make_expect_data(200, 'checkpart.txt')
        branch_abnormal_r = self.make_need_content('check_branch_abnormal.txt', MOCK_DATA_FILE)
        self.mock_request(side_effect=[resp, resp, branch_abnormal_r, branch_abnormal_r, RequestException])
        self.assert_result()

    def test_branch_name_is_none_and_get_update_list_failed(self):
        """
        test branch name is none and get update list failed
        """
        self.expect_str = """
during the operation requires, a failure occurred, and the cause of the error was failed to get branch name.
"""
        self.command_params = ["--giteeid=Mary", "--token=example", "--type=requires", "--jenkinsuser=mary",
                               "--jenkinskey=marykey", "--ak=forexample", "--sk=forexample", "I40769"]
        resp = self.make_expect_data(200, 'checkpart.txt')
        branch_abnormal_r = self.make_need_content('branch_name_is_none.txt', MOCK_DATA_FILE)
        self.mock_request(
            side_effect=[resp, resp, branch_abnormal_r, branch_abnormal_r, RequestException])
        self.assert_result()
    def test_create_jenkins_comment_failed(self):
        """
        failed to create jenkins comment
        """
        self.expect_str = """
[ERROR] failed to get requires.
already exists the repo url, then update the pkglist in repo.
during the operation requires, a failure occurred, and the cause of the error was transfer standard rpm jenkins res: No comment information. The content is: [].
"""
        self.command_params = ["--giteeid=Mary", "--token=example", "--type=requires", "--jenkinsuser=mary",
                               "--jenkinskey=marykey", "--ak=forexample", "--sk=forexample", "I40769"]
        resp = self.make_expect_data(200, 'checkpart.txt')
        self.prepare_jenkins_data()
        self.prepare_obs_data()
        self.mock_jenkins_build_job(return_value=0)
        self.mock_request(return_value=resp)
        self.assert_result()

    def test_download_pkg_log_write_back_create_install_build_issue_failed(self):
        """
        test download pkg log write back create install build issue failed
        """
        self.expect_str = self.read_file_content("download_pkg_log_failed.txt",
                                                 folder=EXPECT_DATA_FILE, is_json=False)
        self.command_params = ["--giteeid=Mary", "--token=example", "--type=requires", "--jenkinsuser=mary",
                               "--jenkinskey=marykey", "--ak=forexample", "--sk=forexample", "I40769"]
        self.prepare_jenkins_data()
        self.prepare_obs_data(get_objects_status_code=400)
        resp = self.make_expect_data(200, 'checkpart.txt')
        mock_repo_list_data = self.make_need_content('repo_list_data.txt', MOCK_DATA_FILE)
        mock_add_repo_r = self.make_need_content('mock_add_repo_success.txt', MOCK_DATA_FILE)
        mock_exist_issues = self.make_need_content('single_exist_issues.txt', MOCK_DATA_FILE)
        mock_create_jenkins_comment = self.make_need_content('create_jenkins_comments_success.txt', MOCK_DATA_FILE)
        mock_create_install_jenkins_comment = self.make_need_content('create_install_jenkins_comments_success.txt',
                                                                     MOCK_DATA_FILE)
        mock_create_build_jenkins_comment = self.make_need_content('create_build_jenkins_comments_success.txt',
                                                                   MOCK_DATA_FILE)
        mock_issue_comment = self.make_need_content('mock_issue_comment.txt', MOCK_DATA_FILE)
        mock_create_build_issue = self.make_need_content('create_build_issue_success.txt', MOCK_DATA_FILE)
        self.mock_request(
            side_effect=[resp, resp, resp, resp, resp, resp, resp, resp, resp, resp, mock_repo_list_data,
                         mock_create_jenkins_comment, resp, resp, resp, resp, mock_add_repo_r,
                         mock_create_build_jenkins_comment, mock_create_install_jenkins_comment, resp, resp,
                         mock_exist_issues, mock_issue_comment, mock_create_build_issue, RequestException,
                         RequestException])
        self.assert_result()

    def test_write_back_create_install_build_issue_failed(self):
        """
        write_back_create_install_build_issue_failed
        """
        self.expect_str = self.read_file_content("write_back_create_install_build_issue_failed.txt",
                                                 folder=EXPECT_DATA_FILE, is_json=False)
        self.command_params = ["--giteeid=Mary", "--token=example", "--type=requires", "--jenkinsuser=mary",
                               "--jenkinskey=marykey", "--ak=forexample", "--sk=forexample", "I40769"]
        self.prepare_jenkins_data()
        self.prepare_obs_data()
        resp = self.make_expect_data(200, 'checkpart.txt')
        mock_repo_list_data = self.make_need_content('repo_list_data.txt', MOCK_DATA_FILE)
        mock_add_repo_r = self.make_need_content('mock_add_repo_success.txt', MOCK_DATA_FILE)
        mock_exist_issues = self.make_need_content('exist_issues.txt', MOCK_DATA_FILE)
        mock_create_jenkins_comment = self.make_need_content('create_jenkins_comments_success.txt', MOCK_DATA_FILE)
        mock_create_install_jenkins_comment = self.make_need_content('create_install_jenkins_comments_success.txt',
                                                                     MOCK_DATA_FILE)
        mock_create_build_jenkins_comment = self.make_need_content('create_build_jenkins_comments_success.txt',
                                                                   MOCK_DATA_FILE)
        self.mock_request(
            side_effect=[resp, resp, resp, resp, resp, resp, resp, resp, resp, resp, mock_repo_list_data,
                         mock_create_jenkins_comment, resp, resp, resp, resp, mock_add_repo_r,
                         mock_create_build_jenkins_comment, mock_create_install_jenkins_comment, resp, resp,
                         mock_exist_issues, RequestException, RequestException])
        self.assert_result()

    def test_write_back_operate_release_issue_failed(self):
        """
        write_back_operate_release_issue_failed
        """
        self.expect_str = self.read_file_content("write_back_operate_release_issue_failed.txt", folder=EXPECT_DATA_FILE,
                                                 is_json=False)
        self.command_params = ["--giteeid=Mary", "--token=example", "--type=requires", "--jenkinsuser=mary",
                               "--jenkinskey=marykey", "--ak=forexample", "--sk=forexample", "I40769"]
        resp = self.make_expect_data(200, 'checkpart.txt')
        self.prepare_jenkins_data()
        self.prepare_obs_data()
        mock_add_repo_r = self.make_need_content('mock_add_repo_success.txt', MOCK_DATA_FILE)
        mock_exist_issues = self.make_need_content('exist_issues.txt', MOCK_DATA_FILE)
        mock_repo_list_data = self.make_need_content('repo_list_data.txt', MOCK_DATA_FILE)
        mock_create_jenkins_comment = self.make_need_content('create_jenkins_comments_success.txt', MOCK_DATA_FILE)
        mock_create_install_jenkins_comment = self.make_need_content('create_install_jenkins_comments_success.txt',
                                                                     MOCK_DATA_FILE)
        mock_create_build_jenkins_comment = self.make_need_content('create_build_jenkins_comments_success.txt',
                                                                   MOCK_DATA_FILE)
        mock_create_build_issue = self.make_need_content('create_build_issue_success.txt', MOCK_DATA_FILE)
        mock_create_install_issue = self.make_need_content('create_install_issue_success.txt', MOCK_DATA_FILE)
        self.mock_request(
            side_effect=[resp, resp, resp, resp, resp, resp, resp, resp, resp, resp, mock_repo_list_data,
                         mock_create_jenkins_comment, resp, resp, resp, resp, mock_add_repo_r,
                         mock_create_build_jenkins_comment, mock_create_install_jenkins_comment, resp, resp,
                         mock_exist_issues, mock_create_build_issue, resp, resp, mock_create_build_issue,
                         RequestException, resp, resp, mock_exist_issues, mock_create_install_issue, resp,
                         mock_create_install_issue, RequestException])
        self.assert_result()
    def prepare_jenkins_data(self):
        """
        prepare jenkins mock data
        """
        self.mock_jenkins_build_job(return_value=2)
        self.mock_jenkins_get_queue_item(
            return_value=self.read_file_content("get_queue_item.json", folder=MOCK_DATA_FILE))
        self.mock_jenkins_get_job_info(
            return_value=self.read_file_content("get_job_info.json", folder=MOCK_DATA_FILE))
        self.mock_jenkins_get_build_info(
            return_value=self.read_file_content("get_build_info.json", folder=MOCK_DATA_FILE))
        self.mock_jenkins_build_job_url(
            return_value=self.read_file_content("build_job_url.txt", folder=MOCK_DATA_FILE, is_json=False))
        self.mock_jenkins_create_folder(return_value=True)
        self.mock_jenkins_job_exists(return_value=True)
        self.mock_jenkins_delete_job(return_value=True)
        self.mock_subprocess_check_output(return_value=b"published-everything-src")
        trigger_config = self.read_file_content('test_template_config_trigger.xml', folder=MOCK_DATA_FILE,
                                                is_json=False)
        aarch64_config = self.read_file_content('test_template_config_aarch64.xml', folder=MOCK_DATA_FILE,
                                                is_json=False)
        x86_64_config = self.read_file_content('test_template_config_x86.xml', folder=MOCK_DATA_FILE, is_json=False)
        self.mock_jenkins_get_job_config(
            side_effect=[trigger_config, aarch64_config, aarch64_config, aarch64_config, aarch64_config, aarch64_config,
                         x86_64_config, x86_64_config, x86_64_config, x86_64_config, x86_64_config])
        self.mock_jenkins_create_job(return_value=True)
        self.mock_jenkins_get_build_console_output(
            return_value=self.read_file_content('get_build_console_output.txt', folder=MOCK_DATA_FILE, is_json=False))

    def prepare_obs_data(self, file_name='mock_obs_data.json', list_status_code=200, delete_status_code=200,
                         get_objects_status_code=200):
        """
        prepare obs mock data
        """
        mock_obs_list_objects_r = self.make_need_obs_cloud_data(file_name, MOCK_DATA_FILE, list_status_code)
        mock_obs_delete_objects_r = self.make_need_obs_cloud_data(file_name, MOCK_DATA_FILE, delete_status_code)
        mock_getobjects_r = self.make_object_data(get_objects_status_code)
        self.mock_obs_cloud_get_objects(return_value=mock_getobjects_r)
        self.mock_obs_cloud_list_objects(return_value=mock_obs_list_objects_r)
        self.mock_obs_cloud_delete_object(return_value=mock_obs_delete_objects_r)
| 61.348432 | 161 | 0.664537 | 4,413 | 35,214 | 4.91797 | 0.055291 | 0.065982 | 0.064692 | 0.058241 | 0.919919 | 0.886145 | 0.855412 | 0.833157 | 0.795328 | 0.774686 | 0 | 0.009914 | 0.226643 | 35,214 | 573 | 162 | 61.455497 | 0.787023 | 0.050946 | 0 | 0.709459 | 0 | 0.011261 | 0.263071 | 0.079068 | 0 | 0 | 0 | 0 | 0.056306 | 1 | 0.060811 | false | 0 | 0.009009 | 0 | 0.074324 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
# (C) Datadog, Inc. 2018
# All rights reserved
# Licensed under Simplified BSD License (see LICENSE)
import mock
import os
import json
from . import common
from datadog_checks.openstack_controller import OpenStackControllerCheck


BASE_NOVA_URL = "http://10.0.2.15:8774/v2.1/***************************4bfc1"


def make_request_responses(url, header, params=None, timeout=None):
    """Map a mocked request URL to the JSON fixture that answers it."""
    if url == "http://10.0.2.15:5000/v3/projects":
        mock_path = "v3_projects.json"
    elif url == "http://10.0.2.15:9696":
        return
    elif url == BASE_NOVA_URL:
        return
    elif url == BASE_NOVA_URL + "/limits":
        # Limits fixtures are suffixed with the last five characters of the tenant id.
        mock_path = "v2.1_4bfc1_limits_{}.json".format(params.get("tenant_id", u"")[-5:])
    elif url == BASE_NOVA_URL + "/os-hypervisors/detail":
        mock_path = "v2.1_4bfc1_os-hypervisors_detail.json"
    elif url.startswith(BASE_NOVA_URL + "/os-hypervisors/") and url.endswith("/uptime"):
        # Every hypervisor id shares the same canned uptime payload.
        mock_path = "v2.1_4bfc1_os-hypervisors_uptime.json"
    elif url == BASE_NOVA_URL + "/os-aggregates":
        mock_path = "v2.1_4bfc1_os-aggregates.json"
    elif url == BASE_NOVA_URL + "/servers/detail":
        mock_path = "v2.1_4bfc1_servers_detail.json"
    elif url == BASE_NOVA_URL + "/flavors/detail":
        mock_path = "v2.1_4bfc1_flavors_detail.json"
    elif url.startswith(BASE_NOVA_URL + "/servers/") and url.endswith("/diagnostics"):
        # Diagnostics fixtures are keyed by the server uuid embedded in the URL.
        server_id = url[len(BASE_NOVA_URL + "/servers/"):-len("/diagnostics")]
        mock_path = "v2.1_4bfc1_servers_{}_diagnostics.json".format(server_id)
    elif url == "http://10.0.2.15:9696/v2.0/networks":
        mock_path = "v2.0_networks.json"
    else:
        raise RuntimeError("unexpected URL in mock: {}".format(url))
    mock_path = os.path.join(common.FIXTURES_DIR, mock_path)
    with open(mock_path, 'r') as f:
        return json.load(f)
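When a mock's `side_effect` is a callable, every call on the mock is forwarded to it and its return value becomes the call's result, which is how `make_request_responses` above stands in for `AbstractApi._make_request`. A distilled sketch of the same URL-to-fixture dispatch (the `example.test` URLs and payloads are invented for illustration):

```python
from unittest import mock

def fake_request(url, headers=None):
    # Route each known URL to its canned payload; fail loudly on anything else,
    # so an unexpected request surfaces as a test error instead of bad data.
    fixtures = {
        "http://example.test/v3/projects": {"projects": []},
        "http://example.test/v2.0/networks": {"networks": [{"id": "net-1"}]},
    }
    if url not in fixtures:
        raise RuntimeError("unexpected URL in mock: {}".format(url))
    return fixtures[url]

session_get = mock.Mock(side_effect=fake_request)
networks = session_get("http://example.test/v2.0/networks")
```
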


class MockHTTPResponse(object):
    """Minimal stand-in for a requests.Response: only json() and headers are used."""

    def __init__(self, response_dict, headers):
        self.response_dict = response_dict
        self.headers = headers

    def json(self):
        return self.response_dict


@mock.patch('datadog_checks.openstack_controller.api.AbstractApi._make_request',
            side_effect=make_request_responses)
def test_scenario(make_request, aggregator):
    instance = common.MOCK_CONFIG["instances"][0]
    init_config = common.MOCK_CONFIG['init_config']
    check = OpenStackControllerCheck('openstack_controller', init_config, {}, instances=[instance])
    auth_tokens_response_path = os.path.join(common.FIXTURES_DIR, "auth_tokens_response.json")
    with open(auth_tokens_response_path, 'r') as f:
        auth_tokens_response = json.load(f)
    auth_tokens_response = MockHTTPResponse(response_dict=auth_tokens_response,
                                            headers={'X-Subject-Token': 'fake_token'})
    auth_projects_response_path = os.path.join(common.FIXTURES_DIR, "auth_projects_response.json")
    with open(auth_projects_response_path, 'r') as f:
        auth_projects_response = json.load(f)
    with mock.patch('datadog_checks.openstack_controller.scopes.KeystoneApi.post_auth_token',
                    return_value=auth_tokens_response):
        with mock.patch('datadog_checks.openstack_controller.scopes.KeystoneApi.get_auth_projects',
                        return_value=auth_projects_response):
            check.check(common.MOCK_CONFIG['instances'][0])
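The Keystone calls above are patched with two nested `mock.patch` context managers: each patch holds for the duration of its `with` block and is undone on exit, innermost first. A self-contained sketch of that scoping, using `os.getcwd`/`os.path.exists` as stand-in patch targets:

```python
from unittest import mock
import os

# Each mock.patch swaps the target for the duration of its block and
# restores the original on exit, innermost first.
real_cwd = os.getcwd()
with mock.patch("os.getcwd", return_value="/fake/outer"):
    with mock.patch("os.path.exists", return_value=True):
        patched = (os.getcwd(), os.path.exists("/no/such/path"))
restored = os.getcwd()
```
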
    aggregator.assert_metric(
        'openstack.nova.server.tx_errors', value=0.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute7.openstack.local',
              'server_name:finalDestination-4', 'availability_zone:nova', 'interface:tapb488fc1e-3e'],
        hostname=u'7e622c28-4b12-4a58-8ac2-4a2e854f84eb')
    aggregator.assert_metric(
        'openstack.nova.server.tx_errors', value=0.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute8.openstack.local',
              'server_name:finalDestination-7', 'availability_zone:nova', 'interface:tapc929a75b-94'],
        hostname=u'1cc21586-8d43-40ea-bdc9-6f54a79957b4')
    aggregator.assert_metric(
        'openstack.nova.server.rx_errors', value=0.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute4.openstack.local',
              'server_name:server_take_zero-1', 'availability_zone:nova', 'interface:tapf3e5d7a2-94'],
        hostname=u'7eaa751c-1e37-4963-a836-0a28bc283a9a')
    aggregator.assert_metric(
        'openstack.nova.server.rx', value=17286.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute4.openstack.local',
              'server_name:server_take_zero-1', 'availability_zone:nova', 'interface:tapf3e5d7a2-94'],
        hostname=u'7eaa751c-1e37-4963-a836-0a28bc283a9a')
    # Every tenant reports the same max_image_meta limit.
    for tenant_id, project_name in [
            ('***************************4bfc1', 'service'),
            ('***************************3fb11', 'admin'),
            ('***************************d91a1', 'testProj2'),
            ('***************************73dbe', 'testProj1'),
            ('***************************147d1', '12345'),
            ('***************************44736', 'abcde')]:
        aggregator.assert_metric(
            'openstack.nova.limits.max_image_meta', value=128.0,
            tags=['tenant_id:' + tenant_id, 'project_name:' + project_name], hostname='')
    aggregator.assert_metric(
        'openstack.nova.server.tx', value=0.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute2.openstack.local',
              'server_name:ReadyServerOne', 'availability_zone:nova', 'interface:tap8880f875-12'],
        hostname=u'412c79b2-25f2-44d6-8e3b-be4baee11a7f')
    aggregator.assert_metric(
        'openstack.nova.server.tx_errors', value=0.0,
        tags=['nova_managed_server', 'project_name:testProj1', 'hypervisor:compute4.openstack.local',
              'server_name:blacklistServer', 'availability_zone:nova', 'interface:tap9bff9e73-2f'],
        hostname=u'57030997-f1b5-4f79-9429-8cb285318633')
    for tenant_id, project_name, value in [
            ('***************************4bfc1', 'service', 5.0),
            ('***************************3fb11', 'admin', 10.0),
            ('***************************d91a1', 'testProj2', 5.0),
            ('***************************73dbe', 'testProj1', 5.0),
            ('***************************147d1', '12345', 5.0),
            ('***************************44736', 'abcde', 5.0)]:
        aggregator.assert_metric(
            'openstack.nova.limits.max_personality', value=value,
            tags=['tenant_id:' + tenant_id, 'project_name:' + project_name], hostname='')
    # Every hypervisor reports the same total memory.
    for hypervisor, hypervisor_id in [
            ('compute1.openstack.local', '1'), ('compute2.openstack.local', '2'),
            ('compute3.openstack.local', '8'), ('compute4.openstack.local', '9'),
            ('compute5.openstack.local', '10'), ('compute6.openstack.local', '11'),
            ('compute7.openstack.local', '12'), ('compute8.openstack.local', '13'),
            ('compute9.openstack.local', '14'), ('compute10.openstack.local', '15')]:
        aggregator.assert_metric(
            'openstack.nova.memory_mb', value=7982.0,
            tags=['hypervisor:' + hypervisor, 'hypervisor_id:' + hypervisor_id,
                  'virt_type:QEMU', 'status:enabled'], hostname='')
    aggregator.assert_metric(
        'openstack.nova.server.rx_errors', value=0.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute4.openstack.local',
              'server_name:server_take_zero-2', 'availability_zone:nova', 'interface:tapad123605-18'],
        hostname=u'ff2f581c-5d03-4a27-a0ba-f102603fe38f')
    aggregator.assert_metric(
        'openstack.nova.server.tx_drop', value=0.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute10.openstack.local',
              'server_name:finalDestination-1', 'availability_zone:nova', 'interface:tapab9b23ee-c1'],
        hostname=u'4d7cb923-788f-4b61-9061-abfc576ecc1a')
    aggregator.assert_metric(
        'openstack.nova.server.tx_errors', value=0.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute7.openstack.local',
              'server_name:blacklist', 'availability_zone:nova', 'interface:tap702092ed-a5'],
        hostname=u'7324440d-915b-4e12-8b85-ec8c9a524d6c')
    aggregator.assert_metric(
        'openstack.nova.server.tx_packets', value=9.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute8.openstack.local',
              'server_name:finalDestination-7', 'availability_zone:nova', 'interface:tapc929a75b-94'],
        hostname=u'1cc21586-8d43-40ea-bdc9-6f54a79957b4')
    aggregator.assert_metric(
        'openstack.nova.server.tx_packets', value=0.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute4.openstack.local',
              'server_name:server_take_zero-1', 'availability_zone:nova', 'interface:tapf3e5d7a2-94'],
        hostname=u'7eaa751c-1e37-4963-a836-0a28bc283a9a')
    aggregator.assert_metric(
        'openstack.nova.server.tx', value=0.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute4.openstack.local',
              'server_name:server_take_zero-2', 'availability_zone:nova', 'interface:tapad123605-18'],
        hostname=u'ff2f581c-5d03-4a27-a0ba-f102603fe38f')
    aggregator.assert_metric(
        'openstack.nova.server.tx_errors', value=0.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute2.openstack.local',
              'server_name:HoneyIShrunkTheServer', 'availability_zone:nova', 'interface:tap9ac4ed56-d2'],
        hostname=u'1b7a987f-c4fb-4b6b-aad9-3b461df2019d')
    aggregator.assert_metric(
        'openstack.nova.server.rx_drop', value=0.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute10.openstack.local',
              'server_name:anotherServer', 'availability_zone:nova', 'interface:tap56f02c54-da'],
        hostname=u'30888944-fb39-4590-9073-ef977ac1f039')
    aggregator.assert_metric(
        'openstack.nova.server.rx', value=16542.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute10.openstack.local',
              'server_name:finalDestination-6', 'availability_zone:nova', 'interface:tape690927f-80'],
        hostname=u'acb4197c-f54e-488e-a40a-1b7f59cc9117')
    for tenant_id, project_name, value in [
            ('***************************4bfc1', 'service', 0.0),
            ('***************************3fb11', 'admin', 1.0),
            ('***************************d91a1', 'testProj2', 0.0),
            ('***************************73dbe', 'testProj1', 1.0),
            ('***************************147d1', '12345', 0.0),
            ('***************************44736', 'abcde', 0.0)]:
        aggregator.assert_metric(
            'openstack.nova.limits.total_security_groups_used', value=value,
            tags=['tenant_id:' + tenant_id, 'project_name:' + project_name], hostname='')
    for tenant_id, project_name, value in [
            ('***************************4bfc1', 'service', 20.0),
            ('***************************3fb11', 'admin', 40.0),
            ('***************************d91a1', 'testProj2', 40.0),
            ('***************************73dbe', 'testProj1', 40.0),
            ('***************************147d1', '12345', 40.0),
            ('***************************44736', 'abcde', 40.0)]:
        aggregator.assert_metric(
            'openstack.nova.limits.max_total_cores', value=value,
            tags=['tenant_id:' + tenant_id, 'project_name:' + project_name], hostname='')
    aggregator.assert_metric(
        'openstack.nova.server.rx_errors', value=0.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute1.openstack.local',
              'server_name:Rocky', 'availability_zone:nova', 'interface:tapcb21dae0-46'],
        hostname=u'2e1ce152-b19d-4c4a-9cc7-0d150fa97a18')
    aggregator.assert_metric(
        'openstack.nova.server.tx_packets', value=9.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute2.openstack.local',
              'server_name:HoneyIShrunkTheServer', 'availability_zone:nova', 'interface:tap9ac4ed56-d2'],
        hostname=u'1b7a987f-c4fb-4b6b-aad9-3b461df2019d')
    aggregator.assert_metric(
        'openstack.nova.server.rx', value=15564.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute2.openstack.local',
              'server_name:ReadyServerOne', 'availability_zone:nova', 'interface:tap8880f875-12'],
        hostname=u'412c79b2-25f2-44d6-8e3b-be4baee11a7f')
    # No tenant has floating IPs in use.
    for tenant_id, project_name in [
            ('***************************4bfc1', 'service'),
            ('***************************3fb11', 'admin'),
            ('***************************d91a1', 'testProj2'),
            ('***************************73dbe', 'testProj1'),
            ('***************************147d1', '12345'),
            ('***************************44736', 'abcde')]:
        aggregator.assert_metric(
            'openstack.nova.limits.total_floating_ips_used', value=0.0,
            tags=['tenant_id:' + tenant_id, 'project_name:' + project_name], hostname='')
    aggregator.assert_metric(
        'openstack.nova.server.tx_packets', value=0.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute7.openstack.local',
              'server_name:blacklist', 'availability_zone:nova', 'interface:tap702092ed-a5'],
        hostname=u'7324440d-915b-4e12-8b85-ec8c9a524d6c')
    aggregator.assert_metric(
        'openstack.nova.server.rx_drop', value=0.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute2.openstack.local',
              'server_name:HoneyIShrunkTheServer', 'availability_zone:nova', 'interface:tap9ac4ed56-d2'],
        hostname=u'1b7a987f-c4fb-4b6b-aad9-3b461df2019d')
    aggregator.assert_metric(
        'openstack.nova.server.rx_packets', value=170.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute10.openstack.local',
              'server_name:finalDestination-1', 'availability_zone:nova', 'interface:tapab9b23ee-c1'],
        hostname=u'4d7cb923-788f-4b61-9061-abfc576ecc1a')
    aggregator.assert_metric(
        'openstack.nova.server.tx_drop', value=0.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute8.openstack.local',
              'server_name:finalDestination-7', 'availability_zone:nova', 'interface:tapc929a75b-94'],
        hostname=u'1cc21586-8d43-40ea-bdc9-6f54a79957b4')
    aggregator.assert_metric(
        'openstack.nova.server.tx_drop', value=0.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute2.openstack.local',
              'server_name:jnrgjoner', 'availability_zone:nova', 'interface:tap66a9ffb5-8f'],
        hostname=u'b3c8eee3-7e22-4a7c-9745-759073673cbe')
    aggregator.assert_metric(
        'openstack.nova.server.rx', value=6306.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute2.openstack.local',
              'server_name:HoneyIShrunkTheServer', 'availability_zone:nova', 'interface:tap9ac4ed56-d2'],
        hostname=u'1b7a987f-c4fb-4b6b-aad9-3b461df2019d')
    aggregator.assert_metric(
        'openstack.nova.server.tx_drop', value=0.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute1.openstack.local',
              'server_name:jenga', 'availability_zone:nova', 'interface:tap3fd8281c-97'],
        hostname=u'f2dd3f90-e738-4135-84d4-1a2d30d04929')
    # No hypervisor has a current workload.
    for hypervisor, hypervisor_id in [
            ('compute1.openstack.local', '1'), ('compute2.openstack.local', '2'),
            ('compute3.openstack.local', '8'), ('compute4.openstack.local', '9'),
            ('compute5.openstack.local', '10'), ('compute6.openstack.local', '11'),
            ('compute7.openstack.local', '12'), ('compute8.openstack.local', '13'),
            ('compute9.openstack.local', '14'), ('compute10.openstack.local', '15')]:
        aggregator.assert_metric(
            'openstack.nova.current_workload', value=0.0,
            tags=['hypervisor:' + hypervisor, 'hypervisor_id:' + hypervisor_id,
                  'virt_type:QEMU', 'status:enabled'], hostname='')
    # Every tenant reports the same floating IP quota.
    for tenant_id, project_name in [
            ('***************************4bfc1', 'service'),
            ('***************************3fb11', 'admin'),
            ('***************************d91a1', 'testProj2'),
            ('***************************73dbe', 'testProj1'),
            ('***************************147d1', '12345'),
            ('***************************44736', 'abcde')]:
        aggregator.assert_metric(
            'openstack.nova.limits.max_total_floating_ips', value=10.0,
            tags=['tenant_id:' + tenant_id, 'project_name:' + project_name], hostname='')
    aggregator.assert_metric(
        'openstack.nova.server.tx', value=0.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute7.openstack.local',
              'server_name:blacklist', 'availability_zone:nova', 'interface:tap702092ed-a5'],
        hostname=u'7324440d-915b-4e12-8b85-ec8c9a524d6c')
    for tenant_id, project_name, value in [
            ('***************************4bfc1', 'service', 0.0),
            ('***************************3fb11', 'admin', 17408.0),
            ('***************************d91a1', 'testProj2', 0.0),
            ('***************************73dbe', 'testProj1', 1024.0),
            ('***************************147d1', '12345', 0.0),
            ('***************************44736', 'abcde', 0.0)]:
        aggregator.assert_metric(
            'openstack.nova.limits.total_ram_used', value=value,
            tags=['tenant_id:' + tenant_id, 'project_name:' + project_name], hostname='')
    aggregator.assert_metric(
        'openstack.nova.server.rx_errors', value=0.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute2.openstack.local',
              'server_name:jnrgjoner', 'availability_zone:nova', 'interface:tap66a9ffb5-8f'],
        hostname=u'b3c8eee3-7e22-4a7c-9745-759073673cbe')
    aggregator.assert_metric(
        'openstack.nova.server.rx_errors', value=0.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute10.openstack.local',
              'server_name:finalDestination-6', 'availability_zone:nova', 'interface:tape690927f-80'],
        hostname=u'acb4197c-f54e-488e-a40a-1b7f59cc9117')
    aggregator.assert_metric(
        'openstack.nova.server.rx_drop', value=0.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute1.openstack.local',
              'server_name:jenga', 'availability_zone:nova', 'interface:tap3fd8281c-97'],
        hostname=u'f2dd3f90-e738-4135-84d4-1a2d30d04929')
    aggregator.assert_metric(
        'openstack.nova.server.tx_errors', value=0.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute5.openstack.local',
              'server_name:finalDestination-5', 'availability_zone:nova', 'interface:tapf86369c0-84'],
        hostname=u'5357e70e-f12c-4bb7-85a2-b40d642a7e92')
    aggregator.assert_metric(
        'openstack.nova.server.tx_errors', value=0.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute10.openstack.local',
              'server_name:finalDestination-1', 'availability_zone:nova', 'interface:tapab9b23ee-c1'],
        hostname=u'4d7cb923-788f-4b61-9061-abfc576ecc1a')
    aggregator.assert_metric(
        'openstack.nova.server.tx_drop', value=0.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute1.openstack.local',
              'server_name:Rocky', 'availability_zone:nova', 'interface:tapcb21dae0-46'],
        hostname=u'2e1ce152-b19d-4c4a-9cc7-0d150fa97a18')
    aggregator.assert_metric(
        'openstack.nova.server.rx_errors', value=0.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute5.openstack.local',
              'server_name:finalDestination-5', 'availability_zone:nova', 'interface:tapf86369c0-84'],
        hostname=u'5357e70e-f12c-4bb7-85a2-b40d642a7e92')
    aggregator.assert_metric(
        'openstack.nova.server.rx_drop', value=0.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute10.openstack.local',
              'server_name:finalDestination-6', 'availability_zone:nova', 'interface:tape690927f-80'],
        hostname=u'acb4197c-f54e-488e-a40a-1b7f59cc9117')
    aggregator.assert_metric(
        'openstack.nova.server.rx', value=15408.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute1.openstack.local',
              'server_name:Rocky', 'availability_zone:nova', 'interface:tapcb21dae0-46'],
        hostname=u'2e1ce152-b19d-4c4a-9cc7-0d150fa97a18')
    aggregator.assert_metric(
        'openstack.nova.server.tx_errors', value=0.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute5.openstack.local',
              'server_name:moarserver-13', 'availability_zone:nova', 'interface:tap69a50430-3b'],
        hostname=u'4ceb4c69-a332-4b9d-907b-e99635aae644')
    aggregator.assert_metric(
        'openstack.nova.server.rx', value=5946.0,
        tags=['nova_managed_server', 'project_name:admin', 'hypervisor:compute7.openstack.local',
              'server_name:finalDestination-8', 'availability_zone:nova', 'interface:tap73364860-8e'],
        hostname=u'836f724f-0028-4dc0-b9bd-e0843d767ca2')
    for hypervisor, hypervisor_id, value in [
            ('compute1.openstack.local', '1', 3886.0), ('compute2.openstack.local', '2', 2862.0),
            ('compute3.openstack.local', '8', 5934.0), ('compute4.openstack.local', '9', 2862.0),
            ('compute5.openstack.local', '10', 3886.0), ('compute6.openstack.local', '11', 5934.0),
            ('compute7.openstack.local', '12', 2862.0), ('compute8.openstack.local', '13', 3886.0),
            ('compute9.openstack.local', '14', 5934.0), ('compute10.openstack.local', '15', 2862.0)]:
        aggregator.assert_metric(
            'openstack.nova.free_ram_mb', value=value,
            tags=['hypervisor:' + hypervisor, 'hypervisor_id:' + hypervisor_id,
                  'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.server.rx_packets', value=207.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:jenga',
'availability_zone:nova', 'interface:tap3fd8281c-97'],
hostname=u'f2dd3f90-e738-4135-84d4-1a2d30d04929')
aggregator.assert_metric('openstack.nova.server.rx_drop', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-1',
'availability_zone:nova', 'interface:tapab9b23ee-c1'],
hostname=u'4d7cb923-788f-4b61-9061-abfc576ecc1a')
aggregator.assert_metric('openstack.nova.server.rx_packets', value=67.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-4',
'availability_zone:nova', 'interface:tapb488fc1e-3e'],
hostname=u'7e622c28-4b12-4a58-8ac2-4a2e854f84eb')
aggregator.assert_metric('openstack.nova.server.rx_drop', value=0.0,
tags=['nova_managed_server', 'project_name:testProj1',
'hypervisor:compute4.openstack.local', 'server_name:blacklistServer',
'availability_zone:nova', 'interface:tap9bff9e73-2f'],
hostname=u'57030997-f1b5-4f79-9429-8cb285318633')
aggregator.assert_metric('openstack.nova.server.tx_drop', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-4',
'availability_zone:nova', 'interface:tapb488fc1e-3e'],
hostname=u'7e622c28-4b12-4a58-8ac2-4a2e854f84eb')
aggregator.assert_metric('openstack.nova.server.cpu0_time', value=2422550000000.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-6',
'availability_zone:nova'],
hostname=u'acb4197c-f54e-488e-a40a-1b7f59cc9117')
aggregator.assert_metric('openstack.nova.server.cpu0_time', value=648410000000.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:jnrgjoner',
'availability_zone:nova'],
hostname=u'b3c8eee3-7e22-4a7c-9745-759073673cbe')
aggregator.assert_metric('openstack.nova.server.cpu0_time', value=6915020290000000.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:ReadyServerOne',
'availability_zone:nova'],
hostname=u'412c79b2-25f2-44d6-8e3b-be4baee11a7f')
aggregator.assert_metric('openstack.nova.server.cpu0_time', value=830250000000.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-8',
'availability_zone:nova'],
hostname=u'836f724f-0028-4dc0-b9bd-e0843d767ca2')
aggregator.assert_metric('openstack.nova.server.cpu0_time', value=3008600000000.0,
tags=['nova_managed_server', 'project_name:testProj1',
'hypervisor:compute4.openstack.local', 'server_name:blacklistServer',
'availability_zone:nova'],
hostname=u'57030997-f1b5-4f79-9429-8cb285318633')
aggregator.assert_metric('openstack.nova.server.cpu0_time', value=741940000000.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:HoneyIShrunkTheServer', 'availability_zone:nova'],
hostname=u'1b7a987f-c4fb-4b6b-aad9-3b461df2019d')
aggregator.assert_metric('openstack.nova.server.cpu0_time', value=2406870000000.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:anotherServer',
'availability_zone:nova'],
hostname=u'30888944-fb39-4590-9073-ef977ac1f039')
aggregator.assert_metric('openstack.nova.server.cpu0_time', value=3193240000000.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:moarserver-13',
'availability_zone:nova'],
hostname=u'4ceb4c69-a332-4b9d-907b-e99635aae644')
aggregator.assert_metric('openstack.nova.server.cpu0_time', value=2616630000000.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:Rocky',
'availability_zone:nova'],
hostname=u'2e1ce152-b19d-4c4a-9cc7-0d150fa97a18')
aggregator.assert_metric('openstack.nova.server.cpu0_time', value=3608370000000.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-1',
'availability_zone:nova'],
hostname=u'4d7cb923-788f-4b61-9061-abfc576ecc1a')
aggregator.assert_metric('openstack.nova.server.cpu0_time', value=2124150000000.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-7',
'availability_zone:nova'],
hostname=u'1cc21586-8d43-40ea-bdc9-6f54a79957b4')
aggregator.assert_metric('openstack.nova.server.cpu0_time', value=556800000000.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-4',
'availability_zone:nova'],
hostname=u'7e622c28-4b12-4a58-8ac2-4a2e854f84eb')
aggregator.assert_metric('openstack.nova.server.cpu0_time', value=4697690000000.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:jenga',
'availability_zone:nova'],
hostname=u'f2dd3f90-e738-4135-84d4-1a2d30d04929')
aggregator.assert_metric('openstack.nova.server.cpu0_time', value=3320700000000.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-2',
'availability_zone:nova'],
hostname=u'52561f29-e479-43d7-85de-944d29ef178d')
aggregator.assert_metric('openstack.nova.server.cpu0_time', value=1876660000000.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-2',
'availability_zone:nova'],
hostname=u'ff2f581c-5d03-4a27-a0ba-f102603fe38f')
aggregator.assert_metric('openstack.nova.server.cpu0_time', value=2512910000000.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-1',
'availability_zone:nova'],
hostname=u'7eaa751c-1e37-4963-a836-0a28bc283a9a')
aggregator.assert_metric('openstack.nova.server.cpu0_time', value=567940000000.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:blacklist',
'availability_zone:nova'],
hostname=u'7324440d-915b-4e12-8b85-ec8c9a524d6c')
aggregator.assert_metric('openstack.nova.server.cpu0_time', value=2242410000000.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:finalDestination-5',
'availability_zone:nova'],
hostname=u'5357e70e-f12c-4bb7-85a2-b40d642a7e92')
aggregator.assert_metric('openstack.nova.server.tx', value=1464.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-1',
'availability_zone:nova', 'interface:tapab9b23ee-c1'],
hostname=u'4d7cb923-788f-4b61-9061-abfc576ecc1a')
aggregator.assert_metric('openstack.nova.server.tx_errors', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-1',
'availability_zone:nova', 'interface:tapf3e5d7a2-94'],
hostname=u'7eaa751c-1e37-4963-a836-0a28bc283a9a')
aggregator.assert_metric('openstack.nova.server.tx_drop', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:anotherServer',
'availability_zone:nova', 'interface:tap56f02c54-da'],
hostname=u'30888944-fb39-4590-9073-ef977ac1f039')
aggregator.assert_metric('openstack.nova.server.tx_drop', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-1',
'availability_zone:nova', 'interface:tapf3e5d7a2-94'],
hostname=u'7eaa751c-1e37-4963-a836-0a28bc283a9a')
aggregator.assert_metric('openstack.nova.limits.max_personality_size', value=10240.0,
tags=['tenant_id:***************************4bfc1', 'project_name:service'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_personality_size', value=10240.0,
tags=['tenant_id:***************************3fb11', 'project_name:admin'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_personality_size', value=10240.0,
tags=['tenant_id:***************************d91a1', 'project_name:testProj2'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_personality_size', value=10240.0,
tags=['tenant_id:***************************73dbe', 'project_name:testProj1'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_personality_size', value=10240.0,
tags=['tenant_id:***************************147d1', 'project_name:12345'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_personality_size', value=10240.0,
tags=['tenant_id:***************************44736', 'project_name:abcde'],
hostname='')
aggregator.assert_metric('openstack.nova.server.tx_packets', value=9.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:jnrgjoner',
'availability_zone:nova', 'interface:tap66a9ffb5-8f'],
hostname=u'b3c8eee3-7e22-4a7c-9745-759073673cbe')
aggregator.assert_metric('openstack.nova.server.tx_packets', value=9.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-8',
'availability_zone:nova', 'interface:tap73364860-8e'],
hostname=u'836f724f-0028-4dc0-b9bd-e0843d767ca2')
aggregator.assert_metric('openstack.nova.server.rx_packets', value=67.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-8',
'availability_zone:nova', 'interface:tap73364860-8e'],
hostname=u'836f724f-0028-4dc0-b9bd-e0843d767ca2')
aggregator.assert_metric('openstack.nova.server.rx_drop', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-2',
'availability_zone:nova', 'interface:tapad123605-18'],
hostname=u'ff2f581c-5d03-4a27-a0ba-f102603fe38f')
aggregator.assert_metric('openstack.nova.server.tx_drop', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-2',
'availability_zone:nova', 'interface:tap39a71720-01'],
hostname=u'52561f29-e479-43d7-85de-944d29ef178d')
aggregator.assert_metric('openstack.nova.server.tx_drop', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-2',
'availability_zone:nova', 'interface:tapad123605-18'],
hostname=u'ff2f581c-5d03-4a27-a0ba-f102603fe38f')
aggregator.assert_metric('openstack.nova.server.rx_errors', value=0.0,
tags=['nova_managed_server', 'project_name:testProj1',
'hypervisor:compute4.openstack.local', 'server_name:blacklistServer',
'availability_zone:nova', 'interface:tap9bff9e73-2f'],
hostname=u'57030997-f1b5-4f79-9429-8cb285318633')
aggregator.assert_metric('openstack.nova.server.rx_packets', value=193.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-1',
'availability_zone:nova', 'interface:tapf3e5d7a2-94'],
hostname=u'7eaa751c-1e37-4963-a836-0a28bc283a9a')
aggregator.assert_metric('openstack.nova.limits.max_server_meta', value=128.0,
tags=['tenant_id:***************************4bfc1', 'project_name:service'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_server_meta', value=128.0,
tags=['tenant_id:***************************3fb11', 'project_name:admin'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_server_meta', value=128.0,
tags=['tenant_id:***************************d91a1', 'project_name:testProj2'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_server_meta', value=128.0,
tags=['tenant_id:***************************73dbe', 'project_name:testProj1'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_server_meta', value=128.0,
tags=['tenant_id:***************************147d1', 'project_name:12345'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_server_meta', value=128.0,
tags=['tenant_id:***************************44736', 'project_name:abcde'],
hostname='')
aggregator.assert_metric('openstack.nova.server.rx_drop', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:Rocky',
'availability_zone:nova', 'interface:tapcb21dae0-46'],
hostname=u'2e1ce152-b19d-4c4a-9cc7-0d150fa97a18')
aggregator.assert_metric('openstack.nova.vcpus_used', value=4.0,
tags=['hypervisor:compute1.openstack.local', 'hypervisor_id:1',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.vcpus_used', value=6.0,
tags=['hypervisor:compute2.openstack.local', 'hypervisor_id:2',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.vcpus_used', value=0.0,
tags=['hypervisor:compute3.openstack.local', 'hypervisor_id:8',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.vcpus_used', value=6.0,
tags=['hypervisor:compute4.openstack.local', 'hypervisor_id:9',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.vcpus_used', value=4.0,
tags=['hypervisor:compute5.openstack.local', 'hypervisor_id:10',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.vcpus_used', value=0.0,
tags=['hypervisor:compute6.openstack.local', 'hypervisor_id:11',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.vcpus_used', value=6.0,
tags=['hypervisor:compute7.openstack.local', 'hypervisor_id:12',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.vcpus_used', value=4.0,
tags=['hypervisor:compute8.openstack.local', 'hypervisor_id:13',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.vcpus_used', value=0.0,
tags=['hypervisor:compute9.openstack.local', 'hypervisor_id:14',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.vcpus_used', value=6.0,
tags=['hypervisor:compute10.openstack.local', 'hypervisor_id:15',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.server.tx_errors', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:jnrgjoner',
'availability_zone:nova', 'interface:tap66a9ffb5-8f'],
hostname=u'b3c8eee3-7e22-4a7c-9745-759073673cbe')
aggregator.assert_metric('openstack.nova.server.tx_errors', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:anotherServer',
'availability_zone:nova', 'interface:tap56f02c54-da'],
hostname=u'30888944-fb39-4590-9073-ef977ac1f039')
aggregator.assert_metric('openstack.nova.limits.max_total_keypairs', value=100.0,
tags=['tenant_id:***************************4bfc1', 'project_name:service'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_total_keypairs', value=100.0,
tags=['tenant_id:***************************3fb11', 'project_name:admin'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_total_keypairs', value=100.0,
tags=['tenant_id:***************************d91a1', 'project_name:testProj2'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_total_keypairs', value=100.0,
tags=['tenant_id:***************************73dbe', 'project_name:testProj1'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_total_keypairs', value=100.0,
tags=['tenant_id:***************************147d1', 'project_name:12345'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_total_keypairs', value=100.0,
tags=['tenant_id:***************************44736', 'project_name:abcde'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_security_groups', value=10.0,
tags=['tenant_id:***************************4bfc1', 'project_name:service'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_security_groups', value=10.0,
tags=['tenant_id:***************************3fb11', 'project_name:admin'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_security_groups', value=10.0,
tags=['tenant_id:***************************d91a1', 'project_name:testProj2'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_security_groups', value=10.0,
tags=['tenant_id:***************************73dbe', 'project_name:testProj1'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_security_groups', value=10.0,
tags=['tenant_id:***************************147d1', 'project_name:12345'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_security_groups', value=10.0,
tags=['tenant_id:***************************44736', 'project_name:abcde'],
hostname='')
aggregator.assert_metric('openstack.nova.running_vms', value=2.0,
tags=['hypervisor:compute1.openstack.local', 'hypervisor_id:1',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.running_vms', value=3.0,
tags=['hypervisor:compute2.openstack.local', 'hypervisor_id:2',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.running_vms', value=0.0,
tags=['hypervisor:compute3.openstack.local', 'hypervisor_id:8',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.running_vms', value=3.0,
tags=['hypervisor:compute4.openstack.local', 'hypervisor_id:9',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.running_vms', value=2.0,
tags=['hypervisor:compute5.openstack.local', 'hypervisor_id:10',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.running_vms', value=0.0,
tags=['hypervisor:compute6.openstack.local', 'hypervisor_id:11',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.running_vms', value=3.0,
tags=['hypervisor:compute7.openstack.local', 'hypervisor_id:12',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.running_vms', value=2.0,
tags=['hypervisor:compute8.openstack.local', 'hypervisor_id:13',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.running_vms', value=0.0,
tags=['hypervisor:compute9.openstack.local', 'hypervisor_id:14',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.running_vms', value=3.0,
tags=['hypervisor:compute10.openstack.local', 'hypervisor_id:15',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.server.tx_errors', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:Rocky',
'availability_zone:nova', 'interface:tapcb21dae0-46'],
hostname=u'2e1ce152-b19d-4c4a-9cc7-0d150fa97a18')
aggregator.assert_metric('openstack.nova.vcpus', value=8.0,
tags=['hypervisor:compute1.openstack.local', 'hypervisor_id:1',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.vcpus', value=8.0,
tags=['hypervisor:compute2.openstack.local', 'hypervisor_id:2',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.vcpus', value=8.0,
tags=['hypervisor:compute3.openstack.local', 'hypervisor_id:8',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.vcpus', value=8.0,
tags=['hypervisor:compute4.openstack.local', 'hypervisor_id:9',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.vcpus', value=8.0,
tags=['hypervisor:compute5.openstack.local', 'hypervisor_id:10',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.vcpus', value=8.0,
tags=['hypervisor:compute6.openstack.local', 'hypervisor_id:11',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.vcpus', value=8.0,
tags=['hypervisor:compute7.openstack.local', 'hypervisor_id:12',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.vcpus', value=8.0,
tags=['hypervisor:compute8.openstack.local', 'hypervisor_id:13',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.vcpus', value=8.0,
tags=['hypervisor:compute9.openstack.local', 'hypervisor_id:14',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.vcpus', value=8.0,
tags=['hypervisor:compute10.openstack.local', 'hypervisor_id:15',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.server.tx_packets', value=9.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:moarserver-13',
'availability_zone:nova', 'interface:tap69a50430-3b'],
hostname=u'4ceb4c69-a332-4b9d-907b-e99635aae644')
aggregator.assert_metric('openstack.nova.server.tx', value=1464.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:jnrgjoner',
'availability_zone:nova', 'interface:tap66a9ffb5-8f'],
hostname=u'b3c8eee3-7e22-4a7c-9745-759073673cbe')
aggregator.assert_metric('openstack.nova.server.tx_errors', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:jenga',
'availability_zone:nova', 'interface:tap3fd8281c-97'],
hostname=u'f2dd3f90-e738-4135-84d4-1a2d30d04929')
aggregator.assert_metric('openstack.nova.limits.total_instances_used', value=0.0,
tags=['tenant_id:***************************4bfc1', 'project_name:service'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.total_instances_used', value=17.0,
tags=['tenant_id:***************************3fb11', 'project_name:admin'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.total_instances_used', value=0.0,
tags=['tenant_id:***************************d91a1', 'project_name:testProj2'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.total_instances_used', value=1.0,
tags=['tenant_id:***************************73dbe', 'project_name:testProj1'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.total_instances_used', value=0.0,
tags=['tenant_id:***************************147d1', 'project_name:12345'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.total_instances_used', value=0.0,
tags=['tenant_id:***************************44736', 'project_name:abcde'],
hostname='')
aggregator.assert_metric('openstack.nova.server.tx_packets', value=9.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:jenga',
'availability_zone:nova', 'interface:tap3fd8281c-97'],
hostname=u'f2dd3f90-e738-4135-84d4-1a2d30d04929')
aggregator.assert_metric('openstack.nova.server.tx_drop', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-6',
'availability_zone:nova', 'interface:tape690927f-80'],
hostname=u'acb4197c-f54e-488e-a40a-1b7f59cc9117')
aggregator.assert_metric('openstack.nova.server.rx_packets', value=71.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:jnrgjoner',
'availability_zone:nova', 'interface:tap66a9ffb5-8f'],
hostname=u'b3c8eee3-7e22-4a7c-9745-759073673cbe')
aggregator.assert_metric('openstack.nova.server.rx_drop', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:moarserver-13',
'availability_zone:nova', 'interface:tap69a50430-3b'],
hostname=u'4ceb4c69-a332-4b9d-907b-e99635aae644')
aggregator.assert_metric('openstack.nova.server.rx', value=15306.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:anotherServer',
'availability_zone:nova', 'interface:tap56f02c54-da'],
hostname=u'30888944-fb39-4590-9073-ef977ac1f039')
aggregator.assert_metric('openstack.nova.server.tx_errors', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-6',
'availability_zone:nova', 'interface:tape690927f-80'],
hostname=u'acb4197c-f54e-488e-a40a-1b7f59cc9117')
aggregator.assert_metric('openstack.nova.free_disk_gb', value=26.0,
tags=['hypervisor:compute1.openstack.local', 'hypervisor_id:1',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.free_disk_gb', value=16.0,
tags=['hypervisor:compute2.openstack.local', 'hypervisor_id:2',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.free_disk_gb', value=46.0,
tags=['hypervisor:compute3.openstack.local', 'hypervisor_id:8',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.free_disk_gb', value=16.0,
tags=['hypervisor:compute4.openstack.local', 'hypervisor_id:9',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.free_disk_gb', value=26.0,
tags=['hypervisor:compute5.openstack.local', 'hypervisor_id:10',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.free_disk_gb', value=46.0,
tags=['hypervisor:compute6.openstack.local', 'hypervisor_id:11',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.free_disk_gb', value=16.0,
tags=['hypervisor:compute7.openstack.local', 'hypervisor_id:12',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.free_disk_gb', value=26.0,
tags=['hypervisor:compute8.openstack.local', 'hypervisor_id:13',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.free_disk_gb', value=46.0,
tags=['hypervisor:compute9.openstack.local', 'hypervisor_id:14',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.free_disk_gb', value=16.0,
tags=['hypervisor:compute10.openstack.local', 'hypervisor_id:15',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.local_gb_used', value=22.0,
tags=['hypervisor:compute1.openstack.local', 'hypervisor_id:1',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.local_gb_used', value=32.0,
tags=['hypervisor:compute2.openstack.local', 'hypervisor_id:2',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.local_gb_used', value=2.0,
tags=['hypervisor:compute3.openstack.local', 'hypervisor_id:8',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.local_gb_used', value=32.0,
tags=['hypervisor:compute4.openstack.local', 'hypervisor_id:9',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.local_gb_used', value=22.0,
tags=['hypervisor:compute5.openstack.local', 'hypervisor_id:10',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.local_gb_used', value=2.0,
tags=['hypervisor:compute6.openstack.local', 'hypervisor_id:11',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.local_gb_used', value=32.0,
tags=['hypervisor:compute7.openstack.local', 'hypervisor_id:12',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.local_gb_used', value=22.0,
tags=['hypervisor:compute8.openstack.local', 'hypervisor_id:13',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.local_gb_used', value=2.0,
tags=['hypervisor:compute9.openstack.local', 'hypervisor_id:14',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.local_gb_used', value=32.0,
tags=['hypervisor:compute10.openstack.local', 'hypervisor_id:15',
'virt_type:QEMU', 'status:enabled'], hostname='')
aggregator.assert_metric('openstack.nova.server.memory', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-6',
'availability_zone:nova'],
hostname=u'acb4197c-f54e-488e-a40a-1b7f59cc9117')
aggregator.assert_metric('openstack.nova.server.memory', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:jnrgjoner',
'availability_zone:nova'],
hostname=u'b3c8eee3-7e22-4a7c-9745-759073673cbe')
aggregator.assert_metric('openstack.nova.server.memory', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:ReadyServerOne',
'availability_zone:nova'],
hostname=u'412c79b2-25f2-44d6-8e3b-be4baee11a7f')
aggregator.assert_metric('openstack.nova.server.memory', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-8',
'availability_zone:nova'],
hostname=u'836f724f-0028-4dc0-b9bd-e0843d767ca2')
aggregator.assert_metric('openstack.nova.server.memory', value=1048576.0,
tags=['nova_managed_server', 'project_name:testProj1',
'hypervisor:compute4.openstack.local', 'server_name:blacklistServer',
'availability_zone:nova'],
hostname=u'57030997-f1b5-4f79-9429-8cb285318633')
aggregator.assert_metric('openstack.nova.server.memory', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:HoneyIShrunkTheServer', 'availability_zone:nova'],
hostname=u'1b7a987f-c4fb-4b6b-aad9-3b461df2019d')
aggregator.assert_metric('openstack.nova.server.memory', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:anotherServer',
'availability_zone:nova'],
hostname=u'30888944-fb39-4590-9073-ef977ac1f039')
aggregator.assert_metric('openstack.nova.server.memory', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:moarserver-13',
'availability_zone:nova'],
hostname=u'4ceb4c69-a332-4b9d-907b-e99635aae644')
aggregator.assert_metric('openstack.nova.server.memory', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:Rocky',
'availability_zone:nova'],
hostname=u'2e1ce152-b19d-4c4a-9cc7-0d150fa97a18')
aggregator.assert_metric('openstack.nova.server.memory', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-1',
'availability_zone:nova'],
hostname=u'4d7cb923-788f-4b61-9061-abfc576ecc1a')
aggregator.assert_metric('openstack.nova.server.memory', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-7',
'availability_zone:nova'],
hostname=u'1cc21586-8d43-40ea-bdc9-6f54a79957b4')
aggregator.assert_metric('openstack.nova.server.memory', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-4',
'availability_zone:nova'],
hostname=u'7e622c28-4b12-4a58-8ac2-4a2e854f84eb')
aggregator.assert_metric('openstack.nova.server.memory', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:jenga',
'availability_zone:nova'],
hostname=u'f2dd3f90-e738-4135-84d4-1a2d30d04929')
aggregator.assert_metric('openstack.nova.server.memory', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-2',
'availability_zone:nova'],
hostname=u'52561f29-e479-43d7-85de-944d29ef178d')
aggregator.assert_metric('openstack.nova.server.memory', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-2',
'availability_zone:nova'],
hostname=u'ff2f581c-5d03-4a27-a0ba-f102603fe38f')
aggregator.assert_metric('openstack.nova.server.memory', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-1',
'availability_zone:nova'],
hostname=u'7eaa751c-1e37-4963-a836-0a28bc283a9a')
aggregator.assert_metric('openstack.nova.server.memory', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:blacklist',
'availability_zone:nova'],
hostname=u'7324440d-915b-4e12-8b85-ec8c9a524d6c')
aggregator.assert_metric('openstack.nova.server.memory', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:finalDestination-5',
'availability_zone:nova'],
hostname=u'5357e70e-f12c-4bb7-85a2-b40d642a7e92')
aggregator.assert_metric('openstack.nova.server.rx_packets', value=195.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-2',
'availability_zone:nova', 'interface:tapad123605-18'],
hostname=u'ff2f581c-5d03-4a27-a0ba-f102603fe38f')
aggregator.assert_metric('openstack.nova.server.tx_packets', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:ReadyServerOne',
'availability_zone:nova', 'interface:tap8880f875-12'],
hostname=u'412c79b2-25f2-44d6-8e3b-be4baee11a7f')
aggregator.assert_metric('openstack.nova.server.tx_packets', value=9.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:Rocky',
'availability_zone:nova', 'interface:tapcb21dae0-46'],
hostname=u'2e1ce152-b19d-4c4a-9cc7-0d150fa97a18')
aggregator.assert_metric('openstack.nova.server.rx_packets', value=199.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-2',
'availability_zone:nova', 'interface:tap39a71720-01'],
hostname=u'52561f29-e479-43d7-85de-944d29ef178d')
aggregator.assert_metric('openstack.nova.server.rx_errors', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-7',
'availability_zone:nova', 'interface:tapc929a75b-94'],
hostname=u'1cc21586-8d43-40ea-bdc9-6f54a79957b4')
aggregator.assert_metric('openstack.nova.server.rx_packets', value=71.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:HoneyIShrunkTheServer', 'availability_zone:nova',
'interface:tap9ac4ed56-d2'],
hostname=u'1b7a987f-c4fb-4b6b-aad9-3b461df2019d')
aggregator.assert_metric('openstack.nova.server.rx_errors', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:ReadyServerOne',
'availability_zone:nova', 'interface:tap8880f875-12'],
hostname=u'412c79b2-25f2-44d6-8e3b-be4baee11a7f')
aggregator.assert_metric('openstack.nova.server.rx', value=5946.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-4',
'availability_zone:nova', 'interface:tapb488fc1e-3e'],
hostname=u'7e622c28-4b12-4a58-8ac2-4a2e854f84eb')
aggregator.assert_metric('openstack.nova.server.rx_packets', value=198.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:finalDestination-5',
'availability_zone:nova', 'interface:tapf86369c0-84'],
hostname=u'5357e70e-f12c-4bb7-85a2-b40d642a7e92')
aggregator.assert_metric('openstack.nova.server.tx', value=1464.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-4',
'availability_zone:nova', 'interface:tapb488fc1e-3e'],
hostname=u'7e622c28-4b12-4a58-8ac2-4a2e854f84eb')
aggregator.assert_metric('openstack.nova.server.tx_errors', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-2',
'availability_zone:nova', 'interface:tapad123605-18'],
hostname=u'ff2f581c-5d03-4a27-a0ba-f102603fe38f')
aggregator.assert_metric('openstack.nova.server.tx', value=1464.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:moarserver-13',
'availability_zone:nova', 'interface:tap69a50430-3b'],
hostname=u'4ceb4c69-a332-4b9d-907b-e99635aae644')
aggregator.assert_metric('openstack.nova.server.tx_errors', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:ReadyServerOne',
'availability_zone:nova', 'interface:tap8880f875-12'],
hostname=u'412c79b2-25f2-44d6-8e3b-be4baee11a7f')
aggregator.assert_metric('openstack.nova.server.tx_packets', value=9.0,
tags=['nova_managed_server', 'project_name:testProj1',
'hypervisor:compute4.openstack.local', 'server_name:blacklistServer',
'availability_zone:nova', 'interface:tap9bff9e73-2f'],
hostname=u'57030997-f1b5-4f79-9429-8cb285318633')
aggregator.assert_metric('openstack.nova.server.rx_packets', value=172.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:Rocky',
'availability_zone:nova', 'interface:tapcb21dae0-46'],
hostname=u'2e1ce152-b19d-4c4a-9cc7-0d150fa97a18')
aggregator.assert_metric('openstack.nova.server.rx_errors', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:moarserver-13',
'availability_zone:nova', 'interface:tap69a50430-3b'],
hostname=u'4ceb4c69-a332-4b9d-907b-e99635aae644')
aggregator.assert_metric('openstack.nova.server.tx_drop', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:finalDestination-5',
'availability_zone:nova', 'interface:tapf86369c0-84'],
hostname=u'5357e70e-f12c-4bb7-85a2-b40d642a7e92')
aggregator.assert_metric('openstack.nova.server.tx_packets', value=9.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-1',
'availability_zone:nova', 'interface:tapab9b23ee-c1'],
hostname=u'4d7cb923-788f-4b61-9061-abfc576ecc1a')
aggregator.assert_metric('openstack.nova.limits.total_cores_used', value=0.0,
tags=['tenant_id:***************************4bfc1', 'project_name:service'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.total_cores_used', value=34.0,
tags=['tenant_id:***************************3fb11', 'project_name:admin'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.total_cores_used', value=0.0,
tags=['tenant_id:***************************d91a1', 'project_name:testProj2'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.total_cores_used', value=2.0,
tags=['tenant_id:***************************73dbe', 'project_name:testProj1'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.total_cores_used', value=0.0,
tags=['tenant_id:***************************147d1', 'project_name:12345'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.total_cores_used', value=0.0,
tags=['tenant_id:***************************44736', 'project_name:abcde'],
hostname='')
aggregator.assert_metric('openstack.nova.server.tx_drop', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:HoneyIShrunkTheServer', 'availability_zone:nova',
'interface:tap9ac4ed56-d2'],
hostname=u'1b7a987f-c4fb-4b6b-aad9-3b461df2019d')
aggregator.assert_metric('openstack.nova.server.rx_errors', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-1',
'availability_zone:nova', 'interface:tapab9b23ee-c1'],
hostname=u'4d7cb923-788f-4b61-9061-abfc576ecc1a')
aggregator.assert_metric('openstack.nova.server.rx', value=17826.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-2',
'availability_zone:nova', 'interface:tap39a71720-01'],
hostname=u'52561f29-e479-43d7-85de-944d29ef178d')
aggregator.assert_metric('openstack.nova.server.vda_write', value=296960.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-6',
'availability_zone:nova'],
hostname=u'acb4197c-f54e-488e-a40a-1b7f59cc9117')
aggregator.assert_metric('openstack.nova.server.vda_write', value=356352.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:jnrgjoner',
'availability_zone:nova'],
hostname=u'b3c8eee3-7e22-4a7c-9745-759073673cbe')
aggregator.assert_metric('openstack.nova.server.vda_write', value=146432.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:ReadyServerOne',
'availability_zone:nova'],
hostname=u'412c79b2-25f2-44d6-8e3b-be4baee11a7f')
aggregator.assert_metric('openstack.nova.server.vda_write', value=369664.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-8',
'availability_zone:nova'],
hostname=u'836f724f-0028-4dc0-b9bd-e0843d767ca2')
aggregator.assert_metric('openstack.nova.server.vda_write', value=307200.0,
tags=['nova_managed_server', 'project_name:testProj1',
'hypervisor:compute4.openstack.local', 'server_name:blacklistServer',
'availability_zone:nova'],
hostname=u'57030997-f1b5-4f79-9429-8cb285318633')
aggregator.assert_metric('openstack.nova.server.vda_write', value=359424.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:HoneyIShrunkTheServer', 'availability_zone:nova'],
hostname=u'1b7a987f-c4fb-4b6b-aad9-3b461df2019d')
aggregator.assert_metric('openstack.nova.server.vda_write', value=297984.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:anotherServer',
'availability_zone:nova'],
hostname=u'30888944-fb39-4590-9073-ef977ac1f039')
aggregator.assert_metric('openstack.nova.server.vda_write', value=305152.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:moarserver-13',
'availability_zone:nova'],
hostname=u'4ceb4c69-a332-4b9d-907b-e99635aae644')
aggregator.assert_metric('openstack.nova.server.vda_write', value=351232.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:Rocky',
'availability_zone:nova'],
hostname=u'2e1ce152-b19d-4c4a-9cc7-0d150fa97a18')
aggregator.assert_metric('openstack.nova.server.vda_write', value=373760.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-1',
'availability_zone:nova'],
hostname=u'4d7cb923-788f-4b61-9061-abfc576ecc1a')
aggregator.assert_metric('openstack.nova.server.vda_write', value=299008.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-7',
'availability_zone:nova'],
hostname=u'1cc21586-8d43-40ea-bdc9-6f54a79957b4')
aggregator.assert_metric('openstack.nova.server.vda_write', value=368640.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-4',
'availability_zone:nova'],
hostname=u'7e622c28-4b12-4a58-8ac2-4a2e854f84eb')
aggregator.assert_metric('openstack.nova.server.vda_write', value=316416.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:jenga',
'availability_zone:nova'],
hostname=u'f2dd3f90-e738-4135-84d4-1a2d30d04929')
aggregator.assert_metric('openstack.nova.server.vda_write', value=297984.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-2',
'availability_zone:nova'],
hostname=u'52561f29-e479-43d7-85de-944d29ef178d')
aggregator.assert_metric('openstack.nova.server.vda_write', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-2',
'availability_zone:nova'],
hostname=u'ff2f581c-5d03-4a27-a0ba-f102603fe38f')
aggregator.assert_metric('openstack.nova.server.vda_write', value=105472.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-1',
'availability_zone:nova'],
hostname=u'7eaa751c-1e37-4963-a836-0a28bc283a9a')
aggregator.assert_metric('openstack.nova.server.vda_write', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:blacklist',
'availability_zone:nova'],
hostname=u'7324440d-915b-4e12-8b85-ec8c9a524d6c')
aggregator.assert_metric('openstack.nova.server.vda_write', value=295936.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:finalDestination-5',
'availability_zone:nova'],
hostname=u'5357e70e-f12c-4bb7-85a2-b40d642a7e92')
aggregator.assert_metric('openstack.nova.server.rx', value=17466.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-2',
'availability_zone:nova', 'interface:tapad123605-18'],
hostname=u'ff2f581c-5d03-4a27-a0ba-f102603fe38f')
aggregator.assert_metric('openstack.nova.server.rx', value=15228.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-1',
'availability_zone:nova', 'interface:tapab9b23ee-c1'],
hostname=u'4d7cb923-788f-4b61-9061-abfc576ecc1a')
aggregator.assert_metric('openstack.nova.server.rx_packets', value=66.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:blacklist',
'availability_zone:nova', 'interface:tap702092ed-a5'],
hostname=u'7324440d-915b-4e12-8b85-ec8c9a524d6c')
aggregator.assert_metric('openstack.nova.server.rx', value=18522.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:jenga',
'availability_zone:nova', 'interface:tap3fd8281c-97'],
hostname=u'f2dd3f90-e738-4135-84d4-1a2d30d04929')
aggregator.assert_metric('openstack.nova.server.rx_packets', value=197.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:moarserver-13',
'availability_zone:nova', 'interface:tap69a50430-3b'],
hostname=u'4ceb4c69-a332-4b9d-907b-e99635aae644')
aggregator.assert_metric('openstack.nova.server.vda_read_req', value=878.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-6',
'availability_zone:nova'],
hostname=u'acb4197c-f54e-488e-a40a-1b7f59cc9117')
aggregator.assert_metric('openstack.nova.server.vda_read_req', value=1154.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:jnrgjoner',
'availability_zone:nova'],
hostname=u'b3c8eee3-7e22-4a7c-9745-759073673cbe')
aggregator.assert_metric('openstack.nova.server.vda_read_req', value=825.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:ReadyServerOne',
'availability_zone:nova'],
hostname=u'412c79b2-25f2-44d6-8e3b-be4baee11a7f')
aggregator.assert_metric('openstack.nova.server.vda_read_req', value=1161.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-8',
'availability_zone:nova'],
hostname=u'836f724f-0028-4dc0-b9bd-e0843d767ca2')
aggregator.assert_metric('openstack.nova.server.vda_read_req', value=878.0,
tags=['nova_managed_server', 'project_name:testProj1',
'hypervisor:compute4.openstack.local', 'server_name:blacklistServer',
'availability_zone:nova'],
hostname=u'57030997-f1b5-4f79-9429-8cb285318633')
aggregator.assert_metric('openstack.nova.server.vda_read_req', value=1171.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:HoneyIShrunkTheServer', 'availability_zone:nova'],
hostname=u'1b7a987f-c4fb-4b6b-aad9-3b461df2019d')
aggregator.assert_metric('openstack.nova.server.vda_read_req', value=877.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:anotherServer',
'availability_zone:nova'],
hostname=u'30888944-fb39-4590-9073-ef977ac1f039')
aggregator.assert_metric('openstack.nova.server.vda_read_req', value=878.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:moarserver-13',
'availability_zone:nova'],
hostname=u'4ceb4c69-a332-4b9d-907b-e99635aae644')
aggregator.assert_metric('openstack.nova.server.vda_read_req', value=1156.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:Rocky',
'availability_zone:nova'],
hostname=u'2e1ce152-b19d-4c4a-9cc7-0d150fa97a18')
aggregator.assert_metric('openstack.nova.server.vda_read_req', value=1157.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-1',
'availability_zone:nova'],
hostname=u'4d7cb923-788f-4b61-9061-abfc576ecc1a')
aggregator.assert_metric('openstack.nova.server.vda_read_req', value=878.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-7',
'availability_zone:nova'],
hostname=u'1cc21586-8d43-40ea-bdc9-6f54a79957b4')
aggregator.assert_metric('openstack.nova.server.vda_read_req', value=1128.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-4',
'availability_zone:nova'],
hostname=u'7e622c28-4b12-4a58-8ac2-4a2e854f84eb')
aggregator.assert_metric('openstack.nova.server.vda_read_req', value=875.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:jenga',
'availability_zone:nova'],
hostname=u'f2dd3f90-e738-4135-84d4-1a2d30d04929')
aggregator.assert_metric('openstack.nova.server.vda_read_req', value=878.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-2',
'availability_zone:nova'],
hostname=u'52561f29-e479-43d7-85de-944d29ef178d')
aggregator.assert_metric('openstack.nova.server.vda_read_req', value=424.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-2',
'availability_zone:nova'],
hostname=u'ff2f581c-5d03-4a27-a0ba-f102603fe38f')
aggregator.assert_metric('openstack.nova.server.vda_read_req', value=574.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-1',
'availability_zone:nova'],
hostname=u'7eaa751c-1e37-4963-a836-0a28bc283a9a')
aggregator.assert_metric('openstack.nova.server.vda_read_req', value=424.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:blacklist',
'availability_zone:nova'],
hostname=u'7324440d-915b-4e12-8b85-ec8c9a524d6c')
aggregator.assert_metric('openstack.nova.server.vda_read_req', value=878.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:finalDestination-5',
'availability_zone:nova'],
hostname=u'5357e70e-f12c-4bb7-85a2-b40d642a7e92')
aggregator.assert_metric('openstack.nova.server.vda_read', value=20160512.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-6',
'availability_zone:nova'],
hostname=u'acb4197c-f54e-488e-a40a-1b7f59cc9117')
aggregator.assert_metric('openstack.nova.server.vda_read', value=20432896.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:jnrgjoner',
'availability_zone:nova'],
hostname=u'b3c8eee3-7e22-4a7c-9745-759073673cbe')
aggregator.assert_metric('openstack.nova.server.vda_read', value=15403008.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:ReadyServerOne',
'availability_zone:nova'],
hostname=u'412c79b2-25f2-44d6-8e3b-be4baee11a7f')
aggregator.assert_metric('openstack.nova.server.vda_read', value=20445184.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-8',
'availability_zone:nova'],
hostname=u'836f724f-0028-4dc0-b9bd-e0843d767ca2')
aggregator.assert_metric('openstack.nova.server.vda_read', value=20160512.0,
tags=['nova_managed_server', 'project_name:testProj1',
'hypervisor:compute4.openstack.local', 'server_name:blacklistServer',
'availability_zone:nova'],
hostname=u'57030997-f1b5-4f79-9429-8cb285318633')
aggregator.assert_metric('openstack.nova.server.vda_read', value=20473856.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:HoneyIShrunkTheServer', 'availability_zone:nova'],
hostname=u'1b7a987f-c4fb-4b6b-aad9-3b461df2019d')
aggregator.assert_metric('openstack.nova.server.vda_read', value=20164608.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:anotherServer',
'availability_zone:nova'],
hostname=u'30888944-fb39-4590-9073-ef977ac1f039')
aggregator.assert_metric('openstack.nova.server.vda_read', value=20160512.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:moarserver-13',
'availability_zone:nova'],
hostname=u'4ceb4c69-a332-4b9d-907b-e99635aae644')
aggregator.assert_metric('openstack.nova.server.vda_read', value=20458496.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:Rocky',
'availability_zone:nova'],
hostname=u'2e1ce152-b19d-4c4a-9cc7-0d150fa97a18')
aggregator.assert_metric('openstack.nova.server.vda_read', value=20431872.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-1',
'availability_zone:nova'],
hostname=u'4d7cb923-788f-4b61-9061-abfc576ecc1a')
aggregator.assert_metric('openstack.nova.server.vda_read', value=20160512.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-7',
'availability_zone:nova'],
hostname=u'1cc21586-8d43-40ea-bdc9-6f54a79957b4')
aggregator.assert_metric('openstack.nova.server.vda_read', value=20446208.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-4',
'availability_zone:nova'],
hostname=u'7e622c28-4b12-4a58-8ac2-4a2e854f84eb')
aggregator.assert_metric('openstack.nova.server.vda_read', value=20153344.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:jenga',
'availability_zone:nova'],
hostname=u'f2dd3f90-e738-4135-84d4-1a2d30d04929')
aggregator.assert_metric('openstack.nova.server.vda_read', value=20160512.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-2',
'availability_zone:nova'],
hostname=u'52561f29-e479-43d7-85de-944d29ef178d')
aggregator.assert_metric('openstack.nova.server.vda_read', value=13560832.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-2',
'availability_zone:nova'],
hostname=u'ff2f581c-5d03-4a27-a0ba-f102603fe38f')
aggregator.assert_metric('openstack.nova.server.vda_read', value=15155200.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-1',
'availability_zone:nova'],
hostname=u'7eaa751c-1e37-4963-a836-0a28bc283a9a')
aggregator.assert_metric('openstack.nova.server.vda_read', value=13560832.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:blacklist',
'availability_zone:nova'],
hostname=u'7324440d-915b-4e12-8b85-ec8c9a524d6c')
aggregator.assert_metric('openstack.nova.server.vda_read', value=20160512.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:finalDestination-5',
'availability_zone:nova'],
hostname=u'5357e70e-f12c-4bb7-85a2-b40d642a7e92')
aggregator.assert_metric('openstack.nova.server.tx', value=1464.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:finalDestination-5',
'availability_zone:nova', 'interface:tapf86369c0-84'],
hostname=u'5357e70e-f12c-4bb7-85a2-b40d642a7e92')
aggregator.assert_metric('openstack.nova.server.memory_actual', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-6',
'availability_zone:nova'],
hostname=u'acb4197c-f54e-488e-a40a-1b7f59cc9117')
aggregator.assert_metric('openstack.nova.server.memory_actual', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:jnrgjoner',
'availability_zone:nova'],
hostname=u'b3c8eee3-7e22-4a7c-9745-759073673cbe')
aggregator.assert_metric('openstack.nova.server.memory_actual', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:ReadyServerOne',
'availability_zone:nova'],
hostname=u'412c79b2-25f2-44d6-8e3b-be4baee11a7f')
aggregator.assert_metric('openstack.nova.server.memory_actual', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-8',
'availability_zone:nova'],
hostname=u'836f724f-0028-4dc0-b9bd-e0843d767ca2')
aggregator.assert_metric('openstack.nova.server.memory_actual', value=1048576.0,
tags=['nova_managed_server', 'project_name:testProj1',
'hypervisor:compute4.openstack.local', 'server_name:blacklistServer',
'availability_zone:nova'],
hostname=u'57030997-f1b5-4f79-9429-8cb285318633')
aggregator.assert_metric('openstack.nova.server.memory_actual', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:HoneyIShrunkTheServer', 'availability_zone:nova'],
hostname=u'1b7a987f-c4fb-4b6b-aad9-3b461df2019d')
aggregator.assert_metric('openstack.nova.server.memory_actual', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:anotherServer',
'availability_zone:nova'],
hostname=u'30888944-fb39-4590-9073-ef977ac1f039')
aggregator.assert_metric('openstack.nova.server.memory_actual', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:moarserver-13',
'availability_zone:nova'],
hostname=u'4ceb4c69-a332-4b9d-907b-e99635aae644')
aggregator.assert_metric('openstack.nova.server.memory_actual', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:Rocky',
'availability_zone:nova'],
hostname=u'2e1ce152-b19d-4c4a-9cc7-0d150fa97a18')
aggregator.assert_metric('openstack.nova.server.memory_actual', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-1',
'availability_zone:nova'],
hostname=u'4d7cb923-788f-4b61-9061-abfc576ecc1a')
aggregator.assert_metric('openstack.nova.server.memory_actual', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-7',
'availability_zone:nova'],
hostname=u'1cc21586-8d43-40ea-bdc9-6f54a79957b4')
aggregator.assert_metric('openstack.nova.server.memory_actual', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-4',
'availability_zone:nova'],
hostname=u'7e622c28-4b12-4a58-8ac2-4a2e854f84eb')
aggregator.assert_metric('openstack.nova.server.memory_actual', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:jenga',
'availability_zone:nova'],
hostname=u'f2dd3f90-e738-4135-84d4-1a2d30d04929')
aggregator.assert_metric('openstack.nova.server.memory_actual', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-2',
'availability_zone:nova'],
hostname=u'52561f29-e479-43d7-85de-944d29ef178d')
aggregator.assert_metric('openstack.nova.server.memory_actual', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-2',
'availability_zone:nova'],
hostname=u'ff2f581c-5d03-4a27-a0ba-f102603fe38f')
aggregator.assert_metric('openstack.nova.server.memory_actual', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-1',
'availability_zone:nova'],
hostname=u'7eaa751c-1e37-4963-a836-0a28bc283a9a')
aggregator.assert_metric('openstack.nova.server.memory_actual', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:blacklist',
'availability_zone:nova'],
hostname=u'7324440d-915b-4e12-8b85-ec8c9a524d6c')
aggregator.assert_metric('openstack.nova.server.memory_actual', value=1048576.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:finalDestination-5',
'availability_zone:nova'],
hostname=u'5357e70e-f12c-4bb7-85a2-b40d642a7e92')
aggregator.assert_metric('openstack.nova.server.rx_errors', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:jenga',
'availability_zone:nova', 'interface:tap3fd8281c-97'],
hostname=u'f2dd3f90-e738-4135-84d4-1a2d30d04929')
aggregator.assert_metric('openstack.nova.server.tx_drop', value=0.0,
tags=['nova_managed_server', 'project_name:testProj1',
'hypervisor:compute4.openstack.local', 'server_name:blacklistServer',
'availability_zone:nova', 'interface:tap9bff9e73-2f'],
hostname=u'57030997-f1b5-4f79-9429-8cb285318633')
aggregator.assert_metric('openstack.nova.server.rx_drop', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-8',
'availability_zone:nova', 'interface:tap73364860-8e'],
hostname=u'836f724f-0028-4dc0-b9bd-e0843d767ca2')
aggregator.assert_metric('openstack.nova.server.rx_errors', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-2',
'availability_zone:nova', 'interface:tap39a71720-01'],
hostname=u'52561f29-e479-43d7-85de-944d29ef178d')
aggregator.assert_metric('openstack.nova.server.rx_drop', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:blacklist',
'availability_zone:nova', 'interface:tap702092ed-a5'],
hostname=u'7324440d-915b-4e12-8b85-ec8c9a524d6c')
aggregator.assert_metric('openstack.nova.server.vda_errors', value=-1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-6',
'availability_zone:nova'],
hostname=u'acb4197c-f54e-488e-a40a-1b7f59cc9117')
aggregator.assert_metric('openstack.nova.server.vda_errors', value=-1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:jnrgjoner',
'availability_zone:nova'],
hostname=u'b3c8eee3-7e22-4a7c-9745-759073673cbe')
aggregator.assert_metric('openstack.nova.server.vda_errors', value=-1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:ReadyServerOne',
'availability_zone:nova'],
hostname=u'412c79b2-25f2-44d6-8e3b-be4baee11a7f')
aggregator.assert_metric('openstack.nova.server.vda_errors', value=-1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-8',
'availability_zone:nova'],
hostname=u'836f724f-0028-4dc0-b9bd-e0843d767ca2')
aggregator.assert_metric('openstack.nova.server.vda_errors', value=-1.0,
tags=['nova_managed_server', 'project_name:testProj1',
'hypervisor:compute4.openstack.local', 'server_name:blacklistServer',
'availability_zone:nova'],
hostname=u'57030997-f1b5-4f79-9429-8cb285318633')
aggregator.assert_metric('openstack.nova.server.vda_errors', value=-1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:HoneyIShrunkTheServer', 'availability_zone:nova'],
hostname=u'1b7a987f-c4fb-4b6b-aad9-3b461df2019d')
aggregator.assert_metric('openstack.nova.server.vda_errors', value=-1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:anotherServer',
'availability_zone:nova'],
hostname=u'30888944-fb39-4590-9073-ef977ac1f039')
aggregator.assert_metric('openstack.nova.server.vda_errors', value=-1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:moarserver-13',
'availability_zone:nova'],
hostname=u'4ceb4c69-a332-4b9d-907b-e99635aae644')
aggregator.assert_metric('openstack.nova.server.vda_errors', value=-1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:Rocky',
'availability_zone:nova'],
hostname=u'2e1ce152-b19d-4c4a-9cc7-0d150fa97a18')
aggregator.assert_metric('openstack.nova.server.vda_errors', value=-1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-1',
'availability_zone:nova'],
hostname=u'4d7cb923-788f-4b61-9061-abfc576ecc1a')
aggregator.assert_metric('openstack.nova.server.vda_errors', value=-1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-7',
'availability_zone:nova'],
hostname=u'1cc21586-8d43-40ea-bdc9-6f54a79957b4')
aggregator.assert_metric('openstack.nova.server.vda_errors', value=-1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-4',
'availability_zone:nova'],
hostname=u'7e622c28-4b12-4a58-8ac2-4a2e854f84eb')
aggregator.assert_metric('openstack.nova.server.vda_errors', value=-1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:jenga',
'availability_zone:nova'],
hostname=u'f2dd3f90-e738-4135-84d4-1a2d30d04929')
aggregator.assert_metric('openstack.nova.server.vda_errors', value=-1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-2',
'availability_zone:nova'],
hostname=u'52561f29-e479-43d7-85de-944d29ef178d')
aggregator.assert_metric('openstack.nova.server.vda_errors', value=-1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-2',
'availability_zone:nova'],
hostname=u'ff2f581c-5d03-4a27-a0ba-f102603fe38f')
aggregator.assert_metric('openstack.nova.server.vda_errors', value=-1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-1',
'availability_zone:nova'],
hostname=u'7eaa751c-1e37-4963-a836-0a28bc283a9a')
aggregator.assert_metric('openstack.nova.server.vda_errors', value=-1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:blacklist',
'availability_zone:nova'],
hostname=u'7324440d-915b-4e12-8b85-ec8c9a524d6c')
aggregator.assert_metric('openstack.nova.server.vda_errors', value=-1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:finalDestination-5',
'availability_zone:nova'],
hostname=u'5357e70e-f12c-4bb7-85a2-b40d642a7e92')
aggregator.assert_metric('openstack.nova.server.tx_drop', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:blacklist',
'availability_zone:nova', 'interface:tap702092ed-a5'],
hostname=u'7324440d-915b-4e12-8b85-ec8c9a524d6c')
aggregator.assert_metric('openstack.nova.local_gb', value=48.0,
tags=['hypervisor:compute1.openstack.local', 'hypervisor_id:1',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.local_gb', value=48.0,
tags=['hypervisor:compute2.openstack.local', 'hypervisor_id:2',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.local_gb', value=48.0,
tags=['hypervisor:compute3.openstack.local', 'hypervisor_id:8',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.local_gb', value=48.0,
tags=['hypervisor:compute4.openstack.local', 'hypervisor_id:9',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.local_gb', value=48.0,
tags=['hypervisor:compute5.openstack.local', 'hypervisor_id:10',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.local_gb', value=48.0,
tags=['hypervisor:compute6.openstack.local', 'hypervisor_id:11',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.local_gb', value=48.0,
tags=['hypervisor:compute7.openstack.local', 'hypervisor_id:12',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.local_gb', value=48.0,
tags=['hypervisor:compute8.openstack.local', 'hypervisor_id:13',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.local_gb', value=48.0,
tags=['hypervisor:compute9.openstack.local', 'hypervisor_id:14',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.local_gb', value=48.0,
tags=['hypervisor:compute10.openstack.local', 'hypervisor_id:15',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.server.tx_errors', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-2',
'availability_zone:nova', 'interface:tap39a71720-01'],
hostname=u'52561f29-e479-43d7-85de-944d29ef178d')
aggregator.assert_metric('openstack.nova.server.rx_drop', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-1',
'availability_zone:nova', 'interface:tapf3e5d7a2-94'],
hostname=u'7eaa751c-1e37-4963-a836-0a28bc283a9a')
aggregator.assert_metric('openstack.nova.server.rx_errors', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-4',
'availability_zone:nova', 'interface:tapb488fc1e-3e'],
hostname=u'7e622c28-4b12-4a58-8ac2-4a2e854f84eb')
aggregator.assert_metric('openstack.nova.server.rx_drop', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-4',
'availability_zone:nova', 'interface:tapb488fc1e-3e'],
hostname=u'7e622c28-4b12-4a58-8ac2-4a2e854f84eb')
aggregator.assert_metric('openstack.nova.server.tx_errors', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-8',
'availability_zone:nova', 'interface:tap73364860-8e'],
hostname=u'836f724f-0028-4dc0-b9bd-e0843d767ca2')
aggregator.assert_metric('openstack.nova.server.tx', value=1464.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:anotherServer',
'availability_zone:nova', 'interface:tap56f02c54-da'],
hostname=u'30888944-fb39-4590-9073-ef977ac1f039')
aggregator.assert_metric('openstack.nova.server.tx', value=1464.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-7',
'availability_zone:nova', 'interface:tapc929a75b-94'],
hostname=u'1cc21586-8d43-40ea-bdc9-6f54a79957b4')
aggregator.assert_metric('openstack.nova.server.rx_errors', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:anotherServer',
'availability_zone:nova', 'interface:tap56f02c54-da'],
hostname=u'30888944-fb39-4590-9073-ef977ac1f039')
aggregator.assert_metric('openstack.nova.server.rx', value=6306.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:jnrgjoner',
'availability_zone:nova', 'interface:tap66a9ffb5-8f'],
hostname=u'b3c8eee3-7e22-4a7c-9745-759073673cbe')
aggregator.assert_metric('openstack.nova.limits.max_total_instances', value=10.0,
tags=['tenant_id:***************************4bfc1', 'project_name:service'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_total_instances', value=20.0,
tags=['tenant_id:***************************3fb11', 'project_name:admin'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_total_instances', value=20.0,
tags=['tenant_id:***************************d91a1', 'project_name:testProj2'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_total_instances', value=20.0,
tags=['tenant_id:***************************73dbe', 'project_name:testProj1'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_total_instances', value=20.0,
tags=['tenant_id:***************************147d1', 'project_name:12345'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_total_instances', value=20.0,
tags=['tenant_id:***************************44736', 'project_name:abcde'],
hostname='')
aggregator.assert_metric('openstack.nova.server.rx', value=5844.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:blacklist',
'availability_zone:nova', 'interface:tap702092ed-a5'],
hostname=u'7324440d-915b-4e12-8b85-ec8c9a524d6c')
aggregator.assert_metric('openstack.nova.server.rx_packets', value=154.0,
tags=['nova_managed_server', 'project_name:testProj1',
'hypervisor:compute4.openstack.local', 'server_name:blacklistServer',
'availability_zone:nova', 'interface:tap9bff9e73-2f'],
hostname=u'57030997-f1b5-4f79-9429-8cb285318633')
aggregator.assert_metric('openstack.nova.disk_available_least', value=14.0,
tags=['hypervisor:compute1.openstack.local', 'hypervisor_id:1',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.disk_available_least', value=-2.0,
tags=['hypervisor:compute2.openstack.local', 'hypervisor_id:2',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.disk_available_least', value=38.0,
tags=['hypervisor:compute3.openstack.local', 'hypervisor_id:8',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.disk_available_least', value=2.0,
tags=['hypervisor:compute4.openstack.local', 'hypervisor_id:9',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.disk_available_least', value=14.0,
tags=['hypervisor:compute5.openstack.local', 'hypervisor_id:10',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.disk_available_least', value=37.0,
tags=['hypervisor:compute6.openstack.local', 'hypervisor_id:11',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.disk_available_least', value=3.0,
tags=['hypervisor:compute7.openstack.local', 'hypervisor_id:12',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.disk_available_least', value=13.0,
tags=['hypervisor:compute8.openstack.local', 'hypervisor_id:13',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.disk_available_least', value=3.0,
tags=['hypervisor:compute9.openstack.local', 'hypervisor_id:14',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.disk_available_least', value=3.0,
tags=['hypervisor:compute10.openstack.local', 'hypervisor_id:15',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.server.tx', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-1',
'availability_zone:nova', 'interface:tapf3e5d7a2-94'],
hostname=u'7eaa751c-1e37-4963-a836-0a28bc283a9a')
aggregator.assert_metric('openstack.nova.limits.max_total_ram_size', value=51200.0,
tags=['tenant_id:***************************4bfc1', 'project_name:service'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_total_ram_size', value=51200.0,
tags=['tenant_id:***************************3fb11', 'project_name:admin'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_total_ram_size', value=51200.0,
tags=['tenant_id:***************************d91a1', 'project_name:testProj2'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_total_ram_size', value=51200.0,
tags=['tenant_id:***************************73dbe', 'project_name:testProj1'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_total_ram_size', value=51200.0,
tags=['tenant_id:***************************147d1', 'project_name:12345'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_total_ram_size', value=51200.0,
tags=['tenant_id:***************************44736', 'project_name:abcde'],
hostname='')
aggregator.assert_metric('openstack.nova.server.vda_write_req', value=84.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-6',
'availability_zone:nova'],
hostname=u'acb4197c-f54e-488e-a40a-1b7f59cc9117')
aggregator.assert_metric('openstack.nova.server.vda_write_req', value=105.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:jnrgjoner',
'availability_zone:nova'],
hostname=u'b3c8eee3-7e22-4a7c-9745-759073673cbe')
aggregator.assert_metric('openstack.nova.server.vda_write_req', value=32.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:ReadyServerOne',
'availability_zone:nova'],
hostname=u'412c79b2-25f2-44d6-8e3b-be4baee11a7f')
aggregator.assert_metric('openstack.nova.server.vda_write_req', value=106.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-8',
'availability_zone:nova'],
hostname=u'836f724f-0028-4dc0-b9bd-e0843d767ca2')
aggregator.assert_metric('openstack.nova.server.vda_write_req', value=82.0,
tags=['nova_managed_server', 'project_name:testProj1',
'hypervisor:compute4.openstack.local', 'server_name:blacklistServer',
'availability_zone:nova'],
hostname=u'57030997-f1b5-4f79-9429-8cb285318633')
aggregator.assert_metric('openstack.nova.server.vda_write_req', value=105.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:HoneyIShrunkTheServer', 'availability_zone:nova'],
hostname=u'1b7a987f-c4fb-4b6b-aad9-3b461df2019d')
aggregator.assert_metric('openstack.nova.server.vda_write_req', value=85.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:anotherServer',
'availability_zone:nova'],
hostname=u'30888944-fb39-4590-9073-ef977ac1f039')
aggregator.assert_metric('openstack.nova.server.vda_write_req', value=84.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:moarserver-13',
'availability_zone:nova'],
hostname=u'4ceb4c69-a332-4b9d-907b-e99635aae644')
aggregator.assert_metric('openstack.nova.server.vda_write_req', value=108.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:Rocky',
'availability_zone:nova'],
hostname=u'2e1ce152-b19d-4c4a-9cc7-0d150fa97a18')
aggregator.assert_metric('openstack.nova.server.vda_write_req', value=107.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-1',
'availability_zone:nova'],
hostname=u'4d7cb923-788f-4b61-9061-abfc576ecc1a')
aggregator.assert_metric('openstack.nova.server.vda_write_req', value=82.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-7',
'availability_zone:nova'],
hostname=u'1cc21586-8d43-40ea-bdc9-6f54a79957b4')
aggregator.assert_metric('openstack.nova.server.vda_write_req', value=105.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-4',
'availability_zone:nova'],
hostname=u'7e622c28-4b12-4a58-8ac2-4a2e854f84eb')
aggregator.assert_metric('openstack.nova.server.vda_write_req', value=89.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:jenga',
'availability_zone:nova'],
hostname=u'f2dd3f90-e738-4135-84d4-1a2d30d04929')
aggregator.assert_metric('openstack.nova.server.vda_write_req', value=84.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-2',
'availability_zone:nova'],
hostname=u'52561f29-e479-43d7-85de-944d29ef178d')
aggregator.assert_metric('openstack.nova.server.vda_write_req', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-2',
'availability_zone:nova'],
hostname=u'ff2f581c-5d03-4a27-a0ba-f102603fe38f')
aggregator.assert_metric('openstack.nova.server.vda_write_req', value=28.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-1',
'availability_zone:nova'],
hostname=u'7eaa751c-1e37-4963-a836-0a28bc283a9a')
aggregator.assert_metric('openstack.nova.server.vda_write_req', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:blacklist',
'availability_zone:nova'],
hostname=u'7324440d-915b-4e12-8b85-ec8c9a524d6c')
aggregator.assert_metric('openstack.nova.server.vda_write_req', value=83.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:finalDestination-5',
'availability_zone:nova'],
hostname=u'5357e70e-f12c-4bb7-85a2-b40d642a7e92')
aggregator.assert_metric('openstack.nova.server.tx_packets', value=9.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-6',
'availability_zone:nova', 'interface:tape690927f-80'],
hostname=u'acb4197c-f54e-488e-a40a-1b7f59cc9117')
aggregator.assert_metric('openstack.nova.server.memory_rss', value=145116.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-6',
'availability_zone:nova'],
hostname=u'acb4197c-f54e-488e-a40a-1b7f59cc9117')
aggregator.assert_metric('openstack.nova.server.memory_rss', value=160832.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:jnrgjoner',
'availability_zone:nova'],
hostname=u'b3c8eee3-7e22-4a7c-9745-759073673cbe')
aggregator.assert_metric('openstack.nova.server.memory_rss', value=147684.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:ReadyServerOne',
'availability_zone:nova'],
hostname=u'412c79b2-25f2-44d6-8e3b-be4baee11a7f')
aggregator.assert_metric('openstack.nova.server.memory_rss', value=148000.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-8',
'availability_zone:nova'],
hostname=u'836f724f-0028-4dc0-b9bd-e0843d767ca2')
aggregator.assert_metric('openstack.nova.server.memory_rss', value=141980.0,
tags=['nova_managed_server', 'project_name:testProj1',
'hypervisor:compute4.openstack.local', 'server_name:blacklistServer',
'availability_zone:nova'],
hostname=u'57030997-f1b5-4f79-9429-8cb285318633')
aggregator.assert_metric('openstack.nova.server.memory_rss', value=161108.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:HoneyIShrunkTheServer', 'availability_zone:nova'],
hostname=u'1b7a987f-c4fb-4b6b-aad9-3b461df2019d')
aggregator.assert_metric('openstack.nova.server.memory_rss', value=144728.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:anotherServer',
'availability_zone:nova'],
hostname=u'30888944-fb39-4590-9073-ef977ac1f039')
aggregator.assert_metric('openstack.nova.server.memory_rss', value=142012.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:moarserver-13',
'availability_zone:nova'],
hostname=u'4ceb4c69-a332-4b9d-907b-e99635aae644')
aggregator.assert_metric('openstack.nova.server.memory_rss', value=146064.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:Rocky',
'availability_zone:nova'],
hostname=u'2e1ce152-b19d-4c4a-9cc7-0d150fa97a18')
aggregator.assert_metric('openstack.nova.server.memory_rss', value=145892.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-1',
'availability_zone:nova'],
hostname=u'4d7cb923-788f-4b61-9061-abfc576ecc1a')
aggregator.assert_metric('openstack.nova.server.memory_rss', value=140812.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-7',
'availability_zone:nova'],
hostname=u'1cc21586-8d43-40ea-bdc9-6f54a79957b4')
aggregator.assert_metric('openstack.nova.server.memory_rss', value=149456.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-4',
'availability_zone:nova'],
hostname=u'7e622c28-4b12-4a58-8ac2-4a2e854f84eb')
aggregator.assert_metric('openstack.nova.server.memory_rss', value=146460.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:jenga',
'availability_zone:nova'],
hostname=u'f2dd3f90-e738-4135-84d4-1a2d30d04929')
aggregator.assert_metric('openstack.nova.server.memory_rss', value=142300.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-2',
'availability_zone:nova'],
hostname=u'52561f29-e479-43d7-85de-944d29ef178d')
aggregator.assert_metric('openstack.nova.server.memory_rss', value=146188.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-2',
'availability_zone:nova'],
hostname=u'ff2f581c-5d03-4a27-a0ba-f102603fe38f')
aggregator.assert_metric('openstack.nova.server.memory_rss', value=144460.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-1',
'availability_zone:nova'],
hostname=u'7eaa751c-1e37-4963-a836-0a28bc283a9a')
aggregator.assert_metric('openstack.nova.server.memory_rss', value=148752.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:blacklist',
'availability_zone:nova'],
hostname=u'7324440d-915b-4e12-8b85-ec8c9a524d6c')
aggregator.assert_metric('openstack.nova.server.memory_rss', value=143992.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:finalDestination-5',
'availability_zone:nova'],
hostname=u'5357e70e-f12c-4bb7-85a2-b40d642a7e92')
aggregator.assert_metric('openstack.nova.server.rx_packets', value=185.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-6',
'availability_zone:nova', 'interface:tape690927f-80'],
hostname=u'acb4197c-f54e-488e-a40a-1b7f59cc9117')
aggregator.assert_metric('openstack.nova.server.rx_drop', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:ReadyServerOne',
'availability_zone:nova', 'interface:tap8880f875-12'],
hostname=u'412c79b2-25f2-44d6-8e3b-be4baee11a7f')
aggregator.assert_metric('openstack.nova.server.tx', value=1464.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-2',
'availability_zone:nova', 'interface:tap39a71720-01'],
hostname=u'52561f29-e479-43d7-85de-944d29ef178d')
aggregator.assert_metric('openstack.nova.server.tx_packets', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local', 'server_name:server_take_zero-2',
'availability_zone:nova', 'interface:tapad123605-18'],
hostname=u'ff2f581c-5d03-4a27-a0ba-f102603fe38f')
aggregator.assert_metric('openstack.nova.server.rx', value=13788.0,
tags=['nova_managed_server', 'project_name:testProj1',
'hypervisor:compute4.openstack.local', 'server_name:blacklistServer',
'availability_zone:nova', 'interface:tap9bff9e73-2f'],
hostname=u'57030997-f1b5-4f79-9429-8cb285318633')
aggregator.assert_metric('openstack.nova.server.rx_packets', value=199.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-7',
'availability_zone:nova', 'interface:tapc929a75b-94'],
hostname=u'1cc21586-8d43-40ea-bdc9-6f54a79957b4')
aggregator.assert_metric('openstack.nova.memory_mb_used', value=4096.0,
tags=['hypervisor:compute1.openstack.local', 'hypervisor_id:1',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.memory_mb_used', value=5120.0,
tags=['hypervisor:compute2.openstack.local', 'hypervisor_id:2',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.memory_mb_used', value=2048.0,
tags=['hypervisor:compute3.openstack.local', 'hypervisor_id:8',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.memory_mb_used', value=5120.0,
tags=['hypervisor:compute4.openstack.local', 'hypervisor_id:9',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.memory_mb_used', value=4096.0,
tags=['hypervisor:compute5.openstack.local', 'hypervisor_id:10',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.memory_mb_used', value=2048.0,
tags=['hypervisor:compute6.openstack.local', 'hypervisor_id:11',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.memory_mb_used', value=5120.0,
tags=['hypervisor:compute7.openstack.local', 'hypervisor_id:12',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.memory_mb_used', value=4096.0,
tags=['hypervisor:compute8.openstack.local', 'hypervisor_id:13',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.memory_mb_used', value=2048.0,
tags=['hypervisor:compute9.openstack.local', 'hypervisor_id:14',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.memory_mb_used', value=5120.0,
tags=['hypervisor:compute10.openstack.local', 'hypervisor_id:15',
'virt_type:QEMU', 'status:enabled'],
hostname='')
aggregator.assert_metric('openstack.nova.server.tx_packets', value=9.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:anotherServer',
'availability_zone:nova', 'interface:tap56f02c54-da'],
hostname=u'30888944-fb39-4590-9073-ef977ac1f039')
aggregator.assert_metric('openstack.nova.server.tx', value=1464.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:finalDestination-6',
'availability_zone:nova', 'interface:tape690927f-80'],
hostname=u'acb4197c-f54e-488e-a40a-1b7f59cc9117')
aggregator.assert_metric('openstack.nova.server.rx_drop', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-7',
'availability_zone:nova', 'interface:tapc929a75b-94'],
hostname=u'1cc21586-8d43-40ea-bdc9-6f54a79957b4')
aggregator.assert_metric('openstack.nova.server.rx', value=17748.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:finalDestination-5',
'availability_zone:nova', 'interface:tapf86369c0-84'],
hostname=u'5357e70e-f12c-4bb7-85a2-b40d642a7e92')
aggregator.assert_metric('openstack.nova.server.rx_errors', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:HoneyIShrunkTheServer', 'availability_zone:nova',
'interface:tap9ac4ed56-d2'],
hostname=u'1b7a987f-c4fb-4b6b-aad9-3b461df2019d')
aggregator.assert_metric('openstack.nova.server.tx_packets', value=9.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:finalDestination-5',
'availability_zone:nova', 'interface:tapf86369c0-84'],
hostname=u'5357e70e-f12c-4bb7-85a2-b40d642a7e92')
aggregator.assert_metric('openstack.nova.limits.max_security_group_rules', value=20.0,
tags=['tenant_id:***************************4bfc1', 'project_name:service'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_security_group_rules', value=20.0,
tags=['tenant_id:***************************3fb11', 'project_name:admin'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_security_group_rules', value=20.0,
tags=['tenant_id:***************************d91a1', 'project_name:testProj2'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_security_group_rules', value=20.0,
tags=['tenant_id:***************************73dbe', 'project_name:testProj1'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_security_group_rules', value=20.0,
tags=['tenant_id:***************************147d1', 'project_name:12345'],
hostname='')
aggregator.assert_metric('openstack.nova.limits.max_security_group_rules', value=20.0,
tags=['tenant_id:***************************44736', 'project_name:abcde'],
hostname='')
aggregator.assert_metric('openstack.nova.server.rx_drop', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-2',
'availability_zone:nova', 'interface:tap39a71720-01'],
hostname=u'52561f29-e479-43d7-85de-944d29ef178d')
aggregator.assert_metric('openstack.nova.server.rx_drop', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:jnrgjoner',
'availability_zone:nova', 'interface:tap66a9ffb5-8f'],
hostname=u'b3c8eee3-7e22-4a7c-9745-759073673cbe')
aggregator.assert_metric('openstack.nova.server.tx', value=1464.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:HoneyIShrunkTheServer', 'availability_zone:nova',
'interface:tap9ac4ed56-d2'],
hostname=u'1b7a987f-c4fb-4b6b-aad9-3b461df2019d')
aggregator.assert_metric('openstack.nova.server.rx_drop', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:finalDestination-5',
'availability_zone:nova', 'interface:tapf86369c0-84'],
hostname=u'5357e70e-f12c-4bb7-85a2-b40d642a7e92')
aggregator.assert_metric('openstack.nova.server.tx', value=1464.0,
tags=['nova_managed_server', 'project_name:testProj1',
'hypervisor:compute4.openstack.local', 'server_name:blacklistServer',
'availability_zone:nova', 'interface:tap9bff9e73-2f'],
hostname=u'57030997-f1b5-4f79-9429-8cb285318633')
aggregator.assert_metric('openstack.nova.server.tx_packets', value=9.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-2',
'availability_zone:nova', 'interface:tap39a71720-01'],
hostname=u'52561f29-e479-43d7-85de-944d29ef178d')
aggregator.assert_metric('openstack.nova.server.rx', value=17826.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local', 'server_name:finalDestination-7',
'availability_zone:nova', 'interface:tapc929a75b-94'],
hostname=u'1cc21586-8d43-40ea-bdc9-6f54a79957b4')
aggregator.assert_metric('openstack.nova.server.tx', value=1464.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:Rocky',
'availability_zone:nova', 'interface:tapcb21dae0-46'],
hostname=u'2e1ce152-b19d-4c4a-9cc7-0d150fa97a18')
aggregator.assert_metric('openstack.nova.server.tx_drop', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-8',
'availability_zone:nova', 'interface:tap73364860-8e'],
hostname=u'836f724f-0028-4dc0-b9bd-e0843d767ca2')
aggregator.assert_metric('openstack.nova.server.tx_drop', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:ReadyServerOne',
'availability_zone:nova', 'interface:tap8880f875-12'],
hostname=u'412c79b2-25f2-44d6-8e3b-be4baee11a7f')
aggregator.assert_metric('openstack.nova.server.rx_errors', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-8',
'availability_zone:nova', 'interface:tap73364860-8e'],
hostname=u'836f724f-0028-4dc0-b9bd-e0843d767ca2')
aggregator.assert_metric('openstack.nova.server.rx_packets', value=174.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local', 'server_name:ReadyServerOne',
'availability_zone:nova', 'interface:tap8880f875-12'],
hostname=u'412c79b2-25f2-44d6-8e3b-be4baee11a7f')
aggregator.assert_metric('openstack.nova.server.tx', value=1464.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local',
'server_name:finalDestination-8', 'availability_zone:nova',
'interface:tap73364860-8e'],
hostname=u'836f724f-0028-4dc0-b9bd-e0843d767ca2')
aggregator.assert_metric('openstack.nova.server.tx', value=1464.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local',
'server_name:jenga', 'availability_zone:nova',
'interface:tap3fd8281c-97'],
hostname=u'f2dd3f90-e738-4135-84d4-1a2d30d04929')
aggregator.assert_metric('openstack.nova.server.rx', value=17646.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local',
'server_name:moarserver-13', 'availability_zone:nova',
'interface:tap69a50430-3b'],
hostname=u'4ceb4c69-a332-4b9d-907b-e99635aae644')
aggregator.assert_metric('openstack.nova.server.rx_errors', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local',
'server_name:blacklist', 'availability_zone:nova',
'interface:tap702092ed-a5'],
hostname=u'7324440d-915b-4e12-8b85-ec8c9a524d6c')
aggregator.assert_metric('openstack.nova.server.tx_drop', value=0.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local', 'server_name:moarserver-13',
'availability_zone:nova', 'interface:tap69a50430-3b'],
hostname=u'4ceb4c69-a332-4b9d-907b-e99635aae644')
aggregator.assert_metric('openstack.nova.server.rx_packets', value=171.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local', 'server_name:anotherServer',
'availability_zone:nova', 'interface:tap56f02c54-da'],
hostname=u'30888944-fb39-4590-9073-ef977ac1f039')
aggregator.assert_metric('openstack.nova.server.tx_packets', value=9.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local', 'server_name:finalDestination-4',
'availability_zone:nova', 'interface:tapb488fc1e-3e'],
hostname=u'7e622c28-4b12-4a58-8ac2-4a2e854f84eb')
aggregator.assert_metric('openstack.nova.server.flavor.disk', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local',
'server_name:server_take_zero-1', 'availability_zone:nova'],
hostname=u'7eaa751c-1e37-4963-a836-0a28bc283a9a')
aggregator.assert_metric('openstack.nova.server.flavor.disk', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local',
'server_name:finalDestination-2', 'availability_zone:nova'],
hostname=u'52561f29-e479-43d7-85de-944d29ef178d')
aggregator.assert_metric('openstack.nova.server.flavor.disk', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:ReadyServerOne', 'availability_zone:nova'],
hostname=u'412c79b2-25f2-44d6-8e3b-be4baee11a7f')
aggregator.assert_metric('openstack.nova.server.flavor.disk', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local',
'server_name:finalDestination-6', 'availability_zone:nova'],
hostname=u'acb4197c-f54e-488e-a40a-1b7f59cc9117')
aggregator.assert_metric('openstack.nova.server.flavor.disk', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local',
'server_name:finalDestination-4', 'availability_zone:nova'],
hostname=u'7e622c28-4b12-4a58-8ac2-4a2e854f84eb')
aggregator.assert_metric('openstack.nova.server.flavor.disk', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local',
'server_name:finalDestination-7', 'availability_zone:nova'],
hostname=u'1cc21586-8d43-40ea-bdc9-6f54a79957b4')
aggregator.assert_metric('openstack.nova.server.flavor.disk', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local', 'server_name:jenga',
'availability_zone:nova'],
hostname=u'f2dd3f90-e738-4135-84d4-1a2d30d04929')
aggregator.assert_metric('openstack.nova.server.flavor.disk', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local',
'server_name:moarserver-13', 'availability_zone:nova'],
hostname=u'4ceb4c69-a332-4b9d-907b-e99635aae644')
aggregator.assert_metric('openstack.nova.server.flavor.disk', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local',
'server_name:finalDestination-8', 'availability_zone:nova'],
hostname=u'836f724f-0028-4dc0-b9bd-e0843d767ca2')
aggregator.assert_metric('openstack.nova.server.flavor.disk', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:HoneyIShrunkTheServer', 'availability_zone:nova'],
hostname=u'1b7a987f-c4fb-4b6b-aad9-3b461df2019d')
aggregator.assert_metric('openstack.nova.server.flavor.disk', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local',
'server_name:server_take_zero-2', 'availability_zone:nova'],
hostname=u'ff2f581c-5d03-4a27-a0ba-f102603fe38f')
aggregator.assert_metric('openstack.nova.server.flavor.disk', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local',
'server_name:finalDestination-1', 'availability_zone:nova'],
hostname=u'4d7cb923-788f-4b61-9061-abfc576ecc1a')
aggregator.assert_metric('openstack.nova.server.flavor.disk', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local',
'server_name:finalDestination-5', 'availability_zone:nova'],
hostname=u'5357e70e-f12c-4bb7-85a2-b40d642a7e92')
aggregator.assert_metric('openstack.nova.server.flavor.disk', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local',
'server_name:blacklist', 'availability_zone:nova'],
hostname=u'7324440d-915b-4e12-8b85-ec8c9a524d6c')
aggregator.assert_metric('openstack.nova.server.flavor.disk', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local',
'server_name:anotherServer', 'availability_zone:nova'],
hostname=u'30888944-fb39-4590-9073-ef977ac1f039')
aggregator.assert_metric('openstack.nova.server.flavor.disk', value=1.0,
tags=['nova_managed_server', 'project_name:testProj1',
'hypervisor:compute4.openstack.local',
'server_name:blacklistServer', 'availability_zone:nova'],
hostname=u'57030997-f1b5-4f79-9429-8cb285318633')
aggregator.assert_metric('openstack.nova.server.flavor.disk', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local',
'server_name:Rocky', 'availability_zone:nova'],
hostname=u'2e1ce152-b19d-4c4a-9cc7-0d150fa97a18')
aggregator.assert_metric('openstack.nova.server.flavor.disk', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:jnrgjoner', 'availability_zone:nova'],
hostname=u'b3c8eee3-7e22-4a7c-9745-759073673cbe')
aggregator.assert_metric('openstack.nova.server.flavor.ram', value=512.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local',
'server_name:server_take_zero-1', 'availability_zone:nova'],
hostname=u'7eaa751c-1e37-4963-a836-0a28bc283a9a')
aggregator.assert_metric('openstack.nova.server.flavor.ram', value=512.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local',
'server_name:finalDestination-2', 'availability_zone:nova'],
hostname=u'52561f29-e479-43d7-85de-944d29ef178d')
aggregator.assert_metric('openstack.nova.server.flavor.ram', value=512.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:ReadyServerOne', 'availability_zone:nova'],
hostname=u'412c79b2-25f2-44d6-8e3b-be4baee11a7f')
aggregator.assert_metric('openstack.nova.server.flavor.ram', value=512.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local',
'server_name:finalDestination-6', 'availability_zone:nova'],
hostname=u'acb4197c-f54e-488e-a40a-1b7f59cc9117')
aggregator.assert_metric('openstack.nova.server.flavor.ram', value=512.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local',
'server_name:finalDestination-4', 'availability_zone:nova'],
hostname=u'7e622c28-4b12-4a58-8ac2-4a2e854f84eb')
aggregator.assert_metric('openstack.nova.server.flavor.ram', value=512.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local',
'server_name:finalDestination-7', 'availability_zone:nova'],
hostname=u'1cc21586-8d43-40ea-bdc9-6f54a79957b4')
aggregator.assert_metric('openstack.nova.server.flavor.ram', value=512.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local',
'server_name:jenga', 'availability_zone:nova'],
hostname=u'f2dd3f90-e738-4135-84d4-1a2d30d04929')
aggregator.assert_metric('openstack.nova.server.flavor.ram', value=512.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local',
'server_name:moarserver-13', 'availability_zone:nova'],
hostname=u'4ceb4c69-a332-4b9d-907b-e99635aae644')
aggregator.assert_metric('openstack.nova.server.flavor.ram', value=512.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local',
'server_name:finalDestination-8', 'availability_zone:nova'],
hostname=u'836f724f-0028-4dc0-b9bd-e0843d767ca2')
aggregator.assert_metric('openstack.nova.server.flavor.ram', value=512.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:HoneyIShrunkTheServer', 'availability_zone:nova'],
hostname=u'1b7a987f-c4fb-4b6b-aad9-3b461df2019d')
aggregator.assert_metric('openstack.nova.server.flavor.ram', value=512.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local',
'server_name:server_take_zero-2', 'availability_zone:nova'],
hostname=u'ff2f581c-5d03-4a27-a0ba-f102603fe38f')
aggregator.assert_metric('openstack.nova.server.flavor.ram', value=512.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local',
'server_name:finalDestination-1', 'availability_zone:nova'],
hostname=u'4d7cb923-788f-4b61-9061-abfc576ecc1a')
aggregator.assert_metric('openstack.nova.server.flavor.ram', value=512.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local',
'server_name:finalDestination-5', 'availability_zone:nova'],
hostname=u'5357e70e-f12c-4bb7-85a2-b40d642a7e92')
aggregator.assert_metric('openstack.nova.server.flavor.ram', value=512.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local',
'server_name:blacklist', 'availability_zone:nova'],
hostname=u'7324440d-915b-4e12-8b85-ec8c9a524d6c')
aggregator.assert_metric('openstack.nova.server.flavor.ram', value=512.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local',
'server_name:anotherServer', 'availability_zone:nova'],
hostname=u'30888944-fb39-4590-9073-ef977ac1f039')
aggregator.assert_metric('openstack.nova.server.flavor.ram', value=512.0,
tags=['nova_managed_server', 'project_name:testProj1',
'hypervisor:compute4.openstack.local',
'server_name:blacklistServer', 'availability_zone:nova'],
hostname=u'57030997-f1b5-4f79-9429-8cb285318633')
aggregator.assert_metric('openstack.nova.server.flavor.ram', value=512.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local',
'server_name:Rocky', 'availability_zone:nova'],
hostname=u'2e1ce152-b19d-4c4a-9cc7-0d150fa97a18')
aggregator.assert_metric('openstack.nova.server.flavor.ram', value=512.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:jnrgjoner', 'availability_zone:nova'],
hostname=u'b3c8eee3-7e22-4a7c-9745-759073673cbe')
aggregator.assert_metric('openstack.nova.server.flavor.vcpus', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local',
'server_name:server_take_zero-1', 'availability_zone:nova'],
hostname=u'7eaa751c-1e37-4963-a836-0a28bc283a9a')
aggregator.assert_metric('openstack.nova.server.flavor.vcpus', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local',
'server_name:finalDestination-2', 'availability_zone:nova'],
hostname=u'52561f29-e479-43d7-85de-944d29ef178d')
aggregator.assert_metric('openstack.nova.server.flavor.vcpus', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:ReadyServerOne', 'availability_zone:nova'],
hostname=u'412c79b2-25f2-44d6-8e3b-be4baee11a7f')
aggregator.assert_metric('openstack.nova.server.flavor.vcpus', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local',
'server_name:finalDestination-6', 'availability_zone:nova'],
hostname=u'acb4197c-f54e-488e-a40a-1b7f59cc9117')
aggregator.assert_metric('openstack.nova.server.flavor.vcpus', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local',
'server_name:finalDestination-4', 'availability_zone:nova'],
hostname=u'7e622c28-4b12-4a58-8ac2-4a2e854f84eb')
aggregator.assert_metric('openstack.nova.server.flavor.vcpus', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local',
'server_name:finalDestination-7', 'availability_zone:nova'],
hostname=u'1cc21586-8d43-40ea-bdc9-6f54a79957b4')
aggregator.assert_metric('openstack.nova.server.flavor.vcpus', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local',
'server_name:jenga', 'availability_zone:nova'],
hostname=u'f2dd3f90-e738-4135-84d4-1a2d30d04929')
aggregator.assert_metric('openstack.nova.server.flavor.vcpus', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local',
'server_name:moarserver-13', 'availability_zone:nova'],
hostname=u'4ceb4c69-a332-4b9d-907b-e99635aae644')
aggregator.assert_metric('openstack.nova.server.flavor.vcpus', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local',
'server_name:finalDestination-8', 'availability_zone:nova'],
hostname=u'836f724f-0028-4dc0-b9bd-e0843d767ca2')
aggregator.assert_metric('openstack.nova.server.flavor.vcpus', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:HoneyIShrunkTheServer', 'availability_zone:nova'],
hostname=u'1b7a987f-c4fb-4b6b-aad9-3b461df2019d')
aggregator.assert_metric('openstack.nova.server.flavor.vcpus', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local',
'server_name:server_take_zero-2', 'availability_zone:nova'],
hostname=u'ff2f581c-5d03-4a27-a0ba-f102603fe38f')
aggregator.assert_metric('openstack.nova.server.flavor.vcpus', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local',
'server_name:finalDestination-1', 'availability_zone:nova'],
hostname=u'4d7cb923-788f-4b61-9061-abfc576ecc1a')
aggregator.assert_metric('openstack.nova.server.flavor.vcpus', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local',
'server_name:finalDestination-5', 'availability_zone:nova'],
hostname=u'5357e70e-f12c-4bb7-85a2-b40d642a7e92')
aggregator.assert_metric('openstack.nova.server.flavor.vcpus', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local',
'server_name:blacklist', 'availability_zone:nova'],
hostname=u'7324440d-915b-4e12-8b85-ec8c9a524d6c')
aggregator.assert_metric('openstack.nova.server.flavor.vcpus', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local',
'server_name:anotherServer', 'availability_zone:nova'],
hostname=u'30888944-fb39-4590-9073-ef977ac1f039')
aggregator.assert_metric('openstack.nova.server.flavor.vcpus', value=1.0,
tags=['nova_managed_server', 'project_name:testProj1',
'hypervisor:compute4.openstack.local',
'server_name:blacklistServer', 'availability_zone:nova'],
hostname=u'57030997-f1b5-4f79-9429-8cb285318633')
aggregator.assert_metric('openstack.nova.server.flavor.vcpus', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local',
'server_name:Rocky', 'availability_zone:nova'],
hostname=u'2e1ce152-b19d-4c4a-9cc7-0d150fa97a18')
aggregator.assert_metric('openstack.nova.server.flavor.vcpus', value=1.0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:jnrgjoner', 'availability_zone:nova'],
hostname=u'b3c8eee3-7e22-4a7c-9745-759073673cbe')
aggregator.assert_metric('openstack.nova.server.flavor.ephemeral', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local',
'server_name:server_take_zero-1', 'availability_zone:nova'],
hostname=u'7eaa751c-1e37-4963-a836-0a28bc283a9a')
aggregator.assert_metric('openstack.nova.server.flavor.ephemeral', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local',
'server_name:finalDestination-2', 'availability_zone:nova'],
hostname=u'52561f29-e479-43d7-85de-944d29ef178d')
aggregator.assert_metric('openstack.nova.server.flavor.ephemeral', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:ReadyServerOne', 'availability_zone:nova'],
hostname=u'412c79b2-25f2-44d6-8e3b-be4baee11a7f')
aggregator.assert_metric('openstack.nova.server.flavor.ephemeral', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local',
'server_name:finalDestination-6', 'availability_zone:nova'],
hostname=u'acb4197c-f54e-488e-a40a-1b7f59cc9117')
aggregator.assert_metric('openstack.nova.server.flavor.ephemeral', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local',
'server_name:finalDestination-4', 'availability_zone:nova'],
hostname=u'7e622c28-4b12-4a58-8ac2-4a2e854f84eb')
aggregator.assert_metric('openstack.nova.server.flavor.ephemeral', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local',
'server_name:finalDestination-7', 'availability_zone:nova'],
hostname=u'1cc21586-8d43-40ea-bdc9-6f54a79957b4')
aggregator.assert_metric('openstack.nova.server.flavor.ephemeral', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local',
'server_name:jenga', 'availability_zone:nova'],
hostname=u'f2dd3f90-e738-4135-84d4-1a2d30d04929')
aggregator.assert_metric('openstack.nova.server.flavor.ephemeral', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local',
'server_name:moarserver-13', 'availability_zone:nova'],
hostname=u'4ceb4c69-a332-4b9d-907b-e99635aae644')
aggregator.assert_metric('openstack.nova.server.flavor.ephemeral', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local',
'server_name:finalDestination-8', 'availability_zone:nova'],
hostname=u'836f724f-0028-4dc0-b9bd-e0843d767ca2')
aggregator.assert_metric('openstack.nova.server.flavor.ephemeral', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:HoneyIShrunkTheServer', 'availability_zone:nova'],
hostname=u'1b7a987f-c4fb-4b6b-aad9-3b461df2019d')
aggregator.assert_metric('openstack.nova.server.flavor.ephemeral', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local',
'server_name:server_take_zero-2', 'availability_zone:nova'],
hostname=u'ff2f581c-5d03-4a27-a0ba-f102603fe38f')
aggregator.assert_metric('openstack.nova.server.flavor.ephemeral', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local',
'server_name:finalDestination-1', 'availability_zone:nova'],
hostname=u'4d7cb923-788f-4b61-9061-abfc576ecc1a')
aggregator.assert_metric('openstack.nova.server.flavor.ephemeral', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local',
'server_name:finalDestination-5', 'availability_zone:nova'],
hostname=u'5357e70e-f12c-4bb7-85a2-b40d642a7e92')
aggregator.assert_metric('openstack.nova.server.flavor.ephemeral', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local',
'server_name:blacklist', 'availability_zone:nova'],
hostname=u'7324440d-915b-4e12-8b85-ec8c9a524d6c')
aggregator.assert_metric('openstack.nova.server.flavor.ephemeral', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local',
'server_name:anotherServer', 'availability_zone:nova'],
hostname=u'30888944-fb39-4590-9073-ef977ac1f039')
aggregator.assert_metric('openstack.nova.server.flavor.ephemeral', value=0,
tags=['nova_managed_server', 'project_name:testProj1',
'hypervisor:compute4.openstack.local',
'server_name:blacklistServer', 'availability_zone:nova'],
hostname=u'57030997-f1b5-4f79-9429-8cb285318633')
aggregator.assert_metric('openstack.nova.server.flavor.ephemeral', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local',
'server_name:Rocky', 'availability_zone:nova'],
hostname=u'2e1ce152-b19d-4c4a-9cc7-0d150fa97a18')
aggregator.assert_metric('openstack.nova.server.flavor.ephemeral', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:jnrgjoner', 'availability_zone:nova'],
hostname=u'b3c8eee3-7e22-4a7c-9745-759073673cbe')
aggregator.assert_metric('openstack.nova.server.flavor.swap', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local',
'server_name:server_take_zero-1', 'availability_zone:nova'],
hostname=u'7eaa751c-1e37-4963-a836-0a28bc283a9a')
aggregator.assert_metric('openstack.nova.server.flavor.swap', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local',
'server_name:finalDestination-2', 'availability_zone:nova'],
hostname=u'52561f29-e479-43d7-85de-944d29ef178d')
aggregator.assert_metric('openstack.nova.server.flavor.swap', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:ReadyServerOne', 'availability_zone:nova'],
hostname=u'412c79b2-25f2-44d6-8e3b-be4baee11a7f')
aggregator.assert_metric('openstack.nova.server.flavor.swap', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local',
'server_name:finalDestination-6', 'availability_zone:nova'],
hostname=u'acb4197c-f54e-488e-a40a-1b7f59cc9117')
aggregator.assert_metric('openstack.nova.server.flavor.swap', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local',
'server_name:finalDestination-4', 'availability_zone:nova'],
hostname=u'7e622c28-4b12-4a58-8ac2-4a2e854f84eb')
aggregator.assert_metric('openstack.nova.server.flavor.swap', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute8.openstack.local',
'server_name:finalDestination-7', 'availability_zone:nova'],
hostname=u'1cc21586-8d43-40ea-bdc9-6f54a79957b4')
aggregator.assert_metric('openstack.nova.server.flavor.swap', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local',
'server_name:jenga', 'availability_zone:nova'],
hostname=u'f2dd3f90-e738-4135-84d4-1a2d30d04929')
aggregator.assert_metric('openstack.nova.server.flavor.swap', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local',
'server_name:moarserver-13', 'availability_zone:nova'],
hostname=u'4ceb4c69-a332-4b9d-907b-e99635aae644')
aggregator.assert_metric('openstack.nova.server.flavor.swap', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local',
'server_name:finalDestination-8', 'availability_zone:nova'],
hostname=u'836f724f-0028-4dc0-b9bd-e0843d767ca2')
aggregator.assert_metric('openstack.nova.server.flavor.swap', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:HoneyIShrunkTheServer', 'availability_zone:nova'],
hostname=u'1b7a987f-c4fb-4b6b-aad9-3b461df2019d')
aggregator.assert_metric('openstack.nova.server.flavor.swap', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute4.openstack.local',
'server_name:server_take_zero-2', 'availability_zone:nova'],
hostname=u'ff2f581c-5d03-4a27-a0ba-f102603fe38f')
aggregator.assert_metric('openstack.nova.server.flavor.swap', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local',
'server_name:finalDestination-1', 'availability_zone:nova'],
hostname=u'4d7cb923-788f-4b61-9061-abfc576ecc1a')
aggregator.assert_metric('openstack.nova.server.flavor.swap', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute5.openstack.local',
'server_name:finalDestination-5', 'availability_zone:nova'],
hostname=u'5357e70e-f12c-4bb7-85a2-b40d642a7e92')
aggregator.assert_metric('openstack.nova.server.flavor.swap', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute7.openstack.local',
'server_name:blacklist', 'availability_zone:nova'],
hostname=u'7324440d-915b-4e12-8b85-ec8c9a524d6c')
aggregator.assert_metric('openstack.nova.server.flavor.swap', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute10.openstack.local',
'server_name:anotherServer', 'availability_zone:nova'],
hostname=u'30888944-fb39-4590-9073-ef977ac1f039')
aggregator.assert_metric('openstack.nova.server.flavor.swap', value=0,
tags=['nova_managed_server', 'project_name:testProj1',
'hypervisor:compute4.openstack.local',
'server_name:blacklistServer', 'availability_zone:nova'],
hostname=u'57030997-f1b5-4f79-9429-8cb285318633')
aggregator.assert_metric('openstack.nova.server.flavor.swap', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute1.openstack.local',
'server_name:Rocky', 'availability_zone:nova'],
hostname=u'2e1ce152-b19d-4c4a-9cc7-0d150fa97a18')
aggregator.assert_metric('openstack.nova.server.flavor.swap', value=0,
tags=['nova_managed_server', 'project_name:admin',
'hypervisor:compute2.openstack.local',
'server_name:jnrgjoner', 'availability_zone:nova'],
hostname=u'b3c8eee3-7e22-4a7c-9745-759073673cbe')
aggregator.assert_metric('openstack.nova.hypervisor_load.15', value=0.14,
tags=['hypervisor:compute1.openstack.local', 'hypervisor_id:1', 'virt_type:QEMU',
'status:enabled'],
hostname='')
# The remaining hypervisor load assertions share the same tag layout, so they
# are expressed as a loop over (hypervisor, hypervisor_id) pairs (the
# hypervisor_load.15 assertion for compute1 precedes this block).
hypervisors = [
    ('compute1.openstack.local', '1'),
    ('compute2.openstack.local', '2'),
    ('compute3.openstack.local', '8'),
    ('compute4.openstack.local', '9'),
    ('compute5.openstack.local', '10'),
    ('compute6.openstack.local', '11'),
    ('compute7.openstack.local', '12'),
    ('compute8.openstack.local', '13'),
    ('compute9.openstack.local', '14'),
    ('compute10.openstack.local', '15'),
]
for metric, value, hosts in [
    ('openstack.nova.hypervisor_load.15', 0.14, hypervisors[1:]),
    ('openstack.nova.hypervisor_load.5', 0.12, hypervisors),
    ('openstack.nova.hypervisor_load.1', 0.2, hypervisors),
]:
    for hypervisor, hypervisor_id in hosts:
        aggregator.assert_metric(
            metric,
            value=value,
            tags=[
                'hypervisor:{}'.format(hypervisor),
                'hypervisor_id:{}'.format(hypervisor_id),
                'virt_type:QEMU',
                'status:enabled',
            ],
            hostname='',
        )
# Assert coverage for this check on this instance
aggregator.assert_all_metrics_covered()
# File: python/testData/completion/heavyStarPropagation/lib/_pkg1/_pkg1_0/_pkg1_0_1/_pkg1_0_1_1/_pkg1_0_1_1_0/_mod1_0_1_1_0_1.py
# (jnthn/intellij-community, Apache-2.0)
name1_0_1_1_0_1_0 = None
name1_0_1_1_0_1_1 = None
name1_0_1_1_0_1_2 = None
name1_0_1_1_0_1_3 = None
name1_0_1_1_0_1_4 = None

# File: ummon/metrics/base.py (matherm/ummon3, BSD-3-Clause)
class OfflineMetric:
    def __repr__(self):
        return str(self.__class__.__name__)


class OnlineMetric:
    def __repr__(self):
        return str(self.__class__.__name__)

# File: tools.py (tjards/reynolds_escort, MIT)
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Mon Dec 28 20:29:59 2020
This file defines some useful planar constraints
@author: tjards
"""
import numpy as np
def centroid(points):
    length = points.shape[0]
    sum_x = np.sum(points[:, 0])
    sum_y = np.sum(points[:, 1])
    sum_z = np.sum(points[:, 2])
    centroid = np.array((sum_x/length, sum_y/length, sum_z/length), ndmin=2)
    return centroid.transpose()
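For reference, `centroid` returns a 3x1 column vector. A minimal standalone check of the same computation (inlined here so the snippet runs without this module; the sample points are hypothetical):

```python
import numpy as np

# hypothetical sample: four points forming a unit square in the z=0 plane
pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
length = pts.shape[0]
c = np.array((np.sum(pts[:, 0])/length,
              np.sum(pts[:, 1])/length,
              np.sum(pts[:, 2])/length), ndmin=2).transpose()

# c is a 3x1 column vector containing the mean of each coordinate
assert c.shape == (3, 1)
assert np.allclose(c.ravel(), [0.5, 0.5, 0.0])
```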
def buildWall(wType, pos):
    # each wall type is defined by 3 points on the plane; the third point
    # is offset slightly (by 0.05) to keep the points non-collinear
    points = {
        'horizontal': (np.array([0, 0, pos]),
                       np.array([5, 10, pos]),
                       np.array([20, 30, pos + 0.05])),
        'vertical1':  (np.array([0, pos, 0]),
                       np.array([5, pos, 10]),
                       np.array([20, pos + 0.05, 30])),
        'vertical2':  (np.array([pos, 0, 0]),
                       np.array([pos, 5, 10]),
                       np.array([pos + 0.05, 20, 30])),
        'diagonal1a': (np.array([0, pos, 0]),
                       np.array([0, pos + 5, 5]),
                       np.array([-5, pos + 5, 5])),
        'diagonal1b': (np.array([0, pos, 0]),
                       np.array([0, pos - 5, 5]),
                       np.array([-5, pos - 5, 5])),
        'diagonal2a': (np.array([pos, 0, 0]),
                       np.array([pos - 5, 0, 5]),
                       np.array([pos - 5, -5, 5])),
        'diagonal2b': (np.array([pos, 0, 0]),
                       np.array([pos + 5, 0, 5]),
                       np.array([pos + 5, -5, 5])),
    }
    wallp1, wallp2, wallp3 = points[wType]
    # two vectors in the plane
    v1 = wallp3 - wallp1
    v2 = wallp2 - wallp1
    # vector normal to the plane, and the plane offset d in n.x = d
    wallcp = np.cross(v1, v2)
    walla, wallb, wallc = wallcp
    walld = np.dot(wallcp, wallp3)
    # pack the normal and a point on the plane into a 6x1 column
    walls = np.zeros((6, 1))
    walls[0:3, 0] = np.array(wallcp, ndmin=2)
    walls[3:6, 0] = np.array(wallp1, ndmin=2)
    # plane coefficients (a, b, c, d) for plotting
    walls_plots = np.zeros((4, 1))
    walls_plots[:, 0] = np.array([walla, wallb, wallc, walld])
    return walls, walls_plots
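A sanity check of the plane construction used in `buildWall`: every point of the plane has the same projection onto the normal, so p1 and p2 must satisfy the plane equation n.x = d computed from p3. The snippet re-derives the 'horizontal' case at pos=5 inline rather than importing this module:

```python
import numpy as np

pos = 5
wallp1 = np.array([0, 0, pos])
wallp2 = np.array([5, 10, pos])
wallp3 = np.array([20, 30, pos + 0.05])

# same construction as buildWall: normal from two in-plane vectors
n = np.cross(wallp3 - wallp1, wallp2 - wallp1)
d = np.dot(n, wallp3)

# p1 and p2 lie in the plane, so they satisfy n.x = d as well
assert np.isclose(np.dot(n, wallp1), d)
assert np.isclose(np.dot(n, wallp2), d)
```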
6ae5444598de0cd0bf77a515197cd9ecf63b9e9d | 177 | py | Python | test_example2.py | greatfirsty/hellopython | f12aacf36b8f208d6c5622ffd6b4c1927f37b45a | [
"Apache-2.0"
] | 1 | 2019-05-04T01:25:43.000Z | 2019-05-04T01:25:43.000Z | test_example2.py | greatfirsty/hellopython | f12aacf36b8f208d6c5622ffd6b4c1927f37b45a | [
"Apache-2.0"
] | null | null | null | test_example2.py | greatfirsty/hellopython | f12aacf36b8f208d6c5622ffd6b4c1927f37b45a | [
"Apache-2.0"
] | null | null | null | class Count:
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def add(self):
        return self.a + self.b

    def sub(self):
        return self.a - self.b
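A usage sketch for `Count`. The class is redefined inline so the snippet runs standalone, with `sub` implemented as subtraction, which is presumably the intent of the method name:

```python
class Count:
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def add(self):
        return self.a + self.b

    def sub(self):
        return self.a - self.b


c = Count(5, 3)
assert c.add() == 8
assert c.sub() == 2
```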
# File: hl7apy/v2_5/groups.py (ryoung29/hl7apy, MIT)
from hl7apy.utils import iteritems
from .segments import SEGMENTS
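Each entry in the `GROUPS` table below maps a group name to a `(composition, children)` pair, where every child is a list `[name, definition, (min, max), kind]`: `kind` is `'SEG'` for a segment or `'GRP'` for a nested group, and `max == -1` means the child may repeat without bound. A minimal illustration of reading one entry, with stub objects in place of the real `SEGMENTS` definitions:

```python
# stub in place of the real SEGMENTS table imported from .segments
SEGMENTS_STUB = {'IN1': object(), 'IN2': object()}

GROUPS_SAMPLE = {
    'ADR_A19_INSURANCE': ('sequence',
                          (['IN1', SEGMENTS_STUB['IN1'], (1, 1), 'SEG'],
                           ['IN2', SEGMENTS_STUB['IN2'], (0, 1), 'SEG'],)),
}

composition, children = GROUPS_SAMPLE['ADR_A19_INSURANCE']
assert composition == 'sequence'
for name, definition, (min_rep, max_rep), kind in children:
    # (1, 1) -> required exactly once; (0, -1) would mean optional, repeatable
    assert kind in ('SEG', 'GRP')
    assert max_rep == -1 or max_rep >= min_rep
```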
GROUPS = {
'ADR_A19_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, -1), 'SEG'],
['ROL', SEGMENTS['ROL'], (0, -1), 'SEG'],)),
'ADR_A19_PROCEDURE': ('sequence',
(['PR1', SEGMENTS['PR1'], (1, 1), 'SEG'],
['ROL', SEGMENTS['ROL'], (0, -1), 'SEG'],)),
'ADR_A19_QUERY_RESPONSE': ('sequence',
(['EVN', SEGMENTS['EVN'], (0, 1), 'SEG'],
['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['ROL', SEGMENTS['ROL'], (0, -1), 'SEG'],
['NK1', SEGMENTS['NK1'], (0, -1), 'SEG'],
['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],
['ROL', SEGMENTS['ROL'], (0, -1), 'SEG'],
['DB1', SEGMENTS['DB1'], (0, -1), 'SEG'],
['OBX', SEGMENTS['OBX'], (0, -1), 'SEG'],
['AL1', SEGMENTS['AL1'], (0, -1), 'SEG'],
['DG1', SEGMENTS['DG1'], (0, -1), 'SEG'],
['DRG', SEGMENTS['DRG'], (0, 1), 'SEG'],
['ADR_A19_PROCEDURE', None, (0, -1), 'GRP'],
['GT1', SEGMENTS['GT1'], (0, -1), 'SEG'],
['ADR_A19_INSURANCE', None, (0, -1), 'GRP'],
['ACC', SEGMENTS['ACC'], (0, 1), 'SEG'],
['UB1', SEGMENTS['UB1'], (0, 1), 'SEG'],
['UB2', SEGMENTS['UB2'], (0, 1), 'SEG'],)),
'ADT_A01_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, -1), 'SEG'],
['ROL', SEGMENTS['ROL'], (0, -1), 'SEG'],)),
'ADT_A01_PROCEDURE': ('sequence',
(['PR1', SEGMENTS['PR1'], (1, 1), 'SEG'],
['ROL', SEGMENTS['ROL'], (0, -1), 'SEG'],)),
'ADT_A03_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, -1), 'SEG'],
['ROL', SEGMENTS['ROL'], (0, -1), 'SEG'],)),
'ADT_A03_PROCEDURE': ('sequence',
(['PR1', SEGMENTS['PR1'], (1, 1), 'SEG'],
['ROL', SEGMENTS['ROL'], (0, -1), 'SEG'],)),
'ADT_A05_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, -1), 'SEG'],
['ROL', SEGMENTS['ROL'], (0, -1), 'SEG'],)),
'ADT_A05_PROCEDURE': ('sequence',
(['PR1', SEGMENTS['PR1'], (1, 1), 'SEG'],
['ROL', SEGMENTS['ROL'], (0, -1), 'SEG'],)),
'ADT_A06_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, -1), 'SEG'],
['ROL', SEGMENTS['ROL'], (0, -1), 'SEG'],)),
'ADT_A06_PROCEDURE': ('sequence',
(['PR1', SEGMENTS['PR1'], (1, 1), 'SEG'],
['ROL', SEGMENTS['ROL'], (0, -1), 'SEG'],)),
'ADT_A16_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, -1), 'SEG'],
['ROL', SEGMENTS['ROL'], (0, -1), 'SEG'],)),
'ADT_A16_PROCEDURE': ('sequence',
(['PR1', SEGMENTS['PR1'], (1, 1), 'SEG'],
['ROL', SEGMENTS['ROL'], (0, -1), 'SEG'],)),
'ADT_A39_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['MRG', SEGMENTS['MRG'], (1, 1), 'SEG'],
['PV1', SEGMENTS['PV1'], (0, 1), 'SEG'],)),
'ADT_A43_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['MRG', SEGMENTS['MRG'], (1, 1), 'SEG'],)),
'ADT_A45_MERGE_INFO': ('sequence',
(['MRG', SEGMENTS['MRG'], (1, 1), 'SEG'],
['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],)),
'BAR_P01_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, -1), 'SEG'],
['ROL', SEGMENTS['ROL'], (0, -1), 'SEG'],)),
'BAR_P01_PROCEDURE': ('sequence',
(['PR1', SEGMENTS['PR1'], (1, 1), 'SEG'],
['ROL', SEGMENTS['ROL'], (0, -1), 'SEG'],)),
'BAR_P01_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (0, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],
['ROL', SEGMENTS['ROL'], (0, -1), 'SEG'],
['DB1', SEGMENTS['DB1'], (0, -1), 'SEG'],
['OBX', SEGMENTS['OBX'], (0, -1), 'SEG'],
['AL1', SEGMENTS['AL1'], (0, -1), 'SEG'],
['DG1', SEGMENTS['DG1'], (0, -1), 'SEG'],
['DRG', SEGMENTS['DRG'], (0, 1), 'SEG'],
['BAR_P01_PROCEDURE', None, (0, -1), 'GRP'],
['GT1', SEGMENTS['GT1'], (0, -1), 'SEG'],
['NK1', SEGMENTS['NK1'], (0, -1), 'SEG'],
['BAR_P01_INSURANCE', None, (0, -1), 'GRP'],
['ACC', SEGMENTS['ACC'], (0, 1), 'SEG'],
['UB1', SEGMENTS['UB1'], (0, 1), 'SEG'],
['UB2', SEGMENTS['UB2'], (0, 1), 'SEG'],)),
'BAR_P02_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['PV1', SEGMENTS['PV1'], (0, 1), 'SEG'],
['DB1', SEGMENTS['DB1'], (0, -1), 'SEG'],)),
'BAR_P05_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, -1), 'SEG'],
['ROL', SEGMENTS['ROL'], (0, -1), 'SEG'],)),
'BAR_P05_PROCEDURE': ('sequence',
(['PR1', SEGMENTS['PR1'], (1, 1), 'SEG'],
['ROL', SEGMENTS['ROL'], (0, -1), 'SEG'],)),
'BAR_P05_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (0, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],
['ROL', SEGMENTS['ROL'], (0, -1), 'SEG'],
['DB1', SEGMENTS['DB1'], (0, -1), 'SEG'],
['OBX', SEGMENTS['OBX'], (0, -1), 'SEG'],
['AL1', SEGMENTS['AL1'], (0, -1), 'SEG'],
['DG1', SEGMENTS['DG1'], (0, -1), 'SEG'],
['DRG', SEGMENTS['DRG'], (0, 1), 'SEG'],
['BAR_P05_PROCEDURE', None, (0, -1), 'GRP'],
['GT1', SEGMENTS['GT1'], (0, -1), 'SEG'],
['NK1', SEGMENTS['NK1'], (0, -1), 'SEG'],
['BAR_P05_INSURANCE', None, (0, -1), 'GRP'],
['ACC', SEGMENTS['ACC'], (0, 1), 'SEG'],
['UB1', SEGMENTS['UB1'], (0, 1), 'SEG'],
['UB2', SEGMENTS['UB2'], (0, 1), 'SEG'],
['ABS', SEGMENTS['ABS'], (0, 1), 'SEG'],
['BLC', SEGMENTS['BLC'], (0, -1), 'SEG'],
['RMI', SEGMENTS['RMI'], (0, 1), 'SEG'],)),
'BAR_P06_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PV1', SEGMENTS['PV1'], (0, 1), 'SEG'],)),
'BAR_P10_PROCEDURE': ('sequence',
(['PR1', SEGMENTS['PR1'], (1, 1), 'SEG'],
['GP2', SEGMENTS['GP2'], (0, 1), 'SEG'],)),
'BAR_P12_PROCEDURE': ('sequence',
(['PR1', SEGMENTS['PR1'], (1, 1), 'SEG'],
['ROL', SEGMENTS['ROL'], (0, -1), 'SEG'],)),
'BPS_O29_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['BPS_O29_TIMING', None, (0, -1), 'GRP'],
['BPO', SEGMENTS['BPO'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['BPS_O29_PRODUCT', None, (0, -1), 'GRP'],)),
'BPS_O29_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['BPS_O29_PATIENT_VISIT', None, (0, 1), 'GRP'],)),
'BPS_O29_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'BPS_O29_PRODUCT': ('sequence',
(['BPX', SEGMENTS['BPX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'BPS_O29_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'BRP_O30_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['BRP_O30_TIMING', None, (0, -1), 'GRP'],
['BPO', SEGMENTS['BPO'], (0, 1), 'SEG'],
['BPX', SEGMENTS['BPX'], (0, -1), 'SEG'],)),
'BRP_O30_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['BRP_O30_ORDER', None, (0, -1), 'GRP'],)),
'BRP_O30_RESPONSE': ('sequence',
(['BRP_O30_PATIENT', None, (0, 1), 'GRP'],)),
'BRP_O30_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'BRT_O32_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['BRT_O32_TIMING', None, (0, -1), 'GRP'],
['BPO', SEGMENTS['BPO'], (0, 1), 'SEG'],
['BTX', SEGMENTS['BTX'], (0, -1), 'SEG'],)),
'BRT_O32_RESPONSE': ('sequence',
(['PID', SEGMENTS['PID'], (0, 1), 'SEG'],
['BRT_O32_ORDER', None, (0, -1), 'GRP'],)),
'BRT_O32_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'BTS_O31_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['BTS_O31_TIMING', None, (0, -1), 'GRP'],
['BPO', SEGMENTS['BPO'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['BTS_O31_PRODUCT_STATUS', None, (0, -1), 'GRP'],)),
'BTS_O31_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['BTS_O31_PATIENT_VISIT', None, (0, 1), 'GRP'],)),
'BTS_O31_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'BTS_O31_PRODUCT_STATUS': ('sequence',
(['BTX', SEGMENTS['BTX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'BTS_O31_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'CRM_C01_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PV1', SEGMENTS['PV1'], (0, 1), 'SEG'],
['CSR', SEGMENTS['CSR'], (1, 1), 'SEG'],
['CSP', SEGMENTS['CSP'], (0, -1), 'SEG'],)),
'CSU_C09_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['CSU_C09_VISIT', None, (0, 1), 'GRP'],
['CSR', SEGMENTS['CSR'], (1, 1), 'SEG'],
['CSU_C09_STUDY_PHASE', None, (1, -1), 'GRP'],)),
'CSU_C09_RX_ADMIN': ('sequence',
(['RXA', SEGMENTS['RXA'], (1, 1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, 1), 'SEG'],)),
'CSU_C09_STUDY_OBSERVATION': ('sequence',
(['ORC', SEGMENTS['ORC'], (0, 1), 'SEG'],
['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['CSU_C09_TIMING_QTY', None, (0, -1), 'GRP'],
['OBX', SEGMENTS['OBX'], (1, -1), 'SEG'],)),
'CSU_C09_STUDY_PHARM': ('sequence',
(['ORC', SEGMENTS['ORC'], (0, 1), 'SEG'],
['CSU_C09_RX_ADMIN', None, (1, -1), 'GRP'],)),
'CSU_C09_STUDY_PHASE': ('sequence',
(['CSP', SEGMENTS['CSP'], (0, 1), 'SEG'],
['CSU_C09_STUDY_SCHEDULE', None, (1, -1), 'GRP'],)),
'CSU_C09_STUDY_SCHEDULE': ('sequence',
(['CSS', SEGMENTS['CSS'], (0, 1), 'SEG'],
['CSU_C09_STUDY_OBSERVATION', None, (1, -1), 'GRP'],
['CSU_C09_STUDY_PHARM', None, (1, -1), 'GRP'],)),
'CSU_C09_TIMING_QTY': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'CSU_C09_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'DFT_P03_COMMON_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (0, 1), 'SEG'],
['DFT_P03_TIMING_QUANTITY', None, (0, -1), 'GRP'],
['DFT_P03_ORDER', None, (0, 1), 'GRP'],
['DFT_P03_OBSERVATION', None, (0, -1), 'GRP'],)),
'DFT_P03_FINANCIAL': ('sequence',
(['FT1', SEGMENTS['FT1'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, 1), 'SEG'],
['DFT_P03_FINANCIAL_PROCEDURE', None, (0, -1), 'GRP'],
['DFT_P03_FINANCIAL_COMMON_ORDER', None, (0, -1), 'GRP'],)),
'DFT_P03_FINANCIAL_COMMON_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (0, 1), 'SEG'],
['DFT_P03_FINANCIAL_TIMING_QUANTITY', None, (0, -1), 'GRP'],
['DFT_P03_FINANCIAL_ORDER', None, (0, 1), 'GRP'],
['DFT_P03_FINANCIAL_OBSERVATION', None, (0, -1), 'GRP'],)),
'DFT_P03_FINANCIAL_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'DFT_P03_FINANCIAL_ORDER': ('sequence',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'DFT_P03_FINANCIAL_PROCEDURE': ('sequence',
(['PR1', SEGMENTS['PR1'], (1, 1), 'SEG'],
['ROL', SEGMENTS['ROL'], (0, -1), 'SEG'],)),
'DFT_P03_FINANCIAL_TIMING_QUANTITY': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'DFT_P03_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, -1), 'SEG'],
['ROL', SEGMENTS['ROL'], (0, -1), 'SEG'],)),
'DFT_P03_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'DFT_P03_ORDER': ('sequence',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'DFT_P03_TIMING_QUANTITY': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'DFT_P11_COMMON_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (0, 1), 'SEG'],
['DFT_P11_TIMING_QUANTITY', None, (0, -1), 'GRP'],
['DFT_P11_ORDER', None, (0, 1), 'GRP'],
['DFT_P11_OBSERVATION', None, (0, -1), 'GRP'],)),
'DFT_P11_FINANCIAL': ('sequence',
(['FT1', SEGMENTS['FT1'], (1, 1), 'SEG'],
['DFT_P11_FINANCIAL_PROCEDURE', None, (0, -1), 'GRP'],
['DFT_P11_FINANCIAL_COMMON_ORDER', None, (0, -1), 'GRP'],
['DG1', SEGMENTS['DG1'], (0, -1), 'SEG'],
['DRG', SEGMENTS['DRG'], (0, 1), 'SEG'],
['GT1', SEGMENTS['GT1'], (0, -1), 'SEG'],
['DFT_P11_FINANCIAL_INSURANCE', None, (0, -1), 'GRP'],)),
'DFT_P11_FINANCIAL_COMMON_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (0, 1), 'SEG'],
['DFT_P11_FINANCIAL_TIMING_QUANTITY', None, (0, -1), 'GRP'],
['DFT_P11_FINANCIAL_ORDER', None, (0, 1), 'GRP'],
['DFT_P11_FINANCIAL_OBSERVATION', None, (0, -1), 'GRP'],)),
'DFT_P11_FINANCIAL_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, -1), 'SEG'],
['ROL', SEGMENTS['ROL'], (0, -1), 'SEG'],)),
'DFT_P11_FINANCIAL_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'DFT_P11_FINANCIAL_ORDER': ('sequence',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'DFT_P11_FINANCIAL_PROCEDURE': ('sequence',
(['PR1', SEGMENTS['PR1'], (1, 1), 'SEG'],
['ROL', SEGMENTS['ROL'], (0, -1), 'SEG'],)),
'DFT_P11_FINANCIAL_TIMING_QUANTITY': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'DFT_P11_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, -1), 'SEG'],
['ROL', SEGMENTS['ROL'], (0, -1), 'SEG'],)),
'DFT_P11_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'DFT_P11_ORDER': ('sequence',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'DFT_P11_TIMING_QUANTITY': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'DOC_T12_RESULT': ('sequence',
(['EVN', SEGMENTS['EVN'], (0, 1), 'SEG'],
['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['TXA', SEGMENTS['TXA'], (1, 1), 'SEG'],
['OBX', SEGMENTS['OBX'], (0, -1), 'SEG'],)),
'EAC_U07_COMMAND': ('sequence',
(['ECD', SEGMENTS['ECD'], (1, 1), 'SEG'],
['TQ1', SEGMENTS['TQ1'], (0, 1), 'SEG'],
['EAC_U07_SPECIMEN_CONTAINER', None, (0, 1), 'GRP'],
['CNS', SEGMENTS['CNS'], (0, 1), 'SEG'],)),
'EAC_U07_SPECIMEN_CONTAINER': ('sequence',
(['SAC', SEGMENTS['SAC'], (1, 1), 'SEG'],
['SPM', SEGMENTS['SPM'], (0, -1), 'SEG'],)),
'EAN_U09_NOTIFICATION': ('sequence',
(['NDS', SEGMENTS['NDS'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, 1), 'SEG'],)),
'EAR_U08_COMMAND_RESPONSE': ('sequence',
(['ECD', SEGMENTS['ECD'], (1, 1), 'SEG'],
['EAR_U08_SPECIMEN_CONTAINER', None, (0, 1), 'GRP'],
['ECR', SEGMENTS['ECR'], (1, 1), 'SEG'],)),
'EAR_U08_SPECIMEN_CONTAINER': ('sequence',
(['SAC', SEGMENTS['SAC'], (1, 1), 'SEG'],
['SPM', SEGMENTS['SPM'], (0, -1), 'SEG'],)),
'MDM_T01_COMMON_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['MDM_T01_TIMING', None, (0, -1), 'GRP'],
['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'MDM_T01_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'MDM_T02_COMMON_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['MDM_T02_TIMING', None, (0, -1), 'GRP'],
['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'MDM_T02_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'MDM_T02_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'MFN_M01_MF': ('sequence',
(['MFE', SEGMENTS['MFE'], (1, 1), 'SEG'],
['ANYHL7SEGMENT', SEGMENTS['ANYHL7SEGMENT'], (0, 1), 'SEG'],)),
'MFN_M02_MF_STAFF': ('sequence',
(['MFE', SEGMENTS['MFE'], (1, 1), 'SEG'],
['STF', SEGMENTS['STF'], (1, 1), 'SEG'],
['PRA', SEGMENTS['PRA'], (0, -1), 'SEG'],
['ORG', SEGMENTS['ORG'], (0, -1), 'SEG'],
['AFF', SEGMENTS['AFF'], (0, -1), 'SEG'],
['LAN', SEGMENTS['LAN'], (0, -1), 'SEG'],
['EDU', SEGMENTS['EDU'], (0, -1), 'SEG'],
['CER', SEGMENTS['CER'], (0, -1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'MFN_M03_MF_TEST': ('sequence',
(['MFE', SEGMENTS['MFE'], (1, 1), 'SEG'],
['OM1', SEGMENTS['OM1'], (1, 1), 'SEG'],
['ANYHL7SEGMENT', SEGMENTS['ANYHL7SEGMENT'], (1, 1), 'SEG'],)),
'MFN_M04_MF_CDM': ('sequence',
(['MFE', SEGMENTS['MFE'], (1, 1), 'SEG'],
['CDM', SEGMENTS['CDM'], (1, 1), 'SEG'],
['PRC', SEGMENTS['PRC'], (0, -1), 'SEG'],)),
'MFN_M05_MF_LOCATION': ('sequence',
(['MFE', SEGMENTS['MFE'], (1, 1), 'SEG'],
['LOC', SEGMENTS['LOC'], (1, 1), 'SEG'],
['LCH', SEGMENTS['LCH'], (0, -1), 'SEG'],
['LRL', SEGMENTS['LRL'], (0, -1), 'SEG'],
['MFN_M05_MF_LOC_DEPT', None, (1, -1), 'GRP'],)),
'MFN_M05_MF_LOC_DEPT': ('sequence',
(['LDP', SEGMENTS['LDP'], (1, 1), 'SEG'],
['LCH', SEGMENTS['LCH'], (0, -1), 'SEG'],
['LCC', SEGMENTS['LCC'], (0, -1), 'SEG'],)),
'MFN_M06_MF_CLIN_STUDY': ('sequence',
(['MFE', SEGMENTS['MFE'], (1, 1), 'SEG'],
['CM0', SEGMENTS['CM0'], (1, 1), 'SEG'],
['MFN_M06_MF_PHASE_SCHED_DETAIL', None, (0, -1), 'GRP'],)),
'MFN_M06_MF_PHASE_SCHED_DETAIL': ('sequence',
(['CM1', SEGMENTS['CM1'], (1, 1), 'SEG'],
['CM2', SEGMENTS['CM2'], (0, -1), 'SEG'],)),
'MFN_M07_MF_CLIN_STUDY_SCHED': ('sequence',
(['MFE', SEGMENTS['MFE'], (1, 1), 'SEG'],
['CM0', SEGMENTS['CM0'], (1, 1), 'SEG'],
['CM2', SEGMENTS['CM2'], (0, -1), 'SEG'],)),
'MFN_M08_MF_TEST_NUMERIC': ('sequence',
(['MFE', SEGMENTS['MFE'], (1, 1), 'SEG'],
['OM1', SEGMENTS['OM1'], (1, 1), 'SEG'],
['OM2', SEGMENTS['OM2'], (0, 1), 'SEG'],
['OM3', SEGMENTS['OM3'], (0, 1), 'SEG'],
['OM4', SEGMENTS['OM4'], (0, 1), 'SEG'],)),
'MFN_M09_MF_TEST_CATEGORICAL': ('sequence',
(['MFE', SEGMENTS['MFE'], (1, 1), 'SEG'],
['OM1', SEGMENTS['OM1'], (1, 1), 'SEG'],
['MFN_M09_MF_TEST_CAT_DETAIL', None, (0, 1), 'GRP'],)),
'MFN_M09_MF_TEST_CAT_DETAIL': ('sequence',
(['OM3', SEGMENTS['OM3'], (1, 1), 'SEG'],
['OM4', SEGMENTS['OM4'], (0, -1), 'SEG'],)),
'MFN_M10_MF_TEST_BATTERIES': ('sequence',
(['MFE', SEGMENTS['MFE'], (1, 1), 'SEG'],
['OM1', SEGMENTS['OM1'], (1, 1), 'SEG'],
['MFN_M10_MF_TEST_BATT_DETAIL', None, (0, 1), 'GRP'],)),
'MFN_M10_MF_TEST_BATT_DETAIL': ('sequence',
(['OM5', SEGMENTS['OM5'], (1, 1), 'SEG'],
['OM4', SEGMENTS['OM4'], (0, -1), 'SEG'],)),
'MFN_M11_MF_TEST_CALCULATED': ('sequence',
(['MFE', SEGMENTS['MFE'], (1, 1), 'SEG'],
['OM1', SEGMENTS['OM1'], (1, 1), 'SEG'],
['MFN_M11_MF_TEST_CALC_DETAIL', None, (0, 1), 'GRP'],)),
'MFN_M11_MF_TEST_CALC_DETAIL': ('sequence',
(['OM6', SEGMENTS['OM6'], (1, 1), 'SEG'],
['OM2', SEGMENTS['OM2'], (1, 1), 'SEG'],)),
'MFN_M12_MF_OBS_ATTRIBUTES': ('sequence',
(['MFE', SEGMENTS['MFE'], (1, 1), 'SEG'],
['OM1', SEGMENTS['OM1'], (1, 1), 'SEG'],
['OM7', SEGMENTS['OM7'], (0, 1), 'SEG'],)),
'MFN_M15_MF_INV_ITEM': ('sequence',
(['MFE', SEGMENTS['MFE'], (1, 1), 'SEG'],
['IIM', SEGMENTS['IIM'], (1, 1), 'SEG'],)),
'MFN_ZNN': ('sequence',
(['MSH', SEGMENTS['MSH'], (1, 1), 'SEG'],
['SFT', SEGMENTS['SFT'], (0, -1), 'SEG'],
['MFI', SEGMENTS['MFI'], (1, 1), 'SEG'],
['MFN_ZNN_MF_SITE_DEFINED', None, (1, -1), 'GRP'],)),
'MFN_ZNN_MF_SITE_DEFINED': ('sequence',
(['MFE', SEGMENTS['MFE'], (1, 1), 'SEG'],
['ANYHL7SEGMENT', SEGMENTS['ANYHL7SEGMENT'], (1, 1), 'SEG'],)),
'MFR_M01_MF_QUERY': ('sequence',
(['MFE', SEGMENTS['MFE'], (1, 1), 'SEG'],
['ANYHL7SEGMENT', SEGMENTS['ANYHL7SEGMENT'], (0, 1), 'SEG'],)),
'MFR_M04_MF_QUERY': ('sequence',
(['MFE', SEGMENTS['MFE'], (1, 1), 'SEG'],
['CDM', SEGMENTS['CDM'], (1, 1), 'SEG'],
['LCH', SEGMENTS['LCH'], (0, -1), 'SEG'],
['PRC', SEGMENTS['PRC'], (0, -1), 'SEG'],)),
'MFR_M05_MF_QUERY': ('sequence',
(['MFE', SEGMENTS['MFE'], (1, 1), 'SEG'],
['LOC', SEGMENTS['LOC'], (1, 1), 'SEG'],
['LCH', SEGMENTS['LCH'], (0, -1), 'SEG'],
['LRL', SEGMENTS['LRL'], (0, -1), 'SEG'],
['LDP', SEGMENTS['LDP'], (1, -1), 'SEG'],
['LCH', SEGMENTS['LCH'], (0, -1), 'SEG'],
['LCC', SEGMENTS['LCC'], (0, -1), 'SEG'],)),
'MFR_M06_MF_QUERY': ('sequence',
(['MFE', SEGMENTS['MFE'], (1, 1), 'SEG'],
['CM0', SEGMENTS['CM0'], (1, 1), 'SEG'],
['CM1', SEGMENTS['CM1'], (0, -1), 'SEG'],
['CM2', SEGMENTS['CM2'], (0, -1), 'SEG'],)),
'MFR_M07_MF_QUERY': ('sequence',
(['MFE', SEGMENTS['MFE'], (1, 1), 'SEG'],
['CM0', SEGMENTS['CM0'], (1, 1), 'SEG'],
['CM2', SEGMENTS['CM2'], (0, -1), 'SEG'],)),
'NMD_N02_APP_STATS': ('sequence',
(['NST', SEGMENTS['NST'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'NMD_N02_APP_STATUS': ('sequence',
(['NSC', SEGMENTS['NSC'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'NMD_N02_CLOCK': ('sequence',
(['NCK', SEGMENTS['NCK'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'NMD_N02_CLOCK_AND_STATS_WITH_NOTES': ('sequence',
(['NMD_N02_CLOCK', None, (0, 1), 'GRP'],
['NMD_N02_APP_STATS', None, (0, 1), 'GRP'],
['NMD_N02_APP_STATUS', None, (0, 1), 'GRP'],)),
'NMQ_N01_CLOCK_AND_STATISTICS': ('sequence',
(['NCK', SEGMENTS['NCK'], (0, 1), 'SEG'],
['NST', SEGMENTS['NST'], (0, 1), 'SEG'],
['NSC', SEGMENTS['NSC'], (0, 1), 'SEG'],)),
'NMQ_N01_QRY_WITH_DETAIL': ('sequence',
(['QRD', SEGMENTS['QRD'], (1, 1), 'SEG'],
['QRF', SEGMENTS['QRF'], (0, 1), 'SEG'],)),
'NMR_N01_CLOCK_AND_STATS_WITH_NOTES_ALT': ('sequence',
(['NCK', SEGMENTS['NCK'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['NST', SEGMENTS['NST'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['NSC', SEGMENTS['NSC'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'OMB_O27_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, 1), 'SEG'],)),
'OMB_O27_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'OMB_O27_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['OMB_O27_TIMING', None, (0, -1), 'GRP'],
['BPO', SEGMENTS['BPO'], (1, 1), 'SEG'],
['SPM', SEGMENTS['SPM'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['DG1', SEGMENTS['DG1'], (0, -1), 'SEG'],
['OMB_O27_OBSERVATION', None, (0, -1), 'GRP'],
['FT1', SEGMENTS['FT1'], (0, -1), 'SEG'],
['BLG', SEGMENTS['BLG'], (0, 1), 'SEG'],)),
'OMB_O27_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['OMB_O27_PATIENT_VISIT', None, (0, 1), 'GRP'],
['OMB_O27_INSURANCE', None, (0, -1), 'GRP'],
['GT1', SEGMENTS['GT1'], (0, 1), 'SEG'],
['AL1', SEGMENTS['AL1'], (0, -1), 'SEG'],)),
'OMB_O27_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'OMB_O27_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'OMD_O03_DIET': ('sequence',
(['ODS', SEGMENTS['ODS'], (1, -1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['OMD_O03_OBSERVATION', None, (0, -1), 'GRP'],)),
'OMD_O03_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, 1), 'SEG'],)),
'OMD_O03_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'OMD_O03_ORDER_DIET': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['OMD_O03_TIMING_DIET', None, (0, -1), 'GRP'],
['OMD_O03_DIET', None, (0, 1), 'GRP'],)),
'OMD_O03_ORDER_TRAY': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['OMD_O03_TIMING_TRAY', None, (0, -1), 'GRP'],
['ODT', SEGMENTS['ODT'], (1, -1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'OMD_O03_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['OMD_O03_PATIENT_VISIT', None, (0, 1), 'GRP'],
['OMD_O03_INSURANCE', None, (0, -1), 'GRP'],
['GT1', SEGMENTS['GT1'], (0, 1), 'SEG'],
['AL1', SEGMENTS['AL1'], (0, -1), 'SEG'],)),
'OMD_O03_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'OMD_O03_TIMING_DIET': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'OMD_O03_TIMING_TRAY': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'OMG_O19_CONTAINER': ('sequence',
(['SAC', SEGMENTS['SAC'], (1, 1), 'SEG'],
['OBX', SEGMENTS['OBX'], (0, -1), 'SEG'],)),
'OMG_O19_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, 1), 'SEG'],)),
'OMG_O19_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'OMG_O19_OBSERVATION_PRIOR': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'OMG_O19_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['OMG_O19_TIMING', None, (0, -1), 'GRP'],
['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['CTD', SEGMENTS['CTD'], (0, 1), 'SEG'],
['DG1', SEGMENTS['DG1'], (0, -1), 'SEG'],
['OMG_O19_OBSERVATION', None, (0, -1), 'GRP'],
['OMG_O19_SPECIMEN', None, (0, -1), 'GRP'],
['OMG_O19_PRIOR_RESULT', None, (0, -1), 'GRP'],
['FT1', SEGMENTS['FT1'], (0, -1), 'SEG'],
['CTI', SEGMENTS['CTI'], (0, -1), 'SEG'],
['BLG', SEGMENTS['BLG'], (0, 1), 'SEG'],)),
'OMG_O19_ORDER_PRIOR': ('sequence',
(['ORC', SEGMENTS['ORC'], (0, 1), 'SEG'],
['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['OMG_O19_TIMING_PRIOR', None, (0, -1), 'GRP'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['CTD', SEGMENTS['CTD'], (0, 1), 'SEG'],
['OMG_O19_OBSERVATION_PRIOR', None, (1, -1), 'GRP'],)),
'OMG_O19_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['NK1', SEGMENTS['NK1'], (0, -1), 'SEG'],
['OMG_O19_PATIENT_VISIT', None, (0, 1), 'GRP'],
['OMG_O19_INSURANCE', None, (0, -1), 'GRP'],
['GT1', SEGMENTS['GT1'], (0, 1), 'SEG'],
['AL1', SEGMENTS['AL1'], (0, -1), 'SEG'],)),
'OMG_O19_PATIENT_PRIOR': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],)),
'OMG_O19_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'OMG_O19_PATIENT_VISIT_PRIOR': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'OMG_O19_PRIOR_RESULT': ('sequence',
(['OMG_O19_PATIENT_PRIOR', None, (0, 1), 'GRP'],
['OMG_O19_PATIENT_VISIT_PRIOR', None, (0, 1), 'GRP'],
['AL1', SEGMENTS['AL1'], (0, -1), 'SEG'],
['OMG_O19_ORDER_PRIOR', None, (1, -1), 'GRP'],)),
'OMG_O19_SPECIMEN': ('sequence',
(['SPM', SEGMENTS['SPM'], (1, 1), 'SEG'],
['OBX', SEGMENTS['OBX'], (0, -1), 'SEG'],
['OMG_O19_CONTAINER', None, (0, -1), 'GRP'],)),
'OMG_O19_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'OMG_O19_TIMING_PRIOR': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'OMI_O23_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, 1), 'SEG'],)),
'OMI_O23_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'OMI_O23_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['OMI_O23_TIMING', None, (0, -1), 'GRP'],
['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['CTD', SEGMENTS['CTD'], (0, 1), 'SEG'],
['DG1', SEGMENTS['DG1'], (0, -1), 'SEG'],
['OMI_O23_OBSERVATION', None, (0, -1), 'GRP'],
['IPC', SEGMENTS['IPC'], (1, -1), 'SEG'],)),
'OMI_O23_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['OMI_O23_PATIENT_VISIT', None, (0, 1), 'GRP'],
['OMI_O23_INSURANCE', None, (0, -1), 'GRP'],
['GT1', SEGMENTS['GT1'], (0, 1), 'SEG'],
['AL1', SEGMENTS['AL1'], (0, -1), 'SEG'],)),
'OMI_O23_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'OMI_O23_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'OML_O21_CONTAINER': ('sequence',
(['SAC', SEGMENTS['SAC'], (1, 1), 'SEG'],
['OBX', SEGMENTS['OBX'], (0, -1), 'SEG'],)),
'OML_O21_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, 1), 'SEG'],)),
'OML_O21_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['TCD', SEGMENTS['TCD'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'OML_O21_OBSERVATION_PRIOR': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'OML_O21_OBSERVATION_REQUEST': ('sequence',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['TCD', SEGMENTS['TCD'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['CTD', SEGMENTS['CTD'], (0, 1), 'SEG'],
['DG1', SEGMENTS['DG1'], (0, -1), 'SEG'],
['OML_O21_OBSERVATION', None, (0, -1), 'GRP'],
['OML_O21_SPECIMEN', None, (0, -1), 'GRP'],
['OML_O21_PRIOR_RESULT', None, (0, -1), 'GRP'],)),
'OML_O21_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['OML_O21_TIIMING', None, (0, -1), 'GRP'],
['OML_O21_OBSERVATION_REQUEST', None, (0, 1), 'GRP'],
['FT1', SEGMENTS['FT1'], (0, -1), 'SEG'],
['CTI', SEGMENTS['CTI'], (0, -1), 'SEG'],
['BLG', SEGMENTS['BLG'], (0, 1), 'SEG'],)),
'OML_O21_ORDER_PRIOR': ('sequence',
(['ORC', SEGMENTS['ORC'], (0, 1), 'SEG'],
['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['OML_O21_TIMING_PRIOR', None, (0, -1), 'GRP'],
['OML_O21_OBSERVATION_PRIOR', None, (1, -1), 'GRP'],)),
'OML_O21_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['NK1', SEGMENTS['NK1'], (0, -1), 'SEG'],
['OML_O21_PATIENT_VISIT', None, (0, 1), 'GRP'],
['OML_O21_INSURANCE', None, (0, -1), 'GRP'],
['GT1', SEGMENTS['GT1'], (0, 1), 'SEG'],
['AL1', SEGMENTS['AL1'], (0, -1), 'SEG'],)),
'OML_O21_PATIENT_PRIOR': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],)),
'OML_O21_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'OML_O21_PATIENT_VISIT_PRIOR': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'OML_O21_PRIOR_RESULT': ('sequence',
(['OML_O21_PATIENT_PRIOR', None, (0, 1), 'GRP'],
['OML_O21_PATIENT_VISIT_PRIOR', None, (0, 1), 'GRP'],
['AL1', SEGMENTS['AL1'], (0, -1), 'SEG'],
['OML_O21_ORDER_PRIOR', None, (1, -1), 'GRP'],)),
'OML_O21_SPECIMEN': ('sequence',
(['SPM', SEGMENTS['SPM'], (1, 1), 'SEG'],
['OBX', SEGMENTS['OBX'], (0, -1), 'SEG'],
['OML_O21_CONTAINER', None, (0, -1), 'GRP'],)),
'OML_O21_TIIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'OML_O21_TIMING_PRIOR': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'OML_O33_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, 1), 'SEG'],)),
'OML_O33_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['TCD', SEGMENTS['TCD'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'OML_O33_OBSERVATION_PRIOR': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'OML_O33_OBSERVATION_REQUEST': ('sequence',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['TCD', SEGMENTS['TCD'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['DG1', SEGMENTS['DG1'], (0, -1), 'SEG'],
['OML_O33_OBSERVATION', None, (0, -1), 'GRP'],
['OML_O33_PRIOR_RESULT', None, (0, -1), 'GRP'],)),
'OML_O33_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['OML_O33_TIMING', None, (0, -1), 'GRP'],
['OML_O33_OBSERVATION_REQUEST', None, (0, 1), 'GRP'],
['FT1', SEGMENTS['FT1'], (0, -1), 'SEG'],
['CTI', SEGMENTS['CTI'], (0, -1), 'SEG'],
['BLG', SEGMENTS['BLG'], (0, 1), 'SEG'],)),
'OML_O33_ORDER_PRIOR': ('sequence',
(['ORC', SEGMENTS['ORC'], (0, 1), 'SEG'],
['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['OML_O33_TIMING_PRIOR', None, (0, -1), 'GRP'],
['OML_O33_OBSERVATION_PRIOR', None, (1, -1), 'GRP'],)),
'OML_O33_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['NK1', SEGMENTS['NK1'], (0, -1), 'SEG'],
['OML_O33_PATIENT_VISIT', None, (0, 1), 'GRP'],
['OML_O33_INSURANCE', None, (0, -1), 'GRP'],
['GT1', SEGMENTS['GT1'], (0, 1), 'SEG'],
['AL1', SEGMENTS['AL1'], (0, -1), 'SEG'],)),
'OML_O33_PATIENT_PRIOR': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],)),
'OML_O33_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'OML_O33_PATIENT_VISIT_PRIOR': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'OML_O33_PRIOR_RESULT': ('sequence',
(['OML_O33_PATIENT_PRIOR', None, (0, 1), 'GRP'],
['OML_O33_PATIENT_VISIT_PRIOR', None, (0, 1), 'GRP'],
['AL1', SEGMENTS['AL1'], (0, -1), 'SEG'],
['OML_O33_ORDER_PRIOR', None, (1, -1), 'GRP'],)),
'OML_O33_SPECIMEN': ('sequence',
(['SPM', SEGMENTS['SPM'], (1, 1), 'SEG'],
['OBX', SEGMENTS['OBX'], (0, -1), 'SEG'],
['SAC', SEGMENTS['SAC'], (0, -1), 'SEG'],
['OML_O33_ORDER', None, (1, -1), 'GRP'],)),
'OML_O33_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'OML_O33_TIMING_PRIOR': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'OML_O35_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, 1), 'SEG'],)),
'OML_O35_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['TCD', SEGMENTS['TCD'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'OML_O35_OBSERVATION_PRIOR': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'OML_O35_OBSERVATION_REQUEST': ('sequence',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['TCD', SEGMENTS['TCD'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['DG1', SEGMENTS['DG1'], (0, -1), 'SEG'],
['OML_O35_OBSERVATION', None, (0, -1), 'GRP'],
['OML_O35_PRIOR_RESULT', None, (0, -1), 'GRP'],)),
'OML_O35_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['OML_O35_TIIMING', None, (0, -1), 'GRP'],
['OML_O35_OBSERVATION_REQUEST', None, (0, 1), 'GRP'],
['FT1', SEGMENTS['FT1'], (0, -1), 'SEG'],
['CTI', SEGMENTS['CTI'], (0, -1), 'SEG'],
['BLG', SEGMENTS['BLG'], (0, 1), 'SEG'],)),
'OML_O35_ORDER_PRIOR': ('sequence',
(['ORC', SEGMENTS['ORC'], (0, 1), 'SEG'],
['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['OML_O35_TIMING_PRIOR', None, (0, -1), 'GRP'],
['OML_O35_OBSERVATION_PRIOR', None, (1, -1), 'GRP'],)),
'OML_O35_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['NK1', SEGMENTS['NK1'], (0, -1), 'SEG'],
['OML_O35_PATIENT_VISIT', None, (0, 1), 'GRP'],
['OML_O35_INSURANCE', None, (0, -1), 'GRP'],
['GT1', SEGMENTS['GT1'], (0, 1), 'SEG'],
['AL1', SEGMENTS['AL1'], (0, -1), 'SEG'],)),
'OML_O35_PATIENT_PRIOR': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],)),
'OML_O35_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'OML_O35_PATIENT_VISIT_PRIOR': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'OML_O35_PRIOR_RESULT': ('sequence',
(['OML_O35_PATIENT_PRIOR', None, (0, 1), 'GRP'],
['OML_O35_PATIENT_VISIT_PRIOR', None, (0, 1), 'GRP'],
['AL1', SEGMENTS['AL1'], (0, -1), 'SEG'],
['OML_O35_ORDER_PRIOR', None, (1, -1), 'GRP'],)),
'OML_O35_SPECIMEN': ('sequence',
(['SPM', SEGMENTS['SPM'], (1, 1), 'SEG'],
['OBX', SEGMENTS['OBX'], (0, -1), 'SEG'],
['OML_O35_SPECIMEN_CONTAINER', None, (1, -1), 'GRP'],)),
'OML_O35_SPECIMEN_CONTAINER': ('sequence',
(['SAC', SEGMENTS['SAC'], (1, 1), 'SEG'],
['OML_O35_ORDER', None, (1, -1), 'GRP'],)),
'OML_O35_TIIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'OML_O35_TIMING_PRIOR': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'OMN_O07_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, 1), 'SEG'],)),
'OMN_O07_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'OMN_O07_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['OMN_O07_TIMING', None, (0, -1), 'GRP'],
['RQD', SEGMENTS['RQD'], (1, 1), 'SEG'],
['RQ1', SEGMENTS['RQ1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['OMN_O07_OBSERVATION', None, (0, -1), 'GRP'],
['BLG', SEGMENTS['BLG'], (0, 1), 'SEG'],)),
'OMN_O07_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['OMN_O07_PATIENT_VISIT', None, (0, 1), 'GRP'],
['OMN_O07_INSURANCE', None, (0, -1), 'GRP'],
['GT1', SEGMENTS['GT1'], (0, 1), 'SEG'],
['AL1', SEGMENTS['AL1'], (0, -1), 'SEG'],)),
'OMN_O07_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'OMN_O07_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'OMP_O09_COMPONENT': ('sequence',
(['RXC', SEGMENTS['RXC'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'OMP_O09_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, 1), 'SEG'],)),
'OMP_O09_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'OMP_O09_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['OMP_O09_TIMING', None, (0, -1), 'GRP'],
['RXO', SEGMENTS['RXO'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['OMP_O09_COMPONENT', None, (0, -1), 'GRP'],
['OMP_O09_OBSERVATION', None, (0, -1), 'GRP'],
['FT1', SEGMENTS['FT1'], (0, -1), 'SEG'],
['BLG', SEGMENTS['BLG'], (0, 1), 'SEG'],)),
'OMP_O09_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['OMP_O09_PATIENT_VISIT', None, (0, 1), 'GRP'],
['OMP_O09_INSURANCE', None, (0, -1), 'GRP'],
['GT1', SEGMENTS['GT1'], (0, 1), 'SEG'],
['AL1', SEGMENTS['AL1'], (0, -1), 'SEG'],)),
'OMP_O09_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'OMP_O09_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'OMS_O05_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, 1), 'SEG'],)),
'OMS_O05_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'OMS_O05_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['OMS_O05_TIMING', None, (0, -1), 'GRP'],
['RQD', SEGMENTS['RQD'], (1, 1), 'SEG'],
['RQ1', SEGMENTS['RQ1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['OMS_O05_OBSERVATION', None, (0, -1), 'GRP'],
['BLG', SEGMENTS['BLG'], (0, 1), 'SEG'],)),
'OMS_O05_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['OMS_O05_PATIENT_VISIT', None, (0, 1), 'GRP'],
['OMS_O05_INSURANCE', None, (0, -1), 'GRP'],
['GT1', SEGMENTS['GT1'], (0, 1), 'SEG'],
['AL1', SEGMENTS['AL1'], (0, -1), 'SEG'],)),
'OMS_O05_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'OMS_O05_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'ORB_O28_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['ORB_O28_TIMING', None, (0, -1), 'GRP'],
['BPO', SEGMENTS['BPO'], (0, 1), 'SEG'],)),
'ORB_O28_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['ORB_O28_ORDER', None, (0, -1), 'GRP'],)),
'ORB_O28_RESPONSE': ('sequence',
(['ORB_O28_PATIENT', None, (0, 1), 'GRP'],)),
'ORB_O28_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'ORD_O04_ORDER_DIET': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['ORD_O04_TIMING_DIET', None, (0, -1), 'GRP'],
['ODS', SEGMENTS['ODS'], (0, -1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'ORD_O04_ORDER_TRAY': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['ORD_O04_TIMING_TRAY', None, (0, -1), 'GRP'],
['ODT', SEGMENTS['ODT'], (0, -1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'ORD_O04_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'ORD_O04_RESPONSE': ('sequence',
(['ORD_O04_PATIENT', None, (0, 1), 'GRP'],
['ORD_O04_ORDER_DIET', None, (1, -1), 'GRP'],
['ORD_O04_ORDER_TRAY', None, (0, -1), 'GRP'],)),
'ORD_O04_TIMING_DIET': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'ORD_O04_TIMING_TRAY': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'ORF_R04_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'ORF_R04_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (0, 1), 'SEG'],
['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['ORF_R04_TIMING_QTY', None, (0, -1), 'GRP'],
['CTD', SEGMENTS['CTD'], (0, 1), 'SEG'],
['ORF_R04_OBSERVATION', None, (1, -1), 'GRP'],
['CTI', SEGMENTS['CTI'], (0, -1), 'SEG'],)),
'ORF_R04_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'ORF_R04_QUERY_RESPONSE': ('sequence',
(['ORF_R04_PATIENT', None, (0, 1), 'GRP'],
['ORF_R04_ORDER', None, (1, -1), 'GRP'],)),
'ORF_R04_TIMING_QTY': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'ORG_O20_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['ORG_O20_TIMING', None, (0, -1), 'GRP'],
['OBR', SEGMENTS['OBR'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['CTI', SEGMENTS['CTI'], (0, -1), 'SEG'],
['ORG_O20_SPECIMEN', None, (0, -1), 'GRP'],)),
'ORG_O20_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'ORG_O20_RESPONSE': ('sequence',
(['ORG_O20_PATIENT', None, (0, 1), 'GRP'],
['ORG_O20_ORDER', None, (1, -1), 'GRP'],)),
'ORG_O20_SPECIMEN': ('sequence',
(['SPM', SEGMENTS['SPM'], (1, 1), 'SEG'],
['SAC', SEGMENTS['SAC'], (0, -1), 'SEG'],)),
'ORG_O20_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'ORI_O24_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['ORI_O24_TIMING', None, (0, -1), 'GRP'],
['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['IPC', SEGMENTS['IPC'], (1, -1), 'SEG'],)),
'ORI_O24_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'ORI_O24_RESPONSE': ('sequence',
(['ORI_O24_PATIENT', None, (0, 1), 'GRP'],
['ORI_O24_ORDER', None, (1, -1), 'GRP'],)),
'ORI_O24_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'ORL_O22_OBSERVATION_REQUEST': ('sequence',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['ORL_O22_SPECIMEN', None, (0, -1), 'GRP'],)),
'ORL_O22_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['ORL_O22_TIMING', None, (0, -1), 'GRP'],
['ORL_O22_OBSERVATION_REQUEST', None, (0, 1), 'GRP'],)),
'ORL_O22_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['ORL_O22_ORDER', None, (0, -1), 'GRP'],)),
'ORL_O22_RESPONSE': ('sequence',
(['ORL_O22_PATIENT', None, (0, 1), 'GRP'],)),
'ORL_O22_SPECIMEN': ('sequence',
(['SPM', SEGMENTS['SPM'], (1, 1), 'SEG'],
['SAC', SEGMENTS['SAC'], (0, -1), 'SEG'],)),
'ORL_O22_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'ORL_O34_OBSERVATION_REQUEST': ('sequence',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['ORL_O34_SPMSAC_SUPPGRP2', None, (0, -1), 'GRP'],)),
'ORL_O34_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['ORL_O34_TIMING', None, (0, -1), 'GRP'],
['ORL_O34_OBSERVATION_REQUEST', None, (0, 1), 'GRP'],)),
'ORL_O34_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['ORL_O34_SPECIMEN', None, (1, -1), 'GRP'],)),
'ORL_O34_RESPONSE': ('sequence',
(['ORL_O34_PATIENT', None, (0, 1), 'GRP'],)),
'ORL_O34_SPECIMEN': ('sequence',
(['SPM', SEGMENTS['SPM'], (1, 1), 'SEG'],
['OBX', SEGMENTS['OBX'], (0, -1), 'SEG'],
['SAC', SEGMENTS['SAC'], (0, -1), 'SEG'],
['ORL_O34_ORDER', None, (0, -1), 'GRP'],)),
'ORL_O34_SPMSAC_SUPPGRP2': ('sequence',
(['SPM', SEGMENTS['SPM'], (1, 1), 'SEG'],
['SAC', SEGMENTS['SAC'], (0, -1), 'SEG'],)),
'ORL_O34_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'ORL_O36_OBSERVATION_REQUEST': ('sequence',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],)),
'ORL_O36_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['ORL_O36_TIMING', None, (0, -1), 'GRP'],
['ORL_O36_OBSERVATION_REQUEST', None, (0, 1), 'GRP'],)),
'ORL_O36_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['ORL_O36_SPECIMEN', None, (1, -1), 'GRP'],)),
'ORL_O36_RESPONSE': ('sequence',
(['ORL_O36_PATIENT', None, (0, 1), 'GRP'],)),
'ORL_O36_SPECIMEN': ('sequence',
(['SPM', SEGMENTS['SPM'], (1, 1), 'SEG'],
['OBX', SEGMENTS['OBX'], (0, -1), 'SEG'],
['ORL_O36_SPECIMEN_CONTAINER', None, (1, -1), 'GRP'],)),
'ORL_O36_SPECIMEN_CONTAINER': ('sequence',
(['SAC', SEGMENTS['SAC'], (1, 1), 'SEG'],
['ORL_O36_ORDER', None, (0, -1), 'GRP'],)),
'ORL_O36_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'ORM_O01_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, 1), 'SEG'],)),
'ORM_O01_OBRRQDRQ1RXOODSODT_SUPPGRP': ('choice',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['RQD', SEGMENTS['RQD'], (1, 1), 'SEG'],
['RQ1', SEGMENTS['RQ1'], (1, 1), 'SEG'],
['RXO', SEGMENTS['RXO'], (1, 1), 'SEG'],
['ODS', SEGMENTS['ODS'], (1, 1), 'SEG'],
['ODT', SEGMENTS['ODT'], (1, 1), 'SEG'],)),
'ORM_O01_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'ORM_O01_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['ORM_O01_ORDER_DETAIL', None, (0, 1), 'GRP'],
['FT1', SEGMENTS['FT1'], (0, -1), 'SEG'],
['CTI', SEGMENTS['CTI'], (0, -1), 'SEG'],
['BLG', SEGMENTS['BLG'], (0, 1), 'SEG'],)),
'ORM_O01_ORDER_DETAIL': ('sequence',
(['ORM_O01_OBRRQDRQ1RXOODSODT_SUPPGRP', None, (1, 1), 'GRP'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['CTD', SEGMENTS['CTD'], (0, 1), 'SEG'],
['DG1', SEGMENTS['DG1'], (0, -1), 'SEG'],
['ORM_O01_OBSERVATION', None, (0, -1), 'GRP'],)),
'ORM_O01_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['ORM_O01_PATIENT_VISIT', None, (0, 1), 'GRP'],
['ORM_O01_INSURANCE', None, (0, -1), 'GRP'],
['GT1', SEGMENTS['GT1'], (0, 1), 'SEG'],
['AL1', SEGMENTS['AL1'], (0, -1), 'SEG'],)),
'ORM_O01_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'ORN_O08_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['ORN_O08_TIMING', None, (0, -1), 'GRP'],
['RQD', SEGMENTS['RQD'], (1, 1), 'SEG'],
['RQ1', SEGMENTS['RQ1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'ORN_O08_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'ORN_O08_RESPONSE': ('sequence',
(['ORN_O08_PATIENT', None, (0, 1), 'GRP'],
['ORN_O08_ORDER', None, (1, -1), 'GRP'],)),
'ORN_O08_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'ORP_O10_COMPONENT': ('sequence',
(['RXC', SEGMENTS['RXC'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'ORP_O10_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['ORP_O10_TIMING', None, (0, -1), 'GRP'],
['ORP_O10_ORDER_DETAIL', None, (0, 1), 'GRP'],)),
'ORP_O10_ORDER_DETAIL': ('sequence',
(['RXO', SEGMENTS['RXO'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['ORP_O10_COMPONENT', None, (0, -1), 'GRP'],)),
'ORP_O10_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'ORP_O10_RESPONSE': ('sequence',
(['ORP_O10_PATIENT', None, (0, 1), 'GRP'],
['ORP_O10_ORDER', None, (1, -1), 'GRP'],)),
'ORP_O10_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'ORR_O02_CHOICE': ('choice',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['RQD', SEGMENTS['RQD'], (1, 1), 'SEG'],
['RQ1', SEGMENTS['RQ1'], (1, 1), 'SEG'],
['RXO', SEGMENTS['RXO'], (1, 1), 'SEG'],
['ODS', SEGMENTS['ODS'], (1, 1), 'SEG'],
['ODT', SEGMENTS['ODT'], (1, 1), 'SEG'],)),
'ORR_O02_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['ORR_O02_CHOICE', None, (1, 1), 'GRP'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['CTI', SEGMENTS['CTI'], (0, -1), 'SEG'],)),
'ORR_O02_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'ORR_O02_RESPONSE': ('sequence',
(['ORR_O02_PATIENT', None, (0, 1), 'GRP'],
['ORR_O02_ORDER', None, (1, -1), 'GRP'],)),
'ORS_O06_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['ORS_O06_TIMING', None, (0, -1), 'GRP'],
['RQD', SEGMENTS['RQD'], (1, 1), 'SEG'],
['RQ1', SEGMENTS['RQ1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'ORS_O06_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'ORS_O06_RESPONSE': ('sequence',
(['ORS_O06_PATIENT', None, (0, 1), 'GRP'],
['ORS_O06_ORDER', None, (1, -1), 'GRP'],)),
'ORS_O06_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'ORU_R01_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'ORU_R01_ORDER_OBSERVATION': ('sequence',
(['ORC', SEGMENTS['ORC'], (0, 1), 'SEG'],
['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['ORU_R01_TIMING_QTY', None, (0, -1), 'GRP'],
['CTD', SEGMENTS['CTD'], (0, 1), 'SEG'],
['ORU_R01_OBSERVATION', None, (0, -1), 'GRP'],
['FT1', SEGMENTS['FT1'], (0, -1), 'SEG'],
['CTI', SEGMENTS['CTI'], (0, -1), 'SEG'],
['ORU_R01_SPECIMEN', None, (0, -1), 'GRP'],)),
'ORU_R01_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['NK1', SEGMENTS['NK1'], (0, -1), 'SEG'],
['ORU_R01_VISIT', None, (0, 1), 'GRP'],)),
'ORU_R01_PATIENT_RESULT': ('sequence',
(['ORU_R01_PATIENT', None, (0, 1), 'GRP'],
['ORU_R01_ORDER_OBSERVATION', None, (1, -1), 'GRP'],)),
'ORU_R01_SPECIMEN': ('sequence',
(['SPM', SEGMENTS['SPM'], (1, 1), 'SEG'],
['OBX', SEGMENTS['OBX'], (0, -1), 'SEG'],)),
'ORU_R01_TIMING_QTY': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'ORU_R01_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'ORU_R30_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'ORU_R30_TIMING_QTY': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'ORU_R30_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'OSR_Q06_CHOICE': ('choice',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['RQD', SEGMENTS['RQD'], (1, 1), 'SEG'],
['RQ1', SEGMENTS['RQ1'], (1, 1), 'SEG'],
['RXO', SEGMENTS['RXO'], (1, 1), 'SEG'],
['ODS', SEGMENTS['ODS'], (1, 1), 'SEG'],
['ODT', SEGMENTS['ODT'], (1, 1), 'SEG'],)),
'OSR_Q06_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['OSR_Q06_TIMING', None, (0, -1), 'GRP'],
['OSR_Q06_CHOICE', None, (1, 1), 'GRP'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['CTI', SEGMENTS['CTI'], (0, -1), 'SEG'],)),
'OSR_Q06_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'OSR_Q06_RESPONSE': ('sequence',
(['OSR_Q06_PATIENT', None, (0, 1), 'GRP'],
['OSR_Q06_ORDER', None, (1, -1), 'GRP'],)),
'OSR_Q06_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'OUL_R21_CONTAINER': ('sequence',
(['SAC', SEGMENTS['SAC'], (1, 1), 'SEG'],
['SID', SEGMENTS['SID'], (0, 1), 'SEG'],)),
'OUL_R21_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (0, 1), 'SEG'],
['TCD', SEGMENTS['TCD'], (0, 1), 'SEG'],
['SID', SEGMENTS['SID'], (0, -1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'OUL_R21_ORDER_OBSERVATION': ('sequence',
(['OUL_R21_CONTAINER', None, (0, 1), 'GRP'],
['ORC', SEGMENTS['ORC'], (0, 1), 'SEG'],
['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['OUL_R21_TIMING_QTY', None, (0, -1), 'GRP'],
['OUL_R21_OBSERVATION', None, (1, -1), 'GRP'],
['CTI', SEGMENTS['CTI'], (0, -1), 'SEG'],)),
'OUL_R21_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'OUL_R21_TIMING_QTY': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'OUL_R21_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'OUL_R22_CONTAINER': ('sequence',
(['SAC', SEGMENTS['SAC'], (1, 1), 'SEG'],
['INV', SEGMENTS['INV'], (0, 1), 'SEG'],)),
'OUL_R22_ORDER': ('sequence',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['ORC', SEGMENTS['ORC'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['OUL_R22_TIMING_QTY', None, (0, -1), 'GRP'],
['OUL_R22_RESULT', None, (0, -1), 'GRP'],
['CTI', SEGMENTS['CTI'], (0, -1), 'SEG'],)),
'OUL_R22_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'OUL_R22_RESULT': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['TCD', SEGMENTS['TCD'], (0, 1), 'SEG'],
['SID', SEGMENTS['SID'], (0, -1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'OUL_R22_SPECIMEN': ('sequence',
(['SPM', SEGMENTS['SPM'], (1, 1), 'SEG'],
['OBX', SEGMENTS['OBX'], (0, -1), 'SEG'],
['OUL_R22_CONTAINER', None, (0, -1), 'GRP'],
['OUL_R22_ORDER', None, (1, -1), 'GRP'],)),
'OUL_R22_TIMING_QTY': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'OUL_R22_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'OUL_R23_CONTAINER': ('sequence',
(['SAC', SEGMENTS['SAC'], (1, 1), 'SEG'],
['INV', SEGMENTS['INV'], (0, 1), 'SEG'],
['OUL_R23_ORDER', None, (1, -1), 'GRP'],)),
'OUL_R23_ORDER': ('sequence',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['ORC', SEGMENTS['ORC'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['OUL_R23_TIMING_QTY', None, (0, -1), 'GRP'],
['OUL_R23_RESULT', None, (0, -1), 'GRP'],
['CTI', SEGMENTS['CTI'], (0, -1), 'SEG'],)),
'OUL_R23_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'OUL_R23_RESULT': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['TCD', SEGMENTS['TCD'], (0, 1), 'SEG'],
['SID', SEGMENTS['SID'], (0, -1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'OUL_R23_SPECIMEN': ('sequence',
(['SPM', SEGMENTS['SPM'], (1, 1), 'SEG'],
['OBX', SEGMENTS['OBX'], (0, -1), 'SEG'],
['OUL_R23_CONTAINER', None, (1, -1), 'GRP'],)),
'OUL_R23_TIMING_QTY': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'OUL_R23_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'OUL_R24_CONTAINER': ('sequence',
(['SAC', SEGMENTS['SAC'], (1, 1), 'SEG'],
['INV', SEGMENTS['INV'], (0, 1), 'SEG'],)),
'OUL_R24_ORDER': ('sequence',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['ORC', SEGMENTS['ORC'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['OUL_R24_TIMING_QTY', None, (0, -1), 'GRP'],
['OUL_R24_SPECIMEN', None, (0, -1), 'GRP'],
['OUL_R24_RESULT', None, (0, -1), 'GRP'],
['CTI', SEGMENTS['CTI'], (0, -1), 'SEG'],)),
'OUL_R24_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'OUL_R24_RESULT': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['TCD', SEGMENTS['TCD'], (0, 1), 'SEG'],
['SID', SEGMENTS['SID'], (0, -1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'OUL_R24_SPECIMEN': ('sequence',
(['SPM', SEGMENTS['SPM'], (1, 1), 'SEG'],
['OBX', SEGMENTS['OBX'], (0, -1), 'SEG'],
['OUL_R24_CONTAINER', None, (0, -1), 'GRP'],)),
'OUL_R24_TIMING_QTY': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'OUL_R24_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'PEX_P07_ASSOCIATED_PERSON': ('sequence',
(['NK1', SEGMENTS['NK1'], (1, 1), 'SEG'],
['PEX_P07_ASSOCIATED_RX_ORDER', None, (0, 1), 'GRP'],
['PEX_P07_ASSOCIATED_RX_ADMIN', None, (0, -1), 'GRP'],
['PRB', SEGMENTS['PRB'], (0, -1), 'SEG'],
['OBX', SEGMENTS['OBX'], (0, -1), 'SEG'],)),
'PEX_P07_ASSOCIATED_RX_ADMIN': ('sequence',
(['RXA', SEGMENTS['RXA'], (1, 1), 'SEG'],
['RXR', SEGMENTS['RXR'], (0, 1), 'SEG'],)),
'PEX_P07_ASSOCIATED_RX_ORDER': ('sequence',
(['RXE', SEGMENTS['RXE'], (1, 1), 'SEG'],
['PEX_P07_NK1_TIMING_QTY', None, (1, -1), 'GRP'],
['RXR', SEGMENTS['RXR'], (0, -1), 'SEG'],)),
'PEX_P07_EXPERIENCE': ('sequence',
(['PES', SEGMENTS['PES'], (1, 1), 'SEG'],
['PEX_P07_PEX_OBSERVATION', None, (1, -1), 'GRP'],)),
'PEX_P07_NK1_TIMING_QTY': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'PEX_P07_PEX_CAUSE': ('sequence',
(['PCR', SEGMENTS['PCR'], (1, 1), 'SEG'],
['PEX_P07_RX_ORDER', None, (0, 1), 'GRP'],
['PEX_P07_RX_ADMINISTRATION', None, (0, -1), 'GRP'],
['PRB', SEGMENTS['PRB'], (0, -1), 'SEG'],
['OBX', SEGMENTS['OBX'], (0, -1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['PEX_P07_ASSOCIATED_PERSON', None, (0, 1), 'GRP'],
['PEX_P07_STUDY', None, (0, -1), 'GRP'],)),
'PEX_P07_PEX_OBSERVATION': ('sequence',
(['PEO', SEGMENTS['PEO'], (1, 1), 'SEG'],
['PEX_P07_PEX_CAUSE', None, (1, -1), 'GRP'],)),
'PEX_P07_RX_ADMINISTRATION': ('sequence',
(['RXA', SEGMENTS['RXA'], (1, 1), 'SEG'],
['RXR', SEGMENTS['RXR'], (0, 1), 'SEG'],)),
'PEX_P07_RX_ORDER': ('sequence',
(['RXE', SEGMENTS['RXE'], (1, 1), 'SEG'],
['PEX_P07_TIMING_QTY', None, (1, -1), 'GRP'],
['RXR', SEGMENTS['RXR'], (0, -1), 'SEG'],)),
'PEX_P07_STUDY': ('sequence',
(['CSR', SEGMENTS['CSR'], (1, 1), 'SEG'],
['CSP', SEGMENTS['CSP'], (0, -1), 'SEG'],)),
'PEX_P07_TIMING_QTY': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'PEX_P07_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'PGL_PC6_CHOICE': ('choice',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['ANYHL7SEGMENT', SEGMENTS['ANYHL7SEGMENT'], (1, 1), 'SEG'],)),
'PGL_PC6_GOAL': ('sequence',
(['GOL', SEGMENTS['GOL'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PGL_PC6_GOAL_ROLE', None, (0, -1), 'GRP'],
['PGL_PC6_PATHWAY', None, (0, -1), 'GRP'],
['PGL_PC6_OBSERVATION', None, (0, -1), 'GRP'],
['PGL_PC6_PROBLEM', None, (0, -1), 'GRP'],
['PGL_PC6_ORDER', None, (0, -1), 'GRP'],)),
'PGL_PC6_GOAL_ROLE': ('sequence',
(['ROL', SEGMENTS['ROL'], (1, 1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PGL_PC6_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'PGL_PC6_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['PGL_PC6_ORDER_DETAIL', None, (0, 1), 'GRP'],)),
'PGL_PC6_ORDER_DETAIL': ('sequence',
(['PGL_PC6_CHOICE', None, (1, 1), 'GRP'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PGL_PC6_ORDER_OBSERVATION', None, (0, -1), 'GRP'],)),
'PGL_PC6_ORDER_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PGL_PC6_PATHWAY': ('sequence',
(['PTH', SEGMENTS['PTH'], (1, 1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PGL_PC6_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'PGL_PC6_PROBLEM': ('sequence',
(['PRB', SEGMENTS['PRB'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PGL_PC6_PROBLEM_ROLE', None, (0, -1), 'GRP'],
['PGL_PC6_PROBLEM_OBSERVATION', None, (0, -1), 'GRP'],)),
'PGL_PC6_PROBLEM_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'PGL_PC6_PROBLEM_ROLE': ('sequence',
(['ROL', SEGMENTS['ROL'], (1, 1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PMU_B07_CERTIFICATE': ('sequence',
(['CER', SEGMENTS['CER'], (1, 1), 'SEG'],
['ROL', SEGMENTS['ROL'], (0, -1), 'SEG'],)),
'PPG_PCG_CHOICE': ('choice',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['ANYHL7SEGMENT', SEGMENTS['ANYHL7SEGMENT'], (1, 1), 'SEG'],)),
'PPG_PCG_GOAL': ('sequence',
(['GOL', SEGMENTS['GOL'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PPG_PCG_GOAL_ROLE', None, (0, -1), 'GRP'],
['PPG_PCG_GOAL_OBSERVATION', None, (0, -1), 'GRP'],
['PPG_PCG_PROBLEM', None, (0, -1), 'GRP'],
['PPG_PCG_ORDER', None, (0, -1), 'GRP'],)),
'PPG_PCG_GOAL_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'PPG_PCG_GOAL_ROLE': ('sequence',
(['ROL', SEGMENTS['ROL'], (1, 1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PPG_PCG_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['PPG_PCG_ORDER_DETAIL', None, (0, 1), 'GRP'],)),
'PPG_PCG_ORDER_DETAIL': ('sequence',
(['PPG_PCG_CHOICE', None, (1, 1), 'GRP'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PPG_PCG_ORDER_OBSERVATION', None, (0, -1), 'GRP'],)),
'PPG_PCG_ORDER_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PPG_PCG_PATHWAY': ('sequence',
(['PTH', SEGMENTS['PTH'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PPG_PCG_PATHWAY_ROLE', None, (0, -1), 'GRP'],
['PPG_PCG_GOAL', None, (0, -1), 'GRP'],)),
'PPG_PCG_PATHWAY_ROLE': ('sequence',
(['ROL', SEGMENTS['ROL'], (1, 1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PPG_PCG_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'PPG_PCG_PROBLEM': ('sequence',
(['PRB', SEGMENTS['PRB'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PPG_PCG_PROBLEM_ROLE', None, (0, -1), 'GRP'],
['PPG_PCG_PROBLEM_OBSERVATION', None, (0, -1), 'GRP'],)),
'PPG_PCG_PROBLEM_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'PPG_PCG_PROBLEM_ROLE': ('sequence',
(['ROL', SEGMENTS['ROL'], (1, 1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PPP_PCB_CHOICE': ('choice',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['ANYHL7SEGMENT', SEGMENTS['ANYHL7SEGMENT'], (1, 1), 'SEG'],)),
'PPP_PCB_GOAL': ('sequence',
(['GOL', SEGMENTS['GOL'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PPP_PCB_GOAL_ROLE', None, (0, -1), 'GRP'],
['PPP_PCB_GOAL_OBSERVATION', None, (0, -1), 'GRP'],)),
'PPP_PCB_GOAL_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'PPP_PCB_GOAL_ROLE': ('sequence',
(['ROL', SEGMENTS['ROL'], (1, 1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PPP_PCB_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['PPP_PCB_ORDER_DETAIL', None, (0, 1), 'GRP'],)),
'PPP_PCB_ORDER_DETAIL': ('sequence',
(['PPP_PCB_CHOICE', None, (1, 1), 'GRP'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PPP_PCB_ORDER_OBSERVATION', None, (0, -1), 'GRP'],)),
'PPP_PCB_ORDER_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PPP_PCB_PATHWAY': ('sequence',
(['PTH', SEGMENTS['PTH'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PPP_PCB_PATHWAY_ROLE', None, (0, -1), 'GRP'],
['PPP_PCB_PROBLEM', None, (0, -1), 'GRP'],)),
'PPP_PCB_PATHWAY_ROLE': ('sequence',
(['ROL', SEGMENTS['ROL'], (1, 1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PPP_PCB_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'PPP_PCB_PROBLEM': ('sequence',
(['PRB', SEGMENTS['PRB'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PPP_PCB_PROBLEM_ROLE', None, (0, -1), 'GRP'],
['PPP_PCB_PROBLEM_OBSERVATION', None, (0, -1), 'GRP'],
['PPP_PCB_GOAL', None, (0, -1), 'GRP'],
['PPP_PCB_ORDER', None, (0, -1), 'GRP'],)),
'PPP_PCB_PROBLEM_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'PPP_PCB_PROBLEM_ROLE': ('sequence',
(['ROL', SEGMENTS['ROL'], (1, 1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PPR_PC1_CHOICE': ('choice',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['ANYHL7SEGMENT', SEGMENTS['ANYHL7SEGMENT'], (1, 1), 'SEG'],)),
'PPR_PC1_GOAL': ('sequence',
(['GOL', SEGMENTS['GOL'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PPR_PC1_GOAL_ROLE', None, (0, -1), 'GRP'],
['PPR_PC1_GOAL_OBSERVATION', None, (0, -1), 'GRP'],)),
'PPR_PC1_GOAL_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'PPR_PC1_GOAL_ROLE': ('sequence',
(['ROL', SEGMENTS['ROL'], (1, 1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PPR_PC1_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['PPR_PC1_ORDER_DETAIL', None, (0, 1), 'GRP'],)),
'PPR_PC1_ORDER_DETAIL': ('sequence',
(['PPR_PC1_CHOICE', None, (1, 1), 'GRP'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PPR_PC1_ORDER_OBSERVATION', None, (0, -1), 'GRP'],)),
'PPR_PC1_ORDER_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PPR_PC1_PATHWAY': ('sequence',
(['PTH', SEGMENTS['PTH'], (1, 1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PPR_PC1_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'PPR_PC1_PROBLEM': ('sequence',
(['PRB', SEGMENTS['PRB'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PPR_PC1_PROBLEM_ROLE', None, (0, -1), 'GRP'],
['PPR_PC1_PATHWAY', None, (0, -1), 'GRP'],
['PPR_PC1_PROBLEM_OBSERVATION', None, (0, -1), 'GRP'],
['PPR_PC1_GOAL', None, (0, -1), 'GRP'],
['PPR_PC1_ORDER', None, (0, -1), 'GRP'],)),
'PPR_PC1_PROBLEM_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'PPR_PC1_PROBLEM_ROLE': ('sequence',
(['ROL', SEGMENTS['ROL'], (1, 1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PPT_PCL_CHOICE': ('choice',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['ANYHL7SEGMENT', SEGMENTS['ANYHL7SEGMENT'], (1, 1), 'SEG'],)),
'PPT_PCL_GOAL': ('sequence',
(['GOL', SEGMENTS['GOL'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PPT_PCL_GOAL_ROLE', None, (0, -1), 'GRP'],
['PPT_PCL_GOAL_OBSERVATION', None, (0, -1), 'GRP'],
['PPT_PCL_PROBLEM', None, (0, -1), 'GRP'],
['PPT_PCL_ORDER', None, (0, -1), 'GRP'],)),
'PPT_PCL_GOAL_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'PPT_PCL_GOAL_ROLE': ('sequence',
(['ROL', SEGMENTS['ROL'], (1, 1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PPT_PCL_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['PPT_PCL_ORDER_DETAIL', None, (0, 1), 'GRP'],)),
'PPT_PCL_ORDER_DETAIL': ('sequence',
(['PPT_PCL_CHOICE', None, (1, 1), 'GRP'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PPT_PCL_ORDER_OBSERVATION', None, (0, -1), 'GRP'],)),
'PPT_PCL_ORDER_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PPT_PCL_PATHWAY': ('sequence',
(['PTH', SEGMENTS['PTH'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PPT_PCL_PATHWAY_ROLE', None, (0, -1), 'GRP'],
['PPT_PCL_GOAL', None, (0, -1), 'GRP'],)),
'PPT_PCL_PATHWAY_ROLE': ('sequence',
(['ROL', SEGMENTS['ROL'], (1, 1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PPT_PCL_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PPT_PCL_PATIENT_VISIT', None, (0, 1), 'GRP'],
['PPT_PCL_PATHWAY', None, (1, -1), 'GRP'],)),
'PPT_PCL_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'PPT_PCL_PROBLEM': ('sequence',
(['PRB', SEGMENTS['PRB'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PPT_PCL_PROBLEM_ROLE', None, (0, -1), 'GRP'],
['PPT_PCL_PROBLEM_OBSERVATION', None, (0, -1), 'GRP'],)),
'PPT_PCL_PROBLEM_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'PPT_PCL_PROBLEM_ROLE': ('sequence',
(['ROL', SEGMENTS['ROL'], (1, 1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PPV_PCA_GOAL': ('sequence',
(['GOL', SEGMENTS['GOL'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PPV_PCA_GOAL_ROLE', None, (0, -1), 'GRP'],
['PPV_PCA_GOAL_PATHWAY', None, (0, -1), 'GRP'],
['PPV_PCA_GOAL_OBSERVATION', None, (0, -1), 'GRP'],
['PPV_PCA_PROBLEM', None, (0, -1), 'GRP'],
['PPV_PCA_ORDER', None, (0, -1), 'GRP'],)),
'PPV_PCA_GOAL_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'PPV_PCA_GOAL_PATHWAY': ('sequence',
(['PTH', SEGMENTS['PTH'], (1, 1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PPV_PCA_GOAL_ROLE': ('sequence',
(['ROL', SEGMENTS['ROL'], (1, 1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PPV_PCA_OBRANYHL7SEGMENT_SUPPGRP': ('choice',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['ANYHL7SEGMENT', SEGMENTS['ANYHL7SEGMENT'], (1, 1), 'SEG'],)),
'PPV_PCA_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['PPV_PCA_ORDER_DETAIL', None, (0, 1), 'GRP'],)),
'PPV_PCA_ORDER_DETAIL': ('sequence',
(['PPV_PCA_OBRANYHL7SEGMENT_SUPPGRP', None, (1, 1), 'GRP'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PPV_PCA_ORDER_OBSERVATION', None, (0, -1), 'GRP'],)),
'PPV_PCA_ORDER_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PPV_PCA_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PPV_PCA_PATIENT_VISIT', None, (0, 1), 'GRP'],
['PPV_PCA_GOAL', None, (1, -1), 'GRP'],)),
'PPV_PCA_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'PPV_PCA_PROBLEM': ('sequence',
(['PRB', SEGMENTS['PRB'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PPV_PCA_PROBLEM_ROLE', None, (0, -1), 'GRP'],
['PPV_PCA_PROBLEM_OBSERVATION', None, (0, -1), 'GRP'],)),
'PPV_PCA_PROBLEM_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'PPV_PCA_PROBLEM_ROLE': ('sequence',
(['ROL', SEGMENTS['ROL'], (1, 1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PRR_PC5_CHOICE': ('choice',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['ANYHL7SEGMENT', SEGMENTS['ANYHL7SEGMENT'], (1, 1), 'SEG'],)),
'PRR_PC5_GOAL': ('sequence',
(['GOL', SEGMENTS['GOL'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PRR_PC5_GOAL_ROLE', None, (0, -1), 'GRP'],
['PRR_PC5_GOAL_OBSERVATION', None, (0, -1), 'GRP'],)),
'PRR_PC5_GOAL_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'PRR_PC5_GOAL_ROLE': ('sequence',
(['ROL', SEGMENTS['ROL'], (1, 1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PRR_PC5_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['PRR_PC5_ORDER_DETAIL', None, (0, 1), 'GRP'],)),
'PRR_PC5_ORDER_DETAIL': ('sequence',
(['PRR_PC5_CHOICE', None, (1, 1), 'GRP'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PRR_PC5_ORDER_OBSERVATION', None, (0, -1), 'GRP'],)),
'PRR_PC5_ORDER_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PRR_PC5_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PRR_PC5_PATIENT_VISIT', None, (0, 1), 'GRP'],
['PRR_PC5_PROBLEM', None, (1, -1), 'GRP'],)),
'PRR_PC5_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'PRR_PC5_PROBLEM': ('sequence',
(['PRB', SEGMENTS['PRB'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PRR_PC5_PROBLEM_ROLE', None, (0, -1), 'GRP'],
['PRR_PC5_PROBLEM_PATHWAY', None, (0, -1), 'GRP'],
['PRR_PC5_PROBLEM_OBSERVATION', None, (0, -1), 'GRP'],
['PRR_PC5_GOAL', None, (0, -1), 'GRP'],
['PRR_PC5_ORDER', None, (0, -1), 'GRP'],)),
'PRR_PC5_PROBLEM_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'PRR_PC5_PROBLEM_PATHWAY': ('sequence',
(['PTH', SEGMENTS['PTH'], (1, 1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PRR_PC5_PROBLEM_ROLE': ('sequence',
(['ROL', SEGMENTS['ROL'], (1, 1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PTR_PCF_CHOICE': ('choice',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['ANYHL7SEGMENT', SEGMENTS['ANYHL7SEGMENT'], (1, 1), 'SEG'],)),
'PTR_PCF_GOAL': ('sequence',
(['GOL', SEGMENTS['GOL'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PTR_PCF_GOAL_ROLE', None, (0, -1), 'GRP'],
['PTR_PCF_GOAL_OBSERVATION', None, (0, -1), 'GRP'],)),
'PTR_PCF_GOAL_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'PTR_PCF_GOAL_ROLE': ('sequence',
(['ROL', SEGMENTS['ROL'], (1, 1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PTR_PCF_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['PTR_PCF_ORDER_DETAIL', None, (0, 1), 'GRP'],)),
'PTR_PCF_ORDER_DETAIL': ('sequence',
(['PTR_PCF_CHOICE', None, (1, 1), 'GRP'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PTR_PCF_ORDER_OBSERVATION', None, (0, -1), 'GRP'],)),
'PTR_PCF_ORDER_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PTR_PCF_PATHWAY': ('sequence',
(['PTH', SEGMENTS['PTH'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PTR_PCF_PATHWAY_ROLE', None, (0, -1), 'GRP'],
['PTR_PCF_PROBLEM', None, (0, -1), 'GRP'],)),
'PTR_PCF_PATHWAY_ROLE': ('sequence',
(['ROL', SEGMENTS['ROL'], (1, 1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'PTR_PCF_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PTR_PCF_PATIENT_VISIT', None, (0, 1), 'GRP'],
['PTR_PCF_PATHWAY', None, (1, -1), 'GRP'],)),
'PTR_PCF_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'PTR_PCF_PROBLEM': ('sequence',
(['PRB', SEGMENTS['PRB'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],
['PTR_PCF_PROBLEM_ROLE', None, (0, -1), 'GRP'],
['PTR_PCF_PROBLEM_OBSERVATION', None, (0, -1), 'GRP'],
['PTR_PCF_GOAL', None, (0, -1), 'GRP'],
['PTR_PCF_ORDER', None, (0, -1), 'GRP'],)),
'PTR_PCF_PROBLEM_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'PTR_PCF_PROBLEM_ROLE': ('sequence',
(['ROL', SEGMENTS['ROL'], (1, 1), 'SEG'],
['VAR', SEGMENTS['VAR'], (0, -1), 'SEG'],)),
'QBP_K13_ROW_DEFINITION': ('sequence',
(['RDF', SEGMENTS['RDF'], (1, 1), 'SEG'],
['RDT', SEGMENTS['RDT'], (0, -1), 'SEG'],)),
'QBP_QNN': ('sequence',
(['MSH', SEGMENTS['MSH'], (1, 1), 'SEG'],
['SFT', SEGMENTS['SFT'], (0, -1), 'SEG'],
['QPD', SEGMENTS['QPD'], (1, 1), 'SEG'],
['RDF', SEGMENTS['RDF'], (0, 1), 'SEG'],
['RCP', SEGMENTS['RCP'], (1, 1), 'SEG'],
['DSC', SEGMENTS['DSC'], (0, 1), 'SEG'],)),
'QVR_Q17_QBP': ('sequence',
(['ANYHL7SEGMENT', SEGMENTS['ANYHL7SEGMENT'], (0, 1), 'SEG'],)),
'RAR_RAR_DEFINITION': ('sequence',
(['QRD', SEGMENTS['QRD'], (1, 1), 'SEG'],
['QRF', SEGMENTS['QRF'], (0, 1), 'SEG'],
['RAR_RAR_PATIENT', None, (0, 1), 'GRP'],
['RAR_RAR_ORDER', None, (1, -1), 'GRP'],)),
'RAR_RAR_ENCODING': ('sequence',
(['RXE', SEGMENTS['RXE'], (1, 1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RXC', SEGMENTS['RXC'], (0, -1), 'SEG'],)),
'RAR_RAR_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['RAR_RAR_ENCODING', None, (0, 1), 'GRP'],
['RXA', SEGMENTS['RXA'], (1, -1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, 1), 'SEG'],)),
'RAR_RAR_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RAS_O17_ADMINISTRATION': ('sequence',
(['RXA', SEGMENTS['RXA'], (1, -1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, 1), 'SEG'],
['RAS_O17_OBSERVATION', None, (0, -1), 'GRP'],)),
'RAS_O17_COMPONENTS': ('sequence',
(['RXC', SEGMENTS['RXC'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RAS_O17_ENCODING': ('sequence',
(['RXE', SEGMENTS['RXE'], (1, 1), 'SEG'],
['RAS_O17_TIMING_ENCODED', None, (1, -1), 'GRP'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RXC', SEGMENTS['RXC'], (0, -1), 'SEG'],)),
'RAS_O17_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RAS_O17_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['RAS_O17_TIMING', None, (0, -1), 'GRP'],
['RAS_O17_ORDER_DETAIL', None, (0, 1), 'GRP'],
['RAS_O17_ENCODING', None, (0, 1), 'GRP'],
['RAS_O17_ADMINISTRATION', None, (1, -1), 'GRP'],
['CTI', SEGMENTS['CTI'], (0, -1), 'SEG'],)),
'RAS_O17_ORDER_DETAIL': ('sequence',
(['RXO', SEGMENTS['RXO'], (1, 1), 'SEG'],
['RAS_O17_ORDER_DETAIL_SUPPLEMENT', None, (0, 1), 'GRP'],)),
'RAS_O17_ORDER_DETAIL_SUPPLEMENT': ('sequence',
(['NTE', SEGMENTS['NTE'], (1, -1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RAS_O17_COMPONENTS', None, (0, -1), 'GRP'],)),
'RAS_O17_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['AL1', SEGMENTS['AL1'], (0, -1), 'SEG'],
['RAS_O17_PATIENT_VISIT', None, (0, 1), 'GRP'],)),
'RAS_O17_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'RAS_O17_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'RAS_O17_TIMING_ENCODED': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'RCI_I05_OBSERVATION': ('sequence',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['RCI_I05_RESULTS', None, (0, -1), 'GRP'],)),
'RCI_I05_PROVIDER': ('sequence',
(['PRD', SEGMENTS['PRD'], (1, 1), 'SEG'],
['CTD', SEGMENTS['CTD'], (0, -1), 'SEG'],)),
'RCI_I05_RESULTS': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RCL_I06_PROVIDER': ('sequence',
(['PRD', SEGMENTS['PRD'], (1, 1), 'SEG'],
['CTD', SEGMENTS['CTD'], (0, -1), 'SEG'],)),
'RDE_O11_COMPONENT': ('sequence',
(['RXC', SEGMENTS['RXC'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RDE_O11_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, 1), 'SEG'],)),
'RDE_O11_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RDE_O11_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['RDE_O11_TIMING', None, (0, -1), 'GRP'],
['RDE_O11_ORDER_DETAIL', None, (0, 1), 'GRP'],
['RXE', SEGMENTS['RXE'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['RDE_O11_TIMING_ENCODED', None, (1, -1), 'GRP'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RXC', SEGMENTS['RXC'], (0, -1), 'SEG'],
['RDE_O11_OBSERVATION', None, (0, -1), 'GRP'],
['FT1', SEGMENTS['FT1'], (0, -1), 'SEG'],
['BLG', SEGMENTS['BLG'], (0, 1), 'SEG'],
['CTI', SEGMENTS['CTI'], (0, -1), 'SEG'],)),
'RDE_O11_ORDER_DETAIL': ('sequence',
(['RXO', SEGMENTS['RXO'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RDE_O11_COMPONENT', None, (0, -1), 'GRP'],)),
'RDE_O11_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['RDE_O11_PATIENT_VISIT', None, (0, 1), 'GRP'],
['RDE_O11_INSURANCE', None, (0, -1), 'GRP'],
['GT1', SEGMENTS['GT1'], (0, 1), 'SEG'],
['AL1', SEGMENTS['AL1'], (0, -1), 'SEG'],)),
'RDE_O11_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'RDE_O11_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'RDE_O11_TIMING_ENCODED': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'RDR_RDR_DEFINITION': ('sequence',
(['QRD', SEGMENTS['QRD'], (1, 1), 'SEG'],
['QRF', SEGMENTS['QRF'], (0, 1), 'SEG'],
['RDR_RDR_PATIENT', None, (0, 1), 'GRP'],
['RDR_RDR_ORDER', None, (1, -1), 'GRP'],)),
'RDR_RDR_DISPENSE': ('sequence',
(['RXD', SEGMENTS['RXD'], (1, 1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RXC', SEGMENTS['RXC'], (0, -1), 'SEG'],)),
'RDR_RDR_ENCODING': ('sequence',
(['RXE', SEGMENTS['RXE'], (1, 1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RXC', SEGMENTS['RXC'], (0, -1), 'SEG'],)),
'RDR_RDR_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['RDR_RDR_ENCODING', None, (0, 1), 'GRP'],
['RDR_RDR_DISPENSE', None, (1, -1), 'GRP'],)),
'RDR_RDR_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RDS_O13_COMPONENT': ('sequence',
(['RXC', SEGMENTS['RXC'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RDS_O13_ENCODING': ('sequence',
(['RXE', SEGMENTS['RXE'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['RDS_O13_TIMING_ENCODED', None, (1, -1), 'GRP'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RXC', SEGMENTS['RXC'], (0, -1), 'SEG'],)),
'RDS_O13_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RDS_O13_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['RDS_O13_TIMING', None, (0, -1), 'GRP'],
['RDS_O13_ORDER_DETAIL', None, (0, 1), 'GRP'],
['RDS_O13_ENCODING', None, (0, 1), 'GRP'],
['RXD', SEGMENTS['RXD'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RXC', SEGMENTS['RXC'], (0, -1), 'SEG'],
['RDS_O13_OBSERVATION', None, (0, -1), 'GRP'],
['FT1', SEGMENTS['FT1'], (0, -1), 'SEG'],)),
'RDS_O13_ORDER_DETAIL': ('sequence',
(['RXO', SEGMENTS['RXO'], (1, 1), 'SEG'],
['RDS_O13_ORDER_DETAIL_SUPPLEMENT', None, (0, 1), 'GRP'],)),
'RDS_O13_ORDER_DETAIL_SUPPLEMENT': ('sequence',
(['NTE', SEGMENTS['NTE'], (1, -1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RDS_O13_COMPONENT', None, (0, -1), 'GRP'],)),
'RDS_O13_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['AL1', SEGMENTS['AL1'], (0, -1), 'SEG'],
['RDS_O13_PATIENT_VISIT', None, (0, 1), 'GRP'],)),
'RDS_O13_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'RDS_O13_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'RDS_O13_TIMING_ENCODED': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'REF_I12_AUTCTD_SUPPGRP2': ('sequence',
(['AUT', SEGMENTS['AUT'], (1, 1), 'SEG'],
['CTD', SEGMENTS['CTD'], (0, 1), 'SEG'],)),
'REF_I12_AUTHORIZATION_CONTACT': ('sequence',
(['AUT', SEGMENTS['AUT'], (1, 1), 'SEG'],
['CTD', SEGMENTS['CTD'], (0, 1), 'SEG'],)),
'REF_I12_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, 1), 'SEG'],)),
'REF_I12_OBSERVATION': ('sequence',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['REF_I12_RESULTS_NOTES', None, (0, -1), 'GRP'],)),
'REF_I12_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'REF_I12_PROCEDURE': ('sequence',
(['PR1', SEGMENTS['PR1'], (1, 1), 'SEG'],
['REF_I12_AUTCTD_SUPPGRP2', None, (0, 1), 'GRP'],)),
'REF_I12_PROVIDER_CONTACT': ('sequence',
(['PRD', SEGMENTS['PRD'], (1, 1), 'SEG'],
['CTD', SEGMENTS['CTD'], (0, -1), 'SEG'],)),
'REF_I12_RESULTS_NOTES': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RER_RER_DEFINITION': ('sequence',
(['QRD', SEGMENTS['QRD'], (1, 1), 'SEG'],
['QRF', SEGMENTS['QRF'], (0, 1), 'SEG'],
['RER_RER_PATIENT', None, (0, 1), 'GRP'],
['RER_RER_ORDER', None, (1, -1), 'GRP'],)),
'RER_RER_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['RXE', SEGMENTS['RXE'], (1, 1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RXC', SEGMENTS['RXC'], (0, -1), 'SEG'],)),
'RER_RER_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RGR_RGR_DEFINITION': ('sequence',
(['QRD', SEGMENTS['QRD'], (1, 1), 'SEG'],
['QRF', SEGMENTS['QRF'], (0, 1), 'SEG'],
['RGR_RGR_PATIENT', None, (0, 1), 'GRP'],
['RGR_RGR_ORDER', None, (1, -1), 'GRP'],)),
'RGR_RGR_ENCODING': ('sequence',
(['RXE', SEGMENTS['RXE'], (1, 1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RXC', SEGMENTS['RXC'], (0, -1), 'SEG'],)),
'RGR_RGR_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['RGR_RGR_ENCODING', None, (0, 1), 'GRP'],
['RXG', SEGMENTS['RXG'], (1, -1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RXC', SEGMENTS['RXC'], (0, -1), 'SEG'],)),
'RGR_RGR_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RGV_O15_COMPONENTS': ('sequence',
(['RXC', SEGMENTS['RXC'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RGV_O15_ENCODING': ('sequence',
(['RXE', SEGMENTS['RXE'], (1, 1), 'SEG'],
['RGV_O15_TIMING_ENCODED', None, (1, -1), 'GRP'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RXC', SEGMENTS['RXC'], (0, -1), 'SEG'],)),
'RGV_O15_GIVE': ('sequence',
(['RXG', SEGMENTS['RXG'], (1, 1), 'SEG'],
['RGV_O15_TIMING_GIVE', None, (1, -1), 'GRP'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RXC', SEGMENTS['RXC'], (0, -1), 'SEG'],
['RGV_O15_OBSERVATION', None, (1, -1), 'GRP'],)),
'RGV_O15_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RGV_O15_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['RGV_O15_TIMING', None, (0, -1), 'GRP'],
['RGV_O15_ORDER_DETAIL', None, (0, 1), 'GRP'],
['RGV_O15_ENCODING', None, (0, 1), 'GRP'],
['RGV_O15_GIVE', None, (1, -1), 'GRP'],)),
'RGV_O15_ORDER_DETAIL': ('sequence',
(['RXO', SEGMENTS['RXO'], (1, 1), 'SEG'],
['RGV_O15_ORDER_DETAIL_SUPPLEMENT', None, (0, 1), 'GRP'],)),
'RGV_O15_ORDER_DETAIL_SUPPLEMENT': ('sequence',
(['NTE', SEGMENTS['NTE'], (1, -1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RGV_O15_COMPONENTS', None, (0, -1), 'GRP'],)),
'RGV_O15_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['AL1', SEGMENTS['AL1'], (0, -1), 'SEG'],
['RGV_O15_PATIENT_VISIT', None, (0, 1), 'GRP'],)),
'RGV_O15_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'RGV_O15_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'RGV_O15_TIMING_ENCODED': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'RGV_O15_TIMING_GIVE': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'ROR_ROR_DEFINITION': ('sequence',
(['QRD', SEGMENTS['QRD'], (1, 1), 'SEG'],
['QRF', SEGMENTS['QRF'], (0, 1), 'SEG'],
['ROR_ROR_PATIENT', None, (0, 1), 'GRP'],
['ROR_ROR_ORDER', None, (1, -1), 'GRP'],)),
'ROR_ROR_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['RXO', SEGMENTS['RXO'], (1, 1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RXC', SEGMENTS['RXC'], (0, -1), 'SEG'],)),
'ROR_ROR_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RPA_I08_AUTHORIZATION_1': ('sequence',
(['AUT', SEGMENTS['AUT'], (1, 1), 'SEG'],
['CTD', SEGMENTS['CTD'], (0, 1), 'SEG'],)),
'RPA_I08_AUTHORIZATION_2': ('sequence',
(['AUT', SEGMENTS['AUT'], (1, 1), 'SEG'],
['CTD', SEGMENTS['CTD'], (0, 1), 'SEG'],)),
'RPA_I08_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, 1), 'SEG'],)),
'RPA_I08_OBSERVATION': ('sequence',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['RPA_I08_RESULTS', None, (0, -1), 'GRP'],)),
'RPA_I08_PROCEDURE': ('sequence',
(['PR1', SEGMENTS['PR1'], (1, 1), 'SEG'],
['RPA_I08_AUTHORIZATION_2', None, (0, 1), 'GRP'],)),
'RPA_I08_PROVIDER': ('sequence',
(['PRD', SEGMENTS['PRD'], (1, 1), 'SEG'],
['CTD', SEGMENTS['CTD'], (0, -1), 'SEG'],)),
'RPA_I08_RESULTS': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RPA_I08_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'RPI_I01_GUARANTOR_INSURANCE': ('sequence',
(['GT1', SEGMENTS['GT1'], (0, -1), 'SEG'],
['RPI_I01_INSURANCE', None, (1, -1), 'GRP'],)),
'RPI_I01_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, 1), 'SEG'],)),
'RPI_I01_PROVIDER': ('sequence',
(['PRD', SEGMENTS['PRD'], (1, 1), 'SEG'],
['CTD', SEGMENTS['CTD'], (0, -1), 'SEG'],)),
'RPI_I04_GUARANTOR_INSURANCE': ('sequence',
(['GT1', SEGMENTS['GT1'], (0, -1), 'SEG'],
['RPI_I04_INSURANCE', None, (1, -1), 'GRP'],)),
'RPI_I04_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, 1), 'SEG'],)),
'RPI_I04_PROVIDER': ('sequence',
(['PRD', SEGMENTS['PRD'], (1, 1), 'SEG'],
['CTD', SEGMENTS['CTD'], (0, -1), 'SEG'],)),
'RPL_I02_PROVIDER': ('sequence',
(['PRD', SEGMENTS['PRD'], (1, 1), 'SEG'],
['CTD', SEGMENTS['CTD'], (0, -1), 'SEG'],)),
'RPR_I03_PROVIDER': ('sequence',
(['PRD', SEGMENTS['PRD'], (1, 1), 'SEG'],
['CTD', SEGMENTS['CTD'], (0, -1), 'SEG'],)),
'RQA_I08_AUTCTD_SUPPGRP2': ('sequence',
(['AUT', SEGMENTS['AUT'], (1, 1), 'SEG'],
['CTD', SEGMENTS['CTD'], (0, 1), 'SEG'],)),
'RQA_I08_AUTHORIZATION': ('sequence',
(['AUT', SEGMENTS['AUT'], (1, 1), 'SEG'],
['CTD', SEGMENTS['CTD'], (0, 1), 'SEG'],)),
'RQA_I08_GUARANTOR_INSURANCE': ('sequence',
(['GT1', SEGMENTS['GT1'], (0, -1), 'SEG'],
['RQA_I08_INSURANCE', None, (1, -1), 'GRP'],)),
'RQA_I08_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, 1), 'SEG'],)),
'RQA_I08_OBSERVATION': ('sequence',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['RQA_I08_RESULTS', None, (0, -1), 'GRP'],)),
'RQA_I08_PROCEDURE': ('sequence',
(['PR1', SEGMENTS['PR1'], (1, 1), 'SEG'],
['RQA_I08_AUTCTD_SUPPGRP2', None, (0, 1), 'GRP'],)),
'RQA_I08_PROVIDER': ('sequence',
(['PRD', SEGMENTS['PRD'], (1, 1), 'SEG'],
['CTD', SEGMENTS['CTD'], (0, -1), 'SEG'],)),
'RQA_I08_RESULTS': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RQA_I08_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'RQC_I05_PROVIDER': ('sequence',
(['PRD', SEGMENTS['PRD'], (1, 1), 'SEG'],
['CTD', SEGMENTS['CTD'], (0, -1), 'SEG'],)),
'RQI_I01_GUARANTOR_INSURANCE': ('sequence',
(['GT1', SEGMENTS['GT1'], (0, -1), 'SEG'],
['RQI_I01_INSURANCE', None, (1, -1), 'GRP'],)),
'RQI_I01_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, 1), 'SEG'],)),
'RQI_I01_PROVIDER': ('sequence',
(['PRD', SEGMENTS['PRD'], (1, 1), 'SEG'],
['CTD', SEGMENTS['CTD'], (0, -1), 'SEG'],)),
'RQP_I04_PROVIDER': ('sequence',
(['PRD', SEGMENTS['PRD'], (1, 1), 'SEG'],
['CTD', SEGMENTS['CTD'], (0, -1), 'SEG'],)),
'RRA_O18_ADMINISTRATION': ('sequence',
(['RXA', SEGMENTS['RXA'], (1, -1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, 1), 'SEG'],)),
'RRA_O18_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['RRA_O18_TIMING', None, (0, -1), 'GRP'],
['RRA_O18_ADMINISTRATION', None, (0, 1), 'GRP'],)),
'RRA_O18_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RRA_O18_RESPONSE': ('sequence',
(['RRA_O18_PATIENT', None, (0, 1), 'GRP'],
['RRA_O18_ORDER', None, (1, -1), 'GRP'],)),
'RRA_O18_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'RRD_O14_DISPENSE': ('sequence',
(['RXD', SEGMENTS['RXD'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RXC', SEGMENTS['RXC'], (0, -1), 'SEG'],)),
'RRD_O14_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['RRD_O14_TIMING', None, (0, -1), 'GRP'],
['RRD_O14_DISPENSE', None, (0, 1), 'GRP'],)),
'RRD_O14_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RRD_O14_RESPONSE': ('sequence',
(['RRD_O14_PATIENT', None, (0, 1), 'GRP'],
['RRD_O14_ORDER', None, (1, -1), 'GRP'],)),
'RRD_O14_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'RRE_O12_ENCODING': ('sequence',
(['RXE', SEGMENTS['RXE'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['RRE_O12_TIMING_ENCODED', None, (1, -1), 'GRP'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RXC', SEGMENTS['RXC'], (0, -1), 'SEG'],)),
'RRE_O12_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['RRE_O12_TIMING', None, (0, -1), 'GRP'],
['RRE_O12_ENCODING', None, (0, 1), 'GRP'],)),
'RRE_O12_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RRE_O12_RESPONSE': ('sequence',
(['RRE_O12_PATIENT', None, (0, 1), 'GRP'],
['RRE_O12_ORDER', None, (1, -1), 'GRP'],)),
'RRE_O12_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'RRE_O12_TIMING_ENCODED': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'RRG_O16_GIVE': ('sequence',
(['RXG', SEGMENTS['RXG'], (1, 1), 'SEG'],
['RRG_O16_TIMING_GIVE', None, (1, -1), 'GRP'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RXC', SEGMENTS['RXC'], (0, -1), 'SEG'],)),
'RRG_O16_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['RRG_O16_TIMING', None, (0, -1), 'GRP'],
['RRG_O16_GIVE', None, (0, 1), 'GRP'],)),
'RRG_O16_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RRG_O16_RESPONSE': ('sequence',
(['RRG_O16_PATIENT', None, (0, 1), 'GRP'],
['RRG_O16_ORDER', None, (1, -1), 'GRP'],)),
'RRG_O16_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'RRG_O16_TIMING_GIVE': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'RRI_I12_AUTCTD_SUPPGRP2': ('sequence',
(['AUT', SEGMENTS['AUT'], (1, 1), 'SEG'],
['CTD', SEGMENTS['CTD'], (0, 1), 'SEG'],)),
'RRI_I12_AUTHORIZATION_CONTACT': ('sequence',
(['AUT', SEGMENTS['AUT'], (1, 1), 'SEG'],
['CTD', SEGMENTS['CTD'], (0, 1), 'SEG'],)),
'RRI_I12_OBSERVATION': ('sequence',
(['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['RRI_I12_RESULTS_NOTES', None, (0, -1), 'GRP'],)),
'RRI_I12_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'RRI_I12_PROCEDURE': ('sequence',
(['PR1', SEGMENTS['PR1'], (1, 1), 'SEG'],
['RRI_I12_AUTCTD_SUPPGRP2', None, (0, 1), 'GRP'],)),
'RRI_I12_PROVIDER_CONTACT': ('sequence',
(['PRD', SEGMENTS['PRD'], (1, 1), 'SEG'],
['CTD', SEGMENTS['CTD'], (0, -1), 'SEG'],)),
'RRI_I12_RESULTS_NOTES': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RSP_K11_ROW_DEFINITION': ('sequence',
(['RDF', SEGMENTS['RDF'], (1, 1), 'SEG'],
['RDT', SEGMENTS['RDT'], (0, -1), 'SEG'],)),
'RSP_K21_QUERY_RESPONSE': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NK1', SEGMENTS['NK1'], (0, -1), 'SEG'],
['QRI', SEGMENTS['QRI'], (1, 1), 'SEG'],)),
'RSP_K23_QUERY_RESPONSE': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],)),
'RSP_K25_STAFF': ('sequence',
(['STF', SEGMENTS['STF'], (1, 1), 'SEG'],
['PRA', SEGMENTS['PRA'], (0, -1), 'SEG'],
['ORG', SEGMENTS['ORG'], (0, -1), 'SEG'],
['AFF', SEGMENTS['AFF'], (0, -1), 'SEG'],
['LAN', SEGMENTS['LAN'], (0, -1), 'SEG'],
['EDU', SEGMENTS['EDU'], (0, -1), 'SEG'],
['CER', SEGMENTS['CER'], (0, -1), 'SEG'],)),
'RSP_K31_COMPONENTS': ('sequence',
(['RXC', SEGMENTS['RXC'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RSP_K31_ENCODING': ('sequence',
(['RXE', SEGMENTS['RXE'], (1, 1), 'SEG'],
['RSP_K31_TIMING_ENCODED', None, (1, -1), 'GRP'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RXC', SEGMENTS['RXC'], (0, -1), 'SEG'],)),
'RSP_K31_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RSP_K31_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['RSP_K31_TIMING', None, (0, -1), 'GRP'],
['RSP_K31_ORDER_DETAIL', None, (0, 1), 'GRP'],
['RSP_K31_ENCODING', None, (0, 1), 'GRP'],
['RXD', SEGMENTS['RXD'], (1, 1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RXC', SEGMENTS['RXC'], (0, -1), 'SEG'],
['RSP_K31_OBSERVATION', None, (1, -1), 'GRP'],)),
'RSP_K31_ORDER_DETAIL': ('sequence',
(['RXO', SEGMENTS['RXO'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RSP_K31_COMPONENTS', None, (0, -1), 'GRP'],)),
'RSP_K31_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['AL1', SEGMENTS['AL1'], (0, -1), 'SEG'],
['RSP_K31_PATIENT_VISIT', None, (0, 1), 'GRP'],)),
'RSP_K31_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'RSP_K31_RESPONSE': ('sequence',
(['RSP_K31_PATIENT', None, (0, 1), 'GRP'],
['RSP_K31_ORDER', None, (1, -1), 'GRP'],)),
'RSP_K31_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'RSP_K31_TIMING_ENCODED': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'RSP_Q11_MF_LOC_DEPT': ('sequence',
(['LDP', SEGMENTS['LDP'], (1, 1), 'SEG'],
['LCH', SEGMENTS['LCH'], (0, -1), 'SEG'],
['LCC', SEGMENTS['LCC'], (0, -1), 'SEG'],)),
'RSP_Q11_QUERY_RESULT_CLUSTER': ('sequence',
(['MFE', SEGMENTS['MFE'], (1, 1), 'SEG'],
['LOC', SEGMENTS['LOC'], (1, 1), 'SEG'],
['LCH', SEGMENTS['LCH'], (0, -1), 'SEG'],
['LRL', SEGMENTS['LRL'], (0, -1), 'SEG'],
['RSP_Q11_MF_LOC_DEPT', None, (1, -1), 'GRP'],)),
'RSP_Z82_COMMON_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['RSP_Z82_TIMING', None, (0, -1), 'GRP'],
['RSP_Z82_ORDER_DETAIL', None, (0, 1), 'GRP'],
['RSP_Z82_ENCODED_ORDER', None, (0, 1), 'GRP'],
['RXD', SEGMENTS['RXD'], (1, 1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RXC', SEGMENTS['RXC'], (0, -1), 'SEG'],
['RSP_Z82_OBSERVATION', None, (1, -1), 'GRP'],)),
'RSP_Z82_ENCODED_ORDER': ('sequence',
(['RXE', SEGMENTS['RXE'], (1, 1), 'SEG'],
['RSP_Z82_TIMING_ENCODED', None, (0, -1), 'GRP'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RXC', SEGMENTS['RXC'], (0, -1), 'SEG'],)),
'RSP_Z82_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RSP_Z82_ORDER_DETAIL': ('sequence',
(['RXO', SEGMENTS['RXO'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RSP_Z82_TREATMENT', None, (0, 1), 'GRP'],)),
'RSP_Z82_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['RSP_Z82_VISIT', None, (0, 1), 'GRP'],)),
'RSP_Z82_QUERY_RESPONSE': ('sequence',
(['RSP_Z82_PATIENT', None, (0, 1), 'GRP'],
['RSP_Z82_COMMON_ORDER', None, (1, -1), 'GRP'],)),
'RSP_Z82_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'RSP_Z82_TIMING_ENCODED': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'RSP_Z82_TREATMENT': ('sequence',
(['RXC', SEGMENTS['RXC'], (1, -1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RSP_Z82_VISIT': ('sequence',
(['AL1', SEGMENTS['AL1'], (1, -1), 'SEG'],
['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'RSP_Z86_ADMINISTRATION': ('sequence',
(['RXA', SEGMENTS['RXA'], (1, 1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RXC', SEGMENTS['RXC'], (0, -1), 'SEG'],)),
'RSP_Z86_COMMON_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['RSP_Z86_TIMING', None, (0, -1), 'GRP'],
['RSP_Z86_ORDER_DETAIL', None, (0, 1), 'GRP'],
['RSP_Z86_ENCODED_ORDER', None, (0, 1), 'GRP'],
['RSP_Z86_DISPENSE', None, (0, 1), 'GRP'],
['RSP_Z86_GIVE', None, (0, 1), 'GRP'],
['RSP_Z86_ADMINISTRATION', None, (0, 1), 'GRP'],
['RSP_Z86_OBSERVATION', None, (1, -1), 'GRP'],)),
'RSP_Z86_DISPENSE': ('sequence',
(['RXD', SEGMENTS['RXD'], (1, 1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RXC', SEGMENTS['RXC'], (0, -1), 'SEG'],)),
'RSP_Z86_ENCODED_ORDER': ('sequence',
(['RXE', SEGMENTS['RXE'], (1, 1), 'SEG'],
['RSP_Z86_TIMING_ENCODED', None, (0, -1), 'GRP'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RXC', SEGMENTS['RXC'], (0, -1), 'SEG'],)),
'RSP_Z86_GIVE': ('sequence',
(['RXG', SEGMENTS['RXG'], (1, 1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RXC', SEGMENTS['RXC'], (0, -1), 'SEG'],)),
'RSP_Z86_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RSP_Z86_ORDER_DETAIL': ('sequence',
(['RXO', SEGMENTS['RXO'], (1, 1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RXC', SEGMENTS['RXC'], (0, -1), 'SEG'],)),
'RSP_Z86_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['AL1', SEGMENTS['AL1'], (0, -1), 'SEG'],)),
'RSP_Z86_QUERY_RESPONSE': ('sequence',
(['RSP_Z86_PATIENT', None, (0, 1), 'GRP'],
['RSP_Z86_COMMON_ORDER', None, (1, -1), 'GRP'],)),
'RSP_Z86_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'RSP_Z86_TIMING_ENCODED': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'RSP_Z88_ALLERGY': ('sequence',
(['AL1', SEGMENTS['AL1'], (1, -1), 'SEG'],
['RSP_Z88_VISIT', None, (0, 1), 'GRP'],)),
'RSP_Z88_COMMON_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['RSP_Z88_TIMING', None, (0, -1), 'GRP'],
['RSP_Z88_ORDER_DETAIL', None, (0, 1), 'GRP'],
['RSP_Z88_ORDER_ENCODED', None, (0, 1), 'GRP'],
['RXD', SEGMENTS['RXD'], (1, 1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RXC', SEGMENTS['RXC'], (0, -1), 'SEG'],
['RSP_Z88_OBSERVATION', None, (1, -1), 'GRP'],)),
'RSP_Z88_COMPONENT': ('sequence',
(['RXC', SEGMENTS['RXC'], (1, -1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RSP_Z88_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RSP_Z88_ORDER_DETAIL': ('sequence',
(['RXO', SEGMENTS['RXO'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RSP_Z88_COMPONENT', None, (0, 1), 'GRP'],)),
'RSP_Z88_ORDER_ENCODED': ('sequence',
(['RXE', SEGMENTS['RXE'], (1, 1), 'SEG'],
['RSP_Z88_TIMING_ENCODED', None, (0, -1), 'GRP'],
['RXR', SEGMENTS['RXR'], (1, -1), 'SEG'],
['RXC', SEGMENTS['RXC'], (0, -1), 'SEG'],)),
'RSP_Z88_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['RSP_Z88_ALLERGY', None, (0, 1), 'GRP'],)),
'RSP_Z88_QUERY_RESPONSE': ('sequence',
(['RSP_Z88_PATIENT', None, (0, 1), 'GRP'],
['RSP_Z88_COMMON_ORDER', None, (1, -1), 'GRP'],)),
'RSP_Z88_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'RSP_Z88_TIMING_ENCODED': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'RSP_Z88_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'RSP_Z90_COMMON_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['RSP_Z90_TIMING', None, (0, -1), 'GRP'],
['OBR', SEGMENTS['OBR'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['CTD', SEGMENTS['CTD'], (0, 1), 'SEG'],
['RSP_Z90_OBSERVATION', None, (1, -1), 'GRP'],)),
'RSP_Z90_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'RSP_Z90_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['NK1', SEGMENTS['NK1'], (0, -1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['RSP_Z90_VISIT', None, (0, 1), 'GRP'],)),
'RSP_Z90_QUERY_RESPONSE': ('sequence',
(['RSP_Z90_PATIENT', None, (0, 1), 'GRP'],
['RSP_Z90_COMMON_ORDER', None, (1, -1), 'GRP'],
['RSP_Z90_SPECIMEN', None, (0, -1), 'GRP'],)),
'RSP_Z90_SPECIMEN': ('sequence',
(['SPM', SEGMENTS['SPM'], (1, 1), 'SEG'],
['OBX', SEGMENTS['OBX'], (0, -1), 'SEG'],)),
'RSP_Z90_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'RSP_Z90_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'RTB_K13_ROW_DEFINITION': ('sequence',
(['RDF', SEGMENTS['RDF'], (1, 1), 'SEG'],
['RDT', SEGMENTS['RDT'], (0, -1), 'SEG'],)),
'RTB_KNN': ('sequence',
(['MSH', SEGMENTS['MSH'], (1, 1), 'SEG'],
['SFT', SEGMENTS['SFT'], (0, -1), 'SEG'],
['MSA', SEGMENTS['MSA'], (1, 1), 'SEG'],
['ERR', SEGMENTS['ERR'], (0, 1), 'SEG'],
['QAK', SEGMENTS['QAK'], (1, 1), 'SEG'],
['QPD', SEGMENTS['QPD'], (1, 1), 'SEG'],
['ANYHL7SEGMENT', SEGMENTS['ANYHL7SEGMENT'], (1, 1), 'SEG'],
['ANYHL7SEGMENT', SEGMENTS['ANYHL7SEGMENT'], (1, 1), 'SEG'],
['DSC', SEGMENTS['DSC'], (0, 1), 'SEG'],)),
'RTB_Z74_ROW_DEFINITION': ('sequence',
(['RDF', SEGMENTS['RDF'], (1, 1), 'SEG'],
['RDT', SEGMENTS['RDT'], (0, -1), 'SEG'],)),
'SIU_S12_GENERAL_RESOURCE': ('sequence',
(['AIG', SEGMENTS['AIG'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'SIU_S12_LOCATION_RESOURCE': ('sequence',
(['AIL', SEGMENTS['AIL'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'SIU_S12_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PD1', SEGMENTS['PD1'], (0, 1), 'SEG'],
['PV1', SEGMENTS['PV1'], (0, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],
['OBX', SEGMENTS['OBX'], (0, -1), 'SEG'],
['DG1', SEGMENTS['DG1'], (0, -1), 'SEG'],)),
'SIU_S12_PERSONNEL_RESOURCE': ('sequence',
(['AIP', SEGMENTS['AIP'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'SIU_S12_RESOURCES': ('sequence',
(['RGS', SEGMENTS['RGS'], (1, 1), 'SEG'],
['SIU_S12_SERVICE', None, (0, -1), 'GRP'],
['SIU_S12_GENERAL_RESOURCE', None, (0, -1), 'GRP'],
['SIU_S12_LOCATION_RESOURCE', None, (0, -1), 'GRP'],
['SIU_S12_PERSONNEL_RESOURCE', None, (0, -1), 'GRP'],)),
'SIU_S12_SERVICE': ('sequence',
(['AIS', SEGMENTS['AIS'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'SQM_S25_GENERAL_RESOURCE': ('sequence',
(['AIG', SEGMENTS['AIG'], (1, 1), 'SEG'],
['APR', SEGMENTS['APR'], (0, 1), 'SEG'],)),
'SQM_S25_LOCATION_RESOURCE': ('sequence',
(['AIL', SEGMENTS['AIL'], (1, 1), 'SEG'],
['APR', SEGMENTS['APR'], (0, 1), 'SEG'],)),
'SQM_S25_PERSONNEL_RESOURCE': ('sequence',
(['AIP', SEGMENTS['AIP'], (1, 1), 'SEG'],
['APR', SEGMENTS['APR'], (0, 1), 'SEG'],)),
'SQM_S25_REQUEST': ('sequence',
(['ARQ', SEGMENTS['ARQ'], (1, 1), 'SEG'],
['APR', SEGMENTS['APR'], (0, 1), 'SEG'],
['PID', SEGMENTS['PID'], (0, 1), 'SEG'],
['SQM_S25_RESOURCES', None, (1, -1), 'GRP'],)),
'SQM_S25_RESOURCES': ('sequence',
(['RGS', SEGMENTS['RGS'], (1, 1), 'SEG'],
['SQM_S25_SERVICE', None, (0, -1), 'GRP'],
['SQM_S25_GENERAL_RESOURCE', None, (0, -1), 'GRP'],
['SQM_S25_PERSONNEL_RESOURCE', None, (0, -1), 'GRP'],
['SQM_S25_LOCATION_RESOURCE', None, (0, -1), 'GRP'],)),
'SQM_S25_SERVICE': ('sequence',
(['AIS', SEGMENTS['AIS'], (1, 1), 'SEG'],
['APR', SEGMENTS['APR'], (0, 1), 'SEG'],)),
'SQR_S25_GENERAL_RESOURCE': ('sequence',
(['AIG', SEGMENTS['AIG'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'SQR_S25_LOCATION_RESOURCE': ('sequence',
(['AIL', SEGMENTS['AIL'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'SQR_S25_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PV1', SEGMENTS['PV1'], (0, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],
['DG1', SEGMENTS['DG1'], (0, 1), 'SEG'],)),
'SQR_S25_PERSONNEL_RESOURCE': ('sequence',
(['AIP', SEGMENTS['AIP'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'SQR_S25_RESOURCES': ('sequence',
(['RGS', SEGMENTS['RGS'], (1, 1), 'SEG'],
['SQR_S25_SERVICE', None, (0, -1), 'GRP'],
['SQR_S25_GENERAL_RESOURCE', None, (0, -1), 'GRP'],
['SQR_S25_PERSONNEL_RESOURCE', None, (0, -1), 'GRP'],
['SQR_S25_LOCATION_RESOURCE', None, (0, -1), 'GRP'],)),
'SQR_S25_SCHEDULE': ('sequence',
(['SCH', SEGMENTS['SCH'], (1, 1), 'SEG'],
['TQ1', SEGMENTS['TQ1'], (0, -1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['SQR_S25_PATIENT', None, (0, 1), 'GRP'],
['SQR_S25_RESOURCES', None, (1, -1), 'GRP'],)),
'SQR_S25_SERVICE': ('sequence',
(['AIS', SEGMENTS['AIS'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'SRM_S01_GENERAL_RESOURCE': ('sequence',
(['AIG', SEGMENTS['AIG'], (1, 1), 'SEG'],
['APR', SEGMENTS['APR'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'SRM_S01_LOCATION_RESOURCE': ('sequence',
(['AIL', SEGMENTS['AIL'], (1, 1), 'SEG'],
['APR', SEGMENTS['APR'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'SRM_S01_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PV1', SEGMENTS['PV1'], (0, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],
['OBX', SEGMENTS['OBX'], (0, -1), 'SEG'],
['DG1', SEGMENTS['DG1'], (0, -1), 'SEG'],)),
'SRM_S01_PERSONNEL_RESOURCE': ('sequence',
(['AIP', SEGMENTS['AIP'], (1, 1), 'SEG'],
['APR', SEGMENTS['APR'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'SRM_S01_RESOURCES': ('sequence',
(['RGS', SEGMENTS['RGS'], (1, 1), 'SEG'],
['SRM_S01_SERVICE', None, (0, -1), 'GRP'],
['SRM_S01_GENERAL_RESOURCE', None, (0, -1), 'GRP'],
['SRM_S01_LOCATION_RESOURCE', None, (0, -1), 'GRP'],
['SRM_S01_PERSONNEL_RESOURCE', None, (0, -1), 'GRP'],)),
'SRM_S01_SERVICE': ('sequence',
(['AIS', SEGMENTS['AIS'], (1, 1), 'SEG'],
['APR', SEGMENTS['APR'], (0, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'SRR_S01_GENERAL_RESOURCE': ('sequence',
(['AIG', SEGMENTS['AIG'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'SRR_S01_LOCATION_RESOURCE': ('sequence',
(['AIL', SEGMENTS['AIL'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'SRR_S01_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['PV1', SEGMENTS['PV1'], (0, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],
['DG1', SEGMENTS['DG1'], (0, -1), 'SEG'],)),
'SRR_S01_PERSONNEL_RESOURCE': ('sequence',
(['AIP', SEGMENTS['AIP'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'SRR_S01_RESOURCES': ('sequence',
(['RGS', SEGMENTS['RGS'], (1, 1), 'SEG'],
['SRR_S01_SERVICE', None, (0, -1), 'GRP'],
['SRR_S01_GENERAL_RESOURCE', None, (0, -1), 'GRP'],
['SRR_S01_LOCATION_RESOURCE', None, (0, -1), 'GRP'],
['SRR_S01_PERSONNEL_RESOURCE', None, (0, -1), 'GRP'],)),
'SRR_S01_SCHEDULE': ('sequence',
(['SCH', SEGMENTS['SCH'], (1, 1), 'SEG'],
['TQ1', SEGMENTS['TQ1'], (0, -1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],
['SRR_S01_PATIENT', None, (0, -1), 'GRP'],
['SRR_S01_RESOURCES', None, (1, -1), 'GRP'],)),
'SRR_S01_SERVICE': ('sequence',
(['AIS', SEGMENTS['AIS'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'SSR_U04_SPECIMEN_CONTAINER': ('sequence',
(['SAC', SEGMENTS['SAC'], (1, 1), 'SEG'],
['SPM', SEGMENTS['SPM'], (0, -1), 'SEG'],)),
'SSU_U03_SPECIMEN': ('sequence',
(['SPM', SEGMENTS['SPM'], (1, 1), 'SEG'],
['OBX', SEGMENTS['OBX'], (0, -1), 'SEG'],)),
'SSU_U03_SPECIMEN_CONTAINER': ('sequence',
(['SAC', SEGMENTS['SAC'], (1, 1), 'SEG'],
['OBX', SEGMENTS['OBX'], (0, -1), 'SEG'],
['SSU_U03_SPECIMEN', None, (0, -1), 'GRP'],)),
'SUR_P09_FACILITY': ('sequence',
(['FAC', SEGMENTS['FAC'], (1, 1), 'SEG'],
['SUR_P09_PRODUCT', None, (1, -1), 'GRP'],
['PSH', SEGMENTS['PSH'], (1, 1), 'SEG'],
['SUR_P09_FACILITY_DETAIL', None, (1, -1), 'GRP'],
)),
'SUR_P09_FACILITY_DETAIL': ('sequence',
(['FAC', SEGMENTS['FAC'], (1, 1), 'SEG'],
['PDC', SEGMENTS['PDC'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (1, 1), 'SEG'],)),
'SUR_P09_PRODUCT': ('sequence',
(['PSH', SEGMENTS['PSH'], (1, 1), 'SEG'],
['PDC', SEGMENTS['PDC'], (1, 1), 'SEG'],)),
'TCU_U10_TEST_CONFIGURATION': ('sequence',
(['SPM', SEGMENTS['SPM'], (0, 1), 'SEG'],
['TCC', SEGMENTS['TCC'], (1, -1), 'SEG'],)),
'VXR_V03_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, 1), 'SEG'],)),
'VXR_V03_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'VXR_V03_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['VXR_V03_TIMING', None, (0, -1), 'GRP'],
['RXA', SEGMENTS['RXA'], (1, 1), 'SEG'],
['RXR', SEGMENTS['RXR'], (0, 1), 'SEG'],
['VXR_V03_OBSERVATION', None, (0, -1), 'GRP'],)),
'VXR_V03_PATIENT_VISIT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'VXR_V03_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'VXU_V04_INSURANCE': ('sequence',
(['IN1', SEGMENTS['IN1'], (1, 1), 'SEG'],
['IN2', SEGMENTS['IN2'], (0, 1), 'SEG'],
['IN3', SEGMENTS['IN3'], (0, 1), 'SEG'],)),
'VXU_V04_OBSERVATION': ('sequence',
(['OBX', SEGMENTS['OBX'], (1, 1), 'SEG'],
['NTE', SEGMENTS['NTE'], (0, -1), 'SEG'],)),
'VXU_V04_ORDER': ('sequence',
(['ORC', SEGMENTS['ORC'], (1, 1), 'SEG'],
['VXU_V04_TIMING', None, (0, -1), 'GRP'],
['RXA', SEGMENTS['RXA'], (1, 1), 'SEG'],
['RXR', SEGMENTS['RXR'], (0, 1), 'SEG'],
['VXU_V04_OBSERVATION', None, (0, -1), 'GRP'],)),
'VXU_V04_PATIENT': ('sequence',
(['PV1', SEGMENTS['PV1'], (1, 1), 'SEG'],
['PV2', SEGMENTS['PV2'], (0, 1), 'SEG'],)),
'VXU_V04_TIMING': ('sequence',
(['TQ1', SEGMENTS['TQ1'], (1, 1), 'SEG'],
['TQ2', SEGMENTS['TQ2'], (0, -1), 'SEG'],)),
'VXX_V02_PATIENT': ('sequence',
(['PID', SEGMENTS['PID'], (1, 1), 'SEG'],
['NK1', SEGMENTS['NK1'], (0, -1), 'SEG'],)),
}
for k, v in iteritems(GROUPS):
    for item in v[1]:
        if item[3] == 'GRP':
            item[1] = GROUPS[item[0]]
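After this resolution pass, every `'GRP'` item carries a direct reference to the tuple of the group it names, so a message definition can be walked recursively. A minimal self-contained sketch of that traversal (the miniature `_DEMO_GROUPS` data and the `walk` helper are illustrative, not part of this module; `SEGMENTS` lookups are replaced by `None`):

```python
# Illustrative miniature of the GROUPS layout above; not a real HL7 definition.
_DEMO_GROUPS = {
    'VXU_V04_TIMING': ('sequence',
        (['TQ1', None, (1, 1), 'SEG'],
         ['TQ2', None, (0, -1), 'SEG'],)),
    'VXU_V04_ORDER': ('sequence',
        (['ORC', None, (1, 1), 'SEG'],
         ['VXU_V04_TIMING', None, (0, -1), 'GRP'],)),
}

# Resolve GRP references in place, exactly like the loop above.
for _k, _v in _DEMO_GROUPS.items():
    for _item in _v[1]:
        if _item[3] == 'GRP':
            _item[1] = _DEMO_GROUPS[_item[0]]

def walk(group, depth=0):
    """Yield (depth, name, (min, max), kind) for each child, recursing into groups."""
    for name, definition, cardinality, kind in group[1]:
        yield depth, name, cardinality, kind
        if kind == 'GRP':
            yield from walk(definition, depth + 1)

rows = list(walk(_DEMO_GROUPS['VXU_V04_ORDER']))
# rows lists ORC and the TIMING group at depth 0, then TQ1/TQ2 at depth 1.
```

Because the child entries are lists (mutable), the resolution loop can patch the group reference in place without rebuilding the dictionary.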
# --- test/pagerank.py (roks/snap-python, BSD-3-Clause) ---
import snap
Graph = snap.GenRndGnm(snap.PNGraph, 100, 1000)
PRankH = snap.TIntFltH()
snap.GetPageRank(Graph, PRankH)
for item in PRankH:
    print(item, PRankH[item])

Graph = snap.GenRndGnm(snap.PUNGraph, 5000000, 50000000)
PRankH = snap.TIntFltH()
snap.GetPageRank(Graph, PRankH)
for item in PRankH:
    print(item, PRankH[item])

Graph = snap.GenRndGnm(snap.PNEANet, 100, 1000)
PRankH = snap.TIntFltH()
snap.GetPageRank(Graph, PRankH)
for item in PRankH:
    print(item, PRankH[item])
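The snap calls above treat `GetPageRank` as a black box; the underlying computation is a damped power iteration. A hedged pure-Python sketch of that idea (the `pagerank` helper and its edge-list layout are illustrative, not the snap API):

```python
def pagerank(edges, n, damping=0.85, iters=50):
    """Damped power iteration over a directed edge list on nodes 0..n-1.

    Illustrative only: snap.GetPageRank does this in C++ with its own
    convergence test; this sketch assumes every node has an out-edge.
    """
    out_deg = [0] * n
    for src, _ in edges:
        out_deg[src] += 1
    rank = [1.0 / n] * n
    for _ in range(iters):
        # Each node keeps the teleport mass, then receives a damped share
        # of every in-neighbour's current rank.
        new = [(1.0 - damping) / n] * n
        for src, dst in edges:
            new[dst] += damping * rank[src] / out_deg[src]
        rank = new
    return rank
```

On a 3-cycle (0 -> 1 -> 2 -> 0) the ranks converge to 1/3 each, matching the symmetry of the graph.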
# --- airflow/providers/google/cloud/hooks/cloud_memorystore.py (npodewitz/airflow, Apache-2.0) ---
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""
Hooks for Cloud Memorystore service.
.. spelling::
DataProtectionMode
FieldMask
pb
memcache
"""
from typing import Dict, Optional, Sequence, Tuple, Union
from google.api_core import path_template
from google.api_core.exceptions import NotFound
from google.api_core.gapic_v1.method import DEFAULT, _MethodDefault
from google.api_core.retry import Retry
from google.cloud.memcache_v1beta2 import CloudMemcacheClient
from google.cloud.memcache_v1beta2.types import cloud_memcache
from google.cloud.redis_v1 import (
    CloudRedisClient,
    FailoverInstanceRequest,
    InputConfig,
    Instance,
    OutputConfig,
)
from google.protobuf.field_mask_pb2 import FieldMask
from airflow import version
from airflow.exceptions import AirflowException
from airflow.providers.google.common.hooks.base_google import PROVIDE_PROJECT_ID, GoogleBaseHook
class CloudMemorystoreHook(GoogleBaseHook):
    """
    Hook for Google Cloud Memorystore APIs.

    All the methods in the hook where project_id is used must be called with
    keyword arguments rather than positional.

    :param gcp_conn_id: The connection ID to use when fetching connection info.
    :param delegate_to: The account to impersonate using domain-wide delegation of authority,
        if any. For this to work, the service account making the request must have
        domain-wide delegation enabled.
    :param impersonation_chain: Optional service account to impersonate using short-term
        credentials, or chained list of accounts required to get the access_token
        of the last account in the list, which will be impersonated in the request.
        If set as a string, the account must grant the originating account
        the Service Account Token Creator IAM role.
        If set as a sequence, the identities from the list must grant
        Service Account Token Creator IAM role to the directly preceding identity, with first
        account from the list granting this role to the originating account.
    """

    def __init__(
        self,
        gcp_conn_id: str = "google_cloud_default",
        delegate_to: Optional[str] = None,
        impersonation_chain: Optional[Union[str, Sequence[str]]] = None,
    ) -> None:
        super().__init__(
            gcp_conn_id=gcp_conn_id,
            delegate_to=delegate_to,
            impersonation_chain=impersonation_chain,
        )
        self._client: Optional[CloudRedisClient] = None

    def get_conn(self) -> CloudRedisClient:
        """Retrieves client library object that allow access to Cloud Memorystore service."""
        if not self._client:
            self._client = CloudRedisClient(credentials=self._get_credentials())
        return self._client

    @staticmethod
    def _append_label(instance: Instance, key: str, val: str) -> Instance:
        """
        Append labels to provided Instance type

        Labels must fit the regex ``[a-z]([-a-z0-9]*[a-z0-9])?`` (current
        airflow version string follows semantic versioning spec: x.y.z).

        :param instance: The proto to append resource_label airflow
            version to
        :param key: The key label
        :param val:
        :return: The cluster proto updated with new label
        """
        val = val.replace(".", "-").replace("+", "-")
        instance.labels.update({key: val})
        return instance
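The version-label mangling in `_append_label` can be exercised in isolation. A standalone sketch (the `sanitize_label` name is hypothetical; the replace chain mirrors the method above):

```python
def sanitize_label(val: str) -> str:
    # GCP label values must match [a-z]([-a-z0-9]*[a-z0-9])?, so the "."
    # and "+" that appear in semantic version strings are mapped to "-".
    return val.replace(".", "-").replace("+", "-")
```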
    @GoogleBaseHook.fallback_to_default_project_id
    def create_instance(
        self,
        location: str,
        instance_id: str,
        instance: Union[Dict, Instance],
        project_id: str = PROVIDE_PROJECT_ID,
        retry: Union[Retry, _MethodDefault] = DEFAULT,
        timeout: Optional[float] = None,
        metadata: Sequence[Tuple[str, str]] = (),
    ):
        """
        Creates a Redis instance based on the specified tier and memory size.

        By default, the instance is accessible from the project's `default network
        <https://cloud.google.com/compute/docs/networks-and-firewalls#networks>`__.

        :param location: The location of the Cloud Memorystore instance (for example europe-west1)
        :param instance_id: Required. The logical name of the Redis instance in the customer project with the
            following restrictions:

            - Must contain only lowercase letters, numbers, and hyphens.
            - Must start with a letter.
            - Must be between 1-40 characters.
            - Must end with a number or a letter.
            - Must be unique within the customer project / location
        :param instance: Required. A Redis [Instance] resource

            If a dict is provided, it must be of the same form as the protobuf message
            :class:`~google.cloud.redis_v1.types.Instance`
        :param project_id: Project ID of the project that contains the instance. If set
            to None or missing, the default project_id from the Google Cloud connection is used.
        :param retry: A retry object used to retry requests. If ``None`` is specified, requests will not be
            retried.
        :param timeout: The amount of time, in seconds, to wait for the request to complete. Note that if
            ``retry`` is specified, the timeout applies to each individual attempt.
        :param metadata: Additional metadata that is provided to the method.
        """
        client = self.get_conn()
        if isinstance(instance, dict):
            instance = Instance(**instance)
        elif not isinstance(instance, Instance):
            raise AirflowException("instance is not instance of Instance type or python dict")

        parent = f"projects/{project_id}/locations/{location}"
        instance_name = f"projects/{project_id}/locations/{location}/instances/{instance_id}"
        try:
            self.log.info("Fetching instance: %s", instance_name)
            instance = client.get_instance(
                request={'name': instance_name}, retry=retry, timeout=timeout, metadata=metadata or ()
            )
            self.log.info("Instance exists. Skipping creation.")
            return instance
        except NotFound:
            self.log.info("Instance not exists.")

        self._append_label(instance, "airflow-version", "v" + version.version)

        result = client.create_instance(
            request={'parent': parent, 'instance_id': instance_id, 'instance': instance},
            retry=retry,
            timeout=timeout,
            metadata=metadata,
        )
        result.result()
        self.log.info("Instance created.")
        return client.get_instance(
            request={'name': instance_name}, retry=retry, timeout=timeout, metadata=metadata or ()
        )
    @GoogleBaseHook.fallback_to_default_project_id
    def delete_instance(
        self,
        location: str,
        instance: str,
        project_id: str = PROVIDE_PROJECT_ID,
        retry: Union[Retry, _MethodDefault] = DEFAULT,
        timeout: Optional[float] = None,
        metadata: Sequence[Tuple[str, str]] = (),
    ):
        """
        Deletes a specific Redis instance. Instance stops serving and data is deleted.

        :param location: The location of the Cloud Memorystore instance (for example europe-west1)
        :param instance: The logical name of the Redis instance in the customer project.
        :param project_id: Project ID of the project that contains the instance. If set
            to None or missing, the default project_id from the Google Cloud connection is used.
        :param retry: A retry object used to retry requests. If ``None`` is specified, requests will not be
            retried.
        :param timeout: The amount of time, in seconds, to wait for the request to complete. Note that if
            ``retry`` is specified, the timeout applies to each individual attempt.
        :param metadata: Additional metadata that is provided to the method.
        """
        client = self.get_conn()
        name = f"projects/{project_id}/locations/{location}/instances/{instance}"
        self.log.info("Fetching Instance: %s", name)
        instance = client.get_instance(
            request={'name': name},
            retry=retry,
            timeout=timeout,
            metadata=metadata,
        )
        if not instance:
            return

        self.log.info("Deleting Instance: %s", name)
        result = client.delete_instance(
            request={'name': name},
            retry=retry,
            timeout=timeout,
            metadata=metadata,
        )
        result.result()
        self.log.info("Instance deleted: %s", name)
    @GoogleBaseHook.fallback_to_default_project_id
    def export_instance(
        self,
        location: str,
        instance: str,
        output_config: Union[Dict, OutputConfig],
        project_id: str = PROVIDE_PROJECT_ID,
        retry: Union[Retry, _MethodDefault] = DEFAULT,
        timeout: Optional[float] = None,
        metadata: Sequence[Tuple[str, str]] = (),
    ):
        """
        Export Redis instance data into a Redis RDB format file in Cloud Storage.

        Redis will continue serving during this operation.

        :param location: The location of the Cloud Memorystore instance (for example europe-west1)
        :param instance: The logical name of the Redis instance in the customer project.
        :param output_config: Required. Specify data to be exported.

            If a dict is provided, it must be of the same form as the protobuf message
            :class:`~google.cloud.redis_v1.types.OutputConfig`
        :param project_id: Project ID of the project that contains the instance. If set
            to None or missing, the default project_id from the Google Cloud connection is used.
        :param retry: A retry object used to retry requests. If ``None`` is specified, requests will not be
            retried.
        :param timeout: The amount of time, in seconds, to wait for the request to complete. Note that if
            ``retry`` is specified, the timeout applies to each individual attempt.
        :param metadata: Additional metadata that is provided to the method.
        """
        client = self.get_conn()
        name = f"projects/{project_id}/locations/{location}/instances/{instance}"
        self.log.info("Exporting Instance: %s", name)
        result = client.export_instance(
            request={'name': name, 'output_config': output_config},
            retry=retry,
            timeout=timeout,
            metadata=metadata,
        )
        result.result()
        self.log.info("Instance exported: %s", name)
    @GoogleBaseHook.fallback_to_default_project_id
    def failover_instance(
        self,
        location: str,
        instance: str,
        data_protection_mode: FailoverInstanceRequest.DataProtectionMode,
        project_id: str = PROVIDE_PROJECT_ID,
        retry: Union[Retry, _MethodDefault] = DEFAULT,
        timeout: Optional[float] = None,
        metadata: Sequence[Tuple[str, str]] = (),
    ):
        """
        Initiates a failover of the primary node to current replica node for a specific STANDARD tier Cloud
        Memorystore for Redis instance.

        :param location: The location of the Cloud Memorystore instance (for example europe-west1)
        :param instance: The logical name of the Redis instance in the customer project.
        :param data_protection_mode: Optional. Available data protection modes that the user can choose. If
            it's unspecified, data protection mode will be LIMITED_DATA_LOSS by default.
            .DataProtectionMode
        :param project_id: Project ID of the project that contains the instance. If set
            to None or missing, the default project_id from the Google Cloud connection is used.
        :param retry: A retry object used to retry requests. If ``None`` is specified, requests will not be
            retried.
        :param timeout: The amount of time, in seconds, to wait for the request to complete. Note that if
            ``retry`` is specified, the timeout applies to each individual attempt.
        :param metadata: Additional metadata that is provided to the method.
        """
        client = self.get_conn()
        name = f"projects/{project_id}/locations/{location}/instances/{instance}"
        self.log.info("Failovering Instance: %s", name)

        result = client.failover_instance(
            request={'name': name, 'data_protection_mode': data_protection_mode},
            retry=retry,
            timeout=timeout,
            metadata=metadata,
        )
        result.result()
        self.log.info("Instance failovered: %s", name)
    @GoogleBaseHook.fallback_to_default_project_id
    def get_instance(
        self,
        location: str,
        instance: str,
        project_id: str = PROVIDE_PROJECT_ID,
        retry: Union[Retry, _MethodDefault] = DEFAULT,
        timeout: Optional[float] = None,
        metadata: Sequence[Tuple[str, str]] = (),
    ):
        """
        Gets the details of a specific Redis instance.

        :param location: The location of the Cloud Memorystore instance (for example europe-west1)
        :param instance: The logical name of the Redis instance in the customer project.
        :param project_id: Project ID of the project that contains the instance. If set
            to None or missing, the default project_id from the Google Cloud connection is used.
        :param retry: A retry object used to retry requests. If ``None`` is specified, requests will not be
            retried.
        :param timeout: The amount of time, in seconds, to wait for the request to complete. Note that if
            ``retry`` is specified, the timeout applies to each individual attempt.
        :param metadata: Additional metadata that is provided to the method.
        """
        client = self.get_conn()
        name = f"projects/{project_id}/locations/{location}/instances/{instance}"
        result = client.get_instance(
            request={'name': name},
            retry=retry,
            timeout=timeout,
            metadata=metadata,
        )
        self.log.info("Fetched Instance: %s", name)
        return result
    @GoogleBaseHook.fallback_to_default_project_id
    def import_instance(
        self,
        location: str,
        instance: str,
        input_config: Union[Dict, InputConfig],
        project_id: str = PROVIDE_PROJECT_ID,
        retry: Union[Retry, _MethodDefault] = DEFAULT,
        timeout: Optional[float] = None,
        metadata: Sequence[Tuple[str, str]] = (),
    ):
        """
        Import a Redis RDB snapshot file from Cloud Storage into a Redis instance.

        Redis may stop serving during this operation. Instance state will be IMPORTING for entire operation.
        When complete, the instance will contain only data from the imported file.

        :param location: The location of the Cloud Memorystore instance (for example europe-west1)
        :param instance: The logical name of the Redis instance in the customer project.
        :param input_config: Required. Specify data to be imported.

            If a dict is provided, it must be of the same form as the protobuf message
            :class:`~google.cloud.redis_v1.types.InputConfig`
        :param project_id: Project ID of the project that contains the instance. If set
            to None or missing, the default project_id from the Google Cloud connection is used.
        :param retry: A retry object used to retry requests. If ``None`` is specified, requests will not be
            retried.
        :param timeout: The amount of time, in seconds, to wait for the request to complete. Note that if
            ``retry`` is specified, the timeout applies to each individual attempt.
        :param metadata: Additional metadata that is provided to the method.
        """
        client = self.get_conn()
        name = f"projects/{project_id}/locations/{location}/instances/{instance}"
        self.log.info("Importing Instance: %s", name)
        result = client.import_instance(
            request={'name': name, 'input_config': input_config},
            retry=retry,
            timeout=timeout,
            metadata=metadata,
        )
        result.result()
        self.log.info("Instance imported: %s", name)
    @GoogleBaseHook.fallback_to_default_project_id
    def list_instances(
        self,
        location: str,
        page_size: int,
        project_id: str = PROVIDE_PROJECT_ID,
        retry: Union[Retry, _MethodDefault] = DEFAULT,
        timeout: Optional[float] = None,
        metadata: Sequence[Tuple[str, str]] = (),
    ):
        """
        Lists all Redis instances owned by a project in either the specified location (region) or all
        locations.

        :param location: The location of the Cloud Memorystore instance (for example europe-west1)

            If it is specified as ``-`` (wildcard), then all regions available to the project are
            queried, and the results are aggregated.
        :param page_size: The maximum number of resources contained in the underlying API response. If page
            streaming is performed per- resource, this parameter does not affect the return value. If page
            streaming is performed per-page, this determines the maximum number of resources in a page.
        :param project_id: Project ID of the project that contains the instance. If set
            to None or missing, the default project_id from the Google Cloud connection is used.
        :param retry: A retry object used to retry requests. If ``None`` is specified, requests will not be
            retried.
        :param timeout: The amount of time, in seconds, to wait for the request to complete. Note that if
            ``retry`` is specified, the timeout applies to each individual attempt.
        :param metadata: Additional metadata that is provided to the method.
        """
        client = self.get_conn()
        parent = f"projects/{project_id}/locations/{location}"
        result = client.list_instances(
            request={'parent': parent, 'page_size': page_size},
            retry=retry,
            timeout=timeout,
            metadata=metadata,
        )
        self.log.info("Fetched instances")
        return result
    @GoogleBaseHook.fallback_to_default_project_id
    def update_instance(
        self,
        update_mask: Union[Dict, FieldMask],
        instance: Union[Dict, Instance],
        project_id: str = PROVIDE_PROJECT_ID,
        location: Optional[str] = None,
        instance_id: Optional[str] = None,
        retry: Union[Retry, _MethodDefault] = DEFAULT,
        timeout: Optional[float] = None,
        metadata: Sequence[Tuple[str, str]] = (),
    ):
        """
        Updates the metadata and configuration of a specific Redis instance.

        :param update_mask: Required. Mask of fields to update. At least one path must be supplied in this
            field. The elements of the repeated paths field may only include these fields from ``Instance``:

            - ``displayName``
            - ``labels``
            - ``memorySizeGb``
            - ``redisConfig``

            If a dict is provided, it must be of the same form as the protobuf message
            :class:`~google.protobuf.field_mask_pb2.FieldMask`
        :param instance: Required. Update description. Only fields specified in ``update_mask`` are updated.

            If a dict is provided, it must be of the same form as the protobuf message
            :class:`~google.cloud.redis_v1.types.Instance`
        :param location: The location of the Cloud Memorystore instance (for example europe-west1)
        :param instance_id: The logical name of the Redis instance in the customer project.
        :param project_id: Project ID of the project that contains the instance. If set
            to None or missing, the default project_id from the Google Cloud connection is used.
        :param retry: A retry object used to retry requests. If ``None`` is specified, requests will not be
            retried.
        :param timeout: The amount of time, in seconds, to wait for the request to complete. Note that if
            ``retry`` is specified, the timeout applies to each individual attempt.
        :param metadata: Additional metadata that is provided to the method.
        """
        client = self.get_conn()

        if isinstance(instance, dict):
            instance = Instance(**instance)
        elif not isinstance(instance, Instance):
            raise AirflowException("instance is not instance of Instance type or python dict")

        if location and instance_id:
            name = f"projects/{project_id}/locations/{location}/instances/{instance_id}"
            instance.name = name

        self.log.info("Updating instances: %s", instance.name)
        result = client.update_instance(
            request={'update_mask': update_mask, 'instance': instance},
            retry=retry,
            timeout=timeout,
            metadata=metadata,
        )
        result.result()
        self.log.info("Instance updated: %s", instance.name)
class CloudMemorystoreMemcachedHook(GoogleBaseHook):
    """
    Hook for Google Cloud Memorystore for Memcached service APIs.

    All the methods in the hook where project_id is used must be called with
    keyword arguments rather than positional.

    :param gcp_conn_id: The connection ID to use when fetching connection info.
    :param delegate_to: The account to impersonate using domain-wide delegation of authority,
        if any. For this to work, the service account making the request must have
        domain-wide delegation enabled.
    :param impersonation_chain: Optional service account to impersonate using short-term
        credentials, or chained list of accounts required to get the access_token
        of the last account in the list, which will be impersonated in the request.
        If set as a string, the account must grant the originating account
        the Service Account Token Creator IAM role.
        If set as a sequence, the identities from the list must grant
        Service Account Token Creator IAM role to the directly preceding identity, with first
        account from the list granting this role to the originating account.
    """

    def __init__(
        self,
        gcp_conn_id: str = "google_cloud_default",
        delegate_to: Optional[str] = None,
        impersonation_chain: Optional[Union[str, Sequence[str]]] = None,
    ) -> None:
        super().__init__(
            gcp_conn_id=gcp_conn_id,
            delegate_to=delegate_to,
            impersonation_chain=impersonation_chain,
        )
        self._client: Optional[CloudMemcacheClient] = None

    def get_conn(
        self,
    ):
        """Retrieves client library object that allow access to Cloud Memorystore Memcached service."""
        if not self._client:
            self._client = CloudMemcacheClient(credentials=self._get_credentials())
        return self._client

    @staticmethod
    def _append_label(instance: cloud_memcache.Instance, key: str, val: str) -> cloud_memcache.Instance:
        """
        Append labels to provided Instance type

        Labels must fit the regex ``[a-z]([-a-z0-9]*[a-z0-9])?`` (current
        airflow version string follows semantic versioning spec: x.y.z).

        :param instance: The proto to append resource_label airflow
            version to
        :param key: The key label
        :param val:
        :return: The cluster proto updated with new label
        """
        val = val.replace(".", "-").replace("+", "-")
        instance.labels.update({key: val})
        return instance
    @GoogleBaseHook.fallback_to_default_project_id
    def apply_parameters(
        self,
        node_ids: Sequence[str],
        apply_all: bool,
        project_id: str,
        location: str,
        instance_id: str,
        retry: Union[Retry, _MethodDefault] = DEFAULT,
        timeout: Optional[float] = None,
        metadata: Sequence[Tuple[str, str]] = (),
    ):
        """
        Will update current set of Parameters to the set of specified nodes of the Memcached Instance.

        :param node_ids: Nodes to which we should apply the instance-level parameter group.
        :param apply_all: Whether to apply instance-level parameter group to all nodes. If set to true,
            will explicitly restrict users from specifying any nodes, and apply parameter group updates
            to all nodes within the instance.
        :param location: The location of the Cloud Memorystore instance (for example europe-west1)
        :param instance_id: The logical name of the Memcached instance in the customer project.
        :param project_id: Project ID of the project that contains the instance. If set
            to None or missing, the default project_id from the Google Cloud connection is used.
        :param retry: A retry object used to retry requests. If ``None`` is specified, requests will not be
            retried.
        :param timeout: The amount of time, in seconds, to wait for the request to complete. Note that if
            ``retry`` is specified, the timeout applies to each individual attempt.
        :param metadata: Additional metadata that is provided to the method.
        """
        client = self.get_conn()
        metadata = metadata or ()
        name = CloudMemcacheClient.instance_path(project_id, location, instance_id)

        self.log.info("Applying update to instance: %s", instance_id)
        result = client.apply_parameters(
            name=name,
            node_ids=node_ids,
            apply_all=apply_all,
            retry=retry,
            timeout=timeout,
            metadata=metadata,
        )
        result.result()
        self.log.info("Instance updated: %s", instance_id)
@GoogleBaseHook.fallback_to_default_project_id
def create_instance(
self,
location: str,
instance_id: str,
instance: Union[Dict, cloud_memcache.Instance],
project_id: str,
retry: Union[Retry, _MethodDefault] = DEFAULT,
timeout: Optional[float] = None,
metadata: Sequence[Tuple[str, str]] = (),
):
"""
Creates a Memcached instance based on the specified tier and memory size.
By default, the instance is accessible from the project's `default network
<https://cloud.google.com/compute/docs/networks-and-firewalls#networks>`__.
:param location: The location of the Cloud Memorystore instance (for example europe-west1)
:param instance_id: Required. The logical name of the Memcached instance in the customer project
with the following restrictions:
- Must contain only lowercase letters, numbers, and hyphens.
- Must start with a letter.
- Must be between 1-40 characters.
- Must end with a number or a letter.
- Must be unique within the customer project / location
:param instance: Required. A Memcached [Instance] resource
If a dict is provided, it must be of the same form as the protobuf message
:class:`~google.cloud.memcache_v1beta2.types.cloud_memcache.Instance`
:param project_id: Project ID of the project that contains the instance. If set
to None or missing, the default project_id from the GCP connection is used.
:param retry: A retry object used to retry requests. If ``None`` is specified, requests will not be
retried.
:param timeout: The amount of time, in seconds, to wait for the request to complete. Note that if
``retry`` is specified, the timeout applies to each individual attempt.
:param metadata: Additional metadata that is provided to the method.
"""
client = self.get_conn()
metadata = metadata or ()
parent = path_template.expand(
"projects/{project}/locations/{location}", project=project_id, location=location
)
instance_name = CloudMemcacheClient.instance_path(project_id, location, instance_id)
try:
instance = client.get_instance(
name=instance_name, retry=retry, timeout=timeout, metadata=metadata
)
self.log.info("Instance exists. Skipping creation.")
return instance
except NotFound:
self.log.info("Instance not exists.")
if isinstance(instance, dict):
instance = cloud_memcache.Instance(instance)
elif not isinstance(instance, cloud_memcache.Instance):
raise AirflowException("instance is not instance of Instance type or python dict")
self._append_label(instance, "airflow-version", "v" + version.version)
result = client.create_instance(
parent=parent,
instance_id=instance_id,
resource=instance,
retry=retry,
timeout=timeout,
metadata=metadata,
)
result.result()
self.log.info("Instance created.")
return client.get_instance(
name=instance_name,
retry=retry,
timeout=timeout,
metadata=metadata,
)
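The ``instance_id`` restrictions documented in the docstring above can be captured by a single regular expression. A minimal sketch (the hook itself performs no such client-side validation; the pattern name is illustrative):

```python
import re

# Encodes the documented instance_id rules: lowercase letters, digits and
# hyphens only; must start with a letter; must end with a letter or digit;
# 1-40 characters in total (1 head char + up to 38 middle + 1 tail).
INSTANCE_ID_RE = re.compile(r"^[a-z]([a-z0-9-]{0,38}[a-z0-9])?$")

print(bool(INSTANCE_ID_RE.match("memcached-1")))  # True
print(bool(INSTANCE_ID_RE.match("1bad")))         # False
```

Checking the name locally before calling the API gives a faster failure than waiting for the server to reject the request.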
@GoogleBaseHook.fallback_to_default_project_id
def delete_instance(
self,
location: str,
instance: str,
project_id: str,
retry: Union[Retry, _MethodDefault] = DEFAULT,
timeout: Optional[float] = None,
metadata: Sequence[Tuple[str, str]] = (),
):
"""
Deletes a specific Memcached instance. The instance stops serving and its data is deleted.
:param location: The location of the Cloud Memorystore instance (for example europe-west1)
:param instance: The logical name of the Memcached instance in the customer project.
:param project_id: Project ID of the project that contains the instance. If set
to None or missing, the default project_id from the GCP connection is used.
:param retry: A retry object used to retry requests. If ``None`` is specified, requests will not be
retried.
:param timeout: The amount of time, in seconds, to wait for the request to complete. Note that if
``retry`` is specified, the timeout applies to each individual attempt.
:param metadata: Additional metadata that is provided to the method.
"""
client = self.get_conn()
metadata = metadata or ()
name = CloudMemcacheClient.instance_path(project_id, location, instance)
self.log.info("Fetching Instance: %s", name)
instance = client.get_instance(
name=name,
retry=retry,
timeout=timeout,
metadata=metadata,
)
if not instance:
return
self.log.info("Deleting Instance: %s", name)
result = client.delete_instance(
name=name,
retry=retry,
timeout=timeout,
metadata=metadata,
)
result.result()
self.log.info("Instance deleted: %s", name)
@GoogleBaseHook.fallback_to_default_project_id
def get_instance(
self,
location: str,
instance: str,
project_id: str,
retry: Union[Retry, _MethodDefault] = DEFAULT,
timeout: Optional[float] = None,
metadata: Sequence[Tuple[str, str]] = (),
):
"""
Gets the details of a specific Memcached instance.
:param location: The location of the Cloud Memorystore instance (for example europe-west1)
:param instance: The logical name of the Memcached instance in the customer project.
:param project_id: Project ID of the project that contains the instance. If set
to None or missing, the default project_id from the GCP connection is used.
:param retry: A retry object used to retry requests. If ``None`` is specified, requests will not be
retried.
:param timeout: The amount of time, in seconds, to wait for the request to complete. Note that if
``retry`` is specified, the timeout applies to each individual attempt.
:param metadata: Additional metadata that is provided to the method.
"""
client = self.get_conn()
metadata = metadata or ()
name = CloudMemcacheClient.instance_path(project_id, location, instance)
result = client.get_instance(name=name, retry=retry, timeout=timeout, metadata=metadata)
self.log.info("Fetched Instance: %s", name)
return result
@GoogleBaseHook.fallback_to_default_project_id
def list_instances(
self,
location: str,
project_id: str,
retry: Union[Retry, _MethodDefault] = DEFAULT,
timeout: Optional[float] = None,
metadata: Sequence[Tuple[str, str]] = (),
):
"""
Lists all Memcached instances owned by a project in either the specified location (region) or all
locations.
:param location: The location of the Cloud Memorystore instance (for example europe-west1)
If it is specified as ``-`` (wildcard), then all regions available to the project are
queried, and the results are aggregated.
:param project_id: Project ID of the project that contains the instance. If set
to None or missing, the default project_id from the GCP connection is used.
:param retry: A retry object used to retry requests. If ``None`` is specified, requests will not be
retried.
:param timeout: The amount of time, in seconds, to wait for the request to complete. Note that if
``retry`` is specified, the timeout applies to each individual attempt.
:param metadata: Additional metadata that is provided to the method.
"""
client = self.get_conn()
metadata = metadata or ()
parent = path_template.expand(
"projects/{project}/locations/{location}", project=project_id, location=location
)
result = client.list_instances(
parent=parent,
retry=retry,
timeout=timeout,
metadata=metadata,
)
self.log.info("Fetched instances")
return result
@GoogleBaseHook.fallback_to_default_project_id
def update_instance(
self,
update_mask: Union[Dict, FieldMask],
instance: Union[Dict, cloud_memcache.Instance],
project_id: str,
location: Optional[str] = None,
instance_id: Optional[str] = None,
retry: Union[Retry, _MethodDefault] = DEFAULT,
timeout: Optional[float] = None,
metadata: Sequence[Tuple[str, str]] = (),
):
"""
Updates the metadata and configuration of a specific Memcached instance.
:param update_mask: Required. Mask of fields to update. At least one path must be supplied in this
field. The elements of the repeated paths field may only include these fields from ``Instance``:
- ``displayName``
If a dict is provided, it must be of the same form as the protobuf message
:class:`~google.protobuf.field_mask_pb2.FieldMask`
:param instance: Required. Update description. Only fields specified in ``update_mask`` are updated.
If a dict is provided, it must be of the same form as the protobuf message
:class:`~google.cloud.memcache_v1beta2.types.cloud_memcache.Instance`
:param location: The location of the Cloud Memorystore instance (for example europe-west1)
:param instance_id: The logical name of the Memcached instance in the customer project.
:param project_id: Project ID of the project that contains the instance. If set
to None or missing, the default project_id from the Google Cloud connection is used.
:param retry: A retry object used to retry requests. If ``None`` is specified, requests will not be
retried.
:param timeout: The amount of time, in seconds, to wait for the request to complete. Note that if
``retry`` is specified, the timeout applies to each individual attempt.
:param metadata: Additional metadata that is provided to the method.
"""
client = self.get_conn()
metadata = metadata or ()
if isinstance(instance, dict):
instance = cloud_memcache.Instance(instance)
elif not isinstance(instance, cloud_memcache.Instance):
raise AirflowException("instance is not an instance of Instance type or python dict")
if location and instance_id:
name = CloudMemcacheClient.instance_path(project_id, location, instance_id)
instance.name = name
self.log.info("Updating instances: %s", instance.name)
result = client.update_instance(
update_mask=update_mask, resource=instance, retry=retry, timeout=timeout, metadata=metadata
)
result.result()
self.log.info("Instance updated: %s", instance.name)
@GoogleBaseHook.fallback_to_default_project_id
def update_parameters(
self,
update_mask: Union[Dict, FieldMask],
parameters: Union[Dict, cloud_memcache.MemcacheParameters],
project_id: str,
location: str,
instance_id: str,
retry: Union[Retry, _MethodDefault] = DEFAULT,
timeout: Optional[float] = None,
metadata: Sequence[Tuple[str, str]] = (),
):
"""
Updates the defined Memcached parameters for an existing instance. This method only stages the
parameters; it must be followed by ``apply_parameters`` to apply them to the nodes of
the Memcached instance.
:param update_mask: Required. Mask of fields to update.
If a dict is provided, it must be of the same form as the protobuf message
:class:`~google.protobuf.field_mask_pb2.FieldMask`
:param parameters: The parameters to apply to the instance.
If a dict is provided, it must be of the same form as the protobuf message
:class:`~google.cloud.memcache_v1beta2.types.cloud_memcache.MemcacheParameters`
:param location: The location of the Cloud Memorystore instance (for example europe-west1)
:param instance_id: The logical name of the Memcached instance in the customer project.
:param project_id: Project ID of the project that contains the instance. If set
to None or missing, the default project_id from the Google Cloud connection is used.
:param retry: A retry object used to retry requests. If ``None`` is specified, requests will not be
retried.
:param timeout: The amount of time, in seconds, to wait for the request to complete. Note that if
``retry`` is specified, the timeout applies to each individual attempt.
:param metadata: Additional metadata that is provided to the method.
"""
client = self.get_conn()
metadata = metadata or ()
if isinstance(parameters, dict):
parameters = cloud_memcache.MemcacheParameters(parameters)
elif not isinstance(parameters, cloud_memcache.MemcacheParameters):
raise AirflowException("parameters is not an instance of MemcacheParameters type or python dict")
name = CloudMemcacheClient.instance_path(project_id, location, instance_id)
self.log.info("Staging update to instance: %s", instance_id)
result = client.update_parameters(
name=name,
update_mask=update_mask,
parameters=parameters,
retry=retry,
timeout=timeout,
metadata=metadata,
)
result.result()
self.log.info("Update staged for instance: %s", instance_id)
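All of the hook methods above address an instance through the same fully-qualified resource name that ``CloudMemcacheClient.instance_path`` builds. A self-contained sketch of that formatting (plain string templating; no Google client library assumed — this is an illustrative re-implementation, not the library function):

```python
def instance_path(project_id: str, location: str, instance_id: str) -> str:
    # Fully-qualified Memorystore Memcached resource name, as consumed by
    # get_instance / delete_instance / update_parameters above.
    return f"projects/{project_id}/locations/{location}/instances/{instance_id}"

print(instance_path("my-project", "europe-west1", "memcached-1"))
# projects/my-project/locations/europe-west1/instances/memcached-1
```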
# -*- coding: utf-8 -*-
"""
Created on Mon Jun 15 16:50:31 2020
@author: SE
"""
import os
import pandas as pd
#Please specify your dataset directory.
os.chdir("your dataset directory")
df_PM=pd.read_csv("RQ2_LDA_topics_mapped_by_major_detail_27_6_20.csv", low_memory=False)
#"""
# Map each LDA dominant topic onto its semi-major category
# (transcribed verbatim from the original per-topic if-blocks; every
# topic 0-14 maps to exactly one category, so row order is preserved).
topic_to_semi_major = {0: 0, 1: 0, 6: 1, 8: 1, 3: 2, 11: 2, 9: 3, 14: 3,
                       7: 4, 5: 5, 2: 6, 10: 7, 4: 8, 12: 8, 13: 9}
df = df_PM[['Major_cat', 'post_Enviroment', 'post_Dependency', 'post_category',
            'post_language', 'Document_No', 'Dominant_Topic', 'Topic_Perc_Contrib',
            'Keywords', 'Title', 'Id', 'Tags', 'AcceptedAnswerId']].copy()
df.insert(0, 'Semi_major', df_PM['Dominant_Topic'].map(topic_to_semi_major))
df.to_csv("RQ2_LDA_topics_mapped_by_major_minor_detail_27_6_20.csv")
#"""
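The topic-to-category assignment this script performs is a pure lookup, so it can be exercised in isolation with pandas ``Series.map`` on toy data (the mapping below is transcribed from the script above):

```python
import pandas as pd

# Dominant_Topic -> Semi_major, as assigned by the script above.
topic_to_semi_major = {0: 0, 1: 0, 6: 1, 8: 1, 3: 2, 11: 2, 9: 3, 14: 3,
                       7: 4, 5: 5, 2: 6, 10: 7, 4: 8, 12: 8, 13: 9}

toy = pd.DataFrame({'Dominant_Topic': [0, 6, 11, 13]})
toy['Semi_major'] = toy['Dominant_Topic'].map(topic_to_semi_major)
print(toy['Semi_major'].tolist())  # [0, 1, 2, 9]
```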
] | null | null | null | #
# Licensed to Dagda under one or more contributor
# license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright
# ownership. Dagda licenses this file to you under
# the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
import unittest
from unittest.mock import Mock
import pymongo
import datetime
from dagda.driver.mongodb_driver import MongoDbDriver
import pytest
# -- Test suite
class MongoDbDriverTestCase(unittest.TestCase):
def test_get_vulnerabilities_product_full_happy_path(self):
mock_driver = FullGetVulnProdMongoDbDriver()
vulnerabilities = mock_driver.get_vulnerabilities('openldap')
self.assertEqual(len(vulnerabilities), 6)
self.assertDictEqual(vulnerabilities[0],{"CVE-2002-2001":{"cvss_access_vector": "Network",
"cveid": "CVE-2002-2002",
"cvss_base": 7.5,
"cvss_integrity_impact": "Partial",
"cvss_availability_impact": "Partial",
"summary": "Summary example",
"cvss_confidentiality_impact": "Partial",
"cvss_vector": ["AV:N","AC:L","Au:N","C:P","I:P","A:P"],
"cvss_authentication": "None required",
"cvss_access_complexity": "Low",
"pub_date": datetime.datetime.now().strftime('%d-%m-%Y'),
"cvss_impact": 6.4,
"cvss_exploit": 10.0,
"mod_date": datetime.datetime.now().strftime('%d-%m-%Y'),
"cweid": "CWE-0"
}})
self.assertDictEqual(vulnerabilities[1],{"CVE-2002-2002":{"cvss_access_vector": "Network",
"cveid": "CVE-2002-2002",
"cvss_base": 7.5,
"cvss_integrity_impact": "Partial",
"cvss_availability_impact": "Partial",
"summary": "Summary example",
"cvss_confidentiality_impact": "Partial",
"cvss_vector": ["AV:N","AC:L","Au:N","C:P","I:P","A:P"],
"cvss_authentication": "None required",
"cvss_access_complexity": "Low",
"pub_date": datetime.datetime.now().strftime('%d-%m-%Y'),
"cvss_impact": 6.4,
"cvss_exploit": 10.0,
"mod_date": datetime.datetime.now().strftime('%d-%m-%Y'),
"cweid": "CWE-0"
}})
self.assertDictEqual(vulnerabilities[2],{"BID-1": {
"bugtraq_id": 15128,
"class": "Boundary Condition Error",
"cve": [
"CVE-2005-2978"
],
"local": "no",
"remote": "yes",
"title": "NetPBM PNMToPNG Buffer Overflow Vulnerability"
}})
self.assertDictEqual(vulnerabilities[3],{"BID-2": {
"bugtraq_id": 15128,
"class": "Boundary Condition Error",
"cve": [
"CVE-2005-2978"
],
"local": "no",
"remote": "yes",
"title": "NetPBM PNMToPNG Buffer Overflow Vulnerability"
}})
self.assertDictEqual(vulnerabilities[4],{"EXPLOIT_DB_ID-3": {'exploit_db_id': 1,
'description': 'Summary example',
'platform': 'Linux',
'type': 'DoS',
'port': 0
}})
self.assertDictEqual(vulnerabilities[5],{"EXPLOIT_DB_ID-4": {'exploit_db_id': 1,
'description': 'Summary example',
'platform': 'Linux',
'type': 'DoS',
'port': 0
}})
def test_get_vulnerabilities_product_and_version_full_happy_path(self):
mock_driver = FullGetVulnProdAndVersionMongoDbDriver()
vulnerabilities = mock_driver.get_vulnerabilities('openldap','2.2.20')
self.assertEqual(len(vulnerabilities), 6)
self.assertDictEqual(vulnerabilities[0],{"CVE-2005-4442":{"cvss_access_vector": "Network",
"cveid": "CVE-2005-4442",
"cvss_base": 7.5,
"cvss_integrity_impact": "Partial",
"cvss_availability_impact": "Partial",
"summary": "Summary example",
"cvss_confidentiality_impact": "Partial",
"cvss_vector": ["AV:N","AC:L","Au:N","C:P","I:P","A:P"],
"cvss_authentication": "None required",
"cvss_access_complexity": "Low",
"pub_date": datetime.datetime.now().strftime('%d-%m-%Y'),
"cvss_impact": 6.4,
"cvss_exploit": 10.0,
"mod_date": datetime.datetime.now().strftime('%d-%m-%Y'),
"cweid": "CWE-0"
}})
self.assertDictEqual(vulnerabilities[1],{"CVE-2006-2754":{"cvss_access_vector": "Network",
"cveid": "CVE-2005-4442",
"cvss_base": 7.5,
"cvss_integrity_impact": "Partial",
"cvss_availability_impact": "Partial",
"summary": "Summary example",
"cvss_confidentiality_impact": "Partial",
"cvss_vector": ["AV:N","AC:L","Au:N","C:P","I:P","A:P"],
"cvss_authentication": "None required",
"cvss_access_complexity": "Low",
"pub_date": datetime.datetime.now().strftime('%d-%m-%Y'),
"cvss_impact": 6.4,
"cvss_exploit": 10.0,
"mod_date": datetime.datetime.now().strftime('%d-%m-%Y'),
"cweid": "CWE-0"
}})
self.assertDictEqual(vulnerabilities[2],{"CVE-2007-5707":{"cvss_access_vector": "Network",
"cveid": "CVE-2005-4442",
"cvss_base": 7.5,
"cvss_integrity_impact": "Partial",
"cvss_availability_impact": "Partial",
"summary": "Summary example",
"cvss_confidentiality_impact": "Partial",
"cvss_vector": ["AV:N","AC:L","Au:N","C:P","I:P","A:P"],
"cvss_authentication": "None required",
"cvss_access_complexity": "Low",
"pub_date": datetime.datetime.now().strftime('%d-%m-%Y'),
"cvss_impact": 6.4,
"cvss_exploit": 10.0,
"mod_date": datetime.datetime.now().strftime('%d-%m-%Y'),
"cweid": "CWE-0"
}})
self.assertDictEqual(vulnerabilities[3],{"CVE-2011-4079":{"cvss_access_vector": "Network",
"cveid": "CVE-2005-4442",
"cvss_base": 7.5,
"cvss_integrity_impact": "Partial",
"cvss_availability_impact": "Partial",
"summary": "Summary example",
"cvss_confidentiality_impact": "Partial",
"cvss_vector": ["AV:N","AC:L","Au:N","C:P","I:P","A:P"],
"cvss_authentication": "None required",
"cvss_access_complexity": "Low",
"pub_date": datetime.datetime.now().strftime('%d-%m-%Y'),
"cvss_impact": 6.4,
"cvss_exploit": 10.0,
"mod_date": datetime.datetime.now().strftime('%d-%m-%Y'),
"cweid": "CWE-0"
}})
self.assertDictEqual(vulnerabilities[4],{"BID-83610": {
"bugtraq_id": 15128,
"class": "Boundary Condition Error",
"cve": [
"CVE-2005-2978"
],
"local": "no",
"remote": "yes",
"title": "NetPBM PNMToPNG Buffer Overflow Vulnerability"
}})
self.assertDictEqual(vulnerabilities[5],{"BID-83843": {
"bugtraq_id": 15128,
"class": "Boundary Condition Error",
"cve": [
"CVE-2005-2978"
],
"local": "no",
"remote": "yes",
"title": "NetPBM PNMToPNG Buffer Overflow Vulnerability"
}})
def test_bulk_insert_cves(self):
mock_driver = BulkInfoMongoDbDriver()
mock_driver.bulk_insert_cves(["CVE-2002-2001#Vendor 1#Product 1#1.1.0#2001",
"CVE-2002-2002#Vendor 2#Product 2#2.1.1#2002"])
mock_driver.db.cve.insert_many.assert_called_once_with([
{'cve_id': 'CVE-2002-2001', 'vendor': 'Vendor 1',
'product': 'Product 1', 'version': '1.1.0', 'year': 2001},
{'cve_id': 'CVE-2002-2002', 'vendor': 'Vendor 2',
'product': 'Product 2', 'version': '2.1.1', 'year': 2002}])
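The fixture strings fed to ``bulk_insert_cves`` are ``#``-delimited records. A sketch of the parsing the driver presumably performs to build the asserted documents (the function name here is illustrative, not Dagda's API):

```python
def parse_cve_record(record: str) -> dict:
    # "CVE-id#vendor#product#version#year" -> document for the `cve` collection.
    cve_id, vendor, product, version, year = record.split("#")
    return {"cve_id": cve_id, "vendor": vendor, "product": product,
            "version": version, "year": int(year)}

print(parse_cve_record("CVE-2002-2001#Vendor 1#Product 1#1.1.0#2001"))
# {'cve_id': 'CVE-2002-2001', 'vendor': 'Vendor 1', 'product': 'Product 1',
#  'version': '1.1.0', 'year': 2001}
```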
def test_bulk_insert_bids(self):
mock_driver = BulkInfoMongoDbDriver()
mock_driver.bulk_insert_bids(["1#Product 1#1.1.0", "2#Product 2#2.1.1"])
mock_driver.db.bid.insert_many.assert_called_once_with([
{'bugtraq_id': 1, 'product': 'Product 1', 'version': '1.1.0'},
{'bugtraq_id': 2, 'product': 'Product 2', 'version': '2.1.1'}])
def test_bulk_insert_exploit_db_ids(self):
mock_driver = BulkInfoMongoDbDriver()
mock_driver.bulk_insert_exploit_db_ids(["1#Product 1#1.1.0", "2#Product 2#2.1.1"])
mock_driver.db.exploit_db.insert_many.assert_called_once_with([
{'exploit_db_id': 1, 'product': 'Product 1', 'version': '1.1.0'},
{'exploit_db_id': 2, 'product': 'Product 2', 'version': '2.1.1'}])
def test_bulk_insert_sysdig_falco_events(self):
mock_driver = BulkInfoMongoDbDriver()
mock_driver.bulk_insert_sysdig_falco_events([{"container_id": "ef45au6756jh", "image_name": "alpine",
"output": "16:47:44.080226697: Warning Sensitive file opened for reading by non-trusted program (user=root command=cat /etc/shadow file=/etc/shadow)",
"priority": "Warning",
"rule": "read_sensitive_file_untrusted",
"time": "2016-06-06T23:47:44.080226697Z"
}])
mock_driver.db.falco_events.insert_many.assert_called_once_with([{"container_id": "ef45au6756jh",
"image_name": "alpine",
"output": "16:47:44.080226697: Warning Sensitive file opened for reading by non-trusted program (user=root command=cat /etc/shadow file=/etc/shadow)",
"priority": "Warning",
"rule": "read_sensitive_file_untrusted",
"time": 1465256864.080226
}])
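``bulk_insert_sysdig_falco_events`` is expected to turn the RFC 3339 nanosecond timestamp into epoch seconds (``1465256864.080226`` in the assertion above). A self-contained sketch of that conversion, truncating nanoseconds to microseconds (Dagda's actual implementation may differ; the function name is illustrative):

```python
from datetime import datetime, timezone

def falco_time_to_epoch(ts: str) -> float:
    # "2016-06-06T23:47:44.080226697Z" -> UTC epoch seconds, nanoseconds
    # truncated to microsecond precision.
    date_part, frac = ts.rstrip("Z").split(".")
    dt = datetime.strptime(date_part, "%Y-%m-%dT%H:%M:%S").replace(
        microsecond=int(frac[:6]), tzinfo=timezone.utc)
    return dt.timestamp()

print(falco_time_to_epoch("2016-06-06T23:47:44.080226697Z"))
# ~= 1465256864.080226
```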
def test_bulk_insert_rhsa(self):
mock_driver = BulkInfoMongoDbDriver()
mock_driver.bulk_insert_rhsa([{"vendor": "redhat", "product": "enterprise_linux", "version": "4", "rhsa_id": "RHSA-2010:0002-01"}])
mock_driver.db.rhsa.insert_many.assert_called_once_with([{"vendor": "redhat", "product": "enterprise_linux", "version": "4", "rhsa_id": "RHSA-2010:0002-01"}])
def test_bulk_insert_rhsa_empty(self):
# Test bug #85
mock_driver = BulkInfoMongoDbDriver()
mock_driver.bulk_insert_rhsa([])
mock_driver.db.rhsa.insert_many.assert_not_called()
def test_bulk_insert_rhba(self):
mock_driver = BulkInfoMongoDbDriver()
mock_driver.bulk_insert_rhba([{"vendor": "redhat", "product": "enterprise_linux", "version": "4", "rhba_id": "RHBA-2010:0002-01"}])
mock_driver.db.rhba.insert_many.assert_called_once_with([{"vendor": "redhat", "product": "enterprise_linux", "version": "4", "rhba_id": "RHBA-2010:0002-01"}])
def test_bulk_insert_rhba_empty(self):
# Test bug #85
mock_driver = BulkInfoMongoDbDriver()
mock_driver.bulk_insert_rhba([])
mock_driver.db.rhba.insert_many.assert_not_called()
def test_bulk_insert_rhsa_info(self):
mock_driver = BulkInfoMongoDbDriver()
mock_driver.bulk_insert_rhsa_info([{"cve": ["CVE-20101"], "title": "Test", "description": "Test CVE", "severity": "high", "rhsa_id": "RHSA-2010:0002-01"}])
mock_driver.db.rhsa_info.insert_many.assert_called_once_with([{"cve": ["CVE-20101"], "title": "Test", "description": "Test CVE", "severity": "high", "rhsa_id": "RHSA-2010:0002-01"}])
def test_bulk_insert_rhsa_info_empty(self):
# Test bug #85
mock_driver = BulkInfoMongoDbDriver()
mock_driver.bulk_insert_rhsa_info([])
mock_driver.db.rhsa_info.insert_many.assert_not_called()
def test_bulk_insert_rhba_info(self):
mock_driver = BulkInfoMongoDbDriver()
mock_driver.bulk_insert_rhba_info([{"cve": ["CVE-20101"], "title": "Test", "description": "Test CVE", "severity": "high", "rhba_id": "RHBA-2010:0002-01"}])
mock_driver.db.rhba_info.insert_many.assert_called_once_with([{"cve": ["CVE-20101"], "title": "Test", "description": "Test CVE", "severity": "high", "rhba_id": "RHBA-2010:0002-01"}])
def test_bulk_insert_rhba_info_empty(self):
# Test bug #85
mock_driver = BulkInfoMongoDbDriver()
mock_driver.bulk_insert_rhba_info([])
mock_driver.db.rhba_info.insert_many.assert_not_called()
def test_is_fp_false(self):
mock_driver = IsFPMongoDbDriver()
is_fp = mock_driver.is_fp('alpine', 'zlib')
self.assertFalse(is_fp)
def test_is_fp_true(self):
mock_driver = IsFPMongoDbDriver()
is_fp = mock_driver.is_fp('alpine', 'musl', '1.1.15')
self.assertTrue(is_fp)
def test_get_max_bid_inserted_zero(self):
mock_driver = MaxBidZeroMongoDbDriver()
max_bid = mock_driver.get_max_bid_inserted()
self.assertEqual(max_bid, 0)
def test_get_max_bid_inserted_not_zero(self):
mock_driver = MaxBidNotZeroMongoDbDriver()
max_bid = mock_driver.get_max_bid_inserted()
self.assertEqual(max_bid, 83843)
def test_remove_only_cve_for_update_empty(self):
mock_driver = RemoveOnlyCVEForUpdateEmptyCollectionNamesMongoDbDriver()
cve_year = mock_driver.remove_only_cve_for_update()
self.assertEqual(cve_year, 2002)
def test_remove_only_cve_for_update_minor_than_2002(self):
mock_driver = RemoveOnlyCVEForUpdateMinorThan2002MongoDbDriver()
cve_year = mock_driver.remove_only_cve_for_update()
self.assertEqual(cve_year, 2002)
def test_remove_only_cve_for_update_equals_2011(self):
mock_driver = RemoveOnlyCVEForUpdateEquals2011MongoDbDriver()
cve_year = mock_driver.remove_only_cve_for_update()
self.assertEqual(cve_year, 2011)
def test_get_init_db_process_status_none(self):
mock_driver = GetEmptyInitDBStatusMongoDbDriver()
status = mock_driver.get_init_db_process_status()
self.assertEqual(status, {'status': 'None', 'timestamp': None})
def test_get_init_db_process_status_updated(self):
mock_driver = GetInitDBStatusMongoDbDriver()
status = mock_driver.get_init_db_process_status()
self.assertEqual(status, {'status': 'Updated', 'timestamp': None})
def test_update_fp(self):
mock_driver = UpdateFPMongoDbDriver()
updated = mock_driver.update_product_vulnerability_as_fp('alpine', 'musl', '1.1.15')
self.assertTrue(updated)
        mock_driver.db.image_history.update.assert_called_once_with(
            {'_id': "5915ed36ff1f081833551af5"},
            {"_id": "5915ed36ff1f081833551af5",
             "timestamp": 1494609523.342605,
             "status": "Completed",
             "image_name": "alpine",
             "static_analysis": {
                 "prog_lang_dependencies": {
                     "dependencies_details": {"java": [], "python": [], "js": [],
                                              "ruby": [], "php": [], "nodejs": []},
                     "vuln_dependencies": 0},
                 "os_packages": {
                     "vuln_os_packages": 1,
                     "os_packages_details": [
                         {"version": "1.1.15",
                          "vulnerabilities": [
                              {"CVE-2016-8859": {
                                  "cvss_integrity_impact": "Partial",
                                  "cvss_access_vector": "Network",
                                  "cweid": "CWE-190",
                                  "cvss_access_complexity": "Low",
                                  "cvss_confidentiality_impact": "Partial",
                                  "mod_date": "07-03-2017",
                                  "cvss_exploit": 10,
                                  "cvss_vector": ["AV:N", "AC:L", "Au:N", "C:P", "I:P", "A:P"],
                                  "cvss_authentication": "None required",
                                  "summary": "Multiple integer overflows in the TRE library and musl libc allow attackers to cause memory corruption via a large number of (1) states or (2) tags, which triggers an out-of-bounds write.",
                                  "cveid": "CVE-2016-8859",
                                  "cvss_impact": 6.4,
                                  "pub_date": "13-02-2017",
                                  "cvss_base": 7.5,
                                  "cvss_availability_impact": "Partial"}}],
                          "product": "musl",
                          "is_vulnerable": True,
                          "is_false_positive": True},
                         {"version": "1.25.1", "vulnerabilities": [], "product": "busybox",
                          "is_vulnerable": False, "is_false_positive": False},
                         {"version": "3.0.4", "vulnerabilities": [], "product": "alpine-baselayout",
                          "is_vulnerable": False, "is_false_positive": False},
                         {"version": "1.3", "vulnerabilities": [], "product": "alpine-keys",
                          "is_vulnerable": False, "is_false_positive": False},
                         {"version": "2.4.4", "vulnerabilities": [], "product": "libressl2.4-libcrypto",
                          "is_vulnerable": False, "is_false_positive": False},
                         {"version": "2.4.4", "vulnerabilities": [], "product": "libressl2.4-libssl",
                          "is_vulnerable": False, "is_false_positive": False},
                         {"version": "1.2.8",
                          "vulnerabilities": [
                              {"BID-95131": {
                                  "cve": ["CVE-2016-9840"],
                                  "bugtraq_id": 95131,
                                  "title": "zlib Multiple Denial of Service Vulnerabilities",
                                  "remote": "yes",
                                  "local": "no",
                                  "class": "Design Error"}}],
                          "product": "zlib",
                          "is_vulnerable": True,
                          "is_false_positive": False},
                         {"version": "2.6.8", "vulnerabilities": [], "product": "apk-tools",
                          "is_vulnerable": False, "is_false_positive": False},
                         {"version": "1.1.6", "vulnerabilities": [], "product": "scanelf",
                          "is_vulnerable": False, "is_false_positive": False},
                         {"version": "1.1.15", "vulnerabilities": [], "product": "musl-utils",
                          "is_vulnerable": False, "is_false_positive": False},
                         {"version": "0.7", "vulnerabilities": [], "product": "libc-utils",
                          "is_vulnerable": False, "is_false_positive": False}],
                     "total_os_packages": 11,
                     "ok_os_packages": 10}}})

    def test_get_docker_image_all_history(self):
        mock_driver = GetFullHistoryMongoDbDriver()
        history = mock_driver.get_docker_image_all_history()
        self.assertEqual(history, [{
            "anomalies": 0,
            "image_name": "jboss/wildfly",
            "libs_vulns": 1,
            "os_vulns": 2,
            "malware_bins": 0,
            "reportid": "58790707ed253944951ec5ba",
            "start_date": "2017-05-12 17:18:43.342605",
            "status": "Completed"
        }, {
            "anomalies": 2,
            "image_name": "jboss/wildfly",
            "libs_vulns": 0,
            "os_vulns": 0,
            "malware_bins": 0,
            "reportid": "58790707ed253944951ec5ba",
            "start_date": "2017-05-12 17:18:43.342605",
            "status": "Completed"
        }])

    def test_get_docker_image_history(self):
        mock_driver = GetDockerImageHistory()
        history = mock_driver.get_docker_image_history('jboss/wildfly')
        self.assertEqual(history, [{
            "id": "586f7631ed25396a829baaf4",
            "image_name": "jboss/wildfly",
            "timestamp": "2017-05-12 17:18:43.342605",
            "status": "Completed",
            "runtime_analysis": {
                "container_id": "69dbf26ab368",
                "start_timestamp": "2017-05-12 17:18:43.342605",
                "stop_timestamp": "2017-05-12 17:18:43.342605",
                "anomalous_activities_detected": {
                    "anomalous_counts_by_severity": {
                        "Warning": 2
                    },
                    "anomalous_activities_details": [{
                        "output": "10:49:47.492517329: Warning Unexpected setuid call by non-sudo, non-root program (user=<NA> command=ping 8.8.8.8 uid=<NA>) container=thirsty_spence (id=69dbf26ab368)",
                        "priority": "Warning",
                        "rule": "Non sudo setuid",
                        "time": "2017-01-06 10:49:47.492516"
                    }, {
                        "output": "10:49:53.181654702: Warning Unexpected setuid call by non-sudo, non-root program (user=<NA> command=ping 8.8.4.4 uid=<NA>) container=thirsty_spence (id=69dbf26ab368)",
                        "priority": "Warning",
                        "rule": "Non sudo setuid",
                        "time": "2017-01-06 10:49:53.181653"
                    }]
                }
            }
        }])


# -- Mock classes

class FullGetVulnProdMongoDbDriver(MongoDbDriver):

    def __init__(self):
        self.client = Mock(spec=pymongo.MongoClient)
        self.db = Mock()
        cursor_cve = self.db.cve.find.return_value
        cursor_cve.sort.return_value = [{'cve_id': "CVE-2002-2001"}, {'cve_id': "CVE-2002-2002"}]
        cursor_bid = self.db.bid.find.return_value
        cursor_bid.sort.return_value = [{'bugtraq_id': 1}, {'bugtraq_id': 2}]
        cursor_expl = self.db.exploit_db.find.return_value
        cursor_expl.sort.return_value = [{'exploit_db_id': 3}, {'exploit_db_id': 4}]
        self.db.cve_info.find_one.return_value = {
            "cvss_access_vector": "Network",
            "_id": "58d11025100e75000e789c9a",
            "cveid": "CVE-2002-2002",
            "cvss_base": 7.5,
            "cvss_integrity_impact": "Partial",
            "cvss_availability_impact": "Partial",
            "summary": "Summary example",
            "cvss_confidentiality_impact": "Partial",
            "cvss_vector": ["AV:N", "AC:L", "Au:N", "C:P", "I:P", "A:P"],
            "cvss_authentication": "None required",
            "cvss_access_complexity": "Low",
            "pub_date": datetime.datetime.now(),
            "cvss_impact": 6.4,
            "cvss_exploit": 10.0,
            "mod_date": datetime.datetime.now(),
            "cweid": "CWE-0"
        }
        self.db.exploit_db_info.find_one.return_value = {
            '_id': '58d11025100e75000e789c9a',
            'exploit_db_id': 1,
            'description': 'Summary example',
            'platform': 'Linux',
            'type': 'DoS',
            'port': 0
        }
        self.db.bid_info.find_one.return_value = {
            "_id": "58d11025100e75000e789c9a",
            "bugtraq_id": 15128,
            "class": "Boundary Condition Error",
            "cve": ["CVE-2005-2978"],
            "local": "no",
            "remote": "yes",
            "title": "NetPBM PNMToPNG Buffer Overflow Vulnerability"
        }
        cursor_rhba = self.db.rhba.find.return_value
        cursor_rhba.sort.return_value = []
        cursor_rhsa = self.db.rhsa.find.return_value
        cursor_rhsa.sort.return_value = []


class FullGetVulnProdAndVersionMongoDbDriver(MongoDbDriver):

    def __init__(self):
        self.client = Mock(spec=pymongo.MongoClient)
        self.db = Mock()
        cursor_cve = self.db.cve.find.return_value
        cursor_cve.sort.return_value = [{'cve_id': "CVE-2005-4442"}, {'cve_id': "CVE-2006-2754"},
                                        {'cve_id': "CVE-2007-5707"}, {'cve_id': "CVE-2011-4079"}]
        cursor_bid = self.db.bid.find.return_value
        cursor_bid.sort.return_value = [{'bugtraq_id': 83610}, {'bugtraq_id': 83843}]
        cursor_expl = self.db.exploit_db.find.return_value
        cursor_expl.sort.return_value = []
        cursor_rhba = self.db.rhba.find.return_value
        cursor_rhba.sort.return_value = []
        cursor_rhsa = self.db.rhsa.find.return_value
        cursor_rhsa.sort.return_value = []
        self.db.cve_info.find_one.return_value = {
            "cvss_access_vector": "Network",
            "_id": "58d11025100e75000e789c9a",
            "cveid": "CVE-2005-4442",
            "cvss_base": 7.5,
            "cvss_integrity_impact": "Partial",
            "cvss_availability_impact": "Partial",
            "summary": "Summary example",
            "cvss_confidentiality_impact": "Partial",
            "cvss_vector": ["AV:N", "AC:L", "Au:N", "C:P", "I:P", "A:P"],
            "cvss_authentication": "None required",
            "cvss_access_complexity": "Low",
            "pub_date": datetime.datetime.now(),
            "cvss_impact": 6.4,
            "cvss_exploit": 10.0,
            "mod_date": datetime.datetime.now(),
            "cweid": "CWE-0"
        }
        self.db.exploit_db_info.find_one.return_value = {
            '_id': '58d11025100e75000e789c9a',
            'exploit_db_id': 1,
            'description': 'Summary example',
            'platform': 'Linux',
            'type': 'DoS',
            'port': 0
        }
        self.db.bid_info.find_one.return_value = {
            "_id": "58d11025100e75000e789c9a",
            "bugtraq_id": 15128,
            "class": "Boundary Condition Error",
            "cve": ["CVE-2005-2978"],
            "local": "no",
            "remote": "yes",
            "title": "NetPBM PNMToPNG Buffer Overflow Vulnerability"
        }


class IsFPMongoDbDriver(MongoDbDriver):

    def __init__(self):
        self.client = Mock(spec=pymongo.MongoClient)
        self.db = Mock()
        cursor_image_history = self.db.image_history.find.return_value
        cursor_image_history.sort.return_value = [
            {"_id": "5915ed36ff1f081833551af5", "timestamp": 1494609523.342605, "status": "Completed",
             "image_name": "alpine", "static_analysis": {"prog_lang_dependencies": {
                "dependencies_details": {"java": [], "python": [], "js": [], "ruby": [], "php": [], "nodejs": []},
                "vuln_dependencies": 0}, "os_packages": {"vuln_os_packages": 1, "os_packages_details": [
                {"version": "1.1.15", "vulnerabilities": [{"CVE-2016-8859": {"cvss_integrity_impact": "Partial",
                    "cvss_access_vector": "Network", "cweid": "CWE-190", "cvss_access_complexity": "Low",
                    "cvss_confidentiality_impact": "Partial", "mod_date": "07-03-2017", "cvss_exploit": 10,
                    "cvss_vector": ["AV:N", "AC:L", "Au:N", "C:P", "I:P", "A:P"],
                    "cvss_authentication": "None required",
                    "summary": "Multiple integer overflows in the TRE library and musl libc allow attackers to cause memory corruption via a large number of (1) states or (2) tags, which triggers an out-of-bounds write.",
                    "cveid": "CVE-2016-8859", "cvss_impact": 6.4, "pub_date": "13-02-2017", "cvss_base": 7.5,
                    "cvss_availability_impact": "Partial"}}],
                 "product": "musl", "is_vulnerable": True, "is_false_positive": True},
                {"version": "1.25.1", "vulnerabilities": [], "product": "busybox", "is_vulnerable": False, "is_false_positive": False},
                {"version": "3.0.4", "vulnerabilities": [], "product": "alpine-baselayout", "is_vulnerable": False, "is_false_positive": False},
                {"version": "1.3", "vulnerabilities": [], "product": "alpine-keys", "is_vulnerable": False, "is_false_positive": False},
                {"version": "2.4.4", "vulnerabilities": [], "product": "libressl2.4-libcrypto", "is_vulnerable": False, "is_false_positive": False},
                {"version": "2.4.4", "vulnerabilities": [], "product": "libressl2.4-libssl", "is_vulnerable": False, "is_false_positive": False},
                {"version": "1.2.8", "vulnerabilities": [{"BID-95131": {"cve": ["CVE-2016-9840"], "bugtraq_id": 95131,
                    "title": "zlib Multiple Denial of Service Vulnerabilities", "remote": "yes", "local": "no",
                    "class": "Design Error"}}], "product": "zlib", "is_vulnerable": True, "is_false_positive": False},
                {"version": "2.6.8", "vulnerabilities": [], "product": "apk-tools", "is_vulnerable": False, "is_false_positive": False},
                {"version": "1.1.6", "vulnerabilities": [], "product": "scanelf", "is_vulnerable": False, "is_false_positive": False},
                {"version": "1.1.15", "vulnerabilities": [], "product": "musl-utils", "is_vulnerable": False, "is_false_positive": False},
                {"version": "0.7", "vulnerabilities": [], "product": "libc-utils", "is_vulnerable": False, "is_false_positive": False}],
                "total_os_packages": 11, "ok_os_packages": 10}}}]


class UpdateFPMongoDbDriver(MongoDbDriver):

    def __init__(self):
        self.client = Mock(spec=pymongo.MongoClient)
        self.db = Mock()
        cursor_image_history = self.db.image_history.find.return_value
        cursor_image_history.sort.return_value = [
            {"_id": "5915ed36ff1f081833551af5", "timestamp": 1494609523.342605, "status": "Completed",
             "image_name": "alpine", "static_analysis": {"prog_lang_dependencies": {
                "dependencies_details": {"java": [], "python": [], "js": [], "ruby": [], "php": [], "nodejs": []},
                "vuln_dependencies": 0}, "os_packages": {"vuln_os_packages": 2, "os_packages_details": [
                {"version": "1.1.15", "vulnerabilities": [{"CVE-2016-8859": {"cvss_integrity_impact": "Partial",
                    "cvss_access_vector": "Network", "cweid": "CWE-190", "cvss_access_complexity": "Low",
                    "cvss_confidentiality_impact": "Partial", "mod_date": "07-03-2017", "cvss_exploit": 10,
                    "cvss_vector": ["AV:N", "AC:L", "Au:N", "C:P", "I:P", "A:P"],
                    "cvss_authentication": "None required",
                    "summary": "Multiple integer overflows in the TRE library and musl libc allow attackers to cause memory corruption via a large number of (1) states or (2) tags, which triggers an out-of-bounds write.",
                    "cveid": "CVE-2016-8859", "cvss_impact": 6.4, "pub_date": "13-02-2017", "cvss_base": 7.5,
                    "cvss_availability_impact": "Partial"}}],
                 "product": "musl", "is_vulnerable": True, "is_false_positive": False},
                {"version": "1.25.1", "vulnerabilities": [], "product": "busybox", "is_vulnerable": False, "is_false_positive": False},
                {"version": "3.0.4", "vulnerabilities": [], "product": "alpine-baselayout", "is_vulnerable": False, "is_false_positive": False},
                {"version": "1.3", "vulnerabilities": [], "product": "alpine-keys", "is_vulnerable": False, "is_false_positive": False},
                {"version": "2.4.4", "vulnerabilities": [], "product": "libressl2.4-libcrypto", "is_vulnerable": False, "is_false_positive": False},
                {"version": "2.4.4", "vulnerabilities": [], "product": "libressl2.4-libssl", "is_vulnerable": False, "is_false_positive": False},
                {"version": "1.2.8", "vulnerabilities": [{"BID-95131": {"cve": ["CVE-2016-9840"], "bugtraq_id": 95131,
                    "title": "zlib Multiple Denial of Service Vulnerabilities", "remote": "yes", "local": "no",
                    "class": "Design Error"}}], "product": "zlib", "is_vulnerable": True, "is_false_positive": False},
                {"version": "2.6.8", "vulnerabilities": [], "product": "apk-tools", "is_vulnerable": False, "is_false_positive": False},
                {"version": "1.1.6", "vulnerabilities": [], "product": "scanelf", "is_vulnerable": False, "is_false_positive": False},
                {"version": "1.1.15", "vulnerabilities": [], "product": "musl-utils", "is_vulnerable": False, "is_false_positive": False},
                {"version": "0.7", "vulnerabilities": [], "product": "libc-utils", "is_vulnerable": False, "is_false_positive": False}],
                "total_os_packages": 11, "ok_os_packages": 9}}}]
        self.db.image_history.update.return_value = True


class BulkInfoMongoDbDriver(MongoDbDriver):

    def __init__(self):
        self.client = Mock(spec=pymongo.MongoClient)
        self.db = Mock()
        self.db.cve.create_index.return_value = True
        self.db.bid.create_index.return_value = True
        self.db.exploit_db.create_index.return_value = True
        self.db.falco_events.count.return_value = 0
        self.db.falco_events.create_index.return_value = True
        self.db.collection_names.return_value = []


class MaxBidZeroMongoDbDriver(MongoDbDriver):

    def __init__(self):
        self.client = Mock(spec=pymongo.MongoClient)
        self.db = Mock()
        self.db.collection_names.return_value = []


class MaxBidNotZeroMongoDbDriver(MongoDbDriver):

    def __init__(self):
        self.client = Mock(spec=pymongo.MongoClient)
        self.db = Mock()
        self.db.collection_names.return_value = ['bid']
        self.db.bid.count.return_value = 10
        cursor_bid = self.db.bid.find.return_value
        sort_bid = cursor_bid.sort.return_value
        sort_bid.limit.return_value = [{'bugtraq_id': 83843}]


class RemoveOnlyCVEForUpdateEmptyCollectionNamesMongoDbDriver(MongoDbDriver):

    def __init__(self):
        self.client = Mock(spec=pymongo.MongoClient)
        self.db = Mock()
        self.db.collection_names.return_value = []


class RemoveOnlyCVEForUpdateMinorThan2002MongoDbDriver(MongoDbDriver):

    def __init__(self):
        self.client = Mock(spec=pymongo.MongoClient)
        self.db = Mock()
        self.db.collection_names.return_value = ['cve']
        self.db.cve.count.return_value = 10
        cursor_cve = self.db.cve.find.return_value
        sort_cve = cursor_cve.sort.return_value
        sort_cve.limit.return_value = [{'year': 2002}]
        self.db.cve.drop.return_value = True
        self.db.cve_info.drop.return_value = True


class RemoveOnlyCVEForUpdateEquals2011MongoDbDriver(MongoDbDriver):

    def __init__(self):
        self.client = Mock(spec=pymongo.MongoClient)
        self.db = Mock()
        self.db.collection_names.return_value = ['cve']
        self.db.cve.count.return_value = 10
        cursor_cve = self.db.cve.find.return_value
        sort_cve = cursor_cve.sort.return_value
        sort_cve.limit.return_value = [{'year': 2012}]
        self.db.cve.remove.return_value = True
        self.db.cve_info.remove.return_value = True


class GetEmptyInitDBStatusMongoDbDriver(MongoDbDriver):

    def __init__(self):
        self.client = Mock(spec=pymongo.MongoClient)
        self.db = Mock()
        cursor = self.db.init_db_process_status.find.return_value
        cursor.sort.return_value = []


class GetInitDBStatusMongoDbDriver(MongoDbDriver):

    def __init__(self):
        self.client = Mock(spec=pymongo.MongoClient)
        self.db = Mock()
        cursor = self.db.init_db_process_status.find.return_value
        cursor.sort.return_value = [{'status': 'Updated', 'timestamp': None}]


class GetFullHistoryMongoDbDriver(MongoDbDriver):

    def __init__(self):
        self.client = Mock(spec=pymongo.MongoClient)
        self.db = Mock()
        cursor = self.db.image_history.find.return_value
        cursor.sort.return_value = [
            {'_id': '58790707ed253944951ec5ba',
             'image_name': 'jboss/wildfly',
             'status': 'Completed',
             'timestamp': 1494609523.342605,
             'static_analysis': {'os_packages': {'vuln_os_packages': 2},
                                 'malware_binaries': [],
                                 'prog_lang_dependencies': {'vuln_dependencies': 1}}},
            {'_id': '58790707ed253944951ec5ba',
             'image_name': 'jboss/wildfly',
             'status': 'Completed',
             'timestamp': 1494609523.342605,
             'runtime_analysis': {"anomalous_activities_detected":
                                  {"anomalous_counts_by_severity": {"Warning": 2}}}}]


class GetDockerImageHistory(MongoDbDriver):

    def __init__(self):
        self.client = Mock(spec=pymongo.MongoClient)
        self.db = Mock()
        self.db.image_history.count.return_value = 1
        cursor = self.db.image_history.find.return_value
        cursor.sort.return_value = [{
            "_id": "586f7631ed25396a829baaf4",
            "image_name": "jboss/wildfly",
            "timestamp": 1494609523.342605,
            "status": "Completed",
            "runtime_analysis": {
                "container_id": "69dbf26ab368",
                "start_timestamp": 1494609523.342605,
                "stop_timestamp": 1494609523.342605,
                "anomalous_activities_detected": {
                    "anomalous_counts_by_severity": {
                        "Warning": 2
                    },
                    "anomalous_activities_details": [{
                        "output": "10:49:47.492517329: Warning Unexpected setuid call by non-sudo, non-root program (user=<NA> command=ping 8.8.8.8 uid=<NA>) container=thirsty_spence (id=69dbf26ab368)",
                        "priority": "Warning",
                        "rule": "Non sudo setuid",
                        "time": "2017-01-06 10:49:47.492516"
                    }, {
                        "output": "10:49:53.181654702: Warning Unexpected setuid call by non-sudo, non-root program (user=<NA> command=ping 8.8.4.4 uid=<NA>) container=thirsty_spence (id=69dbf26ab368)",
                        "priority": "Warning",
                        "rule": "Non sudo setuid",
                        "time": "2017-01-06 10:49:53.181653"
                    }]
                }
            }
        }]
        self.db.image_history.find_one.return_value = {
            "_id": "586f7631ed25396a829baaf4",
            "image_name": "jboss/wildfly",
            "timestamp": 1494609523.342605,
            "status": "Completed",
            "runtime_analysis": {
                "container_id": "69dbf26ab368",
                "start_timestamp": 1494609523.342605,
                "stop_timestamp": 1494609523.342605,
                "anomalous_activities_detected": {
                    "anomalous_counts_by_severity": {
                        "Warning": 2
                    },
                    "anomalous_activities_details": [{
                        "output": "10:49:47.492517329: Warning Unexpected setuid call by non-sudo, non-root program (user=<NA> command=ping 8.8.8.8 uid=<NA>) container=thirsty_spence (id=69dbf26ab368)",
                        "priority": "Warning",
                        "rule": "Non sudo setuid",
                        "time": "2017-01-06 10:49:47.492516"
                    }, {
                        "output": "10:49:53.181654702: Warning Unexpected setuid call by non-sudo, non-root program (user=<NA> command=ping 8.8.4.4 uid=<NA>) container=thirsty_spence (id=69dbf26ab368)",
                        "priority": "Warning",
                        "rule": "Non sudo setuid",
                        "time": "2017-01-06 10:49:53.181653"
                    }]
                }
            }
        }
        self.db.falco_events.find.return_value = []


if __name__ == '__main__':
    unittest.main()


# -- settings/incar/write_incar_yaml.py

import yaml

sett_dict = {}
with open('INCAR.relaxation', 'r') as fr:
    incar = fr.readlines()
for row in incar:
    if not row.strip():
        continue
    if row.strip()[0] == '#':
        continue
    tag, value = row.strip().split('=')
    tag = tag.strip()
    value = value.strip()
    sett_dict[tag] = value
with open('incar_relaxation.yml', 'w') as fw:
    yaml.dump(sett_dict, fw, default_flow_style=False)

sett_dict = {}
with open('INCAR.static', 'r') as fr:
    incar = fr.readlines()
for row in incar:
    if not row.strip():
        continue
    if row.strip()[0] == '#':
        continue
    tag, value = row.strip().split('=')
    tag = tag.strip()
    value = value.strip()
    sett_dict[tag] = value
with open('incar_static.yml', 'w') as fw:
    yaml.dump(sett_dict, fw, default_flow_style=False)