# -*- coding: utf-8 -*-
# Source: mainuliitkgp/AR-BERT, src/alsc/ar-bert-s/models/catt_gcn.py (MIT license)
#SDGCN
import tensorflow as tf
import numpy as np
from models.nn_layer import dynamic_rnn, softmax_layer, bi_dynamic_rnn, reduce_mean_with_len,WXA_Relu,WXbA_Relu
from models.att_layer import dot_produce_attention_layer, bilinear_attention_layer, mlp_attention_layer, Mlp_attention_layer
class CAtt_GCN_L1(object):
def __init__(self, sequence_length, target_sequence_length,targets_num_max, num_classes, word_embedding, l2_reg_lambda=0.0,
num_hidden=100):
#tf.set_random_seed(-1)
# PLACEHOLDERS
rand_base = 0.01
self.input_x = tf.placeholder(tf.int32, [None, sequence_length], name="input_x") # X - the data
self.input_target = tf.placeholder(tf.int32, [None, target_sequence_length], name="input_target") # the target
self.input_targets_all = tf.placeholder(tf.int32, [None, targets_num_max, target_sequence_length], name="input_targets_all") # all the targets
self.sen_len = tf.placeholder(tf.int32, None, name='sen_len') # lengths of the sentences
self.target_len = tf.placeholder(tf.int32, None, name='target_len') # lengths of the targets
with tf.name_scope('targets_all_len'):
self.targets_all_len_a = tf.placeholder(tf.int32, [None,targets_num_max],name="targets_all_len")
batch_size = tf.shape(self.input_x)[0]
self.targets_all_len = []
for i in range(targets_num_max):
targets_i_len = tf.slice(self.targets_all_len_a, [0, i], [batch_size, 1])
self.targets_all_len.append(tf.squeeze(targets_i_len)) #lens of every target
self.targets_num = tf.placeholder(tf.int32, None, name='targets_num') # the number of targets
self.relate_cross = tf.placeholder(tf.float32, [None, targets_num_max, targets_num_max], name='relate_cross') # relations between different targets
self.relate_self = tf.placeholder(tf.float32, [None, targets_num_max, targets_num_max], name='relate_self') # self-relations of targets
self.target_which = tf.placeholder(tf.float32, [None, targets_num_max], name='which_position')
self.target_position = tf.placeholder(tf.float32, [None, sequence_length], name='target_position')
with tf.name_scope('targets_all_position'):
self.targets_all_position_a = tf.placeholder(tf.float32, [None,targets_num_max,sequence_length],name="targets_all_position")
self.targets_all_position = []
for i in range(targets_num_max):
targets_i_position = self.targets_all_position_a[:, i, :]
self.targets_all_position.append(tf.squeeze(targets_i_position))
self.input_y = tf.placeholder(tf.float32, [None, num_classes], name="input_y") # Y - the labels
self.dropout_keep_prob = tf.placeholder(tf.float32, name="dropout_keep_prob") # Dropout
l2_loss = tf.constant(0.0) # Keeping track of l2 regularization loss
# 1. EMBEDDING LAYER ################################################################
with tf.name_scope("embedding"):
self.word_embedding = tf.constant(word_embedding, name='word_embedding')
# Embedding for the context
with tf.name_scope("embedded_sen"):
self.embedded_sen = tf.nn.embedding_lookup(self.word_embedding, self.input_x)
# self.embedded_expanded = tf.expand_dims(self.embedded, -1)
self.embedded_sen = tf.cast(self.embedded_sen, tf.float32) #(?,78,300)
self.embedded_sen = tf.nn.dropout(self.embedded_sen, keep_prob=self.dropout_keep_prob)
embedding_size = word_embedding.shape[1]
print('embedding_size {}'.format(embedding_size))
num_hidden = embedding_size
# Embedding for the target
with tf.name_scope("embedding_target"):
self.embedded_target = tf.nn.embedding_lookup(self.word_embedding, self.input_target)
self.embedded_target = tf.cast(self.embedded_target, tf.float32) #(?,21,300)
self.embedded_target = tf.nn.dropout(self.embedded_target, keep_prob=self.dropout_keep_prob)
# Embedding for all targets
with tf.name_scope("embedding_targets"):
self.embedded_targets_all = list(range(targets_num_max))
for i in range(targets_num_max):
#get a target
self.input_targets_i = self.input_targets_all[:,i,:] # (?,13,23) --> (?,23); i indexes the i-th target
self.embedded_target_i = tf.nn.embedding_lookup(self.word_embedding, self.input_targets_i)
self.embedded_target_i = tf.cast(self.embedded_target_i, tf.float32)
self.embedded_target_i = tf.nn.dropout(self.embedded_target_i, keep_prob=self.dropout_keep_prob)
self.embedded_targets_all[i] = self.embedded_target_i #13*(?,21,300)
#2. LSTM LAYER ######################################################################
# Bi-LSTM for the context
with tf.name_scope("Bi-LSTM_sentence"):
cell = tf.nn.rnn_cell.LSTMCell
self.LSTM_Hiddens_sen = bi_dynamic_rnn(cell, self.embedded_sen, num_hidden, self.sen_len,
sequence_length, 'bi-lstm-sentence' ,'all',
dropout = True, dropout_prob= self.dropout_keep_prob) #(?,78,600)
pool_sen = reduce_mean_with_len(self.LSTM_Hiddens_sen, self.sen_len)
# Bi-LSTM for the targets
with tf.variable_scope("Bi-LSTM_targets") as scope:
self.LSTM_targets_all = list(range(targets_num_max))
pool_targets_all = list(range(targets_num_max))
for i in range(targets_num_max):
cell = tf.nn.rnn_cell.LSTMCell
self.LSTM_targets_all[i] = bi_dynamic_rnn(cell, self.embedded_targets_all[i], num_hidden, self.targets_all_len[i],
target_sequence_length, 'bi-lstm-targets', 'all',
dropout=True, dropout_prob=self.dropout_keep_prob) # (?,21,600)
pool_targets_all[i] = reduce_mean_with_len(self.LSTM_targets_all[i], self.targets_all_len[i])
scope.reuse_variables()
# 3. Attention LAYER ######################################################################
# all targets to sentence attention
with tf.variable_scope("Attention-targets_all2sentence") as scope:
self.outputs_ss = list(range(targets_num_max)) #all the target attention for the sentence
self.outputs_ts = list(range(targets_num_max))
for i in range(targets_num_max):
#target2sentence attention
att_s_i = bilinear_attention_layer(self.LSTM_targets_all[i], pool_sen, self.targets_all_len[i], 2 * num_hidden ,l2_reg_lambda,
random_base = rand_base, layer_id = 'tar')
self.outputs_ss[i] = tf.squeeze(tf.matmul(att_s_i, self.LSTM_targets_all[i]),axis=1) #13*(?,600)
#position
target_position_i = tf.expand_dims(self.targets_all_position[i], 2) # (?,78,1)
LSTM_Hiddens_sen_p_i = tf.multiply(self.LSTM_Hiddens_sen, target_position_i)
att_s_i = bilinear_attention_layer(LSTM_Hiddens_sen_p_i, self.outputs_ss[i], self.sen_len, 2 * num_hidden ,l2_reg_lambda,
random_base = rand_base, layer_id = 'sen')
self.outputs_ts[i] = tf.squeeze(tf.matmul(att_s_i, self.LSTM_Hiddens_sen), axis=1)
scope.reuse_variables()
with tf.name_scope("targets_gather"):
self.targets_concat = tf.concat([tf.expand_dims(t, axis=2) for t in self.outputs_ts], axis=2) # (?,600,13)
# 4. GCN LAYER ######################################################################
with tf.name_scope('GCN_layer1'):
W_cross = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_cross')
b_cross = tf.Variable(tf.random.uniform([2 * num_hidden],-rand_base,rand_base),name = 'b_cross')
W_self = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_self')
b_self = tf.Variable(tf.random.uniform([2 * num_hidden], -rand_base, rand_base), name='b_self')
GCN1_cross = WXbA_Relu(self.targets_concat,self.relate_cross,W_cross,b_cross)
GCN1_self = WXbA_Relu(self.targets_concat,self.relate_self,W_self,b_self)
GCN1_out = GCN1_cross+GCN1_self #(?,600,13)
# multiply by self.target_which to pick out the representation of the current target
target_which = tf.expand_dims(self.target_which, 1) # (?,1,13)
self.GCN2_out = tf.multiply(GCN1_out, target_which) # (?,600,13) * (?,1,13) = (?,600,13)
self.targets_representation = tf.reduce_sum(self.GCN2_out, 2) # (?,600)
with tf.name_scope("output"):
W = tf.Variable(tf.random_normal([2 * num_hidden, num_classes]))
b = tf.Variable(tf.random_normal([num_classes]))
self.scores = tf.nn.xw_plus_b(self.targets_representation, W,b, name="scores")
l2_loss += tf.nn.l2_loss(W)
l2_loss += tf.nn.l2_loss(b)
self.predictions = tf.argmax(self.scores, 1, name="predictions")
self.true_y = tf.argmax(self.input_y, 1, name="true_y")
self.softmax = tf.nn.softmax(self.scores, name="softmax")
with tf.name_scope("loss"):
self.losses = tf.nn.softmax_cross_entropy_with_logits(logits=self.scores,labels=self.input_y)
self.loss = tf.reduce_mean(self.losses, name="loss") + l2_reg_lambda * l2_loss
with tf.name_scope("accuracy"):
self.correct_pred = tf.equal(self.predictions,self.true_y)
self.accuracy = tf.reduce_mean(tf.cast(self.correct_pred, "float"),name="accuracy")
print("Loaded CAtt_GCN_L1!")
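The GCN layer above adds a cross-target propagation and a self-loop propagation, then masks everything but the current target with `self.target_which`. A minimal numpy sketch of that step; the exact formula inside `models.nn_layer.WXbA_Relu` is not shown in this file, so `wxba_relu` below is a hypothetical stand-in assumed from the name and call sites (ReLU of (W·X + b)·A):

```python
import numpy as np

def wxba_relu(X, A, W, b):
    # Assumed sketch of WXbA_Relu: ReLU((W @ X + b) @ A).
    # X: (d, T) stacked target vectors, A: (T, T) relation matrix,
    # W: (d, d) weights, b: (d,) bias.
    return np.maximum((W @ X + b[:, None]) @ A, 0.0)

d, T = 6, 4                            # hidden size, number of targets
rng = np.random.default_rng(0)
X = rng.standard_normal((d, T))        # stands in for targets_concat (one example)
A_cross = np.ones((T, T)) - np.eye(T)  # relations to the other targets
A_self = np.eye(T)                     # self-loops
W_c, W_s = rng.standard_normal((2, d, d)) * 0.01
b_c = np.zeros(d)
b_s = np.zeros(d)

# one GCN layer: cross-target term plus self-loop term
gcn1 = wxba_relu(X, A_cross, W_c, b_c) + wxba_relu(X, A_self, W_s, b_s)

# one-hot mask over targets selects the column of the target of interest
which = np.zeros(T)
which[2] = 1.0
target_repr = (gcn1 * which).sum(axis=1)  # (d,)
```

Because `which` is one-hot and the ReLU output is element-wise masked, `target_repr` is simply the selected column of `gcn1`, mirroring the `tf.multiply` + `tf.reduce_sum` pair in the graph.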
class CAtt_GCN_L2(object):
def __init__(self, sequence_length, target_sequence_length,targets_num_max, num_classes, word_embedding, l2_reg_lambda=0.0,
num_hidden=100):
#tf.set_random_seed(-1)
# PLACEHOLDERS
rand_base = 0.01
self.input_x = tf.placeholder(tf.int32, [None, sequence_length], name="input_x") # X - the data
self.input_target = tf.placeholder(tf.int32, [None, target_sequence_length], name="input_target") # the target
self.input_targets_all = tf.placeholder(tf.int32, [None, targets_num_max, target_sequence_length], name="input_targets_all") # all the targets
self.sen_len = tf.placeholder(tf.int32, None, name='sen_len') # lengths of the sentences
self.target_len = tf.placeholder(tf.int32, None, name='target_len') # lengths of the targets
with tf.name_scope('targets_all_len'):
self.targets_all_len_a = tf.placeholder(tf.int32, [None,targets_num_max],name="targets_all_len")
batch_size = tf.shape(self.input_x)[0]
self.targets_all_len = []
for i in range(targets_num_max):
targets_i_len = tf.slice(self.targets_all_len_a, [0, i], [batch_size, 1])
self.targets_all_len.append(tf.squeeze(targets_i_len)) #lens of every target
self.targets_num = tf.placeholder(tf.int32, None, name='targets_num') # the number of targets
self.relate_cross = tf.placeholder(tf.float32, [None, targets_num_max, targets_num_max], name='relate_cross') # relations between different targets
self.relate_self = tf.placeholder(tf.float32, [None, targets_num_max, targets_num_max], name='relate_self') # self-relations of targets
self.target_which = tf.placeholder(tf.float32, [None, targets_num_max], name='which_position')
self.target_position = tf.placeholder(tf.float32, [None, sequence_length], name='target_position')
with tf.name_scope('targets_all_position'):
self.targets_all_position_a = tf.placeholder(tf.float32, [None,targets_num_max,sequence_length],name="targets_all_position")
self.targets_all_position = []
for i in range(targets_num_max):
targets_i_position = self.targets_all_position_a[:, i, :]
self.targets_all_position.append(tf.squeeze(targets_i_position))
self.input_y = tf.placeholder(tf.float32, [None, num_classes], name="input_y") # Y - the labels
self.dropout_keep_prob = tf.placeholder(tf.float32, name="dropout_keep_prob") # Dropout
l2_loss = tf.constant(0.0) # Keeping track of l2 regularization loss
# 1. EMBEDDING LAYER ################################################################
with tf.name_scope("embedding"):
self.word_embedding = tf.constant(word_embedding, name='word_embedding')
# Embedding for the context
with tf.name_scope("embedded_sen"):
self.embedded_sen = tf.nn.embedding_lookup(self.word_embedding, self.input_x)
# self.embedded_expanded = tf.expand_dims(self.embedded, -1)
self.embedded_sen = tf.cast(self.embedded_sen, tf.float32) #(?,78,300)
self.embedded_sen = tf.nn.dropout(self.embedded_sen, keep_prob=self.dropout_keep_prob)
embedding_size = word_embedding.shape[1]
print('embedding_size {}'.format(embedding_size))
num_hidden = embedding_size
# Embedding for the target
with tf.name_scope("embedding_target"):
self.embedded_target = tf.nn.embedding_lookup(self.word_embedding, self.input_target)
self.embedded_target = tf.cast(self.embedded_target, tf.float32) #(?,21,300)
self.embedded_target = tf.nn.dropout(self.embedded_target, keep_prob=self.dropout_keep_prob)
# Embedding for all targets
with tf.name_scope("embedding_targets"):
self.embedded_targets_all = list(range(targets_num_max))
for i in range(targets_num_max):
#get a target
self.input_targets_i = self.input_targets_all[:,i,:] # (?,13,23) --> (?,23); i indexes the i-th target
self.embedded_target_i = tf.nn.embedding_lookup(self.word_embedding, self.input_targets_i)
self.embedded_target_i = tf.cast(self.embedded_target_i, tf.float32)
self.embedded_target_i = tf.nn.dropout(self.embedded_target_i, keep_prob=self.dropout_keep_prob)
self.embedded_targets_all[i] = self.embedded_target_i #13*(?,21,300)
#2. LSTM LAYER ######################################################################
# Bi-LSTM for the context
with tf.name_scope("Bi-LSTM_sentence"):
cell = tf.nn.rnn_cell.LSTMCell
self.LSTM_Hiddens_sen = bi_dynamic_rnn(cell, self.embedded_sen, num_hidden, self.sen_len,
sequence_length, 'bi-lstm-sentence' ,'all',
dropout = True, dropout_prob= self.dropout_keep_prob) #(?,78,600)
pool_sen = reduce_mean_with_len(self.LSTM_Hiddens_sen, self.sen_len)
# Bi-LSTM for the targets
with tf.variable_scope("Bi-LSTM_targets") as scope:
self.LSTM_targets_all = list(range(targets_num_max))
pool_targets_all = list(range(targets_num_max))
for i in range(targets_num_max):
cell = tf.nn.rnn_cell.LSTMCell
self.LSTM_targets_all[i] = bi_dynamic_rnn(cell, self.embedded_targets_all[i], num_hidden, self.targets_all_len[i],
target_sequence_length, 'bi-lstm-targets', 'all',
dropout=True, dropout_prob=self.dropout_keep_prob) # (?,21,600)
pool_targets_all[i] = reduce_mean_with_len(self.LSTM_targets_all[i], self.targets_all_len[i])
scope.reuse_variables()
# 3. Attention LAYER ######################################################################
# all targets to sentence attention
with tf.variable_scope("Attention-targets_all2sentence") as scope:
self.outputs_ss = list(range(targets_num_max)) #all the target attention for the sentence
self.outputs_ts = list(range(targets_num_max))
for i in range(targets_num_max):
#target2sentence attention
att_s_i = bilinear_attention_layer(self.LSTM_targets_all[i], pool_sen, self.targets_all_len[i], 2 * num_hidden ,l2_reg_lambda,
random_base = rand_base, layer_id = 'tar')
self.outputs_ss[i] = tf.squeeze(tf.matmul(att_s_i, self.LSTM_targets_all[i]),axis=1) #13*(?,600)
#position
target_position_i = tf.expand_dims(self.targets_all_position[i], 2) # (?,78,1)
LSTM_Hiddens_sen_p_i = tf.multiply(self.LSTM_Hiddens_sen, target_position_i)
att_s_i = bilinear_attention_layer(LSTM_Hiddens_sen_p_i, self.outputs_ss[i], self.sen_len, 2 * num_hidden ,l2_reg_lambda,
random_base = rand_base, layer_id = 'sen')
self.outputs_ts[i] = tf.squeeze(tf.matmul(att_s_i, self.LSTM_Hiddens_sen), axis=1)
scope.reuse_variables()
with tf.name_scope("targets_gather"):
self.targets_concat = tf.concat([tf.expand_dims(t, axis=2) for t in self.outputs_ts], axis=2) # (?,600,13)
# 4. GCN LAYER ######################################################################
with tf.name_scope('GCN_layer1'):
W_cross = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_cross')
b_cross = tf.Variable(tf.random.uniform([2 * num_hidden],-rand_base,rand_base),name = 'b_cross')
W_self = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_self')
b_self = tf.Variable(tf.random.uniform([2 * num_hidden], -rand_base, rand_base), name='b_self')
GCN1_cross = WXbA_Relu(self.targets_concat,self.relate_cross,W_cross,b_cross)
GCN1_self = WXbA_Relu(self.targets_concat,self.relate_self,W_self,b_self)
GCN1_out = GCN1_cross+GCN1_self #(?,600,13)
with tf.name_scope('GCN_layer2'):
W_cross = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_cross')
b_cross = tf.Variable(tf.random.uniform([2 * num_hidden],-rand_base,rand_base),name = 'b_cross')
W_self = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_self')
b_self = tf.Variable(tf.random.uniform([2 * num_hidden], -rand_base, rand_base), name='b_self')
GCN2_cross = WXbA_Relu(GCN1_out,self.relate_cross,W_cross,b_cross)
GCN2_self = WXbA_Relu(GCN1_out,self.relate_self,W_self,b_self)
GCN2_out = GCN2_cross+GCN2_self #(?,600,13)
# multiply by self.target_which to pick out the representation of the current target
target_which = tf.expand_dims(self.target_which, 1) # (?,1,13)
self.GCN2_out = tf.multiply(GCN2_out, target_which) # (?,600,13) * (?,1,13) = (?,600,13)
self.targets_representation = tf.reduce_sum(self.GCN2_out, 2) # (?,600)
with tf.name_scope("output"):
W = tf.Variable(tf.random_normal([2 * num_hidden, num_classes]))
b = tf.Variable(tf.random_normal([num_classes]))
self.scores = tf.nn.xw_plus_b(self.targets_representation, W,b, name="scores")
l2_loss += tf.nn.l2_loss(W)
l2_loss += tf.nn.l2_loss(b)
self.predictions = tf.argmax(self.scores, 1, name="predictions")
self.true_y = tf.argmax(self.input_y, 1, name="true_y")
self.softmax = tf.nn.softmax(self.scores, name="softmax")
with tf.name_scope("loss"):
self.losses = tf.nn.softmax_cross_entropy_with_logits(logits=self.scores,labels=self.input_y)
self.loss = tf.reduce_mean(self.losses, name="loss") + l2_reg_lambda * l2_loss
with tf.name_scope("accuracy"):
self.correct_pred = tf.equal(self.predictions,self.true_y)
self.accuracy = tf.reduce_mean(tf.cast(self.correct_pred, "float"),name="accuracy")
print("Loaded CAtt_GCN_L2!")
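Each target above attends twice: target-to-sentence and then position-weighted sentence attention, both via `bilinear_attention_layer`. Its internals live in `models.att_layer` and are not shown here, so the numpy sketch below assumes the usual bilinear form, score(h_t) = h_t^T W q followed by a softmax over timesteps; `bilinear_attention` is a hypothetical stand-in:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def bilinear_attention(H, q, W):
    # H: (T, d) hidden states, q: (d,) query vector, W: (d, d) bilinear weights.
    # Returns the attention weights and the attention-weighted context vector.
    scores = H @ W @ q        # (T,) bilinear scores h_t^T W q
    alpha = softmax(scores)   # (T,) normalized attention weights
    return alpha, alpha @ H   # context vector (d,)

rng = np.random.default_rng(1)
T, d = 5, 8
H = rng.standard_normal((T, d))        # stands in for Bi-LSTM hidden states
q = rng.standard_normal(d)             # stands in for the pooled query (e.g. pool_sen)
W = rng.standard_normal((d, d)) * 0.01
alpha, ctx = bilinear_attention(H, q, W)
```

In the graph, the position mask (`targets_all_position`) is applied to the hidden states before the second attention call, which down-weights timesteps far from the target.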
class CAtt_GCN_L3(object):
def __init__(self, sequence_length, target_sequence_length,targets_num_max, num_classes, word_embedding, l2_reg_lambda=0.0,
num_hidden=100):
#tf.set_random_seed(-1)
# PLACEHOLDERS
rand_base = 0.01
self.input_x = tf.placeholder(tf.int32, [None, sequence_length], name="input_x") # X - the data
self.input_target = tf.placeholder(tf.int32, [None, target_sequence_length], name="input_target") # the target
self.input_targets_all = tf.placeholder(tf.int32, [None, targets_num_max, target_sequence_length], name="input_targets_all") # all the targets
self.sen_len = tf.placeholder(tf.int32, None, name='sen_len') # lengths of the sentences
self.target_len = tf.placeholder(tf.int32, None, name='target_len') # lengths of the targets
with tf.name_scope('targets_all_len'):
self.targets_all_len_a = tf.placeholder(tf.int32, [None,targets_num_max],name="targets_all_len")
batch_size = tf.shape(self.input_x)[0]
self.targets_all_len = []
for i in range(targets_num_max):
targets_i_len = tf.slice(self.targets_all_len_a, [0, i], [batch_size, 1])
self.targets_all_len.append(tf.squeeze(targets_i_len)) #lens of every target
self.targets_num = tf.placeholder(tf.int32, None, name='targets_num') # the number of targets
self.relate_cross = tf.placeholder(tf.float32, [None, targets_num_max, targets_num_max], name='relate_cross') # relations between different targets
self.relate_self = tf.placeholder(tf.float32, [None, targets_num_max, targets_num_max], name='relate_self') # self-relations of targets
self.target_which = tf.placeholder(tf.float32, [None, targets_num_max], name='which_position')
self.target_position = tf.placeholder(tf.float32, [None, sequence_length], name='target_position')
with tf.name_scope('targets_all_position'):
self.targets_all_position_a = tf.placeholder(tf.float32, [None,targets_num_max,sequence_length],name="targets_all_position")
self.targets_all_position = []
for i in range(targets_num_max):
targets_i_position = self.targets_all_position_a[:, i, :]
self.targets_all_position.append(tf.squeeze(targets_i_position))
self.input_y = tf.placeholder(tf.float32, [None, num_classes], name="input_y") # Y - the labels
self.dropout_keep_prob = tf.placeholder(tf.float32, name="dropout_keep_prob") # Dropout
l2_loss = tf.constant(0.0) # Keeping track of l2 regularization loss
# 1. EMBEDDING LAYER ################################################################
with tf.name_scope("embedding"):
self.word_embedding = tf.constant(word_embedding, name='word_embedding')
# Embedding for the context
with tf.name_scope("embedded_sen"):
self.embedded_sen = tf.nn.embedding_lookup(self.word_embedding, self.input_x)
# self.embedded_expanded = tf.expand_dims(self.embedded, -1)
self.embedded_sen = tf.cast(self.embedded_sen, tf.float32) #(?,78,300)
self.embedded_sen = tf.nn.dropout(self.embedded_sen, keep_prob=self.dropout_keep_prob)
embedding_size = word_embedding.shape[1]
print('embedding_size {}'.format(embedding_size))
num_hidden = embedding_size
# Embedding for the target
with tf.name_scope("embedding_target"):
self.embedded_target = tf.nn.embedding_lookup(self.word_embedding, self.input_target)
self.embedded_target = tf.cast(self.embedded_target, tf.float32) #(?,21,300)
self.embedded_target = tf.nn.dropout(self.embedded_target, keep_prob=self.dropout_keep_prob)
# Embedding for all targets
with tf.name_scope("embedding_targets"):
self.embedded_targets_all = list(range(targets_num_max))
for i in range(targets_num_max):
#get a target
self.input_targets_i = self.input_targets_all[:,i,:] # (?,13,23) --> (?,23); i indexes the i-th target
self.embedded_target_i = tf.nn.embedding_lookup(self.word_embedding, self.input_targets_i)
self.embedded_target_i = tf.cast(self.embedded_target_i, tf.float32)
self.embedded_target_i = tf.nn.dropout(self.embedded_target_i, keep_prob=self.dropout_keep_prob)
self.embedded_targets_all[i] = self.embedded_target_i #13*(?,21,300)
#2. LSTM LAYER ######################################################################
# Bi-LSTM for the context
with tf.name_scope("Bi-LSTM_sentence"):
cell = tf.nn.rnn_cell.LSTMCell
self.LSTM_Hiddens_sen = bi_dynamic_rnn(cell, self.embedded_sen, num_hidden, self.sen_len,
sequence_length, 'bi-lstm-sentence' ,'all',
dropout = True, dropout_prob= self.dropout_keep_prob) #(?,78,600)
pool_sen = reduce_mean_with_len(self.LSTM_Hiddens_sen, self.sen_len)
# Bi-LSTM for the targets
with tf.variable_scope("Bi-LSTM_targets") as scope:
self.LSTM_targets_all = list(range(targets_num_max))
pool_targets_all = list(range(targets_num_max))
for i in range(targets_num_max):
cell = tf.nn.rnn_cell.LSTMCell
self.LSTM_targets_all[i] = bi_dynamic_rnn(cell, self.embedded_targets_all[i], num_hidden, self.targets_all_len[i],
target_sequence_length, 'bi-lstm-targets', 'all',
dropout=True, dropout_prob=self.dropout_keep_prob) # (?,21,600)
pool_targets_all[i] = reduce_mean_with_len(self.LSTM_targets_all[i], self.targets_all_len[i])
scope.reuse_variables()
# 3. Attention LAYER ######################################################################
# all targets to sentence attention
with tf.variable_scope("Attention-targets_all2sentence") as scope:
self.outputs_ss = list(range(targets_num_max)) #all the target attention for the sentence
self.outputs_ts = list(range(targets_num_max))
for i in range(targets_num_max):
#target2sentence attention
att_s_i = bilinear_attention_layer(self.LSTM_targets_all[i], pool_sen, self.targets_all_len[i], 2 * num_hidden ,l2_reg_lambda,
random_base = rand_base, layer_id = 'tar')
self.outputs_ss[i] = tf.squeeze(tf.matmul(att_s_i, self.LSTM_targets_all[i]),axis=1) #13*(?,600)
#position
target_position_i = tf.expand_dims(self.targets_all_position[i], 2) # (?,78,1)
LSTM_Hiddens_sen_p_i = tf.multiply(self.LSTM_Hiddens_sen, target_position_i)
att_s_i = bilinear_attention_layer(LSTM_Hiddens_sen_p_i, self.outputs_ss[i], self.sen_len, 2 * num_hidden ,l2_reg_lambda,
random_base = rand_base, layer_id = 'sen')
self.outputs_ts[i] = tf.squeeze(tf.matmul(att_s_i, self.LSTM_Hiddens_sen), axis=1)
scope.reuse_variables()
with tf.name_scope("targets_gather"):
self.targets_concat = tf.concat([tf.expand_dims(t, axis=2) for t in self.outputs_ts], axis=2) # (?,600,13)
# 4. GCN LAYER ######################################################################
with tf.name_scope('GCN_layer1'):
W_cross = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_cross')
b_cross = tf.Variable(tf.random.uniform([2 * num_hidden],-rand_base,rand_base),name = 'b_cross')
W_self = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_self')
b_self = tf.Variable(tf.random.uniform([2 * num_hidden], -rand_base, rand_base), name='b_self')
GCN1_cross = WXbA_Relu(self.targets_concat,self.relate_cross,W_cross,b_cross)
GCN1_self = WXbA_Relu(self.targets_concat,self.relate_self,W_self,b_self)
GCN1_out = GCN1_cross+GCN1_self #(?,600,13)
with tf.name_scope('GCN_layer2'):
W_cross = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_cross')
b_cross = tf.Variable(tf.random.uniform([2 * num_hidden],-rand_base,rand_base),name = 'b_cross')
W_self = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_self')
b_self = tf.Variable(tf.random.uniform([2 * num_hidden], -rand_base, rand_base), name='b_self')
GCN2_cross = WXbA_Relu(GCN1_out,self.relate_cross,W_cross,b_cross)
GCN2_self = WXbA_Relu(GCN1_out,self.relate_self,W_self,b_self)
GCN2_out = GCN2_cross+GCN2_self #(?,600,13)
with tf.name_scope('GCN_layer3'):
W_cross = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_cross')
b_cross = tf.Variable(tf.random.uniform([2 * num_hidden],-rand_base,rand_base),name = 'b_cross')
W_self = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_self')
b_self = tf.Variable(tf.random.uniform([2 * num_hidden], -rand_base, rand_base), name='b_self')
GCN3_cross = WXbA_Relu(GCN2_out,self.relate_cross,W_cross,b_cross)
GCN3_self = WXbA_Relu(GCN2_out,self.relate_self,W_self,b_self)
GCN3_out = GCN3_cross+GCN3_self #(?,600,13)
# multiply by self.target_which to pick out the representation of the current target
target_which = tf.expand_dims(self.target_which, 1) # (?,1,13)
self.GCN2_out = tf.multiply(GCN3_out, target_which) # (?,600,13) * (?,1,13) = (?,600,13)
self.targets_representation = tf.reduce_sum(self.GCN2_out, 2) # (?,600)
with tf.name_scope("output"):
W = tf.Variable(tf.random_normal([2 * num_hidden, num_classes]))
b = tf.Variable(tf.random_normal([num_classes]))
self.scores = tf.nn.xw_plus_b(self.targets_representation, W,b, name="scores")
l2_loss += tf.nn.l2_loss(W)
l2_loss += tf.nn.l2_loss(b)
self.predictions = tf.argmax(self.scores, 1, name="predictions")
self.true_y = tf.argmax(self.input_y, 1, name="true_y")
self.softmax = tf.nn.softmax(self.scores, name="softmax")
with tf.name_scope("loss"):
self.losses = tf.nn.softmax_cross_entropy_with_logits(logits=self.scores,labels=self.input_y)
self.loss = tf.reduce_mean(self.losses, name="loss") + l2_reg_lambda * l2_loss
with tf.name_scope("accuracy"):
self.correct_pred = tf.equal(self.predictions,self.true_y)
self.accuracy = tf.reduce_mean(tf.cast(self.correct_pred, "float"),name="accuracy")
print("Loaded CAtt_GCN_L3!")
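All four classes pool variable-length Bi-LSTM outputs with `reduce_mean_with_len`, averaging only over the valid (unpadded) timesteps of each sequence. That helper comes from `models.nn_layer` and is not defined in this file, so the sketch below is an assumed numpy equivalent of its behavior:

```python
import numpy as np

def reduce_mean_with_len(H, lengths):
    # H: (B, T, d) padded sequences, lengths: (B,) valid lengths.
    # Mean over the first `lengths[b]` timesteps of each sequence; padding
    # positions are zeroed out by the mask and excluded from the average.
    B, T, d = H.shape
    mask = (np.arange(T)[None, :] < lengths[:, None]).astype(H.dtype)  # (B, T)
    return (H * mask[:, :, None]).sum(axis=1) / lengths[:, None]

# toy batch: 2 sequences of max length 3, hidden size 4
H = np.arange(24, dtype=float).reshape(2, 3, 4)
lengths = np.array([2, 3])   # first sequence has one padded timestep
pooled = reduce_mean_with_len(H, lengths)
```

Dividing by `lengths` rather than `T` is what makes this a length-aware mean; a plain `H.mean(axis=1)` would dilute short sequences with their zero padding.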
class CAtt_GCN_L4(object):
def __init__(self, sequence_length, target_sequence_length,targets_num_max, num_classes, word_embedding, l2_reg_lambda=0.0,
num_hidden=100):
#tf.set_random_seed(-1)
# PLACEHOLDERS
rand_base = 0.01
self.input_x = tf.placeholder(tf.int32, [None, sequence_length], name="input_x") # X - The Data
self.input_target = tf.placeholder(tf.int32, [None, target_sequence_length], name="input_target") # The target
self.input_targets_all = tf.placeholder(tf.int32, [None, targets_num_max, target_sequence_length], name="input_targets_all") # All the targets
self.sen_len = tf.placeholder(tf.int32, None, name='sen_len') # length of the sentence
self.target_len = tf.placeholder(tf.int32, None, name='target_len') # length of the target
with tf.name_scope('targets_all_len'):
self.targets_all_len_a = tf.placeholder(tf.int32, [None,targets_num_max],name="targets_all_len")
batch_size = tf.shape(self.input_x)[0]
self.targets_all_len = []
for i in range(targets_num_max):
targets_i_len = tf.slice(self.targets_all_len_a, [0, i], [batch_size, 1])
self.targets_all_len.append(tf.squeeze(targets_i_len)) # length of every target
self.targets_num = tf.placeholder(tf.int32, None, name='targets_num') # The number of targets
self.relate_cross = tf.placeholder(tf.float32, [None,targets_num_max, targets_num_max], name='relate_cross') #the relation between targets
self.relate_self = tf.placeholder(tf.float32, [None, targets_num_max, targets_num_max], name='relate_self')
self.target_which = tf.placeholder(tf.float32, [None, targets_num_max, ], name='which_position')
self.target_position = tf.placeholder(tf.float32, [None, sequence_length], name='target_position')
with tf.name_scope('targets_all_position'):
self.targets_all_position_a = tf.placeholder(tf.float32, [None,targets_num_max,sequence_length],name="targets_all_position")
self.targets_all_position = []
for i in range(targets_num_max):
targets_i_position = self.targets_all_position_a[:, i, :]
self.targets_all_position.append(tf.squeeze(targets_i_position))
self.input_y = tf.placeholder(tf.float32, [None, num_classes], name="input_y") # Y - The Labels
self.dropout_keep_prob = tf.placeholder(tf.float32, name="dropout_keep_prob") # Dropout
l2_loss = tf.constant(0.0) # Keeping track of l2 regularization loss
# 1. EMBEDDING LAYER ################################################################
with tf.name_scope("embedding"):
self.word_embedding = tf.constant(word_embedding, name='word_embedding')
# Embedding for the context
with tf.name_scope("embedded_sen"):
self.embedded_sen = tf.nn.embedding_lookup(self.word_embedding, self.input_x)
# self.embedded_expanded = tf.expand_dims(self.embedded, -1)
self.embedded_sen = tf.cast(self.embedded_sen, tf.float32) #(?,78,300)
self.embedded_sen = tf.nn.dropout(self.embedded_sen, keep_prob=self.dropout_keep_prob)
embedding_size = word_embedding.shape[1]
print('embedding_size {}'.format(embedding_size))
num_hidden = embedding_size
# Embedding for the target
with tf.name_scope("embedding_target"):
self.embedded_target = tf.nn.embedding_lookup(self.word_embedding, self.input_target)
self.embedded_target = tf.cast(self.embedded_target, tf.float32) #(?,21,300)
self.embedded_target = tf.nn.dropout(self.embedded_target, keep_prob=self.dropout_keep_prob)
# Embedding for all targets
with tf.name_scope("embedding_targets"):
self.embedded_targets_all = list(range(targets_num_max))
for i in range(targets_num_max):
#get a target
self.input_targets_i = self.input_targets_all[:,i,:] # (?,13,23) --> (?,23); i denotes the i-th target
self.embedded_target_i = tf.nn.embedding_lookup(self.word_embedding, self.input_targets_i)
self.embedded_target_i = tf.cast(self.embedded_target_i, tf.float32)
self.embedded_target_i = tf.nn.dropout(self.embedded_target_i, keep_prob=self.dropout_keep_prob)
self.embedded_targets_all[i] = self.embedded_target_i #13*(?,21,300)
#2. LSTM LAYER ######################################################################
# Bi-LSTM for the context
with tf.name_scope("Bi-LSTM_sentence"):
cell = tf.nn.rnn_cell.LSTMCell
self.LSTM_Hiddens_sen = bi_dynamic_rnn(cell, self.embedded_sen, num_hidden, self.sen_len,
sequence_length, 'bi-lstm-sentence' ,'all',
dropout = True, dropout_prob= self.dropout_keep_prob) #(?,78,600)
pool_sen = reduce_mean_with_len(self.LSTM_Hiddens_sen, self.sen_len)
# Bi-LSTM for the targets
with tf.variable_scope("Bi-LSTM_targets") as scope:
self.LSTM_targets_all = list(range(targets_num_max))
pool_targets_all = list(range(targets_num_max))
for i in range(targets_num_max):
cell = tf.nn.rnn_cell.LSTMCell
self.LSTM_targets_all[i] = bi_dynamic_rnn(cell, self.embedded_targets_all[i], num_hidden, self.targets_all_len[i],
target_sequence_length, 'bi-lstm-targets', 'all',
dropout=True, dropout_prob=self.dropout_keep_prob) # (?,21,600)
pool_targets_all[i] = reduce_mean_with_len(self.LSTM_targets_all[i], self.targets_all_len[i])
scope.reuse_variables()
# 3. Attention LAYER ######################################################################
# all targets to sentence attention
with tf.variable_scope("Attention-targets_all2sentence") as scope:
self.outputs_ss = list(range(targets_num_max)) #all the target attention for the sentence
self.outputs_ts = list(range(targets_num_max))
for i in range(targets_num_max):
#target2sentence attention
att_s_i = bilinear_attention_layer(self.LSTM_targets_all[i], pool_sen, self.targets_all_len[i], 2 * num_hidden ,l2_reg_lambda,
random_base = rand_base, layer_id = 'tar')
self.outputs_ss[i] = tf.squeeze(tf.matmul(att_s_i, self.LSTM_targets_all[i]),axis=1) #13*(?,600)
#position
target_position_i = tf.expand_dims(self.targets_all_position[i], 2) # (?,78,1)
LSTM_Hiddens_sen_p_i = tf.multiply(self.LSTM_Hiddens_sen, target_position_i)
att_s_i = bilinear_attention_layer(LSTM_Hiddens_sen_p_i, self.outputs_ss[i], self.sen_len, 2 * num_hidden ,l2_reg_lambda,
random_base = rand_base, layer_id = 'sen')
self.outputs_ts[i] = tf.squeeze(tf.matmul(att_s_i, self.LSTM_Hiddens_sen), axis=1)
scope.reuse_variables()
with tf.name_scope("targets_gather"):
self.targets_concat = tf.concat([tf.expand_dims (i ,axis = 2) for i in self.outputs_ts], axis=2) #(?,600,13)
# 4. GCN LAYER ######################################################################
with tf.name_scope('GCN_layer1'):
W_cross = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_cross')
b_cross = tf.Variable(tf.random.uniform([2 * num_hidden],-rand_base,rand_base),name = 'b_cross')
W_self = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_self')
b_self = tf.Variable(tf.random.uniform([2 * num_hidden], -rand_base, rand_base), name='b_self')
GCN1_cross = WXbA_Relu(self.targets_concat,self.relate_cross,W_cross,b_cross)
GCN1_self = WXbA_Relu(self.targets_concat,self.relate_self,W_self,b_self)
GCN1_out = GCN1_cross+GCN1_self #(?,600,13)
with tf.name_scope('GCN_layer2'):
W_cross = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_cross')
b_cross = tf.Variable(tf.random.uniform([2 * num_hidden],-rand_base,rand_base),name = 'b_cross')
W_self = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_self')
b_self = tf.Variable(tf.random.uniform([2 * num_hidden], -rand_base, rand_base), name='b_self')
GCN2_cross = WXbA_Relu(GCN1_out,self.relate_cross,W_cross,b_cross)
GCN2_self = WXbA_Relu(GCN1_out,self.relate_self,W_self,b_self)
GCN2_out = GCN2_cross+GCN2_self #(?,600,13)
with tf.name_scope('GCN_layer3'):
W_cross = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_cross')
b_cross = tf.Variable(tf.random.uniform([2 * num_hidden],-rand_base,rand_base),name = 'b_cross')
W_self = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_self')
b_self = tf.Variable(tf.random.uniform([2 * num_hidden], -rand_base, rand_base), name='b_self')
GCN3_cross = WXbA_Relu(GCN2_out,self.relate_cross,W_cross,b_cross)
GCN3_self = WXbA_Relu(GCN2_out,self.relate_self,W_self,b_self)
GCN3_out = GCN3_cross+GCN3_self #(?,600,13)
with tf.name_scope('GCN_layer4'):
W_cross = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_cross')
b_cross = tf.Variable(tf.random.uniform([2 * num_hidden],-rand_base,rand_base),name = 'b_cross')
W_self = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_self')
b_self = tf.Variable(tf.random.uniform([2 * num_hidden], -rand_base, rand_base), name='b_self')
GCN4_cross = WXbA_Relu(GCN3_out,self.relate_cross,W_cross,b_cross)
GCN4_self = WXbA_Relu(GCN3_out,self.relate_self,W_self,b_self)
GCN4_out = GCN4_cross+GCN4_self #(?,600,13)
# Multiply by self.target_which to pick out the representation of the queried target
target_which = tf.expand_dims(self.target_which,1) # (?,1,13)
self.GCN2_out = tf.multiply(GCN4_out, target_which) #(?,600,13)*(?,1,13) = (?,600,13)
self.targets_representation = tf.reduce_sum(self.GCN2_out, 2) # (?,600)
with tf.name_scope("output"):
W = tf.Variable(tf.random_normal([2 * num_hidden, num_classes]))
b = tf.Variable(tf.random_normal([num_classes]))
self.scores = tf.nn.xw_plus_b(self.targets_representation, W,b, name="scores")
l2_loss += tf.nn.l2_loss(W)
l2_loss += tf.nn.l2_loss(b)
self.predictions = tf.argmax(self.scores, 1, name="predictions")
self.true_y = tf.argmax(self.input_y, 1, name="true_y")
self.softmax = tf.nn.softmax(self.scores, name="softmax")
with tf.name_scope("loss"):
self.losses = tf.nn.softmax_cross_entropy_with_logits(logits=self.scores,labels=self.input_y)
self.loss = tf.reduce_mean(self.losses, name="loss") + l2_reg_lambda * l2_loss
with tf.name_scope("accuracy"):
self.correct_pred = tf.equal(self.predictions,self.true_y)
self.accuracy = tf.reduce_mean(tf.cast(self.correct_pred, "float"),name="accuracy")
print ("LOADED Att-GCN!")
class CAtt_GCN_L5(object):
def __init__(self, sequence_length, target_sequence_length,targets_num_max, num_classes, word_embedding, l2_reg_lambda=0.0,
num_hidden=100):
#tf.set_random_seed(-1)
# PLACEHOLDERS
rand_base = 0.01
self.input_x = tf.placeholder(tf.int32, [None, sequence_length], name="input_x") # X - The Data
self.input_target = tf.placeholder(tf.int32, [None, target_sequence_length], name="input_target") # The target
self.input_targets_all = tf.placeholder(tf.int32, [None, targets_num_max, target_sequence_length], name="input_targets_all") # All the targets
self.sen_len = tf.placeholder(tf.int32, None, name='sen_len') # length of the sentence
self.target_len = tf.placeholder(tf.int32, None, name='target_len') # length of the target
with tf.name_scope('targets_all_len'):
self.targets_all_len_a = tf.placeholder(tf.int32, [None,targets_num_max],name="targets_all_len")
batch_size = tf.shape(self.input_x)[0]
self.targets_all_len = []
for i in range(targets_num_max):
targets_i_len = tf.slice(self.targets_all_len_a, [0, i], [batch_size, 1])
self.targets_all_len.append(tf.squeeze(targets_i_len)) # length of every target
self.targets_num = tf.placeholder(tf.int32, None, name='targets_num') # The number of targets
self.relate_cross = tf.placeholder(tf.float32, [None,targets_num_max, targets_num_max], name='relate_cross') #the relation between targets
self.relate_self = tf.placeholder(tf.float32, [None, targets_num_max, targets_num_max], name='relate_self')
self.target_which = tf.placeholder(tf.float32, [None, targets_num_max, ], name='which_position')
self.target_position = tf.placeholder(tf.float32, [None, sequence_length], name='target_position')
with tf.name_scope('targets_all_position'):
self.targets_all_position_a = tf.placeholder(tf.float32, [None,targets_num_max,sequence_length],name="targets_all_position")
self.targets_all_position = []
for i in range(targets_num_max):
targets_i_position = self.targets_all_position_a[:, i, :]
self.targets_all_position.append(tf.squeeze(targets_i_position))
self.input_y = tf.placeholder(tf.float32, [None, num_classes], name="input_y") # Y - The Labels
self.dropout_keep_prob = tf.placeholder(tf.float32, name="dropout_keep_prob") # Dropout
l2_loss = tf.constant(0.0) # Keeping track of l2 regularization loss
# 1. EMBEDDING LAYER ################################################################
with tf.name_scope("embedding"):
self.word_embedding = tf.constant(word_embedding, name='word_embedding')
# Embedding for the context
with tf.name_scope("embedded_sen"):
self.embedded_sen = tf.nn.embedding_lookup(self.word_embedding, self.input_x)
# self.embedded_expanded = tf.expand_dims(self.embedded, -1)
self.embedded_sen = tf.cast(self.embedded_sen, tf.float32) #(?,78,300)
self.embedded_sen = tf.nn.dropout(self.embedded_sen, keep_prob=self.dropout_keep_prob)
embedding_size = word_embedding.shape[1]
print('embedding_size {}'.format(embedding_size))
num_hidden = embedding_size
# Embedding for the target
with tf.name_scope("embedding_target"):
self.embedded_target = tf.nn.embedding_lookup(self.word_embedding, self.input_target)
self.embedded_target = tf.cast(self.embedded_target, tf.float32) #(?,21,300)
self.embedded_target = tf.nn.dropout(self.embedded_target, keep_prob=self.dropout_keep_prob)
# Embedding for all targets
with tf.name_scope("embedding_targets"):
self.embedded_targets_all = list(range(targets_num_max))
for i in range(targets_num_max):
#get a target
self.input_targets_i = self.input_targets_all[:,i,:] # (?,13,23) --> (?,23); i denotes the i-th target
self.embedded_target_i = tf.nn.embedding_lookup(self.word_embedding, self.input_targets_i)
self.embedded_target_i = tf.cast(self.embedded_target_i, tf.float32)
self.embedded_target_i = tf.nn.dropout(self.embedded_target_i, keep_prob=self.dropout_keep_prob)
self.embedded_targets_all[i] = self.embedded_target_i #13*(?,21,300)
#2. LSTM LAYER ######################################################################
# Bi-LSTM for the context
with tf.name_scope("Bi-LSTM_sentence"):
cell = tf.nn.rnn_cell.LSTMCell
self.LSTM_Hiddens_sen = bi_dynamic_rnn(cell, self.embedded_sen, num_hidden, self.sen_len,
sequence_length, 'bi-lstm-sentence' ,'all',
dropout = True, dropout_prob= self.dropout_keep_prob) #(?,78,600)
pool_sen = reduce_mean_with_len(self.LSTM_Hiddens_sen, self.sen_len)
# Bi-LSTM for the targets
with tf.variable_scope("Bi-LSTM_targets") as scope:
self.LSTM_targets_all = list(range(targets_num_max))
pool_targets_all = list(range(targets_num_max))
for i in range(targets_num_max):
cell = tf.nn.rnn_cell.LSTMCell
self.LSTM_targets_all[i] = bi_dynamic_rnn(cell, self.embedded_targets_all[i], num_hidden, self.targets_all_len[i],
target_sequence_length, 'bi-lstm-targets', 'all',
dropout=True, dropout_prob=self.dropout_keep_prob) # (?,21,600)
pool_targets_all[i] = reduce_mean_with_len(self.LSTM_targets_all[i], self.targets_all_len[i])
scope.reuse_variables()
# 3. Attention LAYER ######################################################################
# all targets to sentence attention
with tf.variable_scope("Attention-targets_all2sentence") as scope:
self.outputs_ss = list(range(targets_num_max)) #all the target attention for the sentence
self.outputs_ts = list(range(targets_num_max))
for i in range(targets_num_max):
#target2sentence attention
att_s_i = bilinear_attention_layer(self.LSTM_targets_all[i], pool_sen, self.targets_all_len[i], 2 * num_hidden ,l2_reg_lambda,
random_base = rand_base, layer_id = 'tar')
self.outputs_ss[i] = tf.squeeze(tf.matmul(att_s_i, self.LSTM_targets_all[i]),axis=1) #13*(?,600)
#position
target_position_i = tf.expand_dims(self.targets_all_position[i], 2) # (?,78,1)
LSTM_Hiddens_sen_p_i = tf.multiply(self.LSTM_Hiddens_sen, target_position_i)
att_s_i = bilinear_attention_layer(LSTM_Hiddens_sen_p_i, self.outputs_ss[i], self.sen_len, 2 * num_hidden ,l2_reg_lambda,
random_base = rand_base, layer_id = 'sen')
self.outputs_ts[i] = tf.squeeze(tf.matmul(att_s_i, self.LSTM_Hiddens_sen), axis=1)
scope.reuse_variables()
with tf.name_scope("targets_gather"):
self.targets_concat = tf.concat([tf.expand_dims (i ,axis = 2) for i in self.outputs_ts], axis=2) #(?,600,13)
# 4. GCN LAYER ######################################################################
with tf.name_scope('GCN_layer1'):
W_cross = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_cross')
b_cross = tf.Variable(tf.random.uniform([2 * num_hidden],-rand_base,rand_base),name = 'b_cross')
W_self = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_self')
b_self = tf.Variable(tf.random.uniform([2 * num_hidden], -rand_base, rand_base), name='b_self')
GCN1_cross = WXbA_Relu(self.targets_concat,self.relate_cross,W_cross,b_cross)
GCN1_self = WXbA_Relu(self.targets_concat,self.relate_self,W_self,b_self)
GCN1_out = GCN1_cross+GCN1_self #(?,600,13)
with tf.name_scope('GCN_layer2'):
W_cross = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_cross')
b_cross = tf.Variable(tf.random.uniform([2 * num_hidden],-rand_base,rand_base),name = 'b_cross')
W_self = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_self')
b_self = tf.Variable(tf.random.uniform([2 * num_hidden], -rand_base, rand_base), name='b_self')
GCN2_cross = WXbA_Relu(GCN1_out,self.relate_cross,W_cross,b_cross)
GCN2_self = WXbA_Relu(GCN1_out,self.relate_self,W_self,b_self)
GCN2_out = GCN2_cross+GCN2_self #(?,600,13)
with tf.name_scope('GCN_layer3'):
W_cross = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_cross')
b_cross = tf.Variable(tf.random.uniform([2 * num_hidden],-rand_base,rand_base),name = 'b_cross')
W_self = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_self')
b_self = tf.Variable(tf.random.uniform([2 * num_hidden], -rand_base, rand_base), name='b_self')
GCN3_cross = WXbA_Relu(GCN2_out,self.relate_cross,W_cross,b_cross)
GCN3_self = WXbA_Relu(GCN2_out,self.relate_self,W_self,b_self)
GCN3_out = GCN3_cross+GCN3_self #(?,600,13)
with tf.name_scope('GCN_layer4'):
W_cross = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_cross')
b_cross = tf.Variable(tf.random.uniform([2 * num_hidden],-rand_base,rand_base),name = 'b_cross')
W_self = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_self')
b_self = tf.Variable(tf.random.uniform([2 * num_hidden], -rand_base, rand_base), name='b_self')
GCN4_cross = WXbA_Relu(GCN3_out,self.relate_cross,W_cross,b_cross)
GCN4_self = WXbA_Relu(GCN3_out,self.relate_self,W_self,b_self)
GCN4_out = GCN4_cross+GCN4_self #(?,600,13)
with tf.name_scope('GCN_layer5'):
W_cross = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_cross')
b_cross = tf.Variable(tf.random.uniform([2 * num_hidden],-rand_base,rand_base),name = 'b_cross')
W_self = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_self')
b_self = tf.Variable(tf.random.uniform([2 * num_hidden], -rand_base, rand_base), name='b_self')
GCN5_cross = WXbA_Relu(GCN4_out,self.relate_cross,W_cross,b_cross)
GCN5_self = WXbA_Relu(GCN4_out,self.relate_self,W_self,b_self)
GCN5_out = GCN5_cross+GCN5_self #(?,600,13)
# Multiply by self.target_which to pick out the representation of the queried target
target_which = tf.expand_dims(self.target_which,1) # (?,1,13)
self.GCN2_out = tf.multiply(GCN5_out, target_which) #(?,600,13)*(?,1,13) = (?,600,13)
self.targets_representation = tf.reduce_sum(self.GCN2_out, 2) # (?,600)
with tf.name_scope("output"):
W = tf.Variable(tf.random_normal([2 * num_hidden, num_classes]))
b = tf.Variable(tf.random_normal([num_classes]))
self.scores = tf.nn.xw_plus_b(self.targets_representation, W,b, name="scores")
l2_loss += tf.nn.l2_loss(W)
l2_loss += tf.nn.l2_loss(b)
self.predictions = tf.argmax(self.scores, 1, name="predictions")
self.true_y = tf.argmax(self.input_y, 1, name="true_y")
self.softmax = tf.nn.softmax(self.scores, name="softmax")
with tf.name_scope("loss"):
self.losses = tf.nn.softmax_cross_entropy_with_logits(logits=self.scores,labels=self.input_y)
self.loss = tf.reduce_mean(self.losses, name="loss") + l2_reg_lambda * l2_loss
with tf.name_scope("accuracy"):
self.correct_pred = tf.equal(self.predictions,self.true_y)
self.accuracy = tf.reduce_mean(tf.cast(self.correct_pred, "float"),name="accuracy")
print ("LOADED Att-GCN!")
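After the last GCN layer, each class multiplies the (?, 600, 13) output by self.target_which, a one-hot (?, 13) indicator expanded to (?, 1, 13), then sums over the target axis to get the queried target's (?, 600) representation. A NumPy sketch of that readout for a single example, with small illustrative shapes:

```python
import numpy as np

def select_target(gcn_out, target_which):
    """Mask a (d, n) GCN output with a one-hot target indicator and sum over targets."""
    # gcn_out: (d, n) features for n targets; target_which: (n,) one-hot row.
    # Broadcasting the mask zeroes every column except the queried target's.
    return (gcn_out * target_which[None, :]).sum(axis=1)  # -> (d,)

gcn_out = np.arange(12, dtype=float).reshape(3, 4)  # d=3 features, n=4 targets
one_hot = np.array([0.0, 0.0, 1.0, 0.0])            # query the third target
rep = select_target(gcn_out, one_hot)
print(rep)  # [ 2.  6. 10.] -- exactly column 2 of gcn_out
```

This is why the subsequent output layer can be a plain affine map `tf.nn.xw_plus_b`: the multiply-and-reduce collapses the per-target dimension into a single fixed-size vector per example.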
class CAtt_GCN_L6(object):
def __init__(self, sequence_length, target_sequence_length,targets_num_max, num_classes, word_embedding, l2_reg_lambda=0.0,
num_hidden=100):
#tf.set_random_seed(-1)
# PLACEHOLDERS
rand_base = 0.01
self.input_x = tf.placeholder(tf.int32, [None, sequence_length], name="input_x") # X - The Data
self.input_target = tf.placeholder(tf.int32, [None, target_sequence_length], name="input_target") # The target
self.input_targets_all = tf.placeholder(tf.int32, [None, targets_num_max, target_sequence_length], name="input_targets_all") # All the targets
self.sen_len = tf.placeholder(tf.int32, None, name='sen_len') # length of the sentence
self.target_len = tf.placeholder(tf.int32, None, name='target_len') # length of the target
with tf.name_scope('targets_all_len'):
self.targets_all_len_a = tf.placeholder(tf.int32, [None,targets_num_max],name="targets_all_len")
batch_size = tf.shape(self.input_x)[0]
self.targets_all_len = []
for i in range(targets_num_max):
targets_i_len = tf.slice(self.targets_all_len_a, [0, i], [batch_size, 1])
self.targets_all_len.append(tf.squeeze(targets_i_len)) # length of every target
self.targets_num = tf.placeholder(tf.int32, None, name='targets_num') # The number of targets
self.relate_cross = tf.placeholder(tf.float32, [None,targets_num_max, targets_num_max], name='relate_cross') #the relation between targets
self.relate_self = tf.placeholder(tf.float32, [None, targets_num_max, targets_num_max], name='relate_self')
self.target_which = tf.placeholder(tf.float32, [None, targets_num_max, ], name='which_position')
self.target_position = tf.placeholder(tf.float32, [None, sequence_length], name='target_position')
with tf.name_scope('targets_all_position'):
self.targets_all_position_a = tf.placeholder(tf.float32, [None,targets_num_max,sequence_length],name="targets_all_position")
self.targets_all_position = []
for i in range(targets_num_max):
targets_i_position = self.targets_all_position_a[:, i, :]
self.targets_all_position.append(tf.squeeze(targets_i_position))
self.input_y = tf.placeholder(tf.float32, [None, num_classes], name="input_y") # Y - The Labels
self.dropout_keep_prob = tf.placeholder(tf.float32, name="dropout_keep_prob") # Dropout
l2_loss = tf.constant(0.0) # Keeping track of l2 regularization loss
# 1. EMBEDDING LAYER ################################################################
with tf.name_scope("embedding"):
self.word_embedding = tf.constant(word_embedding, name='word_embedding')
# Embedding for the context
with tf.name_scope("embedded_sen"):
self.embedded_sen = tf.nn.embedding_lookup(self.word_embedding, self.input_x)
# self.embedded_expanded = tf.expand_dims(self.embedded, -1)
self.embedded_sen = tf.cast(self.embedded_sen, tf.float32) #(?,78,300)
self.embedded_sen = tf.nn.dropout(self.embedded_sen, keep_prob=self.dropout_keep_prob)
embedding_size = word_embedding.shape[1]
print('embedding_size {}'.format(embedding_size))
num_hidden = embedding_size
# Embedding for the target
with tf.name_scope("embedding_target"):
self.embedded_target = tf.nn.embedding_lookup(self.word_embedding, self.input_target)
self.embedded_target = tf.cast(self.embedded_target, tf.float32) #(?,21,300)
self.embedded_target = tf.nn.dropout(self.embedded_target, keep_prob=self.dropout_keep_prob)
# Embedding for all targets
with tf.name_scope("embedding_targets"):
self.embedded_targets_all = list(range(targets_num_max))
for i in range(targets_num_max):
#get a target
self.input_targets_i = self.input_targets_all[:,i,:] # (?,13,23) --> (?,23); i denotes the i-th target
self.embedded_target_i = tf.nn.embedding_lookup(self.word_embedding, self.input_targets_i)
self.embedded_target_i = tf.cast(self.embedded_target_i, tf.float32)
self.embedded_target_i = tf.nn.dropout(self.embedded_target_i, keep_prob=self.dropout_keep_prob)
self.embedded_targets_all[i] = self.embedded_target_i #13*(?,21,300)
#2. LSTM LAYER ######################################################################
# Bi-LSTM for the context
with tf.name_scope("Bi-LSTM_sentence"):
cell = tf.nn.rnn_cell.LSTMCell
self.LSTM_Hiddens_sen = bi_dynamic_rnn(cell, self.embedded_sen, num_hidden, self.sen_len,
sequence_length, 'bi-lstm-sentence' ,'all',
dropout = True, dropout_prob= self.dropout_keep_prob) #(?,78,600)
pool_sen = reduce_mean_with_len(self.LSTM_Hiddens_sen, self.sen_len)
# Bi-LSTM for the targets
with tf.variable_scope("Bi-LSTM_targets") as scope:
self.LSTM_targets_all = list(range(targets_num_max))
pool_targets_all = list(range(targets_num_max))
for i in range(targets_num_max):
cell = tf.nn.rnn_cell.LSTMCell
self.LSTM_targets_all[i] = bi_dynamic_rnn(cell, self.embedded_targets_all[i], num_hidden, self.targets_all_len[i],
target_sequence_length, 'bi-lstm-targets', 'all',
dropout=True, dropout_prob=self.dropout_keep_prob) # (?,21,600)
pool_targets_all[i] = reduce_mean_with_len(self.LSTM_targets_all[i], self.targets_all_len[i])
scope.reuse_variables()
# 3. Attention LAYER ######################################################################
# all targets to sentence attention
with tf.variable_scope("Attention-targets_all2sentence") as scope:
self.outputs_ss = list(range(targets_num_max)) #all the target attention for the sentence
self.outputs_ts = list(range(targets_num_max))
for i in range(targets_num_max):
#target2sentence attention
att_s_i = bilinear_attention_layer(self.LSTM_targets_all[i], pool_sen, self.targets_all_len[i], 2 * num_hidden ,l2_reg_lambda,
random_base = rand_base, layer_id = 'tar')
self.outputs_ss[i] = tf.squeeze(tf.matmul(att_s_i, self.LSTM_targets_all[i]),axis=1) #13*(?,600)
#position
target_position_i = tf.expand_dims(self.targets_all_position[i], 2) # (?,78,1)
LSTM_Hiddens_sen_p_i = tf.multiply(self.LSTM_Hiddens_sen, target_position_i)
att_s_i = bilinear_attention_layer(LSTM_Hiddens_sen_p_i, self.outputs_ss[i], self.sen_len, 2 * num_hidden ,l2_reg_lambda,
random_base = rand_base, layer_id = 'sen')
self.outputs_ts[i] = tf.squeeze(tf.matmul(att_s_i, self.LSTM_Hiddens_sen), axis=1)
scope.reuse_variables()
with tf.name_scope("targets_gather"):
self.targets_concat = tf.concat([tf.expand_dims (i ,axis = 2) for i in self.outputs_ts], axis=2) #(?,600,13)
# 4. GCN LAYER ######################################################################
with tf.name_scope('GCN_layer1'):
W_cross = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_cross')
b_cross = tf.Variable(tf.random.uniform([2 * num_hidden],-rand_base,rand_base),name = 'b_cross')
W_self = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_self')
b_self = tf.Variable(tf.random.uniform([2 * num_hidden], -rand_base, rand_base), name='b_self')
GCN1_cross = WXbA_Relu(self.targets_concat,self.relate_cross,W_cross,b_cross)
GCN1_self = WXbA_Relu(self.targets_concat,self.relate_self,W_self,b_self)
GCN1_out = GCN1_cross+GCN1_self #(?,600,13)
with tf.name_scope('GCN_layer2'):
W_cross = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_cross')
b_cross = tf.Variable(tf.random.uniform([2 * num_hidden],-rand_base,rand_base),name = 'b_cross')
W_self = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_self')
b_self = tf.Variable(tf.random.uniform([2 * num_hidden], -rand_base, rand_base), name='b_self')
GCN2_cross = WXbA_Relu(GCN1_out,self.relate_cross,W_cross,b_cross)
GCN2_self = WXbA_Relu(GCN1_out,self.relate_self,W_self,b_self)
GCN2_out = GCN2_cross+GCN2_self #(?,600,13)
with tf.name_scope('GCN_layer3'):
W_cross = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_cross')
b_cross = tf.Variable(tf.random.uniform([2 * num_hidden],-rand_base,rand_base),name = 'b_cross')
W_self = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_self')
b_self = tf.Variable(tf.random.uniform([2 * num_hidden], -rand_base, rand_base), name='b_self')
GCN3_cross = WXbA_Relu(GCN2_out,self.relate_cross,W_cross,b_cross)
GCN3_self = WXbA_Relu(GCN2_out,self.relate_self,W_self,b_self)
GCN3_out = GCN3_cross+GCN3_self #(?,600,13)
with tf.name_scope('GCN_layer4'):
W_cross = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_cross')
b_cross = tf.Variable(tf.random.uniform([2 * num_hidden],-rand_base,rand_base),name = 'b_cross')
W_self = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_self')
b_self = tf.Variable(tf.random.uniform([2 * num_hidden], -rand_base, rand_base), name='b_self')
GCN4_cross = WXbA_Relu(GCN3_out,self.relate_cross,W_cross,b_cross)
GCN4_self = WXbA_Relu(GCN3_out,self.relate_self,W_self,b_self)
GCN4_out = GCN4_cross+GCN4_self #(?,600,13)
with tf.name_scope('GCN_layer5'):
W_cross = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_cross')
b_cross = tf.Variable(tf.random.uniform([2 * num_hidden],-rand_base,rand_base),name = 'b_cross')
W_self = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_self')
b_self = tf.Variable(tf.random.uniform([2 * num_hidden], -rand_base, rand_base), name='b_self')
GCN5_cross = WXbA_Relu(GCN4_out,self.relate_cross,W_cross,b_cross)
GCN5_self = WXbA_Relu(GCN4_out,self.relate_self,W_self,b_self)
GCN5_out = GCN5_cross+GCN5_self #(?,600,13)
with tf.name_scope('GCN_layer6'):
W_cross = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_cross')
b_cross = tf.Variable(tf.random.uniform([2 * num_hidden],-rand_base,rand_base),name = 'b_cross')
W_self = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden],-rand_base,rand_base),name = 'W_self')
b_self = tf.Variable(tf.random.uniform([2 * num_hidden], -rand_base, rand_base), name='b_self')
GCN6_cross = WXbA_Relu(GCN5_out,self.relate_cross,W_cross,b_cross)
GCN6_self = WXbA_Relu(GCN5_out,self.relate_self,W_self,b_self)
GCN6_out = GCN6_cross+GCN6_self #(?,600,13)
# Multiply by self.target_which to pick out the representation of the queried target
target_which = tf.expand_dims(self.target_which,1) # (?,1,13)
self.GCN2_out = tf.multiply(GCN6_out, target_which) #(?,600,13)*(?,1,13) = (?,600,13)
self.targets_representation = tf.reduce_sum(self.GCN2_out, 2) # (?,600)
with tf.name_scope("output"):
W = tf.Variable(tf.random_normal([2 * num_hidden, num_classes]))
b = tf.Variable(tf.random_normal([num_classes]))
self.scores = tf.nn.xw_plus_b(self.targets_representation, W,b, name="scores")
l2_loss += tf.nn.l2_loss(W)
l2_loss += tf.nn.l2_loss(b)
self.predictions = tf.argmax(self.scores, 1, name="predictions")
self.true_y = tf.argmax(self.input_y, 1, name="true_y")
self.softmax = tf.nn.softmax(self.scores, name="softmax")
with tf.name_scope("loss"):
self.losses = tf.nn.softmax_cross_entropy_with_logits(logits=self.scores,labels=self.input_y)
self.loss = tf.reduce_mean(self.losses, name="loss") + l2_reg_lambda * l2_loss
with tf.name_scope("accuracy"):
self.correct_pred = tf.equal(self.predictions,self.true_y)
self.accuracy = tf.reduce_mean(tf.cast(self.correct_pred, "float"),name="accuracy")
print ("LOADED Att-GCN!")
class CAtt_GCN_L7(object):
def __init__(self, sequence_length, target_sequence_length,targets_num_max, num_classes, word_embedding, l2_reg_lambda=0.0,
num_hidden=100):
#tf.set_random_seed(-1)
# PLACEHOLDERS
rand_base = 0.01
self.input_x = tf.placeholder(tf.int32, [None, sequence_length], name="input_x") # X - The Data
self.input_target = tf.placeholder(tf.int32, [None, target_sequence_length], name="input_target") # The target
self.input_targets_all = tf.placeholder(tf.int32, [None, targets_num_max, target_sequence_length], name="input_targets_all") # All the targets
self.sen_len = tf.placeholder(tf.int32, None, name='sen_len') # length of the sentence
self.target_len = tf.placeholder(tf.int32, None, name='target_len') # length of the target
with tf.name_scope('targets_all_len'):
self.targets_all_len_a = tf.placeholder(tf.int32, [None,targets_num_max],name="targets_all_len")
batch_size = tf.shape(self.input_x)[0]
self.targets_all_len = []
for i in range(targets_num_max):
targets_i_len = tf.slice(self.targets_all_len_a, [0, i], [batch_size, 1])
self.targets_all_len.append(tf.squeeze(targets_i_len)) #lens of every target
        self.targets_num = tf.placeholder(tf.int32, None, name='targets_num')  # The number of targets
self.relate_cross = tf.placeholder(tf.float32, [None,targets_num_max, targets_num_max], name='relate_cross') #the relation between targets
self.relate_self = tf.placeholder(tf.float32, [None, targets_num_max, targets_num_max], name='relate_self')
        self.target_which = tf.placeholder(tf.float32, [None, targets_num_max], name='which_position')
self.target_position = tf.placeholder(tf.float32, [None, sequence_length], name='target_position')
with tf.name_scope('targets_all_position'):
self.targets_all_position_a = tf.placeholder(tf.float32, [None,targets_num_max,sequence_length],name="targets_all_position")
self.targets_all_position = []
for i in range(targets_num_max):
targets_i_len = self.targets_all_position_a[:, i,:]
self.targets_all_position.append(tf.squeeze(targets_i_len))
        self.input_y = tf.placeholder(tf.float32, [None, num_classes], name="input_y")  # Y - The Labels
self.dropout_keep_prob = tf.placeholder(tf.float32, name="dropout_keep_prob") # Dropout
l2_loss = tf.constant(0.0) # Keeping track of l2 regularization loss
# 1. EMBEDDING LAYER ################################################################
with tf.name_scope("embedding"):
self.word_embedding = tf.constant(word_embedding, name='word_embedding')
# Embedding for the context
with tf.name_scope("embedded_sen"):
self.embedded_sen = tf.nn.embedding_lookup(self.word_embedding, self.input_x)
# self.embedded_expanded = tf.expand_dims(self.embedded, -1)
self.embedded_sen = tf.cast(self.embedded_sen, tf.float32) #(?,78,300)
self.embedded_sen = tf.nn.dropout(self.embedded_sen, keep_prob=self.dropout_keep_prob)
embedding_size = word_embedding.shape[1]
print('embedding_size {}'.format(embedding_size))
num_hidden = embedding_size
# Embedding for the target
with tf.name_scope("embedding_target"):
self.embedded_target = tf.nn.embedding_lookup(self.word_embedding, self.input_target)
self.embedded_target = tf.cast(self.embedded_target, tf.float32) #(?,21,300)
self.embedded_target = tf.nn.dropout(self.embedded_target, keep_prob=self.dropout_keep_prob)
# Embedding for all targets
with tf.name_scope("embedding_targets"):
self.embedded_targets_all = list(range(targets_num_max))
for i in range(targets_num_max):
#get a target
                self.input_targets_i = self.input_targets_all[:, i, :]  # (?,13,23) --> (?,23); i indexes the i-th target
self.embedded_target_i = tf.nn.embedding_lookup(self.word_embedding, self.input_targets_i)
self.embedded_target_i = tf.cast(self.embedded_target_i, tf.float32)
self.embedded_target_i = tf.nn.dropout(self.embedded_target_i, keep_prob=self.dropout_keep_prob)
self.embedded_targets_all[i] = self.embedded_target_i #13*(?,21,300)
#2. LSTM LAYER ######################################################################
# Bi-LSTM for the context
with tf.name_scope("Bi-LSTM_sentence"):
cell = tf.nn.rnn_cell.LSTMCell
self.LSTM_Hiddens_sen = bi_dynamic_rnn(cell, self.embedded_sen, num_hidden, self.sen_len,
sequence_length, 'bi-lstm-sentence' ,'all',
dropout = True, dropout_prob= self.dropout_keep_prob) #(?,78,600)
pool_sen = reduce_mean_with_len(self.LSTM_Hiddens_sen, self.sen_len)
# Bi-LSTM for the targets
        with tf.variable_scope("Bi-LSTM_targets") as scope:
            self.LSTM_targets_all = list(range(targets_num_max))
            pool_targets_all = list(range(targets_num_max))
            for i in range(targets_num_max):
                cell = tf.nn.rnn_cell.LSTMCell
                self.LSTM_targets_all[i] = bi_dynamic_rnn(cell, self.embedded_targets_all[i], num_hidden, self.targets_all_len[i],
                                                          target_sequence_length, 'bi-lstm-targets', 'all',
                                                          dropout=True, dropout_prob=self.dropout_keep_prob)  # (?,21,600)
                pool_targets_all[i] = reduce_mean_with_len(self.LSTM_targets_all[i], self.targets_all_len[i])
                scope.reuse_variables()
# 3. Attention LAYER ######################################################################
# all targets to sentence attention
with tf.variable_scope("Attention-targets_all2sentence") as scope:
self.outputs_ss = list(range(targets_num_max)) #all the target attention for the sentence
self.outputs_ts = list(range(targets_num_max))
for i in range(targets_num_max):
#target2sentence attention
att_s_i = bilinear_attention_layer(self.LSTM_targets_all[i], pool_sen, self.targets_all_len[i], 2 * num_hidden ,l2_reg_lambda,
random_base = rand_base, layer_id = 'tar')
self.outputs_ss[i] = tf.squeeze(tf.matmul(att_s_i, self.LSTM_targets_all[i]),axis=1) #13*(?,600)
#position
target_position_i = tf.expand_dims(self.targets_all_position[i], 2) # (?,78,1)
LSTM_Hiddens_sen_p_i = tf.multiply(self.LSTM_Hiddens_sen, target_position_i)
att_s_i = bilinear_attention_layer(LSTM_Hiddens_sen_p_i, self.outputs_ss[i], self.sen_len, 2 * num_hidden ,l2_reg_lambda,
random_base = rand_base, layer_id = 'sen')
self.outputs_ts[i] = tf.squeeze(tf.matmul(att_s_i, self.LSTM_Hiddens_sen), axis=1)
scope.reuse_variables()
with tf.name_scope("targets_gather"):
            self.targets_concat = tf.concat([tf.expand_dims(t, axis=2) for t in self.outputs_ts], axis=2)  # (?,600,13)
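The attention stage above relies on `bilinear_attention_layer`, which is defined elsewhere in the project. As a rough single-sequence sketch of the idea (score each hidden state against a query through a bilinear form, softmax, take the weighted sum), with names and simplifications that are mine rather than the project's:

```python
import numpy as np

def bilinear_attention_sketch(H, q, W):
    # H: (T, D) hidden states; q: (D,) query; W: (D, D) bilinear weights.
    # The real layer also handles batching, true sequence lengths, and L2 reg.
    scores = H @ W @ q                   # (T,D) @ (D,D) @ (D,) -> (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()             # softmax over timesteps
    return weights @ H                   # attention-weighted context, shape (D,)

T, D = 5, 4
rng = np.random.default_rng(0)
H = rng.standard_normal((T, D))
q = rng.standard_normal(D)
ctx = bilinear_attention_sketch(H, q, np.eye(D))
```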
# 4. GCN LAYER ######################################################################
        # Seven stacked layers, each summing a cross-relation and a self-relation branch.
        GCN_out = self.targets_concat
        for layer in range(1, 8):
            with tf.name_scope('GCN_layer{}'.format(layer)):
                W_cross = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden], -rand_base, rand_base), name='W_cross')
                b_cross = tf.Variable(tf.random.uniform([2 * num_hidden], -rand_base, rand_base), name='b_cross')
                W_self = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden], -rand_base, rand_base), name='W_self')
                b_self = tf.Variable(tf.random.uniform([2 * num_hidden], -rand_base, rand_base), name='b_self')
                GCN_cross = WXbA_Relu(GCN_out, self.relate_cross, W_cross, b_cross)
                GCN_self = WXbA_Relu(GCN_out, self.relate_self, W_self, b_self)
                GCN_out = GCN_cross + GCN_self  # (?,600,13)
        GCN7_out = GCN_out
        # Multiply by self.target_which (one-hot over targets) to keep only the current target's column
        target_which = tf.expand_dims(self.target_which, 1)  # (?,1,13)
        self.GCN2_out = tf.multiply(GCN7_out, target_which)  # (?,600,13)*(?,1,13) = (?,600,13)
        self.targets_representation = tf.reduce_sum(self.GCN2_out, 2)  # (?,600)
with tf.name_scope("output"):
W = tf.Variable(tf.random_normal([2 * num_hidden, num_classes]))
b = tf.Variable(tf.random_normal([num_classes]))
self.scores = tf.nn.xw_plus_b(self.targets_representation, W,b, name="scores")
l2_loss += tf.nn.l2_loss(W)
l2_loss += tf.nn.l2_loss(b)
self.predictions = tf.argmax(self.scores, 1, name="predictions")
self.true_y = tf.argmax(self.input_y, 1, name="true_y")
self.softmax = tf.nn.softmax(self.scores, name="softmax")
with tf.name_scope("loss"):
self.losses = tf.nn.softmax_cross_entropy_with_logits(logits=self.scores,labels=self.input_y)
self.loss = tf.reduce_mean(self.losses, name="loss") + l2_reg_lambda * l2_loss
with tf.name_scope("accuracy"):
self.correct_pred = tf.equal(self.predictions,self.true_y)
self.accuracy = tf.reduce_mean(tf.cast(self.correct_pred, "float"),name="accuracy")
        print("LOADED CAtt_GCN_L7!")
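`WXbA_Relu` is defined elsewhere in this project. As a generic stand-in for what each branch of a GCN layer does (project feature columns, propagate along an adjacency, ReLU), here is a NumPy sketch; the operand order and shapes are assumptions for illustration, not the project's actual implementation:

```python
import numpy as np

def gcn_branch(x, adj, w, b):
    # Project node features, propagate along the adjacency, add bias, ReLU.
    h = w @ x                       # (F,F) @ (F,N) -> (F,N)
    h = h @ adj                     # mix columns along target relations
    return np.maximum(h + b[:, None], 0.0)

F, N = 4, 3                         # feature dim, number of targets (made up)
rng = np.random.default_rng(0)
x = rng.standard_normal((F, N))
adj_cross = np.ones((N, N)) - np.eye(N)   # relations to *other* targets
adj_self = np.eye(N)                      # each target to itself
w = rng.standard_normal((F, F))
b = np.zeros(F)
# One layer = cross branch + self branch, as in the model above.
out = gcn_branch(x, adj_cross, w, b) + gcn_branch(x, adj_self, w, b)
```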
class CAtt_GCN_L8(object):
def __init__(self, sequence_length, target_sequence_length,targets_num_max, num_classes, word_embedding, l2_reg_lambda=0.0,
num_hidden=100):
#tf.set_random_seed(-1)
# PLACEHOLDERS
rand_base = 0.01
self.input_x = tf.placeholder(tf.int32, [None, sequence_length], name="input_x") # X - The Data
        self.input_target = tf.placeholder(tf.int32, [None, target_sequence_length], name="input_target")  # The target
        self.input_targets_all = tf.placeholder(tf.int32, [None, targets_num_max, target_sequence_length], name="input_targets_all")  # All the targets
self.sen_len = tf.placeholder(tf.int32, None, name='sen_len')#lens of sentence
self.target_len = tf.placeholder(tf.int32, None, name='target_len')#lens of target
with tf.name_scope('targets_all_len'):
self.targets_all_len_a = tf.placeholder(tf.int32, [None,targets_num_max],name="targets_all_len")
batch_size = tf.shape(self.input_x)[0]
self.targets_all_len = []
for i in range(targets_num_max):
targets_i_len = tf.slice(self.targets_all_len_a, [0, i], [batch_size, 1])
self.targets_all_len.append(tf.squeeze(targets_i_len)) #lens of every target
        self.targets_num = tf.placeholder(tf.int32, None, name='targets_num')  # The number of targets
self.relate_cross = tf.placeholder(tf.float32, [None,targets_num_max, targets_num_max], name='relate_cross') #the relation between targets
self.relate_self = tf.placeholder(tf.float32, [None, targets_num_max, targets_num_max], name='relate_self')
        self.target_which = tf.placeholder(tf.float32, [None, targets_num_max], name='which_position')
self.target_position = tf.placeholder(tf.float32, [None, sequence_length], name='target_position')
with tf.name_scope('targets_all_position'):
self.targets_all_position_a = tf.placeholder(tf.float32, [None,targets_num_max,sequence_length],name="targets_all_position")
self.targets_all_position = []
for i in range(targets_num_max):
targets_i_len = self.targets_all_position_a[:, i,:]
self.targets_all_position.append(tf.squeeze(targets_i_len))
        self.input_y = tf.placeholder(tf.float32, [None, num_classes], name="input_y")  # Y - The Labels
self.dropout_keep_prob = tf.placeholder(tf.float32, name="dropout_keep_prob") # Dropout
l2_loss = tf.constant(0.0) # Keeping track of l2 regularization loss
# 1. EMBEDDING LAYER ################################################################
with tf.name_scope("embedding"):
self.word_embedding = tf.constant(word_embedding, name='word_embedding')
# Embedding for the context
with tf.name_scope("embedded_sen"):
self.embedded_sen = tf.nn.embedding_lookup(self.word_embedding, self.input_x)
# self.embedded_expanded = tf.expand_dims(self.embedded, -1)
self.embedded_sen = tf.cast(self.embedded_sen, tf.float32) #(?,78,300)
self.embedded_sen = tf.nn.dropout(self.embedded_sen, keep_prob=self.dropout_keep_prob)
embedding_size = word_embedding.shape[1]
print('embedding_size {}'.format(embedding_size))
num_hidden = embedding_size
# Embedding for the target
with tf.name_scope("embedding_target"):
self.embedded_target = tf.nn.embedding_lookup(self.word_embedding, self.input_target)
self.embedded_target = tf.cast(self.embedded_target, tf.float32) #(?,21,300)
self.embedded_target = tf.nn.dropout(self.embedded_target, keep_prob=self.dropout_keep_prob)
# Embedding for all targets
with tf.name_scope("embedding_targets"):
self.embedded_targets_all = list(range(targets_num_max))
for i in range(targets_num_max):
#get a target
                self.input_targets_i = self.input_targets_all[:, i, :]  # (?,13,23) --> (?,23); i indexes the i-th target
self.embedded_target_i = tf.nn.embedding_lookup(self.word_embedding, self.input_targets_i)
self.embedded_target_i = tf.cast(self.embedded_target_i, tf.float32)
self.embedded_target_i = tf.nn.dropout(self.embedded_target_i, keep_prob=self.dropout_keep_prob)
self.embedded_targets_all[i] = self.embedded_target_i #13*(?,21,300)
#2. LSTM LAYER ######################################################################
# Bi-LSTM for the context
with tf.name_scope("Bi-LSTM_sentence"):
cell = tf.nn.rnn_cell.LSTMCell
self.LSTM_Hiddens_sen = bi_dynamic_rnn(cell, self.embedded_sen, num_hidden, self.sen_len,
sequence_length, 'bi-lstm-sentence' ,'all',
dropout = True, dropout_prob= self.dropout_keep_prob) #(?,78,600)
pool_sen = reduce_mean_with_len(self.LSTM_Hiddens_sen, self.sen_len)
# Bi-LSTM for the targets
        with tf.variable_scope("Bi-LSTM_targets") as scope:
            self.LSTM_targets_all = list(range(targets_num_max))
            pool_targets_all = list(range(targets_num_max))
            for i in range(targets_num_max):
                cell = tf.nn.rnn_cell.LSTMCell
                self.LSTM_targets_all[i] = bi_dynamic_rnn(cell, self.embedded_targets_all[i], num_hidden, self.targets_all_len[i],
                                                          target_sequence_length, 'bi-lstm-targets', 'all',
                                                          dropout=True, dropout_prob=self.dropout_keep_prob)  # (?,21,600)
                pool_targets_all[i] = reduce_mean_with_len(self.LSTM_targets_all[i], self.targets_all_len[i])
                scope.reuse_variables()
# 3. Attention LAYER ######################################################################
# all targets to sentence attention
with tf.variable_scope("Attention-targets_all2sentence") as scope:
self.outputs_ss = list(range(targets_num_max)) #all the target attention for the sentence
self.outputs_ts = list(range(targets_num_max))
for i in range(targets_num_max):
#target2sentence attention
att_s_i = bilinear_attention_layer(self.LSTM_targets_all[i], pool_sen, self.targets_all_len[i], 2 * num_hidden ,l2_reg_lambda,
random_base = rand_base, layer_id = 'tar')
self.outputs_ss[i] = tf.squeeze(tf.matmul(att_s_i, self.LSTM_targets_all[i]),axis=1) #13*(?,600)
#position
target_position_i = tf.expand_dims(self.targets_all_position[i], 2) # (?,78,1)
LSTM_Hiddens_sen_p_i = tf.multiply(self.LSTM_Hiddens_sen, target_position_i)
att_s_i = bilinear_attention_layer(LSTM_Hiddens_sen_p_i, self.outputs_ss[i], self.sen_len, 2 * num_hidden ,l2_reg_lambda,
random_base = rand_base, layer_id = 'sen')
self.outputs_ts[i] = tf.squeeze(tf.matmul(att_s_i, self.LSTM_Hiddens_sen), axis=1)
scope.reuse_variables()
with tf.name_scope("targets_gather"):
            self.targets_concat = tf.concat([tf.expand_dims(t, axis=2) for t in self.outputs_ts], axis=2)  # (?,600,13)
# 4. GCN LAYER ######################################################################
        # Eight stacked layers, each summing a cross-relation and a self-relation branch.
        GCN_out = self.targets_concat
        for layer in range(1, 9):
            with tf.name_scope('GCN_layer{}'.format(layer)):
                W_cross = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden], -rand_base, rand_base), name='W_cross')
                b_cross = tf.Variable(tf.random.uniform([2 * num_hidden], -rand_base, rand_base), name='b_cross')
                W_self = tf.Variable(tf.random.uniform([2 * num_hidden, 2 * num_hidden], -rand_base, rand_base), name='W_self')
                b_self = tf.Variable(tf.random.uniform([2 * num_hidden], -rand_base, rand_base), name='b_self')
                GCN_cross = WXbA_Relu(GCN_out, self.relate_cross, W_cross, b_cross)
                GCN_self = WXbA_Relu(GCN_out, self.relate_self, W_self, b_self)
                GCN_out = GCN_cross + GCN_self  # (?,600,13)
        GCN8_out = GCN_out
        # Multiply by self.target_which (one-hot over targets) to keep only the current target's column
        target_which = tf.expand_dims(self.target_which, 1)  # (?,1,13)
        self.GCN2_out = tf.multiply(GCN8_out, target_which)  # (?,600,13)*(?,1,13) = (?,600,13)
        self.targets_representation = tf.reduce_sum(self.GCN2_out, 2)  # (?,600)
with tf.name_scope("output"):
W = tf.Variable(tf.random_normal([2 * num_hidden, num_classes]))
b = tf.Variable(tf.random_normal([num_classes]))
self.scores = tf.nn.xw_plus_b(self.targets_representation, W,b, name="scores")
l2_loss += tf.nn.l2_loss(W)
l2_loss += tf.nn.l2_loss(b)
self.predictions = tf.argmax(self.scores, 1, name="predictions")
self.true_y = tf.argmax(self.input_y, 1, name="true_y")
self.softmax = tf.nn.softmax(self.scores, name="softmax")
with tf.name_scope("loss"):
self.losses = tf.nn.softmax_cross_entropy_with_logits(logits=self.scores,labels=self.input_y)
self.loss = tf.reduce_mean(self.losses, name="loss") + l2_reg_lambda * l2_loss
with tf.name_scope("accuracy"):
self.correct_pred = tf.equal(self.predictions,self.true_y)
self.accuracy = tf.reduce_mean(tf.cast(self.correct_pred, "float"),name="accuracy")
        print("LOADED CAtt_GCN_L8!")
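`reduce_mean_with_len`, used after each Bi-LSTM, is defined elsewhere. Assuming it averages only the valid (unpadded) timesteps of each sequence, its behavior can be sketched as follows; the function name with the `_sketch` suffix and the shapes are mine:

```python
import numpy as np

def reduce_mean_with_len_sketch(outputs, lengths):
    # outputs: (batch, max_len, dim); lengths: (batch,) valid timestep counts.
    batch, max_len, dim = outputs.shape
    mask = (np.arange(max_len)[None, :] < lengths[:, None]).astype(outputs.dtype)
    summed = (outputs * mask[:, :, None]).sum(axis=1)   # zero out padding, sum
    return summed / lengths[:, None]                     # divide by true length

x = np.ones((2, 5, 3))
x[0, 2:] = 99.0                       # padding garbage beyond length 2
lens = np.array([2.0, 5.0])
pooled = reduce_mean_with_len_sketch(x, lens)
```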
# Source: 08/VMT/CodeWriter.py from the aadityarautela/nand2tetris repository (MIT license)
import Const
import Parser
import os
class CodeWriter(object):
def __init__(self, outfile):
self.outfile = open(outfile, 'w')
self.vmfile = ''
self.labelnum = 0
self.EQCNT = 0
self.LTCNT = 0
self.GTCNT = 0
def setFileName(self, filename):
self.vmfile, ext = os.path.split(filename)
def close(self):
self.outfile.close()
def writeInit(self, isdir):
self.outfile.write("// Init\n")
self.outfile.write("@256\n")
self.outfile.write("D=A\n")
self.outfile.write("@SP\n")
self.outfile.write("M=D\n")
if isdir:
self.writeCall("Sys.init", 0)
def writeArithmetic(self, command):
cmd_list = command.split()
cmd_type = cmd_list[0]
        binary = {"add": "M=M+D", "sub": "M=M-D", "and": "M=M&D", "or": "M=M|D"}
        unary = {"neg": "M=-M", "not": "M=!M"}
        if cmd_type in binary:
            # Pop y into D, pop x, write x <op> y back, bump SP.
            self.outfile.write("// " + command + "\n")
            self.outfile.write("@SP\nM=M-1\nA=M\nD=M\n")
            self.outfile.write("@SP\nM=M-1\nA=M\n" + binary[cmd_type] + "\n")
            self.outfile.write("@SP\nM=M+1\n")
        elif cmd_type in unary:
            # Operate in place on the top of the stack.
            self.outfile.write("// " + command + "\n")
            self.outfile.write("@SP\nM=M-1\nA=M\n" + unary[cmd_type] + "\n")
            self.outfile.write("@SP\nM=M+1\n")
        elif cmd_type in ("eq", "gt", "lt"):
            counts = {"eq": self.EQCNT, "gt": self.GTCNT, "lt": self.LTCNT}
            label = cmd_type.upper() + "_LABEL_" + str(counts[cmd_type])
            # Pop y and x, compute x - y, assume true (-1), and jump past the
            # false write when the comparison holds.
            self.outfile.write("// " + command + "\n")
            self.outfile.write("@SP\nM=M-1\nA=M\nD=M\n")
            self.outfile.write("@SP\nM=M-1\nA=M\nD=M-D\nM=-1\n")
            self.outfile.write("@" + label + "\n")
            self.outfile.write("D;J" + cmd_type.upper() + "\n")
            self.outfile.write("@SP\nA=M\nM=0\n")
            self.outfile.write("(" + label + ")\n")
            self.outfile.write("@SP\nM=M+1\n")
            if cmd_type == "eq":
                self.EQCNT += 1
            elif cmd_type == "gt":
                self.GTCNT += 1
            else:
                self.LTCNT += 1
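To sanity-check the straight-line sequences writeArithmetic emits, here is a tiny interpreter covering just the Hack instructions these snippets use (a sketch, not a full CPU emulator), run on the "add" sequence:

```python
def run_hack(asm, ram):
    # Interpret a straight-line Hack snippet: @symbol and dest=comp forms only,
    # no jumps. SP is mapped to RAM[0] as in the Hack memory convention.
    a = d = 0
    for line in asm.strip().splitlines():
        line = line.strip()
        if not line or line.startswith('//'):
            continue
        if line.startswith('@'):
            sym = line[1:]
            a = 0 if sym == 'SP' else int(sym)
            continue
        dest, comp = line.split('=')
        m = ram.get(a, 0)
        val = {'M': m, 'D': d, 'A': a, 'M+1': m + 1, 'M-1': m - 1,
               'M+D': m + d, 'M-D': m - d, '-M': -m, '!M': ~m,
               'M&D': m & d, 'M|D': m | d}[comp]
        if 'M' in dest: ram[a] = val   # M writes use A's pre-instruction value
        if 'D' in dest: d = val
        if 'A' in dest: a = val
    return ram

# The exact sequence writeArithmetic emits for "add", applied to a stack
# holding 7 and 5 with SP at 258.
ADD = "@SP\nM=M-1\nA=M\nD=M\n@SP\nM=M-1\nA=M\nM=M+D\n@SP\nM=M+1\n"
ram = run_hack(ADD, {0: 258, 256: 7, 257: 5})
```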
def writePushPop(self, commandtype, segment, index, staticName=None):
if commandtype == Const.C_PUSH:
self.outfile.write("// push " + segment + " " + str(index) + "\n")
            base_map = {"argument": "ARG", "local": "LCL", "this": "THIS", "that": "THAT"}
            if segment in base_map:
                # D = *(base + index); push D.
                self.outfile.write("@" + base_map[segment] + "\nD=M\n")
                self.outfile.write("@" + str(index) + "\nA=A+D\nD=M\n")
                self.outfile.write("@SP\nA=M\nM=D\n@SP\nM=M+1\n")
            elif segment == "static":
                self.outfile.write("@" + staticName + "\nD=M\n")
                self.outfile.write("@SP\nA=M\nM=D\n@SP\nM=M+1\n")
            elif segment == "constant":
                self.outfile.write("@" + str(index) + "\nD=A\n")
                self.outfile.write("@SP\nA=M\nM=D\n@SP\nM=M+1\n")
elif segment == "pointer":
if index == 0:
self.outfile.write("@THIS\n")
self.outfile.write("D=M\n")
self.outfile.write("@SP\n")
self.outfile.write("A=M\n")
self.outfile.write("M=D\n")
self.outfile.write("@SP\n")
self.outfile.write("M=M+1\n")
else:
self.outfile.write("@THAT\n")
self.outfile.write("D=M\n")
self.outfile.write("@SP\n")
self.outfile.write("A=M\n")
self.outfile.write("M=D\n")
self.outfile.write("@SP\n")
self.outfile.write("M=M+1\n")
elif segment == "temp":
tmpindex = index+5
self.outfile.write("@" + str(tmpindex) + "\n")
self.outfile.write("D=M\n")
self.outfile.write("@SP\n")
self.outfile.write("A=M\n")
self.outfile.write("M=D\n")
self.outfile.write("D=A+1\n")
self.outfile.write("@SP\n")
self.outfile.write("M=D\n")
else:
self.outfile.write("// pop " + segment + " " + str(index) + "\n")
if segment == "argument":
self.outfile.write("@ARG\n")
self.outfile.write("D=M\n")
self.outfile.write("@" + str(index) + "\n")
self.outfile.write("D=D+A\n")
self.outfile.write("@R13\n")
self.outfile.write("M=D\n")
self.outfile.write("@SP\n")
self.outfile.write("AM=M-1\n")
self.outfile.write("D=M\n")
self.outfile.write("@R13\n")
self.outfile.write("A=M\n")
self.outfile.write("M=D\n")
elif segment == "local":
self.outfile.write("@" + str(index) + "\n")
self.outfile.write("D=A\n")
self.outfile.write("@LCL\n")
self.outfile.write("D=D+M\n")
self.outfile.write("@R13\n")
self.outfile.write("M=D\n")
self.outfile.write("@SP\n")
self.outfile.write("M=M-1\n")
self.outfile.write("A=M\n")
self.outfile.write("D=M\n")
self.outfile.write("@R13\n")
self.outfile.write("A=M\n")
self.outfile.write("M=D\n")
elif segment == "static":
self.outfile.write(
"@SP\n" +
"AM=M-1\n" +
"D=M\n" +
"@" + staticName + "\n" +
"M=D\n"
)
elif segment == "this":
self.outfile.write("@THIS\n")
self.outfile.write("D=M\n")
self.outfile.write("@" + str(index) + "\n")
self.outfile.write("D=D+A\n")
self.outfile.write("@R13\n")
self.outfile.write("M=D\n")
self.outfile.write("@SP\n")
self.outfile.write("AM=M-1\n")
self.outfile.write("D=M\n")
self.outfile.write("@R13\n")
self.outfile.write("A=M\n")
self.outfile.write("M=D\n")
elif segment == "that":
self.outfile.write("@THAT\n")
self.outfile.write("D=M\n")
self.outfile.write("@" + str(index) + "\n")
self.outfile.write("D=D+A\n")
self.outfile.write("@R13\n")
self.outfile.write("M=D\n")
self.outfile.write("@SP\n")
self.outfile.write("AM=M-1\n")
self.outfile.write("D=M\n")
self.outfile.write("@R13\n")
self.outfile.write("A=M\n")
self.outfile.write("M=D\n")
elif segment == "pointer":
if index == 0:
self.outfile.write("@THIS\n")
self.outfile.write("D=A\n")
self.outfile.write("@R13\n")
self.outfile.write("M=D\n")
self.outfile.write("@SP\n")
self.outfile.write("AM=M-1\n")
self.outfile.write("D=M\n")
self.outfile.write("@R13\n")
self.outfile.write("A=M\n")
self.outfile.write("M=D\n")
else:
self.outfile.write("@THAT\n")
self.outfile.write("D=A\n")
self.outfile.write("@R13\n")
self.outfile.write("M=D\n")
self.outfile.write("@SP\n")
self.outfile.write("AM=M-1\n")
self.outfile.write("D=M\n")
self.outfile.write("@R13\n")
self.outfile.write("A=M\n")
self.outfile.write("M=D\n")
elif segment == "temp":
tmpindex = index + 5
self.outfile.write("@SP\n")
self.outfile.write("M=M-1\n")
self.outfile.write("A=M\n")
self.outfile.write("D=M\n")
self.outfile.write("@" + str(tmpindex) + "\n")
self.outfile.write("M=D\n")
def newLabel(self):
self.labelnum += 1
return "LABEL_" + str(self.labelnum)
def writeLabel(self, label):
self.outfile.write("// label " + label + "\n")
self.outfile.write("(" + label + ")\n")
def writeGoto(self, label):
self.outfile.write("// goto " + label + "\n")
self.outfile.write("@" + label + "\n")
self.outfile.write("0;JMP\n")
def writeIf(self, label):
self.outfile.write("// if-goto " + label + "\n")
self.outfile.write("@SP\n")
self.outfile.write("M=M-1\n")
self.outfile.write("A=M\n")
self.outfile.write("D=M\n")
self.outfile.write("@" + label + "\n")
self.outfile.write("D;JNE\n")
def writeCall(self, functionName, numArgs):
self.outfile.write("// call " + functionName + " " + str(numArgs) + "\n")
return_addr = self.newLabel()
# push the return address, then the caller's LCL, ARG, THIS and THAT
self.outfile.write("@" + return_addr + "\nD=A\n@SP\nA=M\nM=D\n@SP\nM=M+1\n")
self.outfile.write("@LCL\n")
self.outfile.write("D=M\n")
self.outfile.write("@SP\n")
self.outfile.write("A=M\n")
self.outfile.write("M=D\n")
self.outfile.write("@SP\n")
self.outfile.write("M=M+1\n")
self.outfile.write("@ARG\n")
self.outfile.write("D=M\n")
self.outfile.write("@SP\n")
self.outfile.write("A=M\n")
self.outfile.write("M=D\n")
self.outfile.write("@SP\n")
self.outfile.write("M=M+1\n")
self.outfile.write("@THIS\n")
self.outfile.write("D=M\n")
self.outfile.write("@SP\n")
self.outfile.write("A=M\n")
self.outfile.write("M=D\n")
self.outfile.write("@SP\n")
self.outfile.write("M=M+1\n")
self.outfile.write("@THAT\n")
self.outfile.write("D=M\n")
self.outfile.write("@SP\n")
self.outfile.write("A=M\n")
self.outfile.write("M=D\n")
self.outfile.write("@SP\n")
self.outfile.write("M=M+1\n")
self.outfile.write(
"@SP\n" +
"D=M\n" +
"@5\n" +
"D=D-A\n" +
"@" + str(numArgs) + "\n" +
"D=D-A\n" +
"@ARG\n" +
"M=D\n" +
"@SP\n" +
"D=M\n" +
"@LCL\n" +
"M=D\n" +
"@" + str(functionName) + "\n" +
"0;JMP\n" +
"(" + str(return_addr) + ")\n"
)
def writeReturn(self):
self.outfile.write("// return\n")
# R11 = FRAME (= LCL); R12 = return address (= *(FRAME - 5))
self.outfile.write(
"@LCL\n" +
"D=M\n" +
"@R11\n" +
"M=D\n" +
"@5\n" +
"A=D-A\n" +
"D=M\n" +
"@R12\n" +
"M=D\n"
)
# *ARG = pop(): place the return value where the caller expects the stack top
self.outfile.write(
"@ARG\n" +
"D=M\n" +
"@0\n" +
"D=D+A\n" +
"@R13\n" +
"M=D\n" +
"@SP\n" +
"AM=M-1\n" +
"D=M\n" +
"@R13\n" +
"A=M\n" +
"M=D\n"
)
# SP = ARG + 1
self.outfile.write(
"@ARG\n" +
"D=M\n" +
"@SP\n" +
"M=D+1\n"
)
# restore THAT, THIS, ARG and LCL by walking the frame pointer in R11 down
self.outfile.write(
"@R11\n" +
"D=M-1\n" +
"AM=D\n" +
"D=M\n" +
"@THAT\n" +
"M=D\n" +
"@R11\n" +
"D=M-1\n" +
"AM=D\n" +
"D=M\n" +
"@THIS\n" +
"M=D\n" +
"@R11\n" +
"D=M-1\n" +
"AM=D\n" +
"D=M\n" +
"@ARG\n" +
"M=D\n" +
"@R11\n" +
"D=M-1\n" +
"AM=D\n" +
"D=M\n" +
"@LCL\n" +
"M=D\n"
)
# jump to the saved return address
self.outfile.write(
"@R12\n" +
"A=M\n" +
"0;JMP\n"
)
def writeFunction(self, functionName, numLocals):
self.outfile.write("// function " + functionName + " " + str(numLocals) + "\n")
self.outfile.write("(" + functionName + ")\n")
for i in range(numLocals):
self.writePushPop(Const.C_PUSH, "constant", 0)
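The translator above always compiles `pop segment i` the same way: compute the target address into R13, pop the value into D, then store through R13. As a sketch of why that works (illustrative only, not part of the translator; the micro-interpreter supports just the handful of instructions the emitted sequence uses), the snippet below simulates the Hack RAM effects of the code emitted for `pop local 2`:

```python
# Illustrative micro-interpreter for a subset of Hack assembly.
def run_hack(instructions, ram):
    a = d = 0  # the Hack A and D registers
    for ins in instructions:
        if ins.startswith("@"):
            a = int(ins[1:])        # load a constant/address into A
        elif ins == "D=A":
            d = a
        elif ins == "D=M":
            d = ram[a]
        elif ins == "D=D+M":
            d = d + ram[a]
        elif ins == "A=M":
            a = ram[a]              # indirect: follow the pointer in RAM[A]
        elif ins == "M=M-1":
            ram[a] -= 1
        elif ins == "M=D":
            ram[a] = d
        else:
            raise ValueError(ins)

# the sequence writePushPop emits for "pop local 2" (@SP=0, @LCL=1, @R13=13)
code = ["@2", "D=A", "@1", "D=D+M", "@13", "M=D",
        "@0", "M=M-1", "A=M", "D=M", "@13", "A=M", "M=D"]
ram = {0: 257, 1: 300, 13: 0, 256: 42, 302: 0}  # SP=257, LCL=300, stack top = 42
run_hack(code, ram)  # now ram[302] == 42 (LCL+2) and ram[0] == 256 (SP decremented)
```

R13 is needed because the popped value must end up in D, leaving no register free to hold the destination address; the same scratch-register pattern appears in every pop branch above.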
# Chapter 04/ch41l.py (bpbpublications/TEST-YOUR-SKILLS-IN-PYTHON-LANGUAGE, MIT license)
print(15| 56*4+3-20)
#207
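The one-liner above is a precedence puzzle: the bitwise `|` binds more loosely than `*`, `+` and `-`, so the arithmetic is evaluated first. A small illustrative check (not part of the original file):

```python
# | has lower precedence than the arithmetic operators, so the expression
# parses as 15 | ((56*4 + 3) - 20).
arithmetic = 56*4 + 3 - 20   # 224 + 3 - 20 = 207
value = 15 | arithmetic      # 0b00001111 | 0b11001111 = 0b11001111 = 207
```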
# docs/context_processors.py (sreekanth1990/djangoproject.com, BSD-3-Clause license)
from docs.models import DocumentRelease
def docs_version(request):
return {'DOCS_VERSION': DocumentRelease.objects.current_version()}
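A context processor like `docs_version` only runs once it is listed under the template engine's `context_processors` option in settings. A sketch of that wiring is below; the dotted path `docs.context_processors.docs_version` comes from the file above, the rest are stock Django defaults, and the real project's settings may differ:

```python
# settings.py (illustrative fragment, not the project's actual settings)
TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "APP_DIRS": True,
        "OPTIONS": {
            "context_processors": [
                "django.template.context_processors.request",
                "docs.context_processors.docs_version",  # exposes DOCS_VERSION
            ],
        },
    },
]
```

With this in place, every template rendered with a RequestContext can reference `{{ DOCS_VERSION }}`.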
# core/code.py (phofegger/kara, MIT license)
sor_2d_hom = '''
# line 13 "sor_poisson_2d_hom.py"
double tmp, diff, err=1.0, sum, dx2 = double(_dx2);
int t=0, i, j, k;
for (t=0; t<int(nmax); t++) {
err = 0.0;
sum = 0.0;
i=0, j=0, k=0; // left front bottom corner neumann boundary conditions with 1st order approximation
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*u(i+1,j,k) + syy*u(i,j+1,k) + I(i,j,k)*dx2)/(sxx + syy);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
i=0, k=0; // left bottom edge
for (j=1; j<ny-1; j++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*u(i+1,j,k) + syy*(u(i,j+1,k) + u(i,j-1,k)) + I(i,j,k)*dx2)/(sxx + 2*syy);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
i=0, j=ny-1, k=0; // left back bottom corner
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*u(i+1,j,k) + syy*u(i,j-1,k) + I(i,j,k)*dx2)/(sxx + syy);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
j=0, k=0; // front bottom edge
for (i=1; i<nx-1; i++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*(u(i+1,j,k) + u(i-1,j,k)) + syy*u(i,j+1,k) + I(i,j,k)*dx2)/(2*sxx + syy);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
k=0; // bottom face
for (i=1; i<nx-1; i++) {
for (j=1; j<ny-1; j++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*(u(i+1,j,k) + u(i-1,j,k)) + syy*(u(i,j+1,k) + u(i,j-1,k)) + I(i,j,k)*dx2)/(2*sxx + 2*syy);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
j=ny-1, k=0; // back bottom edge
for (i=1; i<nx-1; i++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*(u(i+1,j,k) + u(i-1,j,k)) + syy*u(i,j-1,k) + I(i,j,k)*dx2)/(2*sxx + syy);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
i=nx-1, j=0, k=0; // bottom front right corner
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(syy*u(i,j+1,k) + sxx*u(i-1,j,k) + I(i,j,k)*dx2)/(syy + sxx);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
i=nx-1, k=0; // bottom right edge
for (j=1; j<ny-1; j++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(syy*(u(i,j+1,k) + u(i,j-1,k)) + sxx*u(i-1,j,k) + I(i,j,k)*dx2)/(2*syy + sxx);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
i=nx-1, j=ny-1, k=0; // bottom back right corner
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*u(i-1,j,k) + syy*u(i,j-1,k) + I(i,j,k)*dx2)/(sxx + syy);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
if (fabs(sum) != 0.) {
err = sqrt(err/sum);
if (err<double(tol)) break;
}
}
if (err>double(tol)) return_val=double(err);
else return_val = double(t);
'''
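These strings are C kernels compiled at run time (the blitz-style `u(i,j,k)` indexing and the `# line` markers suggest `scipy.weave.inline`). As a readable cross-check of what the bulk update in `sor_2d_hom` computes, here is a plain-Python/NumPy sketch of the same interior SOR sweep; it uses Dirichlet zero boundaries for brevity (the kernel above additionally implements Neumann boundary stencils), and the function name and defaults here are assumed:

```python
import numpy as np

def sor_2d(I, sxx=1.0, syy=1.0, w=1.8, dx2=1.0, tol=1e-12, nmax=10000):
    """SOR iteration for sxx*u_xx + syy*u_yy = -I with u = 0 on the boundary."""
    u = np.zeros_like(I, dtype=float)
    nx, ny = I.shape
    for t in range(nmax):
        err = 0.0   # squared norm of the update, as in the C kernel
        ssum = 0.0  # squared norm of the previous iterate
        for i in range(1, nx - 1):
            for j in range(1, ny - 1):
                tmp = u[i, j]
                u[i, j] = (1 - w) * u[i, j] + w * (
                    sxx * (u[i + 1, j] + u[i - 1, j])
                    + syy * (u[i, j + 1] + u[i, j - 1])
                    + I[i, j] * dx2
                ) / (2 * sxx + 2 * syy)
                d = u[i, j] - tmp
                err += d * d
                ssum += tmp * tmp
        if ssum != 0.0 and (err / ssum) ** 0.5 < tol:
            return u, t
    return u, nmax
```

The update, the over-relaxation factor `w`, and the relative-update stopping rule mirror the bulk branch of the kernel line for line; only the boundary handling is simplified.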
sor_2d_hom_xy = '''
# line 13 "sor_poisson_2d_hom_xy.py"
double tmp, diff, err=1.0, sum, dx2 = double(_dx2);
int t=0, i, j, k;
for (t=0; t<int(nmax); t++) {
err = 0.0;
sum = 0.0;
i=0, j=0, k=0; // left front bottom corner neumann boundary conditions with 1st order approximation
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+sxx*u(i+1,j,k)
+syy*u(i,j+1,k)
+0.5*sxy*(u(i,j+1,k)+u(i+1,j+1,k)-u(i+1,j,k))
-0.5*sxy*(u(i,j+1,k)-u(i+1,j+1,k)-u(i+1,j,k))
+I(i,j,k)*dx2)/(
+sxx + sxy
+syy);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
i=0, k=0; // left bottom edge
for (j=1; j<ny-1; j++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+sxx*u(i+1,j,k)
+syy*u(i,j+1,k)
+syy*u(i,j-1,k)
+0.25*sxy*(u(i,j+1,k)+u(i+1,j+1,k)-u(i,j-1,k)-u(i+1,j-1,k))
-0.5*sxy*(u(i,j+1,k)-u(i+1,j+1,k)-u(i+1,j,k))
-0.5*sxy*(u(i+1,j-1,k)+u(i+1,j,k)-u(i,j-1,k))
+I(i,j,k)*dx2)/(
+sxx
+syy
+syy);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
i=0, j=ny-1, k=0; // left back bottom corner
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+sxx*u(i+1,j,k)
+syy*u(i,j-1,k)
+0.5*sxy*(u(i+1,j,k)-u(i,j-1,k)-u(i+1,j-1,k))
-0.5*sxy*(u(i+1,j-1,k)+u(i+1,j,k)-u(i,j-1,k))
+I(i,j,k)*dx2)/(
+sxx - sxy
+syy);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
j=0, k=0; // front bottom edge
for (i=1; i<nx-1; i++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+sxx*u(i+1,j,k)
+syy*u(i,j+1,k)
+sxx*u(i-1,j,k)
+0.5*sxy*(u(i,j+1,k)+u(i+1,j+1,k)-u(i+1,j,k))
-0.25*sxy*(u(i-1,j+1,k)+u(i-1,j,k)-u(i+1,j+1,k)-u(i+1,j,k))
+0.5*sxy*(u(i-1,j,k)-u(i-1,j+1,k)-u(i,j+1,k))
+I(i,j,k)*dx2)/(
2*sxx
+syy);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
k=0; // bottom face
for (i=1; i<nx-1; i++) {
for (j=1; j<ny-1; j++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+sxx*(u(i+1,j,k) + u(i-1,j,k))
+syy*(u(i,j+1,k) + u(i,j-1,k))
+0.5*sxy*(u(i+1,j+1,k) + u(i-1,j-1,k) -u(i+1,j-1,k) - u(i-1,j+1,k))
+I(i,j,k)*dx2)/(
2*sxx + 2*syy);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
j=ny-1, k=0; // back bottom edge
for (i=1; i<nx-1; i++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+sxx*u(i+1,j,k)
+sxx*u(i-1,j,k)
+syy*u(i,j-1,k)
+0.5*sxy*(u(i+1,j,k)-u(i,j-1,k)-u(i+1,j-1,k))
-0.5*sxy*(u(i,j-1,k)-u(i-1,j-1,k)-u(i-1,j,k))
-0.25*sxy*(u(i+1,j-1,k)+u(i+1,j,k)-u(i-1,j-1,k)-u(i-1,j,k))
+I(i,j,k)*dx2)/(
2*sxx + syy);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
i=nx-1, j=0, k=0; // bottom front right corner
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+syy*u(i,j+1,k)
+sxx*u(i-1,j,k)
-0.5*sxy*(u(i-1,j+1,k)+u(i-1,j,k)-u(i,j+1,k))
+0.5*sxy*(u(i-1,j,k)-u(i-1,j+1,k)-u(i,j+1,k))
+I(i,j,k)*dx2)/(
syy - sxy + sxx);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
i=nx-1, k=0; // bottom right edge
for (j=1; j<ny-1; j++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+syy*u(i,j+1,k)
+sxx*u(i-1,j,k)
+syy*u(i,j-1,k)
-0.5*sxy*(u(i-1,j+1,k)+u(i-1,j,k)-u(i,j+1,k))
+0.25*sxy*(u(i-1,j-1,k)+u(i,j-1,k)-u(i-1,j+1,k)-u(i,j+1,k))
-0.5*sxy*(u(i,j-1,k)-u(i-1,j-1,k)-u(i-1,j,k))
+I(i,j,k)*dx2)/(
2*syy + sxx);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
i=nx-1, j=ny-1, k=0; // bottom back right corner
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+sxx*u(i-1,j,k)
+syy*u(i,j-1,k)
+0.5*sxy*(u(i-1,j-1,k)+u(i,j-1,k)-u(i-1,j,k))
-0.5*sxy*(u(i,j-1,k)-u(i-1,j-1,k)-u(i-1,j,k))
+I(i,j,k)*dx2)/(
sxx + sxy + syy);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
if (fabs(sum) != 0.) {
err = sqrt(err/sum);
if (err<double(tol)) break;
}
}
if (err>double(tol)) return_val=double(err);
else return_val = double(t);
'''
sor_2d_inh_xy = '''
# line 13 "sor_poisson_2d_inh_xy.py"
double tmp, diff, err=1.0, sum, dx2 = double(_dx2);
int t=0, i, j, k;
for (t=0; t<int(nmax); t++) {
err = 0.0;
sum = 0.0;
i=0, j=0, k=0; // left front bottom corner neumann boundary conditions with 1st order approximation
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.5*(sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.5*(syy(i,j+1,k)+syy(i+1,j+1,k))*u(i,j+1,k)
+0.25*(sxy(i+1,j+1,k)+sxy(i+1,j,k))*(u(i,j+1,k)+u(i+1,j+1,k)-u(i+1,j,k))
-0.25*(sxy(i,j+1,k)+sxy(i+1,j+1,k))*(u(i,j+1,k)-u(i+1,j+1,k)-u(i+1,j,k))
+I(i,j,k)*dx2)/(
+0.5*(sxx(i+1,j+1,k)+sxx(i+1,j,k)) + 0.25*(sxy(i+1,j+1,k)+sxy(i+1,j,k))
+0.5*(syy(i,j+1,k)+syy(i+1,j+1,k)) + 0.25*(sxy(i,j+1,k)+sxy(i+1,j+1,k)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
i=0, k=0; // left bottom edge
for (j=1; j<ny-1; j++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.5*(sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.5*(syy(i,j+1,k)+syy(i+1,j+1,k))*u(i,j+1,k)
+0.5*(syy(i,j,k)+syy(i+1,j,k))*u(i,j-1,k)
+0.125*(sxy(i+1,j+1,k)+sxy(i+1,j,k))*(u(i,j+1,k)+u(i+1,j+1,k)-u(i,j-1,k)-u(i+1,j-1,k))
-0.25*(sxy(i,j+1,k)+sxy(i+1,j+1,k))*(u(i,j+1,k)-u(i+1,j+1,k)-u(i+1,j,k))
-0.25*(sxy(i,j,k)+sxy(i+1,j,k))*(u(i+1,j-1,k)+u(i+1,j,k)-u(i,j-1,k))
+I(i,j,k)*dx2)/(
+0.5*(sxx(i+1,j+1,k)+sxx(i+1,j,k))
+0.5*(syy(i,j+1,k)+syy(i+1,j+1,k)) + 0.25*(sxy(i,j+1,k)+sxy(i+1,j+1,k))
+0.5*(syy(i,j,k)+syy(i+1,j,k)) - 0.25*(sxy(i,j,k)+sxy(i+1,j,k)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
i=0, j=ny-1, k=0; // left back bottom corner
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.5*(sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.5*(syy(i,j,k)+syy(i+1,j,k))*u(i,j-1,k)
+0.25*(sxy(i+1,j+1,k)+sxy(i+1,j,k))*(u(i+1,j,k)-u(i,j-1,k)-u(i+1,j-1,k))
-0.25*(sxy(i,j,k)+sxy(i+1,j,k))*(u(i+1,j-1,k)+u(i+1,j,k)-u(i,j-1,k))
+I(i,j,k)*dx2)/(
+0.5*(sxx(i+1,j+1,k)+sxx(i+1,j,k)) - 0.25*(sxy(i+1,j+1,k)+sxy(i+1,j,k))
+0.5*(syy(i,j,k)+syy(i+1,j,k)) - 0.25*(sxy(i,j,k)+sxy(i+1,j,k)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
j=0, k=0; // front bottom edge
for (i=1; i<nx-1; i++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.5*(sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.5*(syy(i,j+1,k)+syy(i+1,j+1,k))*u(i,j+1,k)
+0.5*(sxx(i,j+1,k)+sxx(i,j,k))*u(i-1,j,k)
+0.25*(sxy(i+1,j+1,k)+sxy(i+1,j,k))*(u(i,j+1,k)+u(i+1,j+1,k)-u(i+1,j,k))
-0.125*(sxy(i,j+1,k)+sxy(i+1,j+1,k))*(u(i-1,j+1,k)+u(i-1,j,k)-u(i+1,j+1,k)-u(i+1,j,k))
+0.25*(sxy(i,j+1,k)+sxy(i,j,k))*(u(i-1,j,k)-u(i-1,j+1,k)-u(i,j+1,k))
+I(i,j,k)*dx2)/(
+0.5*(sxx(i+1,j+1,k)+sxx(i+1,j,k)) + 0.25*(sxy(i+1,j+1,k)+sxy(i+1,j,k))
+0.5*(syy(i,j+1,k)+syy(i+1,j+1,k))
+0.5*(sxx(i,j+1,k)+sxx(i,j,k)) - 0.25*(sxy(i,j+1,k)+sxy(i,j,k)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
k=0; // bottom face
for (i=1; i<nx-1; i++) {
for (j=1; j<ny-1; j++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.5*(sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.5*(syy(i,j+1,k)+syy(i+1,j+1,k))*u(i,j+1,k)
+0.5*(sxx(i,j+1,k)+sxx(i,j,k))*u(i-1,j,k)
+0.5*(syy(i,j,k)+syy(i+1,j,k))*u(i,j-1,k)
+0.125*(sxy(i+1,j+1,k)+sxy(i+1,j,k))*(u(i,j+1,k)+u(i+1,j+1,k)-u(i,j-1,k)-u(i+1,j-1,k))
-0.125*(sxy(i,j+1,k)+sxy(i+1,j+1,k))*(u(i-1,j+1,k)+u(i-1,j,k)-u(i+1,j+1,k)-u(i+1,j,k))
+0.125*(sxy(i,j+1,k)+sxy(i,j,k))*(u(i-1,j-1,k)+u(i,j-1,k)-u(i-1,j+1,k)-u(i,j+1,k))
-0.125*(sxy(i,j,k)+sxy(i+1,j,k))*(u(i+1,j-1,k)+u(i+1,j,k)-u(i-1,j-1,k)-u(i-1,j,k))
+I(i,j,k)*dx2)/(
+0.5*(sxx(i+1,j+1,k)+sxx(i+1,j,k))
+0.5*(syy(i,j+1,k)+syy(i+1,j+1,k))
+0.5*(sxx(i,j+1,k)+sxx(i,j,k))
+0.5*(syy(i,j,k)+syy(i+1,j,k)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
}
j=ny-1, k=0; // back bottom edge
for (i=1; i<nx-1; i++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.5*(sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.5*(sxx(i,j+1,k)+sxx(i,j,k))*u(i-1,j,k)
+0.5*(syy(i,j,k)+syy(i+1,j,k))*u(i,j-1,k)
+0.25*(sxy(i+1,j+1,k)+sxy(i+1,j,k))*(u(i+1,j,k)-u(i,j-1,k)-u(i+1,j-1,k))
-0.25*(sxy(i,j+1,k)+sxy(i,j,k))*(u(i,j-1,k)-u(i-1,j-1,k)-u(i-1,j,k))
-0.125*(sxy(i,j,k)+sxy(i+1,j,k))*(u(i+1,j-1,k)+u(i+1,j,k)-u(i-1,j-1,k)-u(i-1,j,k))
+I(i,j,k)*dx2)/(
+0.5*(sxx(i+1,j+1,k)+sxx(i+1,j,k)) - 0.25*(sxy(i+1,j+1,k)+sxy(i+1,j,k))
+0.5*(sxx(i,j+1,k)+sxx(i,j,k)) + 0.25*(sxy(i,j+1,k)+sxy(i,j,k))
+0.5*(syy(i,j,k)+syy(i+1,j,k)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
i=nx-1, j=0, k=0; // bottom front right corner
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.5*(syy(i,j+1,k)+syy(i+1,j+1,k))*u(i,j+1,k)
+0.5*(sxx(i,j+1,k)+sxx(i,j,k))*u(i-1,j,k)
-0.25*(sxy(i,j+1,k)+sxy(i+1,j+1,k))*(u(i-1,j+1,k)+u(i-1,j,k)-u(i,j+1,k))
+0.25*(sxy(i,j+1,k)+sxy(i,j,k))*(u(i-1,j,k)-u(i-1,j+1,k)-u(i,j+1,k))
+I(i,j,k)*dx2)/(
+0.5*(syy(i,j+1,k)+syy(i+1,j+1,k)) - 0.25*(sxy(i,j+1,k)+sxy(i+1,j+1,k))
+0.5*(sxx(i,j+1,k)+sxx(i,j,k)) - 0.25*(sxy(i,j+1,k)+sxy(i,j,k)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
i=nx-1, k=0; // bottom right edge
for (j=1; j<ny-1; j++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.5*(syy(i,j+1,k)+syy(i+1,j+1,k))*u(i,j+1,k)
+0.5*(sxx(i,j+1,k)+sxx(i,j,k))*u(i-1,j,k)
+0.5*(syy(i,j,k)+syy(i+1,j,k))*u(i,j-1,k)
-0.25*(sxy(i,j+1,k)+sxy(i+1,j+1,k))*(u(i-1,j+1,k)+u(i-1,j,k)-u(i,j+1,k))
+0.125*(sxy(i,j+1,k)+sxy(i,j,k))*(u(i-1,j-1,k)+u(i,j-1,k)-u(i-1,j+1,k)-u(i,j+1,k))
-0.25*(sxy(i,j,k)+sxy(i+1,j,k))*(u(i,j-1,k)-u(i-1,j-1,k)-u(i-1,j,k))
+I(i,j,k)*dx2)/(
+0.5*(syy(i,j+1,k)+syy(i+1,j+1,k)) - 0.25*(sxy(i,j+1,k)+sxy(i+1,j+1,k))
+0.5*(sxx(i,j+1,k)+sxx(i,j,k))
+0.5*(syy(i,j,k)+syy(i+1,j,k)) + 0.25*(sxy(i,j,k)+sxy(i+1,j,k)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
i=nx-1, j=ny-1, k=0; // bottom back right corner
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.5*(sxx(i,j+1,k)+sxx(i,j,k))*u(i-1,j,k)
+0.5*(syy(i,j,k)+syy(i+1,j,k))*u(i,j-1,k)
+0.25*(sxy(i,j+1,k)+sxy(i,j,k))*(u(i-1,j-1,k)+u(i,j-1,k)-u(i-1,j,k))
-0.25*(sxy(i,j,k)+sxy(i+1,j,k))*(u(i,j-1,k)-u(i-1,j-1,k)-u(i-1,j,k))
+I(i,j,k)*dx2)/(
+0.5*(sxx(i,j+1,k)+sxx(i,j,k)) + 0.25*(sxy(i,j+1,k)+sxy(i,j,k))
+0.5*(syy(i,j,k)+syy(i+1,j,k)) + 0.25*(sxy(i,j,k)+sxy(i+1,j,k)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
if (fabs(sum) != 0.) {
err = sqrt(err/sum);
if (err<double(tol)) break;
}
}
if (err>double(tol)) return_val=double(err);
else return_val = double(t);
'''
sor_3d_hom = '''
# line 13 "sor_poisson_3d_hom.py"
double tmp, diff, err=1.0, sum, dx2 = double(_dx2);
int t=0, i, j, k;
for (t=0; t<int(nmax); t++) {
err = 0.0;
sum = 0.0;
i=0, j=0, k=0; // left front bottom corner neumann boundary conditions with 1st order approximation
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*u(i+1,j,k) + syy*u(i,j+1,k) + szz*u(i,j,k+1) + I(i,j,k)*dx2)/(sxx + syy + szz);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
i=0, k=0; // left bottom edge
for (j=1; j<ny-1; j++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*u(i+1,j,k) + syy*(u(i,j+1,k) + u(i,j-1,k)) + szz*u(i,j,k+1) + I(i,j,k)*dx2)/(sxx + 2*syy + szz);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
i=0, j=ny-1, k=0; // left back bottom corner
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*u(i+1,j,k) + syy*u(i,j-1,k) + szz*(u(i,j,k+1)) + I(i,j,k)*dx2)/(
sxx + syy + szz);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
i=0, j=0; // front left edge
for (k=1; k<nz-1; k++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*u(i+1,j,k) + syy*u(i,j+1,k) + szz*(u(i,j,k-1) + u(i,j,k+1)) + I(i,j,k)*dx2)/(
sxx + syy + 2*szz);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
i=0; // left face
for (j=1; j<ny-1; j++) {
for (k=1; k<nz-1; k++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*u(i+1,j,k) + syy*(u(i,j+1,k) + u(i,j-1,k)) + szz*(u(i,j,k-1) + u(i,j,k+1)) + I(i,j,k)*dx2)/(
sxx + 2*syy + 2*szz);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
i=0, j=ny-1; // left back edge
for (k=1; k<nz-1; k++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*u(i+1,j,k) + syy*u(i,j-1,k) + szz*(u(i,j,k-1) + u(i,j,k+1)) + I(i,j,k)*dx2)/(
sxx + syy + 2*szz);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
i=0, j=0, k=nz-1; // left front top corner
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*u(i+1,j,k) + syy*u(i,j+1,k) + szz*u(i,j,k-1) + I(i,j,k)*dx2)/(
sxx + syy + szz);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
i=0, k=nz-1; // left top edge
for (j=1; j<ny-1; j++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*u(i+1,j,k) + syy*(u(i,j+1,k) + u(i,j-1,k)) + szz*u(i,j,k-1) + I(i,j,k)*dx2)/(
sxx + 2*syy + szz);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
i=0, j=ny-1, k=nz-1; // left back top corner
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*u(i+1,j,k) + syy*u(i,j-1,k) + szz*u(i,j,k-1) + I(i,j,k)*dx2)/(
sxx + syy + szz);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
j=0, k=0; // front bottom edge
for (i=1; i<nx-1; i++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*(u(i+1,j,k) + u(i-1,j,k)) + syy*u(i,j+1,k) + szz*u(i,j,k+1) + I(i,j,k)*dx2)/(
2*sxx + syy + szz);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
j=0; // front face
for (i=1; i<nx-1; i++) {
for (k=1; k<nz-1; k++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*(u(i+1,j,k) + u(i-1,j,k)) + syy*u(i,j+1,k) + szz*(u(i,j,k-1) + u(i,j,k+1)) + I(i,j,k)*dx2)/(
2*sxx + syy + 2*szz);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
j=0,k=nz-1; // front top edge
for (i=1; i<nx-1; i++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*(u(i+1,j,k) + u(i-1,j,k)) + syy*u(i,j+1,k) + szz*u(i,j,k-1) + I(i,j,k)*dx2)/(
2*sxx + syy + szz);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
k=0; // bottom face
for (i=1; i<nx-1; i++) {
for (j=1; j<ny-1; j++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*(u(i+1,j,k) + u(i-1,j,k)) + syy*(u(i,j+1,k) + u(i,j-1,k)) + szz*u(i,j,k+1) + I(i,j,k)*dx2)/(
2*sxx + 2*syy +szz);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
j=ny-1, k=0; // back bottom edge
for (i=1; i<nx-1; i++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*(u(i+1,j,k) + u(i-1,j,k)) + syy*u(i,j-1,k) + szz*u(i,j,k+1) + I(i,j,k)*dx2)/(
2*sxx + syy + szz);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
// bulk
for (i=1; i<nx-1; i++) {
for (j=1; j<ny-1; j++) {
for (k=1; k<nz-1; k++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*(u(i+1,j,k) + u(i-1,j,k)) + syy*(u(i,j+1,k) + u(i,j-1,k)) + szz*(u(i,j,k-1) + u(i,j,k+1)) + I(i,j,k)*dx2)/(
2*sxx + 2*syy + 2*szz);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
}
k=nz-1; // top face
for (i=1; i<nx-1; i++) {
for (j=1; j<ny-1; j++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*(u(i+1,j,k) + u(i-1,j,k)) + syy*(u(i,j+1,k) + u(i,j-1,k)) + szz*u(i,j,k-1) + I(i,j,k)*dx2)/(
2*sxx + 2*syy + szz);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
j=ny-1; // back face
for (i=1; i<nx-1; i++) {
for (k=1; k<nz-1; k++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*(u(i+1,j,k) + u(i-1,j,k)) + syy*u(i,j-1,k) + szz*(u(i,j,k-1) + u(i,j,k+1)) + I(i,j,k)*dx2)/(
2*sxx + syy + 2*szz);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
j=ny-1, k=nz-1; // back top edge
for (i=1; i<nx-1; i++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*(u(i+1,j,k) + u(i-1,j,k)) + syy*u(i,j-1,k) + szz*u(i,j,k-1) + I(i,j,k)*dx2)/(
2*sxx + syy + szz);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
i=nx-1, j=0, k=0; // bottom front right corner
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(syy*u(i,j+1,k) + sxx*u(i-1,j,k) + szz*u(i,j,k+1) + I(i,j,k)*dx2)/(
syy + sxx + szz);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
i=nx-1, k=0; // bottom right edge
for (j=1; j<ny-1; j++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(syy*(u(i,j+1,k) + u(i,j-1,k)) + sxx*u(i-1,j,k) + szz*u(i,j,k+1) + I(i,j,k)*dx2)/(
2*syy + sxx + szz);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
i=nx-1, j=ny-1, k=0; // bottom back right corner
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*u(i-1,j,k) + syy*u(i,j-1,k) + szz*u(i,j,k+1) + I(i,j,k)*dx2)/(
sxx + syy + szz);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
i=nx-1, j=0; // front right edge
for (k=1; k<nz-1; k++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(syy*u(i,j+1,k) + sxx*u(i-1,j,k) + szz*(u(i,j,k-1) + u(i,j,k+1)) + I(i,j,k)*dx2)/(
syy + sxx + 2*szz);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
i=nx-1, j=0, k=nz-1; // front right top corner
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(syy*u(i,j+1,k) + sxx*u(i-1,j,k) + szz*u(i,j,k-1) + I(i,j,k)*dx2)/(
syy + sxx + szz);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
i=nx-1; // right face
for (j=1; j<ny-1; j++) {
for (k=1; k<nz-1; k++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(syy*(u(i,j+1,k) + u(i,j-1,k)) + sxx*u(i-1,j,k) + szz*(u(i,j,k-1) + u(i,j,k+1)) + I(i,j,k)*dx2)/(
2*syy + sxx + 2*szz);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
i=nx-1, j=ny-1; // back right edge
for (k=1; k<nz-1; k++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*u(i-1,j,k) + syy*u(i,j-1,k) + szz*(u(i,j,k-1) + u(i,j,k+1)) + I(i,j,k)*dx2)/(
sxx + syy + 2*szz);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
i=nx-1, k=nz-1; // right top edge
for (j=1; j<ny-1; j++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(syy*(u(i,j+1,k) + u(i,j-1,k)) + sxx*u(i-1,j,k) + szz*u(i,j,k-1) + I(i,j,k)*dx2)/(
2*syy + sxx + szz);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
i=nx-1, j=ny-1, k=nz-1; // top right back corner
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(sxx*u(i-1,j,k) + syy*u(i,j-1,k) + szz*u(i,j,k-1) + I(i,j,k)*dx2)/(
sxx + syy + szz);
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
if (fabs(sum) != 0.) {
err = sqrt(err/sum);
if (err<double(tol)) break;
}
}
if (err>double(tol)) return_val=double(err);
else return_val = double(t);
'''
sor_3d_inh = '''
# line 14 "sor_poisson_3d_inh.py"
double tmp, diff, err=1.0, sum, dx2 = double(_dx2);
int t=0, i, j, k;
for (t=0; t<int(nmax); t++) {
err = 0.0;
sum = 0.0;
// x=0, y=0, z=0 corner neumann boundary conditions with 1st order approximation
tmp = u(0,0,0);
u(0,0,0) = (1-w)*u(0,0,0) + w*(+0.25*(sxx(1,1,1)+sxx(1,0,1)+sxx(1,1,0)+sxx(1,0,0))*u(1,0,0)
+0.25*(syy(0,1,0)+syy(1,1,0)+syy(0,1,1)+syy(1,1,1))*u(0,1,0)
+0.25*(szz(0,1,1)+szz(1,1,1)+szz(0,0,1)+szz(1,0,1))*u(0,0,1)
+I(0,0,0)*dx2)/(
+0.25*(sxx(1,1,1)+sxx(1,0,1)+sxx(1,1,0)+sxx(1,0,0))
+0.25*(syy(0,1,0)+syy(1,1,0)+syy(0,1,1)+syy(1,1,1))
+0.25*(szz(0,1,1)+szz(1,1,1)+szz(0,0,1)+szz(1,0,1)));
if (u(0,0,0) != u(0,0,0)) u(0,0,0) = 0;
else {
diff = u(0,0,0) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
// x=0, z=0 edge
for (j=1; j<ny-1; j++) {
tmp = u(0,j,0);
u(0,j,0) = (1-w)*u(0,j,0) + w*(+0.25*(sxx(1,j+1,1)+sxx(1,j,1)+sxx(1,j+1,0)+sxx(1,j,0))*u(1,j,0)
+0.25*(syy(0,j+1,0)+syy(1,j+1,0)+syy(0,j+1,1)+syy(1,j+1,1))*u(0,j+1,0)
+0.25*(syy(0,j,0)+syy(1,j,0)+syy(0,j,1)+syy(1,j,1))*u(0,j-1,0)
+0.25*(szz(0,j+1,1)+szz(1,j+1,1)+szz(0,j,1)+szz(1,j,1))*u(0,j,1)
+I(0,j,0)*dx2)/(
+0.25*(sxx(1,j+1,1)+sxx(1,j,1)+sxx(1,j+1,0)+sxx(1,j,0))
+0.25*(syy(0,j+1,0)+syy(1,j+1,0)+syy(0,j+1,1)+syy(1,j+1,1))
+0.25*(syy(0,j,0)+syy(1,j,0)+syy(0,j,1)+syy(1,j,1))
+0.25*(szz(0,j+1,1)+szz(1,j+1,1)+szz(0,j,1)+szz(1,j,1)));
if (u(0,j,0) != u(0,j,0)) u(0,j,0) = 0;
else {
diff = u(0,j,0) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
// x=0, y=ny, z=0 corner
tmp = u(0,ny-1,0);
u(0,ny-1,0) = (1-w)*u(0,ny-1,0) + w*(+0.25*(sxx(1,ny,1)+sxx(1,ny-1,1)+sxx(1,ny,0)+sxx(1,ny-1,0))*u(1,ny-1,0)
+0.25*(syy(0,ny-1,0)+syy(1,ny-1,0)+syy(0,ny-1,1)+syy(1,ny-1,1))*u(0,ny-2,0)
+0.25*(szz(0,ny,1)+szz(1,ny,1)+szz(0,ny-1,1)+szz(1,ny-1,1))*u(0,ny-1,1)
+I(0,ny-1,0)*dx2)/(
+0.25*(sxx(1,ny,1)+sxx(1,ny-1,1)+sxx(1,ny,0)+sxx(1,ny-1,0))
+0.25*(syy(0,ny-1,0)+syy(1,ny-1,0)+syy(0,ny-1,1)+syy(1,ny-1,1))
+0.25*(szz(0,ny,1)+szz(1,ny,1)+szz(0,ny-1,1)+szz(1,ny-1,1)));
if (u(0,ny-1,0) != u(0,ny-1,0)) u(0,ny-1,0) = 0;
else {
diff = u(0,ny-1,0) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
// x=0, y=0 edge
for (k=1; k<nz-1; k++) {
tmp = u(0,0,k);
u(0,0,k) = (1-w)*u(0,0,k) + w*(+0.25*(sxx(1,1,k+1)+sxx(1,0,k+1)+sxx(1,1,k)+sxx(1,0,k))*u(1,0,k)
+0.25*(syy(0,1,k)+syy(1,1,k)+syy(0,1,k+1)+syy(1,1,k+1))*u(0,1,k)
+0.25*(szz(0,1,k)+szz(1,1,k)+szz(0,0,k)+szz(1,0,k))*u(0,0,k-1)
+0.25*(szz(0,1,k+1)+szz(1,1,k+1)+szz(0,0,k+1)+szz(1,0,k+1))*u(0,0,k+1)
+I(0,0,k)*dx2)/(
+0.25*(sxx(1,1,k+1)+sxx(1,0,k+1)+sxx(1,1,k)+sxx(1,0,k))
+0.25*(syy(0,1,k)+syy(1,1,k)+syy(0,1,k+1)+syy(1,1,k+1))
+0.25*(szz(0,1,k)+szz(1,1,k)+szz(0,0,k)+szz(1,0,k))
+0.25*(szz(0,1,k+1)+szz(1,1,k+1)+szz(0,0,k+1)+szz(1,0,k+1)));
if (u(0,0,k) != u(0,0,k)) u(0,0,k) = 0;
else {
diff = u(0,0,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
// x=0 face
for (j=1; j<ny-1; j++) {
for (k=1; k<nz-1; k++) {
tmp = u(0,j,k);
u(0,j,k) = (1-w)*u(0,j,k) + w*(+0.25*(sxx(1,j+1,k+1)+sxx(1,j,k+1)+sxx(1,j+1,k)+sxx(1,j,k))*u(1,j,k)
+0.25*(syy(0,j+1,k)+syy(1,j+1,k)+syy(0,j+1,k+1)+syy(1,j+1,k+1))*u(0,j+1,k)
+0.25*(syy(0,j,k)+syy(1,j,k)+syy(0,j,k+1)+syy(1,j,k+1))*u(0,j-1,k)
+0.25*(szz(0,j+1,k)+szz(1,j+1,k)+szz(0,j,k)+szz(1,j,k))*u(0,j,k-1)
+0.25*(szz(0,j+1,k+1)+szz(1,j+1,k+1)+szz(0,j,k+1)+szz(1,j,k+1))*u(0,j,k+1)
+I(0,j,k)*dx2)/(
+0.25*(sxx(1,j+1,k+1)+sxx(1,j,k+1)+sxx(1,j+1,k)+sxx(1,j,k))
+0.25*(syy(0,j+1,k)+syy(1,j+1,k)+syy(0,j+1,k+1)+syy(1,j+1,k+1))
+0.25*(syy(0,j,k)+syy(1,j,k)+syy(0,j,k+1)+syy(1,j,k+1))
+0.25*(szz(0,j+1,k)+szz(1,j+1,k)+szz(0,j,k)+szz(1,j,k))
+0.25*(szz(0,j+1,k+1)+szz(1,j+1,k+1)+szz(0,j,k+1)+szz(1,j,k+1)));
if (u(0,j,k) != u(0,j,k)) u(0,j,k) = 0;
else {
diff = u(0,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
}
// x=0, j=ny edge
for (k=1; k<nz-1; k++) {
tmp = u(0,ny-1,k);
u(0,ny-1,k) = (1-w)*u(0,ny-1,k) + w*(+0.25*(sxx(1,ny,k+1)+sxx(1,ny-1,k+1)+sxx(1,ny,k)+sxx(1,ny-1,k))*u(1,ny-1,k)
+0.25*(syy(0,ny-1,k)+syy(1,ny-1,k)+syy(0,ny-1,k+1)+syy(1,ny-1,k+1))*u(0,ny-2,k)
+0.25*(szz(0,ny,k)+szz(1,ny,k)+szz(0,ny-1,k)+szz(1,ny-1,k))*u(0,ny-1,k-1)
+0.25*(szz(0,ny,k+1)+szz(1,ny,k+1)+szz(0,ny-1,k+1)+szz(1,ny-1,k+1))*u(0,ny-1,k+1)
+I(0,ny-1,k)*dx2)/(
+0.25*(sxx(1,ny,k+1)+sxx(1,ny-1,k+1)+sxx(1,ny,k)+sxx(1,ny-1,k))
+0.25*(syy(0,ny-1,k)+syy(1,ny-1,k)+syy(0,ny-1,k+1)+syy(1,ny-1,k+1))
+0.25*(szz(0,ny,k)+szz(1,ny,k)+szz(0,ny-1,k)+szz(1,ny-1,k))
+0.25*(szz(0,ny,k+1)+szz(1,ny,k+1)+szz(0,ny-1,k+1)+szz(1,ny-1,k+1)));
if (u(0,ny-1,k) != u(0,ny-1,k)) u(0,ny-1,k) = 0;
else {
diff = u(0,ny-1,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
// x=0, y=0, z=nz corner
tmp = u(0,0,nz-1);
u(0,0,nz-1) = (1-w)*u(0,0,nz-1) + w*(+0.25*(sxx(1,1,nz)+sxx(1,0,nz)+sxx(1,1,nz-1)+sxx(1,0,nz-1))*u(1,0,nz-1)
+0.25*(syy(0,1,nz-1)+syy(1,1,nz-1)+syy(0,1,nz)+syy(1,1,nz))*u(0,1,nz-1)
+0.25*(szz(0,1,nz-1)+szz(1,1,nz-1)+szz(0,0,nz-1)+szz(1,0,nz-1))*u(0,0,nz-2)
+I(0,0,nz-1)*dx2)/(
+0.25*(sxx(1,1,nz)+sxx(1,0,nz)+sxx(1,1,nz-1)+sxx(1,0,nz-1))
+0.25*(syy(0,1,nz-1)+syy(1,1,nz-1)+syy(0,1,nz)+syy(1,1,nz))
+0.25*(szz(0,1,nz-1)+szz(1,1,nz-1)+szz(0,0,nz-1)+szz(1,0,nz-1)));
if (u(0,0,nz-1) != u(0,0,nz-1)) u(0,0,nz-1) = 0;
else {
diff = u(0,0,nz-1) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
// x=0, z=nz edge
for (j=1; j<ny-1; j++) {
tmp = u(0,j,nz-1);
u(0,j,nz-1) = (1-w)*u(0,j,nz-1) + w*(+0.25*(sxx(1,j+1,nz)+sxx(1,j,nz)+sxx(1,j+1,nz-1)+sxx(1,j,nz-1))*u(1,j,nz-1)
+0.25*(syy(0,j+1,nz-1)+syy(1,j+1,nz-1)+syy(0,j+1,nz)+syy(1,j+1,nz))*u(0,j+1,nz-1)
+0.25*(syy(0,j,nz-1)+syy(1,j,nz-1)+syy(0,j,nz)+syy(1,j,nz))*u(0,j-1,nz-1)
+0.25*(szz(0,j+1,nz-1)+szz(1,j+1,nz-1)+szz(0,j,nz-1)+szz(1,j,nz-1))*u(0,j,nz-2)
+I(0,j,nz-1)*dx2)/(
+0.25*(sxx(1,j+1,nz)+sxx(1,j,nz)+sxx(1,j+1,nz-1)+sxx(1,j,nz-1))
+0.25*(syy(0,j+1,nz-1)+syy(1,j+1,nz-1)+syy(0,j+1,nz)+syy(1,j+1,nz))
+0.25*(syy(0,j,nz-1)+syy(1,j,nz-1)+syy(0,j,nz)+syy(1,j,nz))
+0.25*(szz(0,j+1,nz-1)+szz(1,j+1,nz-1)+szz(0,j,nz-1)+szz(1,j,nz-1)));
if (u(0,j,nz-1) != u(0,j,nz-1)) u(0,j,nz-1) = 0;
else {
diff = u(0,j,nz-1) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
// x=0, y=ny, z=nz corner
tmp = u(0,ny-1,nz-1);
u(0,ny-1,nz-1) = (1-w)*u(0,ny-1,nz-1) + w*(+0.25*(sxx(1,ny,nz)+sxx(1,ny-1,nz)+sxx(1,ny,nz-1)+sxx(1,ny-1,nz-1))*u(1,ny-1,nz-1)
+0.25*(syy(0,ny-1,nz-1)+syy(1,ny-1,nz-1)+syy(0,ny-1,nz)+syy(1,ny-1,nz))*u(0,ny-2,nz-1)
+0.25*(szz(0,ny,nz-1)+szz(1,ny,nz-1)+szz(0,ny-1,nz-1)+szz(1,ny-1,nz-1))*u(0,ny-1,nz-2)
+I(0,ny-1,nz-1)*dx2)/(
+0.25*(sxx(1,ny,nz)+sxx(1,ny-1,nz)+sxx(1,ny,nz-1)+sxx(1,ny-1,nz-1))
+0.25*(syy(0,ny-1,nz-1)+syy(1,ny-1,nz-1)+syy(0,ny-1,nz)+syy(1,ny-1,nz))
+0.25*(szz(0,ny,nz-1)+szz(1,ny,nz-1)+szz(0,ny-1,nz-1)+szz(1,ny-1,nz-1)));
if (u(0,ny-1,nz-1) != u(0,ny-1,nz-1)) u(0,ny-1,nz-1) = 0;
else {
diff = u(0,ny-1,nz-1) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
// y=0, z=0 edge
for (i=1; i<nx-1; i++) {
tmp = u(i,0,0);
u(i,0,0) = (1-w)*u(i,0,0) + w*(+0.25*(sxx(i+1,1,1)+sxx(i+1,0,1)+sxx(i+1,1,0)+sxx(i+1,0,0))*u(i+1,0,0)
+0.25*(syy(i,1,0)+syy(i+1,1,0)+syy(i,1,1)+syy(i+1,1,1))*u(i,1,0)
+0.25*(sxx(i,1,0)+sxx(i,0,0)+sxx(i,1,1)+sxx(i,0,1))*u(i-1,0,0)
+0.25*(szz(i,1,1)+szz(i+1,1,1)+szz(i,0,1)+szz(i+1,0,1))*u(i,0,1)
+I(i,0,0)*dx2)/(
+0.25*(sxx(i+1,1,1)+sxx(i+1,0,1)+sxx(i+1,1,0)+sxx(i+1,0,0))
+0.25*(syy(i,1,0)+syy(i+1,1,0)+syy(i,1,1)+syy(i+1,1,1))
+0.25*(sxx(i,1,0)+sxx(i,0,0)+sxx(i,1,1)+sxx(i,0,1))
+0.25*(szz(i,1,1)+szz(i+1,1,1)+szz(i,0,1)+szz(i+1,0,1)));
if (u(i,0,0) != u(i,0,0)) u(i,0,0) = 0;
else {
diff = u(i,0,0) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
j=0; // front face
for (i=1; i<nx-1; i++) {
for (k=1; k<nz-1; k++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
+I(i,j,k)*dx2)/(
+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
}
j=0, k=nz-1; // front top edge
for (i=1; i<nx-1; i++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+I(i,j,k)*dx2)/(
+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
k=0; // bottom face
for (i=1; i<nx-1; i++) {
for (j=1; j<ny-1; j++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
+I(i,j,k)*dx2)/(
+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
}
j=ny-1, k=0; // back bottom edge
for (i=1; i<nx-1; i++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
+I(i,j,k)*dx2)/(
+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
// bulk
for (i=1; i<nx-1; i++) {
for (j=1; j<ny-1; j++) {
for (k=1; k<nz-1; k++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
+I(i,j,k)*dx2)/(
+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
}
}
k=nz-1; // top face
for (i=1; i<nx-1; i++) {
for (j=1; j<ny-1; j++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+I(i,j,k)*dx2)/(
+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
}
j=ny-1; // back face
for (i=1; i<nx-1; i++) {
for (k=1; k<nz-1; k++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
+I(i,j,k)*dx2)/(
+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
}
j=ny-1, k=nz-1; // back top edge
for (i=1; i<nx-1; i++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+I(i,j,k)*dx2)/(
+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
i=nx-1, j=0, k=0; // bottom front right corner
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
+I(i,j,k)*dx2)/(
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
i=nx-1, k=0; // bottom right edge
for (j=1; j<ny-1; j++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
+I(i,j,k)*dx2)/(
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
i=nx-1, j=ny-1, k=0; // bottom back right corner
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
+I(i,j,k)*dx2)/(
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
i=nx-1, j=0; // front right edge
for (k=1; k<nz-1; k++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
+I(i,j,k)*dx2)/(
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
i=nx-1, j=0, k=nz-1; // front right top corner
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+I(i,j,k)*dx2)/(
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
i=nx-1; // right face
for (j=1; j<ny-1; j++) {
for (k=1; k<nz-1; k++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
+I(i,j,k)*dx2)/(
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
}
i=nx-1, j=ny-1; // back right edge
for (k=1; k<nz-1; k++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
+I(i,j,k)*dx2)/(
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
i=nx-1, k=nz-1; // right top edge
for (j=1; j<ny-1; j++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+I(i,j,k)*dx2)/(
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
i=nx-1, j=ny-1, k=nz-1; // top right back corner
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+I(i,j,k)*dx2)/(
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
if (fabs(sum) != 0.) {
err = sqrt(err/sum);
if (err<double(tol)) break;
}
}
if (err>double(tol)) return_val=double(err);
else return_val = double(t);
'''
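# The C kernels above are hard to follow at a glance. The sketch below is a
# hypothetical pure-Python/NumPy reference (not part of this module) of the
# interior-point SOR sweep they perform, simplified to uniform unit
# conductivity -- the C code instead averages the half-grid sxx/syy/szz
# fields around each node, and adds the one-sided boundary stencils.

```python
import numpy as np

def sor_reference(u, src, dx2, w=1.8, nmax=500, tol=1e-6):
    """Hypothetical reference for the embedded C SOR kernels: interior
    points only, unit conductivity. Returns the iteration count on
    convergence, or the last relative error otherwise."""
    nx, ny, nz = u.shape
    err = 1.0
    for t in range(nmax):
        err = 0.0
        ssum = 0.0
        for i in range(1, nx - 1):
            for j in range(1, ny - 1):
                for k in range(1, nz - 1):
                    tmp = u[i, j, k]
                    # Gauss-Seidel value: mean of the six neighbours plus
                    # the source term, then over-relaxed by w.
                    gs = (u[i+1, j, k] + u[i-1, j, k] +
                          u[i, j+1, k] + u[i, j-1, k] +
                          u[i, j, k+1] + u[i, j, k-1] +
                          src[i, j, k] * dx2) / 6.0
                    u[i, j, k] = (1.0 - w) * tmp + w * gs
                    diff = u[i, j, k] - tmp
                    err += diff * diff
                    ssum += tmp * tmp
        # Same convergence test as the C code: relative RMS change.
        if ssum != 0.0:
            err = (err / ssum) ** 0.5
            if err < tol:
                return t
    return err
```

# As in the C kernels, the return value distinguishes convergence (iteration
# count) from non-convergence (residual relative change).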
sor_3d_inh_xy = '''
# line 13 "sor_poisson_3d_inh_xy.py"
double tmp, diff, err=1.0, sum, dx2 = double(_dx2);
int t=0, i, j, k;
for (t=0; t<int(nmax); t++) {
err = 0.0;
sum = 0.0;
i=0, j=0, k=0; // left front bottom corner: Neumann boundary conditions, 1st-order approximation
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
+0.125*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))*(u(i,j+1,k)+u(i+1,j+1,k)-u(i+1,j,k))
-0.125*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))*(u(i,j+1,k)-u(i+1,j+1,k)-u(i+1,j,k))
+I(i,j,k)*dx2)/(
+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k)) + 0.125*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1)) + 0.125*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
i=0, k=0; // left bottom edge
for (j=1; j<ny-1; j++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
+0.0625*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))*(u(i,j+1,k)+u(i+1,j+1,k)-u(i,j-1,k)-u(i+1,j-1,k))
-0.125*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))*(u(i,j+1,k)-u(i+1,j+1,k)-u(i+1,j,k))
-0.125*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))*(u(i+1,j-1,k)+u(i+1,j,k)-u(i,j-1,k))
+I(i,j,k)*dx2)/(
+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1)) + 0.125*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1)) - 0.125*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
i=0, j=ny-1, k=0; // left back bottom corner
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
+0.125*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))*(u(i+1,j,k)-u(i,j-1,k)-u(i+1,j-1,k))
-0.125*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))*(u(i+1,j-1,k)+u(i+1,j,k)-u(i,j-1,k))
+I(i,j,k)*dx2)/(
+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k)) - 0.125*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1)) - 0.125*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
i=0, j=0; // front left edge
for (k=1; k<nz-1; k++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
+0.125*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))*(u(i,j+1,k)+u(i+1,j+1,k)-u(i+1,j,k))
-0.125*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))*(u(i,j+1,k)-u(i+1,j+1,k)-u(i+1,j,k))
+I(i,j,k)*dx2)/(
+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k)) + 0.125*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1)) + 0.125*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
i=0; // left face
for (j=1; j<ny-1; j++) {
for (k=1; k<nz-1; k++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
+0.0625*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))*(u(i,j+1,k)+u(i+1,j+1,k)-u(i,j-1,k)-u(i+1,j-1,k))
-0.125*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))*(u(i,j+1,k)-u(i+1,j+1,k)-u(i+1,j,k))
-0.125*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))*(u(i+1,j-1,k)+u(i+1,j,k)-u(i,j-1,k))
+I(i,j,k)*dx2)/(
+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1)) + 0.125*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1)) - 0.125*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
}
i=0, j=ny-1; // left back edge
for (k=1; k<nz-1; k++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
+0.125*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))*(u(i+1,j,k)-u(i,j-1,k)-u(i+1,j-1,k))
-0.125*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))*(u(i+1,j-1,k)+u(i+1,j,k)-u(i,j-1,k))
+I(i,j,k)*dx2)/(
+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k)) - 0.125*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1)) - 0.125*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
i=0, j=0, k=nz-1; // left front top corner
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+0.125*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))*(u(i,j+1,k)+u(i+1,j+1,k)-u(i+1,j,k))
-0.125*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))*(u(i,j+1,k)-u(i+1,j+1,k)-u(i+1,j,k))
+I(i,j,k)*dx2)/(
+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k)) + 0.125*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1)) + 0.125*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
i=0, k=nz-1; // left top edge
for (j=1; j<ny-1; j++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+0.0625*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))*(u(i,j+1,k)+u(i+1,j+1,k)-u(i,j-1,k)-u(i+1,j-1,k))
-0.125*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))*(u(i,j+1,k)-u(i+1,j+1,k)-u(i+1,j,k))
-0.125*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))*(u(i+1,j-1,k)+u(i+1,j,k)-u(i,j-1,k))
+I(i,j,k)*dx2)/(
+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1)) + 0.125*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1)) - 0.125*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
i=0, j=ny-1, k=nz-1; // left back top corner
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+0.125*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))*(u(i+1,j,k)-u(i,j-1,k)-u(i+1,j-1,k))
-0.125*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))*(u(i+1,j-1,k)+u(i+1,j,k)-u(i,j-1,k))
+I(i,j,k)*dx2)/(
+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k)) - 0.125*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1)) - 0.125*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
j=0, k=0; // front bottom edge
for (i=1; i<nx-1; i++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
+0.125*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))*(u(i,j+1,k)+u(i+1,j+1,k)-u(i+1,j,k))
-0.0625*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))*(u(i-1,j+1,k)+u(i-1,j,k)-u(i+1,j+1,k)-u(i+1,j,k))
+0.125*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))*(u(i-1,j,k)-u(i-1,j+1,k)-u(i,j+1,k))
+I(i,j,k)*dx2)/(
+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k)) + 0.125*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1)) - 0.125*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
j=0; // front face
for (i=1; i<nx-1; i++) {
for (k=1; k<nz-1; k++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
+0.125*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))*(u(i,j+1,k)+u(i+1,j+1,k)-u(i+1,j,k))
-0.0625*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))*(u(i-1,j+1,k)+u(i-1,j,k)-u(i+1,j+1,k)-u(i+1,j,k))
+0.125*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))*(u(i-1,j,k)-u(i-1,j+1,k)-u(i,j+1,k))
+I(i,j,k)*dx2)/(
+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k)) + 0.125*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1)) - 0.125*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
}
j=0, k=nz-1; // front top edge
for (i=1; i<nx-1; i++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+0.125*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))*(u(i,j+1,k)+u(i+1,j+1,k)-u(i+1,j,k))
-0.0625*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))*(u(i-1,j+1,k)+u(i-1,j,k)-u(i+1,j+1,k)-u(i+1,j,k))
+0.125*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))*(u(i-1,j,k)-u(i-1,j+1,k)-u(i,j+1,k))
+I(i,j,k)*dx2)/(
+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k)) + 0.125*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1)) - 0.125*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
k=0; // bottom face
for (i=1; i<nx-1; i++) {
for (j=1; j<ny-1; j++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
+0.0625*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))*(u(i,j+1,k)+u(i+1,j+1,k)-u(i,j-1,k)-u(i+1,j-1,k))
-0.0625*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))*(u(i-1,j+1,k)+u(i-1,j,k)-u(i+1,j+1,k)-u(i+1,j,k))
+0.0625*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))*(u(i-1,j-1,k)+u(i,j-1,k)-u(i-1,j+1,k)-u(i,j+1,k))
-0.0625*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))*(u(i+1,j-1,k)+u(i+1,j,k)-u(i-1,j-1,k)-u(i-1,j,k))
+I(i,j,k)*dx2)/(
+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
}
j=ny-1, k=0; // back bottom edge
for (i=1; i<nx-1; i++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
+0.125*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))*(u(i,j+1,k)+u(i+1,j+1,k)-u(i+1,j,k))
+0.125*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))*(u(i-1,j,k)-u(i-1,j+1,k)-u(i,j+1,k))
-0.0625*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))*(u(i+1,j-1,k)+u(i+1,j,k)-u(i-1,j-1,k)-u(i-1,j,k))
+I(i,j,k)*dx2)/(
+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k)) + 0.125*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1)) - 0.125*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
// bulk
for (i=1; i<nx-1; i++) {
for (j=1; j<ny-1; j++) {
for (k=1; k<nz-1; k++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
+0.0625*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))*(u(i,j+1,k)+u(i+1,j+1,k)-u(i,j-1,k)-u(i+1,j-1,k))
-0.0625*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))*(u(i-1,j+1,k)+u(i-1,j,k)-u(i+1,j+1,k)-u(i+1,j,k))
+0.0625*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))*(u(i-1,j-1,k)+u(i,j-1,k)-u(i-1,j+1,k)-u(i,j+1,k))
-0.0625*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))*(u(i+1,j-1,k)+u(i+1,j,k)-u(i-1,j-1,k)-u(i-1,j,k))
+I(i,j,k)*dx2)/(
+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
}
}
k=nz-1; // top face
for (i=1; i<nx-1; i++) {
for (j=1; j<ny-1; j++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+0.0625*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))*(u(i,j+1,k)+u(i+1,j+1,k)-u(i,j-1,k)-u(i+1,j-1,k))
-0.0625*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))*(u(i-1,j+1,k)+u(i-1,j,k)-u(i+1,j+1,k)-u(i+1,j,k))
+0.0625*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))*(u(i-1,j-1,k)+u(i,j-1,k)-u(i-1,j+1,k)-u(i,j+1,k))
-0.0625*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))*(u(i+1,j-1,k)+u(i+1,j,k)-u(i-1,j-1,k)-u(i-1,j,k))
+I(i,j,k)*dx2)/(
+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
}
j=ny-1; // back face
for (i=1; i<nx-1; i++) {
for (k=1; k<nz-1; k++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
+0.125*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))*(u(i+1,j,k)-u(i,j-1,k)-u(i+1,j-1,k))
+0.125*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))*(u(i-1,j-1,k)+u(i,j-1,k)-u(i-1,j,k))
-0.0625*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))*(u(i+1,j-1,k)+u(i+1,j,k)-u(i-1,j-1,k)-u(i-1,j,k))
+I(i,j,k)*dx2)/(
+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k)) - 0.125*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1)) + 0.125*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
}
j=ny-1, k=nz-1; // back top edge
for (i=1; i<nx-1; i++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k))*u(i+1,j,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+0.125*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))*(u(i+1,j,k)-u(i,j-1,k)-u(i+1,j-1,k))
+0.125*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))*(u(i-1,j-1,k)+u(i,j-1,k)-u(i-1,j,k))
-0.0625*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))*(u(i+1,j-1,k)+u(i+1,j,k)-u(i-1,j-1,k)-u(i-1,j,k))
+I(i,j,k)*dx2)/(
+0.25*(sxx(i+1,j+1,k+1)+sxx(i+1,j,k+1)+sxx(i+1,j+1,k)+sxx(i+1,j,k)) - 0.125*(sxy(i+1,j+1,k+1)+sxy(i+1,j,k+1)+sxy(i+1,j+1,k)+sxy(i+1,j,k))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1)) + 0.125*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
i=nx-1, j=0, k=0; // bottom front right corner
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
-0.125*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))*(u(i-1,j+1,k)+u(i-1,j,k)-u(i,j+1,k))
+0.125*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))*(u(i-1,j,k)-u(i-1,j+1,k)-u(i,j+1,k))
+I(i,j,k)*dx2)/(
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1)) - 0.125*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1)) - 0.125*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
i=nx-1, k=0; // bottom right edge
for (j=1; j<ny-1; j++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
-0.125*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))*(u(i-1,j+1,k)+u(i-1,j,k)-u(i,j+1,k))
+0.0625*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))*(u(i-1,j-1,k)+u(i,j-1,k)-u(i-1,j+1,k)-u(i,j+1,k))
-0.125*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))*(u(i,j-1,k)-u(i-1,j-1,k)-u(i-1,j,k))
+I(i,j,k)*dx2)/(
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1)) - 0.125*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1)) + 0.125*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
i=nx-1, j=ny-1, k=0; // bottom back right corner
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
+0.125*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))*(u(i-1,j-1,k)+u(i,j-1,k)-u(i-1,j,k))
-0.125*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))*(u(i,j-1,k)-u(i-1,j-1,k)-u(i-1,j,k))
+I(i,j,k)*dx2)/(
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1)) + 0.125*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1)) + 0.125*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
i=nx-1, j=0; // front right edge
for (k=1; k<nz-1; k++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
-0.125*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))*(u(i-1,j+1,k)+u(i-1,j,k)-u(i,j+1,k))
+0.125*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))*(u(i-1,j,k)-u(i-1,j+1,k)-u(i,j+1,k))
+I(i,j,k)*dx2)/(
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1)) - 0.125*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1)) - 0.125*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
i=nx-1, j=0, k=nz-1; // front right top corner
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
-0.125*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))*(u(i-1,j+1,k)+u(i-1,j,k)-u(i,j+1,k))
+0.125*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))*(u(i-1,j,k)-u(i-1,j+1,k)-u(i,j+1,k))
+I(i,j,k)*dx2)/(
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1)) - 0.125*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1)) - 0.125*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
i=nx-1; // right face
for (j=1; j<ny-1; j++) {
for (k=1; k<nz-1; k++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
-0.125*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))*(u(i-1,j+1,k)+u(i-1,j,k)-u(i,j+1,k))
+0.0625*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))*(u(i-1,j-1,k)+u(i,j-1,k)-u(i-1,j+1,k)-u(i,j+1,k))
-0.125*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))*(u(i,j-1,k)-u(i-1,j-1,k)-u(i-1,j,k))
+I(i,j,k)*dx2)/(
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1)) - 0.125*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1)) + 0.125*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
}
i=nx-1, j=ny-1; // back right edge
for (k=1; k<nz-1; k++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1))*u(i,j,k+1)
+0.125*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))*(u(i-1,j-1,k)+u(i,j-1,k)-u(i-1,j,k))
-0.125*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))*(u(i,j-1,k)-u(i-1,j-1,k)-u(i-1,j,k))
+I(i,j,k)*dx2)/(
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1)) + 0.125*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1)) + 0.125*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))
+0.25*(szz(i,j+1,k+1)+szz(i+1,j+1,k+1)+szz(i,j,k+1)+szz(i+1,j,k+1)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
i=nx-1, k=nz-1; // right top edge
for (j=1; j<ny-1; j++) {
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1))*u(i,j+1,k)
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
-0.125*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))*(u(i-1,j+1,k)+u(i-1,j,k)-u(i,j+1,k))
+0.0625*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))*(u(i-1,j-1,k)+u(i,j-1,k)-u(i-1,j+1,k)-u(i,j+1,k))
-0.125*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))*(u(i,j-1,k)-u(i-1,j-1,k)-u(i-1,j,k))
+I(i,j,k)*dx2)/(
+0.25*(syy(i,j+1,k)+syy(i+1,j+1,k)+syy(i,j+1,k+1)+syy(i+1,j+1,k+1)) - 0.125*(sxy(i,j+1,k)+sxy(i+1,j+1,k)+sxy(i,j+1,k+1)+sxy(i+1,j+1,k+1))
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1)) + 0.125*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
}
i=nx-1, j=ny-1, k=nz-1; // top right back corner
tmp = u(i,j,k);
u(i,j,k) = (1-w)*u(i,j,k) + w*(+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1))*u(i-1,j,k)
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1))*u(i,j-1,k)
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k))*u(i,j,k-1)
+0.125*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))*(u(i-1,j-1,k)+u(i,j-1,k)-u(i-1,j,k))
-0.125*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))*(u(i,j-1,k)-u(i-1,j-1,k)-u(i-1,j,k))
+I(i,j,k)*dx2)/(
+0.25*(sxx(i,j+1,k)+sxx(i,j,k)+sxx(i,j+1,k+1)+sxx(i,j,k+1)) + 0.125*(sxy(i,j+1,k)+sxy(i,j,k)+sxy(i,j+1,k+1)+sxy(i,j,k+1))
+0.25*(syy(i,j,k)+syy(i+1,j,k)+syy(i,j,k+1)+syy(i+1,j,k+1)) + 0.125*(sxy(i,j,k)+sxy(i+1,j,k)+sxy(i,j,k+1)+sxy(i+1,j,k+1))
+0.25*(szz(i,j+1,k)+szz(i+1,j+1,k)+szz(i,j,k)+szz(i+1,j,k)));
if (u(i,j,k) != u(i,j,k)) u(i,j,k) = 0;
else {
diff = u(i,j,k) - tmp;
err += diff*diff;
sum += tmp*tmp;
}
if (fabs(sum) != 0.) {
err = sqrt(err/sum);
if (err<double(tol)) break;
}
}
if (err>double(tol)) return_val = double(err);
else return_val = double(t);
''' | 44.997388 | 151 | 0.424697 | 24,977 | 86,125 | 1.462265 | 0.002723 | 0.116091 | 0.119842 | 0.085316 | 0.997344 | 0.994661 | 0.989185 | 0.985845 | 0.985133 | 0.983572 | 0 | 0.112474 | 0.2049 | 86,125 | 1,914 | 152 | 44.997388 | 0.42088 | 0 | 0 | 0.798654 | 0 | 0.406618 | 0.998444 | 0.529155 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 12 |
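The record above is the tail of a weave-style C++ kernel: an over-relaxed Gauss-Seidel (SOR) sweep over faces, edges, corners, and bulk of a 3-D grid, with a `u != u` NaN guard and a relative-change stopping test `sqrt(err/sum) < tol`. A minimal Python sketch of that same update pattern, reduced to a 1-D Poisson problem so it fits in a few lines — the function name and setup are illustrative, not from the original file:

```python
import math

def sor_1d(f, dx, w=1.5, tol=1e-8, max_iter=10000):
    """Sketch of the kernel's update pattern on -u'' = f, u=0 at both ends.

    Illustrative only: the original solves an anisotropic 3-D problem with
    separate stencils per face/edge/corner. Kept here: the SOR update
    u_new = (1-w)*u + w*(neighbour terms + source)/(diagonal), the
    u != u NaN reset, and the relative-error convergence test.
    """
    n = len(f)
    u = [0.0] * n
    dx2 = dx * dx
    for it in range(max_iter):
        err = 0.0
        ssum = 0.0
        for i in range(1, n - 1):
            tmp = u[i]
            u[i] = (1 - w) * u[i] + w * 0.5 * (u[i - 1] + u[i + 1] + f[i] * dx2)
            if u[i] != u[i]:           # NaN guard, as in the kernel above
                u[i] = 0.0
            else:
                diff = u[i] - tmp
                err += diff * diff
                ssum += tmp * tmp
        if ssum != 0.0 and math.sqrt(err / ssum) < tol:
            return u, it
    return u, max_iter
```

For constant f the quadratic exact solution is reproduced by the stencil, so the converged interior values match x(1-x)/2 on the unit interval.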
83a41aef4ce0b340675b8ac06bf6e024b933d0b0 | 4,646 | py | Python | site-packages/pskf/tools/plot/pa/normalscore.py | jjokella/pyshemkf | 61a6329f7aa5739a38a68504fd6a44568fcd833b | [
"MIT"
] | 5 | 2019-02-06T10:52:52.000Z | 2021-05-21T09:32:45.000Z | site-packages/pskf/tools/plot/pa/normalscore.py | jjokella/pyshemkf | 61a6329f7aa5739a38a68504fd6a44568fcd833b | [
"MIT"
] | null | null | null | site-packages/pskf/tools/plot/pa/normalscore.py | jjokella/pyshemkf | 61a6329f7aa5739a38a68504fd6a44568fcd833b | [
"MIT"
] | 1 | 2018-12-04T11:39:10.000Z | 2018-12-04T11:39:10.000Z | # Normal Score
wavewell_dats = {50: '2018_03_12', 70: '2018_03_12',
100: '2018_03_12', 250: '2018_03_12',
500: '2018_03_12', 1000: '2018_03_12',
2000: '2018_03_12'}
wavewell_lets = {50: 'b', 70: 'aln', 100: 'bxz', 250: 'dkl',
500: 'ewx', 1000: 'gjj', 2000: 'hvv'}
wavewell_nums = {50: 1000, 70: 1000, 100: 1000, 250: 1000,
500: 100, 1000: 100, 2000: 100}
wavewell_obss = {50: 60, 70: 60, 100: 60, 250: 60,
500: 60, 1000: 60, 2000: 60}
wavewell2_dats = {50: '2019_03_03', 70: '2019_03_03',
100: '2019_03_03', 250: '2019_03_03',
500: '2019_03_03', 1000: '2019_03_03',
2000: '2019_03_03'}
wavewell2_lets = {50: 'b', 70: 'aln', 100: 'bxz', 250: 'dkl',
500: 'ewx', 1000: 'gjj', 2000: 'hvv'}
wavewell2_nums = {50: 1000, 70: 1000, 100: 1000, 250: 1000,
500: 100, 1000: 100, 2000: 100}
wavewell2_obss = {50: 60, 70: 60, 100: 60, 250: 60,
500: 60, 1000: 60, 2000: 60}
wavereal_dats = {50: '2018_02_06', 70: '2018_02_06',
100: '2018_02_06', 250: '2018_02_06',
500: '2018_02_06', 1000: '2018_02_06',
2000: '2018_02_06'}
wavereal_lets = {50: 'b', 70: 'aln', 100: 'bxz', 250: 'dkl',
500: 'ewx', 1000: 'gjj', 2000: 'hvv'}
wavereal_nums = {50: 1000, 70: 1000, 100: 1000, 250: 1000,
500: 100, 1000: 100, 2000: 100}
wavereal_obss = {50: 100, 70: 100, 100: 100, 250: 100,
500: 100, 1000: 100, 2000: 100}
wavereal2_dats = {50: '2019_03_03', 70: '2019_03_03',
100: '2019_03_03', 250: '2019_03_03',
500: '2019_03_03', 1000: '2019_03_03',
2000: '2019_03_03'}
wavereal2_lets = {50: 'b', 70: 'aln', 100: 'bxz', 250: 'dkl',
500: 'ewx', 1000: 'gjj', 2000: 'hvv'}
wavereal2_nums = {50: 1000, 70: 1000, 100: 1000, 250: 1000,
500: 100, 1000: 100, 2000: 100}
wavereal2_obss = {50: 100, 70: 100, 100: 100, 250: 100,
500: 100, 1000: 100, 2000: 100}
corrsmall_wavereal_dats = {50: '2018_09_03', 70: '2018_09_03',
100: '2018_09_03', 250: '2018_09_03',
500: '2018_09_03', 1000: '2018_09_03',
2000: '2018_09_03'}
corrsmall_wavereal_lets = {50: 'b', 70: 'aln', 100: 'bxz', 250: 'dkl',
500: 'ewx', 1000: 'gjj', 2000: 'hvv'}
corrsmall_wavereal_nums = {50: 1000, 70: 1000, 100: 1000, 250: 1000,
500: 100, 1000: 100, 2000: 100}
corrsmall_wavereal_obss = {50: 100, 70: 100, 100: 100, 250: 100,
500: 100, 1000: 100, 2000: 100}
corrlarge_wavereal_dats = {50: '2018_09_13', 70: '2018_09_13',
100: '2018_09_13', 250: '2018_09_13',
500: '2018_09_13', 1000: '2018_09_13',
2000: '2018_09_13'}
corrlarge_wavereal_lets = {50: 'b', 70: 'aln', 100: 'bxz', 250: 'dkl',
500: 'ewx', 1000: 'gjj', 2000: 'hvv'}
corrlarge_wavereal_nums = {50: 1000, 70: 1000, 100: 1000, 250: 1000,
500: 100, 1000: 100, 2000: 100}
corrlarge_wavereal_obss = {50: 100, 70: 100, 100: 100, 250: 100,
500: 100, 1000: 100, 2000: 100}
corrsmall_wavewell_dats = {50: '2018_09_03', 70: '2018_09_03',
100: '2018_09_03', 250: '2018_09_03',
500: '2018_09_03', 1000: '2018_09_03',
2000: '2018_09_03'}
corrsmall_wavewell_lets = {50: 'b', 70: 'aln', 100: 'bxz', 250: 'dkl',
500: 'ewx', 1000: 'gjj', 2000: 'hvv'}
corrsmall_wavewell_nums = {50: 1000, 70: 1000, 100: 1000, 250: 1000,
500: 100, 1000: 100, 2000: 100}
corrsmall_wavewell_obss = {50: 60, 70: 60, 100: 60, 250: 60,
500: 60, 1000: 60, 2000: 60}
corrlarge_wavewell_dats = {50: '2018_09_13', 70: '2018_09_13',
100: '2018_09_13', 250: '2018_09_13',
500: '2018_09_13', 1000: '2018_09_13',
2000: '2018_09_13'}
corrlarge_wavewell_lets = {50: 'b', 70: 'aln', 100: 'bxz', 250: 'dkl',
500: 'ewx', 1000: 'gjj', 2000: 'hvv'}
corrlarge_wavewell_nums = {50: 1000, 70: 1000, 100: 1000, 250: 1000,
500: 100, 1000: 100, 2000: 100}
corrlarge_wavewell_obss = {50: 60, 70: 60, 100: 60, 250: 60,
500: 60, 1000: 60, 2000: 60}
| 52.202247 | 70 | 0.499139 | 642 | 4,646 | 3.362928 | 0.054517 | 0.077814 | 0.051876 | 0.072256 | 0.883279 | 0.879574 | 0.879574 | 0.871237 | 0.855952 | 0.855952 | 0 | 0.480366 | 0.34223 | 4,646 | 88 | 71 | 52.795455 | 0.226113 | 0.002583 | 0 | 0.525 | 0 | 0 | 0.153713 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
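The module in the record above keeps four parallel dicts per experiment (run date, letter code, run count, observation count), each keyed by ensemble size. A hedged sketch of bundling such parallel tables into one record per key — `spec_for`, the record field names, and the trimmed sample tables are illustrative assumptions, not part of the original normalscore.py:

```python
def spec_for(n_ens, dats, lets, nums, obss):
    """Bundle the parallel per-ensemble-size tables into one record.

    Illustrative helper: the name and record layout are assumptions,
    not taken from the original module.
    """
    return {
        'dat': dats[n_ens],
        'let': lets[n_ens],
        'num': nums[n_ens],
        'obs': obss[n_ens],
    }

# Trimmed sample mirroring the wavewell_* tables above:
wavewell_dats = {50: '2018_03_12', 2000: '2018_03_12'}
wavewell_lets = {50: 'b', 2000: 'hvv'}
wavewell_nums = {50: 1000, 2000: 100}
wavewell_obss = {50: 60, 2000: 60}
```

A lookup like `spec_for(2000, wavewell_dats, wavewell_lets, wavewell_nums, wavewell_obss)` then yields every parameter for that ensemble size in one place.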
83b27a25a9140d1b6db85305e96ed70fd7624e48 | 14,005 | py | Python | recipe/migrations/0008_auto_20201108_1126.py | wichmannpas/recipemanager | 787b11b958c9938af9440330c2d3597400dbfdcc | [
"Apache-2.0"
] | 1 | 2021-03-01T09:31:58.000Z | 2021-03-01T09:31:58.000Z | recipe/migrations/0008_auto_20201108_1126.py | wichmannpas/recipemanager | 787b11b958c9938af9440330c2d3597400dbfdcc | [
"Apache-2.0"
] | null | null | null | recipe/migrations/0008_auto_20201108_1126.py | wichmannpas/recipemanager | 787b11b958c9938af9440330c2d3597400dbfdcc | [
"Apache-2.0"
] | null | null | null | # Generated by Django 3.1.1 on 2020-11-08 10:26
from decimal import Decimal
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('recipe', '0007_recipe_display_factor'),
]
operations = [
migrations.AddField(
model_name='ingredient',
name='carbohydrates_el',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='carbohydrates_g',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='carbohydrates_l',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='carbohydrates_ml',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='carbohydrates_pieces',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='carbohydrates_tl',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fat_el',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fat_g',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fat_l',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fat_ml',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fat_monounsaturated_el',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fat_monounsaturated_g',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fat_monounsaturated_l',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fat_monounsaturated_ml',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fat_monounsaturated_pieces',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fat_monounsaturated_tl',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fat_pieces',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fat_polyunsaturated_el',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fat_polyunsaturated_g',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fat_polyunsaturated_l',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fat_polyunsaturated_ml',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fat_polyunsaturated_pieces',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fat_polyunsaturated_tl',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fat_saturated_el',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fat_saturated_g',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fat_saturated_l',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fat_saturated_ml',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fat_saturated_pieces',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fat_saturated_tl',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fat_tl',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fibres_el',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fibres_g',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fibres_l',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fibres_ml',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fibres_pieces',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='fibres_tl',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='kcal_el',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='kcal_g',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='kcal_l',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='kcal_ml',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='kcal_pieces',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='kcal_tl',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='price_el',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='price_g',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='price_l',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='price_ml',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='price_pieces',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='price_tl',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='protein_el',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='protein_g',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='protein_l',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='protein_ml',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='protein_pieces',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='protein_tl',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='salt_el',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='salt_g',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='salt_l',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='salt_ml',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='salt_pieces',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='salt_tl',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='sugar_el',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='sugar_g',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='sugar_l',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='sugar_ml',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='sugar_pieces',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='ingredient',
name='sugar_tl',
field=models.DecimalField(blank=True, decimal_places=3, max_digits=10, null=True),
),
migrations.AddField(
model_name='recipe',
name='portions',
field=models.DecimalField(decimal_places=3, default=Decimal('1'), max_digits=10),
),
]
# ==== bench_test/tests/basic/test_access.py (hankei6km/xrosfs, MIT) ====
# -*- coding: utf-8 -*-
#
# Copyright (c) 2018 hankei6km
# Licensed under the MIT License. See LICENSE.txt in the project root.
import os
class TestAccess():
test_file = ('access', 'test.txt')
test_x_file = ('access', 'test_x.txt')
no_exist_file = ('access', 'no_exist.txt')
def test_access(self, mnt_sshfs_path, mnt_xrosfs_path):
test_file_sshfs_path = \
os.path.join(mnt_sshfs_path, *self.test_file)
test_x_file_sshfs_path = \
os.path.join(mnt_sshfs_path, *self.test_x_file)
no_exist_file_sshfs_path = \
os.path.join(mnt_sshfs_path, *self.no_exist_file)
test_file_xrosfs_path = \
os.path.join(mnt_xrosfs_path, *self.test_file)
test_x_file_xrosfs_path = \
os.path.join(mnt_xrosfs_path, *self.test_x_file)
no_exist_file_xrosfs_path = \
os.path.join(mnt_xrosfs_path, *self.no_exist_file)
# Check that the test data exists separately on the sshfs/xrosfs mounts
assert os.access(test_file_sshfs_path, os.F_OK)
assert os.access(test_file_xrosfs_path, os.F_OK)
assert os.access(test_file_sshfs_path, os.F_OK) == \
os.access(test_file_xrosfs_path, os.F_OK)
assert os.access(test_file_sshfs_path, os.R_OK) == \
os.access(test_file_xrosfs_path, os.R_OK)
assert os.access(test_file_sshfs_path, os.W_OK) == \
os.access(test_file_xrosfs_path, os.W_OK)
assert os.access(test_file_sshfs_path, os.X_OK) == \
os.access(test_file_xrosfs_path, os.X_OK)
assert os.access(test_file_sshfs_path, os.R_OK | os.W_OK | os.X_OK) \
== os.access(test_file_xrosfs_path, os.R_OK | os.W_OK | os.X_OK)
assert os.access(test_file_sshfs_path, os.R_OK | os.W_OK) == \
os.access(test_file_xrosfs_path, os.R_OK | os.W_OK)
assert os.access(test_file_sshfs_path, os.R_OK | os.X_OK) == \
os.access(test_file_xrosfs_path, os.R_OK | os.X_OK)
assert os.access(test_file_sshfs_path, os.W_OK | os.X_OK) == \
os.access(test_file_xrosfs_path, os.W_OK | os.X_OK)
assert os.access(test_x_file_sshfs_path, os.X_OK) == \
os.access(test_x_file_xrosfs_path, os.X_OK)
assert os.access(test_x_file_sshfs_path, os.R_OK | os.W_OK | os.X_OK) \
== os.access(test_x_file_xrosfs_path, os.R_OK | os.W_OK | os.X_OK)
assert os.access(test_x_file_sshfs_path, os.R_OK | os.W_OK) == \
os.access(test_x_file_xrosfs_path, os.R_OK | os.W_OK)
assert os.access(test_x_file_sshfs_path, os.R_OK | os.X_OK) == \
os.access(test_x_file_xrosfs_path, os.R_OK | os.X_OK)
assert os.access(test_x_file_sshfs_path, os.W_OK | os.X_OK) == \
os.access(test_x_file_xrosfs_path, os.W_OK | os.X_OK)
assert os.access(no_exist_file_sshfs_path, os.F_OK) == \
os.access(no_exist_file_xrosfs_path, os.F_OK)
assert os.access(no_exist_file_sshfs_path, os.R_OK) == \
os.access(no_exist_file_xrosfs_path, os.R_OK)
assert os.access(no_exist_file_sshfs_path, os.W_OK) == \
os.access(no_exist_file_xrosfs_path, os.W_OK)
assert os.access(no_exist_file_sshfs_path, os.X_OK) == \
os.access(no_exist_file_xrosfs_path, os.X_OK)
assert os.access(no_exist_file_sshfs_path,
os.R_OK | os.W_OK | os.X_OK) == \
os.access(no_exist_file_xrosfs_path, os.R_OK | os.W_OK | os.X_OK)
assert os.access(no_exist_file_sshfs_path, os.R_OK | os.W_OK) == \
os.access(no_exist_file_xrosfs_path, os.R_OK | os.W_OK)
assert os.access(no_exist_file_sshfs_path, os.R_OK | os.X_OK) == \
os.access(no_exist_file_xrosfs_path, os.R_OK | os.X_OK)
assert os.access(no_exist_file_sshfs_path, os.W_OK | os.X_OK) == \
os.access(no_exist_file_xrosfs_path, os.W_OK | os.X_OK)
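The block of pairwise `os.access` assertions above repeats one comparison per mode mask. As a refactoring sketch (the `MODES` table and `access_matches` helper are names introduced here, not part of the project), the comparison can be condensed:

```python
import os

# Every access-mode mask exercised by TestAccess above.
MODES = [
    os.F_OK, os.R_OK, os.W_OK, os.X_OK,
    os.R_OK | os.W_OK, os.R_OK | os.X_OK,
    os.W_OK | os.X_OK, os.R_OK | os.W_OK | os.X_OK,
]

def access_matches(sshfs_path, xrosfs_path):
    """True when both mounts report identical access bits for every mask."""
    return all(
        os.access(sshfs_path, mode) == os.access(xrosfs_path, mode)
        for mode in MODES
    )
```

Each group of asserts in the test would then collapse to a single `assert access_matches(a, b)` per file pair.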
# ==== games/azurelane/assist.py (sirakami-yuki/games-script, Apache-2.0) ====
from common.logutil import logger
def calculate_move_map(context, _):
width, height = context.screen_width, context.screen_height
center_x, center_y = width / 2, height / 2  # screen-center coordinates
horizontal_unit_distance = width / 3.5  # horizontal distance moved per swipe
vertical_unit_distance = height / 4  # vertical distance moved per swipe
if context.swipe_hor_positive:
if context.swipe_hor_times < 2:
context.swipe_hor_times += 1
to_x = center_x - horizontal_unit_distance
to_y = center_y
else:
context.swipe_hor_positive = False
to_x = center_x
if context.swipe_ver_positive:
if context.swipe_ver_times < 2:
context.swipe_ver_times += 1
to_y = center_y + vertical_unit_distance
else:
context.swipe_ver_positive = False
context.swipe_ver_times -= 1
to_y = center_y - vertical_unit_distance
else:
if context.swipe_ver_times > -2:
context.swipe_ver_times -= 1
to_y = center_y - vertical_unit_distance
else:
context.swipe_ver_positive = True
context.swipe_ver_times += 1
to_y = center_y + vertical_unit_distance
else:
to_y = center_y
if context.swipe_hor_times > -2:
context.swipe_hor_times -= 1
to_x = center_x + horizontal_unit_distance
else:
context.swipe_hor_positive = True
to_x = center_x
if context.swipe_ver_positive:
if context.swipe_ver_times < 2:
context.swipe_ver_times += 1
to_y = center_y + vertical_unit_distance
else:
context.swipe_ver_positive = False
context.swipe_ver_times -= 1
to_y = center_y - vertical_unit_distance
else:
if context.swipe_ver_times > -2:
context.swipe_ver_times -= 1
to_y = center_y - vertical_unit_distance
else:
context.swipe_ver_positive = True
context.swipe_ver_times += 1
to_y = center_y + vertical_unit_distance
return center_x, center_y, to_x, to_y
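The arithmetic above fixes the sweep geometry: the camera starts at the screen centre and moves in horizontal steps of 2/7 of the width and vertical steps of 1/4 of the height, with both swipe counters clamped to the range [-2, 2] so the map is covered in a serpentine pattern. A standalone sketch of the first target point (the 1280x720 resolution is an assumed example, not taken from the project):

```python
# Assumed example resolution; the real values come from context.screen_width/height.
width, height = 1280, 720

center_x, center_y = width / 2, height / 2   # screen-center coordinates
horizontal_step = width / 3.5                # 2/7 of the width per horizontal swipe
vertical_step = height / 4                   # 1/4 of the height per vertical swipe

# First swipe of a fresh sweep: swipe_hor_positive is True and swipe_hor_times < 2,
# so the target point is one horizontal step to the left of centre.
first_target = (center_x - horizontal_step, center_y)
```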
# ==== src/tests/test_tools.py (devsetgo/test-api, MIT) ====
# -*- coding: utf-8 -*-
import unittest
from pathlib import Path
from starlette.testclient import TestClient
from src.main import app
client = TestClient(app)
directory_to__files: str = "data"
class Test(unittest.TestCase):
# xml endpoint test
def test_xml_four_hundred(self):
file_directory = f"{directory_to__files}/testfiles"
directory_path = Path.cwd().joinpath(file_directory)
file_path = f"{directory_path}/test_sample.json"
url = f"/api/v1/tools/xml-json"
files = {"myfile": open(file_path, "r")}
response = client.post(url, files=files)
assert response.status_code == 400
def test_xml(self):
file_directory = f"{directory_to__files}/testfiles"
directory_path = Path.cwd().joinpath(file_directory)
file_path = f"{directory_path}/test_sample.xml"
url = f"/api/v1/tools/xml-json"
files = {"myfile": open(file_path, "r")}
response = client.post(url, files=files)
assert response.status_code == 200
def test_xml_error(self):
file_directory = f"{directory_to__files}/testfiles"
directory_path = Path.cwd().joinpath(file_directory)
file_path = f"{directory_path}/test_sample_bad.xml"
url = f"/api/v1/tools/xml-json"
files = {"myfile": open(file_path, "r")}
response = client.post(url, files=files)
assert response.status_code == 400
# json endpoint test
def test_json_four_hundred(self):
file_directory = f"{directory_to__files}/testfiles"
directory_path = Path.cwd().joinpath(file_directory)
file_path = f"{directory_path}/test_sample.xml"
url = f"/api/v1/tools/json-xml"
files = {"myfile": open(file_path, "r")}
response = client.post(url, files=files)
assert response.status_code == 400
def test_json(self):
file_directory = f"{directory_to__files}/testfiles"
directory_path = Path.cwd().joinpath(file_directory)
file_path = f"{directory_path}/test_sample.json"
url = f"/api/v1/tools/json-xml"
files = {"myfile": open(file_path, "r")}
response = client.post(url, files=files)
assert response.status_code == 200
def test_json_error(self):
file_directory = f"{directory_to__files}/testfiles"
directory_path = Path.cwd().joinpath(file_directory)
file_path = f"{directory_path}/test_sample_bad.json"
url = f"/api/v1/tools/json-xml"
files = {"myfile": open(file_path, "r")}
response = client.post(url, files=files)
assert response.status_code == 400
# ==== test/unit/test_cloudant_v1.py (IBM/cloudant-python-sdk, Apache-2.0) ====
# -*- coding: utf-8 -*-
# (C) Copyright IBM Corp. 2021.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Unit Tests for CloudantV1
"""
from datetime import datetime, timezone
from ibm_cloud_sdk_core.authenticators.no_auth_authenticator import NoAuthAuthenticator
from ibm_cloud_sdk_core.utils import datetime_to_string, string_to_datetime
import base64
import inspect
import io
import json
import os
import pytest
import re
import requests
import requests.models
import responses
import tempfile
import urllib
import gzip
from ibmcloudant.cloudant_v1 import *
_service = CloudantV1(
authenticator=NoAuthAuthenticator()
)
_base_url = 'http://localhost:5984'
_service.set_service_url(_base_url)
##############################################################################
# Start of Service: Server
##############################################################################
# region
class TestNewInstance():
"""
Test Class for new_instance
"""
def test_new_instance(self):
"""
new_instance()
"""
os.environ['TEST_SERVICE_AUTH_TYPE'] = 'noAuth'
service = CloudantV1.new_instance(
service_name='TEST_SERVICE',
)
assert service is not None
assert isinstance(service, CloudantV1)
def test_new_instance_without_authenticator(self):
"""
new_instance_without_authenticator()
"""
with pytest.raises(ValueError, match='authenticator must be provided'):
service = CloudantV1.new_instance(
)
class TestGetServerInformation():
"""
Test Class for get_server_information
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_server_information_all_params(self):
"""
get_server_information()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/')
mock_response = '{"couchdb": "couchdb", "features": ["features"], "vendor": {"name": "name", "variant": "variant", "version": "version"}, "version": "version", "features_flags": ["features_flags"]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Invoke method
response = _service.get_server_information()
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_server_information_all_params_with_retries(self):
# Enable retries and run test_get_server_information_all_params.
_service.enable_retries()
self.test_get_server_information_all_params()
# Disable retries and run test_get_server_information_all_params.
_service.disable_retries()
self.test_get_server_information_all_params()
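The `preprocess_url` helper repeated in each test class normalizes the mock URL and, when the URL ends in a slash, returns a compiled pattern so the `responses` library matches any number of trailing slashes. Its behaviour in isolation (same logic as the helper above, extracted as a standalone function):

```python
import re
import urllib.parse

def preprocess_url(request_url: str):
    # Re-encode safely (without double-encoding), then turn a trailing-slash
    # URL into a regex tolerating one or more trailing slashes.
    request_url = urllib.parse.unquote(request_url)
    request_url = urllib.parse.quote(request_url, safe=':/')
    if re.fullmatch('.*/+', request_url) is None:
        return request_url
    return re.compile(request_url.rstrip('/') + '/+')

# A slash-free URL passes through as a plain string ...
assert preprocess_url('http://localhost:5984/_membership') == 'http://localhost:5984/_membership'
# ... while the server root becomes a pattern matching repeated slashes.
pattern = preprocess_url('http://localhost:5984/')
assert pattern.fullmatch('http://localhost:5984///') is not None
```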
class TestGetMembershipInformation():
"""
Test Class for get_membership_information
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_membership_information_all_params(self):
"""
get_membership_information()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_membership')
mock_response = '{"all_nodes": ["all_nodes"], "cluster_nodes": ["cluster_nodes"]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Invoke method
response = _service.get_membership_information()
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_membership_information_all_params_with_retries(self):
# Enable retries and run test_get_membership_information_all_params.
_service.enable_retries()
self.test_get_membership_information_all_params()
# Disable retries and run test_get_membership_information_all_params.
_service.disable_retries()
self.test_get_membership_information_all_params()
class TestGetUuids():
"""
Test Class for get_uuids
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_uuids_all_params(self):
"""
get_uuids()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_uuids')
mock_response = '{"uuids": ["uuids"]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
count = 1
# Invoke method
response = _service.get_uuids(
count=count,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'count={}'.format(count) in query_string
def test_get_uuids_all_params_with_retries(self):
# Enable retries and run test_get_uuids_all_params.
_service.enable_retries()
self.test_get_uuids_all_params()
# Disable retries and run test_get_uuids_all_params.
_service.disable_retries()
self.test_get_uuids_all_params()
@responses.activate
def test_get_uuids_required_params(self):
"""
test_get_uuids_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_uuids')
mock_response = '{"uuids": ["uuids"]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Invoke method
response = _service.get_uuids()
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_uuids_required_params_with_retries(self):
# Enable retries and run test_get_uuids_required_params.
_service.enable_retries()
self.test_get_uuids_required_params()
# Disable retries and run test_get_uuids_required_params.
_service.disable_retries()
self.test_get_uuids_required_params()
class TestGetCapacityThroughputInformation():
"""
Test Class for get_capacity_throughput_information
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_capacity_throughput_information_all_params(self):
"""
get_capacity_throughput_information()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_api/v2/user/capacity/throughput')
mock_response = '{"current": {"throughput": {"blocks": 0, "query": 0, "read": 0, "write": 0}}, "target": {"throughput": {"blocks": 0, "query": 0, "read": 0, "write": 0}}}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Invoke method
response = _service.get_capacity_throughput_information()
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_capacity_throughput_information_all_params_with_retries(self):
# Enable retries and run test_get_capacity_throughput_information_all_params.
_service.enable_retries()
self.test_get_capacity_throughput_information_all_params()
# Disable retries and run test_get_capacity_throughput_information_all_params.
_service.disable_retries()
self.test_get_capacity_throughput_information_all_params()
class TestPutCapacityThroughputConfiguration():
"""
Test Class for put_capacity_throughput_configuration
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_put_capacity_throughput_configuration_all_params(self):
"""
put_capacity_throughput_configuration()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_api/v2/user/capacity/throughput')
mock_response = '{"current": {"throughput": {"blocks": 0, "query": 0, "read": 0, "write": 0}}, "target": {"throughput": {"blocks": 0, "query": 0, "read": 0, "write": 0}}}'
responses.add(responses.PUT,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
blocks = 0
# Invoke method
response = _service.put_capacity_throughput_configuration(
blocks,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['blocks'] == 0
def test_put_capacity_throughput_configuration_all_params_with_retries(self):
# Enable retries and run test_put_capacity_throughput_configuration_all_params.
_service.enable_retries()
self.test_put_capacity_throughput_configuration_all_params()
# Disable retries and run test_put_capacity_throughput_configuration_all_params.
_service.disable_retries()
self.test_put_capacity_throughput_configuration_all_params()
@responses.activate
def test_put_capacity_throughput_configuration_value_error(self):
"""
test_put_capacity_throughput_configuration_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_api/v2/user/capacity/throughput')
mock_response = '{"current": {"throughput": {"blocks": 0, "query": 0, "read": 0, "write": 0}}, "target": {"throughput": {"blocks": 0, "query": 0, "read": 0, "write": 0}}}'
responses.add(responses.PUT,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
blocks = 0
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"blocks": blocks,
}
for param in req_param_dict.keys():
req_copy = {key:val if key is not param else None for (key,val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.put_capacity_throughput_configuration(**req_copy)
def test_put_capacity_throughput_configuration_value_error_with_retries(self):
# Enable retries and run test_put_capacity_throughput_configuration_value_error.
_service.enable_retries()
self.test_put_capacity_throughput_configuration_value_error()
# Disable retries and run test_put_capacity_throughput_configuration_value_error.
_service.disable_retries()
self.test_put_capacity_throughput_configuration_value_error()
# endregion
##############################################################################
# End of Service: Server
##############################################################################
##############################################################################
# Start of Service: Changes
##############################################################################
# region
class TestNewInstance():
"""
Test Class for new_instance
"""
def test_new_instance(self):
"""
new_instance()
"""
os.environ['TEST_SERVICE_AUTH_TYPE'] = 'noAuth'
service = CloudantV1.new_instance(
service_name='TEST_SERVICE',
)
assert service is not None
assert isinstance(service, CloudantV1)
def test_new_instance_without_authenticator(self):
"""
new_instance_without_authenticator()
"""
with pytest.raises(ValueError, match='authenticator must be provided'):
service = CloudantV1.new_instance(
)
class TestGetDbUpdates():
"""
Test Class for get_db_updates
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_db_updates_all_params(self):
"""
get_db_updates()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_db_updates')
mock_response = '{"last_seq": "last_seq", "results": [{"account": "account", "db_name": "db_name", "seq": "seq", "type": "created"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
feed = 'normal'
heartbeat = 0
timeout = 0
since = '0'
# Invoke method
response = _service.get_db_updates(
feed=feed,
heartbeat=heartbeat,
timeout=timeout,
since=since,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'feed={}'.format(feed) in query_string
assert 'heartbeat={}'.format(heartbeat) in query_string
assert 'timeout={}'.format(timeout) in query_string
assert 'since={}'.format(since) in query_string
def test_get_db_updates_all_params_with_retries(self):
# Enable retries and run test_get_db_updates_all_params.
_service.enable_retries()
self.test_get_db_updates_all_params()
# Disable retries and run test_get_db_updates_all_params.
_service.disable_retries()
self.test_get_db_updates_all_params()
@responses.activate
def test_get_db_updates_required_params(self):
"""
test_get_db_updates_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_db_updates')
mock_response = '{"last_seq": "last_seq", "results": [{"account": "account", "db_name": "db_name", "seq": "seq", "type": "created"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Invoke method
response = _service.get_db_updates()
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_db_updates_required_params_with_retries(self):
# Enable retries and run test_get_db_updates_required_params.
_service.enable_retries()
self.test_get_db_updates_required_params()
# Disable retries and run test_get_db_updates_required_params.
_service.disable_retries()
self.test_get_db_updates_required_params()
class TestPostChanges():
"""
Test Class for post_changes
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_changes_all_params(self):
"""
post_changes()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_changes')
mock_response = '{"last_seq": "last_seq", "pending": 7, "results": [{"changes": [{"rev": "rev"}], "deleted": false, "doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "id": "id", "seq": "seq"}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
doc_ids = ['testString']
fields = ['testString']
selector = {}
last_event_id = 'testString'
att_encoding_info = False
attachments = False
conflicts = False
descending = False
feed = 'normal'
filter = 'testString'
heartbeat = 0
include_docs = False
limit = 0
seq_interval = 1
since = '0'
style = 'main_only'
timeout = 0
view = 'testString'
# Invoke method
response = _service.post_changes(
db,
doc_ids=doc_ids,
fields=fields,
selector=selector,
last_event_id=last_event_id,
att_encoding_info=att_encoding_info,
attachments=attachments,
conflicts=conflicts,
descending=descending,
feed=feed,
filter=filter,
heartbeat=heartbeat,
include_docs=include_docs,
limit=limit,
seq_interval=seq_interval,
since=since,
style=style,
timeout=timeout,
view=view,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'att_encoding_info={}'.format('true' if att_encoding_info else 'false') in query_string
assert 'attachments={}'.format('true' if attachments else 'false') in query_string
assert 'conflicts={}'.format('true' if conflicts else 'false') in query_string
assert 'descending={}'.format('true' if descending else 'false') in query_string
assert 'feed={}'.format(feed) in query_string
assert 'filter={}'.format(filter) in query_string
assert 'heartbeat={}'.format(heartbeat) in query_string
assert 'include_docs={}'.format('true' if include_docs else 'false') in query_string
assert 'limit={}'.format(limit) in query_string
assert 'seq_interval={}'.format(seq_interval) in query_string
assert 'since={}'.format(since) in query_string
assert 'style={}'.format(style) in query_string
assert 'timeout={}'.format(timeout) in query_string
assert 'view={}'.format(view) in query_string
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['doc_ids'] == ['testString']
assert req_body['fields'] == ['testString']
assert req_body['selector'] == {}
def test_post_changes_all_params_with_retries(self):
# Enable retries and run test_post_changes_all_params.
_service.enable_retries()
self.test_post_changes_all_params()
# Disable retries and run test_post_changes_all_params.
_service.disable_retries()
self.test_post_changes_all_params()
@responses.activate
def test_post_changes_required_params(self):
"""
test_post_changes_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_changes')
mock_response = '{"last_seq": "last_seq", "pending": 7, "results": [{"changes": [{"rev": "rev"}], "deleted": false, "doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "id": "id", "seq": "seq"}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
doc_ids = ['testString']
fields = ['testString']
selector = {}
# Invoke method
response = _service.post_changes(
db,
doc_ids=doc_ids,
fields=fields,
selector=selector,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['doc_ids'] == ['testString']
assert req_body['fields'] == ['testString']
assert req_body['selector'] == {}
def test_post_changes_required_params_with_retries(self):
# Enable retries and run test_post_changes_required_params.
_service.enable_retries()
self.test_post_changes_required_params()
# Disable retries and run test_post_changes_required_params.
_service.disable_retries()
self.test_post_changes_required_params()
@responses.activate
def test_post_changes_value_error(self):
"""
test_post_changes_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_changes')
mock_response = '{"last_seq": "last_seq", "pending": 7, "results": [{"changes": [{"rev": "rev"}], "deleted": false, "doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "id": "id", "seq": "seq"}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
doc_ids = ['testString']
fields = ['testString']
selector = {}
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_changes(**req_copy)
def test_post_changes_value_error_with_retries(self):
# Enable retries and run test_post_changes_value_error.
_service.enable_retries()
self.test_post_changes_value_error()
# Disable retries and run test_post_changes_value_error.
_service.disable_retries()
self.test_post_changes_value_error()
class TestPostChangesAsStream():
"""
Test Class for post_changes_as_stream
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_changes_as_stream_all_params(self):
"""
post_changes_as_stream()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_changes')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
doc_ids = ['0007741142412418284']
fields = ['testString']
selector = {}
last_event_id = 'testString'
att_encoding_info = False
attachments = False
conflicts = False
descending = False
feed = 'normal'
filter = 'testString'
heartbeat = 0
include_docs = False
limit = 0
seq_interval = 1
since = '0'
style = 'main_only'
timeout = 0
view = 'testString'
# Invoke method
response = _service.post_changes_as_stream(
db,
doc_ids=doc_ids,
fields=fields,
selector=selector,
last_event_id=last_event_id,
att_encoding_info=att_encoding_info,
attachments=attachments,
conflicts=conflicts,
descending=descending,
feed=feed,
filter=filter,
heartbeat=heartbeat,
include_docs=include_docs,
limit=limit,
seq_interval=seq_interval,
since=since,
style=style,
timeout=timeout,
view=view,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'att_encoding_info={}'.format('true' if att_encoding_info else 'false') in query_string
assert 'attachments={}'.format('true' if attachments else 'false') in query_string
assert 'conflicts={}'.format('true' if conflicts else 'false') in query_string
assert 'descending={}'.format('true' if descending else 'false') in query_string
assert 'feed={}'.format(feed) in query_string
assert 'filter={}'.format(filter) in query_string
assert 'heartbeat={}'.format(heartbeat) in query_string
assert 'include_docs={}'.format('true' if include_docs else 'false') in query_string
assert 'limit={}'.format(limit) in query_string
assert 'seq_interval={}'.format(seq_interval) in query_string
assert 'since={}'.format(since) in query_string
assert 'style={}'.format(style) in query_string
assert 'timeout={}'.format(timeout) in query_string
assert 'view={}'.format(view) in query_string
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['doc_ids'] == ['0007741142412418284']
assert req_body['fields'] == ['testString']
assert req_body['selector'] == {}
# Verify streamed JSON response
result = response.get_result()
assert isinstance(result, requests.models.Response)
response_buf = result.iter_content(chunk_size=1024)
assert str(next(response_buf), "utf-8") == mock_response
def test_post_changes_as_stream_all_params_with_retries(self):
# Enable retries and run test_post_changes_as_stream_all_params.
_service.enable_retries()
self.test_post_changes_as_stream_all_params()
# Disable retries and run test_post_changes_as_stream_all_params.
_service.disable_retries()
self.test_post_changes_as_stream_all_params()
@responses.activate
def test_post_changes_as_stream_required_params(self):
"""
test_post_changes_as_stream_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_changes')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
doc_ids = ['0007741142412418284']
fields = ['testString']
selector = {}
# Invoke method
response = _service.post_changes_as_stream(
db,
doc_ids=doc_ids,
fields=fields,
selector=selector,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['doc_ids'] == ['0007741142412418284']
assert req_body['fields'] == ['testString']
assert req_body['selector'] == {}
# Verify streamed JSON response
result = response.get_result()
assert isinstance(result, requests.models.Response)
response_buf = result.iter_content(chunk_size=1024)
assert str(next(response_buf), "utf-8") == mock_response
def test_post_changes_as_stream_required_params_with_retries(self):
# Enable retries and run test_post_changes_as_stream_required_params.
_service.enable_retries()
self.test_post_changes_as_stream_required_params()
# Disable retries and run test_post_changes_as_stream_required_params.
_service.disable_retries()
self.test_post_changes_as_stream_required_params()
@responses.activate
def test_post_changes_as_stream_value_error(self):
"""
test_post_changes_as_stream_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_changes')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
doc_ids = ['0007741142412418284']
fields = ['testString']
selector = {}
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_changes_as_stream(**req_copy)
def test_post_changes_as_stream_value_error_with_retries(self):
# Enable retries and run test_post_changes_as_stream_value_error.
_service.enable_retries()
self.test_post_changes_as_stream_value_error()
# Disable retries and run test_post_changes_as_stream_value_error.
_service.disable_retries()
self.test_post_changes_as_stream_value_error()
# endregion
##############################################################################
# End of Service: Changes
##############################################################################
##############################################################################
# Start of Service: Databases
##############################################################################
# region
class TestNewInstance():
"""
Test Class for new_instance
"""
def test_new_instance(self):
"""
new_instance()
"""
os.environ['TEST_SERVICE_AUTH_TYPE'] = 'noAuth'
service = CloudantV1.new_instance(
service_name='TEST_SERVICE',
)
assert service is not None
assert isinstance(service, CloudantV1)
def test_new_instance_without_authenticator(self):
"""
new_instance_without_authenticator()
"""
with pytest.raises(ValueError, match='authenticator must be provided'):
service = CloudantV1.new_instance()
class TestHeadDatabase():
"""
Test Class for head_database
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_head_database_all_params(self):
"""
head_database()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString')
responses.add(responses.HEAD,
url,
status=200)
# Set up parameter values
db = 'testString'
# Invoke method
response = _service.head_database(
db,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_head_database_all_params_with_retries(self):
# Enable retries and run test_head_database_all_params.
_service.enable_retries()
self.test_head_database_all_params()
# Disable retries and run test_head_database_all_params.
_service.disable_retries()
self.test_head_database_all_params()
@responses.activate
def test_head_database_value_error(self):
"""
test_head_database_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString')
responses.add(responses.HEAD,
url,
status=200)
# Set up parameter values
db = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.head_database(**req_copy)
def test_head_database_value_error_with_retries(self):
# Enable retries and run test_head_database_value_error.
_service.enable_retries()
self.test_head_database_value_error()
# Disable retries and run test_head_database_value_error.
_service.disable_retries()
self.test_head_database_value_error()
class TestGetAllDbs():
"""
Test Class for get_all_dbs
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_all_dbs_all_params(self):
"""
get_all_dbs()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_all_dbs')
mock_response = '["operation_response"]'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
descending = False
endkey = 'testString'
limit = 0
skip = 0
startkey = 'testString'
# Invoke method
response = _service.get_all_dbs(
descending=descending,
endkey=endkey,
limit=limit,
skip=skip,
startkey=startkey,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'descending={}'.format('true' if descending else 'false') in query_string
assert 'endkey={}'.format(endkey) in query_string
assert 'limit={}'.format(limit) in query_string
assert 'skip={}'.format(skip) in query_string
assert 'startkey={}'.format(startkey) in query_string
def test_get_all_dbs_all_params_with_retries(self):
# Enable retries and run test_get_all_dbs_all_params.
_service.enable_retries()
self.test_get_all_dbs_all_params()
# Disable retries and run test_get_all_dbs_all_params.
_service.disable_retries()
self.test_get_all_dbs_all_params()
@responses.activate
def test_get_all_dbs_required_params(self):
"""
test_get_all_dbs_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_all_dbs')
mock_response = '["operation_response"]'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Invoke method
response = _service.get_all_dbs()
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_all_dbs_required_params_with_retries(self):
# Enable retries and run test_get_all_dbs_required_params.
_service.enable_retries()
self.test_get_all_dbs_required_params()
# Disable retries and run test_get_all_dbs_required_params.
_service.disable_retries()
self.test_get_all_dbs_required_params()
class TestPostDbsInfo():
"""
Test Class for post_dbs_info
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_dbs_info_all_params(self):
"""
post_dbs_info()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_dbs_info')
mock_response = '[{"error": "error", "info": {"cluster": {"n": 1, "q": 1, "r": 1, "w": 1}, "committed_update_seq": "committed_update_seq", "compact_running": false, "compacted_seq": "compacted_seq", "db_name": "db_name", "disk_format_version": 19, "doc_count": 0, "doc_del_count": 0, "engine": "engine", "props": {"partitioned": false}, "sizes": {"active": 6, "external": 8, "file": 4}, "update_seq": "update_seq", "uuid": "uuid"}, "key": "key"}]'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
keys = ['testString']
# Invoke method
response = _service.post_dbs_info(
keys,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['keys'] == ['testString']
def test_post_dbs_info_all_params_with_retries(self):
# Enable retries and run test_post_dbs_info_all_params.
_service.enable_retries()
self.test_post_dbs_info_all_params()
# Disable retries and run test_post_dbs_info_all_params.
_service.disable_retries()
self.test_post_dbs_info_all_params()
@responses.activate
def test_post_dbs_info_value_error(self):
"""
test_post_dbs_info_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_dbs_info')
mock_response = '[{"error": "error", "info": {"cluster": {"n": 1, "q": 1, "r": 1, "w": 1}, "committed_update_seq": "committed_update_seq", "compact_running": false, "compacted_seq": "compacted_seq", "db_name": "db_name", "disk_format_version": 19, "doc_count": 0, "doc_del_count": 0, "engine": "engine", "props": {"partitioned": false}, "sizes": {"active": 6, "external": 8, "file": 4}, "update_seq": "update_seq", "uuid": "uuid"}, "key": "key"}]'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
keys = ['testString']
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"keys": keys,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_dbs_info(**req_copy)
def test_post_dbs_info_value_error_with_retries(self):
# Enable retries and run test_post_dbs_info_value_error.
_service.enable_retries()
self.test_post_dbs_info_value_error()
# Disable retries and run test_post_dbs_info_value_error.
_service.disable_retries()
self.test_post_dbs_info_value_error()
class TestDeleteDatabase():
"""
Test Class for delete_database
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_delete_database_all_params(self):
"""
delete_database()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString')
mock_response = '{"ok": true}'
responses.add(responses.DELETE,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
# Invoke method
response = _service.delete_database(
db,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_delete_database_all_params_with_retries(self):
# Enable retries and run test_delete_database_all_params.
_service.enable_retries()
self.test_delete_database_all_params()
# Disable retries and run test_delete_database_all_params.
_service.disable_retries()
self.test_delete_database_all_params()
@responses.activate
def test_delete_database_value_error(self):
"""
test_delete_database_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString')
mock_response = '{"ok": true}'
responses.add(responses.DELETE,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.delete_database(**req_copy)
def test_delete_database_value_error_with_retries(self):
# Enable retries and run test_delete_database_value_error.
_service.enable_retries()
self.test_delete_database_value_error()
# Disable retries and run test_delete_database_value_error.
_service.disable_retries()
self.test_delete_database_value_error()
class TestGetDatabaseInformation():
"""
Test Class for get_database_information
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_database_information_all_params(self):
"""
get_database_information()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString')
mock_response = '{"cluster": {"n": 1, "q": 1, "r": 1, "w": 1}, "committed_update_seq": "committed_update_seq", "compact_running": false, "compacted_seq": "compacted_seq", "db_name": "db_name", "disk_format_version": 19, "doc_count": 0, "doc_del_count": 0, "engine": "engine", "props": {"partitioned": false}, "sizes": {"active": 6, "external": 8, "file": 4}, "update_seq": "update_seq", "uuid": "uuid"}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
# Invoke method
response = _service.get_database_information(
db,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_database_information_all_params_with_retries(self):
# Enable retries and run test_get_database_information_all_params.
_service.enable_retries()
self.test_get_database_information_all_params()
# Disable retries and run test_get_database_information_all_params.
_service.disable_retries()
self.test_get_database_information_all_params()
@responses.activate
def test_get_database_information_value_error(self):
"""
test_get_database_information_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString')
mock_response = '{"cluster": {"n": 1, "q": 1, "r": 1, "w": 1}, "committed_update_seq": "committed_update_seq", "compact_running": false, "compacted_seq": "compacted_seq", "db_name": "db_name", "disk_format_version": 19, "doc_count": 0, "doc_del_count": 0, "engine": "engine", "props": {"partitioned": false}, "sizes": {"active": 6, "external": 8, "file": 4}, "update_seq": "update_seq", "uuid": "uuid"}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_database_information(**req_copy)
def test_get_database_information_value_error_with_retries(self):
# Enable retries and run test_get_database_information_value_error.
_service.enable_retries()
self.test_get_database_information_value_error()
# Disable retries and run test_get_database_information_value_error.
_service.disable_retries()
self.test_get_database_information_value_error()
class TestPutDatabase():
"""
Test Class for put_database
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_put_database_all_params(self):
"""
put_database()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString')
mock_response = '{"ok": true}'
responses.add(responses.PUT,
url,
body=mock_response,
content_type='application/json',
status=201)
# Set up parameter values
db = 'testString'
partitioned = False
q = 1
# Invoke method
response = _service.put_database(
db,
partitioned=partitioned,
q=q,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 201
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'partitioned={}'.format('true' if partitioned else 'false') in query_string
assert 'q={}'.format(q) in query_string
def test_put_database_all_params_with_retries(self):
# Enable retries and run test_put_database_all_params.
_service.enable_retries()
self.test_put_database_all_params()
# Disable retries and run test_put_database_all_params.
_service.disable_retries()
self.test_put_database_all_params()
@responses.activate
def test_put_database_required_params(self):
"""
test_put_database_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString')
mock_response = '{"ok": true}'
responses.add(responses.PUT,
url,
body=mock_response,
content_type='application/json',
status=201)
# Set up parameter values
db = 'testString'
# Invoke method
response = _service.put_database(
db,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 201
def test_put_database_required_params_with_retries(self):
# Enable retries and run test_put_database_required_params.
_service.enable_retries()
self.test_put_database_required_params()
# Disable retries and run test_put_database_required_params.
_service.disable_retries()
self.test_put_database_required_params()
@responses.activate
def test_put_database_value_error(self):
"""
test_put_database_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString')
mock_response = '{"ok": true}'
responses.add(responses.PUT,
url,
body=mock_response,
content_type='application/json',
status=201)
# Set up parameter values
db = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.put_database(**req_copy)
def test_put_database_value_error_with_retries(self):
# Enable retries and run test_put_database_value_error.
_service.enable_retries()
self.test_put_database_value_error()
# Disable retries and run test_put_database_value_error.
_service.disable_retries()
self.test_put_database_value_error()
# endregion
##############################################################################
# End of Service: Databases
##############################################################################
##############################################################################
# Start of Service: Documents
##############################################################################
# region
class TestNewInstance():
"""
Test Class for new_instance
"""
def test_new_instance(self):
"""
new_instance()
"""
os.environ['TEST_SERVICE_AUTH_TYPE'] = 'noAuth'
service = CloudantV1.new_instance(
service_name='TEST_SERVICE',
)
assert service is not None
assert isinstance(service, CloudantV1)
def test_new_instance_without_authenticator(self):
"""
new_instance_without_authenticator()
"""
with pytest.raises(ValueError, match='authenticator must be provided'):
service = CloudantV1.new_instance()
class TestHeadDocument():
"""
Test Class for head_document
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_head_document_all_params(self):
"""
head_document()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString')
responses.add(responses.HEAD,
url,
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
if_none_match = 'testString'
latest = False
rev = 'testString'
# Invoke method
response = _service.head_document(
db,
doc_id,
if_none_match=if_none_match,
latest=latest,
rev=rev,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'latest={}'.format('true' if latest else 'false') in query_string
assert 'rev={}'.format(rev) in query_string
def test_head_document_all_params_with_retries(self):
# Enable retries and run test_head_document_all_params.
_service.enable_retries()
self.test_head_document_all_params()
# Disable retries and run test_head_document_all_params.
_service.disable_retries()
self.test_head_document_all_params()
@responses.activate
def test_head_document_required_params(self):
"""
test_head_document_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString')
responses.add(responses.HEAD,
url,
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
# Invoke method
response = _service.head_document(
db,
doc_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_head_document_required_params_with_retries(self):
# Enable retries and run test_head_document_required_params.
_service.enable_retries()
self.test_head_document_required_params()
# Disable retries and run test_head_document_required_params.
_service.disable_retries()
self.test_head_document_required_params()
@responses.activate
def test_head_document_value_error(self):
"""
test_head_document_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString')
responses.add(responses.HEAD,
url,
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"doc_id": doc_id,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.head_document(**req_copy)
def test_head_document_value_error_with_retries(self):
# Enable retries and run test_head_document_value_error.
_service.enable_retries()
self.test_head_document_value_error()
# Disable retries and run test_head_document_value_error.
_service.disable_retries()
self.test_head_document_value_error()
class TestPostDocument():
"""
Test Class for post_document
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_document_all_params(self):
"""
post_document()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=201)
# Construct a dict representation of an Attachment model
attachment_model = {}
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
# Construct a dict representation of a Revisions model
revisions_model = {}
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
# Construct a dict representation of a DocumentRevisionStatus model
document_revision_status_model = {}
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
# Construct a dict representation of a Document model
document_model = {}
document_model['_attachments'] = {}
document_model['_conflicts'] = ['testString']
document_model['_deleted'] = True
document_model['_deleted_conflicts'] = ['testString']
document_model['_id'] = 'testString'
document_model['_local_seq'] = 'testString'
document_model['_rev'] = 'testString'
document_model['_revisions'] = revisions_model
document_model['_revs_info'] = [document_revision_status_model]
document_model['foo'] = 'testString'
# Set up parameter values
db = 'testString'
document = document_model
content_type = 'application/json'
batch = 'ok'
# Invoke method
response = _service.post_document(
db,
document,
content_type=content_type,
batch=batch,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 201
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'batch={}'.format(batch) in query_string
# Decompress the gzip-compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params (no assertions: the request body is the free-form document payload)
def test_post_document_all_params_with_retries(self):
# Enable retries and run test_post_document_all_params.
_service.enable_retries()
self.test_post_document_all_params()
# Disable retries and run test_post_document_all_params.
_service.disable_retries()
self.test_post_document_all_params()
@responses.activate
def test_post_document_required_params(self):
"""
test_post_document_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=201)
# Construct a dict representation of an Attachment model
attachment_model = {}
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
# Construct a dict representation of a Revisions model
revisions_model = {}
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
# Construct a dict representation of a DocumentRevisionStatus model
document_revision_status_model = {}
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
# Construct a dict representation of a Document model
document_model = {}
document_model['_attachments'] = {}
document_model['_conflicts'] = ['testString']
document_model['_deleted'] = True
document_model['_deleted_conflicts'] = ['testString']
document_model['_id'] = 'testString'
document_model['_local_seq'] = 'testString'
document_model['_rev'] = 'testString'
document_model['_revisions'] = revisions_model
document_model['_revs_info'] = [document_revision_status_model]
document_model['foo'] = 'testString'
# Set up parameter values
db = 'testString'
document = document_model
# Invoke method
response = _service.post_document(
db,
document,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 201
# Decompress the gzip-compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params (no assertions: the request body is the free-form document payload)
def test_post_document_required_params_with_retries(self):
# Enable retries and run test_post_document_required_params.
_service.enable_retries()
self.test_post_document_required_params()
# Disable retries and run test_post_document_required_params.
_service.disable_retries()
self.test_post_document_required_params()
@responses.activate
def test_post_document_value_error(self):
"""
test_post_document_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=201)
# Construct a dict representation of an Attachment model
attachment_model = {}
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
# Construct a dict representation of a Revisions model
revisions_model = {}
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
# Construct a dict representation of a DocumentRevisionStatus model
document_revision_status_model = {}
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
# Construct a dict representation of a Document model
document_model = {}
document_model['_attachments'] = {}
document_model['_conflicts'] = ['testString']
document_model['_deleted'] = True
document_model['_deleted_conflicts'] = ['testString']
document_model['_id'] = 'testString'
document_model['_local_seq'] = 'testString'
document_model['_rev'] = 'testString'
document_model['_revisions'] = revisions_model
document_model['_revs_info'] = [document_revision_status_model]
document_model['foo'] = 'testString'
# Set up parameter values
db = 'testString'
document = document_model
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"document": document,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_document(**req_copy)
def test_post_document_value_error_with_retries(self):
# Enable retries and run test_post_document_value_error.
_service.enable_retries()
self.test_post_document_value_error()
# Disable retries and run test_post_document_value_error.
_service.disable_retries()
self.test_post_document_value_error()
class TestPostAllDocs():
"""
Test Class for post_all_docs
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_all_docs_all_params(self):
"""
post_all_docs()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_all_docs')
mock_response = '{"total_rows": 0, "rows": [{"caused_by": "caused_by", "error": "error", "reason": "reason", "doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "id": "id", "key": "key", "value": {"rev": "rev"}}], "update_seq": "update_seq"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
att_encoding_info = False
attachments = False
conflicts = False
descending = False
include_docs = False
inclusive_end = True
limit = 0
skip = 0
update_seq = False
endkey = 'testString'
key = 'testString'
keys = ['testString']
startkey = 'testString'
# Invoke method
response = _service.post_all_docs(
db,
att_encoding_info=att_encoding_info,
attachments=attachments,
conflicts=conflicts,
descending=descending,
include_docs=include_docs,
inclusive_end=inclusive_end,
limit=limit,
skip=skip,
update_seq=update_seq,
endkey=endkey,
key=key,
keys=keys,
startkey=startkey,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Decompress the gzip-compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['att_encoding_info'] == False
assert req_body['attachments'] == False
assert req_body['conflicts'] == False
assert req_body['descending'] == False
assert req_body['include_docs'] == False
assert req_body['inclusive_end'] == True
assert req_body['limit'] == 0
assert req_body['skip'] == 0
assert req_body['update_seq'] == False
assert req_body['endkey'] == 'testString'
assert req_body['key'] == 'testString'
assert req_body['keys'] == ['testString']
assert req_body['startkey'] == 'testString'
def test_post_all_docs_all_params_with_retries(self):
# Enable retries and run test_post_all_docs_all_params.
_service.enable_retries()
self.test_post_all_docs_all_params()
# Disable retries and run test_post_all_docs_all_params.
_service.disable_retries()
self.test_post_all_docs_all_params()
@responses.activate
def test_post_all_docs_value_error(self):
"""
test_post_all_docs_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_all_docs')
mock_response = '{"total_rows": 0, "rows": [{"caused_by": "caused_by", "error": "error", "reason": "reason", "doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "id": "id", "key": "key", "value": {"rev": "rev"}}], "update_seq": "update_seq"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
att_encoding_info = False
attachments = False
conflicts = False
descending = False
include_docs = False
inclusive_end = True
limit = 0
skip = 0
update_seq = False
endkey = 'testString'
key = 'testString'
keys = ['testString']
startkey = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_all_docs(**req_copy)
def test_post_all_docs_value_error_with_retries(self):
# Enable retries and run test_post_all_docs_value_error.
_service.enable_retries()
self.test_post_all_docs_value_error()
# Disable retries and run test_post_all_docs_value_error.
_service.disable_retries()
self.test_post_all_docs_value_error()
class TestPostAllDocsAsStream():
"""
Test Class for post_all_docs_as_stream
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_all_docs_as_stream_all_params(self):
"""
post_all_docs_as_stream()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_all_docs')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
att_encoding_info = False
attachments = False
conflicts = False
descending = False
include_docs = False
inclusive_end = True
limit = 10
skip = 0
update_seq = False
endkey = 'testString'
key = 'testString'
keys = ['testString']
startkey = '0007741142412418284'
# Invoke method
response = _service.post_all_docs_as_stream(
db,
att_encoding_info=att_encoding_info,
attachments=attachments,
conflicts=conflicts,
descending=descending,
include_docs=include_docs,
inclusive_end=inclusive_end,
limit=limit,
skip=skip,
update_seq=update_seq,
endkey=endkey,
key=key,
keys=keys,
startkey=startkey,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Decompress the gzip-compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['att_encoding_info'] == False
assert req_body['attachments'] == False
assert req_body['conflicts'] == False
assert req_body['descending'] == False
assert req_body['include_docs'] == False
assert req_body['inclusive_end'] == True
assert req_body['limit'] == 10
assert req_body['skip'] == 0
assert req_body['update_seq'] == False
assert req_body['endkey'] == 'testString'
assert req_body['key'] == 'testString'
assert req_body['keys'] == ['testString']
assert req_body['startkey'] == '0007741142412418284'
# Verify streamed JSON response
result = response.get_result()
assert isinstance(result, requests.models.Response)
response_buf = result.iter_content(chunk_size=1024)
assert str(next(response_buf), "utf-8") == mock_response
def test_post_all_docs_as_stream_all_params_with_retries(self):
# Enable retries and run test_post_all_docs_as_stream_all_params.
_service.enable_retries()
self.test_post_all_docs_as_stream_all_params()
# Disable retries and run test_post_all_docs_as_stream_all_params.
_service.disable_retries()
self.test_post_all_docs_as_stream_all_params()
@responses.activate
def test_post_all_docs_as_stream_value_error(self):
"""
test_post_all_docs_as_stream_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_all_docs')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
att_encoding_info = False
attachments = False
conflicts = False
descending = False
include_docs = False
inclusive_end = True
limit = 10
skip = 0
update_seq = False
endkey = 'testString'
key = 'testString'
keys = ['testString']
startkey = '0007741142412418284'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_all_docs_as_stream(**req_copy)
def test_post_all_docs_as_stream_value_error_with_retries(self):
# Enable retries and run test_post_all_docs_as_stream_value_error.
_service.enable_retries()
self.test_post_all_docs_as_stream_value_error()
# Disable retries and run test_post_all_docs_as_stream_value_error.
_service.disable_retries()
self.test_post_all_docs_as_stream_value_error()
class TestPostAllDocsQueries():
"""
Test Class for post_all_docs_queries
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_all_docs_queries_all_params(self):
"""
post_all_docs_queries()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_all_docs/queries')
mock_response = '{"results": [{"total_rows": 0, "rows": [{"caused_by": "caused_by", "error": "error", "reason": "reason", "doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "id": "id", "key": "key", "value": {"rev": "rev"}}], "update_seq": "update_seq"}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of an AllDocsQuery model
all_docs_query_model = {}
all_docs_query_model['att_encoding_info'] = False
all_docs_query_model['attachments'] = False
all_docs_query_model['conflicts'] = False
all_docs_query_model['descending'] = False
all_docs_query_model['include_docs'] = False
all_docs_query_model['inclusive_end'] = True
all_docs_query_model['limit'] = 0
all_docs_query_model['skip'] = 0
all_docs_query_model['update_seq'] = False
all_docs_query_model['endkey'] = 'testString'
all_docs_query_model['key'] = 'testString'
all_docs_query_model['keys'] = ['testString']
all_docs_query_model['startkey'] = 'testString'
# Set up parameter values
db = 'testString'
queries = [all_docs_query_model]
# Invoke method
response = _service.post_all_docs_queries(
db,
queries,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Decompress the gzip-compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['queries'] == [all_docs_query_model]
def test_post_all_docs_queries_all_params_with_retries(self):
# Enable retries and run test_post_all_docs_queries_all_params.
_service.enable_retries()
self.test_post_all_docs_queries_all_params()
# Disable retries and run test_post_all_docs_queries_all_params.
_service.disable_retries()
self.test_post_all_docs_queries_all_params()
@responses.activate
def test_post_all_docs_queries_value_error(self):
"""
test_post_all_docs_queries_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_all_docs/queries')
mock_response = '{"results": [{"total_rows": 0, "rows": [{"caused_by": "caused_by", "error": "error", "reason": "reason", "doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "id": "id", "key": "key", "value": {"rev": "rev"}}], "update_seq": "update_seq"}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of an AllDocsQuery model
all_docs_query_model = {}
all_docs_query_model['att_encoding_info'] = False
all_docs_query_model['attachments'] = False
all_docs_query_model['conflicts'] = False
all_docs_query_model['descending'] = False
all_docs_query_model['include_docs'] = False
all_docs_query_model['inclusive_end'] = True
all_docs_query_model['limit'] = 0
all_docs_query_model['skip'] = 0
all_docs_query_model['update_seq'] = False
all_docs_query_model['endkey'] = 'testString'
all_docs_query_model['key'] = 'testString'
all_docs_query_model['keys'] = ['testString']
all_docs_query_model['startkey'] = 'testString'
# Set up parameter values
db = 'testString'
queries = [all_docs_query_model]
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"queries": queries,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_all_docs_queries(**req_copy)
def test_post_all_docs_queries_value_error_with_retries(self):
# Enable retries and run test_post_all_docs_queries_value_error.
_service.enable_retries()
self.test_post_all_docs_queries_value_error()
# Disable retries and run test_post_all_docs_queries_value_error.
_service.disable_retries()
self.test_post_all_docs_queries_value_error()
class TestPostAllDocsQueriesAsStream():
"""
Test Class for post_all_docs_queries_as_stream
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_all_docs_queries_as_stream_all_params(self):
"""
post_all_docs_queries_as_stream()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_all_docs/queries')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of an AllDocsQuery model
all_docs_query_model = {}
all_docs_query_model['att_encoding_info'] = False
all_docs_query_model['attachments'] = False
all_docs_query_model['conflicts'] = False
all_docs_query_model['descending'] = False
all_docs_query_model['include_docs'] = False
all_docs_query_model['inclusive_end'] = True
all_docs_query_model['limit'] = 0
all_docs_query_model['skip'] = 0
all_docs_query_model['update_seq'] = False
all_docs_query_model['endkey'] = 'testString'
all_docs_query_model['key'] = 'testString'
all_docs_query_model['keys'] = ['small-appliances:1000042', 'small-appliances:1000043']
all_docs_query_model['startkey'] = 'testString'
# Set up parameter values
db = 'testString'
queries = [all_docs_query_model]
# Invoke method
response = _service.post_all_docs_queries_as_stream(
db,
queries,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Decompress the gzip-compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['queries'] == [all_docs_query_model]
# Verify streamed JSON response
result = response.get_result()
assert isinstance(result, requests.models.Response)
response_buf = result.iter_content(chunk_size=1024)
assert str(next(response_buf), "utf-8") == mock_response
def test_post_all_docs_queries_as_stream_all_params_with_retries(self):
# Enable retries and run test_post_all_docs_queries_as_stream_all_params.
_service.enable_retries()
self.test_post_all_docs_queries_as_stream_all_params()
# Disable retries and run test_post_all_docs_queries_as_stream_all_params.
_service.disable_retries()
self.test_post_all_docs_queries_as_stream_all_params()
@responses.activate
def test_post_all_docs_queries_as_stream_value_error(self):
"""
test_post_all_docs_queries_as_stream_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_all_docs/queries')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of an AllDocsQuery model
all_docs_query_model = {}
all_docs_query_model['att_encoding_info'] = False
all_docs_query_model['attachments'] = False
all_docs_query_model['conflicts'] = False
all_docs_query_model['descending'] = False
all_docs_query_model['include_docs'] = False
all_docs_query_model['inclusive_end'] = True
all_docs_query_model['limit'] = 0
all_docs_query_model['skip'] = 0
all_docs_query_model['update_seq'] = False
all_docs_query_model['endkey'] = 'testString'
all_docs_query_model['key'] = 'testString'
all_docs_query_model['keys'] = ['small-appliances:1000042', 'small-appliances:1000043']
all_docs_query_model['startkey'] = 'testString'
# Set up parameter values
db = 'testString'
queries = [all_docs_query_model]
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"queries": queries,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_all_docs_queries_as_stream(**req_copy)
def test_post_all_docs_queries_as_stream_value_error_with_retries(self):
# Enable retries and run test_post_all_docs_queries_as_stream_value_error.
_service.enable_retries()
self.test_post_all_docs_queries_as_stream_value_error()
# Disable retries and run test_post_all_docs_queries_as_stream_value_error.
_service.disable_retries()
self.test_post_all_docs_queries_as_stream_value_error()
class TestPostBulkDocs():
"""
Test Class for post_bulk_docs
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_bulk_docs_all_params(self):
"""
post_bulk_docs()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_bulk_docs')
mock_response = '[{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}]'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=201)
# Construct a dict representation of an Attachment model
attachment_model = {}
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
# Construct a dict representation of a Revisions model
revisions_model = {}
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
# Construct a dict representation of a DocumentRevisionStatus model
document_revision_status_model = {}
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
# Construct a dict representation of a Document model
document_model = {}
document_model['_attachments'] = {}
document_model['_conflicts'] = ['testString']
document_model['_deleted'] = True
document_model['_deleted_conflicts'] = ['testString']
document_model['_id'] = 'testString'
document_model['_local_seq'] = 'testString'
document_model['_rev'] = 'testString'
document_model['_revisions'] = revisions_model
document_model['_revs_info'] = [document_revision_status_model]
document_model['foo'] = 'testString'
# Construct a dict representation of a BulkDocs model
bulk_docs_model = {}
bulk_docs_model['docs'] = [document_model]
bulk_docs_model['new_edits'] = True
# Set up parameter values
db = 'testString'
bulk_docs = bulk_docs_model
# Invoke method
response = _service.post_bulk_docs(
db,
bulk_docs,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 201
# Decompress the gzip-compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body == bulk_docs
def test_post_bulk_docs_all_params_with_retries(self):
# Enable retries and run test_post_bulk_docs_all_params.
_service.enable_retries()
self.test_post_bulk_docs_all_params()
# Disable retries and run test_post_bulk_docs_all_params.
_service.disable_retries()
self.test_post_bulk_docs_all_params()
@responses.activate
def test_post_bulk_docs_value_error(self):
"""
test_post_bulk_docs_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_bulk_docs')
mock_response = '[{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}]'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=201)
# Construct a dict representation of an Attachment model
attachment_model = {}
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
# Construct a dict representation of a Revisions model
revisions_model = {}
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
# Construct a dict representation of a DocumentRevisionStatus model
document_revision_status_model = {}
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
# Construct a dict representation of a Document model
document_model = {}
document_model['_attachments'] = {}
document_model['_conflicts'] = ['testString']
document_model['_deleted'] = True
document_model['_deleted_conflicts'] = ['testString']
document_model['_id'] = 'testString'
document_model['_local_seq'] = 'testString'
document_model['_rev'] = 'testString'
document_model['_revisions'] = revisions_model
document_model['_revs_info'] = [document_revision_status_model]
document_model['foo'] = 'testString'
# Construct a dict representation of a BulkDocs model
bulk_docs_model = {}
bulk_docs_model['docs'] = [document_model]
bulk_docs_model['new_edits'] = True
# Set up parameter values
db = 'testString'
bulk_docs = bulk_docs_model
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"bulk_docs": bulk_docs,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_bulk_docs(**req_copy)
def test_post_bulk_docs_value_error_with_retries(self):
# Enable retries and run test_post_bulk_docs_value_error.
_service.enable_retries()
self.test_post_bulk_docs_value_error()
# Disable retries and run test_post_bulk_docs_value_error.
_service.disable_retries()
self.test_post_bulk_docs_value_error()
class TestPostBulkGet():
"""
Test Class for post_bulk_get
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_bulk_get_all_params(self):
"""
post_bulk_get()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_bulk_get')
mock_response = '{"results": [{"docs": [{"error": {"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}, "ok": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}}], "id": "id"}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of a BulkGetQueryDocument model
bulk_get_query_document_model = {}
bulk_get_query_document_model['atts_since'] = ['1-99b02e08da151943c2dcb40090160bb8']
bulk_get_query_document_model['id'] = 'testString'
bulk_get_query_document_model['rev'] = 'testString'
# Set up parameter values
db = 'testString'
docs = [bulk_get_query_document_model]
attachments = False
att_encoding_info = False
latest = False
revs = False
# Invoke method
response = _service.post_bulk_get(
db,
docs,
attachments=attachments,
att_encoding_info=att_encoding_info,
latest=latest,
revs=revs,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'attachments={}'.format('true' if attachments else 'false') in query_string
assert 'att_encoding_info={}'.format('true' if att_encoding_info else 'false') in query_string
assert 'latest={}'.format('true' if latest else 'false') in query_string
assert 'revs={}'.format('true' if revs else 'false') in query_string
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['docs'] == [bulk_get_query_document_model]
def test_post_bulk_get_all_params_with_retries(self):
# Enable retries and run test_post_bulk_get_all_params.
_service.enable_retries()
self.test_post_bulk_get_all_params()
# Disable retries and run test_post_bulk_get_all_params.
_service.disable_retries()
self.test_post_bulk_get_all_params()
@responses.activate
def test_post_bulk_get_required_params(self):
"""
test_post_bulk_get_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_bulk_get')
mock_response = '{"results": [{"docs": [{"error": {"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}, "ok": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}}], "id": "id"}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of a BulkGetQueryDocument model
bulk_get_query_document_model = {}
bulk_get_query_document_model['atts_since'] = ['1-99b02e08da151943c2dcb40090160bb8']
bulk_get_query_document_model['id'] = 'testString'
bulk_get_query_document_model['rev'] = 'testString'
# Set up parameter values
db = 'testString'
docs = [bulk_get_query_document_model]
# Invoke method
response = _service.post_bulk_get(
db,
docs,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['docs'] == [bulk_get_query_document_model]
def test_post_bulk_get_required_params_with_retries(self):
# Enable retries and run test_post_bulk_get_required_params.
_service.enable_retries()
self.test_post_bulk_get_required_params()
# Disable retries and run test_post_bulk_get_required_params.
_service.disable_retries()
self.test_post_bulk_get_required_params()
@responses.activate
def test_post_bulk_get_value_error(self):
"""
test_post_bulk_get_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_bulk_get')
mock_response = '{"results": [{"docs": [{"error": {"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}, "ok": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}}], "id": "id"}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of a BulkGetQueryDocument model
bulk_get_query_document_model = {}
bulk_get_query_document_model['atts_since'] = ['1-99b02e08da151943c2dcb40090160bb8']
bulk_get_query_document_model['id'] = 'testString'
bulk_get_query_document_model['rev'] = 'testString'
# Set up parameter values
db = 'testString'
docs = [bulk_get_query_document_model]
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"docs": docs,
}
for param in req_param_dict.keys():
# Use != (value equality), not "is not" (identity), when comparing strings
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_bulk_get(**req_copy)
def test_post_bulk_get_value_error_with_retries(self):
# Enable retries and run test_post_bulk_get_value_error.
_service.enable_retries()
self.test_post_bulk_get_value_error()
# Disable retries and run test_post_bulk_get_value_error.
_service.disable_retries()
self.test_post_bulk_get_value_error()
class TestPostBulkGetAsMixed():
"""
Test Class for post_bulk_get_as_mixed
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_bulk_get_as_mixed_all_params(self):
"""
post_bulk_get_as_mixed()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_bulk_get')
mock_response = 'This is a mock binary response.'
responses.add(responses.POST,
url,
body=mock_response,
content_type='multipart/mixed',
status=200)
# Construct a dict representation of a BulkGetQueryDocument model
bulk_get_query_document_model = {}
bulk_get_query_document_model['atts_since'] = ['1-99b02e08da151943c2dcb40090160bb8']
bulk_get_query_document_model['id'] = 'order00067'
bulk_get_query_document_model['rev'] = '3-917fa2381192822767f010b95b45325b'
# Set up parameter values
db = 'testString'
docs = [bulk_get_query_document_model]
attachments = False
att_encoding_info = False
latest = False
revs = False
# Invoke method
response = _service.post_bulk_get_as_mixed(
db,
docs,
attachments=attachments,
att_encoding_info=att_encoding_info,
latest=latest,
revs=revs,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'attachments={}'.format('true' if attachments else 'false') in query_string
assert 'att_encoding_info={}'.format('true' if att_encoding_info else 'false') in query_string
assert 'latest={}'.format('true' if latest else 'false') in query_string
assert 'revs={}'.format('true' if revs else 'false') in query_string
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['docs'] == [bulk_get_query_document_model]
def test_post_bulk_get_as_mixed_all_params_with_retries(self):
# Enable retries and run test_post_bulk_get_as_mixed_all_params.
_service.enable_retries()
self.test_post_bulk_get_as_mixed_all_params()
# Disable retries and run test_post_bulk_get_as_mixed_all_params.
_service.disable_retries()
self.test_post_bulk_get_as_mixed_all_params()
@responses.activate
def test_post_bulk_get_as_mixed_required_params(self):
"""
test_post_bulk_get_as_mixed_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_bulk_get')
mock_response = 'This is a mock binary response.'
responses.add(responses.POST,
url,
body=mock_response,
content_type='multipart/mixed',
status=200)
# Construct a dict representation of a BulkGetQueryDocument model
bulk_get_query_document_model = {}
bulk_get_query_document_model['atts_since'] = ['1-99b02e08da151943c2dcb40090160bb8']
bulk_get_query_document_model['id'] = 'order00067'
bulk_get_query_document_model['rev'] = '3-917fa2381192822767f010b95b45325b'
# Set up parameter values
db = 'testString'
docs = [bulk_get_query_document_model]
# Invoke method
response = _service.post_bulk_get_as_mixed(
db,
docs,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['docs'] == [bulk_get_query_document_model]
def test_post_bulk_get_as_mixed_required_params_with_retries(self):
# Enable retries and run test_post_bulk_get_as_mixed_required_params.
_service.enable_retries()
self.test_post_bulk_get_as_mixed_required_params()
# Disable retries and run test_post_bulk_get_as_mixed_required_params.
_service.disable_retries()
self.test_post_bulk_get_as_mixed_required_params()
@responses.activate
def test_post_bulk_get_as_mixed_value_error(self):
"""
test_post_bulk_get_as_mixed_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_bulk_get')
mock_response = 'This is a mock binary response.'
responses.add(responses.POST,
url,
body=mock_response,
content_type='multipart/mixed',
status=200)
# Construct a dict representation of a BulkGetQueryDocument model
bulk_get_query_document_model = {}
bulk_get_query_document_model['atts_since'] = ['1-99b02e08da151943c2dcb40090160bb8']
bulk_get_query_document_model['id'] = 'order00067'
bulk_get_query_document_model['rev'] = '3-917fa2381192822767f010b95b45325b'
# Set up parameter values
db = 'testString'
docs = [bulk_get_query_document_model]
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"docs": docs,
}
for param in req_param_dict.keys():
# Use != (value equality), not "is not" (identity), when comparing strings
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_bulk_get_as_mixed(**req_copy)
def test_post_bulk_get_as_mixed_value_error_with_retries(self):
# Enable retries and run test_post_bulk_get_as_mixed_value_error.
_service.enable_retries()
self.test_post_bulk_get_as_mixed_value_error()
# Disable retries and run test_post_bulk_get_as_mixed_value_error.
_service.disable_retries()
self.test_post_bulk_get_as_mixed_value_error()
class TestPostBulkGetAsRelated():
"""
Test Class for post_bulk_get_as_related
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_bulk_get_as_related_all_params(self):
"""
post_bulk_get_as_related()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_bulk_get')
mock_response = 'This is a mock binary response.'
responses.add(responses.POST,
url,
body=mock_response,
content_type='multipart/related',
status=200)
# Construct a dict representation of a BulkGetQueryDocument model
bulk_get_query_document_model = {}
bulk_get_query_document_model['atts_since'] = ['1-99b02e08da151943c2dcb40090160bb8']
bulk_get_query_document_model['id'] = 'order00067'
bulk_get_query_document_model['rev'] = '3-917fa2381192822767f010b95b45325b'
# Set up parameter values
db = 'testString'
docs = [bulk_get_query_document_model]
attachments = False
att_encoding_info = False
latest = False
revs = False
# Invoke method
response = _service.post_bulk_get_as_related(
db,
docs,
attachments=attachments,
att_encoding_info=att_encoding_info,
latest=latest,
revs=revs,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'attachments={}'.format('true' if attachments else 'false') in query_string
assert 'att_encoding_info={}'.format('true' if att_encoding_info else 'false') in query_string
assert 'latest={}'.format('true' if latest else 'false') in query_string
assert 'revs={}'.format('true' if revs else 'false') in query_string
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['docs'] == [bulk_get_query_document_model]
def test_post_bulk_get_as_related_all_params_with_retries(self):
# Enable retries and run test_post_bulk_get_as_related_all_params.
_service.enable_retries()
self.test_post_bulk_get_as_related_all_params()
# Disable retries and run test_post_bulk_get_as_related_all_params.
_service.disable_retries()
self.test_post_bulk_get_as_related_all_params()
@responses.activate
def test_post_bulk_get_as_related_required_params(self):
"""
test_post_bulk_get_as_related_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_bulk_get')
mock_response = 'This is a mock binary response.'
responses.add(responses.POST,
url,
body=mock_response,
content_type='multipart/related',
status=200)
# Construct a dict representation of a BulkGetQueryDocument model
bulk_get_query_document_model = {}
bulk_get_query_document_model['atts_since'] = ['1-99b02e08da151943c2dcb40090160bb8']
bulk_get_query_document_model['id'] = 'order00067'
bulk_get_query_document_model['rev'] = '3-917fa2381192822767f010b95b45325b'
# Set up parameter values
db = 'testString'
docs = [bulk_get_query_document_model]
# Invoke method
response = _service.post_bulk_get_as_related(
db,
docs,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['docs'] == [bulk_get_query_document_model]
def test_post_bulk_get_as_related_required_params_with_retries(self):
# Enable retries and run test_post_bulk_get_as_related_required_params.
_service.enable_retries()
self.test_post_bulk_get_as_related_required_params()
# Disable retries and run test_post_bulk_get_as_related_required_params.
_service.disable_retries()
self.test_post_bulk_get_as_related_required_params()
@responses.activate
def test_post_bulk_get_as_related_value_error(self):
"""
test_post_bulk_get_as_related_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_bulk_get')
mock_response = 'This is a mock binary response.'
responses.add(responses.POST,
url,
body=mock_response,
content_type='multipart/related',
status=200)
# Construct a dict representation of a BulkGetQueryDocument model
bulk_get_query_document_model = {}
bulk_get_query_document_model['atts_since'] = ['1-99b02e08da151943c2dcb40090160bb8']
bulk_get_query_document_model['id'] = 'order00067'
bulk_get_query_document_model['rev'] = '3-917fa2381192822767f010b95b45325b'
# Set up parameter values
db = 'testString'
docs = [bulk_get_query_document_model]
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"docs": docs,
}
for param in req_param_dict.keys():
# Use != (value equality), not "is not" (identity), when comparing strings
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_bulk_get_as_related(**req_copy)
def test_post_bulk_get_as_related_value_error_with_retries(self):
# Enable retries and run test_post_bulk_get_as_related_value_error.
_service.enable_retries()
self.test_post_bulk_get_as_related_value_error()
# Disable retries and run test_post_bulk_get_as_related_value_error.
_service.disable_retries()
self.test_post_bulk_get_as_related_value_error()
class TestPostBulkGetAsStream():
"""
Test Class for post_bulk_get_as_stream
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_bulk_get_as_stream_all_params(self):
"""
post_bulk_get_as_stream()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_bulk_get')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of a BulkGetQueryDocument model
bulk_get_query_document_model = {}
bulk_get_query_document_model['atts_since'] = ['1-99b02e08da151943c2dcb40090160bb8']
bulk_get_query_document_model['id'] = 'order00067'
bulk_get_query_document_model['rev'] = '3-917fa2381192822767f010b95b45325b'
# Set up parameter values
db = 'testString'
docs = [bulk_get_query_document_model]
attachments = False
att_encoding_info = False
latest = False
revs = False
# Invoke method
response = _service.post_bulk_get_as_stream(
db,
docs,
attachments=attachments,
att_encoding_info=att_encoding_info,
latest=latest,
revs=revs,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'attachments={}'.format('true' if attachments else 'false') in query_string
assert 'att_encoding_info={}'.format('true' if att_encoding_info else 'false') in query_string
assert 'latest={}'.format('true' if latest else 'false') in query_string
assert 'revs={}'.format('true' if revs else 'false') in query_string
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['docs'] == [bulk_get_query_document_model]
# Verify streamed JSON response
result = response.get_result()
assert isinstance(result, requests.models.Response)
response_buf = result.iter_content(chunk_size=1024)
assert str(next(response_buf), "utf-8") == mock_response
def test_post_bulk_get_as_stream_all_params_with_retries(self):
# Enable retries and run test_post_bulk_get_as_stream_all_params.
_service.enable_retries()
self.test_post_bulk_get_as_stream_all_params()
# Disable retries and run test_post_bulk_get_as_stream_all_params.
_service.disable_retries()
self.test_post_bulk_get_as_stream_all_params()
@responses.activate
def test_post_bulk_get_as_stream_required_params(self):
"""
test_post_bulk_get_as_stream_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_bulk_get')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of a BulkGetQueryDocument model
bulk_get_query_document_model = {}
bulk_get_query_document_model['atts_since'] = ['1-99b02e08da151943c2dcb40090160bb8']
bulk_get_query_document_model['id'] = 'order00067'
bulk_get_query_document_model['rev'] = '3-917fa2381192822767f010b95b45325b'
# Set up parameter values
db = 'testString'
docs = [bulk_get_query_document_model]
# Invoke method
response = _service.post_bulk_get_as_stream(
db,
docs,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['docs'] == [bulk_get_query_document_model]
# Verify streamed JSON response
result = response.get_result()
assert isinstance(result, requests.models.Response)
response_buf = result.iter_content(chunk_size=1024)
assert str(next(response_buf), "utf-8") == mock_response
def test_post_bulk_get_as_stream_required_params_with_retries(self):
# Enable retries and run test_post_bulk_get_as_stream_required_params.
_service.enable_retries()
self.test_post_bulk_get_as_stream_required_params()
# Disable retries and run test_post_bulk_get_as_stream_required_params.
_service.disable_retries()
self.test_post_bulk_get_as_stream_required_params()
@responses.activate
def test_post_bulk_get_as_stream_value_error(self):
"""
test_post_bulk_get_as_stream_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_bulk_get')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of a BulkGetQueryDocument model
bulk_get_query_document_model = {}
bulk_get_query_document_model['atts_since'] = ['1-99b02e08da151943c2dcb40090160bb8']
bulk_get_query_document_model['id'] = 'order00067'
bulk_get_query_document_model['rev'] = '3-917fa2381192822767f010b95b45325b'
# Set up parameter values
db = 'testString'
docs = [bulk_get_query_document_model]
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"docs": docs,
}
for param in req_param_dict.keys():
# Use != (value equality), not "is not" (identity), when comparing strings
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_bulk_get_as_stream(**req_copy)
def test_post_bulk_get_as_stream_value_error_with_retries(self):
# Enable retries and run test_post_bulk_get_as_stream_value_error.
_service.enable_retries()
self.test_post_bulk_get_as_stream_value_error()
# Disable retries and run test_post_bulk_get_as_stream_value_error.
_service.disable_retries()
self.test_post_bulk_get_as_stream_value_error()
class TestDeleteDocument():
"""
Test Class for delete_document
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_delete_document_all_params(self):
"""
delete_document()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.DELETE,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
if_match = 'testString'
batch = 'ok'
rev = 'testString'
# Invoke method
response = _service.delete_document(
db,
doc_id,
if_match=if_match,
batch=batch,
rev=rev,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'batch={}'.format(batch) in query_string
assert 'rev={}'.format(rev) in query_string
def test_delete_document_all_params_with_retries(self):
# Enable retries and run test_delete_document_all_params.
_service.enable_retries()
self.test_delete_document_all_params()
# Disable retries and run test_delete_document_all_params.
_service.disable_retries()
self.test_delete_document_all_params()
@responses.activate
def test_delete_document_required_params(self):
"""
test_delete_document_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.DELETE,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
# Invoke method
response = _service.delete_document(
db,
doc_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_delete_document_required_params_with_retries(self):
# Enable retries and run test_delete_document_required_params.
_service.enable_retries()
self.test_delete_document_required_params()
# Disable retries and run test_delete_document_required_params.
_service.disable_retries()
self.test_delete_document_required_params()
@responses.activate
def test_delete_document_value_error(self):
"""
test_delete_document_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.DELETE,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"doc_id": doc_id,
}
for param in req_param_dict.keys():
# Use != (value equality), not "is not" (identity), when comparing strings
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.delete_document(**req_copy)
def test_delete_document_value_error_with_retries(self):
# Enable retries and run test_delete_document_value_error.
_service.enable_retries()
self.test_delete_document_value_error()
# Disable retries and run test_delete_document_value_error.
_service.disable_retries()
self.test_delete_document_value_error()
class TestGetDocument():
"""
Test Class for get_document
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_document_all_params(self):
"""
get_document()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString')
mock_response = '{"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
if_none_match = 'testString'
attachments = False
att_encoding_info = False
conflicts = False
deleted_conflicts = False
latest = False
local_seq = False
meta = False
rev = 'testString'
revs = False
revs_info = False
# Invoke method
response = _service.get_document(
db,
doc_id,
if_none_match=if_none_match,
attachments=attachments,
att_encoding_info=att_encoding_info,
conflicts=conflicts,
deleted_conflicts=deleted_conflicts,
latest=latest,
local_seq=local_seq,
meta=meta,
rev=rev,
revs=revs,
revs_info=revs_info,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'attachments={}'.format('true' if attachments else 'false') in query_string
assert 'att_encoding_info={}'.format('true' if att_encoding_info else 'false') in query_string
assert 'conflicts={}'.format('true' if conflicts else 'false') in query_string
assert 'deleted_conflicts={}'.format('true' if deleted_conflicts else 'false') in query_string
assert 'latest={}'.format('true' if latest else 'false') in query_string
assert 'local_seq={}'.format('true' if local_seq else 'false') in query_string
assert 'meta={}'.format('true' if meta else 'false') in query_string
assert 'rev={}'.format(rev) in query_string
assert 'revs={}'.format('true' if revs else 'false') in query_string
assert 'revs_info={}'.format('true' if revs_info else 'false') in query_string
def test_get_document_all_params_with_retries(self):
# Enable retries and run test_get_document_all_params.
_service.enable_retries()
self.test_get_document_all_params()
# Disable retries and run test_get_document_all_params.
_service.disable_retries()
self.test_get_document_all_params()
@responses.activate
def test_get_document_required_params(self):
"""
test_get_document_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString')
mock_response = '{"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
# Invoke method
response = _service.get_document(
db,
doc_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_document_required_params_with_retries(self):
# Enable retries and run test_get_document_required_params.
_service.enable_retries()
self.test_get_document_required_params()
# Disable retries and run test_get_document_required_params.
_service.disable_retries()
self.test_get_document_required_params()
@responses.activate
def test_get_document_value_error(self):
"""
test_get_document_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString')
mock_response = '{"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"doc_id": doc_id,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_document(**req_copy)
def test_get_document_value_error_with_retries(self):
# Enable retries and run test_get_document_value_error.
_service.enable_retries()
self.test_get_document_value_error()
# Disable retries and run test_get_document_value_error.
_service.disable_retries()
self.test_get_document_value_error()
class TestGetDocumentAsMixed():
"""
Test Class for get_document_as_mixed
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_document_as_mixed_all_params(self):
"""
get_document_as_mixed()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString')
mock_response = 'This is a mock binary response.'
responses.add(responses.GET,
url,
body=mock_response,
content_type='multipart/mixed',
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
if_none_match = 'testString'
attachments = False
att_encoding_info = False
conflicts = False
deleted_conflicts = False
latest = False
local_seq = False
meta = False
rev = 'testString'
revs = False
revs_info = False
# Invoke method
response = _service.get_document_as_mixed(
db,
doc_id,
if_none_match=if_none_match,
attachments=attachments,
att_encoding_info=att_encoding_info,
conflicts=conflicts,
deleted_conflicts=deleted_conflicts,
latest=latest,
local_seq=local_seq,
meta=meta,
rev=rev,
revs=revs,
revs_info=revs_info,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'attachments={}'.format('true' if attachments else 'false') in query_string
assert 'att_encoding_info={}'.format('true' if att_encoding_info else 'false') in query_string
assert 'conflicts={}'.format('true' if conflicts else 'false') in query_string
assert 'deleted_conflicts={}'.format('true' if deleted_conflicts else 'false') in query_string
assert 'latest={}'.format('true' if latest else 'false') in query_string
assert 'local_seq={}'.format('true' if local_seq else 'false') in query_string
assert 'meta={}'.format('true' if meta else 'false') in query_string
assert 'rev={}'.format(rev) in query_string
assert 'revs={}'.format('true' if revs else 'false') in query_string
assert 'revs_info={}'.format('true' if revs_info else 'false') in query_string
def test_get_document_as_mixed_all_params_with_retries(self):
# Enable retries and run test_get_document_as_mixed_all_params.
_service.enable_retries()
self.test_get_document_as_mixed_all_params()
# Disable retries and run test_get_document_as_mixed_all_params.
_service.disable_retries()
self.test_get_document_as_mixed_all_params()
@responses.activate
def test_get_document_as_mixed_required_params(self):
"""
test_get_document_as_mixed_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString')
mock_response = 'This is a mock binary response.'
responses.add(responses.GET,
url,
body=mock_response,
content_type='multipart/mixed',
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
# Invoke method
response = _service.get_document_as_mixed(
db,
doc_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_document_as_mixed_required_params_with_retries(self):
# Enable retries and run test_get_document_as_mixed_required_params.
_service.enable_retries()
self.test_get_document_as_mixed_required_params()
# Disable retries and run test_get_document_as_mixed_required_params.
_service.disable_retries()
self.test_get_document_as_mixed_required_params()
@responses.activate
def test_get_document_as_mixed_value_error(self):
"""
test_get_document_as_mixed_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString')
mock_response = 'This is a mock binary response.'
responses.add(responses.GET,
url,
body=mock_response,
content_type='multipart/mixed',
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"doc_id": doc_id,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_document_as_mixed(**req_copy)
def test_get_document_as_mixed_value_error_with_retries(self):
# Enable retries and run test_get_document_as_mixed_value_error.
_service.enable_retries()
self.test_get_document_as_mixed_value_error()
# Disable retries and run test_get_document_as_mixed_value_error.
_service.disable_retries()
self.test_get_document_as_mixed_value_error()
class TestGetDocumentAsRelated():
"""
Test Class for get_document_as_related
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_document_as_related_all_params(self):
"""
get_document_as_related()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString')
mock_response = 'This is a mock binary response.'
responses.add(responses.GET,
url,
body=mock_response,
content_type='multipart/related',
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
if_none_match = 'testString'
attachments = False
att_encoding_info = False
conflicts = False
deleted_conflicts = False
latest = False
local_seq = False
meta = False
rev = 'testString'
revs = False
revs_info = False
# Invoke method
response = _service.get_document_as_related(
db,
doc_id,
if_none_match=if_none_match,
attachments=attachments,
att_encoding_info=att_encoding_info,
conflicts=conflicts,
deleted_conflicts=deleted_conflicts,
latest=latest,
local_seq=local_seq,
meta=meta,
rev=rev,
revs=revs,
revs_info=revs_info,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'attachments={}'.format('true' if attachments else 'false') in query_string
assert 'att_encoding_info={}'.format('true' if att_encoding_info else 'false') in query_string
assert 'conflicts={}'.format('true' if conflicts else 'false') in query_string
assert 'deleted_conflicts={}'.format('true' if deleted_conflicts else 'false') in query_string
assert 'latest={}'.format('true' if latest else 'false') in query_string
assert 'local_seq={}'.format('true' if local_seq else 'false') in query_string
assert 'meta={}'.format('true' if meta else 'false') in query_string
assert 'rev={}'.format(rev) in query_string
assert 'revs={}'.format('true' if revs else 'false') in query_string
assert 'revs_info={}'.format('true' if revs_info else 'false') in query_string
def test_get_document_as_related_all_params_with_retries(self):
# Enable retries and run test_get_document_as_related_all_params.
_service.enable_retries()
self.test_get_document_as_related_all_params()
# Disable retries and run test_get_document_as_related_all_params.
_service.disable_retries()
self.test_get_document_as_related_all_params()
@responses.activate
def test_get_document_as_related_required_params(self):
"""
test_get_document_as_related_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString')
mock_response = 'This is a mock binary response.'
responses.add(responses.GET,
url,
body=mock_response,
content_type='multipart/related',
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
# Invoke method
response = _service.get_document_as_related(
db,
doc_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_document_as_related_required_params_with_retries(self):
# Enable retries and run test_get_document_as_related_required_params.
_service.enable_retries()
self.test_get_document_as_related_required_params()
# Disable retries and run test_get_document_as_related_required_params.
_service.disable_retries()
self.test_get_document_as_related_required_params()
@responses.activate
def test_get_document_as_related_value_error(self):
"""
test_get_document_as_related_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString')
mock_response = 'This is a mock binary response.'
responses.add(responses.GET,
url,
body=mock_response,
content_type='multipart/related',
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"doc_id": doc_id,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_document_as_related(**req_copy)
def test_get_document_as_related_value_error_with_retries(self):
# Enable retries and run test_get_document_as_related_value_error.
_service.enable_retries()
self.test_get_document_as_related_value_error()
# Disable retries and run test_get_document_as_related_value_error.
_service.disable_retries()
self.test_get_document_as_related_value_error()
class TestGetDocumentAsStream():
"""
Test Class for get_document_as_stream
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_document_as_stream_all_params(self):
"""
get_document_as_stream()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
if_none_match = 'testString'
attachments = False
att_encoding_info = False
conflicts = False
deleted_conflicts = False
latest = False
local_seq = False
meta = False
rev = 'testString'
revs = False
revs_info = False
# Invoke method
response = _service.get_document_as_stream(
db,
doc_id,
if_none_match=if_none_match,
attachments=attachments,
att_encoding_info=att_encoding_info,
conflicts=conflicts,
deleted_conflicts=deleted_conflicts,
latest=latest,
local_seq=local_seq,
meta=meta,
rev=rev,
revs=revs,
revs_info=revs_info,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'attachments={}'.format('true' if attachments else 'false') in query_string
assert 'att_encoding_info={}'.format('true' if att_encoding_info else 'false') in query_string
assert 'conflicts={}'.format('true' if conflicts else 'false') in query_string
assert 'deleted_conflicts={}'.format('true' if deleted_conflicts else 'false') in query_string
assert 'latest={}'.format('true' if latest else 'false') in query_string
assert 'local_seq={}'.format('true' if local_seq else 'false') in query_string
assert 'meta={}'.format('true' if meta else 'false') in query_string
assert 'rev={}'.format(rev) in query_string
assert 'revs={}'.format('true' if revs else 'false') in query_string
assert 'revs_info={}'.format('true' if revs_info else 'false') in query_string
# Verify streamed JSON response
result = response.get_result()
assert isinstance(result, requests.models.Response)
response_buf = result.iter_content(chunk_size=1024)
assert str(next(response_buf), "utf-8") == mock_response
def test_get_document_as_stream_all_params_with_retries(self):
# Enable retries and run test_get_document_as_stream_all_params.
_service.enable_retries()
self.test_get_document_as_stream_all_params()
# Disable retries and run test_get_document_as_stream_all_params.
_service.disable_retries()
self.test_get_document_as_stream_all_params()
@responses.activate
def test_get_document_as_stream_required_params(self):
"""
test_get_document_as_stream_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
# Invoke method
response = _service.get_document_as_stream(
db,
doc_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Verify streamed JSON response
result = response.get_result()
assert isinstance(result, requests.models.Response)
response_buf = result.iter_content(chunk_size=1024)
assert str(next(response_buf), "utf-8") == mock_response
def test_get_document_as_stream_required_params_with_retries(self):
# Enable retries and run test_get_document_as_stream_required_params.
_service.enable_retries()
self.test_get_document_as_stream_required_params()
# Disable retries and run test_get_document_as_stream_required_params.
_service.disable_retries()
self.test_get_document_as_stream_required_params()
@responses.activate
def test_get_document_as_stream_value_error(self):
"""
test_get_document_as_stream_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"doc_id": doc_id,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_document_as_stream(**req_copy)
def test_get_document_as_stream_value_error_with_retries(self):
# Enable retries and run test_get_document_as_stream_value_error.
_service.enable_retries()
self.test_get_document_as_stream_value_error()
# Disable retries and run test_get_document_as_stream_value_error.
_service.disable_retries()
self.test_get_document_as_stream_value_error()
class TestPutDocument():
"""
Test Class for put_document
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_put_document_all_params(self):
"""
put_document()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.PUT,
url,
body=mock_response,
content_type='application/json',
status=201)
# Construct a dict representation of an Attachment model
attachment_model = {}
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
# Construct a dict representation of a Revisions model
revisions_model = {}
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
# Construct a dict representation of a DocumentRevisionStatus model
document_revision_status_model = {}
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
# Construct a dict representation of a Document model
document_model = {}
document_model['_attachments'] = {}
document_model['_conflicts'] = ['testString']
document_model['_deleted'] = True
document_model['_deleted_conflicts'] = ['testString']
document_model['_id'] = 'exampleid'
document_model['_local_seq'] = 'testString'
document_model['_rev'] = 'testString'
document_model['_revisions'] = revisions_model
document_model['_revs_info'] = [document_revision_status_model]
document_model['brand'] = 'Foo'
document_model['colours'] = '["red","green","black","blue"]'
document_model['description'] = 'Slim Colourful Design Electronic Cooking Appliance for ...'
document_model['image'] = 'assets/img/0gmsnghhew.jpg'
document_model['keywords'] = '["Foo","Scales","Weight","Digital","Kitchen"]'
document_model['name'] = 'Digital Kitchen Scales'
document_model['price'] = '14.99'
document_model['productid'] = '1000042'
document_model['taxonomy'] = '["Home","Kitchen","Small Appliances"]'
document_model['type'] = 'product'
# Set up parameter values
db = 'testString'
doc_id = 'testString'
document = document_model
content_type = 'application/json'
if_match = 'testString'
batch = 'ok'
new_edits = False
rev = 'testString'
# Invoke method
response = _service.put_document(
db,
doc_id,
document,
content_type=content_type,
if_match=if_match,
batch=batch,
new_edits=new_edits,
rev=rev,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 201
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'batch={}'.format(batch) in query_string
assert 'new_edits={}'.format('true' if new_edits else 'false') in query_string
assert 'rev={}'.format(rev) in query_string
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(responses.calls[0].request.body)
assert req_body == document
def test_put_document_all_params_with_retries(self):
# Enable retries and run test_put_document_all_params.
_service.enable_retries()
self.test_put_document_all_params()
# Disable retries and run test_put_document_all_params.
_service.disable_retries()
self.test_put_document_all_params()
@responses.activate
def test_put_document_required_params(self):
"""
test_put_document_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.PUT,
url,
body=mock_response,
content_type='application/json',
status=201)
# Construct a dict representation of an Attachment model
attachment_model = {}
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
# Construct a dict representation of a Revisions model
revisions_model = {}
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
# Construct a dict representation of a DocumentRevisionStatus model
document_revision_status_model = {}
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
# Construct a dict representation of a Document model
document_model = {}
document_model['_attachments'] = {}
document_model['_conflicts'] = ['testString']
document_model['_deleted'] = True
document_model['_deleted_conflicts'] = ['testString']
document_model['_id'] = 'exampleid'
document_model['_local_seq'] = 'testString'
document_model['_rev'] = 'testString'
document_model['_revisions'] = revisions_model
document_model['_revs_info'] = [document_revision_status_model]
document_model['brand'] = 'Foo'
document_model['colours'] = '["red","green","black","blue"]'
document_model['description'] = 'Slim Colourful Design Electronic Cooking Appliance for ...'
document_model['image'] = 'assets/img/0gmsnghhew.jpg'
document_model['keywords'] = '["Foo","Scales","Weight","Digital","Kitchen"]'
document_model['name'] = 'Digital Kitchen Scales'
document_model['price'] = '14.99'
document_model['productid'] = '1000042'
document_model['taxonomy'] = '["Home","Kitchen","Small Appliances"]'
document_model['type'] = 'product'
# Set up parameter values
db = 'testString'
doc_id = 'testString'
document = document_model
# Invoke method
response = _service.put_document(
db,
doc_id,
document,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 201
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(responses.calls[0].request.body)
assert req_body == document
def test_put_document_required_params_with_retries(self):
# Enable retries and run test_put_document_required_params.
_service.enable_retries()
self.test_put_document_required_params()
# Disable retries and run test_put_document_required_params.
_service.disable_retries()
self.test_put_document_required_params()
@responses.activate
def test_put_document_value_error(self):
"""
test_put_document_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.PUT,
url,
body=mock_response,
content_type='application/json',
status=201)
# Construct a dict representation of an Attachment model
attachment_model = {}
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
# Construct a dict representation of a Revisions model
revisions_model = {}
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
# Construct a dict representation of a DocumentRevisionStatus model
document_revision_status_model = {}
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
# Construct a dict representation of a Document model
document_model = {}
document_model['_attachments'] = {}
document_model['_conflicts'] = ['testString']
document_model['_deleted'] = True
document_model['_deleted_conflicts'] = ['testString']
document_model['_id'] = 'exampleid'
document_model['_local_seq'] = 'testString'
document_model['_rev'] = 'testString'
document_model['_revisions'] = revisions_model
document_model['_revs_info'] = [document_revision_status_model]
document_model['brand'] = 'Foo'
document_model['colours'] = '["red","green","black","blue"]'
document_model['description'] = 'Slim Colourful Design Electronic Cooking Appliance for ...'
document_model['image'] = 'assets/img/0gmsnghhew.jpg'
document_model['keywords'] = '["Foo","Scales","Weight","Digital","Kitchen"]'
document_model['name'] = 'Digital Kitchen Scales'
document_model['price'] = '14.99'
document_model['productid'] = '1000042'
document_model['taxonomy'] = '["Home","Kitchen","Small Appliances"]'
document_model['type'] = 'product'
# Set up parameter values
db = 'testString'
doc_id = 'testString'
document = document_model
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"doc_id": doc_id,
"document": document,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.put_document(**req_copy)
def test_put_document_value_error_with_retries(self):
# Enable retries and run test_put_document_value_error.
_service.enable_retries()
self.test_put_document_value_error()
# Disable retries and run test_put_document_value_error.
_service.disable_retries()
self.test_put_document_value_error()
# endregion
##############################################################################
# End of Service: Documents
##############################################################################
##############################################################################
# Start of Service: DesignDocuments
##############################################################################
# region
class TestNewInstance():
"""
Test Class for new_instance
"""
def test_new_instance(self):
"""
new_instance()
"""
os.environ['TEST_SERVICE_AUTH_TYPE'] = 'noAuth'
service = CloudantV1.new_instance(
service_name='TEST_SERVICE',
)
assert service is not None
assert isinstance(service, CloudantV1)
def test_new_instance_without_authenticator(self):
"""
new_instance_without_authenticator()
"""
with pytest.raises(ValueError, match='authenticator must be provided'):
service = CloudantV1.new_instance()
class TestHeadDesignDocument():
"""
Test Class for head_design_document
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_head_design_document_all_params(self):
"""
head_design_document()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString')
responses.add(responses.HEAD,
url,
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
if_none_match = 'testString'
# Invoke method
response = _service.head_design_document(
db,
ddoc,
if_none_match=if_none_match,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_head_design_document_all_params_with_retries(self):
# Enable retries and run test_head_design_document_all_params.
_service.enable_retries()
self.test_head_design_document_all_params()
# Disable retries and run test_head_design_document_all_params.
_service.disable_retries()
self.test_head_design_document_all_params()
@responses.activate
def test_head_design_document_required_params(self):
"""
test_head_design_document_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString')
responses.add(responses.HEAD,
url,
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
# Invoke method
response = _service.head_design_document(
db,
ddoc,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_head_design_document_required_params_with_retries(self):
# Enable retries and run test_head_design_document_required_params.
_service.enable_retries()
self.test_head_design_document_required_params()
# Disable retries and run test_head_design_document_required_params.
_service.disable_retries()
self.test_head_design_document_required_params()
@responses.activate
def test_head_design_document_value_error(self):
"""
test_head_design_document_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString')
responses.add(responses.HEAD,
url,
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"ddoc": ddoc,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.head_design_document(**req_copy)
def test_head_design_document_value_error_with_retries(self):
# Enable retries and run test_head_design_document_value_error.
_service.enable_retries()
self.test_head_design_document_value_error()
# Disable retries and run test_head_design_document_value_error.
_service.disable_retries()
self.test_head_design_document_value_error()
class TestDeleteDesignDocument():
"""
Test Class for delete_design_document
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_delete_design_document_all_params(self):
"""
delete_design_document()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.DELETE,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
if_match = 'testString'
batch = 'ok'
rev = 'testString'
# Invoke method
response = _service.delete_design_document(
db,
ddoc,
if_match=if_match,
batch=batch,
rev=rev,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'batch={}'.format(batch) in query_string
assert 'rev={}'.format(rev) in query_string
def test_delete_design_document_all_params_with_retries(self):
# Enable retries and run test_delete_design_document_all_params.
_service.enable_retries()
self.test_delete_design_document_all_params()
# Disable retries and run test_delete_design_document_all_params.
_service.disable_retries()
self.test_delete_design_document_all_params()
@responses.activate
def test_delete_design_document_required_params(self):
"""
test_delete_design_document_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.DELETE,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
# Invoke method
response = _service.delete_design_document(
db,
ddoc,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_delete_design_document_required_params_with_retries(self):
# Enable retries and run test_delete_design_document_required_params.
_service.enable_retries()
self.test_delete_design_document_required_params()
# Disable retries and run test_delete_design_document_required_params.
_service.disable_retries()
self.test_delete_design_document_required_params()
@responses.activate
def test_delete_design_document_value_error(self):
"""
test_delete_design_document_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.DELETE,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"ddoc": ddoc,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.delete_design_document(**req_copy)
def test_delete_design_document_value_error_with_retries(self):
# Enable retries and run test_delete_design_document_value_error.
_service.enable_retries()
self.test_delete_design_document_value_error()
# Disable retries and run test_delete_design_document_value_error.
_service.disable_retries()
self.test_delete_design_document_value_error()
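# Each test class above repeats the same preprocess_url helper: it normalizes
# the mock URL so the `responses` library matches it whether or not the path
# was already percent-encoded, and it returns a compiled regex when the URL
# ends in a slash so trailing-slash variants still match. A module-level
# version of the same logic (a hypothetical refactor, not part of the
# generated suite) would look like this:
import re
import urllib.parse
def _preprocess_url(request_url: str):
    """Normalize a mock URL so the responses library will match it."""
    request_url = urllib.parse.unquote(request_url)  # don't double-encode if already encoded
    request_url = urllib.parse.quote(request_url, safe=':/')
    if re.fullmatch('.*/+', request_url) is None:
        return request_url
    # URL ends in one or more slashes: match any number of trailing slashes
    return re.compile(request_url.rstrip('/') + '/+')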
class TestGetDesignDocument():
"""
Test Class for get_design_document
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_design_document_all_params(self):
"""
get_design_document()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString')
mock_response = '{"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}], "autoupdate": true, "filters": {"mapKey": "inner"}, "indexes": {"mapKey": {"analyzer": {"name": "classic", "stopwords": ["stopwords"], "fields": {"mapKey": {"name": "classic", "stopwords": ["stopwords"]}}}, "index": "index"}}, "language": "javascript", "options": {"partitioned": false}, "validate_doc_update": "validate_doc_update", "views": {"mapKey": {"map": "map", "reduce": "reduce"}}, "st_indexes": {"mapKey": {"index": "index"}}}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
if_none_match = 'testString'
attachments = False
att_encoding_info = False
conflicts = False
deleted_conflicts = False
latest = False
local_seq = False
meta = False
rev = 'testString'
revs = False
revs_info = False
# Invoke method
response = _service.get_design_document(
db,
ddoc,
if_none_match=if_none_match,
attachments=attachments,
att_encoding_info=att_encoding_info,
conflicts=conflicts,
deleted_conflicts=deleted_conflicts,
latest=latest,
local_seq=local_seq,
meta=meta,
rev=rev,
revs=revs,
revs_info=revs_info,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'attachments={}'.format('true' if attachments else 'false') in query_string
assert 'att_encoding_info={}'.format('true' if att_encoding_info else 'false') in query_string
assert 'conflicts={}'.format('true' if conflicts else 'false') in query_string
assert 'deleted_conflicts={}'.format('true' if deleted_conflicts else 'false') in query_string
assert 'latest={}'.format('true' if latest else 'false') in query_string
assert 'local_seq={}'.format('true' if local_seq else 'false') in query_string
assert 'meta={}'.format('true' if meta else 'false') in query_string
assert 'rev={}'.format(rev) in query_string
assert 'revs={}'.format('true' if revs else 'false') in query_string
assert 'revs_info={}'.format('true' if revs_info else 'false') in query_string
def test_get_design_document_all_params_with_retries(self):
# Enable retries and run test_get_design_document_all_params.
_service.enable_retries()
self.test_get_design_document_all_params()
# Disable retries and run test_get_design_document_all_params.
_service.disable_retries()
self.test_get_design_document_all_params()
@responses.activate
def test_get_design_document_required_params(self):
"""
test_get_design_document_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString')
mock_response = '{"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}], "autoupdate": true, "filters": {"mapKey": "inner"}, "indexes": {"mapKey": {"analyzer": {"name": "classic", "stopwords": ["stopwords"], "fields": {"mapKey": {"name": "classic", "stopwords": ["stopwords"]}}}, "index": "index"}}, "language": "javascript", "options": {"partitioned": false}, "validate_doc_update": "validate_doc_update", "views": {"mapKey": {"map": "map", "reduce": "reduce"}}, "st_indexes": {"mapKey": {"index": "index"}}}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
# Invoke method
response = _service.get_design_document(
db,
ddoc,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_design_document_required_params_with_retries(self):
# Enable retries and run test_get_design_document_required_params.
_service.enable_retries()
self.test_get_design_document_required_params()
# Disable retries and run test_get_design_document_required_params.
_service.disable_retries()
self.test_get_design_document_required_params()
@responses.activate
def test_get_design_document_value_error(self):
"""
test_get_design_document_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString')
mock_response = '{"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}], "autoupdate": true, "filters": {"mapKey": "inner"}, "indexes": {"mapKey": {"analyzer": {"name": "classic", "stopwords": ["stopwords"], "fields": {"mapKey": {"name": "classic", "stopwords": ["stopwords"]}}}, "index": "index"}}, "language": "javascript", "options": {"partitioned": false}, "validate_doc_update": "validate_doc_update", "views": {"mapKey": {"map": "map", "reduce": "reduce"}}, "st_indexes": {"mapKey": {"index": "index"}}}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"ddoc": ddoc,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_design_document(**req_copy)
def test_get_design_document_value_error_with_retries(self):
# Enable retries and run test_get_design_document_value_error.
_service.enable_retries()
self.test_get_design_document_value_error()
# Disable retries and run test_get_design_document_value_error.
_service.disable_retries()
self.test_get_design_document_value_error()
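# The *_value_error tests in these classes all follow one pattern: drop each
# required parameter in turn and expect a ValueError from the service method.
# A shared helper could express that pattern once. The sketch below is
# hypothetical (not part of the generated suite) and uses plain try/except so
# it carries no pytest dependency:
def _check_required_params(method, req_param_dict):
    """Call `method` once per required param, with that param set to None,
    and fail unless every such call raises ValueError."""
    for param in req_param_dict:
        req_copy = {key: (val if key != param else None)
                    for key, val in req_param_dict.items()}
        try:
            method(**req_copy)
        except ValueError:
            continue
        raise AssertionError('expected ValueError when {} is None'.format(param))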
class TestPutDesignDocument():
"""
Test Class for put_design_document
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_put_design_document_all_params(self):
"""
put_design_document()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.PUT,
url,
body=mock_response,
content_type='application/json',
status=201)
# Construct a dict representation of an Attachment model
attachment_model = {}
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
# Construct a dict representation of a Revisions model
revisions_model = {}
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
# Construct a dict representation of a DocumentRevisionStatus model
document_revision_status_model = {}
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
# Construct a dict representation of an Analyzer model
analyzer_model = {}
analyzer_model['name'] = 'classic'
analyzer_model['stopwords'] = ['testString']
# Construct a dict representation of an AnalyzerConfiguration model
analyzer_configuration_model = {}
analyzer_configuration_model['name'] = 'classic'
analyzer_configuration_model['stopwords'] = ['testString']
analyzer_configuration_model['fields'] = {}
# Construct a dict representation of a SearchIndexDefinition model
search_index_definition_model = {}
search_index_definition_model['analyzer'] = analyzer_configuration_model
search_index_definition_model['index'] = 'testString'
# Construct a dict representation of a DesignDocumentOptions model
design_document_options_model = {}
design_document_options_model['partitioned'] = True
# Construct a dict representation of a DesignDocumentViewsMapReduce model
design_document_views_map_reduce_model = {}
design_document_views_map_reduce_model['map'] = 'testString'
design_document_views_map_reduce_model['reduce'] = 'testString'
# Construct a dict representation of a GeoIndexDefinition model
geo_index_definition_model = {}
geo_index_definition_model['index'] = 'testString'
# Construct a dict representation of a DesignDocument model
design_document_model = {}
design_document_model['_attachments'] = {}
design_document_model['_conflicts'] = ['testString']
design_document_model['_deleted'] = True
design_document_model['_deleted_conflicts'] = ['testString']
design_document_model['_id'] = 'testString'
design_document_model['_local_seq'] = 'testString'
design_document_model['_rev'] = 'testString'
design_document_model['_revisions'] = revisions_model
design_document_model['_revs_info'] = [document_revision_status_model]
design_document_model['autoupdate'] = True
design_document_model['filters'] = {}
design_document_model['indexes'] = {}
design_document_model['language'] = 'javascript'
design_document_model['options'] = design_document_options_model
design_document_model['validate_doc_update'] = 'testString'
design_document_model['views'] = {}
design_document_model['st_indexes'] = {}
design_document_model['foo'] = 'testString'
# Set up parameter values
db = 'testString'
ddoc = 'testString'
design_document = design_document_model
if_match = 'testString'
batch = 'ok'
new_edits = False
rev = 'testString'
# Invoke method
response = _service.put_design_document(
db,
ddoc,
design_document,
if_match=if_match,
batch=batch,
new_edits=new_edits,
rev=rev,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 201
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'batch={}'.format(batch) in query_string
assert 'new_edits={}'.format('true' if new_edits else 'false') in query_string
assert 'rev={}'.format(rev) in query_string
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body == design_document
def test_put_design_document_all_params_with_retries(self):
# Enable retries and run test_put_design_document_all_params.
_service.enable_retries()
self.test_put_design_document_all_params()
# Disable retries and run test_put_design_document_all_params.
_service.disable_retries()
self.test_put_design_document_all_params()
@responses.activate
def test_put_design_document_required_params(self):
"""
test_put_design_document_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.PUT,
url,
body=mock_response,
content_type='application/json',
status=201)
# Construct a dict representation of an Attachment model
attachment_model = {}
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
# Construct a dict representation of a Revisions model
revisions_model = {}
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
# Construct a dict representation of a DocumentRevisionStatus model
document_revision_status_model = {}
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
# Construct a dict representation of an Analyzer model
analyzer_model = {}
analyzer_model['name'] = 'classic'
analyzer_model['stopwords'] = ['testString']
# Construct a dict representation of an AnalyzerConfiguration model
analyzer_configuration_model = {}
analyzer_configuration_model['name'] = 'classic'
analyzer_configuration_model['stopwords'] = ['testString']
analyzer_configuration_model['fields'] = {}
# Construct a dict representation of a SearchIndexDefinition model
search_index_definition_model = {}
search_index_definition_model['analyzer'] = analyzer_configuration_model
search_index_definition_model['index'] = 'testString'
# Construct a dict representation of a DesignDocumentOptions model
design_document_options_model = {}
design_document_options_model['partitioned'] = True
# Construct a dict representation of a DesignDocumentViewsMapReduce model
design_document_views_map_reduce_model = {}
design_document_views_map_reduce_model['map'] = 'testString'
design_document_views_map_reduce_model['reduce'] = 'testString'
# Construct a dict representation of a GeoIndexDefinition model
geo_index_definition_model = {}
geo_index_definition_model['index'] = 'testString'
# Construct a dict representation of a DesignDocument model
design_document_model = {}
design_document_model['_attachments'] = {}
design_document_model['_conflicts'] = ['testString']
design_document_model['_deleted'] = True
design_document_model['_deleted_conflicts'] = ['testString']
design_document_model['_id'] = 'testString'
design_document_model['_local_seq'] = 'testString'
design_document_model['_rev'] = 'testString'
design_document_model['_revisions'] = revisions_model
design_document_model['_revs_info'] = [document_revision_status_model]
design_document_model['autoupdate'] = True
design_document_model['filters'] = {}
design_document_model['indexes'] = {}
design_document_model['language'] = 'javascript'
design_document_model['options'] = design_document_options_model
design_document_model['validate_doc_update'] = 'testString'
design_document_model['views'] = {}
design_document_model['st_indexes'] = {}
design_document_model['foo'] = 'testString'
# Set up parameter values
db = 'testString'
ddoc = 'testString'
design_document = design_document_model
# Invoke method
response = _service.put_design_document(
db,
ddoc,
design_document,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 201
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body == design_document
def test_put_design_document_required_params_with_retries(self):
# Enable retries and run test_put_design_document_required_params.
_service.enable_retries()
self.test_put_design_document_required_params()
# Disable retries and run test_put_design_document_required_params.
_service.disable_retries()
self.test_put_design_document_required_params()
@responses.activate
def test_put_design_document_value_error(self):
"""
test_put_design_document_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.PUT,
url,
body=mock_response,
content_type='application/json',
status=201)
# Construct a dict representation of an Attachment model
attachment_model = {}
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
# Construct a dict representation of a Revisions model
revisions_model = {}
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
# Construct a dict representation of a DocumentRevisionStatus model
document_revision_status_model = {}
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
# Construct a dict representation of an Analyzer model
analyzer_model = {}
analyzer_model['name'] = 'classic'
analyzer_model['stopwords'] = ['testString']
# Construct a dict representation of an AnalyzerConfiguration model
analyzer_configuration_model = {}
analyzer_configuration_model['name'] = 'classic'
analyzer_configuration_model['stopwords'] = ['testString']
analyzer_configuration_model['fields'] = {}
# Construct a dict representation of a SearchIndexDefinition model
search_index_definition_model = {}
search_index_definition_model['analyzer'] = analyzer_configuration_model
search_index_definition_model['index'] = 'testString'
# Construct a dict representation of a DesignDocumentOptions model
design_document_options_model = {}
design_document_options_model['partitioned'] = True
# Construct a dict representation of a DesignDocumentViewsMapReduce model
design_document_views_map_reduce_model = {}
design_document_views_map_reduce_model['map'] = 'testString'
design_document_views_map_reduce_model['reduce'] = 'testString'
# Construct a dict representation of a GeoIndexDefinition model
geo_index_definition_model = {}
geo_index_definition_model['index'] = 'testString'
# Construct a dict representation of a DesignDocument model
design_document_model = {}
design_document_model['_attachments'] = {}
design_document_model['_conflicts'] = ['testString']
design_document_model['_deleted'] = True
design_document_model['_deleted_conflicts'] = ['testString']
design_document_model['_id'] = 'testString'
design_document_model['_local_seq'] = 'testString'
design_document_model['_rev'] = 'testString'
design_document_model['_revisions'] = revisions_model
design_document_model['_revs_info'] = [document_revision_status_model]
design_document_model['autoupdate'] = True
design_document_model['filters'] = {}
design_document_model['indexes'] = {}
design_document_model['language'] = 'javascript'
design_document_model['options'] = design_document_options_model
design_document_model['validate_doc_update'] = 'testString'
design_document_model['views'] = {}
design_document_model['st_indexes'] = {}
design_document_model['foo'] = 'testString'
# Set up parameter values
db = 'testString'
ddoc = 'testString'
design_document = design_document_model
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"ddoc": ddoc,
"design_document": design_document,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.put_design_document(**req_copy)
def test_put_design_document_value_error_with_retries(self):
# Enable retries and run test_put_design_document_value_error.
_service.enable_retries()
self.test_put_design_document_value_error()
# Disable retries and run test_put_design_document_value_error.
_service.disable_retries()
self.test_put_design_document_value_error()
class TestGetDesignDocumentInformation():
"""
Test Class for get_design_document_information
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_design_document_information_all_params(self):
"""
get_design_document_information()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString/_info')
mock_response = '{"name": "name", "view_index": {"compact_running": false, "language": "language", "signature": "signature", "sizes": {"active": 6, "external": 8, "file": 4}, "updater_running": false, "waiting_clients": 0, "waiting_commit": true}}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
# Invoke method
response = _service.get_design_document_information(
db,
ddoc,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_design_document_information_all_params_with_retries(self):
# Enable retries and run test_get_design_document_information_all_params.
_service.enable_retries()
self.test_get_design_document_information_all_params()
# Disable retries and run test_get_design_document_information_all_params.
_service.disable_retries()
self.test_get_design_document_information_all_params()
@responses.activate
def test_get_design_document_information_value_error(self):
"""
test_get_design_document_information_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString/_info')
mock_response = '{"name": "name", "view_index": {"compact_running": false, "language": "language", "signature": "signature", "sizes": {"active": 6, "external": 8, "file": 4}, "updater_running": false, "waiting_clients": 0, "waiting_commit": true}}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"ddoc": ddoc,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_design_document_information(**req_copy)
def test_get_design_document_information_value_error_with_retries(self):
# Enable retries and run test_get_design_document_information_value_error.
_service.enable_retries()
self.test_get_design_document_information_value_error()
# Disable retries and run test_get_design_document_information_value_error.
_service.disable_retries()
self.test_get_design_document_information_value_error()
class TestPostDesignDocs():
"""
Test Class for post_design_docs
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_design_docs_all_params(self):
"""
post_design_docs()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design_docs')
mock_response = '{"total_rows": 0, "rows": [{"caused_by": "caused_by", "error": "error", "reason": "reason", "doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "id": "id", "key": "key", "value": {"rev": "rev"}}], "update_seq": "update_seq"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
att_encoding_info = False
attachments = False
conflicts = False
descending = False
include_docs = False
inclusive_end = True
limit = 10
skip = 0
update_seq = False
endkey = 'testString'
key = 'testString'
keys = ['testString']
startkey = '0007741142412418284'
accept = 'application/json'
# Invoke method
response = _service.post_design_docs(
db,
att_encoding_info=att_encoding_info,
attachments=attachments,
conflicts=conflicts,
descending=descending,
include_docs=include_docs,
inclusive_end=inclusive_end,
limit=limit,
skip=skip,
update_seq=update_seq,
endkey=endkey,
key=key,
keys=keys,
startkey=startkey,
accept=accept,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['att_encoding_info'] == False
assert req_body['attachments'] == False
assert req_body['conflicts'] == False
assert req_body['descending'] == False
assert req_body['include_docs'] == False
assert req_body['inclusive_end'] == True
assert req_body['limit'] == 10
assert req_body['skip'] == 0
assert req_body['update_seq'] == False
assert req_body['endkey'] == 'testString'
assert req_body['key'] == 'testString'
assert req_body['keys'] == ['testString']
assert req_body['startkey'] == '0007741142412418284'
def test_post_design_docs_all_params_with_retries(self):
# Enable retries and run test_post_design_docs_all_params.
_service.enable_retries()
self.test_post_design_docs_all_params()
# Disable retries and run test_post_design_docs_all_params.
_service.disable_retries()
self.test_post_design_docs_all_params()
@responses.activate
def test_post_design_docs_required_params(self):
"""
test_post_design_docs_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design_docs')
mock_response = '{"total_rows": 0, "rows": [{"caused_by": "caused_by", "error": "error", "reason": "reason", "doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "id": "id", "key": "key", "value": {"rev": "rev"}}], "update_seq": "update_seq"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
att_encoding_info = False
attachments = False
conflicts = False
descending = False
include_docs = False
inclusive_end = True
limit = 10
skip = 0
update_seq = False
endkey = 'testString'
key = 'testString'
keys = ['testString']
startkey = '0007741142412418284'
# Invoke method
response = _service.post_design_docs(
db,
att_encoding_info=att_encoding_info,
attachments=attachments,
conflicts=conflicts,
descending=descending,
include_docs=include_docs,
inclusive_end=inclusive_end,
limit=limit,
skip=skip,
update_seq=update_seq,
endkey=endkey,
key=key,
keys=keys,
startkey=startkey,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['att_encoding_info'] == False
assert req_body['attachments'] == False
assert req_body['conflicts'] == False
assert req_body['descending'] == False
assert req_body['include_docs'] == False
assert req_body['inclusive_end'] == True
assert req_body['limit'] == 10
assert req_body['skip'] == 0
assert req_body['update_seq'] == False
assert req_body['endkey'] == 'testString'
assert req_body['key'] == 'testString'
assert req_body['keys'] == ['testString']
assert req_body['startkey'] == '0007741142412418284'
def test_post_design_docs_required_params_with_retries(self):
# Enable retries and run test_post_design_docs_required_params.
_service.enable_retries()
self.test_post_design_docs_required_params()
# Disable retries and run test_post_design_docs_required_params.
_service.disable_retries()
self.test_post_design_docs_required_params()
@responses.activate
def test_post_design_docs_value_error(self):
"""
test_post_design_docs_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design_docs')
mock_response = '{"total_rows": 0, "rows": [{"caused_by": "caused_by", "error": "error", "reason": "reason", "doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "id": "id", "key": "key", "value": {"rev": "rev"}}], "update_seq": "update_seq"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
att_encoding_info = False
attachments = False
conflicts = False
descending = False
include_docs = False
inclusive_end = True
limit = 10
skip = 0
update_seq = False
endkey = 'testString'
key = 'testString'
keys = ['testString']
startkey = '0007741142412418284'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
}
for param in req_param_dict.keys():
req_copy = {key: None if key == param else val for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_design_docs(**req_copy)
def test_post_design_docs_value_error_with_retries(self):
# Enable retries and run test_post_design_docs_value_error.
_service.enable_retries()
self.test_post_design_docs_value_error()
# Disable retries and run test_post_design_docs_value_error.
_service.disable_retries()
self.test_post_design_docs_value_error()
class TestPostDesignDocsQueries():
"""
Test Class for post_design_docs_queries
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_design_docs_queries_all_params(self):
"""
post_design_docs_queries()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design_docs/queries')
mock_response = '{"results": [{"total_rows": 0, "rows": [{"caused_by": "caused_by", "error": "error", "reason": "reason", "doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "id": "id", "key": "key", "value": {"rev": "rev"}}], "update_seq": "update_seq"}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of an AllDocsQuery model
all_docs_query_model = {}
all_docs_query_model['att_encoding_info'] = False
all_docs_query_model['attachments'] = False
all_docs_query_model['conflicts'] = False
all_docs_query_model['descending'] = False
all_docs_query_model['include_docs'] = False
all_docs_query_model['inclusive_end'] = True
all_docs_query_model['limit'] = 0
all_docs_query_model['skip'] = 0
all_docs_query_model['update_seq'] = False
all_docs_query_model['endkey'] = 'testString'
all_docs_query_model['key'] = 'testString'
all_docs_query_model['keys'] = ['small-appliances:1000042', 'small-appliances:1000043']
all_docs_query_model['startkey'] = 'testString'
# Set up parameter values
db = 'testString'
queries = [all_docs_query_model]
accept = 'application/json'
# Invoke method
response = _service.post_design_docs_queries(
db,
queries,
accept=accept,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['queries'] == [all_docs_query_model]
def test_post_design_docs_queries_all_params_with_retries(self):
# Enable retries and run test_post_design_docs_queries_all_params.
_service.enable_retries()
self.test_post_design_docs_queries_all_params()
# Disable retries and run test_post_design_docs_queries_all_params.
_service.disable_retries()
self.test_post_design_docs_queries_all_params()
@responses.activate
def test_post_design_docs_queries_required_params(self):
"""
test_post_design_docs_queries_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design_docs/queries')
mock_response = '{"results": [{"total_rows": 0, "rows": [{"caused_by": "caused_by", "error": "error", "reason": "reason", "doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "id": "id", "key": "key", "value": {"rev": "rev"}}], "update_seq": "update_seq"}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of an AllDocsQuery model
all_docs_query_model = {}
all_docs_query_model['att_encoding_info'] = False
all_docs_query_model['attachments'] = False
all_docs_query_model['conflicts'] = False
all_docs_query_model['descending'] = False
all_docs_query_model['include_docs'] = False
all_docs_query_model['inclusive_end'] = True
all_docs_query_model['limit'] = 0
all_docs_query_model['skip'] = 0
all_docs_query_model['update_seq'] = False
all_docs_query_model['endkey'] = 'testString'
all_docs_query_model['key'] = 'testString'
all_docs_query_model['keys'] = ['small-appliances:1000042', 'small-appliances:1000043']
all_docs_query_model['startkey'] = 'testString'
# Set up parameter values
db = 'testString'
queries = [all_docs_query_model]
# Invoke method
response = _service.post_design_docs_queries(
db,
queries,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['queries'] == [all_docs_query_model]
def test_post_design_docs_queries_required_params_with_retries(self):
# Enable retries and run test_post_design_docs_queries_required_params.
_service.enable_retries()
self.test_post_design_docs_queries_required_params()
# Disable retries and run test_post_design_docs_queries_required_params.
_service.disable_retries()
self.test_post_design_docs_queries_required_params()
@responses.activate
def test_post_design_docs_queries_value_error(self):
"""
test_post_design_docs_queries_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design_docs/queries')
mock_response = '{"results": [{"total_rows": 0, "rows": [{"caused_by": "caused_by", "error": "error", "reason": "reason", "doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "id": "id", "key": "key", "value": {"rev": "rev"}}], "update_seq": "update_seq"}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of an AllDocsQuery model
all_docs_query_model = {}
all_docs_query_model['att_encoding_info'] = False
all_docs_query_model['attachments'] = False
all_docs_query_model['conflicts'] = False
all_docs_query_model['descending'] = False
all_docs_query_model['include_docs'] = False
all_docs_query_model['inclusive_end'] = True
all_docs_query_model['limit'] = 0
all_docs_query_model['skip'] = 0
all_docs_query_model['update_seq'] = False
all_docs_query_model['endkey'] = 'testString'
all_docs_query_model['key'] = 'testString'
all_docs_query_model['keys'] = ['small-appliances:1000042', 'small-appliances:1000043']
all_docs_query_model['startkey'] = 'testString'
# Set up parameter values
db = 'testString'
queries = [all_docs_query_model]
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"queries": queries,
}
for param in req_param_dict.keys():
req_copy = {key: None if key == param else val for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_design_docs_queries(**req_copy)
def test_post_design_docs_queries_value_error_with_retries(self):
# Enable retries and run test_post_design_docs_queries_value_error.
_service.enable_retries()
self.test_post_design_docs_queries_value_error()
# Disable retries and run test_post_design_docs_queries_value_error.
_service.disable_retries()
self.test_post_design_docs_queries_value_error()
# endregion
##############################################################################
# End of Service: DesignDocuments
##############################################################################
##############################################################################
# Start of Service: Views
##############################################################################
# region
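# The body-validation steps in these tests decompress the gzip-encoded request
# body before comparing it as JSON. A minimal, self-contained sketch of that
# round trip, using only the standard library (the payload below is
# illustrative, not taken from any real request):

```python
import gzip
import json

# Illustrative payload resembling the query options sent in these tests.
payload = {'limit': 10, 'skip': 0, 'include_docs': False}

# The SDK gzip-compresses the JSON request body before sending it.
compressed = gzip.compress(json.dumps(payload).encode('utf-8'))

# The tests reverse that: decompress, decode UTF-8, then parse the JSON.
decoded = json.loads(str(gzip.decompress(compressed), 'utf-8'))
assert decoded == payload
```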
class TestNewInstance():
"""
Test Class for new_instance
"""
def test_new_instance(self):
"""
new_instance()
"""
os.environ['TEST_SERVICE_AUTH_TYPE'] = 'noAuth'
service = CloudantV1.new_instance(
service_name='TEST_SERVICE',
)
assert service is not None
assert isinstance(service, CloudantV1)
def test_new_instance_without_authenticator(self):
"""
new_instance_without_authenticator()
"""
with pytest.raises(ValueError, match='authenticator must be provided'):
service = CloudantV1.new_instance(
)
class TestPostView():
"""
Test Class for post_view
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_view_all_params(self):
"""
post_view()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString/_view/testString')
mock_response = '{"total_rows": 0, "update_seq": "update_seq", "rows": [{"caused_by": "caused_by", "error": "error", "reason": "reason", "doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "id": "id", "key": "anyValue", "value": "anyValue"}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
view = 'testString'
att_encoding_info = False
attachments = False
conflicts = False
descending = False
include_docs = False
inclusive_end = True
limit = 0
skip = 0
update_seq = False
endkey = 'testString'
endkey_docid = 'testString'
group = False
group_level = 1
key = 'testString'
keys = ['testString']
reduce = True
stable = False
startkey = 'testString'
startkey_docid = 'testString'
update = 'true'
# Invoke method
response = _service.post_view(
db,
ddoc,
view,
att_encoding_info=att_encoding_info,
attachments=attachments,
conflicts=conflicts,
descending=descending,
include_docs=include_docs,
inclusive_end=inclusive_end,
limit=limit,
skip=skip,
update_seq=update_seq,
endkey=endkey,
endkey_docid=endkey_docid,
group=group,
group_level=group_level,
key=key,
keys=keys,
reduce=reduce,
stable=stable,
startkey=startkey,
startkey_docid=startkey_docid,
update=update,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['att_encoding_info'] == False
assert req_body['attachments'] == False
assert req_body['conflicts'] == False
assert req_body['descending'] == False
assert req_body['include_docs'] == False
assert req_body['inclusive_end'] == True
assert req_body['limit'] == 0
assert req_body['skip'] == 0
assert req_body['update_seq'] == False
assert req_body['endkey'] == 'testString'
assert req_body['endkey_docid'] == 'testString'
assert req_body['group'] == False
assert req_body['group_level'] == 1
assert req_body['key'] == 'testString'
assert req_body['keys'] == ['testString']
assert req_body['reduce'] == True
assert req_body['stable'] == False
assert req_body['startkey'] == 'testString'
assert req_body['startkey_docid'] == 'testString'
assert req_body['update'] == 'true'
def test_post_view_all_params_with_retries(self):
# Enable retries and run test_post_view_all_params.
_service.enable_retries()
self.test_post_view_all_params()
# Disable retries and run test_post_view_all_params.
_service.disable_retries()
self.test_post_view_all_params()
@responses.activate
def test_post_view_value_error(self):
"""
test_post_view_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString/_view/testString')
mock_response = '{"total_rows": 0, "update_seq": "update_seq", "rows": [{"caused_by": "caused_by", "error": "error", "reason": "reason", "doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "id": "id", "key": "anyValue", "value": "anyValue"}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
view = 'testString'
att_encoding_info = False
attachments = False
conflicts = False
descending = False
include_docs = False
inclusive_end = True
limit = 0
skip = 0
update_seq = False
endkey = 'testString'
endkey_docid = 'testString'
group = False
group_level = 1
key = 'testString'
keys = ['testString']
reduce = True
stable = False
startkey = 'testString'
startkey_docid = 'testString'
update = 'true'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"ddoc": ddoc,
"view": view,
}
for param in req_param_dict.keys():
req_copy = {key: None if key == param else val for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_view(**req_copy)
def test_post_view_value_error_with_retries(self):
# Enable retries and run test_post_view_value_error.
_service.enable_retries()
self.test_post_view_value_error()
# Disable retries and run test_post_view_value_error.
_service.disable_retries()
self.test_post_view_value_error()
class TestPostViewAsStream():
"""
Test Class for post_view_as_stream
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_view_as_stream_all_params(self):
"""
post_view_as_stream()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString/_view/testString')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
view = 'testString'
att_encoding_info = False
attachments = False
conflicts = False
descending = False
include_docs = True
inclusive_end = True
limit = 10
skip = 0
update_seq = False
endkey = 'testString'
endkey_docid = 'testString'
group = False
group_level = 1
key = 'testString'
keys = ['examplekey']
reduce = True
stable = False
startkey = 'testString'
startkey_docid = 'testString'
update = 'true'
# Invoke method
response = _service.post_view_as_stream(
db,
ddoc,
view,
att_encoding_info=att_encoding_info,
attachments=attachments,
conflicts=conflicts,
descending=descending,
include_docs=include_docs,
inclusive_end=inclusive_end,
limit=limit,
skip=skip,
update_seq=update_seq,
endkey=endkey,
endkey_docid=endkey_docid,
group=group,
group_level=group_level,
key=key,
keys=keys,
reduce=reduce,
stable=stable,
startkey=startkey,
startkey_docid=startkey_docid,
update=update,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['att_encoding_info'] == False
assert req_body['attachments'] == False
assert req_body['conflicts'] == False
assert req_body['descending'] == False
assert req_body['include_docs'] == True
assert req_body['inclusive_end'] == True
assert req_body['limit'] == 10
assert req_body['skip'] == 0
assert req_body['update_seq'] == False
assert req_body['endkey'] == 'testString'
assert req_body['endkey_docid'] == 'testString'
assert req_body['group'] == False
assert req_body['group_level'] == 1
assert req_body['key'] == 'testString'
assert req_body['keys'] == ['examplekey']
assert req_body['reduce'] == True
assert req_body['stable'] == False
assert req_body['startkey'] == 'testString'
assert req_body['startkey_docid'] == 'testString'
assert req_body['update'] == 'true'
# Verify streamed JSON response
result = response.get_result()
assert isinstance(result, requests.models.Response)
response_buf = result.iter_content(chunk_size=1024)
assert str(next(response_buf), "utf-8") == mock_response
def test_post_view_as_stream_all_params_with_retries(self):
# Enable retries and run test_post_view_as_stream_all_params.
_service.enable_retries()
self.test_post_view_as_stream_all_params()
# Disable retries and run test_post_view_as_stream_all_params.
_service.disable_retries()
self.test_post_view_as_stream_all_params()
@responses.activate
def test_post_view_as_stream_value_error(self):
"""
test_post_view_as_stream_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString/_view/testString')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
view = 'testString'
att_encoding_info = False
attachments = False
conflicts = False
descending = False
include_docs = True
inclusive_end = True
limit = 10
skip = 0
update_seq = False
endkey = 'testString'
endkey_docid = 'testString'
group = False
group_level = 1
key = 'testString'
keys = ['examplekey']
reduce = True
stable = False
startkey = 'testString'
startkey_docid = 'testString'
update = 'true'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"ddoc": ddoc,
"view": view,
}
for param in req_param_dict.keys():
req_copy = {key: None if key == param else val for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_view_as_stream(**req_copy)
def test_post_view_as_stream_value_error_with_retries(self):
# Enable retries and run test_post_view_as_stream_value_error.
_service.enable_retries()
self.test_post_view_as_stream_value_error()
# Disable retries and run test_post_view_as_stream_value_error.
_service.disable_retries()
self.test_post_view_as_stream_value_error()
class TestPostViewQueries():
"""
Test Class for post_view_queries
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_view_queries_all_params(self):
"""
post_view_queries()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString/_view/testString/queries')
mock_response = '{"results": [{"total_rows": 0, "update_seq": "update_seq", "rows": [{"caused_by": "caused_by", "error": "error", "reason": "reason", "doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "id": "id", "key": "anyValue", "value": "anyValue"}]}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of a ViewQuery model
view_query_model = {}
view_query_model['att_encoding_info'] = False
view_query_model['attachments'] = False
view_query_model['conflicts'] = False
view_query_model['descending'] = False
view_query_model['include_docs'] = False
view_query_model['inclusive_end'] = True
view_query_model['limit'] = 0
view_query_model['skip'] = 0
view_query_model['update_seq'] = False
view_query_model['endkey'] = 'testString'
view_query_model['endkey_docid'] = 'testString'
view_query_model['group'] = False
view_query_model['group_level'] = 1
view_query_model['key'] = 'testString'
view_query_model['keys'] = ['testString']
view_query_model['reduce'] = True
view_query_model['stable'] = False
view_query_model['startkey'] = 'testString'
view_query_model['startkey_docid'] = 'testString'
view_query_model['update'] = 'true'
# Set up parameter values
db = 'testString'
ddoc = 'testString'
view = 'testString'
queries = [view_query_model]
# Invoke method
response = _service.post_view_queries(
db,
ddoc,
view,
queries,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['queries'] == [view_query_model]
def test_post_view_queries_all_params_with_retries(self):
# Enable retries and run test_post_view_queries_all_params.
_service.enable_retries()
self.test_post_view_queries_all_params()
# Disable retries and run test_post_view_queries_all_params.
_service.disable_retries()
self.test_post_view_queries_all_params()
@responses.activate
def test_post_view_queries_value_error(self):
"""
test_post_view_queries_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString/_view/testString/queries')
mock_response = '{"results": [{"total_rows": 0, "update_seq": "update_seq", "rows": [{"caused_by": "caused_by", "error": "error", "reason": "reason", "doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "id": "id", "key": "anyValue", "value": "anyValue"}]}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of a ViewQuery model
view_query_model = {}
view_query_model['att_encoding_info'] = False
view_query_model['attachments'] = False
view_query_model['conflicts'] = False
view_query_model['descending'] = False
view_query_model['include_docs'] = False
view_query_model['inclusive_end'] = True
view_query_model['limit'] = 0
view_query_model['skip'] = 0
view_query_model['update_seq'] = False
view_query_model['endkey'] = 'testString'
view_query_model['endkey_docid'] = 'testString'
view_query_model['group'] = False
view_query_model['group_level'] = 1
view_query_model['key'] = 'testString'
view_query_model['keys'] = ['testString']
view_query_model['reduce'] = True
view_query_model['stable'] = False
view_query_model['startkey'] = 'testString'
view_query_model['startkey_docid'] = 'testString'
view_query_model['update'] = 'true'
# Set up parameter values
db = 'testString'
ddoc = 'testString'
view = 'testString'
queries = [view_query_model]
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"ddoc": ddoc,
"view": view,
"queries": queries,
}
for param in req_param_dict.keys():
req_copy = {key: None if key == param else val for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_view_queries(**req_copy)
def test_post_view_queries_value_error_with_retries(self):
# Enable retries and run test_post_view_queries_value_error.
_service.enable_retries()
self.test_post_view_queries_value_error()
# Disable retries and run test_post_view_queries_value_error.
_service.disable_retries()
self.test_post_view_queries_value_error()
class TestPostViewQueriesAsStream():
"""
Test Class for post_view_queries_as_stream
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_view_queries_as_stream_all_params(self):
"""
post_view_queries_as_stream()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString/_view/testString/queries')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of a ViewQuery model
view_query_model = {}
view_query_model['att_encoding_info'] = False
view_query_model['attachments'] = False
view_query_model['conflicts'] = False
view_query_model['descending'] = False
view_query_model['include_docs'] = True
view_query_model['inclusive_end'] = True
view_query_model['limit'] = 5
view_query_model['skip'] = 0
view_query_model['update_seq'] = False
view_query_model['endkey'] = 'testString'
view_query_model['endkey_docid'] = 'testString'
view_query_model['group'] = False
view_query_model['group_level'] = 1
view_query_model['key'] = 'testString'
view_query_model['keys'] = ['testString']
view_query_model['reduce'] = True
view_query_model['stable'] = False
view_query_model['startkey'] = 'testString'
view_query_model['startkey_docid'] = 'testString'
view_query_model['update'] = 'true'
# Set up parameter values
db = 'testString'
ddoc = 'testString'
view = 'testString'
queries = [view_query_model]
# Invoke method
response = _service.post_view_queries_as_stream(
db,
ddoc,
view,
queries,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['queries'] == [view_query_model]
# Verify streamed JSON response
result = response.get_result()
assert isinstance(result, requests.models.Response)
response_buf = result.iter_content(chunk_size=1024)
assert str(next(response_buf), "utf-8") == mock_response
def test_post_view_queries_as_stream_all_params_with_retries(self):
# Enable retries and run test_post_view_queries_as_stream_all_params.
_service.enable_retries()
self.test_post_view_queries_as_stream_all_params()
# Disable retries and run test_post_view_queries_as_stream_all_params.
_service.disable_retries()
self.test_post_view_queries_as_stream_all_params()
@responses.activate
def test_post_view_queries_as_stream_value_error(self):
"""
test_post_view_queries_as_stream_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString/_view/testString/queries')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of a ViewQuery model
view_query_model = {}
view_query_model['att_encoding_info'] = False
view_query_model['attachments'] = False
view_query_model['conflicts'] = False
view_query_model['descending'] = False
view_query_model['include_docs'] = True
view_query_model['inclusive_end'] = True
view_query_model['limit'] = 5
view_query_model['skip'] = 0
view_query_model['update_seq'] = False
view_query_model['endkey'] = 'testString'
view_query_model['endkey_docid'] = 'testString'
view_query_model['group'] = False
view_query_model['group_level'] = 1
view_query_model['key'] = 'testString'
view_query_model['keys'] = ['testString']
view_query_model['reduce'] = True
view_query_model['stable'] = False
view_query_model['startkey'] = 'testString'
view_query_model['startkey_docid'] = 'testString'
view_query_model['update'] = 'true'
# Set up parameter values
db = 'testString'
ddoc = 'testString'
view = 'testString'
queries = [view_query_model]
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"ddoc": ddoc,
"view": view,
"queries": queries,
}
for param in req_param_dict:
# Compare keys by equality; 'is not' relies on string interning and is fragile
req_copy = {key: (val if key != param else None) for key, val in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_view_queries_as_stream(**req_copy)
def test_post_view_queries_as_stream_value_error_with_retries(self):
# Enable retries and run test_post_view_queries_as_stream_value_error.
_service.enable_retries()
self.test_post_view_queries_as_stream_value_error()
# Disable retries and run test_post_view_queries_as_stream_value_error.
_service.disable_retries()
self.test_post_view_queries_as_stream_value_error()
# endregion
##############################################################################
# End of Service: Views
##############################################################################
##############################################################################
# Start of Service: PartitionedDatabases
##############################################################################
# region
class TestNewInstance():
"""
Test Class for new_instance
"""
def test_new_instance(self):
"""
new_instance()
"""
os.environ['TEST_SERVICE_AUTH_TYPE'] = 'noAuth'
service = CloudantV1.new_instance(
service_name='TEST_SERVICE',
)
assert service is not None
assert isinstance(service, CloudantV1)
def test_new_instance_without_authenticator(self):
"""
new_instance_without_authenticator()
"""
with pytest.raises(ValueError, match='authenticator must be provided'):
service = CloudantV1.new_instance()
class TestGetPartitionInformation():
"""
Test Class for get_partition_information
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_partition_information_all_params(self):
"""
get_partition_information()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_partition/testString')
mock_response = '{"db_name": "db_name", "doc_count": 0, "doc_del_count": 0, "partition": "partition", "partitioned_indexes": {"count": 0, "indexes": {"search": 0, "view": 0}, "limit": 0}, "sizes": {"active": 0, "external": 0}}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
partition_key = 'testString'
# Invoke method
response = _service.get_partition_information(
db,
partition_key,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_partition_information_all_params_with_retries(self):
# Enable retries and run test_get_partition_information_all_params.
_service.enable_retries()
self.test_get_partition_information_all_params()
# Disable retries and run test_get_partition_information_all_params.
_service.disable_retries()
self.test_get_partition_information_all_params()
@responses.activate
def test_get_partition_information_value_error(self):
"""
test_get_partition_information_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_partition/testString')
mock_response = '{"db_name": "db_name", "doc_count": 0, "doc_del_count": 0, "partition": "partition", "partitioned_indexes": {"count": 0, "indexes": {"search": 0, "view": 0}, "limit": 0}, "sizes": {"active": 0, "external": 0}}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
partition_key = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"partition_key": partition_key,
}
for param in req_param_dict:
# Compare keys by equality; 'is not' relies on string interning and is fragile
req_copy = {key: (val if key != param else None) for key, val in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_partition_information(**req_copy)
def test_get_partition_information_value_error_with_retries(self):
# Enable retries and run test_get_partition_information_value_error.
_service.enable_retries()
self.test_get_partition_information_value_error()
# Disable retries and run test_get_partition_information_value_error.
_service.disable_retries()
self.test_get_partition_information_value_error()
class TestPostPartitionAllDocs():
"""
Test Class for post_partition_all_docs
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_partition_all_docs_all_params(self):
"""
post_partition_all_docs()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_partition/testString/_all_docs')
mock_response = '{"total_rows": 0, "rows": [{"caused_by": "caused_by", "error": "error", "reason": "reason", "doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "id": "id", "key": "key", "value": {"rev": "rev"}}], "update_seq": "update_seq"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
partition_key = 'testString'
att_encoding_info = False
attachments = False
conflicts = False
descending = False
include_docs = False
inclusive_end = True
limit = 10
skip = 0
update_seq = False
endkey = 'testString'
key = 'testString'
keys = ['testString']
startkey = '0007741142412418284'
# Invoke method
response = _service.post_partition_all_docs(
db,
partition_key,
att_encoding_info=att_encoding_info,
attachments=attachments,
conflicts=conflicts,
descending=descending,
include_docs=include_docs,
inclusive_end=inclusive_end,
limit=limit,
skip=skip,
update_seq=update_seq,
endkey=endkey,
key=key,
keys=keys,
startkey=startkey,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Decompress the gzip-compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['att_encoding_info'] == False
assert req_body['attachments'] == False
assert req_body['conflicts'] == False
assert req_body['descending'] == False
assert req_body['include_docs'] == False
assert req_body['inclusive_end'] == True
assert req_body['limit'] == 10
assert req_body['skip'] == 0
assert req_body['update_seq'] == False
assert req_body['endkey'] == 'testString'
assert req_body['key'] == 'testString'
assert req_body['keys'] == ['testString']
assert req_body['startkey'] == '0007741142412418284'
def test_post_partition_all_docs_all_params_with_retries(self):
# Enable retries and run test_post_partition_all_docs_all_params.
_service.enable_retries()
self.test_post_partition_all_docs_all_params()
# Disable retries and run test_post_partition_all_docs_all_params.
_service.disable_retries()
self.test_post_partition_all_docs_all_params()
@responses.activate
def test_post_partition_all_docs_value_error(self):
"""
test_post_partition_all_docs_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_partition/testString/_all_docs')
mock_response = '{"total_rows": 0, "rows": [{"caused_by": "caused_by", "error": "error", "reason": "reason", "doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "id": "id", "key": "key", "value": {"rev": "rev"}}], "update_seq": "update_seq"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
partition_key = 'testString'
att_encoding_info = False
attachments = False
conflicts = False
descending = False
include_docs = False
inclusive_end = True
limit = 10
skip = 0
update_seq = False
endkey = 'testString'
key = 'testString'
keys = ['testString']
startkey = '0007741142412418284'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"partition_key": partition_key,
}
for param in req_param_dict:
# Compare keys by equality; 'is not' relies on string interning and is fragile
req_copy = {key: (val if key != param else None) for key, val in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_partition_all_docs(**req_copy)
def test_post_partition_all_docs_value_error_with_retries(self):
# Enable retries and run test_post_partition_all_docs_value_error.
_service.enable_retries()
self.test_post_partition_all_docs_value_error()
# Disable retries and run test_post_partition_all_docs_value_error.
_service.disable_retries()
self.test_post_partition_all_docs_value_error()
class TestPostPartitionAllDocsAsStream():
"""
Test Class for post_partition_all_docs_as_stream
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_partition_all_docs_as_stream_all_params(self):
"""
post_partition_all_docs_as_stream()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_partition/testString/_all_docs')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
partition_key = 'testString'
att_encoding_info = False
attachments = False
conflicts = False
descending = False
include_docs = False
inclusive_end = True
limit = 10
skip = 0
update_seq = False
endkey = 'testString'
key = 'testString'
keys = ['testString']
startkey = '0007741142412418284'
# Invoke method
response = _service.post_partition_all_docs_as_stream(
db,
partition_key,
att_encoding_info=att_encoding_info,
attachments=attachments,
conflicts=conflicts,
descending=descending,
include_docs=include_docs,
inclusive_end=inclusive_end,
limit=limit,
skip=skip,
update_seq=update_seq,
endkey=endkey,
key=key,
keys=keys,
startkey=startkey,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Decompress the gzip-compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['att_encoding_info'] == False
assert req_body['attachments'] == False
assert req_body['conflicts'] == False
assert req_body['descending'] == False
assert req_body['include_docs'] == False
assert req_body['inclusive_end'] == True
assert req_body['limit'] == 10
assert req_body['skip'] == 0
assert req_body['update_seq'] == False
assert req_body['endkey'] == 'testString'
assert req_body['key'] == 'testString'
assert req_body['keys'] == ['testString']
assert req_body['startkey'] == '0007741142412418284'
# Verify streamed JSON response
result = response.get_result()
assert isinstance(result, requests.models.Response)
response_buf = result.iter_content(chunk_size=1024)
assert str(next(response_buf), "utf-8") == mock_response
def test_post_partition_all_docs_as_stream_all_params_with_retries(self):
# Enable retries and run test_post_partition_all_docs_as_stream_all_params.
_service.enable_retries()
self.test_post_partition_all_docs_as_stream_all_params()
# Disable retries and run test_post_partition_all_docs_as_stream_all_params.
_service.disable_retries()
self.test_post_partition_all_docs_as_stream_all_params()
@responses.activate
def test_post_partition_all_docs_as_stream_value_error(self):
"""
test_post_partition_all_docs_as_stream_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_partition/testString/_all_docs')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
partition_key = 'testString'
att_encoding_info = False
attachments = False
conflicts = False
descending = False
include_docs = False
inclusive_end = True
limit = 10
skip = 0
update_seq = False
endkey = 'testString'
key = 'testString'
keys = ['testString']
startkey = '0007741142412418284'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"partition_key": partition_key,
}
for param in req_param_dict:
# Compare keys by equality; 'is not' relies on string interning and is fragile
req_copy = {key: (val if key != param else None) for key, val in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_partition_all_docs_as_stream(**req_copy)
def test_post_partition_all_docs_as_stream_value_error_with_retries(self):
# Enable retries and run test_post_partition_all_docs_as_stream_value_error.
_service.enable_retries()
self.test_post_partition_all_docs_as_stream_value_error()
# Disable retries and run test_post_partition_all_docs_as_stream_value_error.
_service.disable_retries()
self.test_post_partition_all_docs_as_stream_value_error()
class TestPostPartitionSearch():
"""
Test Class for post_partition_search
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_partition_search_all_params(self):
"""
post_partition_search()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_partition/testString/_design/testString/_search/testString')
mock_response = '{"total_rows": 0, "bookmark": "bookmark", "by": "by", "counts": {"mapKey": {"mapKey": 0}}, "ranges": {"mapKey": {"mapKey": 0}}, "rows": [{"doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "fields": {"mapKey": "anyValue"}, "highlights": {"mapKey": ["inner"]}, "id": "id"}], "groups": [{"total_rows": 0, "bookmark": "bookmark", "by": "by", "counts": {"mapKey": {"mapKey": 0}}, "ranges": {"mapKey": {"mapKey": 0}}, "rows": [{"doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "fields": {"mapKey": "anyValue"}, "highlights": {"mapKey": ["inner"]}, "id": "id"}]}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
partition_key = 'testString'
ddoc = 'testString'
index = 'testString'
query = 'testString'
bookmark = 'testString'
highlight_fields = ['testString']
highlight_number = 1
highlight_post_tag = '</em>'
highlight_pre_tag = '<em>'
highlight_size = 1
include_docs = False
include_fields = ['testString']
limit = 0
sort = ['testString']
stale = 'ok'
# Invoke method
response = _service.post_partition_search(
db,
partition_key,
ddoc,
index,
query,
bookmark=bookmark,
highlight_fields=highlight_fields,
highlight_number=highlight_number,
highlight_post_tag=highlight_post_tag,
highlight_pre_tag=highlight_pre_tag,
highlight_size=highlight_size,
include_docs=include_docs,
include_fields=include_fields,
limit=limit,
sort=sort,
stale=stale,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Decompress the gzip-compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['query'] == 'testString'
assert req_body['bookmark'] == 'testString'
assert req_body['highlight_fields'] == ['testString']
assert req_body['highlight_number'] == 1
assert req_body['highlight_post_tag'] == '</em>'
assert req_body['highlight_pre_tag'] == '<em>'
assert req_body['highlight_size'] == 1
assert req_body['include_docs'] == False
assert req_body['include_fields'] == ['testString']
assert req_body['limit'] == 0
assert req_body['sort'] == ['testString']
assert req_body['stale'] == 'ok'
def test_post_partition_search_all_params_with_retries(self):
# Enable retries and run test_post_partition_search_all_params.
_service.enable_retries()
self.test_post_partition_search_all_params()
# Disable retries and run test_post_partition_search_all_params.
_service.disable_retries()
self.test_post_partition_search_all_params()
@responses.activate
def test_post_partition_search_value_error(self):
"""
test_post_partition_search_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_partition/testString/_design/testString/_search/testString')
mock_response = '{"total_rows": 0, "bookmark": "bookmark", "by": "by", "counts": {"mapKey": {"mapKey": 0}}, "ranges": {"mapKey": {"mapKey": 0}}, "rows": [{"doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "fields": {"mapKey": "anyValue"}, "highlights": {"mapKey": ["inner"]}, "id": "id"}], "groups": [{"total_rows": 0, "bookmark": "bookmark", "by": "by", "counts": {"mapKey": {"mapKey": 0}}, "ranges": {"mapKey": {"mapKey": 0}}, "rows": [{"doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "fields": {"mapKey": "anyValue"}, "highlights": {"mapKey": ["inner"]}, "id": "id"}]}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
partition_key = 'testString'
ddoc = 'testString'
index = 'testString'
query = 'testString'
bookmark = 'testString'
highlight_fields = ['testString']
highlight_number = 1
highlight_post_tag = '</em>'
highlight_pre_tag = '<em>'
highlight_size = 1
include_docs = False
include_fields = ['testString']
limit = 0
sort = ['testString']
stale = 'ok'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"partition_key": partition_key,
"ddoc": ddoc,
"index": index,
"query": query,
}
for param in req_param_dict:
# Compare keys by equality; 'is not' relies on string interning and is fragile
req_copy = {key: (val if key != param else None) for key, val in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_partition_search(**req_copy)
def test_post_partition_search_value_error_with_retries(self):
# Enable retries and run test_post_partition_search_value_error.
_service.enable_retries()
self.test_post_partition_search_value_error()
# Disable retries and run test_post_partition_search_value_error.
_service.disable_retries()
self.test_post_partition_search_value_error()
class TestPostPartitionSearchAsStream():
"""
Test Class for post_partition_search_as_stream
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_partition_search_as_stream_all_params(self):
"""
post_partition_search_as_stream()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_partition/testString/_design/testString/_search/testString')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
partition_key = 'testString'
ddoc = 'testString'
index = 'testString'
query = 'testString'
bookmark = 'testString'
highlight_fields = ['testString']
highlight_number = 1
highlight_post_tag = '</em>'
highlight_pre_tag = '<em>'
highlight_size = 1
include_docs = False
include_fields = ['testString']
limit = 3
sort = ['testString']
stale = 'ok'
# Invoke method
response = _service.post_partition_search_as_stream(
db,
partition_key,
ddoc,
index,
query,
bookmark=bookmark,
highlight_fields=highlight_fields,
highlight_number=highlight_number,
highlight_post_tag=highlight_post_tag,
highlight_pre_tag=highlight_pre_tag,
highlight_size=highlight_size,
include_docs=include_docs,
include_fields=include_fields,
limit=limit,
sort=sort,
stale=stale,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Decompress the gzip-compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['query'] == 'testString'
assert req_body['bookmark'] == 'testString'
assert req_body['highlight_fields'] == ['testString']
assert req_body['highlight_number'] == 1
assert req_body['highlight_post_tag'] == '</em>'
assert req_body['highlight_pre_tag'] == '<em>'
assert req_body['highlight_size'] == 1
assert req_body['include_docs'] == False
assert req_body['include_fields'] == ['testString']
assert req_body['limit'] == 3
assert req_body['sort'] == ['testString']
assert req_body['stale'] == 'ok'
# Verify streamed JSON response
result = response.get_result()
assert isinstance(result, requests.models.Response)
response_buf = result.iter_content(chunk_size=1024)
assert str(next(response_buf), "utf-8") == mock_response
def test_post_partition_search_as_stream_all_params_with_retries(self):
# Enable retries and run test_post_partition_search_as_stream_all_params.
_service.enable_retries()
self.test_post_partition_search_as_stream_all_params()
# Disable retries and run test_post_partition_search_as_stream_all_params.
_service.disable_retries()
self.test_post_partition_search_as_stream_all_params()
@responses.activate
def test_post_partition_search_as_stream_value_error(self):
"""
test_post_partition_search_as_stream_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_partition/testString/_design/testString/_search/testString')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
partition_key = 'testString'
ddoc = 'testString'
index = 'testString'
query = 'testString'
bookmark = 'testString'
highlight_fields = ['testString']
highlight_number = 1
highlight_post_tag = '</em>'
highlight_pre_tag = '<em>'
highlight_size = 1
include_docs = False
include_fields = ['testString']
limit = 3
sort = ['testString']
stale = 'ok'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"partition_key": partition_key,
"ddoc": ddoc,
"index": index,
"query": query,
}
for param in req_param_dict:
# Compare keys by equality; 'is not' relies on string interning and is fragile
req_copy = {key: (val if key != param else None) for key, val in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_partition_search_as_stream(**req_copy)
def test_post_partition_search_as_stream_value_error_with_retries(self):
# Enable retries and run test_post_partition_search_as_stream_value_error.
_service.enable_retries()
self.test_post_partition_search_as_stream_value_error()
# Disable retries and run test_post_partition_search_as_stream_value_error.
_service.disable_retries()
self.test_post_partition_search_as_stream_value_error()
class TestPostPartitionView():
"""
Test Class for post_partition_view
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_partition_view_all_params(self):
"""
post_partition_view()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_partition/testString/_design/testString/_view/testString')
mock_response = '{"total_rows": 0, "update_seq": "update_seq", "rows": [{"caused_by": "caused_by", "error": "error", "reason": "reason", "doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "id": "id", "key": "anyValue", "value": "anyValue"}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
partition_key = 'testString'
ddoc = 'testString'
view = 'testString'
att_encoding_info = False
attachments = False
conflicts = False
descending = False
include_docs = True
inclusive_end = True
limit = 10
skip = 0
update_seq = False
endkey = 'testString'
endkey_docid = 'testString'
group = False
group_level = 1
key = 'testString'
keys = ['examplekey']
reduce = True
stable = False
startkey = 'testString'
startkey_docid = 'testString'
update = 'true'
# Invoke method
response = _service.post_partition_view(
db,
partition_key,
ddoc,
view,
att_encoding_info=att_encoding_info,
attachments=attachments,
conflicts=conflicts,
descending=descending,
include_docs=include_docs,
inclusive_end=inclusive_end,
limit=limit,
skip=skip,
update_seq=update_seq,
endkey=endkey,
endkey_docid=endkey_docid,
group=group,
group_level=group_level,
key=key,
keys=keys,
reduce=reduce,
stable=stable,
startkey=startkey,
startkey_docid=startkey_docid,
update=update,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Decompress the gzip-compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['att_encoding_info'] == False
assert req_body['attachments'] == False
assert req_body['conflicts'] == False
assert req_body['descending'] == False
assert req_body['include_docs'] == True
assert req_body['inclusive_end'] == True
assert req_body['limit'] == 10
assert req_body['skip'] == 0
assert req_body['update_seq'] == False
assert req_body['endkey'] == 'testString'
assert req_body['endkey_docid'] == 'testString'
assert req_body['group'] == False
assert req_body['group_level'] == 1
assert req_body['key'] == 'testString'
assert req_body['keys'] == ['examplekey']
assert req_body['reduce'] == True
assert req_body['stable'] == False
assert req_body['startkey'] == 'testString'
assert req_body['startkey_docid'] == 'testString'
assert req_body['update'] == 'true'
def test_post_partition_view_all_params_with_retries(self):
# Enable retries and run test_post_partition_view_all_params.
_service.enable_retries()
self.test_post_partition_view_all_params()
# Disable retries and run test_post_partition_view_all_params.
_service.disable_retries()
self.test_post_partition_view_all_params()
@responses.activate
def test_post_partition_view_value_error(self):
"""
test_post_partition_view_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_partition/testString/_design/testString/_view/testString')
mock_response = '{"total_rows": 0, "update_seq": "update_seq", "rows": [{"caused_by": "caused_by", "error": "error", "reason": "reason", "doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "id": "id", "key": "anyValue", "value": "anyValue"}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
partition_key = 'testString'
ddoc = 'testString'
view = 'testString'
att_encoding_info = False
attachments = False
conflicts = False
descending = False
include_docs = True
inclusive_end = True
limit = 10
skip = 0
update_seq = False
endkey = 'testString'
endkey_docid = 'testString'
group = False
group_level = 1
key = 'testString'
keys = ['examplekey']
reduce = True
stable = False
startkey = 'testString'
startkey_docid = 'testString'
update = 'true'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"partition_key": partition_key,
"ddoc": ddoc,
"view": view,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_partition_view(**req_copy)
def test_post_partition_view_value_error_with_retries(self):
# Enable retries and run test_post_partition_view_value_error.
_service.enable_retries()
self.test_post_partition_view_value_error()
# Disable retries and run test_post_partition_view_value_error.
_service.disable_retries()
self.test_post_partition_view_value_error()
class TestPostPartitionViewAsStream():
"""
Test Class for post_partition_view_as_stream
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
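# Note on preprocess_url (explanatory sketch, not part of the generated suite):
# a URL without a trailing slash comes back as a percent-quoted string, while a
# URL ending in '/' is turned into a compiled regex so the mock matches any
# number of trailing slashes. Illustrative values (hypothetical host):
#   preprocess_url('http://localhost/db/doc')  -> 'http://localhost/db/doc'
#   preprocess_url('http://localhost/db/')     -> re.compile('http://localhost/db/+')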
@responses.activate
def test_post_partition_view_as_stream_all_params(self):
"""
post_partition_view_as_stream()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_partition/testString/_design/testString/_view/testString')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
partition_key = 'testString'
ddoc = 'testString'
view = 'testString'
att_encoding_info = False
attachments = False
conflicts = False
descending = False
include_docs = True
inclusive_end = True
limit = 10
skip = 0
update_seq = False
endkey = 'testString'
endkey_docid = 'testString'
group = False
group_level = 1
key = 'testString'
keys = ['examplekey']
reduce = True
stable = False
startkey = 'testString'
startkey_docid = 'testString'
update = 'true'
# Invoke method
response = _service.post_partition_view_as_stream(
db,
partition_key,
ddoc,
view,
att_encoding_info=att_encoding_info,
attachments=attachments,
conflicts=conflicts,
descending=descending,
include_docs=include_docs,
inclusive_end=inclusive_end,
limit=limit,
skip=skip,
update_seq=update_seq,
endkey=endkey,
endkey_docid=endkey_docid,
group=group,
group_level=group_level,
key=key,
keys=keys,
reduce=reduce,
stable=stable,
startkey=startkey,
startkey_docid=startkey_docid,
update=update,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['att_encoding_info'] == False
assert req_body['attachments'] == False
assert req_body['conflicts'] == False
assert req_body['descending'] == False
assert req_body['include_docs'] == True
assert req_body['inclusive_end'] == True
assert req_body['limit'] == 10
assert req_body['skip'] == 0
assert req_body['update_seq'] == False
assert req_body['endkey'] == 'testString'
assert req_body['endkey_docid'] == 'testString'
assert req_body['group'] == False
assert req_body['group_level'] == 1
assert req_body['key'] == 'testString'
assert req_body['keys'] == ['examplekey']
assert req_body['reduce'] == True
assert req_body['stable'] == False
assert req_body['startkey'] == 'testString'
assert req_body['startkey_docid'] == 'testString'
assert req_body['update'] == 'true'
# Verify streamed JSON response
result = response.get_result()
assert isinstance(result, requests.models.Response)
response_buf = result.iter_content(chunk_size=1024)
assert str(next(response_buf), "utf-8") == mock_response
def test_post_partition_view_as_stream_all_params_with_retries(self):
# Enable retries and run test_post_partition_view_as_stream_all_params.
_service.enable_retries()
self.test_post_partition_view_as_stream_all_params()
# Disable retries and run test_post_partition_view_as_stream_all_params.
_service.disable_retries()
self.test_post_partition_view_as_stream_all_params()
@responses.activate
def test_post_partition_view_as_stream_value_error(self):
"""
test_post_partition_view_as_stream_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_partition/testString/_design/testString/_view/testString')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
partition_key = 'testString'
ddoc = 'testString'
view = 'testString'
att_encoding_info = False
attachments = False
conflicts = False
descending = False
include_docs = True
inclusive_end = True
limit = 10
skip = 0
update_seq = False
endkey = 'testString'
endkey_docid = 'testString'
group = False
group_level = 1
key = 'testString'
keys = ['examplekey']
reduce = True
stable = False
startkey = 'testString'
startkey_docid = 'testString'
update = 'true'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"partition_key": partition_key,
"ddoc": ddoc,
"view": view,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_partition_view_as_stream(**req_copy)
def test_post_partition_view_as_stream_value_error_with_retries(self):
# Enable retries and run test_post_partition_view_as_stream_value_error.
_service.enable_retries()
self.test_post_partition_view_as_stream_value_error()
# Disable retries and run test_post_partition_view_as_stream_value_error.
_service.disable_retries()
self.test_post_partition_view_as_stream_value_error()
class TestPostPartitionFind():
"""
Test Class for post_partition_find
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_partition_find_all_params(self):
"""
post_partition_find()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_partition/testString/_find')
mock_response = '{"bookmark": "bookmark", "docs": [{"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}], "execution_stats": {"execution_time_ms": 17, "results_returned": 0, "total_docs_examined": 0, "total_keys_examined": 0, "total_quorum_docs_examined": 0}, "warning": "warning"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
partition_key = 'testString'
selector = {}
bookmark = 'testString'
conflicts = True
execution_stats = True
fields = ['testString']
limit = 0
skip = 0
sort = [{}]
stable = True
update = 'true'
use_index = ['testString']
# Invoke method
response = _service.post_partition_find(
db,
partition_key,
selector,
bookmark=bookmark,
conflicts=conflicts,
execution_stats=execution_stats,
fields=fields,
limit=limit,
skip=skip,
sort=sort,
stable=stable,
update=update,
use_index=use_index,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['selector'] == {}
assert req_body['bookmark'] == 'testString'
assert req_body['conflicts'] == True
assert req_body['execution_stats'] == True
assert req_body['fields'] == ['testString']
assert req_body['limit'] == 0
assert req_body['skip'] == 0
assert req_body['sort'] == [{}]
assert req_body['stable'] == True
assert req_body['update'] == 'true'
assert req_body['use_index'] == ['testString']
def test_post_partition_find_all_params_with_retries(self):
# Enable retries and run test_post_partition_find_all_params.
_service.enable_retries()
self.test_post_partition_find_all_params()
# Disable retries and run test_post_partition_find_all_params.
_service.disable_retries()
self.test_post_partition_find_all_params()
@responses.activate
def test_post_partition_find_value_error(self):
"""
test_post_partition_find_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_partition/testString/_find')
mock_response = '{"bookmark": "bookmark", "docs": [{"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}], "execution_stats": {"execution_time_ms": 17, "results_returned": 0, "total_docs_examined": 0, "total_keys_examined": 0, "total_quorum_docs_examined": 0}, "warning": "warning"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
partition_key = 'testString'
selector = {}
bookmark = 'testString'
conflicts = True
execution_stats = True
fields = ['testString']
limit = 0
skip = 0
sort = [{}]
stable = True
update = 'true'
use_index = ['testString']
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"partition_key": partition_key,
"selector": selector,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_partition_find(**req_copy)
def test_post_partition_find_value_error_with_retries(self):
# Enable retries and run test_post_partition_find_value_error.
_service.enable_retries()
self.test_post_partition_find_value_error()
# Disable retries and run test_post_partition_find_value_error.
_service.disable_retries()
self.test_post_partition_find_value_error()
class TestPostPartitionFindAsStream():
"""
Test Class for post_partition_find_as_stream
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_partition_find_as_stream_all_params(self):
"""
post_partition_find_as_stream()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_partition/testString/_find')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
partition_key = 'testString'
selector = {}
bookmark = 'testString'
conflicts = True
execution_stats = True
fields = ['productid', 'name', 'description']
limit = 0
skip = 0
sort = [{}]
stable = True
update = 'true'
use_index = ['testString']
# Invoke method
response = _service.post_partition_find_as_stream(
db,
partition_key,
selector,
bookmark=bookmark,
conflicts=conflicts,
execution_stats=execution_stats,
fields=fields,
limit=limit,
skip=skip,
sort=sort,
stable=stable,
update=update,
use_index=use_index,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['selector'] == {}
assert req_body['bookmark'] == 'testString'
assert req_body['conflicts'] == True
assert req_body['execution_stats'] == True
assert req_body['fields'] == ['productid', 'name', 'description']
assert req_body['limit'] == 0
assert req_body['skip'] == 0
assert req_body['sort'] == [{}]
assert req_body['stable'] == True
assert req_body['update'] == 'true'
assert req_body['use_index'] == ['testString']
# Verify streamed JSON response
result = response.get_result()
assert isinstance(result, requests.models.Response)
response_buf = result.iter_content(chunk_size=1024)
assert str(next(response_buf), "utf-8") == mock_response
def test_post_partition_find_as_stream_all_params_with_retries(self):
# Enable retries and run test_post_partition_find_as_stream_all_params.
_service.enable_retries()
self.test_post_partition_find_as_stream_all_params()
# Disable retries and run test_post_partition_find_as_stream_all_params.
_service.disable_retries()
self.test_post_partition_find_as_stream_all_params()
@responses.activate
def test_post_partition_find_as_stream_value_error(self):
"""
test_post_partition_find_as_stream_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_partition/testString/_find')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
partition_key = 'testString'
selector = {}
bookmark = 'testString'
conflicts = True
execution_stats = True
fields = ['productid', 'name', 'description']
limit = 0
skip = 0
sort = [{}]
stable = True
update = 'true'
use_index = ['testString']
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"partition_key": partition_key,
"selector": selector,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_partition_find_as_stream(**req_copy)
def test_post_partition_find_as_stream_value_error_with_retries(self):
# Enable retries and run test_post_partition_find_as_stream_value_error.
_service.enable_retries()
self.test_post_partition_find_as_stream_value_error()
# Disable retries and run test_post_partition_find_as_stream_value_error.
_service.disable_retries()
self.test_post_partition_find_as_stream_value_error()
# endregion
##############################################################################
# End of Service: PartitionedDatabases
##############################################################################
##############################################################################
# Start of Service: Queries
##############################################################################
# region
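# Shared pattern in this region (explanatory sketch, not generated code): the
# SDK gzip-compresses JSON request bodies, so each *_all_params test first
# decompresses the recorded request before validating it, e.g.:
#   body = gzip.decompress(responses.calls[0].request.body)
#   req_body = json.loads(str(body, 'utf-8'))
# The *_value_error tests then null out one required parameter at a time and
# expect the client to raise ValueError before any HTTP request is made.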
class TestNewInstance():
"""
Test Class for new_instance
"""
def test_new_instance(self):
"""
new_instance()
"""
os.environ['TEST_SERVICE_AUTH_TYPE'] = 'noAuth'
service = CloudantV1.new_instance(
service_name='TEST_SERVICE',
)
assert service is not None
assert isinstance(service, CloudantV1)
def test_new_instance_without_authenticator(self):
"""
new_instance_without_authenticator()
"""
with pytest.raises(ValueError, match='authenticator must be provided'):
service = CloudantV1.new_instance()
class TestPostExplain():
"""
Test Class for post_explain
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_explain_all_params(self):
"""
post_explain()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_explain')
mock_response = '{"dbname": "dbname", "fields": ["fields"], "index": {"ddoc": "ddoc", "def": {"default_analyzer": {"name": "classic", "stopwords": ["stopwords"]}, "default_field": {"analyzer": {"name": "classic", "stopwords": ["stopwords"]}, "enabled": true}, "fields": [{"name": "name", "type": "boolean"}], "index_array_lengths": true, "partial_filter_selector": {"mapKey": "anyValue"}}, "name": "name", "type": "json"}, "limit": 0, "opts": {"mapKey": "anyValue"}, "range": {"end_key": ["anyValue"], "start_key": ["anyValue"]}, "selector": {"mapKey": "anyValue"}, "skip": 0}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
selector = {}
bookmark = 'testString'
conflicts = True
execution_stats = True
fields = ['testString']
limit = 0
skip = 0
sort = [{}]
stable = True
update = 'true'
use_index = ['testString']
r = 1
# Invoke method
response = _service.post_explain(
db,
selector,
bookmark=bookmark,
conflicts=conflicts,
execution_stats=execution_stats,
fields=fields,
limit=limit,
skip=skip,
sort=sort,
stable=stable,
update=update,
use_index=use_index,
r=r,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['selector'] == {}
assert req_body['bookmark'] == 'testString'
assert req_body['conflicts'] == True
assert req_body['execution_stats'] == True
assert req_body['fields'] == ['testString']
assert req_body['limit'] == 0
assert req_body['skip'] == 0
assert req_body['sort'] == [{}]
assert req_body['stable'] == True
assert req_body['update'] == 'true'
assert req_body['use_index'] == ['testString']
assert req_body['r'] == 1
def test_post_explain_all_params_with_retries(self):
# Enable retries and run test_post_explain_all_params.
_service.enable_retries()
self.test_post_explain_all_params()
# Disable retries and run test_post_explain_all_params.
_service.disable_retries()
self.test_post_explain_all_params()
@responses.activate
def test_post_explain_value_error(self):
"""
test_post_explain_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_explain')
mock_response = '{"dbname": "dbname", "fields": ["fields"], "index": {"ddoc": "ddoc", "def": {"default_analyzer": {"name": "classic", "stopwords": ["stopwords"]}, "default_field": {"analyzer": {"name": "classic", "stopwords": ["stopwords"]}, "enabled": true}, "fields": [{"name": "name", "type": "boolean"}], "index_array_lengths": true, "partial_filter_selector": {"mapKey": "anyValue"}}, "name": "name", "type": "json"}, "limit": 0, "opts": {"mapKey": "anyValue"}, "range": {"end_key": ["anyValue"], "start_key": ["anyValue"]}, "selector": {"mapKey": "anyValue"}, "skip": 0}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
selector = {}
bookmark = 'testString'
conflicts = True
execution_stats = True
fields = ['testString']
limit = 0
skip = 0
sort = [{}]
stable = True
update = 'true'
use_index = ['testString']
r = 1
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"selector": selector,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_explain(**req_copy)
def test_post_explain_value_error_with_retries(self):
# Enable retries and run test_post_explain_value_error.
_service.enable_retries()
self.test_post_explain_value_error()
# Disable retries and run test_post_explain_value_error.
_service.disable_retries()
self.test_post_explain_value_error()
class TestPostFind():
"""
Test Class for post_find
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_find_all_params(self):
"""
post_find()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_find')
mock_response = '{"bookmark": "bookmark", "docs": [{"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}], "execution_stats": {"execution_time_ms": 17, "results_returned": 0, "total_docs_examined": 0, "total_keys_examined": 0, "total_quorum_docs_examined": 0}, "warning": "warning"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
selector = {}
bookmark = 'testString'
conflicts = True
execution_stats = True
fields = ['_id', 'type', 'name', 'email']
limit = 3
skip = 0
sort = [{}]
stable = True
update = 'true'
use_index = ['testString']
r = 1
# Invoke method
response = _service.post_find(
db,
selector,
bookmark=bookmark,
conflicts=conflicts,
execution_stats=execution_stats,
fields=fields,
limit=limit,
skip=skip,
sort=sort,
stable=stable,
update=update,
use_index=use_index,
r=r,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['selector'] == {}
assert req_body['bookmark'] == 'testString'
assert req_body['conflicts'] == True
assert req_body['execution_stats'] == True
assert req_body['fields'] == ['_id', 'type', 'name', 'email']
assert req_body['limit'] == 3
assert req_body['skip'] == 0
assert req_body['sort'] == [{}]
assert req_body['stable'] == True
assert req_body['update'] == 'true'
assert req_body['use_index'] == ['testString']
assert req_body['r'] == 1
def test_post_find_all_params_with_retries(self):
# Enable retries and run test_post_find_all_params.
_service.enable_retries()
self.test_post_find_all_params()
# Disable retries and run test_post_find_all_params.
_service.disable_retries()
self.test_post_find_all_params()
@responses.activate
def test_post_find_value_error(self):
"""
test_post_find_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_find')
mock_response = '{"bookmark": "bookmark", "docs": [{"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}], "execution_stats": {"execution_time_ms": 17, "results_returned": 0, "total_docs_examined": 0, "total_keys_examined": 0, "total_quorum_docs_examined": 0}, "warning": "warning"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
selector = {}
bookmark = 'testString'
conflicts = True
execution_stats = True
fields = ['_id', 'type', 'name', 'email']
limit = 3
skip = 0
sort = [{}]
stable = True
update = 'true'
use_index = ['testString']
r = 1
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"selector": selector,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_find(**req_copy)
def test_post_find_value_error_with_retries(self):
# Enable retries and run test_post_find_value_error.
_service.enable_retries()
self.test_post_find_value_error()
# Disable retries and run test_post_find_value_error.
_service.disable_retries()
self.test_post_find_value_error()
class TestPostFindAsStream():
"""
Test Class for post_find_as_stream
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_find_as_stream_all_params(self):
"""
post_find_as_stream()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_find')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
selector = {}
bookmark = 'testString'
conflicts = True
execution_stats = True
fields = ['_id', 'type', 'name', 'email']
limit = 3
skip = 0
sort = [{}]
stable = True
update = 'true'
use_index = ['testString']
r = 1
# Invoke method
response = _service.post_find_as_stream(
db,
selector,
bookmark=bookmark,
conflicts=conflicts,
execution_stats=execution_stats,
fields=fields,
limit=limit,
skip=skip,
sort=sort,
stable=stable,
update=update,
use_index=use_index,
r=r,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['selector'] == {}
assert req_body['bookmark'] == 'testString'
assert req_body['conflicts'] == True
assert req_body['execution_stats'] == True
assert req_body['fields'] == ['_id', 'type', 'name', 'email']
assert req_body['limit'] == 3
assert req_body['skip'] == 0
assert req_body['sort'] == [{}]
assert req_body['stable'] == True
assert req_body['update'] == 'true'
assert req_body['use_index'] == ['testString']
assert req_body['r'] == 1
# Verify streamed JSON response
result = response.get_result()
assert isinstance(result, requests.models.Response)
response_buf = result.iter_content(chunk_size=1024)
assert str(next(response_buf), "utf-8") == mock_response
def test_post_find_as_stream_all_params_with_retries(self):
# Enable retries and run test_post_find_as_stream_all_params.
_service.enable_retries()
self.test_post_find_as_stream_all_params()
# Disable retries and run test_post_find_as_stream_all_params.
_service.disable_retries()
self.test_post_find_as_stream_all_params()
@responses.activate
def test_post_find_as_stream_value_error(self):
"""
test_post_find_as_stream_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_find')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
selector = {}
bookmark = 'testString'
conflicts = True
execution_stats = True
fields = ['_id', 'type', 'name', 'email']
limit = 3
skip = 0
sort = [{}]
stable = True
update = 'true'
use_index = ['testString']
r = 1
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"selector": selector,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_find_as_stream(**req_copy)
def test_post_find_as_stream_value_error_with_retries(self):
# Enable retries and run test_post_find_as_stream_value_error.
_service.enable_retries()
self.test_post_find_as_stream_value_error()
# Disable retries and run test_post_find_as_stream_value_error.
_service.disable_retries()
self.test_post_find_as_stream_value_error()
class TestGetIndexesInformation():
"""
Test Class for get_indexes_information
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_indexes_information_all_params(self):
"""
get_indexes_information()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_index')
mock_response = '{"total_rows": 0, "indexes": [{"ddoc": "ddoc", "def": {"default_analyzer": {"name": "classic", "stopwords": ["stopwords"]}, "default_field": {"analyzer": {"name": "classic", "stopwords": ["stopwords"]}, "enabled": true}, "fields": [{"name": "name", "type": "boolean"}], "index_array_lengths": true, "partial_filter_selector": {"mapKey": "anyValue"}}, "name": "name", "type": "json"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
# Invoke method
response = _service.get_indexes_information(
db,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_indexes_information_all_params_with_retries(self):
# Enable retries and run test_get_indexes_information_all_params.
_service.enable_retries()
self.test_get_indexes_information_all_params()
# Disable retries and run test_get_indexes_information_all_params.
_service.disable_retries()
self.test_get_indexes_information_all_params()
@responses.activate
def test_get_indexes_information_value_error(self):
"""
test_get_indexes_information_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_index')
mock_response = '{"total_rows": 0, "indexes": [{"ddoc": "ddoc", "def": {"default_analyzer": {"name": "classic", "stopwords": ["stopwords"]}, "default_field": {"analyzer": {"name": "classic", "stopwords": ["stopwords"]}, "enabled": true}, "fields": [{"name": "name", "type": "boolean"}], "index_array_lengths": true, "partial_filter_selector": {"mapKey": "anyValue"}}, "name": "name", "type": "json"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_indexes_information(**req_copy)
def test_get_indexes_information_value_error_with_retries(self):
# Enable retries and run test_get_indexes_information_value_error.
_service.enable_retries()
self.test_get_indexes_information_value_error()
# Disable retries and run test_get_indexes_information_value_error.
_service.disable_retries()
self.test_get_indexes_information_value_error()
class TestPostIndex():
"""
Test Class for post_index
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_index_all_params(self):
"""
post_index()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_index')
mock_response = '{"id": "id", "name": "name", "result": "created"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of an Analyzer model
analyzer_model = {}
analyzer_model['name'] = 'classic'
analyzer_model['stopwords'] = ['testString']
# Construct a dict representation of an IndexTextOperatorDefaultField model
index_text_operator_default_field_model = {}
index_text_operator_default_field_model['analyzer'] = analyzer_model
index_text_operator_default_field_model['enabled'] = True
# Construct a dict representation of an IndexField model
index_field_model = {}
index_field_model['name'] = 'testString'
index_field_model['type'] = 'boolean'
index_field_model['foo'] = 'asc'
# Construct a dict representation of an IndexDefinition model
index_definition_model = {}
index_definition_model['default_analyzer'] = analyzer_model
index_definition_model['default_field'] = index_text_operator_default_field_model
index_definition_model['fields'] = [index_field_model]
index_definition_model['index_array_lengths'] = True
index_definition_model['partial_filter_selector'] = {}
# Set up parameter values
db = 'testString'
index = index_definition_model
ddoc = 'testString'
def_ = index_definition_model
name = 'testString'
partitioned = True
type = 'json'
# Invoke method
response = _service.post_index(
db,
index,
ddoc=ddoc,
def_=def_,
name=name,
partitioned=partitioned,
type=type,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Decompress the gzip-compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['index'] == index_definition_model
assert req_body['ddoc'] == 'testString'
assert req_body['def'] == index_definition_model
assert req_body['name'] == 'testString'
assert req_body['partitioned'] == True
assert req_body['type'] == 'json'
def test_post_index_all_params_with_retries(self):
# Enable retries and run test_post_index_all_params.
_service.enable_retries()
self.test_post_index_all_params()
# Disable retries and run test_post_index_all_params.
_service.disable_retries()
self.test_post_index_all_params()
@responses.activate
def test_post_index_value_error(self):
"""
test_post_index_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_index')
mock_response = '{"id": "id", "name": "name", "result": "created"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of an Analyzer model
analyzer_model = {}
analyzer_model['name'] = 'classic'
analyzer_model['stopwords'] = ['testString']
# Construct a dict representation of an IndexTextOperatorDefaultField model
index_text_operator_default_field_model = {}
index_text_operator_default_field_model['analyzer'] = analyzer_model
index_text_operator_default_field_model['enabled'] = True
# Construct a dict representation of an IndexField model
index_field_model = {}
index_field_model['name'] = 'testString'
index_field_model['type'] = 'boolean'
index_field_model['foo'] = 'asc'
# Construct a dict representation of an IndexDefinition model
index_definition_model = {}
index_definition_model['default_analyzer'] = analyzer_model
index_definition_model['default_field'] = index_text_operator_default_field_model
index_definition_model['fields'] = [index_field_model]
index_definition_model['index_array_lengths'] = True
index_definition_model['partial_filter_selector'] = {}
# Set up parameter values
db = 'testString'
index = index_definition_model
ddoc = 'testString'
def_ = index_definition_model
name = 'testString'
partitioned = True
type = 'json'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"index": index,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_index(**req_copy)
def test_post_index_value_error_with_retries(self):
# Enable retries and run test_post_index_value_error.
_service.enable_retries()
self.test_post_index_value_error()
# Disable retries and run test_post_index_value_error.
_service.disable_retries()
self.test_post_index_value_error()
class TestDeleteIndex():
"""
Test Class for delete_index
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_delete_index_all_params(self):
"""
delete_index()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_index/_design/testString/json/testString')
mock_response = '{"ok": true}'
responses.add(responses.DELETE,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
type = 'json'
index = 'testString'
# Invoke method
response = _service.delete_index(
db,
ddoc,
type,
index,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_delete_index_all_params_with_retries(self):
# Enable retries and run test_delete_index_all_params.
_service.enable_retries()
self.test_delete_index_all_params()
# Disable retries and run test_delete_index_all_params.
_service.disable_retries()
self.test_delete_index_all_params()
@responses.activate
def test_delete_index_value_error(self):
"""
test_delete_index_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_index/_design/testString/json/testString')
mock_response = '{"ok": true}'
responses.add(responses.DELETE,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
type = 'json'
index = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"ddoc": ddoc,
"type": type,
"index": index,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.delete_index(**req_copy)
def test_delete_index_value_error_with_retries(self):
# Enable retries and run test_delete_index_value_error.
_service.enable_retries()
self.test_delete_index_value_error()
# Disable retries and run test_delete_index_value_error.
_service.disable_retries()
self.test_delete_index_value_error()
# endregion
##############################################################################
# End of Service: Queries
##############################################################################
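# The preprocess_url helper repeated in each test class returns either the URL
# string itself or, when the URL ends with '/', a compiled pattern that also
# matches any number of trailing slashes (responses accepts both plain strings
# and compiled patterns as the mock URL). A minimal standalone sketch of the
# same idea follows; the name _example_match_url is illustrative only and is
# not part of the generated test suite.
import re
import urllib.parse

def _example_match_url(request_url: str):
    # Decode first so an already-encoded URL is not double-encoded,
    # then re-encode everything except the scheme/path separators.
    request_url = urllib.parse.unquote(request_url)
    request_url = urllib.parse.quote(request_url, safe=':/')
    if re.fullmatch('.*/+', request_url) is None:
        # No trailing slash: an exact string match is sufficient.
        return request_url
    # Trailing slash: return a pattern tolerating one or more trailing slashes.
    return re.compile(request_url.rstrip('/') + '/+')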
##############################################################################
# Start of Service: Searches
##############################################################################
# region
class TestNewInstance():
"""
Test Class for new_instance
"""
def test_new_instance(self):
"""
new_instance()
"""
os.environ['TEST_SERVICE_AUTH_TYPE'] = 'noAuth'
service = CloudantV1.new_instance(
service_name='TEST_SERVICE',
)
assert service is not None
assert isinstance(service, CloudantV1)
def test_new_instance_without_authenticator(self):
"""
new_instance_without_authenticator()
"""
with pytest.raises(ValueError, match='authenticator must be provided'):
service = CloudantV1.new_instance(
)
class TestPostSearchAnalyze():
"""
Test Class for post_search_analyze
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_search_analyze_all_params(self):
"""
post_search_analyze()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_search_analyze')
mock_response = '{"tokens": ["tokens"]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
analyzer = 'arabic'
text = 'testString'
# Invoke method
response = _service.post_search_analyze(
analyzer,
text,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Decompress the gzip-compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['analyzer'] == 'arabic'
assert req_body['text'] == 'testString'
def test_post_search_analyze_all_params_with_retries(self):
# Enable retries and run test_post_search_analyze_all_params.
_service.enable_retries()
self.test_post_search_analyze_all_params()
# Disable retries and run test_post_search_analyze_all_params.
_service.disable_retries()
self.test_post_search_analyze_all_params()
@responses.activate
def test_post_search_analyze_value_error(self):
"""
test_post_search_analyze_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_search_analyze')
mock_response = '{"tokens": ["tokens"]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
analyzer = 'arabic'
text = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"analyzer": analyzer,
"text": text,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_search_analyze(**req_copy)
def test_post_search_analyze_value_error_with_retries(self):
# Enable retries and run test_post_search_analyze_value_error.
_service.enable_retries()
self.test_post_search_analyze_value_error()
# Disable retries and run test_post_search_analyze_value_error.
_service.disable_retries()
self.test_post_search_analyze_value_error()
class TestPostSearch():
"""
Test Class for post_search
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_search_all_params(self):
"""
post_search()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString/_search/testString')
mock_response = '{"total_rows": 0, "bookmark": "bookmark", "by": "by", "counts": {"mapKey": {"mapKey": 0}}, "ranges": {"mapKey": {"mapKey": 0}}, "rows": [{"doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "fields": {"mapKey": "anyValue"}, "highlights": {"mapKey": ["inner"]}, "id": "id"}], "groups": [{"total_rows": 0, "bookmark": "bookmark", "by": "by", "counts": {"mapKey": {"mapKey": 0}}, "ranges": {"mapKey": {"mapKey": 0}}, "rows": [{"doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "fields": {"mapKey": "anyValue"}, "highlights": {"mapKey": ["inner"]}, "id": "id"}]}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
index = 'testString'
query = 'testString'
bookmark = 'testString'
highlight_fields = ['testString']
highlight_number = 1
highlight_post_tag = '</em>'
highlight_pre_tag = '<em>'
highlight_size = 1
include_docs = False
include_fields = ['testString']
limit = 0
sort = ['testString']
stale = 'ok'
counts = ['testString']
drilldown = [['testString']]
group_field = 'testString'
group_limit = 1
group_sort = ['testString']
ranges = {}
# Invoke method
response = _service.post_search(
db,
ddoc,
index,
query,
bookmark=bookmark,
highlight_fields=highlight_fields,
highlight_number=highlight_number,
highlight_post_tag=highlight_post_tag,
highlight_pre_tag=highlight_pre_tag,
highlight_size=highlight_size,
include_docs=include_docs,
include_fields=include_fields,
limit=limit,
sort=sort,
stale=stale,
counts=counts,
drilldown=drilldown,
group_field=group_field,
group_limit=group_limit,
group_sort=group_sort,
ranges=ranges,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Decompress the gzip-compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['query'] == 'testString'
assert req_body['bookmark'] == 'testString'
assert req_body['highlight_fields'] == ['testString']
assert req_body['highlight_number'] == 1
assert req_body['highlight_post_tag'] == '</em>'
assert req_body['highlight_pre_tag'] == '<em>'
assert req_body['highlight_size'] == 1
assert req_body['include_docs'] == False
assert req_body['include_fields'] == ['testString']
assert req_body['limit'] == 0
assert req_body['sort'] == ['testString']
assert req_body['stale'] == 'ok'
assert req_body['counts'] == ['testString']
assert req_body['drilldown'] == [['testString']]
assert req_body['group_field'] == 'testString'
assert req_body['group_limit'] == 1
assert req_body['group_sort'] == ['testString']
assert req_body['ranges'] == {}
def test_post_search_all_params_with_retries(self):
# Enable retries and run test_post_search_all_params.
_service.enable_retries()
self.test_post_search_all_params()
# Disable retries and run test_post_search_all_params.
_service.disable_retries()
self.test_post_search_all_params()
@responses.activate
def test_post_search_value_error(self):
"""
test_post_search_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString/_search/testString')
mock_response = '{"total_rows": 0, "bookmark": "bookmark", "by": "by", "counts": {"mapKey": {"mapKey": 0}}, "ranges": {"mapKey": {"mapKey": 0}}, "rows": [{"doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "fields": {"mapKey": "anyValue"}, "highlights": {"mapKey": ["inner"]}, "id": "id"}], "groups": [{"total_rows": 0, "bookmark": "bookmark", "by": "by", "counts": {"mapKey": {"mapKey": 0}}, "ranges": {"mapKey": {"mapKey": 0}}, "rows": [{"doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "fields": {"mapKey": "anyValue"}, "highlights": {"mapKey": ["inner"]}, "id": "id"}]}]}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
index = 'testString'
query = 'testString'
bookmark = 'testString'
highlight_fields = ['testString']
highlight_number = 1
highlight_post_tag = '</em>'
highlight_pre_tag = '<em>'
highlight_size = 1
include_docs = False
include_fields = ['testString']
limit = 0
sort = ['testString']
stale = 'ok'
counts = ['testString']
drilldown = [['testString']]
group_field = 'testString'
group_limit = 1
group_sort = ['testString']
ranges = {}
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"ddoc": ddoc,
"index": index,
"query": query,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_search(**req_copy)
def test_post_search_value_error_with_retries(self):
# Enable retries and run test_post_search_value_error.
_service.enable_retries()
self.test_post_search_value_error()
# Disable retries and run test_post_search_value_error.
_service.disable_retries()
self.test_post_search_value_error()
class TestPostSearchAsStream():
"""
Test Class for post_search_as_stream
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_search_as_stream_all_params(self):
"""
post_search_as_stream()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString/_search/testString')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
index = 'testString'
query = 'testString'
bookmark = 'testString'
highlight_fields = ['testString']
highlight_number = 1
highlight_post_tag = '</em>'
highlight_pre_tag = '<em>'
highlight_size = 1
include_docs = False
include_fields = ['testString']
limit = 3
sort = ['testString']
stale = 'ok'
counts = ['testString']
drilldown = [['testString']]
group_field = 'testString'
group_limit = 1
group_sort = ['testString']
ranges = {}
# Invoke method
response = _service.post_search_as_stream(
db,
ddoc,
index,
query,
bookmark=bookmark,
highlight_fields=highlight_fields,
highlight_number=highlight_number,
highlight_post_tag=highlight_post_tag,
highlight_pre_tag=highlight_pre_tag,
highlight_size=highlight_size,
include_docs=include_docs,
include_fields=include_fields,
limit=limit,
sort=sort,
stale=stale,
counts=counts,
drilldown=drilldown,
group_field=group_field,
group_limit=group_limit,
group_sort=group_sort,
ranges=ranges,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Decompress the gzip-compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['query'] == 'testString'
assert req_body['bookmark'] == 'testString'
assert req_body['highlight_fields'] == ['testString']
assert req_body['highlight_number'] == 1
assert req_body['highlight_post_tag'] == '</em>'
assert req_body['highlight_pre_tag'] == '<em>'
assert req_body['highlight_size'] == 1
assert req_body['include_docs'] == False
assert req_body['include_fields'] == ['testString']
assert req_body['limit'] == 3
assert req_body['sort'] == ['testString']
assert req_body['stale'] == 'ok'
assert req_body['counts'] == ['testString']
assert req_body['drilldown'] == [['testString']]
assert req_body['group_field'] == 'testString'
assert req_body['group_limit'] == 1
assert req_body['group_sort'] == ['testString']
assert req_body['ranges'] == {}
# Verify streamed JSON response
result = response.get_result()
assert isinstance(result, requests.models.Response)
response_buf = result.iter_content(chunk_size=1024)
assert str(next(response_buf), "utf-8") == mock_response
def test_post_search_as_stream_all_params_with_retries(self):
# Enable retries and run test_post_search_as_stream_all_params.
_service.enable_retries()
self.test_post_search_as_stream_all_params()
# Disable retries and run test_post_search_as_stream_all_params.
_service.disable_retries()
self.test_post_search_as_stream_all_params()
@responses.activate
def test_post_search_as_stream_value_error(self):
"""
test_post_search_as_stream_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString/_search/testString')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
index = 'testString'
query = 'testString'
bookmark = 'testString'
highlight_fields = ['testString']
highlight_number = 1
highlight_post_tag = '</em>'
highlight_pre_tag = '<em>'
highlight_size = 1
include_docs = False
include_fields = ['testString']
limit = 3
sort = ['testString']
stale = 'ok'
counts = ['testString']
drilldown = [['testString']]
group_field = 'testString'
group_limit = 1
group_sort = ['testString']
ranges = {}
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"ddoc": ddoc,
"index": index,
"query": query,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_search_as_stream(**req_copy)
def test_post_search_as_stream_value_error_with_retries(self):
# Enable retries and run test_post_search_as_stream_value_error.
_service.enable_retries()
self.test_post_search_as_stream_value_error()
# Disable retries and run test_post_search_as_stream_value_error.
_service.disable_retries()
self.test_post_search_as_stream_value_error()
class TestGetSearchInfo():
"""
Test Class for get_search_info
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_search_info_all_params(self):
"""
get_search_info()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString/_search_info/testString')
mock_response = '{"name": "name", "search_index": {"committed_seq": 13, "disk_size": 0, "doc_count": 0, "doc_del_count": 0, "pending_seq": 11}}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
index = 'testString'
# Invoke method
response = _service.get_search_info(
db,
ddoc,
index,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_search_info_all_params_with_retries(self):
# Enable retries and run test_get_search_info_all_params.
_service.enable_retries()
self.test_get_search_info_all_params()
# Disable retries and run test_get_search_info_all_params.
_service.disable_retries()
self.test_get_search_info_all_params()
@responses.activate
def test_get_search_info_value_error(self):
"""
test_get_search_info_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString/_search_info/testString')
mock_response = '{"name": "name", "search_index": {"committed_seq": 13, "disk_size": 0, "doc_count": 0, "doc_del_count": 0, "pending_seq": 11}}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
index = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"ddoc": ddoc,
"index": index,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_search_info(**req_copy)
def test_get_search_info_value_error_with_retries(self):
# Enable retries and run test_get_search_info_value_error.
_service.enable_retries()
self.test_get_search_info_value_error()
# Disable retries and run test_get_search_info_value_error.
_service.disable_retries()
self.test_get_search_info_value_error()
# endregion
##############################################################################
# End of Service: Searches
##############################################################################
##############################################################################
# Start of Service: Geospatial
##############################################################################
# region
class TestNewInstance():
"""
Test Class for new_instance
"""
def test_new_instance(self):
"""
new_instance()
"""
os.environ['TEST_SERVICE_AUTH_TYPE'] = 'noAuth'
service = CloudantV1.new_instance(
service_name='TEST_SERVICE',
)
assert service is not None
assert isinstance(service, CloudantV1)
def test_new_instance_without_authenticator(self):
"""
new_instance_without_authenticator()
"""
with pytest.raises(ValueError, match='authenticator must be provided'):
service = CloudantV1.new_instance(
)
class TestGetGeo():
"""
Test Class for get_geo
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_geo_all_params(self):
"""
get_geo()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString/_geo/testString')
mock_response = '{"bookmark": "bookmark", "features": [{"_id": "id", "_rev": "rev", "bbox": [4], "geometry": {"type": "Point", "coordinates": ["anyValue"]}, "properties": {"mapKey": "anyValue"}, "type": "Feature"}], "rows": [{"doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "geometry": {"type": "Point", "coordinates": ["anyValue"]}, "id": "id", "rev": "rev"}], "type": "FeatureCollection"}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
index = 'testString'
bbox = 'testString'
bookmark = 'testString'
format = 'view'
g = 'testString'
include_docs = False
lat = -90
limit = 0
lon = -180
nearest = False
radius = 0
rangex = 0
rangey = 0
relation = 'intersects'
skip = 0
stale = 'ok'
# Invoke method
response = _service.get_geo(
db,
ddoc,
index,
bbox=bbox,
bookmark=bookmark,
format=format,
g=g,
include_docs=include_docs,
lat=lat,
limit=limit,
lon=lon,
nearest=nearest,
radius=radius,
rangex=rangex,
rangey=rangey,
relation=relation,
skip=skip,
stale=stale,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'bbox={}'.format(bbox) in query_string
assert 'bookmark={}'.format(bookmark) in query_string
assert 'format={}'.format(format) in query_string
assert 'g={}'.format(g) in query_string
assert 'include_docs={}'.format('true' if include_docs else 'false') in query_string
assert 'lat={}'.format(lat) in query_string
assert 'limit={}'.format(limit) in query_string
assert 'lon={}'.format(lon) in query_string
assert 'nearest={}'.format('true' if nearest else 'false') in query_string
assert 'radius={}'.format(radius) in query_string
assert 'rangex={}'.format(rangex) in query_string
assert 'rangey={}'.format(rangey) in query_string
assert 'relation={}'.format(relation) in query_string
assert 'skip={}'.format(skip) in query_string
assert 'stale={}'.format(stale) in query_string
def test_get_geo_all_params_with_retries(self):
# Enable retries and run test_get_geo_all_params.
_service.enable_retries()
self.test_get_geo_all_params()
# Disable retries and run test_get_geo_all_params.
_service.disable_retries()
self.test_get_geo_all_params()
@responses.activate
def test_get_geo_required_params(self):
"""
test_get_geo_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString/_geo/testString')
mock_response = '{"bookmark": "bookmark", "features": [{"_id": "id", "_rev": "rev", "bbox": [4], "geometry": {"type": "Point", "coordinates": ["anyValue"]}, "properties": {"mapKey": "anyValue"}, "type": "Feature"}], "rows": [{"doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "geometry": {"type": "Point", "coordinates": ["anyValue"]}, "id": "id", "rev": "rev"}], "type": "FeatureCollection"}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
index = 'testString'
# Invoke method
response = _service.get_geo(
db,
ddoc,
index,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_geo_required_params_with_retries(self):
# Enable retries and run test_get_geo_required_params.
_service.enable_retries()
self.test_get_geo_required_params()
# Disable retries and run test_get_geo_required_params.
_service.disable_retries()
self.test_get_geo_required_params()
@responses.activate
def test_get_geo_value_error(self):
"""
test_get_geo_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString/_geo/testString')
mock_response = '{"bookmark": "bookmark", "features": [{"_id": "id", "_rev": "rev", "bbox": [4], "geometry": {"type": "Point", "coordinates": ["anyValue"]}, "properties": {"mapKey": "anyValue"}, "type": "Feature"}], "rows": [{"doc": {"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}, "geometry": {"type": "Point", "coordinates": ["anyValue"]}, "id": "id", "rev": "rev"}], "type": "FeatureCollection"}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
index = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"ddoc": ddoc,
"index": index,
}
for param in req_param_dict:
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_geo(**req_copy)
def test_get_geo_value_error_with_retries(self):
# Enable retries and run test_get_geo_value_error.
_service.enable_retries()
self.test_get_geo_value_error()
# Disable retries and run test_get_geo_value_error.
_service.disable_retries()
self.test_get_geo_value_error()
class TestGetGeoAsStream():
"""
Test Class for get_geo_as_stream
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_geo_as_stream_all_params(self):
"""
get_geo_as_stream()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString/_geo/testString')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
index = 'testString'
bbox = 'testString'
bookmark = 'testString'
format = 'view'
g = 'testString'
include_docs = False
lat = -90
limit = 0
lon = -180
nearest = False
radius = 0
rangex = 0
rangey = 0
relation = 'intersects'
skip = 0
stale = 'ok'
# Invoke method
response = _service.get_geo_as_stream(
db,
ddoc,
index,
bbox=bbox,
bookmark=bookmark,
format=format,
g=g,
include_docs=include_docs,
lat=lat,
limit=limit,
lon=lon,
nearest=nearest,
radius=radius,
rangex=rangex,
rangey=rangey,
relation=relation,
skip=skip,
stale=stale,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'bbox={}'.format(bbox) in query_string
assert 'bookmark={}'.format(bookmark) in query_string
assert 'format={}'.format(format) in query_string
assert 'g={}'.format(g) in query_string
assert 'include_docs={}'.format('true' if include_docs else 'false') in query_string
assert 'lat={}'.format(lat) in query_string
assert 'limit={}'.format(limit) in query_string
assert 'lon={}'.format(lon) in query_string
assert 'nearest={}'.format('true' if nearest else 'false') in query_string
assert 'radius={}'.format(radius) in query_string
assert 'rangex={}'.format(rangex) in query_string
assert 'rangey={}'.format(rangey) in query_string
assert 'relation={}'.format(relation) in query_string
assert 'skip={}'.format(skip) in query_string
assert 'stale={}'.format(stale) in query_string
# Verify streamed JSON response
result = response.get_result()
assert isinstance(result, requests.models.Response)
response_buf = result.iter_content(chunk_size=1024)
assert str(next(response_buf), "utf-8") == mock_response
def test_get_geo_as_stream_all_params_with_retries(self):
# Enable retries and run test_get_geo_as_stream_all_params.
_service.enable_retries()
self.test_get_geo_as_stream_all_params()
# Disable retries and run test_get_geo_as_stream_all_params.
_service.disable_retries()
self.test_get_geo_as_stream_all_params()
@responses.activate
def test_get_geo_as_stream_required_params(self):
"""
test_get_geo_as_stream_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString/_geo/testString')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
index = 'testString'
# Invoke method
response = _service.get_geo_as_stream(
db,
ddoc,
index,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Verify streamed JSON response
result = response.get_result()
assert isinstance(result, requests.models.Response)
response_buf = result.iter_content(chunk_size=1024)
assert str(next(response_buf), "utf-8") == mock_response
def test_get_geo_as_stream_required_params_with_retries(self):
# Enable retries and run test_get_geo_as_stream_required_params.
_service.enable_retries()
self.test_get_geo_as_stream_required_params()
# Disable retries and run test_get_geo_as_stream_required_params.
_service.disable_retries()
self.test_get_geo_as_stream_required_params()
@responses.activate
def test_get_geo_as_stream_value_error(self):
"""
test_get_geo_as_stream_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString/_geo/testString')
mock_response = '{"foo": "this is a mock response for JSON streaming"}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
index = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"ddoc": ddoc,
"index": index,
}
for param in req_param_dict:
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_geo_as_stream(**req_copy)
def test_get_geo_as_stream_value_error_with_retries(self):
# Enable retries and run test_get_geo_as_stream_value_error.
_service.enable_retries()
self.test_get_geo_as_stream_value_error()
# Disable retries and run test_get_geo_as_stream_value_error.
_service.disable_retries()
self.test_get_geo_as_stream_value_error()
class TestPostGeoCleanup():
"""
Test Class for post_geo_cleanup
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_geo_cleanup_all_params(self):
"""
post_geo_cleanup()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_geo_cleanup')
mock_response = '{"ok": true}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=202)
# Set up parameter values
db = 'testString'
# Invoke method
response = _service.post_geo_cleanup(
db,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 202
def test_post_geo_cleanup_all_params_with_retries(self):
# Enable retries and run test_post_geo_cleanup_all_params.
_service.enable_retries()
self.test_post_geo_cleanup_all_params()
# Disable retries and run test_post_geo_cleanup_all_params.
_service.disable_retries()
self.test_post_geo_cleanup_all_params()
@responses.activate
def test_post_geo_cleanup_value_error(self):
"""
test_post_geo_cleanup_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_geo_cleanup')
mock_response = '{"ok": true}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=202)
# Set up parameter values
db = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
}
for param in req_param_dict:
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_geo_cleanup(**req_copy)
def test_post_geo_cleanup_value_error_with_retries(self):
# Enable retries and run test_post_geo_cleanup_value_error.
_service.enable_retries()
self.test_post_geo_cleanup_value_error()
# Disable retries and run test_post_geo_cleanup_value_error.
_service.disable_retries()
self.test_post_geo_cleanup_value_error()
class TestGetGeoIndexInformation():
"""
Test Class for get_geo_index_information
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_geo_index_information_all_params(self):
"""
get_geo_index_information()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString/_geo_info/testString')
mock_response = '{"geo_index": {"data_size": 0, "disk_size": 0, "doc_count": 0}, "name": "name"}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
index = 'testString'
# Invoke method
response = _service.get_geo_index_information(
db,
ddoc,
index,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_geo_index_information_all_params_with_retries(self):
# Enable retries and run test_get_geo_index_information_all_params.
_service.enable_retries()
self.test_get_geo_index_information_all_params()
# Disable retries and run test_get_geo_index_information_all_params.
_service.disable_retries()
self.test_get_geo_index_information_all_params()
@responses.activate
def test_get_geo_index_information_value_error(self):
"""
test_get_geo_index_information_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_design/testString/_geo_info/testString')
mock_response = '{"geo_index": {"data_size": 0, "disk_size": 0, "doc_count": 0}, "name": "name"}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
ddoc = 'testString'
index = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"ddoc": ddoc,
"index": index,
}
for param in req_param_dict:
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_geo_index_information(**req_copy)
def test_get_geo_index_information_value_error_with_retries(self):
# Enable retries and run test_get_geo_index_information_value_error.
_service.enable_retries()
self.test_get_geo_index_information_value_error()
# Disable retries and run test_get_geo_index_information_value_error.
_service.disable_retries()
self.test_get_geo_index_information_value_error()
# endregion
##############################################################################
# End of Service: Geospatial
##############################################################################
##############################################################################
# Start of Service: Replication
##############################################################################
# region
class TestNewInstance():
"""
Test Class for new_instance
"""
def test_new_instance(self):
"""
new_instance()
"""
os.environ['TEST_SERVICE_AUTH_TYPE'] = 'noAuth'
service = CloudantV1.new_instance(
service_name='TEST_SERVICE',
)
assert service is not None
assert isinstance(service, CloudantV1)
def test_new_instance_without_authenticator(self):
"""
new_instance_without_authenticator()
"""
with pytest.raises(ValueError, match='authenticator must be provided'):
CloudantV1.new_instance()
class TestHeadReplicationDocument():
"""
Test Class for head_replication_document
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_head_replication_document_all_params(self):
"""
head_replication_document()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_replicator/testString')
responses.add(responses.HEAD,
url,
status=200)
# Set up parameter values
doc_id = 'testString'
if_none_match = 'testString'
# Invoke method
response = _service.head_replication_document(
doc_id,
if_none_match=if_none_match,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_head_replication_document_all_params_with_retries(self):
# Enable retries and run test_head_replication_document_all_params.
_service.enable_retries()
self.test_head_replication_document_all_params()
# Disable retries and run test_head_replication_document_all_params.
_service.disable_retries()
self.test_head_replication_document_all_params()
@responses.activate
def test_head_replication_document_required_params(self):
"""
test_head_replication_document_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_replicator/testString')
responses.add(responses.HEAD,
url,
status=200)
# Set up parameter values
doc_id = 'testString'
# Invoke method
response = _service.head_replication_document(
doc_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_head_replication_document_required_params_with_retries(self):
# Enable retries and run test_head_replication_document_required_params.
_service.enable_retries()
self.test_head_replication_document_required_params()
# Disable retries and run test_head_replication_document_required_params.
_service.disable_retries()
self.test_head_replication_document_required_params()
@responses.activate
def test_head_replication_document_value_error(self):
"""
test_head_replication_document_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_replicator/testString')
responses.add(responses.HEAD,
url,
status=200)
# Set up parameter values
doc_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"doc_id": doc_id,
}
for param in req_param_dict:
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.head_replication_document(**req_copy)
def test_head_replication_document_value_error_with_retries(self):
# Enable retries and run test_head_replication_document_value_error.
_service.enable_retries()
self.test_head_replication_document_value_error()
# Disable retries and run test_head_replication_document_value_error.
_service.disable_retries()
self.test_head_replication_document_value_error()
class TestHeadSchedulerDocument():
"""
Test Class for head_scheduler_document
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_head_scheduler_document_all_params(self):
"""
head_scheduler_document()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_scheduler/docs/_replicator/testString')
responses.add(responses.HEAD,
url,
status=200)
# Set up parameter values
doc_id = 'testString'
# Invoke method
response = _service.head_scheduler_document(
doc_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_head_scheduler_document_all_params_with_retries(self):
# Enable retries and run test_head_scheduler_document_all_params.
_service.enable_retries()
self.test_head_scheduler_document_all_params()
# Disable retries and run test_head_scheduler_document_all_params.
_service.disable_retries()
self.test_head_scheduler_document_all_params()
@responses.activate
def test_head_scheduler_document_value_error(self):
"""
test_head_scheduler_document_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_scheduler/docs/_replicator/testString')
responses.add(responses.HEAD,
url,
status=200)
# Set up parameter values
doc_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"doc_id": doc_id,
}
for param in req_param_dict:
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.head_scheduler_document(**req_copy)
def test_head_scheduler_document_value_error_with_retries(self):
# Enable retries and run test_head_scheduler_document_value_error.
_service.enable_retries()
self.test_head_scheduler_document_value_error()
# Disable retries and run test_head_scheduler_document_value_error.
_service.disable_retries()
self.test_head_scheduler_document_value_error()
class TestHeadSchedulerJob():
"""
Test Class for head_scheduler_job
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_head_scheduler_job_all_params(self):
"""
head_scheduler_job()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_scheduler/jobs/testString')
responses.add(responses.HEAD,
url,
status=200)
# Set up parameter values
job_id = 'testString'
# Invoke method
response = _service.head_scheduler_job(
job_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_head_scheduler_job_all_params_with_retries(self):
# Enable retries and run test_head_scheduler_job_all_params.
_service.enable_retries()
self.test_head_scheduler_job_all_params()
# Disable retries and run test_head_scheduler_job_all_params.
_service.disable_retries()
self.test_head_scheduler_job_all_params()
@responses.activate
def test_head_scheduler_job_value_error(self):
"""
test_head_scheduler_job_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_scheduler/jobs/testString')
responses.add(responses.HEAD,
url,
status=200)
# Set up parameter values
job_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"job_id": job_id,
}
for param in req_param_dict:
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.head_scheduler_job(**req_copy)
def test_head_scheduler_job_value_error_with_retries(self):
# Enable retries and run test_head_scheduler_job_value_error.
_service.enable_retries()
self.test_head_scheduler_job_value_error()
# Disable retries and run test_head_scheduler_job_value_error.
_service.disable_retries()
self.test_head_scheduler_job_value_error()
class TestDeleteReplicationDocument():
"""
Test Class for delete_replication_document
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_delete_replication_document_all_params(self):
"""
delete_replication_document()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_replicator/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.DELETE,
url,
body=mock_response,
content_type='application/json',
status=201)
# Set up parameter values
doc_id = 'testString'
if_match = 'testString'
batch = 'ok'
rev = 'testString'
# Invoke method
response = _service.delete_replication_document(
doc_id,
if_match=if_match,
batch=batch,
rev=rev,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 201
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'batch={}'.format(batch) in query_string
assert 'rev={}'.format(rev) in query_string
def test_delete_replication_document_all_params_with_retries(self):
# Enable retries and run test_delete_replication_document_all_params.
_service.enable_retries()
self.test_delete_replication_document_all_params()
# Disable retries and run test_delete_replication_document_all_params.
_service.disable_retries()
self.test_delete_replication_document_all_params()
@responses.activate
def test_delete_replication_document_required_params(self):
"""
test_delete_replication_document_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_replicator/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.DELETE,
url,
body=mock_response,
content_type='application/json',
status=201)
# Set up parameter values
doc_id = 'testString'
# Invoke method
response = _service.delete_replication_document(
doc_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 201
def test_delete_replication_document_required_params_with_retries(self):
# Enable retries and run test_delete_replication_document_required_params.
_service.enable_retries()
self.test_delete_replication_document_required_params()
# Disable retries and run test_delete_replication_document_required_params.
_service.disable_retries()
self.test_delete_replication_document_required_params()
@responses.activate
def test_delete_replication_document_value_error(self):
"""
test_delete_replication_document_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_replicator/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.DELETE,
url,
body=mock_response,
content_type='application/json',
status=201)
# Set up parameter values
doc_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"doc_id": doc_id,
}
for param in req_param_dict:
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.delete_replication_document(**req_copy)
def test_delete_replication_document_value_error_with_retries(self):
# Enable retries and run test_delete_replication_document_value_error.
_service.enable_retries()
self.test_delete_replication_document_value_error()
# Disable retries and run test_delete_replication_document_value_error.
_service.disable_retries()
self.test_delete_replication_document_value_error()
class TestGetReplicationDocument():
"""
Test Class for get_replication_document
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_replication_document_all_params(self):
"""
get_replication_document()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_replicator/testString')
mock_response = '{"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}], "cancel": true, "checkpoint_interval": 0, "connection_timeout": 0, "continuous": false, "create_target": false, "create_target_params": {"n": 1, "partitioned": false, "q": 1}, "doc_ids": ["doc_ids"], "filter": "filter", "http_connections": 1, "query_params": {"mapKey": "inner"}, "retries_per_request": 0, "selector": {"mapKey": "anyValue"}, "since_seq": "since_seq", "socket_options": "socket_options", "source": {"auth": {"basic": {"password": "password", "username": "username"}, "iam": {"api_key": "api_key"}}, "headers": {"mapKey": "inner"}, "url": "url"}, "source_proxy": "source_proxy", "target": {"auth": {"basic": {"password": "password", "username": "username"}, "iam": {"api_key": "api_key"}}, "headers": {"mapKey": "inner"}, "url": "url"}, "target_proxy": "target_proxy", "use_checkpoints": true, "user_ctx": {"db": "db", "name": "name", "roles": ["_reader"]}, "worker_batch_size": 1, "worker_processes": 1}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
doc_id = 'testString'
if_none_match = 'testString'
attachments = False
att_encoding_info = False
conflicts = False
deleted_conflicts = False
latest = False
local_seq = False
meta = False
rev = 'testString'
revs = False
revs_info = False
# Invoke method
response = _service.get_replication_document(
doc_id,
if_none_match=if_none_match,
attachments=attachments,
att_encoding_info=att_encoding_info,
conflicts=conflicts,
deleted_conflicts=deleted_conflicts,
latest=latest,
local_seq=local_seq,
meta=meta,
rev=rev,
revs=revs,
revs_info=revs_info,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'attachments={}'.format('true' if attachments else 'false') in query_string
assert 'att_encoding_info={}'.format('true' if att_encoding_info else 'false') in query_string
assert 'conflicts={}'.format('true' if conflicts else 'false') in query_string
assert 'deleted_conflicts={}'.format('true' if deleted_conflicts else 'false') in query_string
assert 'latest={}'.format('true' if latest else 'false') in query_string
assert 'local_seq={}'.format('true' if local_seq else 'false') in query_string
assert 'meta={}'.format('true' if meta else 'false') in query_string
assert 'rev={}'.format(rev) in query_string
assert 'revs={}'.format('true' if revs else 'false') in query_string
assert 'revs_info={}'.format('true' if revs_info else 'false') in query_string
def test_get_replication_document_all_params_with_retries(self):
# Enable retries and run test_get_replication_document_all_params.
_service.enable_retries()
self.test_get_replication_document_all_params()
# Disable retries and run test_get_replication_document_all_params.
_service.disable_retries()
self.test_get_replication_document_all_params()
@responses.activate
def test_get_replication_document_required_params(self):
"""
test_get_replication_document_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_replicator/testString')
mock_response = '{"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}], "cancel": true, "checkpoint_interval": 0, "connection_timeout": 0, "continuous": false, "create_target": false, "create_target_params": {"n": 1, "partitioned": false, "q": 1}, "doc_ids": ["doc_ids"], "filter": "filter", "http_connections": 1, "query_params": {"mapKey": "inner"}, "retries_per_request": 0, "selector": {"mapKey": "anyValue"}, "since_seq": "since_seq", "socket_options": "socket_options", "source": {"auth": {"basic": {"password": "password", "username": "username"}, "iam": {"api_key": "api_key"}}, "headers": {"mapKey": "inner"}, "url": "url"}, "source_proxy": "source_proxy", "target": {"auth": {"basic": {"password": "password", "username": "username"}, "iam": {"api_key": "api_key"}}, "headers": {"mapKey": "inner"}, "url": "url"}, "target_proxy": "target_proxy", "use_checkpoints": true, "user_ctx": {"db": "db", "name": "name", "roles": ["_reader"]}, "worker_batch_size": 1, "worker_processes": 1}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
doc_id = 'testString'
# Invoke method
response = _service.get_replication_document(
doc_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_replication_document_required_params_with_retries(self):
# Enable retries and run test_get_replication_document_required_params.
_service.enable_retries()
self.test_get_replication_document_required_params()
# Disable retries and run test_get_replication_document_required_params.
_service.disable_retries()
self.test_get_replication_document_required_params()
@responses.activate
def test_get_replication_document_value_error(self):
"""
test_get_replication_document_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_replicator/testString')
mock_response = '{"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}], "cancel": true, "checkpoint_interval": 0, "connection_timeout": 0, "continuous": false, "create_target": false, "create_target_params": {"n": 1, "partitioned": false, "q": 1}, "doc_ids": ["doc_ids"], "filter": "filter", "http_connections": 1, "query_params": {"mapKey": "inner"}, "retries_per_request": 0, "selector": {"mapKey": "anyValue"}, "since_seq": "since_seq", "socket_options": "socket_options", "source": {"auth": {"basic": {"password": "password", "username": "username"}, "iam": {"api_key": "api_key"}}, "headers": {"mapKey": "inner"}, "url": "url"}, "source_proxy": "source_proxy", "target": {"auth": {"basic": {"password": "password", "username": "username"}, "iam": {"api_key": "api_key"}}, "headers": {"mapKey": "inner"}, "url": "url"}, "target_proxy": "target_proxy", "use_checkpoints": true, "user_ctx": {"db": "db", "name": "name", "roles": ["_reader"]}, "worker_batch_size": 1, "worker_processes": 1}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
doc_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"doc_id": doc_id,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_replication_document(**req_copy)
def test_get_replication_document_value_error_with_retries(self):
# Enable retries and run test_get_replication_document_value_error.
_service.enable_retries()
self.test_get_replication_document_value_error()
# Disable retries and run test_get_replication_document_value_error.
_service.disable_retries()
self.test_get_replication_document_value_error()
class TestPutReplicationDocument():
"""
Test Class for put_replication_document
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_put_replication_document_all_params(self):
"""
put_replication_document()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_replicator/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.PUT,
url,
body=mock_response,
content_type='application/json',
status=201)
# Construct a dict representation of an Attachment model
attachment_model = {}
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
# Construct a dict representation of a Revisions model
revisions_model = {}
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
# Construct a dict representation of a DocumentRevisionStatus model
document_revision_status_model = {}
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
# Construct a dict representation of a ReplicationCreateTargetParameters model
replication_create_target_parameters_model = {}
replication_create_target_parameters_model['n'] = 1
replication_create_target_parameters_model['partitioned'] = False
replication_create_target_parameters_model['q'] = 1
# Construct a dict representation of a ReplicationDatabaseAuthBasic model
replication_database_auth_basic_model = {}
replication_database_auth_basic_model['password'] = 'testString'
replication_database_auth_basic_model['username'] = 'testString'
# Construct a dict representation of a ReplicationDatabaseAuthIam model
replication_database_auth_iam_model = {}
replication_database_auth_iam_model['api_key'] = 'testString'
# Construct a dict representation of a ReplicationDatabaseAuth model
replication_database_auth_model = {}
replication_database_auth_model['basic'] = replication_database_auth_basic_model
replication_database_auth_model['iam'] = replication_database_auth_iam_model
# Construct a dict representation of a ReplicationDatabase model
replication_database_model = {}
replication_database_model['auth'] = replication_database_auth_model
replication_database_model['headers'] = {}
replication_database_model['url'] = 'testString'
# Construct a dict representation of a UserContext model
user_context_model = {}
user_context_model['db'] = 'testString'
user_context_model['name'] = 'testString'
user_context_model['roles'] = ['_reader']
# Construct a dict representation of a ReplicationDocument model
replication_document_model = {}
replication_document_model['_attachments'] = {}
replication_document_model['_conflicts'] = ['testString']
replication_document_model['_deleted'] = True
replication_document_model['_deleted_conflicts'] = ['testString']
replication_document_model['_id'] = 'testString'
replication_document_model['_local_seq'] = 'testString'
replication_document_model['_rev'] = 'testString'
replication_document_model['_revisions'] = revisions_model
replication_document_model['_revs_info'] = [document_revision_status_model]
replication_document_model['cancel'] = True
replication_document_model['checkpoint_interval'] = 0
replication_document_model['connection_timeout'] = 0
replication_document_model['continuous'] = False
replication_document_model['create_target'] = False
replication_document_model['create_target_params'] = replication_create_target_parameters_model
replication_document_model['doc_ids'] = ['testString']
replication_document_model['filter'] = 'testString'
replication_document_model['http_connections'] = 1
replication_document_model['query_params'] = {}
replication_document_model['retries_per_request'] = 0
replication_document_model['selector'] = {}
replication_document_model['since_seq'] = 'testString'
replication_document_model['socket_options'] = 'testString'
replication_document_model['source'] = replication_database_model
replication_document_model['source_proxy'] = 'testString'
replication_document_model['target'] = replication_database_model
replication_document_model['target_proxy'] = 'testString'
replication_document_model['use_checkpoints'] = True
replication_document_model['user_ctx'] = user_context_model
replication_document_model['worker_batch_size'] = 1
replication_document_model['worker_processes'] = 1
replication_document_model['foo'] = 'testString'
# Set up parameter values
doc_id = 'testString'
replication_document = replication_document_model
if_match = 'testString'
batch = 'ok'
new_edits = False
rev = 'testString'
# Invoke method
response = _service.put_replication_document(
doc_id,
replication_document,
if_match=if_match,
batch=batch,
new_edits=new_edits,
rev=rev,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 201
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'batch={}'.format(batch) in query_string
assert 'new_edits={}'.format('true' if new_edits else 'false') in query_string
assert 'rev={}'.format(rev) in query_string
# Decompress the gzip-compressed request body before validation
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body == replication_document
def test_put_replication_document_all_params_with_retries(self):
# Enable retries and run test_put_replication_document_all_params.
_service.enable_retries()
self.test_put_replication_document_all_params()
# Disable retries and run test_put_replication_document_all_params.
_service.disable_retries()
self.test_put_replication_document_all_params()
@responses.activate
def test_put_replication_document_required_params(self):
"""
test_put_replication_document_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_replicator/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.PUT,
url,
body=mock_response,
content_type='application/json',
status=201)
# Construct a dict representation of an Attachment model
attachment_model = {}
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
# Construct a dict representation of a Revisions model
revisions_model = {}
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
# Construct a dict representation of a DocumentRevisionStatus model
document_revision_status_model = {}
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
# Construct a dict representation of a ReplicationCreateTargetParameters model
replication_create_target_parameters_model = {}
replication_create_target_parameters_model['n'] = 1
replication_create_target_parameters_model['partitioned'] = False
replication_create_target_parameters_model['q'] = 1
# Construct a dict representation of a ReplicationDatabaseAuthBasic model
replication_database_auth_basic_model = {}
replication_database_auth_basic_model['password'] = 'testString'
replication_database_auth_basic_model['username'] = 'testString'
# Construct a dict representation of a ReplicationDatabaseAuthIam model
replication_database_auth_iam_model = {}
replication_database_auth_iam_model['api_key'] = 'testString'
# Construct a dict representation of a ReplicationDatabaseAuth model
replication_database_auth_model = {}
replication_database_auth_model['basic'] = replication_database_auth_basic_model
replication_database_auth_model['iam'] = replication_database_auth_iam_model
# Construct a dict representation of a ReplicationDatabase model
replication_database_model = {}
replication_database_model['auth'] = replication_database_auth_model
replication_database_model['headers'] = {}
replication_database_model['url'] = 'testString'
# Construct a dict representation of a UserContext model
user_context_model = {}
user_context_model['db'] = 'testString'
user_context_model['name'] = 'testString'
user_context_model['roles'] = ['_reader']
# Construct a dict representation of a ReplicationDocument model
replication_document_model = {}
replication_document_model['_attachments'] = {}
replication_document_model['_conflicts'] = ['testString']
replication_document_model['_deleted'] = True
replication_document_model['_deleted_conflicts'] = ['testString']
replication_document_model['_id'] = 'testString'
replication_document_model['_local_seq'] = 'testString'
replication_document_model['_rev'] = 'testString'
replication_document_model['_revisions'] = revisions_model
replication_document_model['_revs_info'] = [document_revision_status_model]
replication_document_model['cancel'] = True
replication_document_model['checkpoint_interval'] = 0
replication_document_model['connection_timeout'] = 0
replication_document_model['continuous'] = False
replication_document_model['create_target'] = False
replication_document_model['create_target_params'] = replication_create_target_parameters_model
replication_document_model['doc_ids'] = ['testString']
replication_document_model['filter'] = 'testString'
replication_document_model['http_connections'] = 1
replication_document_model['query_params'] = {}
replication_document_model['retries_per_request'] = 0
replication_document_model['selector'] = {}
replication_document_model['since_seq'] = 'testString'
replication_document_model['socket_options'] = 'testString'
replication_document_model['source'] = replication_database_model
replication_document_model['source_proxy'] = 'testString'
replication_document_model['target'] = replication_database_model
replication_document_model['target_proxy'] = 'testString'
replication_document_model['use_checkpoints'] = True
replication_document_model['user_ctx'] = user_context_model
replication_document_model['worker_batch_size'] = 1
replication_document_model['worker_processes'] = 1
replication_document_model['foo'] = 'testString'
# Set up parameter values
doc_id = 'testString'
replication_document = replication_document_model
# Invoke method
response = _service.put_replication_document(
doc_id,
replication_document,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 201
# Decompress the gzip-compressed request body before validation
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body == replication_document
def test_put_replication_document_required_params_with_retries(self):
# Enable retries and run test_put_replication_document_required_params.
_service.enable_retries()
self.test_put_replication_document_required_params()
# Disable retries and run test_put_replication_document_required_params.
_service.disable_retries()
self.test_put_replication_document_required_params()
@responses.activate
def test_put_replication_document_value_error(self):
"""
test_put_replication_document_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_replicator/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.PUT,
url,
body=mock_response,
content_type='application/json',
status=201)
# Construct a dict representation of an Attachment model
attachment_model = {}
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
# Construct a dict representation of a Revisions model
revisions_model = {}
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
# Construct a dict representation of a DocumentRevisionStatus model
document_revision_status_model = {}
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
# Construct a dict representation of a ReplicationCreateTargetParameters model
replication_create_target_parameters_model = {}
replication_create_target_parameters_model['n'] = 1
replication_create_target_parameters_model['partitioned'] = False
replication_create_target_parameters_model['q'] = 1
# Construct a dict representation of a ReplicationDatabaseAuthBasic model
replication_database_auth_basic_model = {}
replication_database_auth_basic_model['password'] = 'testString'
replication_database_auth_basic_model['username'] = 'testString'
# Construct a dict representation of a ReplicationDatabaseAuthIam model
replication_database_auth_iam_model = {}
replication_database_auth_iam_model['api_key'] = 'testString'
# Construct a dict representation of a ReplicationDatabaseAuth model
replication_database_auth_model = {}
replication_database_auth_model['basic'] = replication_database_auth_basic_model
replication_database_auth_model['iam'] = replication_database_auth_iam_model
# Construct a dict representation of a ReplicationDatabase model
replication_database_model = {}
replication_database_model['auth'] = replication_database_auth_model
replication_database_model['headers'] = {}
replication_database_model['url'] = 'testString'
# Construct a dict representation of a UserContext model
user_context_model = {}
user_context_model['db'] = 'testString'
user_context_model['name'] = 'testString'
user_context_model['roles'] = ['_reader']
# Construct a dict representation of a ReplicationDocument model
replication_document_model = {}
replication_document_model['_attachments'] = {}
replication_document_model['_conflicts'] = ['testString']
replication_document_model['_deleted'] = True
replication_document_model['_deleted_conflicts'] = ['testString']
replication_document_model['_id'] = 'testString'
replication_document_model['_local_seq'] = 'testString'
replication_document_model['_rev'] = 'testString'
replication_document_model['_revisions'] = revisions_model
replication_document_model['_revs_info'] = [document_revision_status_model]
replication_document_model['cancel'] = True
replication_document_model['checkpoint_interval'] = 0
replication_document_model['connection_timeout'] = 0
replication_document_model['continuous'] = False
replication_document_model['create_target'] = False
replication_document_model['create_target_params'] = replication_create_target_parameters_model
replication_document_model['doc_ids'] = ['testString']
replication_document_model['filter'] = 'testString'
replication_document_model['http_connections'] = 1
replication_document_model['query_params'] = {}
replication_document_model['retries_per_request'] = 0
replication_document_model['selector'] = {}
replication_document_model['since_seq'] = 'testString'
replication_document_model['socket_options'] = 'testString'
replication_document_model['source'] = replication_database_model
replication_document_model['source_proxy'] = 'testString'
replication_document_model['target'] = replication_database_model
replication_document_model['target_proxy'] = 'testString'
replication_document_model['use_checkpoints'] = True
replication_document_model['user_ctx'] = user_context_model
replication_document_model['worker_batch_size'] = 1
replication_document_model['worker_processes'] = 1
replication_document_model['foo'] = 'testString'
# Set up parameter values
doc_id = 'testString'
replication_document = replication_document_model
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"doc_id": doc_id,
"replication_document": replication_document,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.put_replication_document(**req_copy)
def test_put_replication_document_value_error_with_retries(self):
# Enable retries and run test_put_replication_document_value_error.
_service.enable_retries()
self.test_put_replication_document_value_error()
# Disable retries and run test_put_replication_document_value_error.
_service.disable_retries()
self.test_put_replication_document_value_error()
class TestGetSchedulerDocs():
"""
Test Class for get_scheduler_docs
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_scheduler_docs_all_params(self):
"""
get_scheduler_docs()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_scheduler/docs')
mock_response = '{"total_rows": 0, "docs": [{"database": "database", "doc_id": "doc_id", "error_count": 0, "id": "id", "info": {"changes_pending": 0, "checkpointed_source_seq": "checkpointed_source_seq", "doc_write_failures": 0, "docs_read": 0, "docs_written": 0, "error": "error", "missing_revisions_found": 0, "revisions_checked": 0, "source_seq": "source_seq", "through_seq": "through_seq"}, "last_updated": "2019-01-01T12:00:00.000Z", "node": "node", "source": "source", "source_proxy": "source_proxy", "start_time": "2019-01-01T12:00:00.000Z", "state": "initializing", "target": "target", "target_proxy": "target_proxy"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
limit = 0
skip = 0
states = ['initializing']
# Invoke method
response = _service.get_scheduler_docs(
limit=limit,
skip=skip,
states=states,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'limit={}'.format(limit) in query_string
assert 'skip={}'.format(skip) in query_string
assert 'states={}'.format(','.join(states)) in query_string
def test_get_scheduler_docs_all_params_with_retries(self):
# Enable retries and run test_get_scheduler_docs_all_params.
_service.enable_retries()
self.test_get_scheduler_docs_all_params()
# Disable retries and run test_get_scheduler_docs_all_params.
_service.disable_retries()
self.test_get_scheduler_docs_all_params()
@responses.activate
def test_get_scheduler_docs_required_params(self):
"""
test_get_scheduler_docs_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_scheduler/docs')
mock_response = '{"total_rows": 0, "docs": [{"database": "database", "doc_id": "doc_id", "error_count": 0, "id": "id", "info": {"changes_pending": 0, "checkpointed_source_seq": "checkpointed_source_seq", "doc_write_failures": 0, "docs_read": 0, "docs_written": 0, "error": "error", "missing_revisions_found": 0, "revisions_checked": 0, "source_seq": "source_seq", "through_seq": "through_seq"}, "last_updated": "2019-01-01T12:00:00.000Z", "node": "node", "source": "source", "source_proxy": "source_proxy", "start_time": "2019-01-01T12:00:00.000Z", "state": "initializing", "target": "target", "target_proxy": "target_proxy"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Invoke method
response = _service.get_scheduler_docs()
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_scheduler_docs_required_params_with_retries(self):
# Enable retries and run test_get_scheduler_docs_required_params.
_service.enable_retries()
self.test_get_scheduler_docs_required_params()
# Disable retries and run test_get_scheduler_docs_required_params.
_service.disable_retries()
self.test_get_scheduler_docs_required_params()
class TestGetSchedulerDocument():
"""
Test Class for get_scheduler_document
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_scheduler_document_all_params(self):
"""
get_scheduler_document()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_scheduler/docs/_replicator/testString')
mock_response = '{"database": "database", "doc_id": "doc_id", "error_count": 0, "id": "id", "info": {"changes_pending": 0, "checkpointed_source_seq": "checkpointed_source_seq", "doc_write_failures": 0, "docs_read": 0, "docs_written": 0, "error": "error", "missing_revisions_found": 0, "revisions_checked": 0, "source_seq": "source_seq", "through_seq": "through_seq"}, "last_updated": "2019-01-01T12:00:00.000Z", "node": "node", "source": "source", "source_proxy": "source_proxy", "start_time": "2019-01-01T12:00:00.000Z", "state": "initializing", "target": "target", "target_proxy": "target_proxy"}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
doc_id = 'testString'
# Invoke method
response = _service.get_scheduler_document(
doc_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_scheduler_document_all_params_with_retries(self):
# Enable retries and run test_get_scheduler_document_all_params.
_service.enable_retries()
self.test_get_scheduler_document_all_params()
# Disable retries and run test_get_scheduler_document_all_params.
_service.disable_retries()
self.test_get_scheduler_document_all_params()
@responses.activate
def test_get_scheduler_document_value_error(self):
"""
test_get_scheduler_document_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_scheduler/docs/_replicator/testString')
mock_response = '{"database": "database", "doc_id": "doc_id", "error_count": 0, "id": "id", "info": {"changes_pending": 0, "checkpointed_source_seq": "checkpointed_source_seq", "doc_write_failures": 0, "docs_read": 0, "docs_written": 0, "error": "error", "missing_revisions_found": 0, "revisions_checked": 0, "source_seq": "source_seq", "through_seq": "through_seq"}, "last_updated": "2019-01-01T12:00:00.000Z", "node": "node", "source": "source", "source_proxy": "source_proxy", "start_time": "2019-01-01T12:00:00.000Z", "state": "initializing", "target": "target", "target_proxy": "target_proxy"}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
doc_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"doc_id": doc_id,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_scheduler_document(**req_copy)
def test_get_scheduler_document_value_error_with_retries(self):
# Enable retries and run test_get_scheduler_document_value_error.
_service.enable_retries()
self.test_get_scheduler_document_value_error()
# Disable retries and run test_get_scheduler_document_value_error.
_service.disable_retries()
self.test_get_scheduler_document_value_error()
class TestGetSchedulerJobs():
"""
Test Class for get_scheduler_jobs
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_scheduler_jobs_all_params(self):
"""
get_scheduler_jobs()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_scheduler/jobs')
mock_response = '{"total_rows": 0, "jobs": [{"database": "database", "doc_id": "doc_id", "history": [{"reason": "reason", "timestamp": "2019-01-01T12:00:00.000Z", "type": "type"}], "id": "id", "info": {"changes_pending": 0, "checkpointed_source_seq": "checkpointed_source_seq", "doc_write_failures": 0, "docs_read": 0, "docs_written": 0, "error": "error", "missing_revisions_found": 0, "revisions_checked": 0, "source_seq": "source_seq", "through_seq": "through_seq"}, "node": "node", "pid": "pid", "source": "source", "start_time": "2019-01-01T12:00:00.000Z", "target": "target", "user": "user"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
limit = 0
skip = 0
# Invoke method
response = _service.get_scheduler_jobs(
limit=limit,
skip=skip,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'limit={}'.format(limit) in query_string
assert 'skip={}'.format(skip) in query_string
def test_get_scheduler_jobs_all_params_with_retries(self):
# Enable retries and run test_get_scheduler_jobs_all_params.
_service.enable_retries()
self.test_get_scheduler_jobs_all_params()
# Disable retries and run test_get_scheduler_jobs_all_params.
_service.disable_retries()
self.test_get_scheduler_jobs_all_params()
@responses.activate
def test_get_scheduler_jobs_required_params(self):
"""
test_get_scheduler_jobs_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_scheduler/jobs')
mock_response = '{"total_rows": 0, "jobs": [{"database": "database", "doc_id": "doc_id", "history": [{"reason": "reason", "timestamp": "2019-01-01T12:00:00.000Z", "type": "type"}], "id": "id", "info": {"changes_pending": 0, "checkpointed_source_seq": "checkpointed_source_seq", "doc_write_failures": 0, "docs_read": 0, "docs_written": 0, "error": "error", "missing_revisions_found": 0, "revisions_checked": 0, "source_seq": "source_seq", "through_seq": "through_seq"}, "node": "node", "pid": "pid", "source": "source", "start_time": "2019-01-01T12:00:00.000Z", "target": "target", "user": "user"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Invoke method
response = _service.get_scheduler_jobs()
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_scheduler_jobs_required_params_with_retries(self):
# Enable retries and run test_get_scheduler_jobs_required_params.
_service.enable_retries()
self.test_get_scheduler_jobs_required_params()
# Disable retries and run test_get_scheduler_jobs_required_params.
_service.disable_retries()
self.test_get_scheduler_jobs_required_params()
class TestGetSchedulerJob():
"""
Test Class for get_scheduler_job
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_scheduler_job_all_params(self):
"""
get_scheduler_job()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_scheduler/jobs/testString')
mock_response = '{"database": "database", "doc_id": "doc_id", "history": [{"reason": "reason", "timestamp": "2019-01-01T12:00:00.000Z", "type": "type"}], "id": "id", "info": {"changes_pending": 0, "checkpointed_source_seq": "checkpointed_source_seq", "doc_write_failures": 0, "docs_read": 0, "docs_written": 0, "error": "error", "missing_revisions_found": 0, "revisions_checked": 0, "source_seq": "source_seq", "through_seq": "through_seq"}, "node": "node", "pid": "pid", "source": "source", "start_time": "2019-01-01T12:00:00.000Z", "target": "target", "user": "user"}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
job_id = 'testString'
# Invoke method
response = _service.get_scheduler_job(
job_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_scheduler_job_all_params_with_retries(self):
# Enable retries and run test_get_scheduler_job_all_params.
_service.enable_retries()
self.test_get_scheduler_job_all_params()
# Disable retries and run test_get_scheduler_job_all_params.
_service.disable_retries()
self.test_get_scheduler_job_all_params()
@responses.activate
def test_get_scheduler_job_value_error(self):
"""
test_get_scheduler_job_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_scheduler/jobs/testString')
mock_response = '{"database": "database", "doc_id": "doc_id", "history": [{"reason": "reason", "timestamp": "2019-01-01T12:00:00.000Z", "type": "type"}], "id": "id", "info": {"changes_pending": 0, "checkpointed_source_seq": "checkpointed_source_seq", "doc_write_failures": 0, "docs_read": 0, "docs_written": 0, "error": "error", "missing_revisions_found": 0, "revisions_checked": 0, "source_seq": "source_seq", "through_seq": "through_seq"}, "node": "node", "pid": "pid", "source": "source", "start_time": "2019-01-01T12:00:00.000Z", "target": "target", "user": "user"}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
job_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"job_id": job_id,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_scheduler_job(**req_copy)
def test_get_scheduler_job_value_error_with_retries(self):
# Enable retries and run test_get_scheduler_job_value_error.
_service.enable_retries()
self.test_get_scheduler_job_value_error()
# Disable retries and run test_get_scheduler_job_value_error.
_service.disable_retries()
self.test_get_scheduler_job_value_error()
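# The "pass in all but one required param" loop above can be factored into a
# small generator; a minimal sketch (omit_one_required is a hypothetical name,
# not part of this suite):

```python
def omit_one_required(req_params: dict):
    """Yield one copy of req_params per key, with that key replaced by None."""
    for param in req_params:
        yield {key: (val if key != param else None)
               for key, val in req_params.items()}

# Each yielded dict is missing exactly one required value, so passing it to a
# service method should raise ValueError.
variants = list(omit_one_required({"db": "testString", "doc_id": "testString"}))
assert variants == [
    {"db": None, "doc_id": "testString"},
    {"db": "testString", "doc_id": None},
]
```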
# endregion
##############################################################################
# End of Service: Replication
##############################################################################
##############################################################################
# Start of Service: Authentication
##############################################################################
# region
class TestNewInstance():
"""
Test Class for new_instance
"""
def test_new_instance(self):
"""
new_instance()
"""
os.environ['TEST_SERVICE_AUTH_TYPE'] = 'noAuth'
service = CloudantV1.new_instance(
service_name='TEST_SERVICE',
)
assert service is not None
assert isinstance(service, CloudantV1)
def test_new_instance_without_authenticator(self):
"""
new_instance_without_authenticator()
"""
with pytest.raises(ValueError, match='authenticator must be provided'):
service = CloudantV1.new_instance(
)
class TestGetSessionInformation():
"""
Test Class for get_session_information
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_session_information_all_params(self):
"""
get_session_information()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_session')
mock_response = '{"ok": true, "info": {"authenticated": "authenticated", "authentication_db": "authentication_db", "authentication_handlers": ["authentication_handlers"]}, "userCtx": {"db": "db", "name": "name", "roles": ["_reader"]}}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Invoke method
response = _service.get_session_information()
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_session_information_all_params_with_retries(self):
# Enable retries and run test_get_session_information_all_params.
_service.enable_retries()
self.test_get_session_information_all_params()
# Disable retries and run test_get_session_information_all_params.
_service.disable_retries()
self.test_get_session_information_all_params()
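# The preprocess_url helper above returns a plain string for most URLs but a
# compiled regex for URLs with trailing slashes; a standalone sketch of that
# behavior, lifted out of the test classes:

```python
import re
import urllib.parse

def preprocess_url(request_url: str):
    # Unquote first so an already-encoded URL is not double-encoded.
    request_url = urllib.parse.unquote(request_url)
    request_url = urllib.parse.quote(request_url, safe=':/')
    # A trailing slash becomes a pattern matching one or more slashes, so the
    # responses mock matches the URL however the client normalizes it.
    if re.fullmatch('.*/+', request_url) is None:
        return request_url
    return re.compile(request_url.rstrip('/') + '/+')

assert preprocess_url('http://localhost/_session') == 'http://localhost/_session'
pattern = preprocess_url('http://localhost/_session/')
assert isinstance(pattern, re.Pattern)
assert pattern.fullmatch('http://localhost/_session///') is not None
```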
# endregion
##############################################################################
# End of Service: Authentication
##############################################################################
##############################################################################
# Start of Service: Authorization
##############################################################################
# region
class TestNewInstance():
"""
Test Class for new_instance
"""
def test_new_instance(self):
"""
new_instance()
"""
os.environ['TEST_SERVICE_AUTH_TYPE'] = 'noAuth'
service = CloudantV1.new_instance(
service_name='TEST_SERVICE',
)
assert service is not None
assert isinstance(service, CloudantV1)
def test_new_instance_without_authenticator(self):
"""
new_instance_without_authenticator()
"""
with pytest.raises(ValueError, match='authenticator must be provided'):
service = CloudantV1.new_instance(
)
class TestGetSecurity():
"""
Test Class for get_security
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_security_all_params(self):
"""
get_security()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_security')
mock_response = '{"admins": {"names": ["names"], "roles": ["roles"]}, "members": {"names": ["names"], "roles": ["roles"]}, "cloudant": {"mapKey": ["_reader"]}, "couchdb_auth_only": false}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
# Invoke method
response = _service.get_security(
db,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_security_all_params_with_retries(self):
# Enable retries and run test_get_security_all_params.
_service.enable_retries()
self.test_get_security_all_params()
# Disable retries and run test_get_security_all_params.
_service.disable_retries()
self.test_get_security_all_params()
@responses.activate
def test_get_security_value_error(self):
"""
test_get_security_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_security')
mock_response = '{"admins": {"names": ["names"], "roles": ["roles"]}, "members": {"names": ["names"], "roles": ["roles"]}, "cloudant": {"mapKey": ["_reader"]}, "couchdb_auth_only": false}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_security(**req_copy)
def test_get_security_value_error_with_retries(self):
# Enable retries and run test_get_security_value_error.
_service.enable_retries()
self.test_get_security_value_error()
# Disable retries and run test_get_security_value_error.
_service.disable_retries()
self.test_get_security_value_error()
class TestPutSecurity():
"""
Test Class for put_security
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_put_security_all_params(self):
"""
put_security()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_security')
mock_response = '{"ok": true}'
responses.add(responses.PUT,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of a SecurityObject model
security_object_model = {}
security_object_model['names'] = ['testString']
security_object_model['roles'] = ['testString']
# Set up parameter values
db = 'testString'
admins = security_object_model
members = security_object_model
cloudant = {}
couchdb_auth_only = True
# Invoke method
response = _service.put_security(
db,
admins=admins,
members=members,
cloudant=cloudant,
couchdb_auth_only=couchdb_auth_only,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['admins'] == security_object_model
assert req_body['members'] == security_object_model
assert req_body['cloudant'] == {}
assert req_body['couchdb_auth_only'] is True
def test_put_security_all_params_with_retries(self):
# Enable retries and run test_put_security_all_params.
_service.enable_retries()
self.test_put_security_all_params()
# Disable retries and run test_put_security_all_params.
_service.disable_retries()
self.test_put_security_all_params()
@responses.activate
def test_put_security_value_error(self):
"""
test_put_security_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_security')
mock_response = '{"ok": true}'
responses.add(responses.PUT,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of a SecurityObject model
security_object_model = {}
security_object_model['names'] = ['testString']
security_object_model['roles'] = ['testString']
# Set up parameter values
db = 'testString'
admins = security_object_model
members = security_object_model
cloudant = {}
couchdb_auth_only = True
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.put_security(**req_copy)
def test_put_security_value_error_with_retries(self):
# Enable retries and run test_put_security_value_error.
_service.enable_retries()
self.test_put_security_value_error()
# Disable retries and run test_put_security_value_error.
_service.disable_retries()
self.test_put_security_value_error()
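# The body-validation steps above (gzip-decompress, decode, JSON-parse) can be
# exercised on their own; a self-contained round trip mirroring what the
# put_security test checks, using only the stdlib:

```python
import gzip
import json

security_object_model = {'names': ['testString'], 'roles': ['testString']}
payload = {'admins': security_object_model, 'couchdb_auth_only': True}
# The client gzip-compresses JSON request bodies; the test reverses that
# before asserting on individual fields.
compressed = gzip.compress(json.dumps(payload).encode('utf-8'))
req_body = json.loads(gzip.decompress(compressed).decode('utf-8'))
assert req_body['admins'] == security_object_model
assert req_body['couchdb_auth_only'] is True
```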
class TestPostApiKeys():
"""
Test Class for post_api_keys
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_api_keys_all_params(self):
"""
post_api_keys()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_api/v2/api_keys')
mock_response = '{"ok": true, "key": "key", "password": "password"}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=201)
# Invoke method
response = _service.post_api_keys()
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 201
def test_post_api_keys_all_params_with_retries(self):
# Enable retries and run test_post_api_keys_all_params.
_service.enable_retries()
self.test_post_api_keys_all_params()
# Disable retries and run test_post_api_keys_all_params.
_service.disable_retries()
self.test_post_api_keys_all_params()
class TestPutCloudantSecurityConfiguration():
"""
Test Class for put_cloudant_security_configuration
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_put_cloudant_security_configuration_all_params(self):
"""
put_cloudant_security_configuration()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_api/v2/db/testString/_security')
mock_response = '{"ok": true}'
responses.add(responses.PUT,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of a SecurityObject model
security_object_model = {}
security_object_model['names'] = ['testString']
security_object_model['roles'] = ['testString']
# Set up parameter values
db = 'testString'
cloudant = {}
admins = security_object_model
members = security_object_model
couchdb_auth_only = True
# Invoke method
response = _service.put_cloudant_security_configuration(
db,
cloudant,
admins=admins,
members=members,
couchdb_auth_only=couchdb_auth_only,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['cloudant'] == {}
assert req_body['admins'] == security_object_model
assert req_body['members'] == security_object_model
assert req_body['couchdb_auth_only'] is True
def test_put_cloudant_security_configuration_all_params_with_retries(self):
# Enable retries and run test_put_cloudant_security_configuration_all_params.
_service.enable_retries()
self.test_put_cloudant_security_configuration_all_params()
# Disable retries and run test_put_cloudant_security_configuration_all_params.
_service.disable_retries()
self.test_put_cloudant_security_configuration_all_params()
@responses.activate
def test_put_cloudant_security_configuration_value_error(self):
"""
test_put_cloudant_security_configuration_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_api/v2/db/testString/_security')
mock_response = '{"ok": true}'
responses.add(responses.PUT,
url,
body=mock_response,
content_type='application/json',
status=200)
# Construct a dict representation of a SecurityObject model
security_object_model = {}
security_object_model['names'] = ['testString']
security_object_model['roles'] = ['testString']
# Set up parameter values
db = 'testString'
cloudant = {}
admins = security_object_model
members = security_object_model
couchdb_auth_only = True
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"cloudant": cloudant,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.put_cloudant_security_configuration(**req_copy)
def test_put_cloudant_security_configuration_value_error_with_retries(self):
# Enable retries and run test_put_cloudant_security_configuration_value_error.
_service.enable_retries()
self.test_put_cloudant_security_configuration_value_error()
# Disable retries and run test_put_cloudant_security_configuration_value_error.
_service.disable_retries()
self.test_put_cloudant_security_configuration_value_error()
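# Every *_with_retries test above runs its base test twice, once with retries
# enabled and once with them disabled; that pattern can be sketched with a
# stand-in service (the helper and stub names here are hypothetical):

```python
class _StubService:
    """Stand-in exposing the enable/disable-retries interface the tests use."""
    def __init__(self):
        self.retries_enabled = False
    def enable_retries(self):
        self.retries_enabled = True
    def disable_retries(self):
        self.retries_enabled = False

def run_with_and_without_retries(service, test_fn):
    # First pass with retries enabled, second with them disabled, matching
    # the body of every *_with_retries test in this file.
    service.enable_retries()
    test_fn()
    service.disable_retries()
    test_fn()

observed = []
svc = _StubService()
run_with_and_without_retries(svc, lambda: observed.append(svc.retries_enabled))
assert observed == [True, False]
```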
# endregion
##############################################################################
# End of Service: Authorization
##############################################################################
##############################################################################
# Start of Service: CORS
##############################################################################
# region
class TestNewInstance():
"""
Test Class for new_instance
"""
def test_new_instance(self):
"""
new_instance()
"""
os.environ['TEST_SERVICE_AUTH_TYPE'] = 'noAuth'
service = CloudantV1.new_instance(
service_name='TEST_SERVICE',
)
assert service is not None
assert isinstance(service, CloudantV1)
def test_new_instance_without_authenticator(self):
"""
new_instance_without_authenticator()
"""
with pytest.raises(ValueError, match='authenticator must be provided'):
service = CloudantV1.new_instance(
)
class TestGetCorsInformation():
"""
Test Class for get_cors_information
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_cors_information_all_params(self):
"""
get_cors_information()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_api/v2/user/config/cors')
mock_response = '{"allow_credentials": true, "enable_cors": true, "origins": ["origins"]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Invoke method
response = _service.get_cors_information()
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_cors_information_all_params_with_retries(self):
# Enable retries and run test_get_cors_information_all_params.
_service.enable_retries()
self.test_get_cors_information_all_params()
# Disable retries and run test_get_cors_information_all_params.
_service.disable_retries()
self.test_get_cors_information_all_params()
class TestPutCorsConfiguration():
"""
Test Class for put_cors_configuration
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_put_cors_configuration_all_params(self):
"""
put_cors_configuration()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_api/v2/user/config/cors')
mock_response = '{"ok": true}'
responses.add(responses.PUT,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
origins = ['testString']
allow_credentials = True
enable_cors = True
# Invoke method
response = _service.put_cors_configuration(
origins,
allow_credentials=allow_credentials,
enable_cors=enable_cors,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['origins'] == ['testString']
assert req_body['allow_credentials'] is True
assert req_body['enable_cors'] is True
def test_put_cors_configuration_all_params_with_retries(self):
# Enable retries and run test_put_cors_configuration_all_params.
_service.enable_retries()
self.test_put_cors_configuration_all_params()
# Disable retries and run test_put_cors_configuration_all_params.
_service.disable_retries()
self.test_put_cors_configuration_all_params()
@responses.activate
def test_put_cors_configuration_value_error(self):
"""
test_put_cors_configuration_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_api/v2/user/config/cors')
mock_response = '{"ok": true}'
responses.add(responses.PUT,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
origins = ['testString']
allow_credentials = True
enable_cors = True
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"origins": origins,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.put_cors_configuration(**req_copy)
def test_put_cors_configuration_value_error_with_retries(self):
# Enable retries and run test_put_cors_configuration_value_error.
_service.enable_retries()
self.test_put_cors_configuration_value_error()
# Disable retries and run test_put_cors_configuration_value_error.
_service.disable_retries()
self.test_put_cors_configuration_value_error()
# endregion
##############################################################################
# End of Service: CORS
##############################################################################
##############################################################################
# Start of Service: Attachments
##############################################################################
# region
class TestNewInstance():
"""
Test Class for new_instance
"""
def test_new_instance(self):
"""
new_instance()
"""
os.environ['TEST_SERVICE_AUTH_TYPE'] = 'noAuth'
service = CloudantV1.new_instance(
service_name='TEST_SERVICE',
)
assert service is not None
assert isinstance(service, CloudantV1)
def test_new_instance_without_authenticator(self):
"""
new_instance_without_authenticator()
"""
with pytest.raises(ValueError, match='authenticator must be provided'):
service = CloudantV1.new_instance(
)
class TestHeadAttachment():
"""
Test Class for head_attachment
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_head_attachment_all_params(self):
"""
head_attachment()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString/testString')
responses.add(responses.HEAD,
url,
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
attachment_name = 'testString'
if_match = 'testString'
if_none_match = 'testString'
rev = 'testString'
# Invoke method
response = _service.head_attachment(
db,
doc_id,
attachment_name,
if_match=if_match,
if_none_match=if_none_match,
rev=rev,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'rev={}'.format(rev) in query_string
def test_head_attachment_all_params_with_retries(self):
# Enable retries and run test_head_attachment_all_params.
_service.enable_retries()
self.test_head_attachment_all_params()
# Disable retries and run test_head_attachment_all_params.
_service.disable_retries()
self.test_head_attachment_all_params()
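# The query-param checks above split the recorded request URL on '?' and
# unquote it before substring-matching; a minimal standalone version of that
# validation step:

```python
import urllib.parse

recorded_url = 'http://localhost/testString/testString/testString?rev=1-abc&batch=ok'
query_string = recorded_url.split('?', 1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'rev={}'.format('1-abc') in query_string
assert 'batch={}'.format('ok') in query_string
```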
@responses.activate
def test_head_attachment_required_params(self):
"""
test_head_attachment_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString/testString')
responses.add(responses.HEAD,
url,
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
attachment_name = 'testString'
# Invoke method
response = _service.head_attachment(
db,
doc_id,
attachment_name,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_head_attachment_required_params_with_retries(self):
# Enable retries and run test_head_attachment_required_params.
_service.enable_retries()
self.test_head_attachment_required_params()
# Disable retries and run test_head_attachment_required_params.
_service.disable_retries()
self.test_head_attachment_required_params()
@responses.activate
def test_head_attachment_value_error(self):
"""
test_head_attachment_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString/testString')
responses.add(responses.HEAD,
url,
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
attachment_name = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"doc_id": doc_id,
"attachment_name": attachment_name,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.head_attachment(**req_copy)
def test_head_attachment_value_error_with_retries(self):
# Enable retries and run test_head_attachment_value_error.
_service.enable_retries()
self.test_head_attachment_value_error()
# Disable retries and run test_head_attachment_value_error.
_service.disable_retries()
self.test_head_attachment_value_error()
class TestDeleteAttachment():
"""
Test Class for delete_attachment
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_delete_attachment_all_params(self):
"""
delete_attachment()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.DELETE,
url,
body=mock_response,
content_type='application/json',
status=201)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
attachment_name = 'testString'
if_match = 'testString'
rev = 'testString'
batch = 'ok'
# Invoke method
response = _service.delete_attachment(
db,
doc_id,
attachment_name,
if_match=if_match,
rev=rev,
batch=batch,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 201
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'rev={}'.format(rev) in query_string
assert 'batch={}'.format(batch) in query_string
def test_delete_attachment_all_params_with_retries(self):
# Enable retries and run test_delete_attachment_all_params.
_service.enable_retries()
self.test_delete_attachment_all_params()
# Disable retries and run test_delete_attachment_all_params.
_service.disable_retries()
self.test_delete_attachment_all_params()
@responses.activate
def test_delete_attachment_required_params(self):
"""
test_delete_attachment_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.DELETE,
url,
body=mock_response,
content_type='application/json',
status=201)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
attachment_name = 'testString'
# Invoke method
response = _service.delete_attachment(
db,
doc_id,
attachment_name,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 201
def test_delete_attachment_required_params_with_retries(self):
# Enable retries and run test_delete_attachment_required_params.
_service.enable_retries()
self.test_delete_attachment_required_params()
# Disable retries and run test_delete_attachment_required_params.
_service.disable_retries()
self.test_delete_attachment_required_params()
@responses.activate
def test_delete_attachment_value_error(self):
"""
test_delete_attachment_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.DELETE,
url,
body=mock_response,
content_type='application/json',
status=201)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
attachment_name = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"doc_id": doc_id,
"attachment_name": attachment_name,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.delete_attachment(**req_copy)
def test_delete_attachment_value_error_with_retries(self):
# Enable retries and run test_delete_attachment_value_error.
_service.enable_retries()
self.test_delete_attachment_value_error()
# Disable retries and run test_delete_attachment_value_error.
_service.disable_retries()
self.test_delete_attachment_value_error()
class TestGetAttachment():
"""
Test Class for get_attachment
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_attachment_all_params(self):
"""
get_attachment()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString/testString')
mock_response = 'This is a mock binary response.'
responses.add(responses.GET,
url,
body=mock_response,
content_type='*/*',
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
attachment_name = 'testString'
if_match = 'testString'
if_none_match = 'testString'
range = 'testString'
rev = 'testString'
# Invoke method
response = _service.get_attachment(
db,
doc_id,
attachment_name,
if_match=if_match,
if_none_match=if_none_match,
range=range,
rev=rev,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?',1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'rev={}'.format(rev) in query_string
def test_get_attachment_all_params_with_retries(self):
# Enable retries and run test_get_attachment_all_params.
_service.enable_retries()
self.test_get_attachment_all_params()
# Disable retries and run test_get_attachment_all_params.
_service.disable_retries()
self.test_get_attachment_all_params()
@responses.activate
def test_get_attachment_required_params(self):
"""
test_get_attachment_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString/testString')
mock_response = 'This is a mock binary response.'
responses.add(responses.GET,
url,
body=mock_response,
content_type='*/*',
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
attachment_name = 'testString'
# Invoke method
response = _service.get_attachment(
db,
doc_id,
attachment_name,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_attachment_required_params_with_retries(self):
# Enable retries and run test_get_attachment_required_params.
_service.enable_retries()
self.test_get_attachment_required_params()
# Disable retries and run test_get_attachment_required_params.
_service.disable_retries()
self.test_get_attachment_required_params()
@responses.activate
def test_get_attachment_value_error(self):
"""
test_get_attachment_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString/testString')
mock_response = 'This is a mock binary response.'
responses.add(responses.GET,
url,
body=mock_response,
content_type='*/*',
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
attachment_name = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"doc_id": doc_id,
"attachment_name": attachment_name,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_attachment(**req_copy)
def test_get_attachment_value_error_with_retries(self):
# Enable retries and run test_get_attachment_value_error.
_service.enable_retries()
self.test_get_attachment_value_error()
# Disable retries and run test_get_attachment_value_error.
_service.disable_retries()
self.test_get_attachment_value_error()
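# The attachment tests below build their upload payload with
# io.BytesIO(...).getvalue(); a short note in code on what that produces:

```python
import io

# put_attachment sends the raw bytes as the request body;
# BytesIO(...).getvalue() simply yields a bytes copy of the buffer.
attachment = io.BytesIO(b'This is a mock file.').getvalue()
assert attachment == b'This is a mock file.'
assert isinstance(attachment, bytes)
```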
class TestPutAttachment():
"""
Test Class for put_attachment
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_put_attachment_all_params(self):
"""
put_attachment()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.PUT,
url,
body=mock_response,
content_type='application/json',
status=201)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
attachment_name = 'testString'
attachment = io.BytesIO(b'This is a mock file.').getvalue()
content_type = 'application/octet-stream'
if_match = 'testString'
rev = 'testString'
# Invoke method
response = _service.put_attachment(
db,
doc_id,
attachment_name,
attachment,
content_type,
if_match=if_match,
rev=rev,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 201
# Validate query params
query_string = responses.calls[0].request.url.split('?', 1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'rev={}'.format(rev) in query_string
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
def test_put_attachment_all_params_with_retries(self):
# Enable retries and run test_put_attachment_all_params.
_service.enable_retries()
self.test_put_attachment_all_params()
# Disable retries and run test_put_attachment_all_params.
_service.disable_retries()
self.test_put_attachment_all_params()
@responses.activate
def test_put_attachment_required_params(self):
"""
test_put_attachment_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.PUT,
url,
body=mock_response,
content_type='application/json',
status=201)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
attachment_name = 'testString'
attachment = io.BytesIO(b'This is a mock file.').getvalue()
content_type = 'application/octet-stream'
# Invoke method
response = _service.put_attachment(
db,
doc_id,
attachment_name,
attachment,
content_type,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 201
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
def test_put_attachment_required_params_with_retries(self):
# Enable retries and run test_put_attachment_required_params.
_service.enable_retries()
self.test_put_attachment_required_params()
# Disable retries and run test_put_attachment_required_params.
_service.disable_retries()
self.test_put_attachment_required_params()
@responses.activate
def test_put_attachment_value_error(self):
"""
test_put_attachment_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/testString/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.PUT,
url,
body=mock_response,
content_type='application/json',
status=201)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
attachment_name = 'testString'
attachment = io.BytesIO(b'This is a mock file.').getvalue()
content_type = 'application/octet-stream'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"doc_id": doc_id,
"attachment_name": attachment_name,
"attachment": attachment,
"content_type": content_type,
}
for param in req_param_dict:
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.put_attachment(**req_copy)
def test_put_attachment_value_error_with_retries(self):
# Enable retries and run test_put_attachment_value_error.
_service.enable_retries()
self.test_put_attachment_value_error()
# Disable retries and run test_put_attachment_value_error.
_service.disable_retries()
self.test_put_attachment_value_error()
# endregion
##############################################################################
# End of Service: Attachments
##############################################################################
##############################################################################
# Start of Service: LocalDocuments
##############################################################################
# region
class TestNewInstance():
"""
Test Class for new_instance
"""
def test_new_instance(self):
"""
new_instance()
"""
os.environ['TEST_SERVICE_AUTH_TYPE'] = 'noAuth'
service = CloudantV1.new_instance(
service_name='TEST_SERVICE',
)
assert service is not None
assert isinstance(service, CloudantV1)
def test_new_instance_without_authenticator(self):
"""
new_instance_without_authenticator()
"""
with pytest.raises(ValueError, match='authenticator must be provided'):
service = CloudantV1.new_instance(
)
class TestHeadLocalDocument():
"""
Test Class for head_local_document
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_head_local_document_all_params(self):
"""
head_local_document()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_local/testString')
responses.add(responses.HEAD,
url,
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
if_none_match = 'testString'
# Invoke method
response = _service.head_local_document(
db,
doc_id,
if_none_match=if_none_match,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_head_local_document_all_params_with_retries(self):
# Enable retries and run test_head_local_document_all_params.
_service.enable_retries()
self.test_head_local_document_all_params()
# Disable retries and run test_head_local_document_all_params.
_service.disable_retries()
self.test_head_local_document_all_params()
@responses.activate
def test_head_local_document_required_params(self):
"""
test_head_local_document_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_local/testString')
responses.add(responses.HEAD,
url,
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
# Invoke method
response = _service.head_local_document(
db,
doc_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_head_local_document_required_params_with_retries(self):
# Enable retries and run test_head_local_document_required_params.
_service.enable_retries()
self.test_head_local_document_required_params()
# Disable retries and run test_head_local_document_required_params.
_service.disable_retries()
self.test_head_local_document_required_params()
@responses.activate
def test_head_local_document_value_error(self):
"""
test_head_local_document_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_local/testString')
responses.add(responses.HEAD,
url,
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"doc_id": doc_id,
}
for param in req_param_dict:
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.head_local_document(**req_copy)
def test_head_local_document_value_error_with_retries(self):
# Enable retries and run test_head_local_document_value_error.
_service.enable_retries()
self.test_head_local_document_value_error()
# Disable retries and run test_head_local_document_value_error.
_service.disable_retries()
self.test_head_local_document_value_error()
class TestDeleteLocalDocument():
"""
Test Class for delete_local_document
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_delete_local_document_all_params(self):
"""
delete_local_document()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_local/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.DELETE,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
batch = 'ok'
# Invoke method
response = _service.delete_local_document(
db,
doc_id,
batch=batch,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?', 1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'batch={}'.format(batch) in query_string
def test_delete_local_document_all_params_with_retries(self):
# Enable retries and run test_delete_local_document_all_params.
_service.enable_retries()
self.test_delete_local_document_all_params()
# Disable retries and run test_delete_local_document_all_params.
_service.disable_retries()
self.test_delete_local_document_all_params()
@responses.activate
def test_delete_local_document_required_params(self):
"""
test_delete_local_document_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_local/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.DELETE,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
# Invoke method
response = _service.delete_local_document(
db,
doc_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_delete_local_document_required_params_with_retries(self):
# Enable retries and run test_delete_local_document_required_params.
_service.enable_retries()
self.test_delete_local_document_required_params()
# Disable retries and run test_delete_local_document_required_params.
_service.disable_retries()
self.test_delete_local_document_required_params()
@responses.activate
def test_delete_local_document_value_error(self):
"""
test_delete_local_document_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_local/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.DELETE,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"doc_id": doc_id,
}
for param in req_param_dict:
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.delete_local_document(**req_copy)
def test_delete_local_document_value_error_with_retries(self):
# Enable retries and run test_delete_local_document_value_error.
_service.enable_retries()
self.test_delete_local_document_value_error()
# Disable retries and run test_delete_local_document_value_error.
_service.disable_retries()
self.test_delete_local_document_value_error()
class TestGetLocalDocument():
"""
Test Class for get_local_document
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_local_document_all_params(self):
"""
get_local_document()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_local/testString')
mock_response = '{"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
accept = 'application/json'
if_none_match = 'testString'
attachments = False
att_encoding_info = False
local_seq = False
# Invoke method
response = _service.get_local_document(
db,
doc_id,
accept=accept,
if_none_match=if_none_match,
attachments=attachments,
att_encoding_info=att_encoding_info,
local_seq=local_seq,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# Validate query params
query_string = responses.calls[0].request.url.split('?', 1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'attachments={}'.format('true' if attachments else 'false') in query_string
assert 'att_encoding_info={}'.format('true' if att_encoding_info else 'false') in query_string
assert 'local_seq={}'.format('true' if local_seq else 'false') in query_string
def test_get_local_document_all_params_with_retries(self):
# Enable retries and run test_get_local_document_all_params.
_service.enable_retries()
self.test_get_local_document_all_params()
# Disable retries and run test_get_local_document_all_params.
_service.disable_retries()
self.test_get_local_document_all_params()
@responses.activate
def test_get_local_document_required_params(self):
"""
test_get_local_document_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_local/testString')
mock_response = '{"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
# Invoke method
response = _service.get_local_document(
db,
doc_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_local_document_required_params_with_retries(self):
# Enable retries and run test_get_local_document_required_params.
_service.enable_retries()
self.test_get_local_document_required_params()
# Disable retries and run test_get_local_document_required_params.
_service.disable_retries()
self.test_get_local_document_required_params()
@responses.activate
def test_get_local_document_value_error(self):
"""
test_get_local_document_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_local/testString')
mock_response = '{"_attachments": {"mapKey": {"content_type": "content_type", "data": "VGhpcyBpcyBhbiBlbmNvZGVkIGJ5dGUgYXJyYXku", "digest": "digest", "encoded_length": 0, "encoding": "encoding", "follows": false, "length": 0, "revpos": 1, "stub": true}}, "_conflicts": ["conflicts"], "_deleted": false, "_deleted_conflicts": ["deleted_conflicts"], "_id": "id", "_local_seq": "local_seq", "_rev": "rev", "_revisions": {"ids": ["ids"], "start": 1}, "_revs_info": [{"rev": "rev", "status": "available"}]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"doc_id": doc_id,
}
for param in req_param_dict:
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_local_document(**req_copy)
def test_get_local_document_value_error_with_retries(self):
# Enable retries and run test_get_local_document_value_error.
_service.enable_retries()
self.test_get_local_document_value_error()
# Disable retries and run test_get_local_document_value_error.
_service.disable_retries()
self.test_get_local_document_value_error()
class TestPutLocalDocument():
"""
Test Class for put_local_document
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_put_local_document_all_params(self):
"""
put_local_document()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_local/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.PUT,
url,
body=mock_response,
content_type='application/json',
status=201)
# Construct a dict representation of a Attachment model
attachment_model = {}
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
# Construct a dict representation of a Revisions model
revisions_model = {}
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
# Construct a dict representation of a DocumentRevisionStatus model
document_revision_status_model = {}
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
# Construct a dict representation of a Document model
document_model = {}
document_model['_attachments'] = {}
document_model['_conflicts'] = ['testString']
document_model['_deleted'] = True
document_model['_deleted_conflicts'] = ['testString']
document_model['_id'] = 'exampleid'
document_model['_local_seq'] = 'testString'
document_model['_rev'] = 'testString'
document_model['_revisions'] = revisions_model
document_model['_revs_info'] = [document_revision_status_model]
document_model['brand'] = 'Foo'
document_model['colours'] = '["red","green","black","blue"]'
document_model['description'] = 'Slim Colourful Design Electronic Cooking Appliance for ...'
document_model['image'] = 'assets/img/0gmsnghhew.jpg'
document_model['keywords'] = '["Foo","Scales","Weight","Digital","Kitchen"]'
document_model['name'] = 'Digital Kitchen Scales'
document_model['price'] = '14.99'
document_model['productid'] = '1000042'
document_model['taxonomy'] = '["Home","Kitchen","Small Appliances"]'
document_model['type'] = 'product'
# Set up parameter values
db = 'testString'
doc_id = 'testString'
document = document_model
content_type = 'application/json'
batch = 'ok'
# Invoke method
response = _service.put_local_document(
db,
doc_id,
document,
content_type=content_type,
batch=batch,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 201
# Validate query params
query_string = responses.calls[0].request.url.split('?', 1)[1]
query_string = urllib.parse.unquote_plus(query_string)
assert 'batch={}'.format(batch) in query_string
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
def test_put_local_document_all_params_with_retries(self):
# Enable retries and run test_put_local_document_all_params.
_service.enable_retries()
self.test_put_local_document_all_params()
# Disable retries and run test_put_local_document_all_params.
_service.disable_retries()
self.test_put_local_document_all_params()
@responses.activate
def test_put_local_document_required_params(self):
"""
test_put_local_document_required_params()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_local/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.PUT,
url,
body=mock_response,
content_type='application/json',
status=201)
# Construct a dict representation of a Attachment model
attachment_model = {}
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
# Construct a dict representation of a Revisions model
revisions_model = {}
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
# Construct a dict representation of a DocumentRevisionStatus model
document_revision_status_model = {}
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
# Construct a dict representation of a Document model
document_model = {}
document_model['_attachments'] = {}
document_model['_conflicts'] = ['testString']
document_model['_deleted'] = True
document_model['_deleted_conflicts'] = ['testString']
document_model['_id'] = 'exampleid'
document_model['_local_seq'] = 'testString'
document_model['_rev'] = 'testString'
document_model['_revisions'] = revisions_model
document_model['_revs_info'] = [document_revision_status_model]
document_model['brand'] = 'Foo'
document_model['colours'] = '["red","green","black","blue"]'
document_model['description'] = 'Slim Colourful Design Electronic Cooking Appliance for ...'
document_model['image'] = 'assets/img/0gmsnghhew.jpg'
document_model['keywords'] = '["Foo","Scales","Weight","Digital","Kitchen"]'
document_model['name'] = 'Digital Kitchen Scales'
document_model['price'] = '14.99'
document_model['productid'] = '1000042'
document_model['taxonomy'] = '["Home","Kitchen","Small Appliances"]'
document_model['type'] = 'product'
# Set up parameter values
db = 'testString'
doc_id = 'testString'
document = document_model
# Invoke method
response = _service.put_local_document(
db,
doc_id,
document,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 201
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
def test_put_local_document_required_params_with_retries(self):
# Enable retries and run test_put_local_document_required_params.
_service.enable_retries()
self.test_put_local_document_required_params()
# Disable retries and run test_put_local_document_required_params.
_service.disable_retries()
self.test_put_local_document_required_params()
@responses.activate
def test_put_local_document_value_error(self):
"""
test_put_local_document_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_local/testString')
mock_response = '{"id": "id", "rev": "rev", "ok": true, "caused_by": "caused_by", "error": "error", "reason": "reason"}'
responses.add(responses.PUT,
url,
body=mock_response,
content_type='application/json',
status=201)
# Construct a dict representation of a Attachment model
attachment_model = {}
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
# Construct a dict representation of a Revisions model
revisions_model = {}
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
# Construct a dict representation of a DocumentRevisionStatus model
document_revision_status_model = {}
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
# Construct a dict representation of a Document model
document_model = {}
document_model['_attachments'] = {}
document_model['_conflicts'] = ['testString']
document_model['_deleted'] = True
document_model['_deleted_conflicts'] = ['testString']
document_model['_id'] = 'exampleid'
document_model['_local_seq'] = 'testString'
document_model['_rev'] = 'testString'
document_model['_revisions'] = revisions_model
document_model['_revs_info'] = [document_revision_status_model]
document_model['brand'] = 'Foo'
document_model['colours'] = '["red","green","black","blue"]'
document_model['description'] = 'Slim Colourful Design Electronic Cooking Appliance for ...'
document_model['image'] = 'assets/img/0gmsnghhew.jpg'
document_model['keywords'] = '["Foo","Scales","Weight","Digital","Kitchen"]'
document_model['name'] = 'Digital Kitchen Scales'
document_model['price'] = '14.99'
document_model['productid'] = '1000042'
document_model['taxonomy'] = '["Home","Kitchen","Small Appliances"]'
document_model['type'] = 'product'
# Set up parameter values
db = 'testString'
doc_id = 'testString'
document = document_model
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"doc_id": doc_id,
"document": document,
}
for param in req_param_dict:
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.put_local_document(**req_copy)
def test_put_local_document_value_error_with_retries(self):
# Enable retries and run test_put_local_document_value_error.
_service.enable_retries()
self.test_put_local_document_value_error()
# Disable retries and run test_put_local_document_value_error.
_service.disable_retries()
self.test_put_local_document_value_error()
# endregion
##############################################################################
# End of Service: LocalDocuments
##############################################################################
##############################################################################
# Start of Service: DatabaseDetails
##############################################################################
# region
class TestNewInstance():
"""
Test Class for new_instance
"""
def test_new_instance(self):
"""
new_instance()
"""
os.environ['TEST_SERVICE_AUTH_TYPE'] = 'noAuth'
service = CloudantV1.new_instance(
service_name='TEST_SERVICE',
)
assert service is not None
assert isinstance(service, CloudantV1)
def test_new_instance_without_authenticator(self):
"""
new_instance_without_authenticator()
"""
with pytest.raises(ValueError, match='authenticator must be provided'):
service = CloudantV1.new_instance(
)
class TestPostRevsDiff():
"""
Test Class for post_revs_diff
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_revs_diff_all_params(self):
"""
post_revs_diff()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_revs_diff')
mock_response = '{"mapKey": {"missing": ["missing"], "possible_ancestors": ["possible_ancestors"]}}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
document_revisions = {}
# Invoke method
response = _service.post_revs_diff(
db,
document_revisions,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body == document_revisions
def test_post_revs_diff_all_params_with_retries(self):
# Enable retries and run test_post_revs_diff_all_params.
_service.enable_retries()
self.test_post_revs_diff_all_params()
# Disable retries and run test_post_revs_diff_all_params.
_service.disable_retries()
self.test_post_revs_diff_all_params()
@responses.activate
def test_post_revs_diff_value_error(self):
"""
test_post_revs_diff_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_revs_diff')
mock_response = '{"mapKey": {"missing": ["missing"], "possible_ancestors": ["possible_ancestors"]}}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
document_revisions = {}
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"document_revisions": document_revisions,
}
for param in req_param_dict:
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_revs_diff(**req_copy)
def test_post_revs_diff_value_error_with_retries(self):
# Enable retries and run test_post_revs_diff_value_error.
_service.enable_retries()
self.test_post_revs_diff_value_error()
# Disable retries and run test_post_revs_diff_value_error.
_service.disable_retries()
self.test_post_revs_diff_value_error()
class TestGetShardsInformation():
"""
Test Class for get_shards_information
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_shards_information_all_params(self):
"""
get_shards_information()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_shards')
mock_response = '{"shards": {"mapKey": ["inner"]}}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
# Invoke method
response = _service.get_shards_information(
db,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_shards_information_all_params_with_retries(self):
# Enable retries and run test_get_shards_information_all_params.
_service.enable_retries()
self.test_get_shards_information_all_params()
# Disable retries and run test_get_shards_information_all_params.
_service.disable_retries()
self.test_get_shards_information_all_params()
@responses.activate
def test_get_shards_information_value_error(self):
"""
test_get_shards_information_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_shards')
mock_response = '{"shards": {"mapKey": ["inner"]}}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_shards_information(**req_copy)
def test_get_shards_information_value_error_with_retries(self):
# Enable retries and run test_get_shards_information_value_error.
_service.enable_retries()
self.test_get_shards_information_value_error()
# Disable retries and run test_get_shards_information_value_error.
_service.disable_retries()
self.test_get_shards_information_value_error()
class TestGetDocumentShardsInfo():
"""
Test Class for get_document_shards_info
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_document_shards_info_all_params(self):
"""
get_document_shards_info()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_shards/testString')
mock_response = '{"nodes": ["nodes"], "range": "range"}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
# Invoke method
response = _service.get_document_shards_info(
db,
doc_id,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_document_shards_info_all_params_with_retries(self):
# Enable retries and run test_get_document_shards_info_all_params.
_service.enable_retries()
self.test_get_document_shards_info_all_params()
# Disable retries and run test_get_document_shards_info_all_params.
_service.disable_retries()
self.test_get_document_shards_info_all_params()
@responses.activate
def test_get_document_shards_info_value_error(self):
"""
test_get_document_shards_info_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/testString/_shards/testString')
mock_response = '{"nodes": ["nodes"], "range": "range"}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
db = 'testString'
doc_id = 'testString'
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"db": db,
"doc_id": doc_id,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.get_document_shards_info(**req_copy)
def test_get_document_shards_info_value_error_with_retries(self):
# Enable retries and run test_get_document_shards_info_value_error.
_service.enable_retries()
self.test_get_document_shards_info_value_error()
# Disable retries and run test_get_document_shards_info_value_error.
_service.disable_retries()
self.test_get_document_shards_info_value_error()
# endregion
##############################################################################
# End of Service: DatabaseDetails
##############################################################################
##############################################################################
# Start of Service: Monitoring
##############################################################################
# region
class TestNewInstance():
"""
Test Class for new_instance
"""
def test_new_instance(self):
"""
new_instance()
"""
os.environ['TEST_SERVICE_AUTH_TYPE'] = 'noAuth'
service = CloudantV1.new_instance(
service_name='TEST_SERVICE',
)
assert service is not None
assert isinstance(service, CloudantV1)
def test_new_instance_without_authenticator(self):
"""
new_instance_without_authenticator()
"""
with pytest.raises(ValueError, match='authenticator must be provided'):
service = CloudantV1.new_instance()
class TestHeadUpInformation():
"""
Test Class for head_up_information
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_head_up_information_all_params(self):
"""
head_up_information()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_up')
responses.add(responses.HEAD,
url,
status=200)
# Invoke method
response = _service.head_up_information()
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_head_up_information_all_params_with_retries(self):
# Enable retries and run test_head_up_information_all_params.
_service.enable_retries()
self.test_head_up_information_all_params()
# Disable retries and run test_head_up_information_all_params.
_service.disable_retries()
self.test_head_up_information_all_params()
class TestGetActiveTasks():
"""
Test Class for get_active_tasks
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_active_tasks_all_params(self):
"""
get_active_tasks()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_active_tasks')
mock_response = '[{"changes_done": 0, "database": "database", "node": "node", "pid": "pid", "progress": 0, "started_on": 0, "status": "status", "task": "task", "total_changes": 0, "type": "type", "updated_on": 0}]'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Invoke method
response = _service.get_active_tasks()
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_active_tasks_all_params_with_retries(self):
# Enable retries and run test_get_active_tasks_all_params.
_service.enable_retries()
self.test_get_active_tasks_all_params()
# Disable retries and run test_get_active_tasks_all_params.
_service.disable_retries()
self.test_get_active_tasks_all_params()
class TestGetUpInformation():
"""
Test Class for get_up_information
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_up_information_all_params(self):
"""
get_up_information()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_up')
mock_response = '{"seeds": {"anyKey": "anyValue"}, "status": "maintenance_mode"}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Invoke method
response = _service.get_up_information()
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_up_information_all_params_with_retries(self):
# Enable retries and run test_get_up_information_all_params.
_service.enable_retries()
self.test_get_up_information_all_params()
# Disable retries and run test_get_up_information_all_params.
_service.disable_retries()
self.test_get_up_information_all_params()
class TestGetActivityTrackerEvents():
"""
Test Class for get_activity_tracker_events
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_activity_tracker_events_all_params(self):
"""
get_activity_tracker_events()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_api/v2/user/activity_tracker/events')
mock_response = '{"types": ["management"]}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Invoke method
response = _service.get_activity_tracker_events()
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_activity_tracker_events_all_params_with_retries(self):
# Enable retries and run test_get_activity_tracker_events_all_params.
_service.enable_retries()
self.test_get_activity_tracker_events_all_params()
# Disable retries and run test_get_activity_tracker_events_all_params.
_service.disable_retries()
self.test_get_activity_tracker_events_all_params()
class TestPostActivityTrackerEvents():
"""
Test Class for post_activity_tracker_events
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_post_activity_tracker_events_all_params(self):
"""
post_activity_tracker_events()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_api/v2/user/activity_tracker/events')
mock_response = '{"ok": true}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
types = ['management']
# Invoke method
response = _service.post_activity_tracker_events(
types,
headers={}
)
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
# decompress gzip compressed request body
responses.calls[0].request.body = gzip.decompress(responses.calls[0].request.body)
# Validate body params
req_body = json.loads(str(responses.calls[0].request.body, 'utf-8'))
assert req_body['types'] == ['management']
def test_post_activity_tracker_events_all_params_with_retries(self):
# Enable retries and run test_post_activity_tracker_events_all_params.
_service.enable_retries()
self.test_post_activity_tracker_events_all_params()
# Disable retries and run test_post_activity_tracker_events_all_params.
_service.disable_retries()
self.test_post_activity_tracker_events_all_params()
@responses.activate
def test_post_activity_tracker_events_value_error(self):
"""
test_post_activity_tracker_events_value_error()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_api/v2/user/activity_tracker/events')
mock_response = '{"ok": true}'
responses.add(responses.POST,
url,
body=mock_response,
content_type='application/json',
status=200)
# Set up parameter values
types = ['management']
# Pass in all but one required param and check for a ValueError
req_param_dict = {
"types": types,
}
for param in req_param_dict.keys():
req_copy = {key: val if key != param else None for (key, val) in req_param_dict.items()}
with pytest.raises(ValueError):
_service.post_activity_tracker_events(**req_copy)
def test_post_activity_tracker_events_value_error_with_retries(self):
# Enable retries and run test_post_activity_tracker_events_value_error.
_service.enable_retries()
self.test_post_activity_tracker_events_value_error()
# Disable retries and run test_post_activity_tracker_events_value_error.
_service.disable_retries()
self.test_post_activity_tracker_events_value_error()
class TestGetCurrentThroughputInformation():
"""
Test Class for get_current_throughput_information
"""
def preprocess_url(self, request_url: str):
"""
Preprocess the request URL to ensure the mock response will be found.
"""
request_url = urllib.parse.unquote(request_url) # don't double-encode if already encoded
request_url = urllib.parse.quote(request_url, safe=':/')
if re.fullmatch('.*/+', request_url) is None:
return request_url
else:
return re.compile(request_url.rstrip('/') + '/+')
@responses.activate
def test_get_current_throughput_information_all_params(self):
"""
get_current_throughput_information()
"""
# Set up mock
url = self.preprocess_url(_base_url + '/_api/v2/user/current/throughput')
mock_response = '{"throughput": {"query": 0, "read": 0, "write": 0}}'
responses.add(responses.GET,
url,
body=mock_response,
content_type='application/json',
status=200)
# Invoke method
response = _service.get_current_throughput_information()
# Check for correct operation
assert len(responses.calls) == 1
assert response.status_code == 200
def test_get_current_throughput_information_all_params_with_retries(self):
# Enable retries and run test_get_current_throughput_information_all_params.
_service.enable_retries()
self.test_get_current_throughput_information_all_params()
# Disable retries and run test_get_current_throughput_information_all_params.
_service.disable_retries()
self.test_get_current_throughput_information_all_params()
# endregion
##############################################################################
# End of Service: Monitoring
##############################################################################
##############################################################################
# Start of Model Tests
##############################################################################
# region
class TestModel_ActiveTask():
"""
Test Class for ActiveTask
"""
def test_active_task_serialization(self):
"""
Test serialization/deserialization for ActiveTask
"""
# Construct a json representation of an ActiveTask model
active_task_model_json = {}
active_task_model_json['changes_done'] = 0
active_task_model_json['database'] = 'testString'
active_task_model_json['node'] = 'testString'
active_task_model_json['pid'] = 'testString'
active_task_model_json['progress'] = 0
active_task_model_json['started_on'] = 0
active_task_model_json['status'] = 'testString'
active_task_model_json['task'] = 'testString'
active_task_model_json['total_changes'] = 0
active_task_model_json['type'] = 'testString'
active_task_model_json['updated_on'] = 0
# Construct a model instance of ActiveTask by calling from_dict on the json representation
active_task_model = ActiveTask.from_dict(active_task_model_json)
assert active_task_model is not None
# Construct a second model instance of ActiveTask from the dict form of the first
active_task_model_dict = ActiveTask.from_dict(active_task_model_json).__dict__
active_task_model2 = ActiveTask(**active_task_model_dict)
# Verify the model instances are equivalent
assert active_task_model == active_task_model2
# Convert model instance back to dict and verify no loss of data
active_task_model_json2 = active_task_model.to_dict()
assert active_task_model_json2 == active_task_model_json
class TestModel_ActivityTrackerEvents():
"""
Test Class for ActivityTrackerEvents
"""
def test_activity_tracker_events_serialization(self):
"""
Test serialization/deserialization for ActivityTrackerEvents
"""
# Construct a json representation of an ActivityTrackerEvents model
activity_tracker_events_model_json = {}
activity_tracker_events_model_json['types'] = ['management']
# Construct a model instance of ActivityTrackerEvents by calling from_dict on the json representation
activity_tracker_events_model = ActivityTrackerEvents.from_dict(activity_tracker_events_model_json)
assert activity_tracker_events_model is not None
# Construct a second model instance of ActivityTrackerEvents from the dict form of the first
activity_tracker_events_model_dict = ActivityTrackerEvents.from_dict(activity_tracker_events_model_json).__dict__
activity_tracker_events_model2 = ActivityTrackerEvents(**activity_tracker_events_model_dict)
# Verify the model instances are equivalent
assert activity_tracker_events_model == activity_tracker_events_model2
# Convert model instance back to dict and verify no loss of data
activity_tracker_events_model_json2 = activity_tracker_events_model.to_dict()
assert activity_tracker_events_model_json2 == activity_tracker_events_model_json
class TestModel_AllDocsQueriesResult():
"""
Test Class for AllDocsQueriesResult
"""
def test_all_docs_queries_result_serialization(self):
"""
Test serialization/deserialization for AllDocsQueriesResult
"""
# Construct dict forms of any model objects needed in order to build this model.
attachment_model = {} # Attachment
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
revisions_model = {} # Revisions
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
document_revision_status_model = {} # DocumentRevisionStatus
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
document_model = {} # Document
document_model['_attachments'] = {}
document_model['_conflicts'] = ['testString']
document_model['_deleted'] = True
document_model['_deleted_conflicts'] = ['testString']
document_model['_id'] = 'testString'
document_model['_local_seq'] = 'testString'
document_model['_rev'] = 'testString'
document_model['_revisions'] = revisions_model
document_model['_revs_info'] = [document_revision_status_model]
document_model['foo'] = 'testString'
docs_result_row_value_model = {} # DocsResultRowValue
docs_result_row_value_model['rev'] = 'testString'
docs_result_row_model = {} # DocsResultRow
docs_result_row_model['caused_by'] = 'testString'
docs_result_row_model['error'] = 'testString'
docs_result_row_model['reason'] = 'testString'
docs_result_row_model['doc'] = document_model
docs_result_row_model['id'] = 'testString'
docs_result_row_model['key'] = 'testString'
docs_result_row_model['value'] = docs_result_row_value_model
all_docs_result_model = {} # AllDocsResult
all_docs_result_model['total_rows'] = 0
all_docs_result_model['rows'] = [docs_result_row_model]
all_docs_result_model['update_seq'] = 'testString'
# Construct a json representation of an AllDocsQueriesResult model
all_docs_queries_result_model_json = {}
all_docs_queries_result_model_json['results'] = [all_docs_result_model]
# Construct a model instance of AllDocsQueriesResult by calling from_dict on the json representation
all_docs_queries_result_model = AllDocsQueriesResult.from_dict(all_docs_queries_result_model_json)
assert all_docs_queries_result_model is not None
# Construct a second model instance of AllDocsQueriesResult from the dict form of the first
all_docs_queries_result_model_dict = AllDocsQueriesResult.from_dict(all_docs_queries_result_model_json).__dict__
all_docs_queries_result_model2 = AllDocsQueriesResult(**all_docs_queries_result_model_dict)
# Verify the model instances are equivalent
assert all_docs_queries_result_model == all_docs_queries_result_model2
# Convert model instance back to dict and verify no loss of data
all_docs_queries_result_model_json2 = all_docs_queries_result_model.to_dict()
assert all_docs_queries_result_model_json2 == all_docs_queries_result_model_json
class TestModel_AllDocsQuery():
"""
Test Class for AllDocsQuery
"""
def test_all_docs_query_serialization(self):
"""
Test serialization/deserialization for AllDocsQuery
"""
# Construct a json representation of an AllDocsQuery model
all_docs_query_model_json = {}
all_docs_query_model_json['att_encoding_info'] = False
all_docs_query_model_json['attachments'] = False
all_docs_query_model_json['conflicts'] = False
all_docs_query_model_json['descending'] = False
all_docs_query_model_json['include_docs'] = False
all_docs_query_model_json['inclusive_end'] = True
all_docs_query_model_json['limit'] = 0
all_docs_query_model_json['skip'] = 0
all_docs_query_model_json['update_seq'] = False
all_docs_query_model_json['endkey'] = 'testString'
all_docs_query_model_json['key'] = 'testString'
all_docs_query_model_json['keys'] = ['testString']
all_docs_query_model_json['startkey'] = 'testString'
# Construct a model instance of AllDocsQuery by calling from_dict on the json representation
all_docs_query_model = AllDocsQuery.from_dict(all_docs_query_model_json)
assert all_docs_query_model is not None
# Construct a second model instance of AllDocsQuery from the dict form of the first
all_docs_query_model_dict = AllDocsQuery.from_dict(all_docs_query_model_json).__dict__
all_docs_query_model2 = AllDocsQuery(**all_docs_query_model_dict)
# Verify the model instances are equivalent
assert all_docs_query_model == all_docs_query_model2
# Convert model instance back to dict and verify no loss of data
all_docs_query_model_json2 = all_docs_query_model.to_dict()
assert all_docs_query_model_json2 == all_docs_query_model_json
class TestModel_AllDocsResult():
"""
Test Class for AllDocsResult
"""
def test_all_docs_result_serialization(self):
"""
Test serialization/deserialization for AllDocsResult
"""
# Construct dict forms of any model objects needed in order to build this model.
attachment_model = {} # Attachment
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
revisions_model = {} # Revisions
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
document_revision_status_model = {} # DocumentRevisionStatus
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
document_model = {} # Document
document_model['_attachments'] = {}
document_model['_conflicts'] = ['testString']
document_model['_deleted'] = True
document_model['_deleted_conflicts'] = ['testString']
document_model['_id'] = 'testString'
document_model['_local_seq'] = 'testString'
document_model['_rev'] = 'testString'
document_model['_revisions'] = revisions_model
document_model['_revs_info'] = [document_revision_status_model]
document_model['foo'] = 'testString'
docs_result_row_value_model = {} # DocsResultRowValue
docs_result_row_value_model['rev'] = 'testString'
docs_result_row_model = {} # DocsResultRow
docs_result_row_model['caused_by'] = 'testString'
docs_result_row_model['error'] = 'testString'
docs_result_row_model['reason'] = 'testString'
docs_result_row_model['doc'] = document_model
docs_result_row_model['id'] = 'testString'
docs_result_row_model['key'] = 'testString'
docs_result_row_model['value'] = docs_result_row_value_model
# Construct a json representation of an AllDocsResult model
all_docs_result_model_json = {}
all_docs_result_model_json['total_rows'] = 0
all_docs_result_model_json['rows'] = [docs_result_row_model]
all_docs_result_model_json['update_seq'] = 'testString'
# Construct a model instance of AllDocsResult by calling from_dict on the json representation
all_docs_result_model = AllDocsResult.from_dict(all_docs_result_model_json)
assert all_docs_result_model is not None
# Construct a second model instance of AllDocsResult from the dict form of the first
all_docs_result_model_dict = AllDocsResult.from_dict(all_docs_result_model_json).__dict__
all_docs_result_model2 = AllDocsResult(**all_docs_result_model_dict)
# Verify the model instances are equivalent
assert all_docs_result_model == all_docs_result_model2
# Convert model instance back to dict and verify no loss of data
all_docs_result_model_json2 = all_docs_result_model.to_dict()
assert all_docs_result_model_json2 == all_docs_result_model_json
class TestModel_Analyzer():
"""
Test Class for Analyzer
"""
def test_analyzer_serialization(self):
"""
Test serialization/deserialization for Analyzer
"""
# Construct a json representation of an Analyzer model
analyzer_model_json = {}
analyzer_model_json['name'] = 'classic'
analyzer_model_json['stopwords'] = ['testString']
# Construct a model instance of Analyzer by calling from_dict on the json representation
analyzer_model = Analyzer.from_dict(analyzer_model_json)
assert analyzer_model is not None
# Construct a second model instance of Analyzer from the dict form of the first
analyzer_model_dict = Analyzer.from_dict(analyzer_model_json).__dict__
analyzer_model2 = Analyzer(**analyzer_model_dict)
# Verify the model instances are equivalent
assert analyzer_model == analyzer_model2
# Convert model instance back to dict and verify no loss of data
analyzer_model_json2 = analyzer_model.to_dict()
assert analyzer_model_json2 == analyzer_model_json
class TestModel_AnalyzerConfiguration():
"""
Test Class for AnalyzerConfiguration
"""
def test_analyzer_configuration_serialization(self):
"""
Test serialization/deserialization for AnalyzerConfiguration
"""
# Construct dict forms of any model objects needed in order to build this model.
analyzer_model = {} # Analyzer
analyzer_model['name'] = 'classic'
analyzer_model['stopwords'] = ['testString']
# Construct a json representation of an AnalyzerConfiguration model
analyzer_configuration_model_json = {}
analyzer_configuration_model_json['name'] = 'classic'
analyzer_configuration_model_json['stopwords'] = ['testString']
analyzer_configuration_model_json['fields'] = {}
# Construct a model instance of AnalyzerConfiguration by calling from_dict on the json representation
analyzer_configuration_model = AnalyzerConfiguration.from_dict(analyzer_configuration_model_json)
assert analyzer_configuration_model is not None
# Construct a second model instance of AnalyzerConfiguration from the dict form of the first
analyzer_configuration_model_dict = AnalyzerConfiguration.from_dict(analyzer_configuration_model_json).__dict__
analyzer_configuration_model2 = AnalyzerConfiguration(**analyzer_configuration_model_dict)
# Verify the model instances are equivalent
assert analyzer_configuration_model == analyzer_configuration_model2
# Convert model instance back to dict and verify no loss of data
analyzer_configuration_model_json2 = analyzer_configuration_model.to_dict()
assert analyzer_configuration_model_json2 == analyzer_configuration_model_json
class TestModel_ApiKeysResult():
"""
Test Class for ApiKeysResult
"""
def test_api_keys_result_serialization(self):
"""
Test serialization/deserialization for ApiKeysResult
"""
# Construct a json representation of an ApiKeysResult model
api_keys_result_model_json = {}
api_keys_result_model_json['ok'] = True
api_keys_result_model_json['key'] = 'testString'
api_keys_result_model_json['password'] = 'testString'
# Construct a model instance of ApiKeysResult by calling from_dict on the json representation
api_keys_result_model = ApiKeysResult.from_dict(api_keys_result_model_json)
assert api_keys_result_model is not None
# Construct a second model instance of ApiKeysResult from the dict form of the first
api_keys_result_model_dict = ApiKeysResult.from_dict(api_keys_result_model_json).__dict__
api_keys_result_model2 = ApiKeysResult(**api_keys_result_model_dict)
# Verify the model instances are equivalent
assert api_keys_result_model == api_keys_result_model2
# Convert model instance back to dict and verify no loss of data
api_keys_result_model_json2 = api_keys_result_model.to_dict()
assert api_keys_result_model_json2 == api_keys_result_model_json
class TestModel_Attachment():
"""
Test Class for Attachment
"""
def test_attachment_serialization(self):
"""
Test serialization/deserialization for Attachment
"""
# Construct a json representation of an Attachment model
attachment_model_json = {}
attachment_model_json['content_type'] = 'testString'
attachment_model_json['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model_json['digest'] = 'testString'
attachment_model_json['encoded_length'] = 0
attachment_model_json['encoding'] = 'testString'
attachment_model_json['follows'] = True
attachment_model_json['length'] = 0
attachment_model_json['revpos'] = 1
attachment_model_json['stub'] = True
# Construct a model instance of Attachment by calling from_dict on the json representation
attachment_model = Attachment.from_dict(attachment_model_json)
assert attachment_model is not None
# Construct a second model instance of Attachment from the dict form of the first
attachment_model_dict = Attachment.from_dict(attachment_model_json).__dict__
attachment_model2 = Attachment(**attachment_model_dict)
# Verify the model instances are equivalent
assert attachment_model == attachment_model2
# Convert model instance back to dict and verify no loss of data
attachment_model_json2 = attachment_model.to_dict()
assert attachment_model_json2 == attachment_model_json
class TestModel_BulkDocs():
"""
Test Class for BulkDocs
"""
def test_bulk_docs_serialization(self):
"""
Test serialization/deserialization for BulkDocs
"""
# Construct dict forms of any model objects needed in order to build this model.
attachment_model = {} # Attachment
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
revisions_model = {} # Revisions
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
document_revision_status_model = {} # DocumentRevisionStatus
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
document_model = {} # Document
document_model['_attachments'] = {}
document_model['_conflicts'] = ['testString']
document_model['_deleted'] = True
document_model['_deleted_conflicts'] = ['testString']
document_model['_id'] = 'testString'
document_model['_local_seq'] = 'testString'
document_model['_rev'] = 'testString'
document_model['_revisions'] = revisions_model
document_model['_revs_info'] = [document_revision_status_model]
document_model['foo'] = 'testString'
# Construct a json representation of a BulkDocs model
bulk_docs_model_json = {}
bulk_docs_model_json['docs'] = [document_model]
bulk_docs_model_json['new_edits'] = True
# Construct a model instance of BulkDocs by calling from_dict on the json representation
bulk_docs_model = BulkDocs.from_dict(bulk_docs_model_json)
assert bulk_docs_model is not None
# Construct a second model instance of BulkDocs from the dict form of the first
bulk_docs_model_dict = BulkDocs.from_dict(bulk_docs_model_json).__dict__
bulk_docs_model2 = BulkDocs(**bulk_docs_model_dict)
# Verify the model instances are equivalent
assert bulk_docs_model == bulk_docs_model2
# Convert model instance back to dict and verify no loss of data
bulk_docs_model_json2 = bulk_docs_model.to_dict()
assert bulk_docs_model_json2 == bulk_docs_model_json
class TestModel_BulkGetQueryDocument():
"""
Test Class for BulkGetQueryDocument
"""
def test_bulk_get_query_document_serialization(self):
"""
Test serialization/deserialization for BulkGetQueryDocument
"""
# Construct a json representation of a BulkGetQueryDocument model
bulk_get_query_document_model_json = {}
bulk_get_query_document_model_json['atts_since'] = ['1-99b02e08da151943c2dcb40090160bb8']
bulk_get_query_document_model_json['id'] = 'testString'
bulk_get_query_document_model_json['rev'] = 'testString'
# Construct a model instance of BulkGetQueryDocument by calling from_dict on the json representation
bulk_get_query_document_model = BulkGetQueryDocument.from_dict(bulk_get_query_document_model_json)
assert bulk_get_query_document_model is not None
# Construct a second model instance of BulkGetQueryDocument from the dict form of the first
bulk_get_query_document_model_dict = bulk_get_query_document_model.__dict__
bulk_get_query_document_model2 = BulkGetQueryDocument(**bulk_get_query_document_model_dict)
# Verify the model instances are equivalent
assert bulk_get_query_document_model == bulk_get_query_document_model2
# Convert model instance back to dict and verify no loss of data
bulk_get_query_document_model_json2 = bulk_get_query_document_model.to_dict()
assert bulk_get_query_document_model_json2 == bulk_get_query_document_model_json
class TestModel_BulkGetResult():
"""
Test Class for BulkGetResult
"""
def test_bulk_get_result_serialization(self):
"""
Test serialization/deserialization for BulkGetResult
"""
# Construct dict forms of any model objects needed in order to build this model.
document_result_model = {} # DocumentResult
document_result_model['id'] = 'testString'
document_result_model['rev'] = 'testString'
document_result_model['ok'] = True
document_result_model['caused_by'] = 'testString'
document_result_model['error'] = 'testString'
document_result_model['reason'] = 'testString'
attachment_model = {} # Attachment
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
revisions_model = {} # Revisions
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
document_revision_status_model = {} # DocumentRevisionStatus
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
document_model = {} # Document
document_model['_attachments'] = {}
document_model['_conflicts'] = ['testString']
document_model['_deleted'] = True
document_model['_deleted_conflicts'] = ['testString']
document_model['_id'] = 'testString'
document_model['_local_seq'] = 'testString'
document_model['_rev'] = 'testString'
document_model['_revisions'] = revisions_model
document_model['_revs_info'] = [document_revision_status_model]
document_model['foo'] = 'testString'
bulk_get_result_document_model = {} # BulkGetResultDocument
bulk_get_result_document_model['error'] = document_result_model
bulk_get_result_document_model['ok'] = document_model
bulk_get_result_item_model = {} # BulkGetResultItem
bulk_get_result_item_model['docs'] = [bulk_get_result_document_model]
bulk_get_result_item_model['id'] = 'testString'
# Construct a json representation of a BulkGetResult model
bulk_get_result_model_json = {}
bulk_get_result_model_json['results'] = [bulk_get_result_item_model]
# Construct a model instance of BulkGetResult by calling from_dict on the json representation
bulk_get_result_model = BulkGetResult.from_dict(bulk_get_result_model_json)
assert bulk_get_result_model is not None
# Construct a second model instance of BulkGetResult from the dict form of the first
bulk_get_result_model_dict = bulk_get_result_model.__dict__
bulk_get_result_model2 = BulkGetResult(**bulk_get_result_model_dict)
# Verify the model instances are equivalent
assert bulk_get_result_model == bulk_get_result_model2
# Convert model instance back to dict and verify no loss of data
bulk_get_result_model_json2 = bulk_get_result_model.to_dict()
assert bulk_get_result_model_json2 == bulk_get_result_model_json
class TestModel_BulkGetResultDocument():
"""
Test Class for BulkGetResultDocument
"""
def test_bulk_get_result_document_serialization(self):
"""
Test serialization/deserialization for BulkGetResultDocument
"""
# Construct dict forms of any model objects needed in order to build this model.
document_result_model = {} # DocumentResult
document_result_model['id'] = 'testString'
document_result_model['rev'] = 'testString'
document_result_model['ok'] = True
document_result_model['caused_by'] = 'testString'
document_result_model['error'] = 'testString'
document_result_model['reason'] = 'testString'
attachment_model = {} # Attachment
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
revisions_model = {} # Revisions
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
document_revision_status_model = {} # DocumentRevisionStatus
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
document_model = {} # Document
document_model['_attachments'] = {}
document_model['_conflicts'] = ['testString']
document_model['_deleted'] = True
document_model['_deleted_conflicts'] = ['testString']
document_model['_id'] = 'testString'
document_model['_local_seq'] = 'testString'
document_model['_rev'] = 'testString'
document_model['_revisions'] = revisions_model
document_model['_revs_info'] = [document_revision_status_model]
document_model['foo'] = 'testString'
# Construct a json representation of a BulkGetResultDocument model
bulk_get_result_document_model_json = {}
bulk_get_result_document_model_json['error'] = document_result_model
bulk_get_result_document_model_json['ok'] = document_model
# Construct a model instance of BulkGetResultDocument by calling from_dict on the json representation
bulk_get_result_document_model = BulkGetResultDocument.from_dict(bulk_get_result_document_model_json)
assert bulk_get_result_document_model is not None
# Construct a second model instance of BulkGetResultDocument from the dict form of the first
bulk_get_result_document_model_dict = bulk_get_result_document_model.__dict__
bulk_get_result_document_model2 = BulkGetResultDocument(**bulk_get_result_document_model_dict)
# Verify the model instances are equivalent
assert bulk_get_result_document_model == bulk_get_result_document_model2
# Convert model instance back to dict and verify no loss of data
bulk_get_result_document_model_json2 = bulk_get_result_document_model.to_dict()
assert bulk_get_result_document_model_json2 == bulk_get_result_document_model_json
class TestModel_BulkGetResultItem():
"""
Test Class for BulkGetResultItem
"""
def test_bulk_get_result_item_serialization(self):
"""
Test serialization/deserialization for BulkGetResultItem
"""
# Construct dict forms of any model objects needed in order to build this model.
document_result_model = {} # DocumentResult
document_result_model['id'] = 'testString'
document_result_model['rev'] = 'testString'
document_result_model['ok'] = True
document_result_model['caused_by'] = 'testString'
document_result_model['error'] = 'testString'
document_result_model['reason'] = 'testString'
attachment_model = {} # Attachment
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
revisions_model = {} # Revisions
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
document_revision_status_model = {} # DocumentRevisionStatus
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
document_model = {} # Document
document_model['_attachments'] = {}
document_model['_conflicts'] = ['testString']
document_model['_deleted'] = True
document_model['_deleted_conflicts'] = ['testString']
document_model['_id'] = 'testString'
document_model['_local_seq'] = 'testString'
document_model['_rev'] = 'testString'
document_model['_revisions'] = revisions_model
document_model['_revs_info'] = [document_revision_status_model]
document_model['foo'] = 'testString'
bulk_get_result_document_model = {} # BulkGetResultDocument
bulk_get_result_document_model['error'] = document_result_model
bulk_get_result_document_model['ok'] = document_model
# Construct a json representation of a BulkGetResultItem model
bulk_get_result_item_model_json = {}
bulk_get_result_item_model_json['docs'] = [bulk_get_result_document_model]
bulk_get_result_item_model_json['id'] = 'testString'
# Construct a model instance of BulkGetResultItem by calling from_dict on the json representation
bulk_get_result_item_model = BulkGetResultItem.from_dict(bulk_get_result_item_model_json)
assert bulk_get_result_item_model is not None
# Construct a second model instance of BulkGetResultItem from the dict form of the first
bulk_get_result_item_model_dict = bulk_get_result_item_model.__dict__
bulk_get_result_item_model2 = BulkGetResultItem(**bulk_get_result_item_model_dict)
# Verify the model instances are equivalent
assert bulk_get_result_item_model == bulk_get_result_item_model2
# Convert model instance back to dict and verify no loss of data
bulk_get_result_item_model_json2 = bulk_get_result_item_model.to_dict()
assert bulk_get_result_item_model_json2 == bulk_get_result_item_model_json
class TestModel_CapacityThroughputInformation():
"""
Test Class for CapacityThroughputInformation
"""
def test_capacity_throughput_information_serialization(self):
"""
Test serialization/deserialization for CapacityThroughputInformation
"""
# Construct dict forms of any model objects needed in order to build this model.
throughput_information_model = {} # ThroughputInformation
throughput_information_model['blocks'] = 0
throughput_information_model['query'] = 0
throughput_information_model['read'] = 0
throughput_information_model['write'] = 0
capacity_throughput_information_current_model = {} # CapacityThroughputInformationCurrent
capacity_throughput_information_current_model['throughput'] = throughput_information_model
capacity_throughput_information_target_model = {} # CapacityThroughputInformationTarget
capacity_throughput_information_target_model['throughput'] = throughput_information_model
# Construct a json representation of a CapacityThroughputInformation model
capacity_throughput_information_model_json = {}
capacity_throughput_information_model_json['current'] = capacity_throughput_information_current_model
capacity_throughput_information_model_json['target'] = capacity_throughput_information_target_model
# Construct a model instance of CapacityThroughputInformation by calling from_dict on the json representation
capacity_throughput_information_model = CapacityThroughputInformation.from_dict(capacity_throughput_information_model_json)
assert capacity_throughput_information_model is not None
# Construct a second model instance of CapacityThroughputInformation from the dict form of the first
capacity_throughput_information_model_dict = capacity_throughput_information_model.__dict__
capacity_throughput_information_model2 = CapacityThroughputInformation(**capacity_throughput_information_model_dict)
# Verify the model instances are equivalent
assert capacity_throughput_information_model == capacity_throughput_information_model2
# Convert model instance back to dict and verify no loss of data
capacity_throughput_information_model_json2 = capacity_throughput_information_model.to_dict()
assert capacity_throughput_information_model_json2 == capacity_throughput_information_model_json
class TestModel_CapacityThroughputInformationCurrent():
"""
Test Class for CapacityThroughputInformationCurrent
"""
def test_capacity_throughput_information_current_serialization(self):
"""
Test serialization/deserialization for CapacityThroughputInformationCurrent
"""
# Construct dict forms of any model objects needed in order to build this model.
throughput_information_model = {} # ThroughputInformation
throughput_information_model['blocks'] = 0
throughput_information_model['query'] = 0
throughput_information_model['read'] = 0
throughput_information_model['write'] = 0
# Construct a json representation of a CapacityThroughputInformationCurrent model
capacity_throughput_information_current_model_json = {}
capacity_throughput_information_current_model_json['throughput'] = throughput_information_model
# Construct a model instance of CapacityThroughputInformationCurrent by calling from_dict on the json representation
capacity_throughput_information_current_model = CapacityThroughputInformationCurrent.from_dict(capacity_throughput_information_current_model_json)
assert capacity_throughput_information_current_model is not None
# Construct a second model instance of CapacityThroughputInformationCurrent from the dict form of the first
capacity_throughput_information_current_model_dict = capacity_throughput_information_current_model.__dict__
capacity_throughput_information_current_model2 = CapacityThroughputInformationCurrent(**capacity_throughput_information_current_model_dict)
# Verify the model instances are equivalent
assert capacity_throughput_information_current_model == capacity_throughput_information_current_model2
# Convert model instance back to dict and verify no loss of data
capacity_throughput_information_current_model_json2 = capacity_throughput_information_current_model.to_dict()
assert capacity_throughput_information_current_model_json2 == capacity_throughput_information_current_model_json
class TestModel_CapacityThroughputInformationTarget():
"""
Test Class for CapacityThroughputInformationTarget
"""
def test_capacity_throughput_information_target_serialization(self):
"""
Test serialization/deserialization for CapacityThroughputInformationTarget
"""
# Construct dict forms of any model objects needed in order to build this model.
throughput_information_model = {} # ThroughputInformation
throughput_information_model['blocks'] = 0
throughput_information_model['query'] = 0
throughput_information_model['read'] = 0
throughput_information_model['write'] = 0
# Construct a json representation of a CapacityThroughputInformationTarget model
capacity_throughput_information_target_model_json = {}
capacity_throughput_information_target_model_json['throughput'] = throughput_information_model
# Construct a model instance of CapacityThroughputInformationTarget by calling from_dict on the json representation
capacity_throughput_information_target_model = CapacityThroughputInformationTarget.from_dict(capacity_throughput_information_target_model_json)
assert capacity_throughput_information_target_model is not None
# Construct a second model instance of CapacityThroughputInformationTarget from the dict form of the first
capacity_throughput_information_target_model_dict = capacity_throughput_information_target_model.__dict__
capacity_throughput_information_target_model2 = CapacityThroughputInformationTarget(**capacity_throughput_information_target_model_dict)
# Verify the model instances are equivalent
assert capacity_throughput_information_target_model == capacity_throughput_information_target_model2
# Convert model instance back to dict and verify no loss of data
capacity_throughput_information_target_model_json2 = capacity_throughput_information_target_model.to_dict()
assert capacity_throughput_information_target_model_json2 == capacity_throughput_information_target_model_json
class TestModel_Change():
"""
Test Class for Change
"""
def test_change_serialization(self):
"""
Test serialization/deserialization for Change
"""
# Construct a json representation of a Change model
change_model_json = {}
change_model_json['rev'] = 'testString'
# Construct a model instance of Change by calling from_dict on the json representation
change_model = Change.from_dict(change_model_json)
assert change_model is not None
# Construct a second model instance of Change from the dict form of the first
change_model_dict = change_model.__dict__
change_model2 = Change(**change_model_dict)
# Verify the model instances are equivalent
assert change_model == change_model2
# Convert model instance back to dict and verify no loss of data
change_model_json2 = change_model.to_dict()
assert change_model_json2 == change_model_json
class TestModel_ChangesResult():
"""
Test Class for ChangesResult
"""
def test_changes_result_serialization(self):
"""
Test serialization/deserialization for ChangesResult
"""
# Construct dict forms of any model objects needed in order to build this model.
change_model = {} # Change
change_model['rev'] = 'testString'
attachment_model = {} # Attachment
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
revisions_model = {} # Revisions
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
document_revision_status_model = {} # DocumentRevisionStatus
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
document_model = {} # Document
document_model['_attachments'] = {}
document_model['_conflicts'] = ['testString']
document_model['_deleted'] = True
document_model['_deleted_conflicts'] = ['testString']
document_model['_id'] = 'testString'
document_model['_local_seq'] = 'testString'
document_model['_rev'] = 'testString'
document_model['_revisions'] = revisions_model
document_model['_revs_info'] = [document_revision_status_model]
document_model['foo'] = 'testString'
changes_result_item_model = {} # ChangesResultItem
changes_result_item_model['changes'] = [change_model]
changes_result_item_model['deleted'] = True
changes_result_item_model['doc'] = document_model
changes_result_item_model['id'] = 'testString'
changes_result_item_model['seq'] = 'testString'
# Construct a json representation of a ChangesResult model
changes_result_model_json = {}
changes_result_model_json['last_seq'] = 'testString'
changes_result_model_json['pending'] = 26
changes_result_model_json['results'] = [changes_result_item_model]
# Construct a model instance of ChangesResult by calling from_dict on the json representation
changes_result_model = ChangesResult.from_dict(changes_result_model_json)
assert changes_result_model is not None
# Construct a second model instance of ChangesResult from the dict form of the first
changes_result_model_dict = changes_result_model.__dict__
changes_result_model2 = ChangesResult(**changes_result_model_dict)
# Verify the model instances are equivalent
assert changes_result_model == changes_result_model2
# Convert model instance back to dict and verify no loss of data
changes_result_model_json2 = changes_result_model.to_dict()
assert changes_result_model_json2 == changes_result_model_json
class TestModel_ChangesResultItem():
"""
Test Class for ChangesResultItem
"""
def test_changes_result_item_serialization(self):
"""
Test serialization/deserialization for ChangesResultItem
"""
# Construct dict forms of any model objects needed in order to build this model.
change_model = {} # Change
change_model['rev'] = 'testString'
attachment_model = {} # Attachment
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
revisions_model = {} # Revisions
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
document_revision_status_model = {} # DocumentRevisionStatus
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
document_model = {} # Document
document_model['_attachments'] = {}
document_model['_conflicts'] = ['testString']
document_model['_deleted'] = True
document_model['_deleted_conflicts'] = ['testString']
document_model['_id'] = 'testString'
document_model['_local_seq'] = 'testString'
document_model['_rev'] = 'testString'
document_model['_revisions'] = revisions_model
document_model['_revs_info'] = [document_revision_status_model]
document_model['foo'] = 'testString'
# Construct a json representation of a ChangesResultItem model
changes_result_item_model_json = {}
changes_result_item_model_json['changes'] = [change_model]
changes_result_item_model_json['deleted'] = True
changes_result_item_model_json['doc'] = document_model
changes_result_item_model_json['id'] = 'testString'
changes_result_item_model_json['seq'] = 'testString'
# Construct a model instance of ChangesResultItem by calling from_dict on the json representation
changes_result_item_model = ChangesResultItem.from_dict(changes_result_item_model_json)
assert changes_result_item_model is not None
# Construct a second model instance of ChangesResultItem from the dict form of the first
changes_result_item_model_dict = changes_result_item_model.__dict__
changes_result_item_model2 = ChangesResultItem(**changes_result_item_model_dict)
# Verify the model instances are equivalent
assert changes_result_item_model == changes_result_item_model2
# Convert model instance back to dict and verify no loss of data
changes_result_item_model_json2 = changes_result_item_model.to_dict()
assert changes_result_item_model_json2 == changes_result_item_model_json
class TestModel_ContentInformationSizes():
"""
Test Class for ContentInformationSizes
"""
def test_content_information_sizes_serialization(self):
"""
Test serialization/deserialization for ContentInformationSizes
"""
# Construct a json representation of a ContentInformationSizes model
content_information_sizes_model_json = {}
content_information_sizes_model_json['active'] = 26
content_information_sizes_model_json['external'] = 26
content_information_sizes_model_json['file'] = 26
# Construct a model instance of ContentInformationSizes by calling from_dict on the json representation
content_information_sizes_model = ContentInformationSizes.from_dict(content_information_sizes_model_json)
assert content_information_sizes_model is not None
# Construct a second model instance of ContentInformationSizes from the dict form of the first
content_information_sizes_model_dict = content_information_sizes_model.__dict__
content_information_sizes_model2 = ContentInformationSizes(**content_information_sizes_model_dict)
# Verify the model instances are equivalent
assert content_information_sizes_model == content_information_sizes_model2
# Convert model instance back to dict and verify no loss of data
content_information_sizes_model_json2 = content_information_sizes_model.to_dict()
assert content_information_sizes_model_json2 == content_information_sizes_model_json
class TestModel_CorsInformation():
"""
Test Class for CorsInformation
"""
def test_cors_information_serialization(self):
"""
Test serialization/deserialization for CorsInformation
"""
# Construct a json representation of a CorsInformation model
cors_information_model_json = {}
cors_information_model_json['allow_credentials'] = True
cors_information_model_json['enable_cors'] = True
cors_information_model_json['origins'] = ['testString']
# Construct a model instance of CorsInformation by calling from_dict on the json representation
cors_information_model = CorsInformation.from_dict(cors_information_model_json)
assert cors_information_model is not None
# Construct a second model instance of CorsInformation from the dict form of the first
cors_information_model_dict = cors_information_model.__dict__
cors_information_model2 = CorsInformation(**cors_information_model_dict)
# Verify the model instances are equivalent
assert cors_information_model == cors_information_model2
# Convert model instance back to dict and verify no loss of data
cors_information_model_json2 = cors_information_model.to_dict()
assert cors_information_model_json2 == cors_information_model_json
class TestModel_CurrentThroughputInformation():
"""
Test Class for CurrentThroughputInformation
"""
def test_current_throughput_information_serialization(self):
"""
Test serialization/deserialization for CurrentThroughputInformation
"""
# Construct dict forms of any model objects needed in order to build this model.
current_throughput_information_throughput_model = {} # CurrentThroughputInformationThroughput
current_throughput_information_throughput_model['query'] = 0
current_throughput_information_throughput_model['read'] = 0
current_throughput_information_throughput_model['write'] = 0
# Construct a json representation of a CurrentThroughputInformation model
current_throughput_information_model_json = {}
current_throughput_information_model_json['throughput'] = current_throughput_information_throughput_model
# Construct a model instance of CurrentThroughputInformation by calling from_dict on the json representation
current_throughput_information_model = CurrentThroughputInformation.from_dict(current_throughput_information_model_json)
assert current_throughput_information_model is not None
# Construct a second model instance of CurrentThroughputInformation from the dict form of the first
current_throughput_information_model_dict = current_throughput_information_model.__dict__
current_throughput_information_model2 = CurrentThroughputInformation(**current_throughput_information_model_dict)
# Verify the model instances are equivalent
assert current_throughput_information_model == current_throughput_information_model2
# Convert model instance back to dict and verify no loss of data
current_throughput_information_model_json2 = current_throughput_information_model.to_dict()
assert current_throughput_information_model_json2 == current_throughput_information_model_json
class TestModel_CurrentThroughputInformationThroughput():
"""
Test Class for CurrentThroughputInformationThroughput
"""
def test_current_throughput_information_throughput_serialization(self):
"""
Test serialization/deserialization for CurrentThroughputInformationThroughput
"""
# Construct a json representation of a CurrentThroughputInformationThroughput model
current_throughput_information_throughput_model_json = {}
current_throughput_information_throughput_model_json['query'] = 0
current_throughput_information_throughput_model_json['read'] = 0
current_throughput_information_throughput_model_json['write'] = 0
# Construct a model instance of CurrentThroughputInformationThroughput by calling from_dict on the json representation
current_throughput_information_throughput_model = CurrentThroughputInformationThroughput.from_dict(current_throughput_information_throughput_model_json)
assert current_throughput_information_throughput_model is not None
# Construct a second model instance of CurrentThroughputInformationThroughput from the dict form of the first
current_throughput_information_throughput_model_dict = current_throughput_information_throughput_model.__dict__
current_throughput_information_throughput_model2 = CurrentThroughputInformationThroughput(**current_throughput_information_throughput_model_dict)
# Verify the model instances are equivalent
assert current_throughput_information_throughput_model == current_throughput_information_throughput_model2
# Convert model instance back to dict and verify no loss of data
current_throughput_information_throughput_model_json2 = current_throughput_information_throughput_model.to_dict()
assert current_throughput_information_throughput_model_json2 == current_throughput_information_throughput_model_json
class TestModel_DatabaseInformation():
"""
Test Class for DatabaseInformation
"""
def test_database_information_serialization(self):
"""
Test serialization/deserialization for DatabaseInformation
"""
# Construct dict forms of any model objects needed in order to build this model.
database_information_cluster_model = {} # DatabaseInformationCluster
database_information_cluster_model['n'] = 1
database_information_cluster_model['q'] = 1
database_information_cluster_model['r'] = 1
database_information_cluster_model['w'] = 1
database_information_props_model = {} # DatabaseInformationProps
database_information_props_model['partitioned'] = True
content_information_sizes_model = {} # ContentInformationSizes
content_information_sizes_model['active'] = 26
content_information_sizes_model['external'] = 26
content_information_sizes_model['file'] = 26
# Construct a json representation of a DatabaseInformation model
database_information_model_json = {}
database_information_model_json['cluster'] = database_information_cluster_model
database_information_model_json['committed_update_seq'] = 'testString'
database_information_model_json['compact_running'] = True
database_information_model_json['compacted_seq'] = 'testString'
database_information_model_json['db_name'] = 'testString'
database_information_model_json['disk_format_version'] = 26
database_information_model_json['doc_count'] = 0
database_information_model_json['doc_del_count'] = 0
database_information_model_json['engine'] = 'testString'
database_information_model_json['props'] = database_information_props_model
database_information_model_json['sizes'] = content_information_sizes_model
database_information_model_json['update_seq'] = 'testString'
database_information_model_json['uuid'] = 'testString'
# Construct a model instance of DatabaseInformation by calling from_dict on the json representation
database_information_model = DatabaseInformation.from_dict(database_information_model_json)
assert database_information_model is not None
# Construct a second model instance of DatabaseInformation from the dict form of the first
database_information_model_dict = database_information_model.__dict__
database_information_model2 = DatabaseInformation(**database_information_model_dict)
# Verify the model instances are equivalent
assert database_information_model == database_information_model2
# Convert model instance back to dict and verify no loss of data
database_information_model_json2 = database_information_model.to_dict()
assert database_information_model_json2 == database_information_model_json
class TestModel_DatabaseInformationCluster():
"""
Test Class for DatabaseInformationCluster
"""
def test_database_information_cluster_serialization(self):
"""
Test serialization/deserialization for DatabaseInformationCluster
"""
# Construct a json representation of a DatabaseInformationCluster model
database_information_cluster_model_json = {}
database_information_cluster_model_json['n'] = 1
database_information_cluster_model_json['q'] = 1
database_information_cluster_model_json['r'] = 1
database_information_cluster_model_json['w'] = 1
# Construct a model instance of DatabaseInformationCluster by calling from_dict on the json representation
database_information_cluster_model = DatabaseInformationCluster.from_dict(database_information_cluster_model_json)
assert database_information_cluster_model != False
        # Construct a second model instance of DatabaseInformationCluster from the dict form of the first
database_information_cluster_model_dict = DatabaseInformationCluster.from_dict(database_information_cluster_model_json).__dict__
database_information_cluster_model2 = DatabaseInformationCluster(**database_information_cluster_model_dict)
# Verify the model instances are equivalent
assert database_information_cluster_model == database_information_cluster_model2
# Convert model instance back to dict and verify no loss of data
database_information_cluster_model_json2 = database_information_cluster_model.to_dict()
assert database_information_cluster_model_json2 == database_information_cluster_model_json
class TestModel_DatabaseInformationProps():
"""
Test Class for DatabaseInformationProps
"""
def test_database_information_props_serialization(self):
"""
Test serialization/deserialization for DatabaseInformationProps
"""
# Construct a json representation of a DatabaseInformationProps model
database_information_props_model_json = {}
database_information_props_model_json['partitioned'] = True
# Construct a model instance of DatabaseInformationProps by calling from_dict on the json representation
database_information_props_model = DatabaseInformationProps.from_dict(database_information_props_model_json)
assert database_information_props_model != False
        # Construct a second model instance of DatabaseInformationProps from the dict form of the first
database_information_props_model_dict = DatabaseInformationProps.from_dict(database_information_props_model_json).__dict__
database_information_props_model2 = DatabaseInformationProps(**database_information_props_model_dict)
# Verify the model instances are equivalent
assert database_information_props_model == database_information_props_model2
# Convert model instance back to dict and verify no loss of data
database_information_props_model_json2 = database_information_props_model.to_dict()
assert database_information_props_model_json2 == database_information_props_model_json
class TestModel_DbEvent():
"""
Test Class for DbEvent
"""
def test_db_event_serialization(self):
"""
Test serialization/deserialization for DbEvent
"""
# Construct a json representation of a DbEvent model
db_event_model_json = {}
db_event_model_json['account'] = 'testString'
db_event_model_json['db_name'] = 'testString'
db_event_model_json['seq'] = 'testString'
db_event_model_json['type'] = 'created'
# Construct a model instance of DbEvent by calling from_dict on the json representation
db_event_model = DbEvent.from_dict(db_event_model_json)
assert db_event_model != False
        # Construct a second model instance of DbEvent from the dict form of the first
db_event_model_dict = DbEvent.from_dict(db_event_model_json).__dict__
db_event_model2 = DbEvent(**db_event_model_dict)
# Verify the model instances are equivalent
assert db_event_model == db_event_model2
# Convert model instance back to dict and verify no loss of data
db_event_model_json2 = db_event_model.to_dict()
assert db_event_model_json2 == db_event_model_json
class TestModel_DbUpdates():
"""
Test Class for DbUpdates
"""
def test_db_updates_serialization(self):
"""
Test serialization/deserialization for DbUpdates
"""
# Construct dict forms of any model objects needed in order to build this model.
db_event_model = {} # DbEvent
db_event_model['account'] = 'testString'
db_event_model['db_name'] = 'testString'
db_event_model['seq'] = 'testString'
db_event_model['type'] = 'created'
# Construct a json representation of a DbUpdates model
db_updates_model_json = {}
db_updates_model_json['last_seq'] = 'testString'
db_updates_model_json['results'] = [db_event_model]
# Construct a model instance of DbUpdates by calling from_dict on the json representation
db_updates_model = DbUpdates.from_dict(db_updates_model_json)
assert db_updates_model != False
        # Construct a second model instance of DbUpdates from the dict form of the first
db_updates_model_dict = DbUpdates.from_dict(db_updates_model_json).__dict__
db_updates_model2 = DbUpdates(**db_updates_model_dict)
# Verify the model instances are equivalent
assert db_updates_model == db_updates_model2
# Convert model instance back to dict and verify no loss of data
db_updates_model_json2 = db_updates_model.to_dict()
assert db_updates_model_json2 == db_updates_model_json
class TestModel_DbsInfoResult():
"""
Test Class for DbsInfoResult
"""
def test_dbs_info_result_serialization(self):
"""
Test serialization/deserialization for DbsInfoResult
"""
# Construct dict forms of any model objects needed in order to build this model.
database_information_cluster_model = {} # DatabaseInformationCluster
database_information_cluster_model['n'] = 1
database_information_cluster_model['q'] = 1
database_information_cluster_model['r'] = 1
database_information_cluster_model['w'] = 1
database_information_props_model = {} # DatabaseInformationProps
database_information_props_model['partitioned'] = True
content_information_sizes_model = {} # ContentInformationSizes
content_information_sizes_model['active'] = 26
content_information_sizes_model['external'] = 26
content_information_sizes_model['file'] = 26
database_information_model = {} # DatabaseInformation
database_information_model['cluster'] = database_information_cluster_model
database_information_model['committed_update_seq'] = 'testString'
database_information_model['compact_running'] = True
database_information_model['compacted_seq'] = 'testString'
database_information_model['db_name'] = 'testString'
database_information_model['disk_format_version'] = 26
database_information_model['doc_count'] = 0
database_information_model['doc_del_count'] = 0
database_information_model['engine'] = 'testString'
database_information_model['props'] = database_information_props_model
database_information_model['sizes'] = content_information_sizes_model
database_information_model['update_seq'] = 'testString'
database_information_model['uuid'] = 'testString'
# Construct a json representation of a DbsInfoResult model
dbs_info_result_model_json = {}
dbs_info_result_model_json['error'] = 'testString'
dbs_info_result_model_json['info'] = database_information_model
dbs_info_result_model_json['key'] = 'testString'
# Construct a model instance of DbsInfoResult by calling from_dict on the json representation
dbs_info_result_model = DbsInfoResult.from_dict(dbs_info_result_model_json)
assert dbs_info_result_model != False
        # Construct a second model instance of DbsInfoResult from the dict form of the first
dbs_info_result_model_dict = DbsInfoResult.from_dict(dbs_info_result_model_json).__dict__
dbs_info_result_model2 = DbsInfoResult(**dbs_info_result_model_dict)
# Verify the model instances are equivalent
assert dbs_info_result_model == dbs_info_result_model2
# Convert model instance back to dict and verify no loss of data
dbs_info_result_model_json2 = dbs_info_result_model.to_dict()
assert dbs_info_result_model_json2 == dbs_info_result_model_json
class TestModel_DesignDocument():
"""
Test Class for DesignDocument
"""
def test_design_document_serialization(self):
"""
Test serialization/deserialization for DesignDocument
"""
# Construct dict forms of any model objects needed in order to build this model.
attachment_model = {} # Attachment
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
revisions_model = {} # Revisions
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
document_revision_status_model = {} # DocumentRevisionStatus
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
analyzer_model = {} # Analyzer
analyzer_model['name'] = 'classic'
analyzer_model['stopwords'] = ['testString']
analyzer_configuration_model = {} # AnalyzerConfiguration
analyzer_configuration_model['name'] = 'classic'
analyzer_configuration_model['stopwords'] = ['testString']
analyzer_configuration_model['fields'] = {}
search_index_definition_model = {} # SearchIndexDefinition
search_index_definition_model['analyzer'] = analyzer_configuration_model
search_index_definition_model['index'] = 'testString'
design_document_options_model = {} # DesignDocumentOptions
design_document_options_model['partitioned'] = True
design_document_views_map_reduce_model = {} # DesignDocumentViewsMapReduce
design_document_views_map_reduce_model['map'] = 'testString'
design_document_views_map_reduce_model['reduce'] = 'testString'
geo_index_definition_model = {} # GeoIndexDefinition
geo_index_definition_model['index'] = 'testString'
# Construct a json representation of a DesignDocument model
design_document_model_json = {}
design_document_model_json['_attachments'] = {}
design_document_model_json['_conflicts'] = ['testString']
design_document_model_json['_deleted'] = True
design_document_model_json['_deleted_conflicts'] = ['testString']
design_document_model_json['_id'] = 'testString'
design_document_model_json['_local_seq'] = 'testString'
design_document_model_json['_rev'] = 'testString'
design_document_model_json['_revisions'] = revisions_model
design_document_model_json['_revs_info'] = [document_revision_status_model]
design_document_model_json['autoupdate'] = True
design_document_model_json['filters'] = {}
design_document_model_json['indexes'] = {}
design_document_model_json['language'] = 'javascript'
design_document_model_json['options'] = design_document_options_model
design_document_model_json['validate_doc_update'] = 'testString'
design_document_model_json['views'] = {}
design_document_model_json['st_indexes'] = {}
design_document_model_json['foo'] = 'testString'
# Construct a model instance of DesignDocument by calling from_dict on the json representation
design_document_model = DesignDocument.from_dict(design_document_model_json)
assert design_document_model != False
        # Construct a second model instance of DesignDocument from the dict form of the first
design_document_model_dict = DesignDocument.from_dict(design_document_model_json).__dict__
design_document_model2 = DesignDocument(**design_document_model_dict)
# Verify the model instances are equivalent
assert design_document_model == design_document_model2
# Convert model instance back to dict and verify no loss of data
design_document_model_json2 = design_document_model.to_dict()
assert design_document_model_json2 == design_document_model_json
# Test get_properties and set_properties methods.
design_document_model.set_properties({})
actual_dict = design_document_model.get_properties()
assert actual_dict == {}
expected_dict = {'foo': 'testString'}
design_document_model.set_properties(expected_dict)
actual_dict = design_document_model.get_properties()
assert actual_dict == expected_dict
class TestModel_DesignDocumentInformation():
"""
Test Class for DesignDocumentInformation
"""
def test_design_document_information_serialization(self):
"""
Test serialization/deserialization for DesignDocumentInformation
"""
# Construct dict forms of any model objects needed in order to build this model.
content_information_sizes_model = {} # ContentInformationSizes
content_information_sizes_model['active'] = 26
content_information_sizes_model['external'] = 26
content_information_sizes_model['file'] = 26
design_document_view_index_model = {} # DesignDocumentViewIndex
design_document_view_index_model['compact_running'] = True
design_document_view_index_model['language'] = 'testString'
design_document_view_index_model['signature'] = 'testString'
design_document_view_index_model['sizes'] = content_information_sizes_model
design_document_view_index_model['updater_running'] = True
design_document_view_index_model['waiting_clients'] = 0
design_document_view_index_model['waiting_commit'] = True
# Construct a json representation of a DesignDocumentInformation model
design_document_information_model_json = {}
design_document_information_model_json['name'] = 'testString'
design_document_information_model_json['view_index'] = design_document_view_index_model
# Construct a model instance of DesignDocumentInformation by calling from_dict on the json representation
design_document_information_model = DesignDocumentInformation.from_dict(design_document_information_model_json)
assert design_document_information_model != False
        # Construct a second model instance of DesignDocumentInformation from the dict form of the first
design_document_information_model_dict = DesignDocumentInformation.from_dict(design_document_information_model_json).__dict__
design_document_information_model2 = DesignDocumentInformation(**design_document_information_model_dict)
# Verify the model instances are equivalent
assert design_document_information_model == design_document_information_model2
# Convert model instance back to dict and verify no loss of data
design_document_information_model_json2 = design_document_information_model.to_dict()
assert design_document_information_model_json2 == design_document_information_model_json
class TestModel_DesignDocumentOptions():
"""
Test Class for DesignDocumentOptions
"""
def test_design_document_options_serialization(self):
"""
Test serialization/deserialization for DesignDocumentOptions
"""
# Construct a json representation of a DesignDocumentOptions model
design_document_options_model_json = {}
design_document_options_model_json['partitioned'] = True
# Construct a model instance of DesignDocumentOptions by calling from_dict on the json representation
design_document_options_model = DesignDocumentOptions.from_dict(design_document_options_model_json)
assert design_document_options_model != False
        # Construct a second model instance of DesignDocumentOptions from the dict form of the first
design_document_options_model_dict = DesignDocumentOptions.from_dict(design_document_options_model_json).__dict__
design_document_options_model2 = DesignDocumentOptions(**design_document_options_model_dict)
# Verify the model instances are equivalent
assert design_document_options_model == design_document_options_model2
# Convert model instance back to dict and verify no loss of data
design_document_options_model_json2 = design_document_options_model.to_dict()
assert design_document_options_model_json2 == design_document_options_model_json
class TestModel_DesignDocumentViewIndex():
"""
Test Class for DesignDocumentViewIndex
"""
def test_design_document_view_index_serialization(self):
"""
Test serialization/deserialization for DesignDocumentViewIndex
"""
# Construct dict forms of any model objects needed in order to build this model.
content_information_sizes_model = {} # ContentInformationSizes
content_information_sizes_model['active'] = 26
content_information_sizes_model['external'] = 26
content_information_sizes_model['file'] = 26
# Construct a json representation of a DesignDocumentViewIndex model
design_document_view_index_model_json = {}
design_document_view_index_model_json['compact_running'] = True
design_document_view_index_model_json['language'] = 'testString'
design_document_view_index_model_json['signature'] = 'testString'
design_document_view_index_model_json['sizes'] = content_information_sizes_model
design_document_view_index_model_json['updater_running'] = True
design_document_view_index_model_json['waiting_clients'] = 0
design_document_view_index_model_json['waiting_commit'] = True
# Construct a model instance of DesignDocumentViewIndex by calling from_dict on the json representation
design_document_view_index_model = DesignDocumentViewIndex.from_dict(design_document_view_index_model_json)
assert design_document_view_index_model != False
        # Construct a second model instance of DesignDocumentViewIndex from the dict form of the first
design_document_view_index_model_dict = DesignDocumentViewIndex.from_dict(design_document_view_index_model_json).__dict__
design_document_view_index_model2 = DesignDocumentViewIndex(**design_document_view_index_model_dict)
# Verify the model instances are equivalent
assert design_document_view_index_model == design_document_view_index_model2
# Convert model instance back to dict and verify no loss of data
design_document_view_index_model_json2 = design_document_view_index_model.to_dict()
assert design_document_view_index_model_json2 == design_document_view_index_model_json
class TestModel_DesignDocumentViewsMapReduce():
"""
Test Class for DesignDocumentViewsMapReduce
"""
def test_design_document_views_map_reduce_serialization(self):
"""
Test serialization/deserialization for DesignDocumentViewsMapReduce
"""
# Construct a json representation of a DesignDocumentViewsMapReduce model
design_document_views_map_reduce_model_json = {}
design_document_views_map_reduce_model_json['map'] = 'testString'
design_document_views_map_reduce_model_json['reduce'] = 'testString'
# Construct a model instance of DesignDocumentViewsMapReduce by calling from_dict on the json representation
design_document_views_map_reduce_model = DesignDocumentViewsMapReduce.from_dict(design_document_views_map_reduce_model_json)
assert design_document_views_map_reduce_model != False
        # Construct a second model instance of DesignDocumentViewsMapReduce from the dict form of the first
design_document_views_map_reduce_model_dict = DesignDocumentViewsMapReduce.from_dict(design_document_views_map_reduce_model_json).__dict__
design_document_views_map_reduce_model2 = DesignDocumentViewsMapReduce(**design_document_views_map_reduce_model_dict)
# Verify the model instances are equivalent
assert design_document_views_map_reduce_model == design_document_views_map_reduce_model2
# Convert model instance back to dict and verify no loss of data
design_document_views_map_reduce_model_json2 = design_document_views_map_reduce_model.to_dict()
assert design_document_views_map_reduce_model_json2 == design_document_views_map_reduce_model_json
class TestModel_DocsResultRow():
"""
Test Class for DocsResultRow
"""
def test_docs_result_row_serialization(self):
"""
Test serialization/deserialization for DocsResultRow
"""
# Construct dict forms of any model objects needed in order to build this model.
attachment_model = {} # Attachment
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
revisions_model = {} # Revisions
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
document_revision_status_model = {} # DocumentRevisionStatus
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
document_model = {} # Document
document_model['_attachments'] = {}
document_model['_conflicts'] = ['testString']
document_model['_deleted'] = True
document_model['_deleted_conflicts'] = ['testString']
document_model['_id'] = 'testString'
document_model['_local_seq'] = 'testString'
document_model['_rev'] = 'testString'
document_model['_revisions'] = revisions_model
document_model['_revs_info'] = [document_revision_status_model]
document_model['foo'] = 'testString'
docs_result_row_value_model = {} # DocsResultRowValue
docs_result_row_value_model['rev'] = 'testString'
# Construct a json representation of a DocsResultRow model
docs_result_row_model_json = {}
docs_result_row_model_json['caused_by'] = 'testString'
docs_result_row_model_json['error'] = 'testString'
docs_result_row_model_json['reason'] = 'testString'
docs_result_row_model_json['doc'] = document_model
docs_result_row_model_json['id'] = 'testString'
docs_result_row_model_json['key'] = 'testString'
docs_result_row_model_json['value'] = docs_result_row_value_model
# Construct a model instance of DocsResultRow by calling from_dict on the json representation
docs_result_row_model = DocsResultRow.from_dict(docs_result_row_model_json)
assert docs_result_row_model != False
        # Construct a second model instance of DocsResultRow from the dict form of the first
docs_result_row_model_dict = DocsResultRow.from_dict(docs_result_row_model_json).__dict__
docs_result_row_model2 = DocsResultRow(**docs_result_row_model_dict)
# Verify the model instances are equivalent
assert docs_result_row_model == docs_result_row_model2
# Convert model instance back to dict and verify no loss of data
docs_result_row_model_json2 = docs_result_row_model.to_dict()
assert docs_result_row_model_json2 == docs_result_row_model_json
class TestModel_DocsResultRowValue():
"""
Test Class for DocsResultRowValue
"""
def test_docs_result_row_value_serialization(self):
"""
Test serialization/deserialization for DocsResultRowValue
"""
# Construct a json representation of a DocsResultRowValue model
docs_result_row_value_model_json = {}
docs_result_row_value_model_json['rev'] = 'testString'
# Construct a model instance of DocsResultRowValue by calling from_dict on the json representation
docs_result_row_value_model = DocsResultRowValue.from_dict(docs_result_row_value_model_json)
assert docs_result_row_value_model != False
        # Construct a second model instance of DocsResultRowValue from the dict form of the first
docs_result_row_value_model_dict = DocsResultRowValue.from_dict(docs_result_row_value_model_json).__dict__
docs_result_row_value_model2 = DocsResultRowValue(**docs_result_row_value_model_dict)
# Verify the model instances are equivalent
assert docs_result_row_value_model == docs_result_row_value_model2
# Convert model instance back to dict and verify no loss of data
docs_result_row_value_model_json2 = docs_result_row_value_model.to_dict()
assert docs_result_row_value_model_json2 == docs_result_row_value_model_json
class TestModel_Document():
"""
Test Class for Document
"""
def test_document_serialization(self):
"""
Test serialization/deserialization for Document
"""
# Construct dict forms of any model objects needed in order to build this model.
attachment_model = {} # Attachment
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
revisions_model = {} # Revisions
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
document_revision_status_model = {} # DocumentRevisionStatus
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
# Construct a json representation of a Document model
document_model_json = {}
document_model_json['_attachments'] = {}
document_model_json['_conflicts'] = ['testString']
document_model_json['_deleted'] = True
document_model_json['_deleted_conflicts'] = ['testString']
document_model_json['_id'] = 'testString'
document_model_json['_local_seq'] = 'testString'
document_model_json['_rev'] = 'testString'
document_model_json['_revisions'] = revisions_model
document_model_json['_revs_info'] = [document_revision_status_model]
document_model_json['foo'] = 'testString'
# Construct a model instance of Document by calling from_dict on the json representation
document_model = Document.from_dict(document_model_json)
assert document_model != False
        # Construct a second model instance of Document from the dict form of the first
document_model_dict = Document.from_dict(document_model_json).__dict__
document_model2 = Document(**document_model_dict)
# Verify the model instances are equivalent
assert document_model == document_model2
# Convert model instance back to dict and verify no loss of data
document_model_json2 = document_model.to_dict()
assert document_model_json2 == document_model_json
# Test get_properties and set_properties methods.
document_model.set_properties({})
actual_dict = document_model.get_properties()
assert actual_dict == {}
expected_dict = {'foo': 'testString'}
document_model.set_properties(expected_dict)
actual_dict = document_model.get_properties()
assert actual_dict == expected_dict
class TestModel_DocumentResult():
"""
Test Class for DocumentResult
"""
def test_document_result_serialization(self):
"""
Test serialization/deserialization for DocumentResult
"""
# Construct a json representation of a DocumentResult model
document_result_model_json = {}
document_result_model_json['id'] = 'testString'
document_result_model_json['rev'] = 'testString'
document_result_model_json['ok'] = True
document_result_model_json['caused_by'] = 'testString'
document_result_model_json['error'] = 'testString'
document_result_model_json['reason'] = 'testString'
# Construct a model instance of DocumentResult by calling from_dict on the json representation
document_result_model = DocumentResult.from_dict(document_result_model_json)
assert document_result_model != False
        # Construct a second model instance of DocumentResult from the dict form of the first
document_result_model_dict = DocumentResult.from_dict(document_result_model_json).__dict__
document_result_model2 = DocumentResult(**document_result_model_dict)
# Verify the model instances are equivalent
assert document_result_model == document_result_model2
# Convert model instance back to dict and verify no loss of data
document_result_model_json2 = document_result_model.to_dict()
assert document_result_model_json2 == document_result_model_json
class TestModel_DocumentRevisionStatus():
"""
Test Class for DocumentRevisionStatus
"""
def test_document_revision_status_serialization(self):
"""
Test serialization/deserialization for DocumentRevisionStatus
"""
# Construct a json representation of a DocumentRevisionStatus model
document_revision_status_model_json = {}
document_revision_status_model_json['rev'] = 'testString'
document_revision_status_model_json['status'] = 'available'
# Construct a model instance of DocumentRevisionStatus by calling from_dict on the json representation
document_revision_status_model = DocumentRevisionStatus.from_dict(document_revision_status_model_json)
assert document_revision_status_model != False
        # Construct a second model instance of DocumentRevisionStatus from the dict form of the first
document_revision_status_model_dict = DocumentRevisionStatus.from_dict(document_revision_status_model_json).__dict__
document_revision_status_model2 = DocumentRevisionStatus(**document_revision_status_model_dict)
# Verify the model instances are equivalent
assert document_revision_status_model == document_revision_status_model2
# Convert model instance back to dict and verify no loss of data
document_revision_status_model_json2 = document_revision_status_model.to_dict()
assert document_revision_status_model_json2 == document_revision_status_model_json
class TestModel_DocumentShardInfo():
"""
Test Class for DocumentShardInfo
"""
def test_document_shard_info_serialization(self):
"""
Test serialization/deserialization for DocumentShardInfo
"""
# Construct a json representation of a DocumentShardInfo model
document_shard_info_model_json = {}
document_shard_info_model_json['nodes'] = ['testString']
document_shard_info_model_json['range'] = 'testString'
# Construct a model instance of DocumentShardInfo by calling from_dict on the json representation
document_shard_info_model = DocumentShardInfo.from_dict(document_shard_info_model_json)
assert document_shard_info_model != False
        # Construct a second model instance of DocumentShardInfo from the dict form of the first
document_shard_info_model_dict = DocumentShardInfo.from_dict(document_shard_info_model_json).__dict__
document_shard_info_model2 = DocumentShardInfo(**document_shard_info_model_dict)
# Verify the model instances are equivalent
assert document_shard_info_model == document_shard_info_model2
# Convert model instance back to dict and verify no loss of data
document_shard_info_model_json2 = document_shard_info_model.to_dict()
assert document_shard_info_model_json2 == document_shard_info_model_json
class TestModel_ExecutionStats():
"""
Test Class for ExecutionStats
"""
def test_execution_stats_serialization(self):
"""
Test serialization/deserialization for ExecutionStats
"""
        # Construct a json representation of an ExecutionStats model
execution_stats_model_json = {}
execution_stats_model_json['execution_time_ms'] = 72.5
execution_stats_model_json['results_returned'] = 0
execution_stats_model_json['total_docs_examined'] = 0
execution_stats_model_json['total_keys_examined'] = 0
execution_stats_model_json['total_quorum_docs_examined'] = 0
# Construct a model instance of ExecutionStats by calling from_dict on the json representation
execution_stats_model = ExecutionStats.from_dict(execution_stats_model_json)
assert execution_stats_model != False
        # Construct a second model instance of ExecutionStats from the dict form of the first
execution_stats_model_dict = ExecutionStats.from_dict(execution_stats_model_json).__dict__
execution_stats_model2 = ExecutionStats(**execution_stats_model_dict)
# Verify the model instances are equivalent
assert execution_stats_model == execution_stats_model2
# Convert model instance back to dict and verify no loss of data
execution_stats_model_json2 = execution_stats_model.to_dict()
assert execution_stats_model_json2 == execution_stats_model_json
class TestModel_ExplainResult():
"""
Test Class for ExplainResult
"""
def test_explain_result_serialization(self):
"""
Test serialization/deserialization for ExplainResult
"""
# Construct dict forms of any model objects needed in order to build this model.
analyzer_model = {} # Analyzer
analyzer_model['name'] = 'classic'
analyzer_model['stopwords'] = ['testString']
index_text_operator_default_field_model = {} # IndexTextOperatorDefaultField
index_text_operator_default_field_model['analyzer'] = analyzer_model
index_text_operator_default_field_model['enabled'] = True
index_field_model = {} # IndexField
index_field_model['name'] = 'testString'
index_field_model['type'] = 'boolean'
index_field_model['foo'] = 'asc'
index_definition_model = {} # IndexDefinition
index_definition_model['default_analyzer'] = analyzer_model
index_definition_model['default_field'] = index_text_operator_default_field_model
index_definition_model['fields'] = [index_field_model]
index_definition_model['index_array_lengths'] = True
index_definition_model['partial_filter_selector'] = {}
index_information_model = {} # IndexInformation
index_information_model['ddoc'] = 'testString'
index_information_model['def'] = index_definition_model
index_information_model['name'] = 'testString'
index_information_model['type'] = 'json'
explain_result_range_model = {} # ExplainResultRange
explain_result_range_model['end_key'] = ['testString']
explain_result_range_model['start_key'] = ['testString']
        # Construct a json representation of an ExplainResult model
explain_result_model_json = {}
explain_result_model_json['dbname'] = 'testString'
explain_result_model_json['fields'] = ['testString']
explain_result_model_json['index'] = index_information_model
explain_result_model_json['limit'] = 0
explain_result_model_json['opts'] = {}
explain_result_model_json['range'] = explain_result_range_model
explain_result_model_json['selector'] = {}
explain_result_model_json['skip'] = 0
# Construct a model instance of ExplainResult by calling from_dict on the json representation
explain_result_model = ExplainResult.from_dict(explain_result_model_json)
assert explain_result_model != False
        # Construct a second model instance of ExplainResult from the dict form of the first
explain_result_model_dict = ExplainResult.from_dict(explain_result_model_json).__dict__
explain_result_model2 = ExplainResult(**explain_result_model_dict)
# Verify the model instances are equivalent
assert explain_result_model == explain_result_model2
# Convert model instance back to dict and verify no loss of data
explain_result_model_json2 = explain_result_model.to_dict()
assert explain_result_model_json2 == explain_result_model_json
class TestModel_ExplainResultRange():
"""
Test Class for ExplainResultRange
"""
def test_explain_result_range_serialization(self):
"""
Test serialization/deserialization for ExplainResultRange
"""
# Construct a json representation of an ExplainResultRange model
explain_result_range_model_json = {}
explain_result_range_model_json['end_key'] = ['testString']
explain_result_range_model_json['start_key'] = ['testString']
# Construct a model instance of ExplainResultRange by calling from_dict on the json representation
explain_result_range_model = ExplainResultRange.from_dict(explain_result_range_model_json)
assert explain_result_range_model != False
# Construct a second instance of ExplainResultRange by passing the first model's dict form to the constructor
explain_result_range_model_dict = ExplainResultRange.from_dict(explain_result_range_model_json).__dict__
explain_result_range_model2 = ExplainResultRange(**explain_result_range_model_dict)
# Verify the model instances are equivalent
assert explain_result_range_model == explain_result_range_model2
# Convert model instance back to dict and verify no loss of data
explain_result_range_model_json2 = explain_result_range_model.to_dict()
assert explain_result_range_model_json2 == explain_result_range_model_json
class TestModel_FindResult():
"""
Test Class for FindResult
"""
def test_find_result_serialization(self):
"""
Test serialization/deserialization for FindResult
"""
# Construct dict forms of any model objects needed in order to build this model.
attachment_model = {} # Attachment
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
revisions_model = {} # Revisions
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
document_revision_status_model = {} # DocumentRevisionStatus
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
document_model = {} # Document
document_model['_attachments'] = {}
document_model['_conflicts'] = ['testString']
document_model['_deleted'] = True
document_model['_deleted_conflicts'] = ['testString']
document_model['_id'] = 'testString'
document_model['_local_seq'] = 'testString'
document_model['_rev'] = 'testString'
document_model['_revisions'] = revisions_model
document_model['_revs_info'] = [document_revision_status_model]
document_model['foo'] = 'testString'
execution_stats_model = {} # ExecutionStats
execution_stats_model['execution_time_ms'] = 72.5
execution_stats_model['results_returned'] = 0
execution_stats_model['total_docs_examined'] = 0
execution_stats_model['total_keys_examined'] = 0
execution_stats_model['total_quorum_docs_examined'] = 0
# Construct a json representation of a FindResult model
find_result_model_json = {}
find_result_model_json['bookmark'] = 'testString'
find_result_model_json['docs'] = [document_model]
find_result_model_json['execution_stats'] = execution_stats_model
find_result_model_json['warning'] = 'testString'
# Construct a model instance of FindResult by calling from_dict on the json representation
find_result_model = FindResult.from_dict(find_result_model_json)
assert find_result_model != False
# Construct a second instance of FindResult by passing the first model's dict form to the constructor
find_result_model_dict = FindResult.from_dict(find_result_model_json).__dict__
find_result_model2 = FindResult(**find_result_model_dict)
# Verify the model instances are equivalent
assert find_result_model == find_result_model2
# Convert model instance back to dict and verify no loss of data
find_result_model_json2 = find_result_model.to_dict()
assert find_result_model_json2 == find_result_model_json
class TestModel_GeoIndexDefinition():
"""
Test Class for GeoIndexDefinition
"""
def test_geo_index_definition_serialization(self):
"""
Test serialization/deserialization for GeoIndexDefinition
"""
# Construct a json representation of a GeoIndexDefinition model
geo_index_definition_model_json = {}
geo_index_definition_model_json['index'] = 'testString'
# Construct a model instance of GeoIndexDefinition by calling from_dict on the json representation
geo_index_definition_model = GeoIndexDefinition.from_dict(geo_index_definition_model_json)
assert geo_index_definition_model != False
# Construct a second instance of GeoIndexDefinition by passing the first model's dict form to the constructor
geo_index_definition_model_dict = GeoIndexDefinition.from_dict(geo_index_definition_model_json).__dict__
geo_index_definition_model2 = GeoIndexDefinition(**geo_index_definition_model_dict)
# Verify the model instances are equivalent
assert geo_index_definition_model == geo_index_definition_model2
# Convert model instance back to dict and verify no loss of data
geo_index_definition_model_json2 = geo_index_definition_model.to_dict()
assert geo_index_definition_model_json2 == geo_index_definition_model_json
class TestModel_GeoIndexInformation():
"""
Test Class for GeoIndexInformation
"""
def test_geo_index_information_serialization(self):
"""
Test serialization/deserialization for GeoIndexInformation
"""
# Construct dict forms of any model objects needed in order to build this model.
geo_index_stats_model = {} # GeoIndexStats
geo_index_stats_model['data_size'] = 0
geo_index_stats_model['disk_size'] = 0
geo_index_stats_model['doc_count'] = 0
# Construct a json representation of a GeoIndexInformation model
geo_index_information_model_json = {}
geo_index_information_model_json['geo_index'] = geo_index_stats_model
geo_index_information_model_json['name'] = 'testString'
# Construct a model instance of GeoIndexInformation by calling from_dict on the json representation
geo_index_information_model = GeoIndexInformation.from_dict(geo_index_information_model_json)
assert geo_index_information_model != False
# Construct a second instance of GeoIndexInformation by passing the first model's dict form to the constructor
geo_index_information_model_dict = GeoIndexInformation.from_dict(geo_index_information_model_json).__dict__
geo_index_information_model2 = GeoIndexInformation(**geo_index_information_model_dict)
# Verify the model instances are equivalent
assert geo_index_information_model == geo_index_information_model2
# Convert model instance back to dict and verify no loss of data
geo_index_information_model_json2 = geo_index_information_model.to_dict()
assert geo_index_information_model_json2 == geo_index_information_model_json
class TestModel_GeoIndexStats():
"""
Test Class for GeoIndexStats
"""
def test_geo_index_stats_serialization(self):
"""
Test serialization/deserialization for GeoIndexStats
"""
# Construct a json representation of a GeoIndexStats model
geo_index_stats_model_json = {}
geo_index_stats_model_json['data_size'] = 0
geo_index_stats_model_json['disk_size'] = 0
geo_index_stats_model_json['doc_count'] = 0
# Construct a model instance of GeoIndexStats by calling from_dict on the json representation
geo_index_stats_model = GeoIndexStats.from_dict(geo_index_stats_model_json)
assert geo_index_stats_model != False
# Construct a second instance of GeoIndexStats by passing the first model's dict form to the constructor
geo_index_stats_model_dict = GeoIndexStats.from_dict(geo_index_stats_model_json).__dict__
geo_index_stats_model2 = GeoIndexStats(**geo_index_stats_model_dict)
# Verify the model instances are equivalent
assert geo_index_stats_model == geo_index_stats_model2
# Convert model instance back to dict and verify no loss of data
geo_index_stats_model_json2 = geo_index_stats_model.to_dict()
assert geo_index_stats_model_json2 == geo_index_stats_model_json
class TestModel_GeoJsonFeature():
"""
Test Class for GeoJsonFeature
"""
def test_geo_json_feature_serialization(self):
"""
Test serialization/deserialization for GeoJsonFeature
"""
# Construct dict forms of any model objects needed in order to build this model.
geo_json_geometry_object_model = {} # GeoJsonGeometry
geo_json_geometry_object_model['type'] = 'Point'
geo_json_geometry_object_model['coordinates'] = ['testString']
# Construct a json representation of a GeoJsonFeature model
geo_json_feature_model_json = {}
geo_json_feature_model_json['_id'] = 'testString'
geo_json_feature_model_json['_rev'] = 'testString'
geo_json_feature_model_json['bbox'] = [72.5]
geo_json_feature_model_json['geometry'] = geo_json_geometry_object_model
geo_json_feature_model_json['properties'] = {}
geo_json_feature_model_json['type'] = 'Feature'
geo_json_feature_model_json['foo'] = 'testString'
# Construct a model instance of GeoJsonFeature by calling from_dict on the json representation
geo_json_feature_model = GeoJsonFeature.from_dict(geo_json_feature_model_json)
assert geo_json_feature_model != False
# Construct a second instance of GeoJsonFeature by passing the first model's dict form to the constructor
geo_json_feature_model_dict = GeoJsonFeature.from_dict(geo_json_feature_model_json).__dict__
geo_json_feature_model2 = GeoJsonFeature(**geo_json_feature_model_dict)
# Verify the model instances are equivalent
assert geo_json_feature_model == geo_json_feature_model2
# Convert model instance back to dict and verify no loss of data
geo_json_feature_model_json2 = geo_json_feature_model.to_dict()
assert geo_json_feature_model_json2 == geo_json_feature_model_json
# Test get_properties and set_properties methods.
geo_json_feature_model.set_properties({})
actual_dict = geo_json_feature_model.get_properties()
assert actual_dict == {}
expected_dict = {'foo': 'testString'}
geo_json_feature_model.set_properties(expected_dict)
actual_dict = geo_json_feature_model.get_properties()
assert actual_dict == expected_dict
class TestModel_GeoResult():
"""
Test Class for GeoResult
"""
def test_geo_result_serialization(self):
"""
Test serialization/deserialization for GeoResult
"""
# Construct dict forms of any model objects needed in order to build this model.
geo_json_geometry_object_model = {} # GeoJsonGeometry
geo_json_geometry_object_model['type'] = 'Point'
geo_json_geometry_object_model['coordinates'] = ['testString']
geo_json_feature_model = {} # GeoJsonFeature
geo_json_feature_model['_id'] = 'testString'
geo_json_feature_model['_rev'] = 'testString'
geo_json_feature_model['bbox'] = [72.5]
geo_json_feature_model['geometry'] = geo_json_geometry_object_model
geo_json_feature_model['properties'] = {}
geo_json_feature_model['type'] = 'Feature'
geo_json_feature_model['foo'] = 'testString'
attachment_model = {} # Attachment
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
revisions_model = {} # Revisions
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
document_revision_status_model = {} # DocumentRevisionStatus
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
document_model = {} # Document
document_model['_attachments'] = {}
document_model['_conflicts'] = ['testString']
document_model['_deleted'] = True
document_model['_deleted_conflicts'] = ['testString']
document_model['_id'] = 'testString'
document_model['_local_seq'] = 'testString'
document_model['_rev'] = 'testString'
document_model['_revisions'] = revisions_model
document_model['_revs_info'] = [document_revision_status_model]
document_model['foo'] = 'testString'
geo_json_geometry_model = {} # GeoJsonGeometry
geo_json_geometry_model['type'] = 'Point'
geo_json_geometry_model['coordinates'] = ['testString']
geo_result_row_model = {} # GeoResultRow
geo_result_row_model['doc'] = document_model
geo_result_row_model['geometry'] = geo_json_geometry_model
geo_result_row_model['id'] = 'testString'
geo_result_row_model['rev'] = 'testString'
# Construct a json representation of a GeoResult model
geo_result_model_json = {}
geo_result_model_json['bookmark'] = 'testString'
geo_result_model_json['features'] = [geo_json_feature_model]
geo_result_model_json['rows'] = [geo_result_row_model]
geo_result_model_json['type'] = 'FeatureCollection'
# Construct a model instance of GeoResult by calling from_dict on the json representation
geo_result_model = GeoResult.from_dict(geo_result_model_json)
assert geo_result_model != False
# Construct a second instance of GeoResult by passing the first model's dict form to the constructor
geo_result_model_dict = GeoResult.from_dict(geo_result_model_json).__dict__
geo_result_model2 = GeoResult(**geo_result_model_dict)
# Verify the model instances are equivalent
assert geo_result_model == geo_result_model2
# Convert model instance back to dict and verify no loss of data
geo_result_model_json2 = geo_result_model.to_dict()
assert geo_result_model_json2 == geo_result_model_json
class TestModel_GeoResultRow():
"""
Test Class for GeoResultRow
"""
def test_geo_result_row_serialization(self):
"""
Test serialization/deserialization for GeoResultRow
"""
# Construct dict forms of any model objects needed in order to build this model.
attachment_model = {} # Attachment
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
revisions_model = {} # Revisions
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
document_revision_status_model = {} # DocumentRevisionStatus
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
document_model = {} # Document
document_model['_attachments'] = {}
document_model['_conflicts'] = ['testString']
document_model['_deleted'] = True
document_model['_deleted_conflicts'] = ['testString']
document_model['_id'] = 'testString'
document_model['_local_seq'] = 'testString'
document_model['_rev'] = 'testString'
document_model['_revisions'] = revisions_model
document_model['_revs_info'] = [document_revision_status_model]
document_model['foo'] = 'testString'
geo_json_geometry_model = {} # GeoJsonGeometry
geo_json_geometry_model['type'] = 'Point'
geo_json_geometry_model['coordinates'] = ['testString']
# Construct a json representation of a GeoResultRow model
geo_result_row_model_json = {}
geo_result_row_model_json['doc'] = document_model
geo_result_row_model_json['geometry'] = geo_json_geometry_model
geo_result_row_model_json['id'] = 'testString'
geo_result_row_model_json['rev'] = 'testString'
# Construct a model instance of GeoResultRow by calling from_dict on the json representation
geo_result_row_model = GeoResultRow.from_dict(geo_result_row_model_json)
assert geo_result_row_model != False
# Construct a second instance of GeoResultRow by passing the first model's dict form to the constructor
geo_result_row_model_dict = GeoResultRow.from_dict(geo_result_row_model_json).__dict__
geo_result_row_model2 = GeoResultRow(**geo_result_row_model_dict)
# Verify the model instances are equivalent
assert geo_result_row_model == geo_result_row_model2
# Convert model instance back to dict and verify no loss of data
geo_result_row_model_json2 = geo_result_row_model.to_dict()
assert geo_result_row_model_json2 == geo_result_row_model_json
class TestModel_IndexDefinition():
"""
Test Class for IndexDefinition
"""
def test_index_definition_serialization(self):
"""
Test serialization/deserialization for IndexDefinition
"""
# Construct dict forms of any model objects needed in order to build this model.
analyzer_model = {} # Analyzer
analyzer_model['name'] = 'classic'
analyzer_model['stopwords'] = ['testString']
index_text_operator_default_field_model = {} # IndexTextOperatorDefaultField
index_text_operator_default_field_model['analyzer'] = analyzer_model
index_text_operator_default_field_model['enabled'] = True
index_field_model = {} # IndexField
index_field_model['name'] = 'testString'
index_field_model['type'] = 'boolean'
index_field_model['foo'] = 'asc'
# Construct a json representation of an IndexDefinition model
index_definition_model_json = {}
index_definition_model_json['default_analyzer'] = analyzer_model
index_definition_model_json['default_field'] = index_text_operator_default_field_model
index_definition_model_json['fields'] = [index_field_model]
index_definition_model_json['index_array_lengths'] = True
index_definition_model_json['partial_filter_selector'] = {}
# Construct a model instance of IndexDefinition by calling from_dict on the json representation
index_definition_model = IndexDefinition.from_dict(index_definition_model_json)
assert index_definition_model != False
# Construct a second instance of IndexDefinition by passing the first model's dict form to the constructor
index_definition_model_dict = IndexDefinition.from_dict(index_definition_model_json).__dict__
index_definition_model2 = IndexDefinition(**index_definition_model_dict)
# Verify the model instances are equivalent
assert index_definition_model == index_definition_model2
# Convert model instance back to dict and verify no loss of data
index_definition_model_json2 = index_definition_model.to_dict()
assert index_definition_model_json2 == index_definition_model_json
class TestModel_IndexField():
"""
Test Class for IndexField
"""
def test_index_field_serialization(self):
"""
Test serialization/deserialization for IndexField
"""
# Construct a json representation of an IndexField model
index_field_model_json = {}
index_field_model_json['name'] = 'testString'
index_field_model_json['type'] = 'boolean'
index_field_model_json['foo'] = 'asc'
# Construct a model instance of IndexField by calling from_dict on the json representation
index_field_model = IndexField.from_dict(index_field_model_json)
assert index_field_model != False
# Construct a second instance of IndexField by passing the first model's dict form to the constructor
index_field_model_dict = IndexField.from_dict(index_field_model_json).__dict__
index_field_model2 = IndexField(**index_field_model_dict)
# Verify the model instances are equivalent
assert index_field_model == index_field_model2
# Convert model instance back to dict and verify no loss of data
index_field_model_json2 = index_field_model.to_dict()
assert index_field_model_json2 == index_field_model_json
# Test get_properties and set_properties methods.
index_field_model.set_properties({})
actual_dict = index_field_model.get_properties()
assert actual_dict == {}
expected_dict = {'foo': 'asc'}
index_field_model.set_properties(expected_dict)
actual_dict = index_field_model.get_properties()
assert actual_dict == expected_dict
class TestModel_IndexInformation():
"""
Test Class for IndexInformation
"""
def test_index_information_serialization(self):
"""
Test serialization/deserialization for IndexInformation
"""
# Construct dict forms of any model objects needed in order to build this model.
analyzer_model = {} # Analyzer
analyzer_model['name'] = 'classic'
analyzer_model['stopwords'] = ['testString']
index_text_operator_default_field_model = {} # IndexTextOperatorDefaultField
index_text_operator_default_field_model['analyzer'] = analyzer_model
index_text_operator_default_field_model['enabled'] = True
index_field_model = {} # IndexField
index_field_model['name'] = 'testString'
index_field_model['type'] = 'boolean'
index_field_model['foo'] = 'asc'
index_definition_model = {} # IndexDefinition
index_definition_model['default_analyzer'] = analyzer_model
index_definition_model['default_field'] = index_text_operator_default_field_model
index_definition_model['fields'] = [index_field_model]
index_definition_model['index_array_lengths'] = True
index_definition_model['partial_filter_selector'] = {}
# Construct a json representation of an IndexInformation model
index_information_model_json = {}
index_information_model_json['ddoc'] = 'testString'
index_information_model_json['def'] = index_definition_model
index_information_model_json['name'] = 'testString'
index_information_model_json['type'] = 'json'
# Construct a model instance of IndexInformation by calling from_dict on the json representation
index_information_model = IndexInformation.from_dict(index_information_model_json)
assert index_information_model != False
# Construct a second instance of IndexInformation by passing the first model's dict form to the constructor
index_information_model_dict = IndexInformation.from_dict(index_information_model_json).__dict__
index_information_model2 = IndexInformation(**index_information_model_dict)
# Verify the model instances are equivalent
assert index_information_model == index_information_model2
# Convert model instance back to dict and verify no loss of data
index_information_model_json2 = index_information_model.to_dict()
assert index_information_model_json2 == index_information_model_json
class TestModel_IndexResult():
"""
Test Class for IndexResult
"""
def test_index_result_serialization(self):
"""
Test serialization/deserialization for IndexResult
"""
# Construct a json representation of an IndexResult model
index_result_model_json = {}
index_result_model_json['id'] = 'testString'
index_result_model_json['name'] = 'testString'
index_result_model_json['result'] = 'created'
# Construct a model instance of IndexResult by calling from_dict on the json representation
index_result_model = IndexResult.from_dict(index_result_model_json)
assert index_result_model != False
# Construct a second instance of IndexResult by passing the first model's dict form to the constructor
index_result_model_dict = IndexResult.from_dict(index_result_model_json).__dict__
index_result_model2 = IndexResult(**index_result_model_dict)
# Verify the model instances are equivalent
assert index_result_model == index_result_model2
# Convert model instance back to dict and verify no loss of data
index_result_model_json2 = index_result_model.to_dict()
assert index_result_model_json2 == index_result_model_json
class TestModel_IndexTextOperatorDefaultField():
"""
Test Class for IndexTextOperatorDefaultField
"""
def test_index_text_operator_default_field_serialization(self):
"""
Test serialization/deserialization for IndexTextOperatorDefaultField
"""
# Construct dict forms of any model objects needed in order to build this model.
analyzer_model = {} # Analyzer
analyzer_model['name'] = 'classic'
analyzer_model['stopwords'] = ['testString']
# Construct a json representation of an IndexTextOperatorDefaultField model
index_text_operator_default_field_model_json = {}
index_text_operator_default_field_model_json['analyzer'] = analyzer_model
index_text_operator_default_field_model_json['enabled'] = True
# Construct a model instance of IndexTextOperatorDefaultField by calling from_dict on the json representation
index_text_operator_default_field_model = IndexTextOperatorDefaultField.from_dict(index_text_operator_default_field_model_json)
assert index_text_operator_default_field_model != False
# Construct a second instance of IndexTextOperatorDefaultField by passing the first model's dict form to the constructor
index_text_operator_default_field_model_dict = IndexTextOperatorDefaultField.from_dict(index_text_operator_default_field_model_json).__dict__
index_text_operator_default_field_model2 = IndexTextOperatorDefaultField(**index_text_operator_default_field_model_dict)
# Verify the model instances are equivalent
assert index_text_operator_default_field_model == index_text_operator_default_field_model2
# Convert model instance back to dict and verify no loss of data
index_text_operator_default_field_model_json2 = index_text_operator_default_field_model.to_dict()
assert index_text_operator_default_field_model_json2 == index_text_operator_default_field_model_json
class TestModel_IndexesInformation():
"""
Test Class for IndexesInformation
"""
def test_indexes_information_serialization(self):
"""
Test serialization/deserialization for IndexesInformation
"""
# Construct dict forms of any model objects needed in order to build this model.
analyzer_model = {} # Analyzer
analyzer_model['name'] = 'classic'
analyzer_model['stopwords'] = ['testString']
index_text_operator_default_field_model = {} # IndexTextOperatorDefaultField
index_text_operator_default_field_model['analyzer'] = analyzer_model
index_text_operator_default_field_model['enabled'] = True
index_field_model = {} # IndexField
index_field_model['name'] = 'testString'
index_field_model['type'] = 'boolean'
index_field_model['foo'] = 'asc'
index_definition_model = {} # IndexDefinition
index_definition_model['default_analyzer'] = analyzer_model
index_definition_model['default_field'] = index_text_operator_default_field_model
index_definition_model['fields'] = [index_field_model]
index_definition_model['index_array_lengths'] = True
index_definition_model['partial_filter_selector'] = {}
index_information_model = {} # IndexInformation
index_information_model['ddoc'] = 'testString'
index_information_model['def'] = index_definition_model
index_information_model['name'] = 'testString'
index_information_model['type'] = 'json'
# Construct a json representation of an IndexesInformation model
indexes_information_model_json = {}
indexes_information_model_json['total_rows'] = 0
indexes_information_model_json['indexes'] = [index_information_model]
# Construct a model instance of IndexesInformation by calling from_dict on the json representation
indexes_information_model = IndexesInformation.from_dict(indexes_information_model_json)
assert indexes_information_model != False
# Construct a second instance of IndexesInformation by passing the first model's dict form to the constructor
indexes_information_model_dict = IndexesInformation.from_dict(indexes_information_model_json).__dict__
indexes_information_model2 = IndexesInformation(**indexes_information_model_dict)
# Verify the model instances are equivalent
assert indexes_information_model == indexes_information_model2
# Convert model instance back to dict and verify no loss of data
indexes_information_model_json2 = indexes_information_model.to_dict()
assert indexes_information_model_json2 == indexes_information_model_json
class TestModel_MembershipInformation():
"""
Test Class for MembershipInformation
"""
def test_membership_information_serialization(self):
"""
Test serialization/deserialization for MembershipInformation
"""
# Construct a json representation of a MembershipInformation model
membership_information_model_json = {}
membership_information_model_json['all_nodes'] = ['testString']
membership_information_model_json['cluster_nodes'] = ['testString']
# Construct a model instance of MembershipInformation by calling from_dict on the json representation
membership_information_model = MembershipInformation.from_dict(membership_information_model_json)
assert membership_information_model != False
# Construct a second instance of MembershipInformation by passing the first model's dict form to the constructor
membership_information_model_dict = MembershipInformation.from_dict(membership_information_model_json).__dict__
membership_information_model2 = MembershipInformation(**membership_information_model_dict)
# Verify the model instances are equivalent
assert membership_information_model == membership_information_model2
# Convert model instance back to dict and verify no loss of data
membership_information_model_json2 = membership_information_model.to_dict()
assert membership_information_model_json2 == membership_information_model_json
class TestModel_Ok():
"""
Test Class for Ok
"""
def test_ok_serialization(self):
"""
Test serialization/deserialization for Ok
"""
# Construct a json representation of an Ok model
ok_model_json = {}
ok_model_json['ok'] = True
# Construct a model instance of Ok by calling from_dict on the json representation
ok_model = Ok.from_dict(ok_model_json)
assert ok_model != False
# Construct a second instance of Ok by passing the first model's dict form to the constructor
ok_model_dict = Ok.from_dict(ok_model_json).__dict__
ok_model2 = Ok(**ok_model_dict)
# Verify the model instances are equivalent
assert ok_model == ok_model2
# Convert model instance back to dict and verify no loss of data
ok_model_json2 = ok_model.to_dict()
assert ok_model_json2 == ok_model_json
class TestModel_PartitionInformation():
"""
Test Class for PartitionInformation
"""
def test_partition_information_serialization(self):
"""
Test serialization/deserialization for PartitionInformation
"""
# Construct dict forms of any model objects needed in order to build this model.
partition_information_indexes_indexes_model = {} # PartitionInformationIndexesIndexes
partition_information_indexes_indexes_model['search'] = 0
partition_information_indexes_indexes_model['view'] = 0
partition_information_indexes_model = {} # PartitionInformationIndexes
partition_information_indexes_model['count'] = 0
partition_information_indexes_model['indexes'] = partition_information_indexes_indexes_model
partition_information_indexes_model['limit'] = 0
partition_information_sizes_model = {} # PartitionInformationSizes
partition_information_sizes_model['active'] = 0
partition_information_sizes_model['external'] = 0
# Construct a json representation of a PartitionInformation model
partition_information_model_json = {}
partition_information_model_json['db_name'] = 'testString'
partition_information_model_json['doc_count'] = 0
partition_information_model_json['doc_del_count'] = 0
partition_information_model_json['partition'] = 'testString'
partition_information_model_json['partitioned_indexes'] = partition_information_indexes_model
partition_information_model_json['sizes'] = partition_information_sizes_model
# Construct a model instance of PartitionInformation by calling from_dict on the json representation
partition_information_model = PartitionInformation.from_dict(partition_information_model_json)
assert partition_information_model != False
# Construct a second instance of PartitionInformation by passing the first model's dict form to the constructor
partition_information_model_dict = PartitionInformation.from_dict(partition_information_model_json).__dict__
partition_information_model2 = PartitionInformation(**partition_information_model_dict)
# Verify the model instances are equivalent
assert partition_information_model == partition_information_model2
# Convert model instance back to dict and verify no loss of data
partition_information_model_json2 = partition_information_model.to_dict()
assert partition_information_model_json2 == partition_information_model_json
class TestModel_PartitionInformationIndexes():
"""
Test Class for PartitionInformationIndexes
"""
def test_partition_information_indexes_serialization(self):
"""
Test serialization/deserialization for PartitionInformationIndexes
"""
# Construct dict forms of any model objects needed in order to build this model.
partition_information_indexes_indexes_model = {} # PartitionInformationIndexesIndexes
partition_information_indexes_indexes_model['search'] = 0
partition_information_indexes_indexes_model['view'] = 0
# Construct a json representation of a PartitionInformationIndexes model
partition_information_indexes_model_json = {}
partition_information_indexes_model_json['count'] = 0
partition_information_indexes_model_json['indexes'] = partition_information_indexes_indexes_model
partition_information_indexes_model_json['limit'] = 0
# Construct a model instance of PartitionInformationIndexes by calling from_dict on the json representation
partition_information_indexes_model = PartitionInformationIndexes.from_dict(partition_information_indexes_model_json)
assert partition_information_indexes_model is not None
# Construct a second model instance from the dict form of the first
partition_information_indexes_model_dict = PartitionInformationIndexes.from_dict(partition_information_indexes_model_json).__dict__
partition_information_indexes_model2 = PartitionInformationIndexes(**partition_information_indexes_model_dict)
# Verify the model instances are equivalent
assert partition_information_indexes_model == partition_information_indexes_model2
# Convert model instance back to dict and verify no loss of data
partition_information_indexes_model_json2 = partition_information_indexes_model.to_dict()
assert partition_information_indexes_model_json2 == partition_information_indexes_model_json


class TestModel_PartitionInformationIndexesIndexes:
"""
Test Class for PartitionInformationIndexesIndexes
"""
def test_partition_information_indexes_indexes_serialization(self):
"""
Test serialization/deserialization for PartitionInformationIndexesIndexes
"""
# Construct a json representation of a PartitionInformationIndexesIndexes model
partition_information_indexes_indexes_model_json = {}
partition_information_indexes_indexes_model_json['search'] = 0
partition_information_indexes_indexes_model_json['view'] = 0
# Construct a model instance of PartitionInformationIndexesIndexes by calling from_dict on the json representation
partition_information_indexes_indexes_model = PartitionInformationIndexesIndexes.from_dict(partition_information_indexes_indexes_model_json)
assert partition_information_indexes_indexes_model is not None
# Construct a second model instance from the dict form of the first
partition_information_indexes_indexes_model_dict = PartitionInformationIndexesIndexes.from_dict(partition_information_indexes_indexes_model_json).__dict__
partition_information_indexes_indexes_model2 = PartitionInformationIndexesIndexes(**partition_information_indexes_indexes_model_dict)
# Verify the model instances are equivalent
assert partition_information_indexes_indexes_model == partition_information_indexes_indexes_model2
# Convert model instance back to dict and verify no loss of data
partition_information_indexes_indexes_model_json2 = partition_information_indexes_indexes_model.to_dict()
assert partition_information_indexes_indexes_model_json2 == partition_information_indexes_indexes_model_json


class TestModel_PartitionInformationSizes:
"""
Test Class for PartitionInformationSizes
"""
def test_partition_information_sizes_serialization(self):
"""
Test serialization/deserialization for PartitionInformationSizes
"""
# Construct a json representation of a PartitionInformationSizes model
partition_information_sizes_model_json = {}
partition_information_sizes_model_json['active'] = 0
partition_information_sizes_model_json['external'] = 0
# Construct a model instance of PartitionInformationSizes by calling from_dict on the json representation
partition_information_sizes_model = PartitionInformationSizes.from_dict(partition_information_sizes_model_json)
assert partition_information_sizes_model is not None
# Construct a second model instance from the dict form of the first
partition_information_sizes_model_dict = PartitionInformationSizes.from_dict(partition_information_sizes_model_json).__dict__
partition_information_sizes_model2 = PartitionInformationSizes(**partition_information_sizes_model_dict)
# Verify the model instances are equivalent
assert partition_information_sizes_model == partition_information_sizes_model2
# Convert model instance back to dict and verify no loss of data
partition_information_sizes_model_json2 = partition_information_sizes_model.to_dict()
assert partition_information_sizes_model_json2 == partition_information_sizes_model_json


class TestModel_ReplicationCreateTargetParameters:
"""
Test Class for ReplicationCreateTargetParameters
"""
def test_replication_create_target_parameters_serialization(self):
"""
Test serialization/deserialization for ReplicationCreateTargetParameters
"""
# Construct a json representation of a ReplicationCreateTargetParameters model
replication_create_target_parameters_model_json = {}
replication_create_target_parameters_model_json['n'] = 1
replication_create_target_parameters_model_json['partitioned'] = False
replication_create_target_parameters_model_json['q'] = 1
# Construct a model instance of ReplicationCreateTargetParameters by calling from_dict on the json representation
replication_create_target_parameters_model = ReplicationCreateTargetParameters.from_dict(replication_create_target_parameters_model_json)
assert replication_create_target_parameters_model is not None
# Construct a second model instance from the dict form of the first
replication_create_target_parameters_model_dict = ReplicationCreateTargetParameters.from_dict(replication_create_target_parameters_model_json).__dict__
replication_create_target_parameters_model2 = ReplicationCreateTargetParameters(**replication_create_target_parameters_model_dict)
# Verify the model instances are equivalent
assert replication_create_target_parameters_model == replication_create_target_parameters_model2
# Convert model instance back to dict and verify no loss of data
replication_create_target_parameters_model_json2 = replication_create_target_parameters_model.to_dict()
assert replication_create_target_parameters_model_json2 == replication_create_target_parameters_model_json


class TestModel_ReplicationDatabase:
"""
Test Class for ReplicationDatabase
"""
def test_replication_database_serialization(self):
"""
Test serialization/deserialization for ReplicationDatabase
"""
# Construct dict forms of any model objects needed in order to build this model.
replication_database_auth_basic_model = {} # ReplicationDatabaseAuthBasic
replication_database_auth_basic_model['password'] = 'testString'
replication_database_auth_basic_model['username'] = 'testString'
replication_database_auth_iam_model = {} # ReplicationDatabaseAuthIam
replication_database_auth_iam_model['api_key'] = 'testString'
replication_database_auth_model = {} # ReplicationDatabaseAuth
replication_database_auth_model['basic'] = replication_database_auth_basic_model
replication_database_auth_model['iam'] = replication_database_auth_iam_model
# Construct a json representation of a ReplicationDatabase model
replication_database_model_json = {}
replication_database_model_json['auth'] = replication_database_auth_model
replication_database_model_json['headers'] = {}
replication_database_model_json['url'] = 'testString'
# Construct a model instance of ReplicationDatabase by calling from_dict on the json representation
replication_database_model = ReplicationDatabase.from_dict(replication_database_model_json)
assert replication_database_model is not None
# Construct a second model instance from the dict form of the first
replication_database_model_dict = ReplicationDatabase.from_dict(replication_database_model_json).__dict__
replication_database_model2 = ReplicationDatabase(**replication_database_model_dict)
# Verify the model instances are equivalent
assert replication_database_model == replication_database_model2
# Convert model instance back to dict and verify no loss of data
replication_database_model_json2 = replication_database_model.to_dict()
assert replication_database_model_json2 == replication_database_model_json


class TestModel_ReplicationDatabaseAuth:
"""
Test Class for ReplicationDatabaseAuth
"""
def test_replication_database_auth_serialization(self):
"""
Test serialization/deserialization for ReplicationDatabaseAuth
"""
# Construct dict forms of any model objects needed in order to build this model.
replication_database_auth_basic_model = {} # ReplicationDatabaseAuthBasic
replication_database_auth_basic_model['password'] = 'testString'
replication_database_auth_basic_model['username'] = 'testString'
replication_database_auth_iam_model = {} # ReplicationDatabaseAuthIam
replication_database_auth_iam_model['api_key'] = 'testString'
# Construct a json representation of a ReplicationDatabaseAuth model
replication_database_auth_model_json = {}
replication_database_auth_model_json['basic'] = replication_database_auth_basic_model
replication_database_auth_model_json['iam'] = replication_database_auth_iam_model
# Construct a model instance of ReplicationDatabaseAuth by calling from_dict on the json representation
replication_database_auth_model = ReplicationDatabaseAuth.from_dict(replication_database_auth_model_json)
assert replication_database_auth_model is not None
# Construct a second model instance from the dict form of the first
replication_database_auth_model_dict = ReplicationDatabaseAuth.from_dict(replication_database_auth_model_json).__dict__
replication_database_auth_model2 = ReplicationDatabaseAuth(**replication_database_auth_model_dict)
# Verify the model instances are equivalent
assert replication_database_auth_model == replication_database_auth_model2
# Convert model instance back to dict and verify no loss of data
replication_database_auth_model_json2 = replication_database_auth_model.to_dict()
assert replication_database_auth_model_json2 == replication_database_auth_model_json


class TestModel_ReplicationDatabaseAuthBasic:
"""
Test Class for ReplicationDatabaseAuthBasic
"""
def test_replication_database_auth_basic_serialization(self):
"""
Test serialization/deserialization for ReplicationDatabaseAuthBasic
"""
# Construct a json representation of a ReplicationDatabaseAuthBasic model
replication_database_auth_basic_model_json = {}
replication_database_auth_basic_model_json['password'] = 'testString'
replication_database_auth_basic_model_json['username'] = 'testString'
# Construct a model instance of ReplicationDatabaseAuthBasic by calling from_dict on the json representation
replication_database_auth_basic_model = ReplicationDatabaseAuthBasic.from_dict(replication_database_auth_basic_model_json)
assert replication_database_auth_basic_model is not None
# Construct a second model instance from the dict form of the first
replication_database_auth_basic_model_dict = ReplicationDatabaseAuthBasic.from_dict(replication_database_auth_basic_model_json).__dict__
replication_database_auth_basic_model2 = ReplicationDatabaseAuthBasic(**replication_database_auth_basic_model_dict)
# Verify the model instances are equivalent
assert replication_database_auth_basic_model == replication_database_auth_basic_model2
# Convert model instance back to dict and verify no loss of data
replication_database_auth_basic_model_json2 = replication_database_auth_basic_model.to_dict()
assert replication_database_auth_basic_model_json2 == replication_database_auth_basic_model_json


class TestModel_ReplicationDatabaseAuthIam:
"""
Test Class for ReplicationDatabaseAuthIam
"""
def test_replication_database_auth_iam_serialization(self):
"""
Test serialization/deserialization for ReplicationDatabaseAuthIam
"""
# Construct a json representation of a ReplicationDatabaseAuthIam model
replication_database_auth_iam_model_json = {}
replication_database_auth_iam_model_json['api_key'] = 'testString'
# Construct a model instance of ReplicationDatabaseAuthIam by calling from_dict on the json representation
replication_database_auth_iam_model = ReplicationDatabaseAuthIam.from_dict(replication_database_auth_iam_model_json)
assert replication_database_auth_iam_model is not None
# Construct a second model instance from the dict form of the first
replication_database_auth_iam_model_dict = ReplicationDatabaseAuthIam.from_dict(replication_database_auth_iam_model_json).__dict__
replication_database_auth_iam_model2 = ReplicationDatabaseAuthIam(**replication_database_auth_iam_model_dict)
# Verify the model instances are equivalent
assert replication_database_auth_iam_model == replication_database_auth_iam_model2
# Convert model instance back to dict and verify no loss of data
replication_database_auth_iam_model_json2 = replication_database_auth_iam_model.to_dict()
assert replication_database_auth_iam_model_json2 == replication_database_auth_iam_model_json


class TestModel_ReplicationDocument:
"""
Test Class for ReplicationDocument
"""
def test_replication_document_serialization(self):
"""
Test serialization/deserialization for ReplicationDocument
"""
# Construct dict forms of any model objects needed in order to build this model.
attachment_model = {} # Attachment
attachment_model['content_type'] = 'testString'
attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
attachment_model['digest'] = 'testString'
attachment_model['encoded_length'] = 0
attachment_model['encoding'] = 'testString'
attachment_model['follows'] = True
attachment_model['length'] = 0
attachment_model['revpos'] = 1
attachment_model['stub'] = True
revisions_model = {} # Revisions
revisions_model['ids'] = ['testString']
revisions_model['start'] = 1
document_revision_status_model = {} # DocumentRevisionStatus
document_revision_status_model['rev'] = 'testString'
document_revision_status_model['status'] = 'available'
replication_create_target_parameters_model = {} # ReplicationCreateTargetParameters
replication_create_target_parameters_model['n'] = 1
replication_create_target_parameters_model['partitioned'] = False
replication_create_target_parameters_model['q'] = 1
replication_database_auth_basic_model = {} # ReplicationDatabaseAuthBasic
replication_database_auth_basic_model['password'] = 'testString'
replication_database_auth_basic_model['username'] = 'testString'
replication_database_auth_iam_model = {} # ReplicationDatabaseAuthIam
replication_database_auth_iam_model['api_key'] = 'testString'
replication_database_auth_model = {} # ReplicationDatabaseAuth
replication_database_auth_model['basic'] = replication_database_auth_basic_model
replication_database_auth_model['iam'] = replication_database_auth_iam_model
replication_database_model = {} # ReplicationDatabase
replication_database_model['auth'] = replication_database_auth_model
replication_database_model['headers'] = {}
replication_database_model['url'] = 'testString'
user_context_model = {} # UserContext
user_context_model['db'] = 'testString'
user_context_model['name'] = 'testString'
user_context_model['roles'] = ['_reader']
# Construct a json representation of a ReplicationDocument model
replication_document_model_json = {}
replication_document_model_json['_attachments'] = {}
replication_document_model_json['_conflicts'] = ['testString']
replication_document_model_json['_deleted'] = True
replication_document_model_json['_deleted_conflicts'] = ['testString']
replication_document_model_json['_id'] = 'testString'
replication_document_model_json['_local_seq'] = 'testString'
replication_document_model_json['_rev'] = 'testString'
replication_document_model_json['_revisions'] = revisions_model
replication_document_model_json['_revs_info'] = [document_revision_status_model]
replication_document_model_json['cancel'] = True
replication_document_model_json['checkpoint_interval'] = 0
replication_document_model_json['connection_timeout'] = 0
replication_document_model_json['continuous'] = False
replication_document_model_json['create_target'] = False
replication_document_model_json['create_target_params'] = replication_create_target_parameters_model
replication_document_model_json['doc_ids'] = ['testString']
replication_document_model_json['filter'] = 'testString'
replication_document_model_json['http_connections'] = 1
replication_document_model_json['query_params'] = {}
replication_document_model_json['retries_per_request'] = 0
replication_document_model_json['selector'] = {}
replication_document_model_json['since_seq'] = 'testString'
replication_document_model_json['socket_options'] = 'testString'
replication_document_model_json['source'] = replication_database_model
replication_document_model_json['source_proxy'] = 'testString'
replication_document_model_json['target'] = replication_database_model
replication_document_model_json['target_proxy'] = 'testString'
replication_document_model_json['use_checkpoints'] = True
replication_document_model_json['user_ctx'] = user_context_model
replication_document_model_json['worker_batch_size'] = 1
replication_document_model_json['worker_processes'] = 1
replication_document_model_json['foo'] = 'testString'
# Construct a model instance of ReplicationDocument by calling from_dict on the json representation
replication_document_model = ReplicationDocument.from_dict(replication_document_model_json)
assert replication_document_model is not None
# Construct a second model instance from the dict form of the first
replication_document_model_dict = ReplicationDocument.from_dict(replication_document_model_json).__dict__
replication_document_model2 = ReplicationDocument(**replication_document_model_dict)
# Verify the model instances are equivalent
assert replication_document_model == replication_document_model2
# Convert model instance back to dict and verify no loss of data
replication_document_model_json2 = replication_document_model.to_dict()
assert replication_document_model_json2 == replication_document_model_json
# Test get_properties and set_properties methods.
replication_document_model.set_properties({})
actual_dict = replication_document_model.get_properties()
assert actual_dict == {}
expected_dict = {'foo': 'testString'}
replication_document_model.set_properties(expected_dict)
actual_dict = replication_document_model.get_properties()
assert actual_dict == expected_dict


class TestModel_Revisions:
"""
Test Class for Revisions
"""
def test_revisions_serialization(self):
"""
Test serialization/deserialization for Revisions
"""
# Construct a json representation of a Revisions model
revisions_model_json = {}
revisions_model_json['ids'] = ['testString']
revisions_model_json['start'] = 1
# Construct a model instance of Revisions by calling from_dict on the json representation
revisions_model = Revisions.from_dict(revisions_model_json)
assert revisions_model is not None
# Construct a second model instance from the dict form of the first
revisions_model_dict = Revisions.from_dict(revisions_model_json).__dict__
revisions_model2 = Revisions(**revisions_model_dict)
# Verify the model instances are equivalent
assert revisions_model == revisions_model2
# Convert model instance back to dict and verify no loss of data
revisions_model_json2 = revisions_model.to_dict()
assert revisions_model_json2 == revisions_model_json


class TestModel_RevsDiff:
"""
Test Class for RevsDiff
"""
def test_revs_diff_serialization(self):
"""
Test serialization/deserialization for RevsDiff
"""
# Construct a json representation of a RevsDiff model
revs_diff_model_json = {}
revs_diff_model_json['missing'] = ['testString']
revs_diff_model_json['possible_ancestors'] = ['testString']
# Construct a model instance of RevsDiff by calling from_dict on the json representation
revs_diff_model = RevsDiff.from_dict(revs_diff_model_json)
assert revs_diff_model is not None
# Construct a second model instance from the dict form of the first
revs_diff_model_dict = RevsDiff.from_dict(revs_diff_model_json).__dict__
revs_diff_model2 = RevsDiff(**revs_diff_model_dict)
# Verify the model instances are equivalent
assert revs_diff_model == revs_diff_model2
# Convert model instance back to dict and verify no loss of data
revs_diff_model_json2 = revs_diff_model.to_dict()
assert revs_diff_model_json2 == revs_diff_model_json


class TestModel_SchedulerDocsResult:
"""
Test Class for SchedulerDocsResult
"""
def test_scheduler_docs_result_serialization(self):
"""
Test serialization/deserialization for SchedulerDocsResult
"""
# Construct dict forms of any model objects needed in order to build this model.
scheduler_info_model = {} # SchedulerInfo
scheduler_info_model['changes_pending'] = 0
scheduler_info_model['checkpointed_source_seq'] = 'testString'
scheduler_info_model['doc_write_failures'] = 0
scheduler_info_model['docs_read'] = 0
scheduler_info_model['docs_written'] = 0
scheduler_info_model['error'] = 'testString'
scheduler_info_model['missing_revisions_found'] = 0
scheduler_info_model['revisions_checked'] = 0
scheduler_info_model['source_seq'] = 'testString'
scheduler_info_model['through_seq'] = 'testString'
scheduler_document_model = {} # SchedulerDocument
scheduler_document_model['database'] = 'testString'
scheduler_document_model['doc_id'] = 'testString'
scheduler_document_model['error_count'] = 0
scheduler_document_model['id'] = 'testString'
scheduler_document_model['info'] = scheduler_info_model
scheduler_document_model['last_updated'] = '2019-01-01T12:00:00Z'
scheduler_document_model['node'] = 'testString'
scheduler_document_model['source'] = 'testString'
scheduler_document_model['source_proxy'] = 'testString'
scheduler_document_model['start_time'] = '2019-01-01T12:00:00Z'
scheduler_document_model['state'] = 'initializing'
scheduler_document_model['target'] = 'testString'
scheduler_document_model['target_proxy'] = 'testString'
# Construct a json representation of a SchedulerDocsResult model
scheduler_docs_result_model_json = {}
scheduler_docs_result_model_json['total_rows'] = 0
scheduler_docs_result_model_json['docs'] = [scheduler_document_model]
# Construct a model instance of SchedulerDocsResult by calling from_dict on the json representation
scheduler_docs_result_model = SchedulerDocsResult.from_dict(scheduler_docs_result_model_json)
assert scheduler_docs_result_model is not None
# Construct a second model instance from the dict form of the first
scheduler_docs_result_model_dict = SchedulerDocsResult.from_dict(scheduler_docs_result_model_json).__dict__
scheduler_docs_result_model2 = SchedulerDocsResult(**scheduler_docs_result_model_dict)
# Verify the model instances are equivalent
assert scheduler_docs_result_model == scheduler_docs_result_model2
# Convert model instance back to dict and verify no loss of data
scheduler_docs_result_model_json2 = scheduler_docs_result_model.to_dict()
assert scheduler_docs_result_model_json2 == scheduler_docs_result_model_json


class TestModel_SchedulerDocument:
"""
Test Class for SchedulerDocument
"""
def test_scheduler_document_serialization(self):
"""
Test serialization/deserialization for SchedulerDocument
"""
# Construct dict forms of any model objects needed in order to build this model.
scheduler_info_model = {} # SchedulerInfo
scheduler_info_model['changes_pending'] = 0
scheduler_info_model['checkpointed_source_seq'] = 'testString'
scheduler_info_model['doc_write_failures'] = 0
scheduler_info_model['docs_read'] = 0
scheduler_info_model['docs_written'] = 0
scheduler_info_model['error'] = 'testString'
scheduler_info_model['missing_revisions_found'] = 0
scheduler_info_model['revisions_checked'] = 0
scheduler_info_model['source_seq'] = 'testString'
scheduler_info_model['through_seq'] = 'testString'
# Construct a json representation of a SchedulerDocument model
scheduler_document_model_json = {}
scheduler_document_model_json['database'] = 'testString'
scheduler_document_model_json['doc_id'] = 'testString'
scheduler_document_model_json['error_count'] = 0
scheduler_document_model_json['id'] = 'testString'
scheduler_document_model_json['info'] = scheduler_info_model
scheduler_document_model_json['last_updated'] = '2019-01-01T12:00:00Z'
scheduler_document_model_json['node'] = 'testString'
scheduler_document_model_json['source'] = 'testString'
scheduler_document_model_json['source_proxy'] = 'testString'
scheduler_document_model_json['start_time'] = '2019-01-01T12:00:00Z'
scheduler_document_model_json['state'] = 'initializing'
scheduler_document_model_json['target'] = 'testString'
scheduler_document_model_json['target_proxy'] = 'testString'
# Construct a model instance of SchedulerDocument by calling from_dict on the json representation
scheduler_document_model = SchedulerDocument.from_dict(scheduler_document_model_json)
assert scheduler_document_model is not None
# Construct a second model instance from the dict form of the first
scheduler_document_model_dict = SchedulerDocument.from_dict(scheduler_document_model_json).__dict__
scheduler_document_model2 = SchedulerDocument(**scheduler_document_model_dict)
# Verify the model instances are equivalent
assert scheduler_document_model == scheduler_document_model2
# Convert model instance back to dict and verify no loss of data
scheduler_document_model_json2 = scheduler_document_model.to_dict()
assert scheduler_document_model_json2 == scheduler_document_model_json


class TestModel_SchedulerInfo:
"""
Test Class for SchedulerInfo
"""
def test_scheduler_info_serialization(self):
"""
Test serialization/deserialization for SchedulerInfo
"""
# Construct a json representation of a SchedulerInfo model
scheduler_info_model_json = {}
scheduler_info_model_json['changes_pending'] = 0
scheduler_info_model_json['checkpointed_source_seq'] = 'testString'
scheduler_info_model_json['doc_write_failures'] = 0
scheduler_info_model_json['docs_read'] = 0
scheduler_info_model_json['docs_written'] = 0
scheduler_info_model_json['error'] = 'testString'
scheduler_info_model_json['missing_revisions_found'] = 0
scheduler_info_model_json['revisions_checked'] = 0
scheduler_info_model_json['source_seq'] = 'testString'
scheduler_info_model_json['through_seq'] = 'testString'
# Construct a model instance of SchedulerInfo by calling from_dict on the json representation
scheduler_info_model = SchedulerInfo.from_dict(scheduler_info_model_json)
assert scheduler_info_model is not None
# Construct a second model instance from the dict form of the first
scheduler_info_model_dict = SchedulerInfo.from_dict(scheduler_info_model_json).__dict__
scheduler_info_model2 = SchedulerInfo(**scheduler_info_model_dict)
# Verify the model instances are equivalent
assert scheduler_info_model == scheduler_info_model2
# Convert model instance back to dict and verify no loss of data
scheduler_info_model_json2 = scheduler_info_model.to_dict()
assert scheduler_info_model_json2 == scheduler_info_model_json


class TestModel_SchedulerJob:
"""
Test Class for SchedulerJob
"""
def test_scheduler_job_serialization(self):
"""
Test serialization/deserialization for SchedulerJob
"""
# Construct dict forms of any model objects needed in order to build this model.
scheduler_job_event_model = {} # SchedulerJobEvent
scheduler_job_event_model['reason'] = 'testString'
scheduler_job_event_model['timestamp'] = '2019-01-01T12:00:00Z'
scheduler_job_event_model['type'] = 'testString'
scheduler_info_model = {} # SchedulerInfo
scheduler_info_model['changes_pending'] = 0
scheduler_info_model['checkpointed_source_seq'] = 'testString'
scheduler_info_model['doc_write_failures'] = 0
scheduler_info_model['docs_read'] = 0
scheduler_info_model['docs_written'] = 0
scheduler_info_model['error'] = 'testString'
scheduler_info_model['missing_revisions_found'] = 0
scheduler_info_model['revisions_checked'] = 0
scheduler_info_model['source_seq'] = 'testString'
scheduler_info_model['through_seq'] = 'testString'
# Construct a json representation of a SchedulerJob model
scheduler_job_model_json = {}
scheduler_job_model_json['database'] = 'testString'
scheduler_job_model_json['doc_id'] = 'testString'
scheduler_job_model_json['history'] = [scheduler_job_event_model]
scheduler_job_model_json['id'] = 'testString'
scheduler_job_model_json['info'] = scheduler_info_model
scheduler_job_model_json['node'] = 'testString'
scheduler_job_model_json['pid'] = 'testString'
scheduler_job_model_json['source'] = 'testString'
scheduler_job_model_json['start_time'] = '2019-01-01T12:00:00Z'
scheduler_job_model_json['target'] = 'testString'
scheduler_job_model_json['user'] = 'testString'
# Construct a model instance of SchedulerJob by calling from_dict on the json representation
scheduler_job_model = SchedulerJob.from_dict(scheduler_job_model_json)
assert scheduler_job_model is not None
# Construct a second model instance from the dict form of the first
scheduler_job_model_dict = SchedulerJob.from_dict(scheduler_job_model_json).__dict__
scheduler_job_model2 = SchedulerJob(**scheduler_job_model_dict)
# Verify the model instances are equivalent
assert scheduler_job_model == scheduler_job_model2
# Convert model instance back to dict and verify no loss of data
scheduler_job_model_json2 = scheduler_job_model.to_dict()
assert scheduler_job_model_json2 == scheduler_job_model_json


class TestModel_SchedulerJobEvent:
"""
Test Class for SchedulerJobEvent
"""
def test_scheduler_job_event_serialization(self):
"""
Test serialization/deserialization for SchedulerJobEvent
"""
# Construct a json representation of a SchedulerJobEvent model
scheduler_job_event_model_json = {}
scheduler_job_event_model_json['reason'] = 'testString'
scheduler_job_event_model_json['timestamp'] = '2019-01-01T12:00:00Z'
scheduler_job_event_model_json['type'] = 'testString'
# Construct a model instance of SchedulerJobEvent by calling from_dict on the json representation
scheduler_job_event_model = SchedulerJobEvent.from_dict(scheduler_job_event_model_json)
assert scheduler_job_event_model is not None
# Construct a second model instance from the dict form of the first
scheduler_job_event_model_dict = SchedulerJobEvent.from_dict(scheduler_job_event_model_json).__dict__
scheduler_job_event_model2 = SchedulerJobEvent(**scheduler_job_event_model_dict)
# Verify the model instances are equivalent
assert scheduler_job_event_model == scheduler_job_event_model2
# Convert model instance back to dict and verify no loss of data
scheduler_job_event_model_json2 = scheduler_job_event_model.to_dict()
assert scheduler_job_event_model_json2 == scheduler_job_event_model_json


class TestModel_SchedulerJobsResult:
"""
Test Class for SchedulerJobsResult
"""
def test_scheduler_jobs_result_serialization(self):
"""
Test serialization/deserialization for SchedulerJobsResult
"""
# Construct dict forms of any model objects needed in order to build this model.
scheduler_job_event_model = {} # SchedulerJobEvent
scheduler_job_event_model['reason'] = 'testString'
scheduler_job_event_model['timestamp'] = '2019-01-01T12:00:00Z'
scheduler_job_event_model['type'] = 'testString'
scheduler_info_model = {} # SchedulerInfo
scheduler_info_model['changes_pending'] = 0
scheduler_info_model['checkpointed_source_seq'] = 'testString'
scheduler_info_model['doc_write_failures'] = 0
scheduler_info_model['docs_read'] = 0
scheduler_info_model['docs_written'] = 0
scheduler_info_model['error'] = 'testString'
scheduler_info_model['missing_revisions_found'] = 0
scheduler_info_model['revisions_checked'] = 0
scheduler_info_model['source_seq'] = 'testString'
scheduler_info_model['through_seq'] = 'testString'
scheduler_job_model = {} # SchedulerJob
scheduler_job_model['database'] = 'testString'
scheduler_job_model['doc_id'] = 'testString'
scheduler_job_model['history'] = [scheduler_job_event_model]
scheduler_job_model['id'] = 'testString'
scheduler_job_model['info'] = scheduler_info_model
scheduler_job_model['node'] = 'testString'
scheduler_job_model['pid'] = 'testString'
scheduler_job_model['source'] = 'testString'
scheduler_job_model['start_time'] = '2019-01-01T12:00:00Z'
scheduler_job_model['target'] = 'testString'
scheduler_job_model['user'] = 'testString'
# Construct a json representation of a SchedulerJobsResult model
scheduler_jobs_result_model_json = {}
scheduler_jobs_result_model_json['total_rows'] = 0
scheduler_jobs_result_model_json['jobs'] = [scheduler_job_model]
# Construct a model instance of SchedulerJobsResult by calling from_dict on the json representation
scheduler_jobs_result_model = SchedulerJobsResult.from_dict(scheduler_jobs_result_model_json)
assert scheduler_jobs_result_model != False
# Construct a model instance of SchedulerJobsResult by calling from_dict on the json representation
scheduler_jobs_result_model_dict = SchedulerJobsResult.from_dict(scheduler_jobs_result_model_json).__dict__
scheduler_jobs_result_model2 = SchedulerJobsResult(**scheduler_jobs_result_model_dict)
# Verify the model instances are equivalent
assert scheduler_jobs_result_model == scheduler_jobs_result_model2
# Convert model instance back to dict and verify no loss of data
scheduler_jobs_result_model_json2 = scheduler_jobs_result_model.to_dict()
assert scheduler_jobs_result_model_json2 == scheduler_jobs_result_model_json
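

# The serialization tests in this module all repeat the same round-trip
# pattern. As a hedged sketch (not part of the generated suite), that pattern
# could be factored into a shared helper; `assert_model_roundtrip` and
# `_DemoModel` below are hypothetical names used only for illustration.

class _DemoModel:
    """Minimal stand-in for an SDK model exposing from_dict/to_dict."""

    def __init__(self, **properties):
        self._properties = dict(properties)

    @classmethod
    def from_dict(cls, _dict):
        # Build a model instance from its dict (JSON) representation.
        return cls(**_dict)

    def to_dict(self):
        # Return the dict (JSON) representation of the model.
        return dict(self._properties)

    def __eq__(self, other):
        return isinstance(other, _DemoModel) and self._properties == other._properties


def assert_model_roundtrip(model_cls, model_json):
    """Assert that model_cls survives a dict -> model -> dict round trip."""
    model = model_cls.from_dict(model_json)
    assert model is not None
    # A second instance built from the same dict must compare equal.
    assert model == model_cls.from_dict(model_json)
    # Serializing back to a dict must reproduce the original representation.
    assert model.to_dict() == model_json


# Example usage with the hypothetical stand-in model.
assert_model_roundtrip(_DemoModel, {'total_rows': 0, 'jobs': []})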


class TestModel_SearchAnalyzeResult():
    """
    Test Class for SearchAnalyzeResult
    """

    def test_search_analyze_result_serialization(self):
        """
        Test serialization/deserialization for SearchAnalyzeResult
        """

        # Construct a json representation of a SearchAnalyzeResult model
        search_analyze_result_model_json = {}
        search_analyze_result_model_json['tokens'] = ['testString']

        # Construct a model instance of SearchAnalyzeResult by calling from_dict on the json representation
        search_analyze_result_model = SearchAnalyzeResult.from_dict(search_analyze_result_model_json)
        assert search_analyze_result_model != False

        # Construct a second model instance from the dict form of the first, using the constructor
        search_analyze_result_model_dict = SearchAnalyzeResult.from_dict(search_analyze_result_model_json).__dict__
        search_analyze_result_model2 = SearchAnalyzeResult(**search_analyze_result_model_dict)

        # Verify the model instances are equivalent
        assert search_analyze_result_model == search_analyze_result_model2

        # Convert model instance back to dict and verify no loss of data
        search_analyze_result_model_json2 = search_analyze_result_model.to_dict()
        assert search_analyze_result_model_json2 == search_analyze_result_model_json


class TestModel_SearchIndexDefinition():
    """
    Test Class for SearchIndexDefinition
    """

    def test_search_index_definition_serialization(self):
        """
        Test serialization/deserialization for SearchIndexDefinition
        """

        # Construct dict forms of any model objects needed in order to build this model.
        analyzer_model = {}  # Analyzer
        analyzer_model['name'] = 'classic'
        analyzer_model['stopwords'] = ['testString']

        analyzer_configuration_model = {}  # AnalyzerConfiguration
        analyzer_configuration_model['name'] = 'classic'
        analyzer_configuration_model['stopwords'] = ['testString']
        analyzer_configuration_model['fields'] = {}

        # Construct a json representation of a SearchIndexDefinition model
        search_index_definition_model_json = {}
        search_index_definition_model_json['analyzer'] = analyzer_configuration_model
        search_index_definition_model_json['index'] = 'testString'

        # Construct a model instance of SearchIndexDefinition by calling from_dict on the json representation
        search_index_definition_model = SearchIndexDefinition.from_dict(search_index_definition_model_json)
        assert search_index_definition_model != False

        # Construct a second model instance from the dict form of the first, using the constructor
        search_index_definition_model_dict = SearchIndexDefinition.from_dict(search_index_definition_model_json).__dict__
        search_index_definition_model2 = SearchIndexDefinition(**search_index_definition_model_dict)

        # Verify the model instances are equivalent
        assert search_index_definition_model == search_index_definition_model2

        # Convert model instance back to dict and verify no loss of data
        search_index_definition_model_json2 = search_index_definition_model.to_dict()
        assert search_index_definition_model_json2 == search_index_definition_model_json


class TestModel_SearchIndexInfo():
    """
    Test Class for SearchIndexInfo
    """

    def test_search_index_info_serialization(self):
        """
        Test serialization/deserialization for SearchIndexInfo
        """

        # Construct a json representation of a SearchIndexInfo model
        search_index_info_model_json = {}
        search_index_info_model_json['committed_seq'] = 26
        search_index_info_model_json['disk_size'] = 0
        search_index_info_model_json['doc_count'] = 0
        search_index_info_model_json['doc_del_count'] = 0
        search_index_info_model_json['pending_seq'] = 26

        # Construct a model instance of SearchIndexInfo by calling from_dict on the json representation
        search_index_info_model = SearchIndexInfo.from_dict(search_index_info_model_json)
        assert search_index_info_model != False

        # Construct a second model instance from the dict form of the first, using the constructor
        search_index_info_model_dict = SearchIndexInfo.from_dict(search_index_info_model_json).__dict__
        search_index_info_model2 = SearchIndexInfo(**search_index_info_model_dict)

        # Verify the model instances are equivalent
        assert search_index_info_model == search_index_info_model2

        # Convert model instance back to dict and verify no loss of data
        search_index_info_model_json2 = search_index_info_model.to_dict()
        assert search_index_info_model_json2 == search_index_info_model_json


class TestModel_SearchInfoResult():
    """
    Test Class for SearchInfoResult
    """

    def test_search_info_result_serialization(self):
        """
        Test serialization/deserialization for SearchInfoResult
        """

        # Construct dict forms of any model objects needed in order to build this model.
        search_index_info_model = {}  # SearchIndexInfo
        search_index_info_model['committed_seq'] = 26
        search_index_info_model['disk_size'] = 0
        search_index_info_model['doc_count'] = 0
        search_index_info_model['doc_del_count'] = 0
        search_index_info_model['pending_seq'] = 26

        # Construct a json representation of a SearchInfoResult model
        search_info_result_model_json = {}
        search_info_result_model_json['name'] = 'testString'
        search_info_result_model_json['search_index'] = search_index_info_model

        # Construct a model instance of SearchInfoResult by calling from_dict on the json representation
        search_info_result_model = SearchInfoResult.from_dict(search_info_result_model_json)
        assert search_info_result_model != False

        # Construct a second model instance from the dict form of the first, using the constructor
        search_info_result_model_dict = SearchInfoResult.from_dict(search_info_result_model_json).__dict__
        search_info_result_model2 = SearchInfoResult(**search_info_result_model_dict)

        # Verify the model instances are equivalent
        assert search_info_result_model == search_info_result_model2

        # Convert model instance back to dict and verify no loss of data
        search_info_result_model_json2 = search_info_result_model.to_dict()
        assert search_info_result_model_json2 == search_info_result_model_json


class TestModel_SearchResult():
    """
    Test Class for SearchResult
    """

    def test_search_result_serialization(self):
        """
        Test serialization/deserialization for SearchResult
        """

        # Construct dict forms of any model objects needed in order to build this model.
        attachment_model = {}  # Attachment
        attachment_model['content_type'] = 'testString'
        attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
        attachment_model['digest'] = 'testString'
        attachment_model['encoded_length'] = 0
        attachment_model['encoding'] = 'testString'
        attachment_model['follows'] = True
        attachment_model['length'] = 0
        attachment_model['revpos'] = 1
        attachment_model['stub'] = True

        revisions_model = {}  # Revisions
        revisions_model['ids'] = ['testString']
        revisions_model['start'] = 1

        document_revision_status_model = {}  # DocumentRevisionStatus
        document_revision_status_model['rev'] = 'testString'
        document_revision_status_model['status'] = 'available'

        document_model = {}  # Document
        document_model['_attachments'] = {}
        document_model['_conflicts'] = ['testString']
        document_model['_deleted'] = True
        document_model['_deleted_conflicts'] = ['testString']
        document_model['_id'] = 'testString'
        document_model['_local_seq'] = 'testString'
        document_model['_rev'] = 'testString'
        document_model['_revisions'] = revisions_model
        document_model['_revs_info'] = [document_revision_status_model]
        document_model['foo'] = 'testString'

        search_result_row_model = {}  # SearchResultRow
        search_result_row_model['doc'] = document_model
        search_result_row_model['fields'] = {}
        search_result_row_model['highlights'] = {}
        search_result_row_model['id'] = 'testString'

        search_result_properties_model = {}  # SearchResultProperties
        search_result_properties_model['total_rows'] = 0
        search_result_properties_model['bookmark'] = 'testString'
        search_result_properties_model['by'] = 'testString'
        search_result_properties_model['counts'] = {}
        search_result_properties_model['ranges'] = {}
        search_result_properties_model['rows'] = [search_result_row_model]

        # Construct a json representation of a SearchResult model
        search_result_model_json = {}
        search_result_model_json['total_rows'] = 0
        search_result_model_json['bookmark'] = 'testString'
        search_result_model_json['by'] = 'testString'
        search_result_model_json['counts'] = {}
        search_result_model_json['ranges'] = {}
        search_result_model_json['rows'] = [search_result_row_model]
        search_result_model_json['groups'] = [search_result_properties_model]

        # Construct a model instance of SearchResult by calling from_dict on the json representation
        search_result_model = SearchResult.from_dict(search_result_model_json)
        assert search_result_model != False

        # Construct a second model instance from the dict form of the first, using the constructor
        search_result_model_dict = SearchResult.from_dict(search_result_model_json).__dict__
        search_result_model2 = SearchResult(**search_result_model_dict)

        # Verify the model instances are equivalent
        assert search_result_model == search_result_model2

        # Convert model instance back to dict and verify no loss of data
        search_result_model_json2 = search_result_model.to_dict()
        assert search_result_model_json2 == search_result_model_json


class TestModel_SearchResultProperties():
    """
    Test Class for SearchResultProperties
    """

    def test_search_result_properties_serialization(self):
        """
        Test serialization/deserialization for SearchResultProperties
        """

        # Construct dict forms of any model objects needed in order to build this model.
        attachment_model = {}  # Attachment
        attachment_model['content_type'] = 'testString'
        attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
        attachment_model['digest'] = 'testString'
        attachment_model['encoded_length'] = 0
        attachment_model['encoding'] = 'testString'
        attachment_model['follows'] = True
        attachment_model['length'] = 0
        attachment_model['revpos'] = 1
        attachment_model['stub'] = True

        revisions_model = {}  # Revisions
        revisions_model['ids'] = ['testString']
        revisions_model['start'] = 1

        document_revision_status_model = {}  # DocumentRevisionStatus
        document_revision_status_model['rev'] = 'testString'
        document_revision_status_model['status'] = 'available'

        document_model = {}  # Document
        document_model['_attachments'] = {}
        document_model['_conflicts'] = ['testString']
        document_model['_deleted'] = True
        document_model['_deleted_conflicts'] = ['testString']
        document_model['_id'] = 'testString'
        document_model['_local_seq'] = 'testString'
        document_model['_rev'] = 'testString'
        document_model['_revisions'] = revisions_model
        document_model['_revs_info'] = [document_revision_status_model]
        document_model['foo'] = 'testString'

        search_result_row_model = {}  # SearchResultRow
        search_result_row_model['doc'] = document_model
        search_result_row_model['fields'] = {}
        search_result_row_model['highlights'] = {}
        search_result_row_model['id'] = 'testString'

        # Construct a json representation of a SearchResultProperties model
        search_result_properties_model_json = {}
        search_result_properties_model_json['total_rows'] = 0
        search_result_properties_model_json['bookmark'] = 'testString'
        search_result_properties_model_json['by'] = 'testString'
        search_result_properties_model_json['counts'] = {}
        search_result_properties_model_json['ranges'] = {}
        search_result_properties_model_json['rows'] = [search_result_row_model]

        # Construct a model instance of SearchResultProperties by calling from_dict on the json representation
        search_result_properties_model = SearchResultProperties.from_dict(search_result_properties_model_json)
        assert search_result_properties_model != False

        # Construct a second model instance from the dict form of the first, using the constructor
        search_result_properties_model_dict = SearchResultProperties.from_dict(search_result_properties_model_json).__dict__
        search_result_properties_model2 = SearchResultProperties(**search_result_properties_model_dict)

        # Verify the model instances are equivalent
        assert search_result_properties_model == search_result_properties_model2

        # Convert model instance back to dict and verify no loss of data
        search_result_properties_model_json2 = search_result_properties_model.to_dict()
        assert search_result_properties_model_json2 == search_result_properties_model_json


class TestModel_SearchResultRow():
    """
    Test Class for SearchResultRow
    """

    def test_search_result_row_serialization(self):
        """
        Test serialization/deserialization for SearchResultRow
        """

        # Construct dict forms of any model objects needed in order to build this model.
        attachment_model = {}  # Attachment
        attachment_model['content_type'] = 'testString'
        attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
        attachment_model['digest'] = 'testString'
        attachment_model['encoded_length'] = 0
        attachment_model['encoding'] = 'testString'
        attachment_model['follows'] = True
        attachment_model['length'] = 0
        attachment_model['revpos'] = 1
        attachment_model['stub'] = True

        revisions_model = {}  # Revisions
        revisions_model['ids'] = ['testString']
        revisions_model['start'] = 1

        document_revision_status_model = {}  # DocumentRevisionStatus
        document_revision_status_model['rev'] = 'testString'
        document_revision_status_model['status'] = 'available'

        document_model = {}  # Document
        document_model['_attachments'] = {}
        document_model['_conflicts'] = ['testString']
        document_model['_deleted'] = True
        document_model['_deleted_conflicts'] = ['testString']
        document_model['_id'] = 'testString'
        document_model['_local_seq'] = 'testString'
        document_model['_rev'] = 'testString'
        document_model['_revisions'] = revisions_model
        document_model['_revs_info'] = [document_revision_status_model]
        document_model['foo'] = 'testString'

        # Construct a json representation of a SearchResultRow model
        search_result_row_model_json = {}
        search_result_row_model_json['doc'] = document_model
        search_result_row_model_json['fields'] = {}
        search_result_row_model_json['highlights'] = {}
        search_result_row_model_json['id'] = 'testString'

        # Construct a model instance of SearchResultRow by calling from_dict on the json representation
        search_result_row_model = SearchResultRow.from_dict(search_result_row_model_json)
        assert search_result_row_model != False

        # Construct a second model instance from the dict form of the first, using the constructor
        search_result_row_model_dict = SearchResultRow.from_dict(search_result_row_model_json).__dict__
        search_result_row_model2 = SearchResultRow(**search_result_row_model_dict)

        # Verify the model instances are equivalent
        assert search_result_row_model == search_result_row_model2

        # Convert model instance back to dict and verify no loss of data
        search_result_row_model_json2 = search_result_row_model.to_dict()
        assert search_result_row_model_json2 == search_result_row_model_json


class TestModel_Security():
    """
    Test Class for Security
    """

    def test_security_serialization(self):
        """
        Test serialization/deserialization for Security
        """

        # Construct dict forms of any model objects needed in order to build this model.
        security_object_model = {}  # SecurityObject
        security_object_model['names'] = ['testString']
        security_object_model['roles'] = ['testString']

        # Construct a json representation of a Security model
        security_model_json = {}
        security_model_json['admins'] = security_object_model
        security_model_json['members'] = security_object_model
        security_model_json['cloudant'] = {}
        security_model_json['couchdb_auth_only'] = True

        # Construct a model instance of Security by calling from_dict on the json representation
        security_model = Security.from_dict(security_model_json)
        assert security_model != False

        # Construct a second model instance from the dict form of the first, using the constructor
        security_model_dict = Security.from_dict(security_model_json).__dict__
        security_model2 = Security(**security_model_dict)

        # Verify the model instances are equivalent
        assert security_model == security_model2

        # Convert model instance back to dict and verify no loss of data
        security_model_json2 = security_model.to_dict()
        assert security_model_json2 == security_model_json


class TestModel_SecurityObject():
    """
    Test Class for SecurityObject
    """

    def test_security_object_serialization(self):
        """
        Test serialization/deserialization for SecurityObject
        """

        # Construct a json representation of a SecurityObject model
        security_object_model_json = {}
        security_object_model_json['names'] = ['testString']
        security_object_model_json['roles'] = ['testString']

        # Construct a model instance of SecurityObject by calling from_dict on the json representation
        security_object_model = SecurityObject.from_dict(security_object_model_json)
        assert security_object_model != False

        # Construct a second model instance from the dict form of the first, using the constructor
        security_object_model_dict = SecurityObject.from_dict(security_object_model_json).__dict__
        security_object_model2 = SecurityObject(**security_object_model_dict)

        # Verify the model instances are equivalent
        assert security_object_model == security_object_model2

        # Convert model instance back to dict and verify no loss of data
        security_object_model_json2 = security_object_model.to_dict()
        assert security_object_model_json2 == security_object_model_json


class TestModel_ServerInformation():
    """
    Test Class for ServerInformation
    """

    def test_server_information_serialization(self):
        """
        Test serialization/deserialization for ServerInformation
        """

        # Construct dict forms of any model objects needed in order to build this model.
        server_vendor_model = {}  # ServerVendor
        server_vendor_model['name'] = 'testString'
        server_vendor_model['variant'] = 'testString'
        server_vendor_model['version'] = 'testString'

        # Construct a json representation of a ServerInformation model
        server_information_model_json = {}
        server_information_model_json['couchdb'] = 'testString'
        server_information_model_json['features'] = ['testString']
        server_information_model_json['vendor'] = server_vendor_model
        server_information_model_json['version'] = 'testString'
        server_information_model_json['features_flags'] = ['testString']

        # Construct a model instance of ServerInformation by calling from_dict on the json representation
        server_information_model = ServerInformation.from_dict(server_information_model_json)
        assert server_information_model != False

        # Construct a second model instance from the dict form of the first, using the constructor
        server_information_model_dict = ServerInformation.from_dict(server_information_model_json).__dict__
        server_information_model2 = ServerInformation(**server_information_model_dict)

        # Verify the model instances are equivalent
        assert server_information_model == server_information_model2

        # Convert model instance back to dict and verify no loss of data
        server_information_model_json2 = server_information_model.to_dict()
        assert server_information_model_json2 == server_information_model_json


class TestModel_ServerVendor():
    """
    Test Class for ServerVendor
    """

    def test_server_vendor_serialization(self):
        """
        Test serialization/deserialization for ServerVendor
        """

        # Construct a json representation of a ServerVendor model
        server_vendor_model_json = {}
        server_vendor_model_json['name'] = 'testString'
        server_vendor_model_json['variant'] = 'testString'
        server_vendor_model_json['version'] = 'testString'

        # Construct a model instance of ServerVendor by calling from_dict on the json representation
        server_vendor_model = ServerVendor.from_dict(server_vendor_model_json)
        assert server_vendor_model != False

        # Construct a second model instance from the dict form of the first, using the constructor
        server_vendor_model_dict = ServerVendor.from_dict(server_vendor_model_json).__dict__
        server_vendor_model2 = ServerVendor(**server_vendor_model_dict)

        # Verify the model instances are equivalent
        assert server_vendor_model == server_vendor_model2

        # Convert model instance back to dict and verify no loss of data
        server_vendor_model_json2 = server_vendor_model.to_dict()
        assert server_vendor_model_json2 == server_vendor_model_json


class TestModel_SessionAuthentication():
    """
    Test Class for SessionAuthentication
    """

    def test_session_authentication_serialization(self):
        """
        Test serialization/deserialization for SessionAuthentication
        """

        # Construct a json representation of a SessionAuthentication model
        session_authentication_model_json = {}
        session_authentication_model_json['authenticated'] = 'testString'
        session_authentication_model_json['authentication_db'] = 'testString'
        session_authentication_model_json['authentication_handlers'] = ['testString']

        # Construct a model instance of SessionAuthentication by calling from_dict on the json representation
        session_authentication_model = SessionAuthentication.from_dict(session_authentication_model_json)
        assert session_authentication_model != False

        # Construct a second model instance from the dict form of the first, using the constructor
        session_authentication_model_dict = SessionAuthentication.from_dict(session_authentication_model_json).__dict__
        session_authentication_model2 = SessionAuthentication(**session_authentication_model_dict)

        # Verify the model instances are equivalent
        assert session_authentication_model == session_authentication_model2

        # Convert model instance back to dict and verify no loss of data
        session_authentication_model_json2 = session_authentication_model.to_dict()
        assert session_authentication_model_json2 == session_authentication_model_json


class TestModel_SessionInformation():
    """
    Test Class for SessionInformation
    """

    def test_session_information_serialization(self):
        """
        Test serialization/deserialization for SessionInformation
        """

        # Construct dict forms of any model objects needed in order to build this model.
        session_authentication_model = {}  # SessionAuthentication
        session_authentication_model['authenticated'] = 'testString'
        session_authentication_model['authentication_db'] = 'testString'
        session_authentication_model['authentication_handlers'] = ['testString']

        user_context_model = {}  # UserContext
        user_context_model['db'] = 'testString'
        user_context_model['name'] = 'testString'
        user_context_model['roles'] = ['_reader']

        # Construct a json representation of a SessionInformation model
        session_information_model_json = {}
        session_information_model_json['ok'] = True
        session_information_model_json['info'] = session_authentication_model
        session_information_model_json['userCtx'] = user_context_model

        # Construct a model instance of SessionInformation by calling from_dict on the json representation
        session_information_model = SessionInformation.from_dict(session_information_model_json)
        assert session_information_model != False

        # Construct a second model instance from the dict form of the first, using the constructor
        session_information_model_dict = SessionInformation.from_dict(session_information_model_json).__dict__
        session_information_model2 = SessionInformation(**session_information_model_dict)

        # Verify the model instances are equivalent
        assert session_information_model == session_information_model2

        # Convert model instance back to dict and verify no loss of data
        session_information_model_json2 = session_information_model.to_dict()
        assert session_information_model_json2 == session_information_model_json


class TestModel_ShardsInformation():
    """
    Test Class for ShardsInformation
    """

    def test_shards_information_serialization(self):
        """
        Test serialization/deserialization for ShardsInformation
        """

        # Construct a json representation of a ShardsInformation model
        shards_information_model_json = {}
        shards_information_model_json['shards'] = {}

        # Construct a model instance of ShardsInformation by calling from_dict on the json representation
        shards_information_model = ShardsInformation.from_dict(shards_information_model_json)
        assert shards_information_model != False

        # Construct a second model instance from the dict form of the first, using the constructor
        shards_information_model_dict = ShardsInformation.from_dict(shards_information_model_json).__dict__
        shards_information_model2 = ShardsInformation(**shards_information_model_dict)

        # Verify the model instances are equivalent
        assert shards_information_model == shards_information_model2

        # Convert model instance back to dict and verify no loss of data
        shards_information_model_json2 = shards_information_model.to_dict()
        assert shards_information_model_json2 == shards_information_model_json


class TestModel_ThroughputInformation():
    """
    Test Class for ThroughputInformation
    """

    def test_throughput_information_serialization(self):
        """
        Test serialization/deserialization for ThroughputInformation
        """

        # Construct a json representation of a ThroughputInformation model
        throughput_information_model_json = {}
        throughput_information_model_json['blocks'] = 0
        throughput_information_model_json['query'] = 0
        throughput_information_model_json['read'] = 0
        throughput_information_model_json['write'] = 0

        # Construct a model instance of ThroughputInformation by calling from_dict on the json representation
        throughput_information_model = ThroughputInformation.from_dict(throughput_information_model_json)
        assert throughput_information_model != False

        # Construct a second model instance from the dict form of the first, using the constructor
        throughput_information_model_dict = ThroughputInformation.from_dict(throughput_information_model_json).__dict__
        throughput_information_model2 = ThroughputInformation(**throughput_information_model_dict)

        # Verify the model instances are equivalent
        assert throughput_information_model == throughput_information_model2

        # Convert model instance back to dict and verify no loss of data
        throughput_information_model_json2 = throughput_information_model.to_dict()
        assert throughput_information_model_json2 == throughput_information_model_json


class TestModel_UpInformation():
    """
    Test Class for UpInformation
    """

    def test_up_information_serialization(self):
        """
        Test serialization/deserialization for UpInformation
        """

        # Construct a json representation of a UpInformation model
        up_information_model_json = {}
        up_information_model_json['seeds'] = {'foo': 'bar'}
        up_information_model_json['status'] = 'maintenance_mode'

        # Construct a model instance of UpInformation by calling from_dict on the json representation
        up_information_model = UpInformation.from_dict(up_information_model_json)
        assert up_information_model != False

        # Construct a second model instance from the dict form of the first, using the constructor
        up_information_model_dict = UpInformation.from_dict(up_information_model_json).__dict__
        up_information_model2 = UpInformation(**up_information_model_dict)

        # Verify the model instances are equivalent
        assert up_information_model == up_information_model2

        # Convert model instance back to dict and verify no loss of data
        up_information_model_json2 = up_information_model.to_dict()
        assert up_information_model_json2 == up_information_model_json


class TestModel_UserContext():
    """
    Test Class for UserContext
    """

    def test_user_context_serialization(self):
        """
        Test serialization/deserialization for UserContext
        """

        # Construct a json representation of a UserContext model
        user_context_model_json = {}
        user_context_model_json['db'] = 'testString'
        user_context_model_json['name'] = 'testString'
        user_context_model_json['roles'] = ['_reader']

        # Construct a model instance of UserContext by calling from_dict on the json representation
        user_context_model = UserContext.from_dict(user_context_model_json)
        assert user_context_model != False

        # Construct a second model instance from the dict form of the first, using the constructor
        user_context_model_dict = UserContext.from_dict(user_context_model_json).__dict__
        user_context_model2 = UserContext(**user_context_model_dict)

        # Verify the model instances are equivalent
        assert user_context_model == user_context_model2

        # Convert model instance back to dict and verify no loss of data
        user_context_model_json2 = user_context_model.to_dict()
        assert user_context_model_json2 == user_context_model_json


class TestModel_UuidsResult():
    """
    Test Class for UuidsResult
    """

    def test_uuids_result_serialization(self):
        """
        Test serialization/deserialization for UuidsResult
        """

        # Construct a json representation of a UuidsResult model
        uuids_result_model_json = {}
        uuids_result_model_json['uuids'] = ['testString']

        # Construct a model instance of UuidsResult by calling from_dict on the json representation
        uuids_result_model = UuidsResult.from_dict(uuids_result_model_json)
        assert uuids_result_model != False

        # Construct a second model instance from the dict form of the first, using the constructor
        uuids_result_model_dict = UuidsResult.from_dict(uuids_result_model_json).__dict__
        uuids_result_model2 = UuidsResult(**uuids_result_model_dict)

        # Verify the model instances are equivalent
        assert uuids_result_model == uuids_result_model2

        # Convert model instance back to dict and verify no loss of data
        uuids_result_model_json2 = uuids_result_model.to_dict()
        assert uuids_result_model_json2 == uuids_result_model_json


class TestModel_ViewQueriesResult():
    """
    Test Class for ViewQueriesResult
    """

    def test_view_queries_result_serialization(self):
        """
        Test serialization/deserialization for ViewQueriesResult
        """

        # Construct dict forms of any model objects needed in order to build this model.
        attachment_model = {}  # Attachment
        attachment_model['content_type'] = 'testString'
        attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
        attachment_model['digest'] = 'testString'
        attachment_model['encoded_length'] = 0
        attachment_model['encoding'] = 'testString'
        attachment_model['follows'] = True
        attachment_model['length'] = 0
        attachment_model['revpos'] = 1
        attachment_model['stub'] = True

        revisions_model = {}  # Revisions
        revisions_model['ids'] = ['testString']
        revisions_model['start'] = 1

        document_revision_status_model = {}  # DocumentRevisionStatus
        document_revision_status_model['rev'] = 'testString'
        document_revision_status_model['status'] = 'available'

        document_model = {}  # Document
        document_model['_attachments'] = {}
        document_model['_conflicts'] = ['testString']
        document_model['_deleted'] = True
        document_model['_deleted_conflicts'] = ['testString']
        document_model['_id'] = 'testString'
        document_model['_local_seq'] = 'testString'
        document_model['_rev'] = 'testString'
        document_model['_revisions'] = revisions_model
        document_model['_revs_info'] = [document_revision_status_model]
        document_model['foo'] = 'testString'

        view_result_row_model = {}  # ViewResultRow
        view_result_row_model['caused_by'] = 'testString'
        view_result_row_model['error'] = 'testString'
        view_result_row_model['reason'] = 'testString'
        view_result_row_model['doc'] = document_model
        view_result_row_model['id'] = 'testString'
        view_result_row_model['key'] = 'testString'
        view_result_row_model['value'] = 'testString'

        view_result_model = {}  # ViewResult
        view_result_model['total_rows'] = 0
        view_result_model['update_seq'] = 'testString'
        view_result_model['rows'] = [view_result_row_model]

        # Construct a json representation of a ViewQueriesResult model
        view_queries_result_model_json = {}
        view_queries_result_model_json['results'] = [view_result_model]

        # Construct a model instance of ViewQueriesResult by calling from_dict on the json representation
        view_queries_result_model = ViewQueriesResult.from_dict(view_queries_result_model_json)
        assert view_queries_result_model != False

        # Construct a second model instance from the dict form of the first, using the constructor
        view_queries_result_model_dict = ViewQueriesResult.from_dict(view_queries_result_model_json).__dict__
        view_queries_result_model2 = ViewQueriesResult(**view_queries_result_model_dict)

        # Verify the model instances are equivalent
        assert view_queries_result_model == view_queries_result_model2

        # Convert model instance back to dict and verify no loss of data
        view_queries_result_model_json2 = view_queries_result_model.to_dict()
        assert view_queries_result_model_json2 == view_queries_result_model_json
class TestModel_ViewQuery():
    """
    Test Class for ViewQuery
    """

    def test_view_query_serialization(self):
        """
        Test serialization/deserialization for ViewQuery
        """

        # Construct a json representation of a ViewQuery model
        view_query_model_json = {}
        view_query_model_json['att_encoding_info'] = False
        view_query_model_json['attachments'] = False
        view_query_model_json['conflicts'] = False
        view_query_model_json['descending'] = False
        view_query_model_json['include_docs'] = False
        view_query_model_json['inclusive_end'] = True
        view_query_model_json['limit'] = 0
        view_query_model_json['skip'] = 0
        view_query_model_json['update_seq'] = False
        view_query_model_json['endkey'] = 'testString'
        view_query_model_json['endkey_docid'] = 'testString'
        view_query_model_json['group'] = False
        view_query_model_json['group_level'] = 1
        view_query_model_json['key'] = 'testString'
        view_query_model_json['keys'] = ['testString']
        view_query_model_json['reduce'] = True
        view_query_model_json['stable'] = False
        view_query_model_json['startkey'] = 'testString'
        view_query_model_json['startkey_docid'] = 'testString'
        view_query_model_json['update'] = 'true'

        # Construct a model instance of ViewQuery by calling from_dict on the json representation
        view_query_model = ViewQuery.from_dict(view_query_model_json)
        assert view_query_model != False

        # Construct a model instance of ViewQuery by calling from_dict on the json representation
        view_query_model_dict = ViewQuery.from_dict(view_query_model_json).__dict__
        view_query_model2 = ViewQuery(**view_query_model_dict)

        # Verify the model instances are equivalent
        assert view_query_model == view_query_model2

        # Convert model instance back to dict and verify no loss of data
        view_query_model_json2 = view_query_model.to_dict()
        assert view_query_model_json2 == view_query_model_json
class TestModel_ViewResult():
    """
    Test Class for ViewResult
    """

    def test_view_result_serialization(self):
        """
        Test serialization/deserialization for ViewResult
        """

        # Construct dict forms of any model objects needed in order to build this model.

        attachment_model = {} # Attachment
        attachment_model['content_type'] = 'testString'
        attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
        attachment_model['digest'] = 'testString'
        attachment_model['encoded_length'] = 0
        attachment_model['encoding'] = 'testString'
        attachment_model['follows'] = True
        attachment_model['length'] = 0
        attachment_model['revpos'] = 1
        attachment_model['stub'] = True

        revisions_model = {} # Revisions
        revisions_model['ids'] = ['testString']
        revisions_model['start'] = 1

        document_revision_status_model = {} # DocumentRevisionStatus
        document_revision_status_model['rev'] = 'testString'
        document_revision_status_model['status'] = 'available'

        document_model = {} # Document
        document_model['_attachments'] = {}
        document_model['_conflicts'] = ['testString']
        document_model['_deleted'] = True
        document_model['_deleted_conflicts'] = ['testString']
        document_model['_id'] = 'testString'
        document_model['_local_seq'] = 'testString'
        document_model['_rev'] = 'testString'
        document_model['_revisions'] = revisions_model
        document_model['_revs_info'] = [document_revision_status_model]
        document_model['foo'] = 'testString'

        view_result_row_model = {} # ViewResultRow
        view_result_row_model['caused_by'] = 'testString'
        view_result_row_model['error'] = 'testString'
        view_result_row_model['reason'] = 'testString'
        view_result_row_model['doc'] = document_model
        view_result_row_model['id'] = 'testString'
        view_result_row_model['key'] = 'testString'
        view_result_row_model['value'] = 'testString'

        # Construct a json representation of a ViewResult model
        view_result_model_json = {}
        view_result_model_json['total_rows'] = 0
        view_result_model_json['update_seq'] = 'testString'
        view_result_model_json['rows'] = [view_result_row_model]

        # Construct a model instance of ViewResult by calling from_dict on the json representation
        view_result_model = ViewResult.from_dict(view_result_model_json)
        assert view_result_model != False

        # Construct a model instance of ViewResult by calling from_dict on the json representation
        view_result_model_dict = ViewResult.from_dict(view_result_model_json).__dict__
        view_result_model2 = ViewResult(**view_result_model_dict)

        # Verify the model instances are equivalent
        assert view_result_model == view_result_model2

        # Convert model instance back to dict and verify no loss of data
        view_result_model_json2 = view_result_model.to_dict()
        assert view_result_model_json2 == view_result_model_json
class TestModel_ViewResultRow():
    """
    Test Class for ViewResultRow
    """

    def test_view_result_row_serialization(self):
        """
        Test serialization/deserialization for ViewResultRow
        """

        # Construct dict forms of any model objects needed in order to build this model.

        attachment_model = {} # Attachment
        attachment_model['content_type'] = 'testString'
        attachment_model['data'] = 'VGhpcyBpcyBhIG1vY2sgYnl0ZSBhcnJheSB2YWx1ZS4='
        attachment_model['digest'] = 'testString'
        attachment_model['encoded_length'] = 0
        attachment_model['encoding'] = 'testString'
        attachment_model['follows'] = True
        attachment_model['length'] = 0
        attachment_model['revpos'] = 1
        attachment_model['stub'] = True

        revisions_model = {} # Revisions
        revisions_model['ids'] = ['testString']
        revisions_model['start'] = 1

        document_revision_status_model = {} # DocumentRevisionStatus
        document_revision_status_model['rev'] = 'testString'
        document_revision_status_model['status'] = 'available'

        document_model = {} # Document
        document_model['_attachments'] = {}
        document_model['_conflicts'] = ['testString']
        document_model['_deleted'] = True
        document_model['_deleted_conflicts'] = ['testString']
        document_model['_id'] = 'testString'
        document_model['_local_seq'] = 'testString'
        document_model['_rev'] = 'testString'
        document_model['_revisions'] = revisions_model
        document_model['_revs_info'] = [document_revision_status_model]
        document_model['foo'] = 'testString'

        # Construct a json representation of a ViewResultRow model
        view_result_row_model_json = {}
        view_result_row_model_json['caused_by'] = 'testString'
        view_result_row_model_json['error'] = 'testString'
        view_result_row_model_json['reason'] = 'testString'
        view_result_row_model_json['doc'] = document_model
        view_result_row_model_json['id'] = 'testString'
        view_result_row_model_json['key'] = 'testString'
        view_result_row_model_json['value'] = 'testString'

        # Construct a model instance of ViewResultRow by calling from_dict on the json representation
        view_result_row_model = ViewResultRow.from_dict(view_result_row_model_json)
        assert view_result_row_model != False

        # Construct a model instance of ViewResultRow by calling from_dict on the json representation
        view_result_row_model_dict = ViewResultRow.from_dict(view_result_row_model_json).__dict__
        view_result_row_model2 = ViewResultRow(**view_result_row_model_dict)

        # Verify the model instances are equivalent
        assert view_result_row_model == view_result_row_model2

        # Convert model instance back to dict and verify no loss of data
        view_result_row_model_json2 = view_result_row_model.to_dict()
        assert view_result_row_model_json2 == view_result_row_model_json
class TestModel_GeoJsonGeometry():
    """
    Test Class for GeoJsonGeometry
    """

    def test_geo_json_geometry_serialization(self):
        """
        Test serialization/deserialization for GeoJsonGeometry
        """

        # Construct a json representation of a GeoJsonGeometry model
        geo_json_geometry_model_json = {}
        geo_json_geometry_model_json['type'] = 'Point'
        geo_json_geometry_model_json['coordinates'] = ['testString']

        # Construct a model instance of GeoJsonGeometry by calling from_dict on the json representation
        geo_json_geometry_model = GeoJsonGeometry.from_dict(geo_json_geometry_model_json)
        assert geo_json_geometry_model != False

        # Construct a model instance of GeoJsonGeometry by calling from_dict on the json representation
        geo_json_geometry_model_dict = GeoJsonGeometry.from_dict(geo_json_geometry_model_json).__dict__
        geo_json_geometry_model2 = GeoJsonGeometry(**geo_json_geometry_model_dict)

        # Verify the model instances are equivalent
        assert geo_json_geometry_model == geo_json_geometry_model2

        # Convert model instance back to dict and verify no loss of data
        geo_json_geometry_model_json2 = geo_json_geometry_model.to_dict()
        assert geo_json_geometry_model_json2 == geo_json_geometry_model_json
class TestModel_GeoJsonGeometryCollection():
    """
    Test Class for GeoJsonGeometryCollection
    """

    def test_geo_json_geometry_collection_serialization(self):
        """
        Test serialization/deserialization for GeoJsonGeometryCollection
        """

        # Construct dict forms of any model objects needed in order to build this model.

        geo_json_geometry_model = {} # GeoJsonGeometry
        geo_json_geometry_model['type'] = 'Point'
        geo_json_geometry_model['coordinates'] = ['testString']

        # Construct a json representation of a GeoJsonGeometryCollection model
        geo_json_geometry_collection_model_json = {}
        geo_json_geometry_collection_model_json['type'] = 'Point'
        geo_json_geometry_collection_model_json['geometries'] = [geo_json_geometry_model]

        # Construct a model instance of GeoJsonGeometryCollection by calling from_dict on the json representation
        geo_json_geometry_collection_model = GeoJsonGeometryCollection.from_dict(geo_json_geometry_collection_model_json)
        assert geo_json_geometry_collection_model != False

        # Construct a model instance of GeoJsonGeometryCollection by calling from_dict on the json representation
        geo_json_geometry_collection_model_dict = GeoJsonGeometryCollection.from_dict(geo_json_geometry_collection_model_json).__dict__
        geo_json_geometry_collection_model2 = GeoJsonGeometryCollection(**geo_json_geometry_collection_model_dict)

        # Verify the model instances are equivalent
        assert geo_json_geometry_collection_model == geo_json_geometry_collection_model2

        # Convert model instance back to dict and verify no loss of data
        geo_json_geometry_collection_model_json2 = geo_json_geometry_collection_model.to_dict()
        assert geo_json_geometry_collection_model_json2 == geo_json_geometry_collection_model_json
# endregion
##############################################################################
# End of Model Tests
##############################################################################
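# Every model test above exercises the same round-trip contract: from_dict
# builds an instance from a plain dict, the instance can be rebuilt from its
# __dict__, and to_dict must reproduce the original dict with no loss of data.
# The sketch below shows that contract in isolation with a hypothetical
# ToyModel stand-in (not a class from this SDK).

```python
# Minimal sketch of the from_dict/to_dict round-trip contract exercised by
# the tests above. ToyModel is a hypothetical stand-in, not an SDK class.
class ToyModel:
    def __init__(self, **kwargs):
        # Store every property passed in as an instance attribute.
        self.__dict__.update(kwargs)

    @classmethod
    def from_dict(cls, d):
        # Build an instance from a plain dict representation.
        return cls(**d)

    def to_dict(self):
        # Serialize back to a plain dict; must round-trip losslessly.
        return dict(self.__dict__)

    def __eq__(self, other):
        return isinstance(other, self.__class__) and self.__dict__ == other.__dict__


model_json = {'type': 'Point', 'coordinates': ['testString']}
model = ToyModel.from_dict(model_json)
model2 = ToyModel(**model.__dict__)
assert model == model2
assert model.to_dict() == model_json  # no loss of data
```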
| 41.556629 | 1,470 | 0.647074 | 85,181 | 778,605 | 5.559925 | 0.009357 | 0.019552 | 0.012188 | 0.015938 | 0.94708 | 0.917247 | 0.880572 | 0.836345 | 0.798592 | 0.748767 | 0 | 0.008571 | 0.250639 | 778,605 | 18,735 | 1,471 | 41.558847 | 0.803144 | 0.176193 | 0 | 0.774922 | 0 | 0.010003 | 0.178667 | 0.027602 | 0 | 0 | 0 | 0 | 0.101069 | 1 | 0.058641 | false | 0.001035 | 0.001466 | 0 | 0.095981 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
7915c0398284363846ac6ff5d2243209ff6ba461 | 10,197 | py | Python | src/client/gen-py/cse124/ttypes.py | Atyansh/TwitterRPC | e5e252a3a7289d92af9187d7d6fc1d5bce74c984 | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | src/client/gen-py/cse124/ttypes.py | Atyansh/TwitterRPC | e5e252a3a7289d92af9187d7d6fc1d5bce74c984 | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | src/client/gen-py/cse124/ttypes.py | Atyansh/TwitterRPC | e5e252a3a7289d92af9187d7d6fc1d5bce74c984 | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | #
# Autogenerated by Thrift Compiler (0.9.0)
#
# DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
#
# options string: py
#
from thrift.Thrift import TType, TMessageType, TException, TApplicationException
from thrift.transport import TTransport
from thrift.protocol import TBinaryProtocol, TProtocol
try:
from thrift.protocol import fastbinary
except:
fastbinary = None
class Tweet:
  """
  Attributes:
   - tweetId
   - handle
   - posted
   - numStars
   - tweetString
  """

  thrift_spec = (
    None, # 0
    (1, TType.I64, 'tweetId', None, None, ), # 1
    (2, TType.STRING, 'handle', None, None, ), # 2
    (3, TType.I64, 'posted', None, None, ), # 3
    (4, TType.I32, 'numStars', None, None, ), # 4
    (5, TType.STRING, 'tweetString', None, None, ), # 5
  )

  def __init__(self, tweetId=None, handle=None, posted=None, numStars=None, tweetString=None,):
    self.tweetId = tweetId
    self.handle = handle
    self.posted = posted
    self.numStars = numStars
    self.tweetString = tweetString

  def read(self, iprot):
    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
      return
    iprot.readStructBegin()
    while True:
      (fname, ftype, fid) = iprot.readFieldBegin()
      if ftype == TType.STOP:
        break
      if fid == 1:
        if ftype == TType.I64:
          self.tweetId = iprot.readI64();
        else:
          iprot.skip(ftype)
      elif fid == 2:
        if ftype == TType.STRING:
          self.handle = iprot.readString();
        else:
          iprot.skip(ftype)
      elif fid == 3:
        if ftype == TType.I64:
          self.posted = iprot.readI64();
        else:
          iprot.skip(ftype)
      elif fid == 4:
        if ftype == TType.I32:
          self.numStars = iprot.readI32();
        else:
          iprot.skip(ftype)
      elif fid == 5:
        if ftype == TType.STRING:
          self.tweetString = iprot.readString();
        else:
          iprot.skip(ftype)
      else:
        iprot.skip(ftype)
      iprot.readFieldEnd()
    iprot.readStructEnd()

  def write(self, oprot):
    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
      return
    oprot.writeStructBegin('Tweet')
    if self.tweetId is not None:
      oprot.writeFieldBegin('tweetId', TType.I64, 1)
      oprot.writeI64(self.tweetId)
      oprot.writeFieldEnd()
    if self.handle is not None:
      oprot.writeFieldBegin('handle', TType.STRING, 2)
      oprot.writeString(self.handle)
      oprot.writeFieldEnd()
    if self.posted is not None:
      oprot.writeFieldBegin('posted', TType.I64, 3)
      oprot.writeI64(self.posted)
      oprot.writeFieldEnd()
    if self.numStars is not None:
      oprot.writeFieldBegin('numStars', TType.I32, 4)
      oprot.writeI32(self.numStars)
      oprot.writeFieldEnd()
    if self.tweetString is not None:
      oprot.writeFieldBegin('tweetString', TType.STRING, 5)
      oprot.writeString(self.tweetString)
      oprot.writeFieldEnd()
    oprot.writeFieldStop()
    oprot.writeStructEnd()

  def validate(self):
    return

  def __repr__(self):
    L = ['%s=%r' % (key, value)
      for key, value in self.__dict__.iteritems()]
    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

  def __eq__(self, other):
    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

  def __ne__(self, other):
    return not (self == other)
class AlreadyExistsException(TException):
  """
  Attributes:
   - user
  """

  thrift_spec = (
    None, # 0
    (1, TType.STRING, 'user', None, None, ), # 1
  )

  def __init__(self, user=None,):
    self.user = user

  def read(self, iprot):
    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
      return
    iprot.readStructBegin()
    while True:
      (fname, ftype, fid) = iprot.readFieldBegin()
      if ftype == TType.STOP:
        break
      if fid == 1:
        if ftype == TType.STRING:
          self.user = iprot.readString();
        else:
          iprot.skip(ftype)
      else:
        iprot.skip(ftype)
      iprot.readFieldEnd()
    iprot.readStructEnd()

  def write(self, oprot):
    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
      return
    oprot.writeStructBegin('AlreadyExistsException')
    if self.user is not None:
      oprot.writeFieldBegin('user', TType.STRING, 1)
      oprot.writeString(self.user)
      oprot.writeFieldEnd()
    oprot.writeFieldStop()
    oprot.writeStructEnd()

  def validate(self):
    return

  def __str__(self):
    return repr(self)

  def __repr__(self):
    L = ['%s=%r' % (key, value)
      for key, value in self.__dict__.iteritems()]
    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

  def __eq__(self, other):
    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

  def __ne__(self, other):
    return not (self == other)
class NoSuchUserException(TException):
  """
  Attributes:
   - user
  """

  thrift_spec = (
    None, # 0
    (1, TType.STRING, 'user', None, None, ), # 1
  )

  def __init__(self, user=None,):
    self.user = user

  def read(self, iprot):
    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
      return
    iprot.readStructBegin()
    while True:
      (fname, ftype, fid) = iprot.readFieldBegin()
      if ftype == TType.STOP:
        break
      if fid == 1:
        if ftype == TType.STRING:
          self.user = iprot.readString();
        else:
          iprot.skip(ftype)
      else:
        iprot.skip(ftype)
      iprot.readFieldEnd()
    iprot.readStructEnd()

  def write(self, oprot):
    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
      return
    oprot.writeStructBegin('NoSuchUserException')
    if self.user is not None:
      oprot.writeFieldBegin('user', TType.STRING, 1)
      oprot.writeString(self.user)
      oprot.writeFieldEnd()
    oprot.writeFieldStop()
    oprot.writeStructEnd()

  def validate(self):
    return

  def __str__(self):
    return repr(self)

  def __repr__(self):
    L = ['%s=%r' % (key, value)
      for key, value in self.__dict__.iteritems()]
    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

  def __eq__(self, other):
    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

  def __ne__(self, other):
    return not (self == other)
class TweetTooLongException(TException):

  thrift_spec = (
  )

  def read(self, iprot):
    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
      return
    iprot.readStructBegin()
    while True:
      (fname, ftype, fid) = iprot.readFieldBegin()
      if ftype == TType.STOP:
        break
      else:
        iprot.skip(ftype)
      iprot.readFieldEnd()
    iprot.readStructEnd()

  def write(self, oprot):
    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
      return
    oprot.writeStructBegin('TweetTooLongException')
    oprot.writeFieldStop()
    oprot.writeStructEnd()

  def validate(self):
    return

  def __str__(self):
    return repr(self)

  def __repr__(self):
    L = ['%s=%r' % (key, value)
      for key, value in self.__dict__.iteritems()]
    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

  def __eq__(self, other):
    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

  def __ne__(self, other):
    return not (self == other)
class NoSuchTweetException(TException):

  thrift_spec = (
  )

  def read(self, iprot):
    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
      return
    iprot.readStructBegin()
    while True:
      (fname, ftype, fid) = iprot.readFieldBegin()
      if ftype == TType.STOP:
        break
      else:
        iprot.skip(ftype)
      iprot.readFieldEnd()
    iprot.readStructEnd()

  def write(self, oprot):
    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
      return
    oprot.writeStructBegin('NoSuchTweetException')
    oprot.writeFieldStop()
    oprot.writeStructEnd()

  def validate(self):
    return

  def __str__(self):
    return repr(self)

  def __repr__(self):
    L = ['%s=%r' % (key, value)
      for key, value in self.__dict__.iteritems()]
    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))

  def __eq__(self, other):
    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

  def __ne__(self, other):
    return not (self == other)
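# Every generated class above shares the same value-equality pattern:
# __eq__ compares __dict__ between instances of the same class, and __ne__
# is its negation. The sketch below shows that pattern in isolation with a
# hypothetical Record class, independent of the Thrift runtime.

```python
class Record:
    # Hypothetical stand-in illustrating the __dict__-based equality used by
    # the generated Thrift structs above (not part of the generated module).
    def __init__(self, tweetId=None, handle=None):
        self.tweetId = tweetId
        self.handle = handle

    def __eq__(self, other):
        # Equal only when the other object is the same class and every
        # attribute matches.
        return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

    def __ne__(self, other):
        return not (self == other)


assert Record(1, 'a') == Record(1, 'a')
assert Record(1, 'a') != Record(2, 'a')
assert Record(1, 'a') != 'not a record'  # different type is never equal
```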
| 29.728863 | 188 | 0.661567 | 1,195 | 10,197 | 5.374895 | 0.097071 | 0.021018 | 0.037833 | 0.033629 | 0.803986 | 0.770512 | 0.757123 | 0.757123 | 0.745602 | 0.745602 | 0 | 0.008052 | 0.220555 | 10,197 | 342 | 189 | 29.815789 | 0.800075 | 0.025105 | 0 | 0.779468 | 1 | 0 | 0.024704 | 0.004354 | 0 | 0 | 0 | 0 | 0 | 1 | 0.140684 | false | 0 | 0.015209 | 0.072243 | 0.323194 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
793a89402e7ef1e5ab1cd6a85bcd8f878e764b5e | 132 | py | Python | run.py | JonathanRys/better_lesson_code_test | aaa9d16f2ddb6ad0aacaa3ef6df9f36f14e3aa51 | [
"MIT"
] | null | null | null | run.py | JonathanRys/better_lesson_code_test | aaa9d16f2ddb6ad0aacaa3ef6df9f36f14e3aa51 | [
"MIT"
] | null | null | null | run.py | JonathanRys/better_lesson_code_test | aaa9d16f2ddb6ad0aacaa3ef6df9f36f14e3aa51 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
from better_lesson_code_test.better_lesson_code_test import run_app
if __name__ == '__main__':
run_app()
| 22 | 67 | 0.780303 | 21 | 132 | 4.142857 | 0.714286 | 0.275862 | 0.367816 | 0.45977 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 132 | 5 | 68 | 26.4 | 0.75 | 0.151515 | 0 | 0 | 0 | 0 | 0.072072 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
f707a796c39cbd67c8af787b6b9a266f5e4f040b | 2,578 | py | Python | tests/test_validator_create.py | VariantEffect/fqfa | f99e96009a05a24ca6aabbf9e04d3cf87fe00cb4 | [
"BSD-3-Clause"
] | 2 | 2019-12-24T06:53:57.000Z | 2020-01-15T21:06:24.000Z | tests/test_validator_create.py | VariantEffect/fqfa | f99e96009a05a24ca6aabbf9e04d3cf87fe00cb4 | [
"BSD-3-Clause"
] | 3 | 2020-01-23T03:34:47.000Z | 2020-02-20T10:22:23.000Z | tests/test_validator_create.py | VariantEffect/fqfa | f99e96009a05a24ca6aabbf9e04d3cf87fe00cb4 | [
"BSD-3-Clause"
] | 2 | 2020-02-11T23:39:11.000Z | 2020-03-28T22:00:24.000Z | import unittest
from fqfa.validator.create import create_validator
class TestCreateValidator(unittest.TestCase):
def test_create_from_string(self) -> None:
# case sensitive
validator = create_validator("ACGT")
# test valid strings
self.assertIsNotNone(validator("ACGT"))
self.assertIsNotNone(validator("AAAAAAA"))
# test invalid strings
self.assertIsNone(validator("acgt"))
self.assertIsNone(validator("AAAAAAa"))
self.assertIsNone(validator(""))
self.assertIsNone(validator("123"))
self.assertIsNone(validator("AAAA AAA"))
# case insensitive
validator = create_validator("ACGT", case_sensitive=False)
# test valid strings
self.assertIsNotNone(validator("ACGT"))
self.assertIsNotNone(validator("AAAAAAA"))
self.assertIsNotNone(validator("acgt"))
self.assertIsNotNone(validator("AAAAAAa"))
# test invalid strings
self.assertIsNone(validator(""))
self.assertIsNone(validator("123"))
self.assertIsNone(validator("AAAA AAA"))
def test_create_from_list(self) -> None:
# case sensitive
validator = create_validator(list("ACGT"))
# test valid strings
self.assertIsNotNone(validator("ACGT"))
self.assertIsNotNone(validator("AAAAAAA"))
# test invalid strings
self.assertIsNone(validator("acgt"))
self.assertIsNone(validator("AAAAAAa"))
self.assertIsNone(validator(""))
self.assertIsNone(validator("123"))
self.assertIsNone(validator("AAAA AAA"))
# case insensitive
validator = create_validator(list("ACGT"), case_sensitive=False)
# test valid strings
self.assertIsNotNone(validator("ACGT"))
self.assertIsNotNone(validator("AAAAAAA"))
self.assertIsNotNone(validator("acgt"))
self.assertIsNotNone(validator("AAAAAAa"))
# test invalid strings
self.assertIsNone(validator(""))
self.assertIsNone(validator("123"))
self.assertIsNone(validator("AAAA AAA"))
# invalid list arguments
self.assertRaises(ValueError, create_validator, ["A", "C", "GT"])
self.assertRaises(
ValueError, create_validator, ["A", "C", "GT"], case_sensitive=False
)
self.assertRaises(ValueError, create_validator, ["A", "C", ""])
self.assertRaises(
ValueError, create_validator, ["A", "C", ""], case_sensitive=False
)
if __name__ == "__main__":
unittest.main()
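# The tests above pin down create_validator's contract: it returns a callable
# that yields a match object for strings built only from the allowed
# characters (and None otherwise), supports a case_sensitive flag, and raises
# ValueError for list inputs whose elements are not single characters. The
# sketch below is one implementation consistent with those tests; it is an
# assumption for illustration, not fqfa's actual source.

```python
import re


def create_validator(valid_chars, case_sensitive=True):
    # Hypothetical sketch of a character-set validator factory consistent
    # with the tests above; fqfa's real implementation may differ.
    if not isinstance(valid_chars, str):
        # Lists (or other iterables) must contain single characters only.
        if any(len(c) != 1 for c in valid_chars):
            raise ValueError("list elements must be single characters")
        valid_chars = "".join(valid_chars)
    flags = 0 if case_sensitive else re.IGNORECASE
    pattern = re.compile("^[" + re.escape(valid_chars) + "]+$", flags)
    # Pattern.match returns a match object for valid strings, None otherwise;
    # the ^...$ anchors reject empty strings and embedded invalid characters.
    return pattern.match


validator = create_validator("ACGT")
assert validator("AAAAAAA") is not None
assert validator("acgt") is None  # case-sensitive by default
```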
| 33.051282 | 80 | 0.640419 | 239 | 2,578 | 6.794979 | 0.158996 | 0.157635 | 0.246305 | 0.118227 | 0.864532 | 0.859606 | 0.859606 | 0.751232 | 0.695813 | 0.695813 | 0 | 0.006073 | 0.233514 | 2,578 | 77 | 81 | 33.480519 | 0.815789 | 0.095423 | 0 | 0.638298 | 0 | 0 | 0.072476 | 0 | 0 | 0 | 0 | 0 | 0.680851 | 1 | 0.042553 | false | 0 | 0.042553 | 0 | 0.106383 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f748f37d68d2e11bb2e0ff359cd1df4633cf0d23 | 51,569 | py | Python | src/rsc_rc.py | mjbonnington/uistyle | f2102e408a9ba6aa7cc7f37ec3f4a4d3bf134b10 | [
"MIT"
] | null | null | null | src/rsc_rc.py | mjbonnington/uistyle | f2102e408a9ba6aa7cc7f37ec3f4a4d3bf134b10 | [
"MIT"
] | null | null | null | src/rsc_rc.py | mjbonnington/uistyle | f2102e408a9ba6aa7cc7f37ec3f4a4d3bf134b10 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Resource object code
#
# Created by: The Resource Compiler for PyQt5 (Qt v5.9.1)
#
# WARNING! All changes made in this file will be lost!
from Qt import QtCore
qt_resource_data = b"\
\x00\x00\x00\x74\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x1f\x00\x00\x00\x1f\x08\x06\x00\x00\x00\x1f\xae\x16\x39\
\x00\x00\x00\x3b\x49\x44\x41\x54\x48\x89\xed\xd5\xb1\x0d\x00\x20\
\x0c\x03\x41\x87\x61\xb3\x93\xa7\x85\x26\x13\x60\x24\x9a\xbf\xde\
\xfa\xd2\xa5\x40\x77\x6f\x49\x65\xfb\x6a\xbf\x92\x78\x8a\x38\x71\
\xe2\xc4\x89\x13\x7f\xaa\x92\xf1\x5c\x2a\x00\x00\xf8\xeb\x00\x57\
\xcc\x06\x4d\x56\xa3\x34\x4b\x00\x00\x00\x00\x49\x45\x4e\x44\xae\
\x42\x60\x82\
\x00\x00\x00\x66\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x09\x00\x00\x00\x09\x08\x06\x00\x00\x00\xe0\x91\x06\x10\
\x00\x00\x00\x2d\x49\x44\x41\x54\x18\x95\x63\x60\xa0\x2b\x60\x64\
\x60\x60\x60\x30\x36\x36\xfe\x8f\x4b\xc1\xd9\xb3\x67\x19\x99\x90\
\x15\xe3\x32\x84\x09\x5d\x00\x1b\x9f\x09\x87\x04\x2e\x93\x29\x04\
\x00\x23\x70\x04\x0e\x7e\xcf\x74\x12\x00\x00\x00\x00\x49\x45\x4e\
\x44\xae\x42\x60\x82\
\x00\x00\x03\xb6\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x09\x00\x00\x00\x09\x08\x06\x00\x00\x00\xe0\x91\x06\x10\
\x00\x00\x00\x19\x74\x45\x58\x74\x53\x6f\x66\x74\x77\x61\x72\x65\
\x00\x41\x64\x6f\x62\x65\x20\x49\x6d\x61\x67\x65\x52\x65\x61\x64\
\x79\x71\xc9\x65\x3c\x00\x00\x03\x22\x69\x54\x58\x74\x58\x4d\x4c\
\x3a\x63\x6f\x6d\x2e\x61\x64\x6f\x62\x65\x2e\x78\x6d\x70\x00\x00\
\x00\x00\x00\x3c\x3f\x78\x70\x61\x63\x6b\x65\x74\x20\x62\x65\x67\
\x69\x6e\x3d\x22\xef\xbb\xbf\x22\x20\x69\x64\x3d\x22\x57\x35\x4d\
\x30\x4d\x70\x43\x65\x68\x69\x48\x7a\x72\x65\x53\x7a\x4e\x54\x63\
\x7a\x6b\x63\x39\x64\x22\x3f\x3e\x20\x3c\x78\x3a\x78\x6d\x70\x6d\
\x65\x74\x61\x20\x78\x6d\x6c\x6e\x73\x3a\x78\x3d\x22\x61\x64\x6f\
\x62\x65\x3a\x6e\x73\x3a\x6d\x65\x74\x61\x2f\x22\x20\x78\x3a\x78\
\x6d\x70\x74\x6b\x3d\x22\x41\x64\x6f\x62\x65\x20\x58\x4d\x50\x20\
\x43\x6f\x72\x65\x20\x35\x2e\x33\x2d\x63\x30\x31\x31\x20\x36\x36\
\x2e\x31\x34\x35\x36\x36\x31\x2c\x20\x32\x30\x31\x32\x2f\x30\x32\
\x2f\x30\x36\x2d\x31\x34\x3a\x35\x36\x3a\x32\x37\x20\x20\x20\x20\
\x20\x20\x20\x20\x22\x3e\x20\x3c\x72\x64\x66\x3a\x52\x44\x46\x20\
\x78\x6d\x6c\x6e\x73\x3a\x72\x64\x66\x3d\x22\x68\x74\x74\x70\x3a\
\x2f\x2f\x77\x77\x77\x2e\x77\x33\x2e\x6f\x72\x67\x2f\x31\x39\x39\
\x39\x2f\x30\x32\x2f\x32\x32\x2d\x72\x64\x66\x2d\x73\x79\x6e\x74\
\x61\x78\x2d\x6e\x73\x23\x22\x3e\x20\x3c\x72\x64\x66\x3a\x44\x65\
\x73\x63\x72\x69\x70\x74\x69\x6f\x6e\x20\x72\x64\x66\x3a\x61\x62\
\x6f\x75\x74\x3d\x22\x22\x20\x78\x6d\x6c\x6e\x73\x3a\x78\x6d\x70\
\x3d\x22\x68\x74\x74\x70\x3a\x2f\x2f\x6e\x73\x2e\x61\x64\x6f\x62\
\x65\x2e\x63\x6f\x6d\x2f\x78\x61\x70\x2f\x31\x2e\x30\x2f\x22\x20\
\x78\x6d\x6c\x6e\x73\x3a\x78\x6d\x70\x4d\x4d\x3d\x22\x68\x74\x74\
\x70\x3a\x2f\x2f\x6e\x73\x2e\x61\x64\x6f\x62\x65\x2e\x63\x6f\x6d\
\x2f\x78\x61\x70\x2f\x31\x2e\x30\x2f\x6d\x6d\x2f\x22\x20\x78\x6d\
\x6c\x6e\x73\x3a\x73\x74\x52\x65\x66\x3d\x22\x68\x74\x74\x70\x3a\
\x2f\x2f\x6e\x73\x2e\x61\x64\x6f\x62\x65\x2e\x63\x6f\x6d\x2f\x78\
\x61\x70\x2f\x31\x2e\x30\x2f\x73\x54\x79\x70\x65\x2f\x52\x65\x73\
\x6f\x75\x72\x63\x65\x52\x65\x66\x23\x22\x20\x78\x6d\x70\x3a\x43\
\x72\x65\x61\x74\x6f\x72\x54\x6f\x6f\x6c\x3d\x22\x41\x64\x6f\x62\
\x65\x20\x50\x68\x6f\x74\x6f\x73\x68\x6f\x70\x20\x43\x53\x36\x20\
\x28\x57\x69\x6e\x64\x6f\x77\x73\x29\x22\x20\x78\x6d\x70\x4d\x4d\
\x3a\x49\x6e\x73\x74\x61\x6e\x63\x65\x49\x44\x3d\x22\x78\x6d\x70\
\x2e\x69\x69\x64\x3a\x33\x41\x36\x34\x43\x30\x31\x45\x45\x38\x42\
\x44\x31\x31\x45\x38\x38\x32\x39\x37\x42\x37\x46\x42\x46\x42\x34\
\x45\x32\x41\x30\x31\x22\x20\x78\x6d\x70\x4d\x4d\x3a\x44\x6f\x63\
\x75\x6d\x65\x6e\x74\x49\x44\x3d\x22\x78\x6d\x70\x2e\x64\x69\x64\
\x3a\x33\x41\x36\x34\x43\x30\x31\x46\x45\x38\x42\x44\x31\x31\x45\
\x38\x38\x32\x39\x37\x42\x37\x46\x42\x46\x42\x34\x45\x32\x41\x30\
\x31\x22\x3e\x20\x3c\x78\x6d\x70\x4d\x4d\x3a\x44\x65\x72\x69\x76\
\x65\x64\x46\x72\x6f\x6d\x20\x73\x74\x52\x65\x66\x3a\x69\x6e\x73\
\x74\x61\x6e\x63\x65\x49\x44\x3d\x22\x78\x6d\x70\x2e\x69\x69\x64\
\x3a\x33\x41\x36\x34\x43\x30\x31\x43\x45\x38\x42\x44\x31\x31\x45\
\x38\x38\x32\x39\x37\x42\x37\x46\x42\x46\x42\x34\x45\x32\x41\x30\
\x31\x22\x20\x73\x74\x52\x65\x66\x3a\x64\x6f\x63\x75\x6d\x65\x6e\
\x74\x49\x44\x3d\x22\x78\x6d\x70\x2e\x64\x69\x64\x3a\x33\x41\x36\
\x34\x43\x30\x31\x44\x45\x38\x42\x44\x31\x31\x45\x38\x38\x32\x39\
\x37\x42\x37\x46\x42\x46\x42\x34\x45\x32\x41\x30\x31\x22\x2f\x3e\
\x20\x3c\x2f\x72\x64\x66\x3a\x44\x65\x73\x63\x72\x69\x70\x74\x69\
\x6f\x6e\x3e\x20\x3c\x2f\x72\x64\x66\x3a\x52\x44\x46\x3e\x20\x3c\
\x2f\x78\x3a\x78\x6d\x70\x6d\x65\x74\x61\x3e\x20\x3c\x3f\x78\x70\
\x61\x63\x6b\x65\x74\x20\x65\x6e\x64\x3d\x22\x72\x22\x3f\x3e\xa2\
\x0b\x58\x7d\x00\x00\x00\x2a\x49\x44\x41\x54\x78\xda\x62\xfc\xff\
\xff\x3f\x03\x21\xc0\xc4\x40\x04\x60\x81\x31\xce\x9e\x3d\x8b\x61\
\xa4\xb1\xb1\x31\x23\xd1\x26\xd1\x59\x11\x23\xd5\x82\x00\x20\xc0\
\x00\x99\x75\x0a\x0d\x07\x98\x52\x81\x00\x00\x00\x00\x49\x45\x4e\
\x44\xae\x42\x60\x82\
\x00\x00\x03\xcc\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x09\x00\x00\x00\x09\x08\x06\x00\x00\x00\xe0\x91\x06\x10\
\x00\x00\x00\x19\x74\x45\x58\x74\x53\x6f\x66\x74\x77\x61\x72\x65\
\x00\x41\x64\x6f\x62\x65\x20\x49\x6d\x61\x67\x65\x52\x65\x61\x64\
\x79\x71\xc9\x65\x3c\x00\x00\x03\x22\x69\x54\x58\x74\x58\x4d\x4c\
\x3a\x63\x6f\x6d\x2e\x61\x64\x6f\x62\x65\x2e\x78\x6d\x70\x00\x00\
\x00\x00\x00\x3c\x3f\x78\x70\x61\x63\x6b\x65\x74\x20\x62\x65\x67\
\x69\x6e\x3d\x22\xef\xbb\xbf\x22\x20\x69\x64\x3d\x22\x57\x35\x4d\
\x30\x4d\x70\x43\x65\x68\x69\x48\x7a\x72\x65\x53\x7a\x4e\x54\x63\
\x7a\x6b\x63\x39\x64\x22\x3f\x3e\x20\x3c\x78\x3a\x78\x6d\x70\x6d\
\x65\x74\x61\x20\x78\x6d\x6c\x6e\x73\x3a\x78\x3d\x22\x61\x64\x6f\
\x62\x65\x3a\x6e\x73\x3a\x6d\x65\x74\x61\x2f\x22\x20\x78\x3a\x78\
\x6d\x70\x74\x6b\x3d\x22\x41\x64\x6f\x62\x65\x20\x58\x4d\x50\x20\
\x43\x6f\x72\x65\x20\x35\x2e\x33\x2d\x63\x30\x31\x31\x20\x36\x36\
\x2e\x31\x34\x35\x36\x36\x31\x2c\x20\x32\x30\x31\x32\x2f\x30\x32\
\x2f\x30\x36\x2d\x31\x34\x3a\x35\x36\x3a\x32\x37\x20\x20\x20\x20\
\x20\x20\x20\x20\x22\x3e\x20\x3c\x72\x64\x66\x3a\x52\x44\x46\x20\
\x78\x6d\x6c\x6e\x73\x3a\x72\x64\x66\x3d\x22\x68\x74\x74\x70\x3a\
\x2f\x2f\x77\x77\x77\x2e\x77\x33\x2e\x6f\x72\x67\x2f\x31\x39\x39\
\x39\x2f\x30\x32\x2f\x32\x32\x2d\x72\x64\x66\x2d\x73\x79\x6e\x74\
\x61\x78\x2d\x6e\x73\x23\x22\x3e\x20\x3c\x72\x64\x66\x3a\x44\x65\
\x73\x63\x72\x69\x70\x74\x69\x6f\x6e\x20\x72\x64\x66\x3a\x61\x62\
\x6f\x75\x74\x3d\x22\x22\x20\x78\x6d\x6c\x6e\x73\x3a\x78\x6d\x70\
\x3d\x22\x68\x74\x74\x70\x3a\x2f\x2f\x6e\x73\x2e\x61\x64\x6f\x62\
\x65\x2e\x63\x6f\x6d\x2f\x78\x61\x70\x2f\x31\x2e\x30\x2f\x22\x20\
\x78\x6d\x6c\x6e\x73\x3a\x78\x6d\x70\x4d\x4d\x3d\x22\x68\x74\x74\
\x70\x3a\x2f\x2f\x6e\x73\x2e\x61\x64\x6f\x62\x65\x2e\x63\x6f\x6d\
\x2f\x78\x61\x70\x2f\x31\x2e\x30\x2f\x6d\x6d\x2f\x22\x20\x78\x6d\
\x6c\x6e\x73\x3a\x73\x74\x52\x65\x66\x3d\x22\x68\x74\x74\x70\x3a\
\x2f\x2f\x6e\x73\x2e\x61\x64\x6f\x62\x65\x2e\x63\x6f\x6d\x2f\x78\
\x61\x70\x2f\x31\x2e\x30\x2f\x73\x54\x79\x70\x65\x2f\x52\x65\x73\
\x6f\x75\x72\x63\x65\x52\x65\x66\x23\x22\x20\x78\x6d\x70\x3a\x43\
\x72\x65\x61\x74\x6f\x72\x54\x6f\x6f\x6c\x3d\x22\x41\x64\x6f\x62\
\x65\x20\x50\x68\x6f\x74\x6f\x73\x68\x6f\x70\x20\x43\x53\x36\x20\
\x28\x57\x69\x6e\x64\x6f\x77\x73\x29\x22\x20\x78\x6d\x70\x4d\x4d\
\x3a\x49\x6e\x73\x74\x61\x6e\x63\x65\x49\x44\x3d\x22\x78\x6d\x70\
\x2e\x69\x69\x64\x3a\x30\x36\x37\x43\x46\x33\x34\x35\x45\x38\x42\
\x43\x31\x31\x45\x38\x41\x34\x34\x37\x46\x31\x46\x34\x42\x34\x36\
\x37\x45\x41\x45\x37\x22\x20\x78\x6d\x70\x4d\x4d\x3a\x44\x6f\x63\
\x75\x6d\x65\x6e\x74\x49\x44\x3d\x22\x78\x6d\x70\x2e\x64\x69\x64\
\x3a\x30\x36\x37\x43\x46\x33\x34\x36\x45\x38\x42\x43\x31\x31\x45\
\x38\x41\x34\x34\x37\x46\x31\x46\x34\x42\x34\x36\x37\x45\x41\x45\
\x37\x22\x3e\x20\x3c\x78\x6d\x70\x4d\x4d\x3a\x44\x65\x72\x69\x76\
\x65\x64\x46\x72\x6f\x6d\x20\x73\x74\x52\x65\x66\x3a\x69\x6e\x73\
\x74\x61\x6e\x63\x65\x49\x44\x3d\x22\x78\x6d\x70\x2e\x69\x69\x64\
\x3a\x30\x36\x37\x43\x46\x33\x34\x33\x45\x38\x42\x43\x31\x31\x45\
\x38\x41\x34\x34\x37\x46\x31\x46\x34\x42\x34\x36\x37\x45\x41\x45\
\x37\x22\x20\x73\x74\x52\x65\x66\x3a\x64\x6f\x63\x75\x6d\x65\x6e\
\x74\x49\x44\x3d\x22\x78\x6d\x70\x2e\x64\x69\x64\x3a\x30\x36\x37\
\x43\x46\x33\x34\x34\x45\x38\x42\x43\x31\x31\x45\x38\x41\x34\x34\
\x37\x46\x31\x46\x34\x42\x34\x36\x37\x45\x41\x45\x37\x22\x2f\x3e\
\x20\x3c\x2f\x72\x64\x66\x3a\x44\x65\x73\x63\x72\x69\x70\x74\x69\
\x6f\x6e\x3e\x20\x3c\x2f\x72\x64\x66\x3a\x52\x44\x46\x3e\x20\x3c\
\x2f\x78\x3a\x78\x6d\x70\x6d\x65\x74\x61\x3e\x20\x3c\x3f\x78\x70\
\x61\x63\x6b\x65\x74\x20\x65\x6e\x64\x3d\x22\x72\x22\x3f\x3e\x98\
\xe2\x1c\x8b\x00\x00\x00\x40\x49\x44\x41\x54\x78\xda\x62\xfc\xff\
\xff\x3f\x03\x21\xc0\x82\xcc\x39\x7b\xf6\x2c\x48\x07\x23\x8c\x6f\
\x6c\x6c\x0c\xa6\x99\xb0\x68\xc4\x30\x9a\x09\x87\x0d\xff\x89\x51\
\x84\xa2\x10\x9f\x22\x46\x42\x8a\x18\x09\x59\xc7\x88\xec\x33\xb0\
\x00\x31\xe1\x04\x10\x60\x00\x1f\x57\x0d\xa6\xcc\xae\xd6\xc8\x00\
\x00\x00\x00\x49\x45\x4e\x44\xae\x42\x60\x82\
\x00\x00\x00\x6a\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x1f\x00\x00\x00\x1f\x08\x06\x00\x00\x00\x1f\xae\x16\x39\
\x00\x00\x00\x31\x49\x44\x41\x54\x48\x89\xed\xcd\xb1\x0d\x00\x30\
\x0c\xc3\xb0\x20\xc7\xfa\xff\x0f\xda\x39\xb3\x57\x6a\x17\x38\x53\
\x94\xe4\x35\xff\x36\x73\x1b\x1c\x0e\x87\xc3\xe1\x70\x38\x1c\x0e\
\x87\xc3\xe1\xa7\x0f\x79\x12\x02\x6e\xab\x28\xbe\xf1\x00\x00\x00\
\x00\x49\x45\x4e\x44\xae\x42\x60\x82\
\x00\x00\x00\x92\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x1f\x00\x00\x00\x1f\x08\x06\x00\x00\x00\x1f\xae\x16\x39\
\x00\x00\x00\x59\x49\x44\x41\x54\x48\x89\xed\xd6\x31\x0a\x00\x21\
\x0c\x44\xd1\x64\xd9\xb3\x26\x67\xca\x69\xd7\xc6\xce\x46\x1c\x41\
\x58\xff\xf4\xc3\x43\x09\x24\x6e\x42\x22\xe2\x33\x33\xaf\xaa\xa5\
\xfe\xa3\xe0\x6a\xc0\xc1\xc1\xc1\xc1\xc1\xb7\xc6\x95\x72\x5f\xa9\
\xcb\x79\x95\x72\xcf\xb0\xcf\x33\x73\xaa\x78\xf4\xdb\x77\xbc\x7c\
\xc8\xec\x65\x73\xef\xb4\x83\x83\x83\x83\xff\x17\x6f\xcc\x54\x0b\
\x3b\xe0\x85\xdc\x63\x00\x00\x00\x00\x49\x45\x4e\x44\xae\x42\x60\
\x82\
\x00\x00\x00\x71\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x09\x00\x00\x00\x09\x08\x06\x00\x00\x00\xe0\x91\x06\x10\
\x00\x00\x00\x38\x49\x44\x41\x54\x18\x95\xbd\xcb\xc1\x0d\x00\x20\
\x0c\x42\xd1\xa7\x2b\x75\xff\x11\x3a\x93\x9e\x4c\xaa\x89\x37\x23\
\x17\xc2\x07\x78\xa5\x06\x11\x51\xd9\x58\x1c\x32\x53\x3f\x4e\xe3\
\x70\xd8\x46\x5b\x51\x73\xbf\x0c\x6e\xc7\x1f\x9a\x83\xe9\x0a\x02\
\xd2\xeb\x37\x71\x00\x00\x00\x00\x49\x45\x4e\x44\xae\x42\x60\x82\
\
\x00\x00\x00\xbe\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x05\x00\x00\x00\x05\x08\x06\x00\x00\x00\x8d\x6f\x26\xe5\
\x00\x00\x00\x04\x67\x41\x4d\x41\x00\x00\xb1\x8f\x0b\xfc\x61\x05\
\x00\x00\x00\x20\x63\x48\x52\x4d\x00\x00\x7a\x26\x00\x00\x80\x84\
\x00\x00\xfa\x00\x00\x00\x80\xe8\x00\x00\x75\x30\x00\x00\xea\x60\
\x00\x00\x3a\x98\x00\x00\x17\x70\x9c\xba\x51\x3c\x00\x00\x00\x06\
\x62\x4b\x47\x44\x00\x00\x00\x00\x00\x00\xf9\x43\xbb\x7f\x00\x00\
\x00\x09\x70\x48\x59\x73\x00\x00\x0b\x12\x00\x00\x0b\x12\x01\xd2\
\xdd\x7e\xfc\x00\x00\x00\x22\x49\x44\x41\x54\x08\xd7\x5d\xc8\x31\
\x01\x00\x30\x0c\x02\xb0\xa0\x7c\x15\x88\xa8\xfd\xe4\x0c\xa3\xed\
\x65\x03\x2f\x1b\x90\x0d\xc8\x06\x7c\x46\xd2\x10\x75\x18\x9e\xcf\
\xd6\x00\x00\x00\x00\x49\x45\x4e\x44\xae\x42\x60\x82\
\x00\x00\x00\x77\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x09\x00\x00\x00\x09\x08\x06\x00\x00\x00\xe0\x91\x06\x10\
\x00\x00\x00\x3e\x49\x44\x41\x54\x18\x95\x85\xcd\xc1\x11\x00\x30\
\x04\x44\xd1\x2f\x2d\xe9\xbf\x36\x39\xc9\x88\x10\x4e\x6b\xe7\x0d\
\xa2\xaa\x0c\x63\x6b\x02\x00\x3f\x64\x1e\x3a\x64\x21\xcb\x4a\xc5\
\x03\xe2\x25\xeb\x40\x7e\x57\x02\x47\x57\x51\xec\xe7\x92\x74\x00\
\x60\x03\x7a\x63\x09\xaa\xdb\xad\xe8\x0b\x00\x00\x00\x00\x49\x45\
\x4e\x44\xae\x42\x60\x82\
\x00\x00\x00\xa2\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x09\x00\x00\x00\x09\x08\x06\x00\x00\x00\xe0\x91\x06\x10\
\x00\x00\x00\x69\x49\x44\x41\x54\x18\x95\x8d\xcf\x31\x0a\xc3\x30\
\x10\x44\xd1\x17\x13\xf0\x85\xd4\x0a\xd2\xe5\x0a\x2e\x7d\xac\x94\
\x2a\x7d\x0f\xb5\xba\x90\xbb\x34\x6b\x23\x27\x08\x3c\xd5\xec\xf0\
\x61\x67\xb8\xa1\x07\xb4\xd6\x60\xc5\x82\x8c\x8a\x0d\x25\xa5\xe4\
\x19\xf0\x8a\x0f\xe6\xb8\xdf\x78\x85\x2f\x53\x98\xa5\x03\x0e\xcd\
\x91\x3b\xa0\x3c\xa8\x93\x7b\xa8\x0e\xa0\xda\x43\x1b\xf6\x1f\x60\
\x8f\xfc\x2c\x5e\xba\x6e\x97\x75\x83\x0f\xff\xfa\x02\x09\x67\x11\
\x78\xb7\x1a\x2c\xbd\x00\x00\x00\x00\x49\x45\x4e\x44\xae\x42\x60\
\x82\
\x00\x00\x00\xb8\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x09\x00\x00\x00\x09\x08\x06\x00\x00\x00\xe0\x91\x06\x10\
\x00\x00\x00\x7f\x49\x44\x41\x54\x18\x95\x8d\xcd\xab\x11\xc2\x40\
\x18\x04\xe0\x0f\xe8\x22\x6d\xa4\x81\xb3\x51\x91\xb1\x48\x0a\x60\
\x28\x02\x98\xc1\x22\xb1\x91\xa7\x30\x88\x2b\x80\x74\x80\xbf\x36\
\x30\x7f\x98\x0c\x2a\x6b\x76\xf6\x31\xbb\xac\xc0\x06\xda\xb6\xbd\
\x23\xe3\xb9\xc8\x3a\xf4\xd3\x34\x1d\xb6\x61\x64\x3c\x90\x42\xa7\
\xd0\x19\x76\xd0\x34\xcd\x07\x6f\x8c\xb1\x7e\xc3\x80\x57\xad\xd5\
\xbc\x04\x05\x57\x9c\x83\xcb\x1c\x2c\x4b\x09\x47\x9c\x82\xd3\x7f\
\xa9\x8b\xab\x01\x97\xe0\x31\xfc\x5f\xa9\xc7\x7e\x71\x51\x42\xf7\
\xd6\xe2\x0b\x67\xa1\x1b\x23\x0e\x49\xd4\x13\x00\x00\x00\x00\x49\
\x45\x4e\x44\xae\x42\x60\x82\
\x00\x00\x00\xab\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x09\x00\x00\x00\x09\x08\x06\x00\x00\x00\xe0\x91\x06\x10\
\x00\x00\x00\x72\x49\x44\x41\x54\x18\x95\x8d\xcd\xa1\x11\x02\x31\
\x14\x84\xe1\x0f\x6a\x4a\x01\xb1\x51\x27\xcf\x22\x29\x00\x28\x02\
\xae\x00\x24\x16\x19\x75\x36\x05\xa4\x28\xcc\xcb\x4c\x06\x75\x6b\
\x76\xf6\xed\x3f\x6f\x39\xa0\x13\xf4\xde\xdf\xa8\xd8\xa7\xae\x60\
\x49\x29\x5d\xcf\x71\xa8\xf8\x20\x47\xce\x91\x2b\x0c\x68\xc7\x8a\
\x2f\xee\xe1\xeb\xf8\x3c\x20\x68\xd8\xf0\x0c\x6f\xa3\x98\xa1\x8c\
\x1b\x1e\xe1\xf9\x1f\x2a\xd3\xc4\x6b\x9a\x2e\x33\xb4\xe0\x32\x4d\
\xb4\xc8\x8b\xa3\xfa\x01\x47\x23\x18\x0e\xd6\x51\xab\xd9\x00\x00\
\x00\x00\x49\x45\x4e\x44\xae\x42\x60\x82\
\x00\x00\x03\xc7\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x09\x00\x00\x00\x09\x08\x06\x00\x00\x00\xe0\x91\x06\x10\
\x00\x00\x00\x19\x74\x45\x58\x74\x53\x6f\x66\x74\x77\x61\x72\x65\
\x00\x41\x64\x6f\x62\x65\x20\x49\x6d\x61\x67\x65\x52\x65\x61\x64\
\x79\x71\xc9\x65\x3c\x00\x00\x03\x22\x69\x54\x58\x74\x58\x4d\x4c\
\x3a\x63\x6f\x6d\x2e\x61\x64\x6f\x62\x65\x2e\x78\x6d\x70\x00\x00\
\x00\x00\x00\x3c\x3f\x78\x70\x61\x63\x6b\x65\x74\x20\x62\x65\x67\
\x69\x6e\x3d\x22\xef\xbb\xbf\x22\x20\x69\x64\x3d\x22\x57\x35\x4d\
\x30\x4d\x70\x43\x65\x68\x69\x48\x7a\x72\x65\x53\x7a\x4e\x54\x63\
\x7a\x6b\x63\x39\x64\x22\x3f\x3e\x20\x3c\x78\x3a\x78\x6d\x70\x6d\
\x65\x74\x61\x20\x78\x6d\x6c\x6e\x73\x3a\x78\x3d\x22\x61\x64\x6f\
\x62\x65\x3a\x6e\x73\x3a\x6d\x65\x74\x61\x2f\x22\x20\x78\x3a\x78\
\x6d\x70\x74\x6b\x3d\x22\x41\x64\x6f\x62\x65\x20\x58\x4d\x50\x20\
\x43\x6f\x72\x65\x20\x35\x2e\x33\x2d\x63\x30\x31\x31\x20\x36\x36\
\x2e\x31\x34\x35\x36\x36\x31\x2c\x20\x32\x30\x31\x32\x2f\x30\x32\
\x2f\x30\x36\x2d\x31\x34\x3a\x35\x36\x3a\x32\x37\x20\x20\x20\x20\
\x20\x20\x20\x20\x22\x3e\x20\x3c\x72\x64\x66\x3a\x52\x44\x46\x20\
\x78\x6d\x6c\x6e\x73\x3a\x72\x64\x66\x3d\x22\x68\x74\x74\x70\x3a\
\x2f\x2f\x77\x77\x77\x2e\x77\x33\x2e\x6f\x72\x67\x2f\x31\x39\x39\
\x39\x2f\x30\x32\x2f\x32\x32\x2d\x72\x64\x66\x2d\x73\x79\x6e\x74\
\x61\x78\x2d\x6e\x73\x23\x22\x3e\x20\x3c\x72\x64\x66\x3a\x44\x65\
\x73\x63\x72\x69\x70\x74\x69\x6f\x6e\x20\x72\x64\x66\x3a\x61\x62\
\x6f\x75\x74\x3d\x22\x22\x20\x78\x6d\x6c\x6e\x73\x3a\x78\x6d\x70\
\x3d\x22\x68\x74\x74\x70\x3a\x2f\x2f\x6e\x73\x2e\x61\x64\x6f\x62\
\x65\x2e\x63\x6f\x6d\x2f\x78\x61\x70\x2f\x31\x2e\x30\x2f\x22\x20\
\x78\x6d\x6c\x6e\x73\x3a\x78\x6d\x70\x4d\x4d\x3d\x22\x68\x74\x74\
\x70\x3a\x2f\x2f\x6e\x73\x2e\x61\x64\x6f\x62\x65\x2e\x63\x6f\x6d\
\x2f\x78\x61\x70\x2f\x31\x2e\x30\x2f\x6d\x6d\x2f\x22\x20\x78\x6d\
\x6c\x6e\x73\x3a\x73\x74\x52\x65\x66\x3d\x22\x68\x74\x74\x70\x3a\
\x2f\x2f\x6e\x73\x2e\x61\x64\x6f\x62\x65\x2e\x63\x6f\x6d\x2f\x78\
\x61\x70\x2f\x31\x2e\x30\x2f\x73\x54\x79\x70\x65\x2f\x52\x65\x73\
\x6f\x75\x72\x63\x65\x52\x65\x66\x23\x22\x20\x78\x6d\x70\x3a\x43\
\x72\x65\x61\x74\x6f\x72\x54\x6f\x6f\x6c\x3d\x22\x41\x64\x6f\x62\
\x65\x20\x50\x68\x6f\x74\x6f\x73\x68\x6f\x70\x20\x43\x53\x36\x20\
\x28\x57\x69\x6e\x64\x6f\x77\x73\x29\x22\x20\x78\x6d\x70\x4d\x4d\
\x3a\x49\x6e\x73\x74\x61\x6e\x63\x65\x49\x44\x3d\x22\x78\x6d\x70\
\x2e\x69\x69\x64\x3a\x36\x43\x36\x42\x35\x32\x43\x45\x45\x38\x42\
\x44\x31\x31\x45\x38\x38\x34\x31\x37\x44\x45\x44\x43\x45\x38\x39\
\x39\x35\x30\x37\x37\x22\x20\x78\x6d\x70\x4d\x4d\x3a\x44\x6f\x63\
\x75\x6d\x65\x6e\x74\x49\x44\x3d\x22\x78\x6d\x70\x2e\x64\x69\x64\
\x3a\x36\x43\x36\x42\x35\x32\x43\x46\x45\x38\x42\x44\x31\x31\x45\
\x38\x38\x34\x31\x37\x44\x45\x44\x43\x45\x38\x39\x39\x35\x30\x37\
\x37\x22\x3e\x20\x3c\x78\x6d\x70\x4d\x4d\x3a\x44\x65\x72\x69\x76\
\x65\x64\x46\x72\x6f\x6d\x20\x73\x74\x52\x65\x66\x3a\x69\x6e\x73\
\x74\x61\x6e\x63\x65\x49\x44\x3d\x22\x78\x6d\x70\x2e\x69\x69\x64\
\x3a\x36\x43\x36\x42\x35\x32\x43\x43\x45\x38\x42\x44\x31\x31\x45\
\x38\x38\x34\x31\x37\x44\x45\x44\x43\x45\x38\x39\x39\x35\x30\x37\
\x37\x22\x20\x73\x74\x52\x65\x66\x3a\x64\x6f\x63\x75\x6d\x65\x6e\
\x74\x49\x44\x3d\x22\x78\x6d\x70\x2e\x64\x69\x64\x3a\x36\x43\x36\
\x42\x35\x32\x43\x44\x45\x38\x42\x44\x31\x31\x45\x38\x38\x34\x31\
\x37\x44\x45\x44\x43\x45\x38\x39\x39\x35\x30\x37\x37\x22\x2f\x3e\
\x20\x3c\x2f\x72\x64\x66\x3a\x44\x65\x73\x63\x72\x69\x70\x74\x69\
\x6f\x6e\x3e\x20\x3c\x2f\x72\x64\x66\x3a\x52\x44\x46\x3e\x20\x3c\
\x2f\x78\x3a\x78\x6d\x70\x6d\x65\x74\x61\x3e\x20\x3c\x3f\x78\x70\
\x61\x63\x6b\x65\x74\x20\x65\x6e\x64\x3d\x22\x72\x22\x3f\x3e\x03\
\x3c\x14\x29\x00\x00\x00\x3b\x49\x44\x41\x54\x78\xda\x62\x3c\x73\
\xe6\xcc\x7f\x06\x02\x80\x05\x4a\x33\x1a\x1b\x1b\xc3\x05\xcf\x9e\
\x3d\x8b\xac\xe6\x3f\x13\x03\x11\x80\x28\x45\x2c\x48\x56\x60\x73\
\x1b\x23\x8a\x22\x7c\xee\xa2\xbe\x9b\xfe\x63\xf1\x3a\x1c\x00\x04\
\x18\x00\xce\x62\x11\x4b\x10\x70\x41\xf5\x00\x00\x00\x00\x49\x45\
\x4e\x44\xae\x42\x60\x82\
\x00\x00\x03\xcb\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x09\x00\x00\x00\x09\x08\x06\x00\x00\x00\xe0\x91\x06\x10\
\x00\x00\x00\x19\x74\x45\x58\x74\x53\x6f\x66\x74\x77\x61\x72\x65\
\x00\x41\x64\x6f\x62\x65\x20\x49\x6d\x61\x67\x65\x52\x65\x61\x64\
\x79\x71\xc9\x65\x3c\x00\x00\x03\x22\x69\x54\x58\x74\x58\x4d\x4c\
\x3a\x63\x6f\x6d\x2e\x61\x64\x6f\x62\x65\x2e\x78\x6d\x70\x00\x00\
\x00\x00\x00\x3c\x3f\x78\x70\x61\x63\x6b\x65\x74\x20\x62\x65\x67\
\x69\x6e\x3d\x22\xef\xbb\xbf\x22\x20\x69\x64\x3d\x22\x57\x35\x4d\
\x30\x4d\x70\x43\x65\x68\x69\x48\x7a\x72\x65\x53\x7a\x4e\x54\x63\
\x7a\x6b\x63\x39\x64\x22\x3f\x3e\x20\x3c\x78\x3a\x78\x6d\x70\x6d\
\x65\x74\x61\x20\x78\x6d\x6c\x6e\x73\x3a\x78\x3d\x22\x61\x64\x6f\
\x62\x65\x3a\x6e\x73\x3a\x6d\x65\x74\x61\x2f\x22\x20\x78\x3a\x78\
\x6d\x70\x74\x6b\x3d\x22\x41\x64\x6f\x62\x65\x20\x58\x4d\x50\x20\
\x43\x6f\x72\x65\x20\x35\x2e\x33\x2d\x63\x30\x31\x31\x20\x36\x36\
\x2e\x31\x34\x35\x36\x36\x31\x2c\x20\x32\x30\x31\x32\x2f\x30\x32\
\x2f\x30\x36\x2d\x31\x34\x3a\x35\x36\x3a\x32\x37\x20\x20\x20\x20\
\x20\x20\x20\x20\x22\x3e\x20\x3c\x72\x64\x66\x3a\x52\x44\x46\x20\
\x78\x6d\x6c\x6e\x73\x3a\x72\x64\x66\x3d\x22\x68\x74\x74\x70\x3a\
\x2f\x2f\x77\x77\x77\x2e\x77\x33\x2e\x6f\x72\x67\x2f\x31\x39\x39\
\x39\x2f\x30\x32\x2f\x32\x32\x2d\x72\x64\x66\x2d\x73\x79\x6e\x74\
\x61\x78\x2d\x6e\x73\x23\x22\x3e\x20\x3c\x72\x64\x66\x3a\x44\x65\
\x73\x63\x72\x69\x70\x74\x69\x6f\x6e\x20\x72\x64\x66\x3a\x61\x62\
\x6f\x75\x74\x3d\x22\x22\x20\x78\x6d\x6c\x6e\x73\x3a\x78\x6d\x70\
\x3d\x22\x68\x74\x74\x70\x3a\x2f\x2f\x6e\x73\x2e\x61\x64\x6f\x62\
\x65\x2e\x63\x6f\x6d\x2f\x78\x61\x70\x2f\x31\x2e\x30\x2f\x22\x20\
\x78\x6d\x6c\x6e\x73\x3a\x78\x6d\x70\x4d\x4d\x3d\x22\x68\x74\x74\
\x70\x3a\x2f\x2f\x6e\x73\x2e\x61\x64\x6f\x62\x65\x2e\x63\x6f\x6d\
\x2f\x78\x61\x70\x2f\x31\x2e\x30\x2f\x6d\x6d\x2f\x22\x20\x78\x6d\
\x6c\x6e\x73\x3a\x73\x74\x52\x65\x66\x3d\x22\x68\x74\x74\x70\x3a\
\x2f\x2f\x6e\x73\x2e\x61\x64\x6f\x62\x65\x2e\x63\x6f\x6d\x2f\x78\
\x61\x70\x2f\x31\x2e\x30\x2f\x73\x54\x79\x70\x65\x2f\x52\x65\x73\
\x6f\x75\x72\x63\x65\x52\x65\x66\x23\x22\x20\x78\x6d\x70\x3a\x43\
\x72\x65\x61\x74\x6f\x72\x54\x6f\x6f\x6c\x3d\x22\x41\x64\x6f\x62\
\x65\x20\x50\x68\x6f\x74\x6f\x73\x68\x6f\x70\x20\x43\x53\x36\x20\
\x28\x57\x69\x6e\x64\x6f\x77\x73\x29\x22\x20\x78\x6d\x70\x4d\x4d\
\x3a\x49\x6e\x73\x74\x61\x6e\x63\x65\x49\x44\x3d\x22\x78\x6d\x70\
\x2e\x69\x69\x64\x3a\x45\x42\x37\x44\x39\x35\x35\x36\x45\x38\x42\
\x42\x31\x31\x45\x38\x39\x39\x33\x37\x38\x39\x36\x37\x39\x33\x32\
\x33\x34\x42\x44\x44\x22\x20\x78\x6d\x70\x4d\x4d\x3a\x44\x6f\x63\
\x75\x6d\x65\x6e\x74\x49\x44\x3d\x22\x78\x6d\x70\x2e\x64\x69\x64\
\x3a\x45\x42\x37\x44\x39\x35\x35\x37\x45\x38\x42\x42\x31\x31\x45\
\x38\x39\x39\x33\x37\x38\x39\x36\x37\x39\x33\x32\x33\x34\x42\x44\
\x44\x22\x3e\x20\x3c\x78\x6d\x70\x4d\x4d\x3a\x44\x65\x72\x69\x76\
\x65\x64\x46\x72\x6f\x6d\x20\x73\x74\x52\x65\x66\x3a\x69\x6e\x73\
\x74\x61\x6e\x63\x65\x49\x44\x3d\x22\x78\x6d\x70\x2e\x69\x69\x64\
\x3a\x45\x42\x37\x44\x39\x35\x35\x34\x45\x38\x42\x42\x31\x31\x45\
\x38\x39\x39\x33\x37\x38\x39\x36\x37\x39\x33\x32\x33\x34\x42\x44\
\x44\x22\x20\x73\x74\x52\x65\x66\x3a\x64\x6f\x63\x75\x6d\x65\x6e\
\x74\x49\x44\x3d\x22\x78\x6d\x70\x2e\x64\x69\x64\x3a\x45\x42\x37\
\x44\x39\x35\x35\x35\x45\x38\x42\x42\x31\x31\x45\x38\x39\x39\x33\
\x37\x38\x39\x36\x37\x39\x33\x32\x33\x34\x42\x44\x44\x22\x2f\x3e\
\x20\x3c\x2f\x72\x64\x66\x3a\x44\x65\x73\x63\x72\x69\x70\x74\x69\
\x6f\x6e\x3e\x20\x3c\x2f\x72\x64\x66\x3a\x52\x44\x46\x3e\x20\x3c\
\x2f\x78\x3a\x78\x6d\x70\x6d\x65\x74\x61\x3e\x20\x3c\x3f\x78\x70\
\x61\x63\x6b\x65\x74\x20\x65\x6e\x64\x3d\x22\x72\x22\x3f\x3e\x67\
\x8b\x61\xe5\x00\x00\x00\x3f\x49\x44\x41\x54\x78\xda\x62\xfc\xff\
\xff\x3f\x03\x21\xc0\x82\xcc\x39\x7b\xf6\x2c\x8c\xf9\xdf\xd8\xd8\
\x98\x11\xc6\x61\xc2\xa6\x00\xdd\x24\x26\x34\x3e\x56\xbb\x99\x08\
\x29\x40\x57\xc4\x48\x8c\x22\x9c\x0a\x99\xb0\x88\x61\x28\x64\x24\
\x26\x9c\x00\x02\x0c\x00\x6b\x58\x0f\x74\x3a\x78\xe7\x18\x00\x00\
\x00\x00\x49\x45\x4e\x44\xae\x42\x60\x82\
\x00\x00\x00\xa2\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x09\x00\x00\x00\x09\x08\x06\x00\x00\x00\xe0\x91\x06\x10\
\x00\x00\x00\x69\x49\x44\x41\x54\x18\x95\x8d\xcf\x31\x0a\xc3\x30\
\x10\x44\xd1\x17\x13\xf0\x81\x54\x0b\xd2\xe5\x0a\x2e\x7d\xac\x94\
\x2a\x7d\x0f\xd5\x3a\x90\xbb\x34\x6b\x23\x27\x08\x3c\xd5\xec\xf0\
\x61\x67\xb8\xa1\x07\xa4\x94\x60\xc5\x82\x8c\x8a\x0d\xa5\xb5\xe6\
\x19\xf0\x8a\x0f\xe6\xb8\xdf\x78\x85\x2f\x53\x98\xa5\x03\x0e\xcd\
\x91\x3b\xa0\x3c\xa8\x93\x7b\xa8\x0e\xa0\xda\x43\x1b\xf6\x1f\x60\
\x8f\xfc\x2c\x5e\xba\x6e\x97\x75\x83\x0f\xff\xfa\x02\xd7\x24\x11\
\x78\x18\xc5\x9a\x1e\x00\x00\x00\x00\x49\x45\x4e\x44\xae\x42\x60\
\x82\
\x00\x00\x00\x70\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x09\x00\x00\x00\x09\x08\x06\x00\x00\x00\xe0\x91\x06\x10\
\x00\x00\x00\x37\x49\x44\x41\x54\x18\x95\x63\x60\x20\x02\x30\xc2\
\x18\xc6\xc6\xc6\xc8\xe2\xff\xcf\x9e\x3d\x0b\x97\x63\xc2\xa2\xf1\
\x3f\xba\x00\xba\x22\x0c\x05\xe8\x8a\xb0\x2a\x40\x57\xc4\x48\x8c\
\x22\x9c\x0a\xb1\x39\x1c\xa7\x89\x78\x01\x00\xb6\xdc\x07\x0e\xb6\
\x96\x48\x81\x00\x00\x00\x00\x49\x45\x4e\x44\xae\x42\x60\x82\
\x00\x00\x03\xc8\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x09\x00\x00\x00\x09\x08\x06\x00\x00\x00\xe0\x91\x06\x10\
\x00\x00\x00\x19\x74\x45\x58\x74\x53\x6f\x66\x74\x77\x61\x72\x65\
\x00\x41\x64\x6f\x62\x65\x20\x49\x6d\x61\x67\x65\x52\x65\x61\x64\
\x79\x71\xc9\x65\x3c\x00\x00\x03\x22\x69\x54\x58\x74\x58\x4d\x4c\
\x3a\x63\x6f\x6d\x2e\x61\x64\x6f\x62\x65\x2e\x78\x6d\x70\x00\x00\
\x00\x00\x00\x3c\x3f\x78\x70\x61\x63\x6b\x65\x74\x20\x62\x65\x67\
\x69\x6e\x3d\x22\xef\xbb\xbf\x22\x20\x69\x64\x3d\x22\x57\x35\x4d\
\x30\x4d\x70\x43\x65\x68\x69\x48\x7a\x72\x65\x53\x7a\x4e\x54\x63\
\x7a\x6b\x63\x39\x64\x22\x3f\x3e\x20\x3c\x78\x3a\x78\x6d\x70\x6d\
\x65\x74\x61\x20\x78\x6d\x6c\x6e\x73\x3a\x78\x3d\x22\x61\x64\x6f\
\x62\x65\x3a\x6e\x73\x3a\x6d\x65\x74\x61\x2f\x22\x20\x78\x3a\x78\
\x6d\x70\x74\x6b\x3d\x22\x41\x64\x6f\x62\x65\x20\x58\x4d\x50\x20\
\x43\x6f\x72\x65\x20\x35\x2e\x33\x2d\x63\x30\x31\x31\x20\x36\x36\
\x2e\x31\x34\x35\x36\x36\x31\x2c\x20\x32\x30\x31\x32\x2f\x30\x32\
\x2f\x30\x36\x2d\x31\x34\x3a\x35\x36\x3a\x32\x37\x20\x20\x20\x20\
\x20\x20\x20\x20\x22\x3e\x20\x3c\x72\x64\x66\x3a\x52\x44\x46\x20\
\x78\x6d\x6c\x6e\x73\x3a\x72\x64\x66\x3d\x22\x68\x74\x74\x70\x3a\
\x2f\x2f\x77\x77\x77\x2e\x77\x33\x2e\x6f\x72\x67\x2f\x31\x39\x39\
\x39\x2f\x30\x32\x2f\x32\x32\x2d\x72\x64\x66\x2d\x73\x79\x6e\x74\
\x61\x78\x2d\x6e\x73\x23\x22\x3e\x20\x3c\x72\x64\x66\x3a\x44\x65\
\x73\x63\x72\x69\x70\x74\x69\x6f\x6e\x20\x72\x64\x66\x3a\x61\x62\
\x6f\x75\x74\x3d\x22\x22\x20\x78\x6d\x6c\x6e\x73\x3a\x78\x6d\x70\
\x3d\x22\x68\x74\x74\x70\x3a\x2f\x2f\x6e\x73\x2e\x61\x64\x6f\x62\
\x65\x2e\x63\x6f\x6d\x2f\x78\x61\x70\x2f\x31\x2e\x30\x2f\x22\x20\
\x78\x6d\x6c\x6e\x73\x3a\x78\x6d\x70\x4d\x4d\x3d\x22\x68\x74\x74\
\x70\x3a\x2f\x2f\x6e\x73\x2e\x61\x64\x6f\x62\x65\x2e\x63\x6f\x6d\
\x2f\x78\x61\x70\x2f\x31\x2e\x30\x2f\x6d\x6d\x2f\x22\x20\x78\x6d\
\x6c\x6e\x73\x3a\x73\x74\x52\x65\x66\x3d\x22\x68\x74\x74\x70\x3a\
\x2f\x2f\x6e\x73\x2e\x61\x64\x6f\x62\x65\x2e\x63\x6f\x6d\x2f\x78\
\x61\x70\x2f\x31\x2e\x30\x2f\x73\x54\x79\x70\x65\x2f\x52\x65\x73\
\x6f\x75\x72\x63\x65\x52\x65\x66\x23\x22\x20\x78\x6d\x70\x3a\x43\
\x72\x65\x61\x74\x6f\x72\x54\x6f\x6f\x6c\x3d\x22\x41\x64\x6f\x62\
\x65\x20\x50\x68\x6f\x74\x6f\x73\x68\x6f\x70\x20\x43\x53\x36\x20\
\x28\x57\x69\x6e\x64\x6f\x77\x73\x29\x22\x20\x78\x6d\x70\x4d\x4d\
\x3a\x49\x6e\x73\x74\x61\x6e\x63\x65\x49\x44\x3d\x22\x78\x6d\x70\
\x2e\x69\x69\x64\x3a\x43\x46\x42\x41\x36\x35\x36\x39\x45\x38\x42\
\x42\x31\x31\x45\x38\x39\x36\x41\x30\x38\x42\x38\x37\x32\x43\x42\
\x36\x30\x43\x44\x35\x22\x20\x78\x6d\x70\x4d\x4d\x3a\x44\x6f\x63\
\x75\x6d\x65\x6e\x74\x49\x44\x3d\x22\x78\x6d\x70\x2e\x64\x69\x64\
\x3a\x43\x46\x42\x41\x36\x35\x36\x41\x45\x38\x42\x42\x31\x31\x45\
\x38\x39\x36\x41\x30\x38\x42\x38\x37\x32\x43\x42\x36\x30\x43\x44\
\x35\x22\x3e\x20\x3c\x78\x6d\x70\x4d\x4d\x3a\x44\x65\x72\x69\x76\
\x65\x64\x46\x72\x6f\x6d\x20\x73\x74\x52\x65\x66\x3a\x69\x6e\x73\
\x74\x61\x6e\x63\x65\x49\x44\x3d\x22\x78\x6d\x70\x2e\x69\x69\x64\
\x3a\x43\x46\x42\x41\x36\x35\x36\x37\x45\x38\x42\x42\x31\x31\x45\
\x38\x39\x36\x41\x30\x38\x42\x38\x37\x32\x43\x42\x36\x30\x43\x44\
\x35\x22\x20\x73\x74\x52\x65\x66\x3a\x64\x6f\x63\x75\x6d\x65\x6e\
\x74\x49\x44\x3d\x22\x78\x6d\x70\x2e\x64\x69\x64\x3a\x43\x46\x42\
\x41\x36\x35\x36\x38\x45\x38\x42\x42\x31\x31\x45\x38\x39\x36\x41\
\x30\x38\x42\x38\x37\x32\x43\x42\x36\x30\x43\x44\x35\x22\x2f\x3e\
\x20\x3c\x2f\x72\x64\x66\x3a\x44\x65\x73\x63\x72\x69\x70\x74\x69\
\x6f\x6e\x3e\x20\x3c\x2f\x72\x64\x66\x3a\x52\x44\x46\x3e\x20\x3c\
\x2f\x78\x3a\x78\x6d\x70\x6d\x65\x74\x61\x3e\x20\x3c\x3f\x78\x70\
\x61\x63\x6b\x65\x74\x20\x65\x6e\x64\x3d\x22\x72\x22\x3f\x3e\x50\
\x12\xed\xaf\x00\x00\x00\x3c\x49\x44\x41\x54\x78\xda\x62\xfc\xff\
\xff\x3f\x03\x21\xc0\xc4\x40\x04\x20\x4a\x11\x0b\x88\x38\x7b\xf6\
\x2c\x4e\x3b\x8d\x8d\x8d\x19\x61\x26\x31\xe2\x50\xc3\x88\x6e\x1d\
\x23\x36\x05\x70\x45\x40\x23\xd1\x25\x50\x34\x30\x52\x2d\x08\x00\
\x02\x0c\x00\x52\x1c\x0a\xac\x63\x42\x07\x75\x00\x00\x00\x00\x49\
\x45\x4e\x44\xae\x42\x60\x82\
\x00\x00\x00\x6a\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x09\x00\x00\x00\x09\x08\x06\x00\x00\x00\xe0\x91\x06\x10\
\x00\x00\x00\x31\x49\x44\x41\x54\x18\x95\x63\x34\x36\x36\xfe\xcf\
\x40\x00\xb0\x40\x69\x46\x3c\x6a\xfe\x33\x11\x32\x85\x81\x81\x81\
\x81\x28\x45\x2c\x48\x6c\x6c\x6e\x63\x44\x57\x84\xd3\x5d\xd4\x77\
\x13\xde\xb0\x02\x00\xee\x60\x04\xb2\xdd\x37\x6a\x98\x00\x00\x00\
\x00\x49\x45\x4e\x44\xae\x42\x60\x82\
\x00\x00\x00\x5e\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x09\x00\x00\x00\x09\x08\x06\x00\x00\x00\xe0\x91\x06\x10\
\x00\x00\x00\x25\x49\x44\x41\x54\x18\x95\x63\x60\xa0\x16\x60\x84\
\x31\x8c\x8d\x8d\xff\xa3\x4b\x9e\x3d\x7b\x96\x91\x81\x81\x81\x81\
\x89\x18\x93\xe8\xac\x88\x7a\x00\x00\xb9\x91\x04\x0a\xd2\x01\x5c\
\xd3\x00\x00\x00\x00\x49\x45\x4e\x44\xae\x42\x60\x82\
\x00\x00\x03\xc6\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x09\x00\x00\x00\x09\x08\x06\x00\x00\x00\xe0\x91\x06\x10\
\x00\x00\x00\x19\x74\x45\x58\x74\x53\x6f\x66\x74\x77\x61\x72\x65\
\x00\x41\x64\x6f\x62\x65\x20\x49\x6d\x61\x67\x65\x52\x65\x61\x64\
\x79\x71\xc9\x65\x3c\x00\x00\x03\x22\x69\x54\x58\x74\x58\x4d\x4c\
\x3a\x63\x6f\x6d\x2e\x61\x64\x6f\x62\x65\x2e\x78\x6d\x70\x00\x00\
\x00\x00\x00\x3c\x3f\x78\x70\x61\x63\x6b\x65\x74\x20\x62\x65\x67\
\x69\x6e\x3d\x22\xef\xbb\xbf\x22\x20\x69\x64\x3d\x22\x57\x35\x4d\
\x30\x4d\x70\x43\x65\x68\x69\x48\x7a\x72\x65\x53\x7a\x4e\x54\x63\
\x7a\x6b\x63\x39\x64\x22\x3f\x3e\x20\x3c\x78\x3a\x78\x6d\x70\x6d\
\x65\x74\x61\x20\x78\x6d\x6c\x6e\x73\x3a\x78\x3d\x22\x61\x64\x6f\
\x62\x65\x3a\x6e\x73\x3a\x6d\x65\x74\x61\x2f\x22\x20\x78\x3a\x78\
\x6d\x70\x74\x6b\x3d\x22\x41\x64\x6f\x62\x65\x20\x58\x4d\x50\x20\
\x43\x6f\x72\x65\x20\x35\x2e\x33\x2d\x63\x30\x31\x31\x20\x36\x36\
\x2e\x31\x34\x35\x36\x36\x31\x2c\x20\x32\x30\x31\x32\x2f\x30\x32\
\x2f\x30\x36\x2d\x31\x34\x3a\x35\x36\x3a\x32\x37\x20\x20\x20\x20\
\x20\x20\x20\x20\x22\x3e\x20\x3c\x72\x64\x66\x3a\x52\x44\x46\x20\
\x78\x6d\x6c\x6e\x73\x3a\x72\x64\x66\x3d\x22\x68\x74\x74\x70\x3a\
\x2f\x2f\x77\x77\x77\x2e\x77\x33\x2e\x6f\x72\x67\x2f\x31\x39\x39\
\x39\x2f\x30\x32\x2f\x32\x32\x2d\x72\x64\x66\x2d\x73\x79\x6e\x74\
\x61\x78\x2d\x6e\x73\x23\x22\x3e\x20\x3c\x72\x64\x66\x3a\x44\x65\
\x73\x63\x72\x69\x70\x74\x69\x6f\x6e\x20\x72\x64\x66\x3a\x61\x62\
\x6f\x75\x74\x3d\x22\x22\x20\x78\x6d\x6c\x6e\x73\x3a\x78\x6d\x70\
\x3d\x22\x68\x74\x74\x70\x3a\x2f\x2f\x6e\x73\x2e\x61\x64\x6f\x62\
\x65\x2e\x63\x6f\x6d\x2f\x78\x61\x70\x2f\x31\x2e\x30\x2f\x22\x20\
\x78\x6d\x6c\x6e\x73\x3a\x78\x6d\x70\x4d\x4d\x3d\x22\x68\x74\x74\
\x70\x3a\x2f\x2f\x6e\x73\x2e\x61\x64\x6f\x62\x65\x2e\x63\x6f\x6d\
\x2f\x78\x61\x70\x2f\x31\x2e\x30\x2f\x6d\x6d\x2f\x22\x20\x78\x6d\
\x6c\x6e\x73\x3a\x73\x74\x52\x65\x66\x3d\x22\x68\x74\x74\x70\x3a\
\x2f\x2f\x6e\x73\x2e\x61\x64\x6f\x62\x65\x2e\x63\x6f\x6d\x2f\x78\
\x61\x70\x2f\x31\x2e\x30\x2f\x73\x54\x79\x70\x65\x2f\x52\x65\x73\
\x6f\x75\x72\x63\x65\x52\x65\x66\x23\x22\x20\x78\x6d\x70\x3a\x43\
\x72\x65\x61\x74\x6f\x72\x54\x6f\x6f\x6c\x3d\x22\x41\x64\x6f\x62\
\x65\x20\x50\x68\x6f\x74\x6f\x73\x68\x6f\x70\x20\x43\x53\x36\x20\
\x28\x57\x69\x6e\x64\x6f\x77\x73\x29\x22\x20\x78\x6d\x70\x4d\x4d\
\x3a\x49\x6e\x73\x74\x61\x6e\x63\x65\x49\x44\x3d\x22\x78\x6d\x70\
\x2e\x69\x69\x64\x3a\x37\x33\x42\x31\x41\x34\x46\x45\x45\x38\x42\
\x44\x31\x31\x45\x38\x41\x42\x41\x38\x46\x36\x45\x34\x30\x35\x36\
\x30\x43\x42\x39\x32\x22\x20\x78\x6d\x70\x4d\x4d\x3a\x44\x6f\x63\
\x75\x6d\x65\x6e\x74\x49\x44\x3d\x22\x78\x6d\x70\x2e\x64\x69\x64\
\x3a\x37\x33\x42\x31\x41\x34\x46\x46\x45\x38\x42\x44\x31\x31\x45\
\x38\x41\x42\x41\x38\x46\x36\x45\x34\x30\x35\x36\x30\x43\x42\x39\
\x32\x22\x3e\x20\x3c\x78\x6d\x70\x4d\x4d\x3a\x44\x65\x72\x69\x76\
\x65\x64\x46\x72\x6f\x6d\x20\x73\x74\x52\x65\x66\x3a\x69\x6e\x73\
\x74\x61\x6e\x63\x65\x49\x44\x3d\x22\x78\x6d\x70\x2e\x69\x69\x64\
\x3a\x37\x33\x42\x31\x41\x34\x46\x43\x45\x38\x42\x44\x31\x31\x45\
\x38\x41\x42\x41\x38\x46\x36\x45\x34\x30\x35\x36\x30\x43\x42\x39\
\x32\x22\x20\x73\x74\x52\x65\x66\x3a\x64\x6f\x63\x75\x6d\x65\x6e\
\x74\x49\x44\x3d\x22\x78\x6d\x70\x2e\x64\x69\x64\x3a\x37\x33\x42\
\x31\x41\x34\x46\x44\x45\x38\x42\x44\x31\x31\x45\x38\x41\x42\x41\
\x38\x46\x36\x45\x34\x30\x35\x36\x30\x43\x42\x39\x32\x22\x2f\x3e\
\x20\x3c\x2f\x72\x64\x66\x3a\x44\x65\x73\x63\x72\x69\x70\x74\x69\
\x6f\x6e\x3e\x20\x3c\x2f\x72\x64\x66\x3a\x52\x44\x46\x3e\x20\x3c\
\x2f\x78\x3a\x78\x6d\x70\x6d\x65\x74\x61\x3e\x20\x3c\x3f\x78\x70\
\x61\x63\x6b\x65\x74\x20\x65\x6e\x64\x3d\x22\x72\x22\x3f\x3e\x87\
\x36\x7f\x88\x00\x00\x00\x3a\x49\x44\x41\x54\x78\xda\x62\x3c\x73\
\xe6\xcc\x7f\x06\x02\x80\x05\x4a\x33\xe2\x51\xf3\x9f\x09\x9b\x20\
\xba\x00\x13\x03\x11\x80\x05\x87\x09\xff\x91\x9d\x81\xac\x88\x11\
\x49\x01\x23\xc9\xd6\x61\x53\xc4\x88\xcb\x4d\x78\xc3\x0a\x20\xc0\
\x00\x4e\xc6\x09\x7c\x09\x4e\xec\x4b\x00\x00\x00\x00\x49\x45\x4e\
\x44\xae\x42\x60\x82\
\x00\x00\x00\x5e\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x05\x00\x00\x00\x05\x08\x06\x00\x00\x00\x8d\x6f\x26\xe5\
\x00\x00\x00\x25\x49\x44\x41\x54\x08\x99\x5d\xc8\x31\x01\x00\x20\
\x0c\x03\xb0\x30\x4b\x15\x08\xd2\xb9\x38\x68\xce\xac\x24\xca\x99\
\x0e\xec\xe9\x80\xe9\x78\xf9\x05\x5c\x0b\xfa\x05\x1f\x6e\x0e\xad\
\x79\x00\x00\x00\x00\x49\x45\x4e\x44\xae\x42\x60\x82\
\x00\x00\x03\xc9\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x09\x00\x00\x00\x09\x08\x06\x00\x00\x00\xe0\x91\x06\x10\
\x00\x00\x00\x19\x74\x45\x58\x74\x53\x6f\x66\x74\x77\x61\x72\x65\
\x00\x41\x64\x6f\x62\x65\x20\x49\x6d\x61\x67\x65\x52\x65\x61\x64\
\x79\x71\xc9\x65\x3c\x00\x00\x03\x22\x69\x54\x58\x74\x58\x4d\x4c\
\x3a\x63\x6f\x6d\x2e\x61\x64\x6f\x62\x65\x2e\x78\x6d\x70\x00\x00\
\x00\x00\x00\x3c\x3f\x78\x70\x61\x63\x6b\x65\x74\x20\x62\x65\x67\
\x69\x6e\x3d\x22\xef\xbb\xbf\x22\x20\x69\x64\x3d\x22\x57\x35\x4d\
\x30\x4d\x70\x43\x65\x68\x69\x48\x7a\x72\x65\x53\x7a\x4e\x54\x63\
\x7a\x6b\x63\x39\x64\x22\x3f\x3e\x20\x3c\x78\x3a\x78\x6d\x70\x6d\
\x65\x74\x61\x20\x78\x6d\x6c\x6e\x73\x3a\x78\x3d\x22\x61\x64\x6f\
\x62\x65\x3a\x6e\x73\x3a\x6d\x65\x74\x61\x2f\x22\x20\x78\x3a\x78\
\x6d\x70\x74\x6b\x3d\x22\x41\x64\x6f\x62\x65\x20\x58\x4d\x50\x20\
\x43\x6f\x72\x65\x20\x35\x2e\x33\x2d\x63\x30\x31\x31\x20\x36\x36\
\x2e\x31\x34\x35\x36\x36\x31\x2c\x20\x32\x30\x31\x32\x2f\x30\x32\
\x2f\x30\x36\x2d\x31\x34\x3a\x35\x36\x3a\x32\x37\x20\x20\x20\x20\
\x20\x20\x20\x20\x22\x3e\x20\x3c\x72\x64\x66\x3a\x52\x44\x46\x20\
\x78\x6d\x6c\x6e\x73\x3a\x72\x64\x66\x3d\x22\x68\x74\x74\x70\x3a\
\x2f\x2f\x77\x77\x77\x2e\x77\x33\x2e\x6f\x72\x67\x2f\x31\x39\x39\
\x39\x2f\x30\x32\x2f\x32\x32\x2d\x72\x64\x66\x2d\x73\x79\x6e\x74\
\x61\x78\x2d\x6e\x73\x23\x22\x3e\x20\x3c\x72\x64\x66\x3a\x44\x65\
\x73\x63\x72\x69\x70\x74\x69\x6f\x6e\x20\x72\x64\x66\x3a\x61\x62\
\x6f\x75\x74\x3d\x22\x22\x20\x78\x6d\x6c\x6e\x73\x3a\x78\x6d\x70\
\x3d\x22\x68\x74\x74\x70\x3a\x2f\x2f\x6e\x73\x2e\x61\x64\x6f\x62\
\x65\x2e\x63\x6f\x6d\x2f\x78\x61\x70\x2f\x31\x2e\x30\x2f\x22\x20\
\x78\x6d\x6c\x6e\x73\x3a\x78\x6d\x70\x4d\x4d\x3d\x22\x68\x74\x74\
\x70\x3a\x2f\x2f\x6e\x73\x2e\x61\x64\x6f\x62\x65\x2e\x63\x6f\x6d\
\x2f\x78\x61\x70\x2f\x31\x2e\x30\x2f\x6d\x6d\x2f\x22\x20\x78\x6d\
\x6c\x6e\x73\x3a\x73\x74\x52\x65\x66\x3d\x22\x68\x74\x74\x70\x3a\
\x2f\x2f\x6e\x73\x2e\x61\x64\x6f\x62\x65\x2e\x63\x6f\x6d\x2f\x78\
\x61\x70\x2f\x31\x2e\x30\x2f\x73\x54\x79\x70\x65\x2f\x52\x65\x73\
\x6f\x75\x72\x63\x65\x52\x65\x66\x23\x22\x20\x78\x6d\x70\x3a\x43\
\x72\x65\x61\x74\x6f\x72\x54\x6f\x6f\x6c\x3d\x22\x41\x64\x6f\x62\
\x65\x20\x50\x68\x6f\x74\x6f\x73\x68\x6f\x70\x20\x43\x53\x36\x20\
\x28\x57\x69\x6e\x64\x6f\x77\x73\x29\x22\x20\x78\x6d\x70\x4d\x4d\
\x3a\x49\x6e\x73\x74\x61\x6e\x63\x65\x49\x44\x3d\x22\x78\x6d\x70\
\x2e\x69\x69\x64\x3a\x46\x36\x42\x35\x31\x43\x38\x46\x45\x38\x42\
\x42\x31\x31\x45\x38\x42\x46\x44\x46\x41\x41\x31\x43\x30\x33\x32\
\x39\x34\x41\x43\x32\x22\x20\x78\x6d\x70\x4d\x4d\x3a\x44\x6f\x63\
\x75\x6d\x65\x6e\x74\x49\x44\x3d\x22\x78\x6d\x70\x2e\x64\x69\x64\
\x3a\x46\x36\x42\x35\x31\x43\x39\x30\x45\x38\x42\x42\x31\x31\x45\
\x38\x42\x46\x44\x46\x41\x41\x31\x43\x30\x33\x32\x39\x34\x41\x43\
\x32\x22\x3e\x20\x3c\x78\x6d\x70\x4d\x4d\x3a\x44\x65\x72\x69\x76\
\x65\x64\x46\x72\x6f\x6d\x20\x73\x74\x52\x65\x66\x3a\x69\x6e\x73\
\x74\x61\x6e\x63\x65\x49\x44\x3d\x22\x78\x6d\x70\x2e\x69\x69\x64\
\x3a\x46\x36\x42\x35\x31\x43\x38\x44\x45\x38\x42\x42\x31\x31\x45\
\x38\x42\x46\x44\x46\x41\x41\x31\x43\x30\x33\x32\x39\x34\x41\x43\
\x32\x22\x20\x73\x74\x52\x65\x66\x3a\x64\x6f\x63\x75\x6d\x65\x6e\
\x74\x49\x44\x3d\x22\x78\x6d\x70\x2e\x64\x69\x64\x3a\x46\x36\x42\
\x35\x31\x43\x38\x45\x45\x38\x42\x42\x31\x31\x45\x38\x42\x46\x44\
\x46\x41\x41\x31\x43\x30\x33\x32\x39\x34\x41\x43\x32\x22\x2f\x3e\
\x20\x3c\x2f\x72\x64\x66\x3a\x44\x65\x73\x63\x72\x69\x70\x74\x69\
\x6f\x6e\x3e\x20\x3c\x2f\x72\x64\x66\x3a\x52\x44\x46\x3e\x20\x3c\
\x2f\x78\x3a\x78\x6d\x70\x6d\x65\x74\x61\x3e\x20\x3c\x3f\x78\x70\
\x61\x63\x6b\x65\x74\x20\x65\x6e\x64\x3d\x22\x72\x22\x3f\x3e\xbd\
\x1a\x2c\x7a\x00\x00\x00\x3d\x49\x44\x41\x54\x78\xda\x62\xfc\xff\
\xff\x3f\x03\x21\xc0\xc4\x40\x04\x60\x01\x11\x67\xcf\x9e\x45\x16\
\x03\x19\xcd\x08\x62\x18\x1b\x1b\x63\x35\xe9\x3f\x32\x0d\xd3\xcc\
\x84\x45\x01\x06\x9f\x09\x87\x02\x14\x85\x8c\x54\xf3\x1d\x51\x8a\
\x00\x02\x0c\x00\xb1\x49\x12\x6e\xce\x39\x93\xd8\x00\x00\x00\x00\
\x49\x45\x4e\x44\xae\x42\x60\x82\
\x00\x00\x00\x74\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x09\x00\x00\x00\x09\x08\x06\x00\x00\x00\xe0\x91\x06\x10\
\x00\x00\x00\x3b\x49\x44\x41\x54\x18\x95\x63\x60\x20\x02\x30\x22\
\x73\x8c\x8d\x8d\xff\x23\x8b\x9d\x3d\x7b\x96\x81\x81\x81\x81\x81\
\x09\x8b\xc6\xff\xe8\x02\xd8\x14\x61\x28\xc4\xa5\x08\x45\x21\x3e\
\x45\x8c\x84\x14\xa1\x78\x08\x9b\x22\x46\x2c\x62\x84\x01\x00\x9a\
\xdc\x07\x0e\x55\xa8\x58\x10\x00\x00\x00\x00\x49\x45\x4e\x44\xae\
\x42\x60\x82\
\x00\x00\x03\xca\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x09\x00\x00\x00\x09\x08\x06\x00\x00\x00\xe0\x91\x06\x10\
\x00\x00\x00\x19\x74\x45\x58\x74\x53\x6f\x66\x74\x77\x61\x72\x65\
\x00\x41\x64\x6f\x62\x65\x20\x49\x6d\x61\x67\x65\x52\x65\x61\x64\
\x79\x71\xc9\x65\x3c\x00\x00\x03\x22\x69\x54\x58\x74\x58\x4d\x4c\
\x3a\x63\x6f\x6d\x2e\x61\x64\x6f\x62\x65\x2e\x78\x6d\x70\x00\x00\
\x00\x00\x00\x3c\x3f\x78\x70\x61\x63\x6b\x65\x74\x20\x62\x65\x67\
\x69\x6e\x3d\x22\xef\xbb\xbf\x22\x20\x69\x64\x3d\x22\x57\x35\x4d\
\x30\x4d\x70\x43\x65\x68\x69\x48\x7a\x72\x65\x53\x7a\x4e\x54\x63\
\x7a\x6b\x63\x39\x64\x22\x3f\x3e\x20\x3c\x78\x3a\x78\x6d\x70\x6d\
\x65\x74\x61\x20\x78\x6d\x6c\x6e\x73\x3a\x78\x3d\x22\x61\x64\x6f\
\x62\x65\x3a\x6e\x73\x3a\x6d\x65\x74\x61\x2f\x22\x20\x78\x3a\x78\
\x6d\x70\x74\x6b\x3d\x22\x41\x64\x6f\x62\x65\x20\x58\x4d\x50\x20\
\x43\x6f\x72\x65\x20\x35\x2e\x33\x2d\x63\x30\x31\x31\x20\x36\x36\
\x2e\x31\x34\x35\x36\x36\x31\x2c\x20\x32\x30\x31\x32\x2f\x30\x32\
\x2f\x30\x36\x2d\x31\x34\x3a\x35\x36\x3a\x32\x37\x20\x20\x20\x20\
\x20\x20\x20\x20\x22\x3e\x20\x3c\x72\x64\x66\x3a\x52\x44\x46\x20\
\x78\x6d\x6c\x6e\x73\x3a\x72\x64\x66\x3d\x22\x68\x74\x74\x70\x3a\
\x2f\x2f\x77\x77\x77\x2e\x77\x33\x2e\x6f\x72\x67\x2f\x31\x39\x39\
\x39\x2f\x30\x32\x2f\x32\x32\x2d\x72\x64\x66\x2d\x73\x79\x6e\x74\
\x61\x78\x2d\x6e\x73\x23\x22\x3e\x20\x3c\x72\x64\x66\x3a\x44\x65\
\x73\x63\x72\x69\x70\x74\x69\x6f\x6e\x20\x72\x64\x66\x3a\x61\x62\
\x6f\x75\x74\x3d\x22\x22\x20\x78\x6d\x6c\x6e\x73\x3a\x78\x6d\x70\
\x3d\x22\x68\x74\x74\x70\x3a\x2f\x2f\x6e\x73\x2e\x61\x64\x6f\x62\
\x65\x2e\x63\x6f\x6d\x2f\x78\x61\x70\x2f\x31\x2e\x30\x2f\x22\x20\
\x78\x6d\x6c\x6e\x73\x3a\x78\x6d\x70\x4d\x4d\x3d\x22\x68\x74\x74\
\x70\x3a\x2f\x2f\x6e\x73\x2e\x61\x64\x6f\x62\x65\x2e\x63\x6f\x6d\
\x2f\x78\x61\x70\x2f\x31\x2e\x30\x2f\x6d\x6d\x2f\x22\x20\x78\x6d\
\x6c\x6e\x73\x3a\x73\x74\x52\x65\x66\x3d\x22\x68\x74\x74\x70\x3a\
\x2f\x2f\x6e\x73\x2e\x61\x64\x6f\x62\x65\x2e\x63\x6f\x6d\x2f\x78\
\x61\x70\x2f\x31\x2e\x30\x2f\x73\x54\x79\x70\x65\x2f\x52\x65\x73\
\x6f\x75\x72\x63\x65\x52\x65\x66\x23\x22\x20\x78\x6d\x70\x3a\x43\
\x72\x65\x61\x74\x6f\x72\x54\x6f\x6f\x6c\x3d\x22\x41\x64\x6f\x62\
\x65\x20\x50\x68\x6f\x74\x6f\x73\x68\x6f\x70\x20\x43\x53\x36\x20\
\x28\x57\x69\x6e\x64\x6f\x77\x73\x29\x22\x20\x78\x6d\x70\x4d\x4d\
\x3a\x49\x6e\x73\x74\x61\x6e\x63\x65\x49\x44\x3d\x22\x78\x6d\x70\
\x2e\x69\x69\x64\x3a\x31\x35\x39\x32\x45\x41\x41\x41\x45\x38\x42\
\x44\x31\x31\x45\x38\x39\x33\x38\x36\x44\x34\x38\x42\x34\x34\x35\
\x35\x32\x42\x45\x41\x22\x20\x78\x6d\x70\x4d\x4d\x3a\x44\x6f\x63\
\x75\x6d\x65\x6e\x74\x49\x44\x3d\x22\x78\x6d\x70\x2e\x64\x69\x64\
\x3a\x31\x35\x39\x32\x45\x41\x41\x42\x45\x38\x42\x44\x31\x31\x45\
\x38\x39\x33\x38\x36\x44\x34\x38\x42\x34\x34\x35\x35\x32\x42\x45\
\x41\x22\x3e\x20\x3c\x78\x6d\x70\x4d\x4d\x3a\x44\x65\x72\x69\x76\
\x65\x64\x46\x72\x6f\x6d\x20\x73\x74\x52\x65\x66\x3a\x69\x6e\x73\
\x74\x61\x6e\x63\x65\x49\x44\x3d\x22\x78\x6d\x70\x2e\x69\x69\x64\
\x3a\x31\x35\x39\x32\x45\x41\x41\x38\x45\x38\x42\x44\x31\x31\x45\
\x38\x39\x33\x38\x36\x44\x34\x38\x42\x34\x34\x35\x35\x32\x42\x45\
\x41\x22\x20\x73\x74\x52\x65\x66\x3a\x64\x6f\x63\x75\x6d\x65\x6e\
\x74\x49\x44\x3d\x22\x78\x6d\x70\x2e\x64\x69\x64\x3a\x31\x35\x39\
\x32\x45\x41\x41\x39\x45\x38\x42\x44\x31\x31\x45\x38\x39\x33\x38\
\x36\x44\x34\x38\x42\x34\x34\x35\x35\x32\x42\x45\x41\x22\x2f\x3e\
\x20\x3c\x2f\x72\x64\x66\x3a\x44\x65\x73\x63\x72\x69\x70\x74\x69\
\x6f\x6e\x3e\x20\x3c\x2f\x72\x64\x66\x3a\x52\x44\x46\x3e\x20\x3c\
\x2f\x78\x3a\x78\x6d\x70\x6d\x65\x74\x61\x3e\x20\x3c\x3f\x78\x70\
\x61\x63\x6b\x65\x74\x20\x65\x6e\x64\x3d\x22\x72\x22\x3f\x3e\x25\
\x1e\x21\xc9\x00\x00\x00\x3e\x49\x44\x41\x54\x78\xda\x62\x3c\x73\
\xe6\x0c\x03\x01\xf0\x9f\x89\x90\x02\x10\xc1\x44\x48\x01\x3e\x45\
\xff\x91\xd8\x8c\x4c\x68\x02\x18\x0a\x90\x4d\xfa\x8f\x4b\x01\xba\
\x75\x58\x15\xc0\x14\x31\xa2\x59\x87\xce\x87\x9b\xc4\x88\x4b\x01\
\x08\x00\x04\x18\x00\xcb\xfa\x0b\x75\x42\x7e\x04\xf0\x00\x00\x00\
\x00\x49\x45\x4e\x44\xae\x42\x60\x82\
\x00\x00\x00\x72\
\x89\
\x50\x4e\x47\x0d\x0a\x1a\x0a\x00\x00\x00\x0d\x49\x48\x44\x52\x00\
\x00\x00\x09\x00\x00\x00\x09\x08\x06\x00\x00\x00\xe0\x91\x06\x10\
\x00\x00\x00\x39\x49\x44\x41\x54\x18\x95\x63\x34\x36\x36\xfe\xcf\
\x40\x00\xb0\x40\x69\x46\x3c\x6a\xfe\x33\x61\x13\x44\x17\xc0\xa6\
\x08\xa7\x75\xe8\x26\xc0\xd8\x8c\xe8\x8a\x18\x91\x14\xa0\xb8\x91\
\x28\xeb\xb0\x29\xc2\xf0\x29\xcc\x3a\xbc\x61\x05\x00\xfd\x20\x07\
\xb1\x0e\xa6\xbf\xc7\x00\x00\x00\x00\x49\x45\x4e\x44\xae\x42\x60\
\x82\
"
qt_resource_name = b"\
\x00\x06\
\x07\x03\x7d\xc3\
\x00\x69\
\x00\x6d\x00\x61\x00\x67\x00\x65\x00\x73\
\x00\x0e\
\x0d\x4e\x7b\x47\
\x00\x62\
\x00\x72\x00\x61\x00\x6e\x00\x63\x00\x68\x00\x2d\x00\x65\x00\x6e\x00\x64\x00\x2e\x00\x70\x00\x6e\x00\x67\
\x00\x13\
\x0c\x50\x6a\x27\
\x00\x61\
\x00\x72\x00\x72\x00\x6f\x00\x77\x00\x5f\x00\x64\x00\x6f\x00\x77\x00\x6e\x00\x5f\x00\x64\x00\x61\x00\x72\x00\x6b\x00\x2e\x00\x70\
\x00\x6e\x00\x67\
\x00\x20\
\x0e\xe6\xe0\xe7\
\x00\x63\
\x00\x68\x00\x65\x00\x63\x00\x6b\x00\x62\x00\x6f\x00\x78\x00\x5f\x00\x69\x00\x6e\x00\x64\x00\x65\x00\x74\x00\x65\x00\x72\x00\x6d\
\x00\x69\x00\x6e\x00\x61\x00\x74\x00\x65\x00\x5f\x00\x6c\x00\x69\x00\x67\x00\x68\x00\x74\x00\x2e\x00\x70\x00\x6e\x00\x67\
\x00\x15\
\x03\xbd\xf5\x87\
\x00\x61\
\x00\x72\x00\x72\x00\x6f\x00\x77\x00\x5f\x00\x72\x00\x69\x00\x67\x00\x68\x00\x74\x00\x5f\x00\x6c\x00\x69\x00\x67\x00\x68\x00\x74\
\x00\x2e\x00\x70\x00\x6e\x00\x67\
\x00\x09\
\x00\x48\xad\x27\
\x00\x76\
\x00\x6c\x00\x69\x00\x6e\x00\x65\x00\x2e\x00\x70\x00\x6e\x00\x67\
\x00\x0f\
\x06\x16\x91\xe7\
\x00\x62\
\x00\x72\x00\x61\x00\x6e\x00\x63\x00\x68\x00\x2d\x00\x6d\x00\x6f\x00\x72\x00\x65\x00\x2e\x00\x70\x00\x6e\x00\x67\
\x00\x11\
\x03\xf9\x79\xa7\
\x00\x61\
\x00\x72\x00\x72\x00\x6f\x00\x77\x00\x5f\x00\x75\x00\x70\x00\x5f\x00\x64\x00\x61\x00\x72\x00\x6b\x00\x2e\x00\x70\x00\x6e\x00\x67\
\
\x00\x19\
\x01\x2a\xc1\x07\
\x00\x70\
\x00\x6f\x00\x70\x00\x75\x00\x70\x00\x5f\x00\x69\x00\x6e\x00\x64\x00\x69\x00\x63\x00\x61\x00\x74\x00\x6f\x00\x72\x00\x5f\x00\x6c\
\x00\x69\x00\x67\x00\x68\x00\x74\x00\x2e\x00\x70\x00\x6e\x00\x67\
\x00\x19\
\x00\x72\xec\xc7\
\x00\x63\
\x00\x68\x00\x65\x00\x63\x00\x6b\x00\x62\x00\x6f\x00\x78\x00\x5f\x00\x63\x00\x68\x00\x65\x00\x63\x00\x6b\x00\x65\x00\x64\x00\x5f\
\x00\x64\x00\x61\x00\x72\x00\x6b\x00\x2e\x00\x70\x00\x6e\x00\x67\
\x00\x1d\
\x04\x8e\xd7\x67\
\x00\x72\
\x00\x61\x00\x64\x00\x69\x00\x6f\x00\x62\x00\x75\x00\x74\x00\x74\x00\x6f\x00\x6e\x00\x5f\x00\x63\x00\x68\x00\x65\x00\x63\x00\x6b\
\x00\x65\x00\x64\x00\x5f\x00\x6c\x00\x69\x00\x67\x00\x68\x00\x74\x00\x2e\x00\x70\x00\x6e\x00\x67\
\x00\x0e\
\x0a\x5b\x66\xa7\
\x00\x63\
\x00\x72\x00\x6f\x00\x73\x00\x73\x00\x5f\x00\x64\x00\x61\x00\x72\x00\x6b\x00\x2e\x00\x70\x00\x6e\x00\x67\
\x00\x0f\
\x00\xa4\x38\xe7\
\x00\x63\
\x00\x72\x00\x6f\x00\x73\x00\x73\x00\x5f\x00\x6c\x00\x69\x00\x67\x00\x68\x00\x74\x00\x2e\x00\x70\x00\x6e\x00\x67\
\x00\x18\
\x0e\x47\x66\xa7\
\x00\x69\
\x00\x6e\x00\x64\x00\x69\x00\x63\x00\x61\x00\x74\x00\x6f\x00\x72\x00\x5f\x00\x6c\x00\x65\x00\x73\x00\x73\x00\x5f\x00\x6c\x00\x69\
\x00\x67\x00\x68\x00\x74\x00\x2e\x00\x70\x00\x6e\x00\x67\
\x00\x14\
\x0c\x07\x7e\x47\
\x00\x61\
\x00\x72\x00\x72\x00\x6f\x00\x77\x00\x5f\x00\x6c\x00\x65\x00\x66\x00\x74\x00\x5f\x00\x6c\x00\x69\x00\x67\x00\x68\x00\x74\x00\x2e\
\x00\x70\x00\x6e\x00\x67\
\x00\x1c\
\x06\x1d\xc0\x47\
\x00\x72\
\x00\x61\x00\x64\x00\x69\x00\x6f\x00\x62\x00\x75\x00\x74\x00\x74\x00\x6f\x00\x6e\x00\x5f\x00\x63\x00\x68\x00\x65\x00\x63\x00\x6b\
\x00\x65\x00\x64\x00\x5f\x00\x64\x00\x61\x00\x72\x00\x6b\x00\x2e\x00\x70\x00\x6e\x00\x67\
\x00\x13\
\x0f\x94\xaa\xc7\
\x00\x61\
\x00\x72\x00\x72\x00\x6f\x00\x77\x00\x5f\x00\x6c\x00\x65\x00\x66\x00\x74\x00\x5f\x00\x64\x00\x61\x00\x72\x00\x6b\x00\x2e\x00\x70\
\x00\x6e\x00\x67\
\x00\x14\
\x01\xd3\x70\x27\
\x00\x61\
\x00\x72\x00\x72\x00\x6f\x00\x77\x00\x5f\x00\x64\x00\x6f\x00\x77\x00\x6e\x00\x5f\x00\x6c\x00\x69\x00\x67\x00\x68\x00\x74\x00\x2e\
\x00\x70\x00\x6e\x00\x67\
\x00\x17\
\x08\xb8\xab\x47\
\x00\x69\
\x00\x6e\x00\x64\x00\x69\x00\x63\x00\x61\x00\x74\x00\x6f\x00\x72\x00\x5f\x00\x6c\x00\x65\x00\x73\x00\x73\x00\x5f\x00\x64\x00\x61\
\x00\x72\x00\x6b\x00\x2e\x00\x70\x00\x6e\x00\x67\
\x00\x1f\
\x0a\xbf\x43\x27\
\x00\x63\
\x00\x68\x00\x65\x00\x63\x00\x6b\x00\x62\x00\x6f\x00\x78\x00\x5f\x00\x69\x00\x6e\x00\x64\x00\x65\x00\x74\x00\x65\x00\x72\x00\x6d\
\x00\x69\x00\x6e\x00\x61\x00\x74\x00\x65\x00\x5f\x00\x64\x00\x61\x00\x72\x00\x6b\x00\x2e\x00\x70\x00\x6e\x00\x67\
\x00\x18\
\x0a\x1b\x66\x07\
\x00\x69\
\x00\x6e\x00\x64\x00\x69\x00\x63\x00\x61\x00\x74\x00\x6f\x00\x72\x00\x5f\x00\x6d\x00\x6f\x00\x72\x00\x65\x00\x5f\x00\x6c\x00\x69\
\x00\x67\x00\x68\x00\x74\x00\x2e\x00\x70\x00\x6e\x00\x67\
\x00\x18\
\x05\x43\x81\x27\
\x00\x70\
\x00\x6f\x00\x70\x00\x75\x00\x70\x00\x5f\x00\x69\x00\x6e\x00\x64\x00\x69\x00\x63\x00\x61\x00\x74\x00\x6f\x00\x72\x00\x5f\x00\x64\
\x00\x61\x00\x72\x00\x6b\x00\x2e\x00\x70\x00\x6e\x00\x67\
\x00\x12\
\x0a\x44\x49\xc7\
\x00\x61\
\x00\x72\x00\x72\x00\x6f\x00\x77\x00\x5f\x00\x75\x00\x70\x00\x5f\x00\x6c\x00\x69\x00\x67\x00\x68\x00\x74\x00\x2e\x00\x70\x00\x6e\
\x00\x67\
\x00\x14\
\x01\x6a\xf2\x67\
\x00\x61\
\x00\x72\x00\x72\x00\x6f\x00\x77\x00\x5f\x00\x72\x00\x69\x00\x67\x00\x68\x00\x74\x00\x5f\x00\x64\x00\x61\x00\x72\x00\x6b\x00\x2e\
\x00\x70\x00\x6e\x00\x67\
\x00\x1a\
\x02\x3c\x9f\xa7\
\x00\x63\
\x00\x68\x00\x65\x00\x63\x00\x6b\x00\x62\x00\x6f\x00\x78\x00\x5f\x00\x63\x00\x68\x00\x65\x00\x63\x00\x6b\x00\x65\x00\x64\x00\x5f\
\x00\x6c\x00\x69\x00\x67\x00\x68\x00\x74\x00\x2e\x00\x70\x00\x6e\x00\x67\
\x00\x17\
\x0d\xf4\xeb\x47\
\x00\x69\
\x00\x6e\x00\x64\x00\x69\x00\x63\x00\x61\x00\x74\x00\x6f\x00\x72\x00\x5f\x00\x6d\x00\x6f\x00\x72\x00\x65\x00\x5f\x00\x64\x00\x61\
\x00\x72\x00\x6b\x00\x2e\x00\x70\x00\x6e\x00\x67\
"
qt_resource_struct_v1 = b"\
\x00\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x01\
\x00\x00\x00\x00\x00\x02\x00\x00\x00\x19\x00\x00\x00\x02\
\x00\x00\x00\xd6\x00\x00\x00\x00\x00\x01\x00\x00\x08\x6c\
\x00\x00\x01\x72\x00\x00\x00\x00\x00\x01\x00\x00\x0a\xa7\
\x00\x00\x02\x0c\x00\x00\x00\x00\x00\x01\x00\x00\x0c\x84\
\x00\x00\x01\x3a\x00\x00\x00\x00\x00\x01\x00\x00\x09\xe5\
\x00\x00\x04\x3a\x00\x00\x00\x00\x00\x01\x00\x00\x22\x7c\
\x00\x00\x02\xfe\x00\x00\x00\x00\x00\x01\x00\x00\x15\xe7\
\x00\x00\x04\x68\x00\x00\x00\x00\x00\x01\x00\x00\x22\xf4\
\x00\x00\x00\xa6\x00\x00\x00\x00\x00\x01\x00\x00\x04\x9c\
\x00\x00\x01\x12\x00\x00\x00\x00\x00\x01\x00\x00\x09\x70\
\x00\x00\x01\xaa\x00\x00\x00\x00\x00\x01\x00\x00\x0b\x22\
\x00\x00\x03\xda\x00\x00\x00\x00\x00\x01\x00\x00\x1e\x4d\
\x00\x00\x00\xee\x00\x00\x00\x00\x00\x01\x00\x00\x08\xda\
\x00\x00\x02\x94\x00\x00\x00\x00\x00\x01\x00\x00\x14\xcd\
\x00\x00\x03\x2c\x00\x00\x00\x00\x00\x01\x00\x00\x19\xb3\
\x00\x00\x03\xa4\x00\x00\x00\x00\x00\x01\x00\x00\x1a\x83\
\x00\x00\x04\x10\x00\x00\x00\x00\x00\x01\x00\x00\x1e\xaf\
\x00\x00\x01\xea\x00\x00\x00\x00\x00\x01\x00\x00\x0b\xc8\
\x00\x00\x03\x60\x00\x00\x00\x00\x00\x01\x00\x00\x1a\x21\
\x00\x00\x02\x66\x00\x00\x00\x00\x00\x01\x00\x00\x10\xfe\
\x00\x00\x00\x34\x00\x00\x00\x00\x00\x01\x00\x00\x00\x78\
\x00\x00\x00\x12\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\
\x00\x00\x04\xa2\x00\x00\x00\x00\x00\x01\x00\x00\x26\xc2\
\x00\x00\x02\x30\x00\x00\x00\x00\x00\x01\x00\x00\x0d\x33\
\x00\x00\x00\x60\x00\x00\x00\x00\x00\x01\x00\x00\x00\xe2\
\x00\x00\x02\xd2\x00\x00\x00\x00\x00\x01\x00\x00\x15\x73\
"
qt_resource_struct_v2 = b"\
\x00\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x01\
\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x02\x00\x00\x00\x19\x00\x00\x00\x02\
\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\xd6\x00\x00\x00\x00\x00\x01\x00\x00\x08\x6c\
\x00\x00\x01\x67\xc8\xb0\x92\xb0\
\x00\x00\x01\x72\x00\x00\x00\x00\x00\x01\x00\x00\x0a\xa7\
\x00\x00\x01\x67\xc8\xaf\x6d\xb6\
\x00\x00\x02\x0c\x00\x00\x00\x00\x00\x01\x00\x00\x0c\x84\
\x00\x00\x01\x67\xc8\xb1\x39\x22\
\x00\x00\x01\x3a\x00\x00\x00\x00\x00\x01\x00\x00\x09\xe5\
\x00\x00\x01\x66\xf5\x0d\xf0\xb6\
\x00\x00\x04\x3a\x00\x00\x00\x00\x00\x01\x00\x00\x22\x7c\
\x00\x00\x01\x67\xc8\xae\xfb\x6a\
\x00\x00\x02\xfe\x00\x00\x00\x00\x00\x01\x00\x00\x15\xe7\
\x00\x00\x01\x67\x42\x41\x95\x69\
\x00\x00\x04\x68\x00\x00\x00\x00\x00\x01\x00\x00\x22\xf4\
\x00\x00\x01\x67\x42\x41\x96\xdf\
\x00\x00\x00\xa6\x00\x00\x00\x00\x00\x01\x00\x00\x04\x9c\
\x00\x00\x01\x67\x42\x41\x96\x31\
\x00\x00\x01\x12\x00\x00\x00\x00\x00\x01\x00\x00\x09\x70\
\x00\x00\x01\x67\xc8\xaf\x38\x3f\
\x00\x00\x01\xaa\x00\x00\x00\x00\x00\x01\x00\x00\x0b\x22\
\x00\x00\x01\x67\xc8\xb1\xf9\x90\
\x00\x00\x03\xda\x00\x00\x00\x00\x00\x01\x00\x00\x1e\x4d\
\x00\x00\x01\x67\xc8\xb1\x1a\xb8\
\x00\x00\x00\xee\x00\x00\x00\x00\x00\x01\x00\x00\x08\xda\
\x00\x00\x01\x67\xc8\xb0\xfc\x2f\
\x00\x00\x02\x94\x00\x00\x00\x00\x00\x01\x00\x00\x14\xcd\
\x00\x00\x01\x67\xc8\xae\xb3\x0a\
\x00\x00\x03\x2c\x00\x00\x00\x00\x00\x01\x00\x00\x19\xb3\
\x00\x00\x01\x67\xc8\xb0\x46\x24\
\x00\x00\x03\xa4\x00\x00\x00\x00\x00\x01\x00\x00\x1a\x83\
\x00\x00\x01\x67\x42\x41\x97\xf3\
\x00\x00\x04\x10\x00\x00\x00\x00\x00\x01\x00\x00\x1e\xaf\
\x00\x00\x01\x67\x42\x41\x96\x96\
\x00\x00\x01\xea\x00\x00\x00\x00\x00\x01\x00\x00\x0b\xc8\
\x00\x00\x01\x67\xc8\xae\x8c\xe9\
\x00\x00\x03\x60\x00\x00\x00\x00\x00\x01\x00\x00\x1a\x21\
\x00\x00\x01\x67\xc8\xb1\x5c\x5f\
\x00\x00\x02\x66\x00\x00\x00\x00\x00\x01\x00\x00\x10\xfe\
\x00\x00\x01\x67\x42\x41\x95\xcf\
\x00\x00\x00\x34\x00\x00\x00\x00\x00\x01\x00\x00\x00\x78\
\x00\x00\x01\x67\xc8\xae\x0e\x0f\
\x00\x00\x00\x12\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\
\x00\x00\x01\x67\xc8\xb0\xba\xc5\
\x00\x00\x04\xa2\x00\x00\x00\x00\x00\x01\x00\x00\x26\xc2\
\x00\x00\x01\x67\xc8\xaf\xa6\xe2\
\x00\x00\x02\x30\x00\x00\x00\x00\x00\x01\x00\x00\x0d\x33\
\x00\x00\x01\x67\x42\x41\x97\x8f\
\x00\x00\x00\x60\x00\x00\x00\x00\x00\x01\x00\x00\x00\xe2\
\x00\x00\x01\x67\x42\x41\x97\x3b\
\x00\x00\x02\xd2\x00\x00\x00\x00\x00\x01\x00\x00\x15\x73\
\x00\x00\x01\x67\xc8\xaf\xef\xf7\
"
qt_version = QtCore.qVersion().split('.')
if qt_version < ['5', '8', '0']:
rcc_version = 1
qt_resource_struct = qt_resource_struct_v1
else:
rcc_version = 2
qt_resource_struct = qt_resource_struct_v2
def qInitResources():
QtCore.qRegisterResourceData(rcc_version, qt_resource_struct, qt_resource_name, qt_resource_data)
def qCleanupResources():
QtCore.qUnregisterResourceData(rcc_version, qt_resource_struct, qt_resource_name, qt_resource_data)
qInitResources()
# This is a sample Python script.
# Press Shift+F10 to execute it or replace it with your code.
# Press Double Shift to search everywhere for classes, files, tool windows, actions, and settings.
import math
import os
import sys
# Press the green button in the gutter to run the script.
def read_tabu_files(path):
    if not os.path.exists(path):
        print('the path \'%s\' does not exist' % path)
        return dict()
    result = dict()
    f = open(path, "r")
    while True:
        name = f.readline().strip()
        line = f.readline().strip()
        if not line:
            break
        values = list()
        arr = line.split(' ')
        for i in range(len(arr)):
            if i > 0:
                values.append(float(arr[i]))
            else:
                # the first field is the benchmark file name; keep it as a string
                values.append(arr[i])
        result[name] = values
    f.close()
    return result
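The readers in this file all assume the same on-disk layout: a benchmark name on one line, followed by a space-separated record on the next. A minimal sketch of that pairing logic on an in-memory string (the benchmark name and field values below are made up for illustration):

```python
def parse_pairs(text):
    """Parse alternating name/record lines into a dict, read_tabu_files-style."""
    result = {}
    lines = iter(text.strip().splitlines())
    for name in lines:
        record = next(lines, '').split(' ')
        # keep the first field as a string, convert the rest to floats
        result[name] = [record[0]] + [float(x) for x in record[1:]]
    return result

sample = "qft_5\nqft_5.qasm 26 12 4 18 38 0.5 0.9\n"
parsed = parse_pairs(sample)
print(parsed["qft_5"][4])  # -> 18.0
```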
def read_tabu_files1(path):
    if not os.path.exists(path):
        print('the path \'%s\' does not exist' % path)
        return dict()
    result = dict()
    f = open(path, "r")
    while True:
        name = f.readline().strip()
        line = f.readline().strip()
        if not line:
            break
        # every field of the record line is numeric
        result[name] = [float(x) for x in line.split(' ')]
    f.close()
    return result
def read_sabre_files(path):
    result = dict()
    f = open(path, "r")
    while True:
        name = f.readline().strip()
        line = f.readline().strip()
        if not line:
            break
        arr = line.split(' ')
        result[name] = [float(arr[0]), float(arr[1]), float(arr[2])]
    f.close()
    return result
def readOptm(path):
    result = dict()
    f = open(path, "r")
    rows = list()
    key = ''
    while True:
        line = f.readline().strip()
        if not line:
            break
        arr = line.split(' ')
        if len(arr) == 4 or len(arr) == 5:
            rows.append([float(x) for x in arr])
        else:
            # a name line starts a new block: store the rows gathered so far
            if key != '' and len(rows) > 0:
                result[key] = rows
            rows = list()
            key = line
    # store the final block as well
    if key != '' and len(rows) > 0:
        result[key] = rows
    f.close()
    return result
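readOptm handles a slightly different layout: a name line followed by several 4- or 5-field numeric rows that all belong to it. A self-contained sketch of that grouping, taking care to flush the final block at end-of-input; the sample names and numbers are invented:

```python
def parse_blocks(text):
    """Group consecutive 4/5-field numeric rows under the preceding name line."""
    blocks, rows, key = {}, [], None
    for line in text.strip().splitlines():
        fields = line.split(' ')
        try:
            row = [float(f) for f in fields]
        except ValueError:
            row = None  # a non-numeric line is a block name
        if row is not None and len(row) in (4, 5):
            rows.append(row)
        else:
            if key is not None and rows:
                blocks[key] = rows
            rows, key = [], line
    if key is not None and rows:  # flush the last block
        blocks[key] = rows
    return blocks

sample = "circ_a\n1 2 3 4\n5 6 7 8\ncirc_b\n9 8 7 6 5\n"
blocks = parse_blocks(sample)
print(len(blocks["circ_a"]))  # -> 2
```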
def read_topgraph_files(path):
    if not os.path.exists(path):
        print('the path \'%s\' does not exist' % path)
        return dict()
    result = dict()
    f = open(path, "r")
    while True:
        name = f.readline().strip()
        line = f.readline().strip()
        if not line:
            break
        arr = line.split(' ')
        # 9999999 marks a failed run; skip those entries entirely
        if float(arr[0]) != 9999999:
            result[name] = [float(arr[0]), float(arr[1])]
    f.close()
    return result
def selectTheBestResultFromFiles(path):
files = os.listdir(path)
mingatesum=999999999
mingatename=''
for file in files:
map = read_tabu_files('%s/%s'%(path,file))
gatesum = 0
it1 = map.keys()
# if len(it1)!=159:
# continue
for k1 in it1:
gatesum += map.get(k1)[5]
if gatesum < mingatesum:
mingatesum = gatesum
mingatename=file
# print('%s %s %s %f'%(file,file[30:],file[19:23], math.log(gatesum,10)))
print('%s %d'%(file, gatesum))
print("the minimal file is [[%s]], consisting of [[%d]] gates. "%(mingatename,mingatesum))
def selectTheMinimalDepthFromFiles(path,name,exam):
po = open(name, "w")
files = os.listdir(path)
res=dict()
mapu=read_tabu_files(exam)
it=mapu.keys()
gatesum = 0
print('[Minimal mapping index] [Number of 2-qubit gates of the initial circuit] [Depth of the generated circuit] \n[Number of SWAP inserted] [Number of look-ahead layers] [Attenuation factor] [Runtime] ')
for k in it:
res['%s'%k]=mapu.get(k)
for file in files:
map = read_tabu_files('%s/%s' % (path, file))
it1 = map.keys()
for k1 in it1:
if k1.__eq__(k):
if res['%s'%k][4]>map.get(k1)[4]:
res['%s' % k]=map.get(k1)
break
if res['%s' % k][4]!=999999999:
gatesum += res['%s' % k][4]
key = '%s' % k
print(key)
po.write('%s\n' % k)
for i in range(len(res[key])):
# print(res[key][i],end=' ')
if i != 0 and i != 6 and i != 7:
print('%s ' % res[key][i], end=' ')
po.write('%s ' % res[key][i])
print()
po.write('\n')
po.flush()
po.close()
print("the minimal file [[%s]], depth: %d " %(name,gatesum))
def selectTheMinimalGatesFromFiles(path,name,exam):
po = open(name, "w")
files = os.listdir(path)
res=dict()
mapu=read_tabu_files(exam)
it=mapu.keys()
gatesum = 0
print('[Minimal mapping index] [Number of gates of the initial circuit] [Depth of the generated circuit] \n[Number of SWAP inserted] [Runtime] ')
for k in it:
res['%s'%k]=mapu.get(k)
for file in files:
map = read_tabu_files('%s/%s' % (path, file))
it1 = map.keys()
for k1 in it1:
if k1.__eq__(k):
if res['%s'%k][5]>map.get(k1)[5]:
res['%s' % k]=map.get(k1)
break
if res['%s' % k][5] != 999999999:
gatesum += res['%s' % k][5]
key = '%s' % k
print(key)
po.write('%s\n' % k)
for i in range(len(res[key])):
# print('9999999 ', end=' ')
if i != 0 and i != 6 and i != 7:
print('%s ' % res[key][i], end=' ')
po.write('%s ' % res[key][i])
print()
po.write('\n')
po.flush()
po.close()
print("the minimal file [[%s]] consists of %d gates " %(name,gatesum))
#mini ./results/qct/ ./results/test/tsa1
def caculateTheAdjustTSA():
print('_________________________comparison of <*_TSA_num>______________________')
fcca = "./results/data/tsa/fidsl_tsa"
fidslcca = read_tabu_files1(fcca)
tcca = "./results/data/tsa/tsa"
tsacca = read_tabu_files1(tcca)
occa = "./results/data/tsa/ga_tsa"
optmcca = read_tabu_files1(occa)
sabrestr = "./results/data/tsa/sabre_tsa"
sabremap = read_tabu_files1(sabrestr)
names=list()
it = tsacca.keys()
for k in it:
if fidslcca.get(k) ==None:
names.append(k)
elif optmcca.get(k) == None:
names.append(k)
elif sabremap.get(k) == None:
names.append(k)
for i in range(len(names)):
del tsacca[names[i]]
tsagate = 0
optmgate = 0
fidslgate = 0
ori = 0
sabregate = 0
it = tsacca.keys()
for k in it:
tsagate += tsacca.get(k)[4] * 3
fidsltsa = fidslcca.get(k)
fidslgate += fidsltsa[4] * 3
optm = optmcca.get(k)
ori += tsacca.get(k)[1]
optmgate += optm[4] * 3
sab = sabremap.get(k)
sabregate += sab[4] * 3
print("number of case:" , len(tsacca), "ORI: " , ori , "SABRE: " , sabregate , "TSA_num: " , tsagate ,
"GA: " , optmgate , "FiDSL: " , fidslgate)
print("GA: " , (optmgate - tsagate + 0.0) / optmgate)
print("SABRE: " , (sabregate - tsagate + 0.0) / sabregate)
print("FiDSL: " , (fidslgate - tsagate + 0.0) / fidslgate)
def caculateTheAdjustCCA():
print('_________________________comparison of <*_TSA_cca>______________________')
fcca = "./results/data/cca/fidsl_cca"
fidslcca = read_tabu_files(fcca)
tcca = "./results/data/cca/tsa_cca"
tsacca = read_tabu_files(tcca)
occa = "./results/data/cca/ga_cca"
optmcca = read_tabu_files(occa)
sabrestr = "./results/data/cca/sabre_cca"
sabremap = read_tabu_files(sabrestr)
names = list()
it = tsacca.keys()
for k in it:
if fidslcca.get(k) == None:
names.append(k)
elif optmcca.get(k) == None:
names.append(k)
elif sabremap.get(k) == None:
names.append(k)
for i in range(len(names)):
del tsacca[names[i]]
tsagate = 0
optmgate = 0
fidslgate = 0
ori = 0
sabregate = 0
it = tsacca.keys()
for k in it:
tsagate += tsacca.get(k)[4] * 3
fidsltsa = fidslcca.get(k)
fidslgate += fidsltsa[4] * 3
optm = optmcca.get(k)
ori += tsacca.get(k)[1]
optmgate += optm[4] * 3
sab = sabremap.get(k)
sabregate += sab[4] * 3
print("number of case: ", len(tsacca), "ORI: ", ori,", GA: ", optmgate, ", SABRE: ", sabregate,
", FiDSL: ", fidslgate, ", TSA_cca: ", tsagate)
print("(GA-TSA_cca)/GA: ", (optmgate - tsagate + 0.0) / optmgate)
print("(SABRE-TSA_cca)/SABRE: ", (sabregate - tsagate + 0.0) / sabregate)
print("(FiDSL-TSA_cca)/FiDSL: ", (fidslgate - tsagate + 0.0) / fidslgate)
def caculateTheAdjustOptm():
print('_________________________comparison of <*_GA>______________________')
sabreoptm = "./results/data/optm/sabre_optm"
sabremap = read_sabre_files(sabreoptm)
optmStr = "./results/data/optm/total_A_ini_connect"
optmmap = readOptm(optmStr)
optmStr1 = "./results/data/optm/GA_num"
optmmap1 = readOptm(optmStr1)
tsamap = read_tabu_files1("./results/data/tsa/tsa")
tsagate = 0
optmgate = 0
fidslgate = 0
ori = 0
count = 0
sabregate = 0
it = tsamap.keys()
for k in it:
optm = optmmap.get(k)
optm1 = optmmap1.get(k)
if optm[2][3] != 9999999 and optm[3][3] != 9999999 and optm1[1][3] != 9999999 and sabremap.get(k)[
1] != 9999999:
count += 1
ori += tsamap.get(k)[1]
tsagate += optm[3][3] * 3
fidslgate += optm[2][3] * 3
optmgate += optm1[1][3] * 3
sabregate += sabremap.get(k)[2] * 3
print("number of case: ", count, "ORI: ", ori,", GA: ", optmgate, ", SABRE: ", sabregate, ", FiDSL: ", fidslgate, ", TSA_num: ", tsagate)
print("(GA-TSA_num)/GA: ", (optmgate - tsagate + 0.0) / optmgate)
print("(SABRE-TSA_num)/SABRE: ", (sabregate - tsagate + 0.0) / sabregate)
print("(FiDSL-TSA_num)/FiDSL: ", (fidslgate - tsagate + 0.0) / fidslgate)
def caculateTheAdjustFiDSL():
print('_________________________comparison of <*_FiDSL>______________________')
fcca = "./results/data/fidsl/fidsl"
fidslcca = read_topgraph_files(fcca)
tcca = "./results/data/fidsl/tsa_fidsl"
tsacca = read_topgraph_files(tcca)
occa = "./results/data/fidsl/optm_fidsl"
optmcca = read_topgraph_files(occa)
sabre = "./results/data/fidsl/sabre_fidsl"
sabremap = read_topgraph_files(sabre)
names = list()
it = tsacca.keys()
for k in it:
if fidslcca.get(k) == None or fidslcca.get(k)[1]==9999999:
names.append(k)
elif optmcca.get(k) == None or optmcca.get(k)[1]==9999999:
names.append(k)
elif sabremap.get(k) == None or sabremap.get(k)[1]==9999999:
names.append(k)
for i in range(len(names)):
del tsacca[names[i]]
tsagate = 0
optmgate = 0
fidslgate = 0
ori = 0
sabregate = 0
it = tsacca.keys()
for k in it:
tsagate += tsacca.get(k)[1] * 3
fidsltsa = fidslcca.get(k)
fidslgate += fidsltsa[1] * 3
optm = optmcca.get(k)
ori += tsacca.get(k)[1]
optmgate += optm[1] * 3
sab = sabremap.get(k)
sabregate += sab[1] * 3
print("number of case: ", len(tsacca), "ORI: ",
", GA: ", optmgate, ori, ", SABRE: ", sabregate, ", FiDSL: ", fidslgate, ", TSA_num: ", tsagate)
print("(GA-TSA_num)/GA: ", (optmgate - tsagate + 0.0) / optmgate)
print("(SABRE-TSA_num)/SABRE: ", (sabregate - tsagate + 0.0) / sabregate)
print("(FiDSL-TSA_num)/FiDSL: ", (fidslgate - tsagate + 0.0) / fidslgate)
def caculateTheAdjustSABRE():
print('_________________________comparison of <*_SABRE>______________________')
fcca = "./results/data/sabre/fidsl_sabre"
fidslcca = read_sabre_files(fcca)
tcca = "./results/data/sabre/tsa_sabre"
tsacca = read_sabre_files(tcca)
occa = "./results/data/sabre/optm_sabre"
optmcca = read_sabre_files(occa)
sabre = "./results/data/sabre/sabre"
sabremap = read_sabre_files(sabre)
names = list()
it = tsacca.keys()
for k in it:
if fidslcca.get(k) == None or fidslcca.get(k)[1] == 9999999:
names.append(k)
elif optmcca.get(k) == None or optmcca.get(k)[1] == 9999999:
names.append(k)
elif sabremap.get(k) == None or sabremap.get(k)[1] == 9999999:
names.append(k)
for i in range(len(names)):
del tsacca[names[i]]
tsagate = 0
optmgate = 0
fidslgate = 0
ori = 0
sabregate = 0
it = tsacca.keys()
for k in it:
tsagate += tsacca.get(k)[1]
fidsltsa = fidslcca.get(k)
fidslgate += fidsltsa[1]
optm = optmcca.get(k)
ori += tsacca.get(k)[0]
optmgate += optm[1]
sab = sabremap.get(k)
sabregate += sab[1]
print("number of case: ", len(tsacca), "ORI: ", ori,
", GA: ", optmgate, ", SABRE: ", sabregate, ", FiDSL: ", fidslgate, ", TSA_num: ", tsagate)
print("(GA-TSA_num)/GA: ", (optmgate - tsagate + 0.0) / optmgate)
print("(SABRE-TSA_num)/SABRE: ", (sabregate - tsagate + 0.0) / sabregate)
print("(FiDSL-TSA_num)/FiDSL: ", (fidslgate - tsagate + 0.0) / fidslgate)
def caculateFiDSL_():
print('________________comparison of <FiDSL_*>__________________')
topgraph = "./results/data/fidsl/fidsl"
fidslmap = read_topgraph_files(topgraph)
optmStr = "./results/data/optm/total_A_ini_connect"
optmmap = readOptm(optmStr)
# optmStr1 = "./results/data/optm/GA_num"
# optm_result1 = readOptm(optmStr1)
# optmStr = "./results/data/optm/sabre_optm"
# optmmap = read_sabre_files(optmStr)
sabrestr = "./results/data/sabre/fidsl_sabre"
sabremap = read_sabre_files(sabrestr)
count = 0
tsamap = read_tabu_files1("./results/data/tsa/fidsl_tsa")
ccamap = read_tabu_files1("./results/data/cca/fidsl_cca")
names = list()
it = tsamap.keys()
for k in it:
if (optmmap.get(k) == None or optmmap.get(k)[2][3] == 9999999):
names.append(k)
if fidslmap.get(k) == None or fidslmap.get(k)[1]==9999999:
names.append(k)
# if optmmap.get(k) == None or optmmap.get(k)[1]==9999999:
# names.append(k)
if sabremap.get(k) == None or sabremap.get(k)[1]==9999999:
names.append(k)
for i in range(len(names)):
if names[i] in tsamap.keys():
del tsamap[names[i]]
ccagate = 0
tsagate = 0
optmgate = 0
fidslgate = 0
ori = 0
sabregate = 0
it = tsamap.keys()
for k in it:
tsagate += tsamap.get(k)[4] * 3
fidslgate += fidslmap.get(k)[1] * 3
ori += tsamap.get(k)[1]
optmgate += optmmap.get(k)[2][3] * 3
ccagate += ccamap.get(k)[4] * 3
sabregate += sabremap.get(k)[1]
print("number of case: ", len(tsamap),
"GA: ", optmgate, "SABRE: ", sabregate, "FiDSL: ", fidslgate, "TSA_num: ", tsagate,", TSA_cca: ",ccagate)
print("(GA-TSA_num)/GA: ", (optmgate - tsagate + 0.0) / optmgate)
print("(SABRE-TSA_num)/SABRE: ", (sabregate - tsagate + 0.0) / sabregate)
print("(FiDSL-TSA_num)/FiDSL: ", (fidslgate - tsagate + 0.0) / fidslgate)
print("(TSA_cca-TSA_num)/TSA_cca: ",(ccagate-tsagate+0.0)/ccagate)
def caculateTSA_():
print('________________comparison of <TSA_*>__________________')
topgraph = "./results/data/fidsl/tsa_fidsl"
fidslmap = read_topgraph_files(topgraph)
optmStr = "./results/data/optm/total_A_ini_connect"
optmmap = readOptm(optmStr)
sabrestr = "./results/data/sabre/tsa_sabre"
sabremap = read_sabre_files(sabrestr)
tsamap = read_tabu_files1("./results/data/tsa/tsa")
ccamap = read_tabu_files1("./results/data/cca/tsa_cca")
names = list()
it = tsamap.keys()
for k in it:
if (optmmap.get(k) == None or optmmap.get(k)[3][3] - 9999999==0):
names.append(k)
if fidslmap.get(k) == None or fidslmap.get(k)[1] - 9999999==0:
names.append(k)
if sabremap.get(k) == None or sabremap.get(k)[1]- 9999999==0:
names.append(k)
for i in range(len(names)):
if names[i] in tsamap.keys():
del tsamap[names[i]]
ccagate = 0
tsagate = 0
optmgate = 0
fidslgate = 0
ori = 0
sabregate = 0
it = tsamap.keys()
for k in it:
tsagate += tsamap.get(k)[4] * 3
fidslgate += fidslmap.get(k)[1] * 3
ori += tsamap.get(k)[1]
optmgate += optmmap.get(k)[3][3] * 3
ccagate += ccamap.get(k)[4] * 3
sabregate += sabremap.get(k)[1]
print("number of case: ", len(tsamap), "ori: ",ori,", GA: ", optmgate, ", SABRE: ", sabregate,
", FiDSL: ", fidslgate, ", TSA_num: ", tsagate,", TSA_cca: ",ccagate)
print("(GA-TSA_num)/GA: ", (optmgate - tsagate + 0.0) / optmgate)
print("(SABRE-TSA_num)/SABRE: ", (sabregate - tsagate + 0.0) / sabregate)
print("(FiDSL-TSA_num)/FiDSL: ", (fidslgate - tsagate + 0.0) / fidslgate)
print("(TSA_cca-TSA_num)/TSA_cca: ",(ccagate-tsagate+0.0)/ccagate)
def caculateOptm_():
print('________________comparison of <GA_*>__________________')
topgraph = "./results/data/fidsl/optm_fidsl"
fidslmap = read_topgraph_files(topgraph)
optmStr = "./results/data/optm/total_A_ini_connect"
optmmap = readOptm(optmStr)
sabrestr = "./results/data/sabre/optm_sabre"
sabremap = read_sabre_files(sabrestr)
tsamap = read_tabu_files1("./results/data/tsa/ga_tsa")
ccamap = read_tabu_files1("./results/data/cca/ga_cca")
names = list()
it = tsamap.keys()
for k in it:
if (optmmap.get(k) == None or optmmap.get(k)[1][3] - 9999999==0):
names.append(k)
if fidslmap.get(k) == None or fidslmap.get(k)[1] - 9999999==0:
names.append(k)
if sabremap.get(k) == None or sabremap.get(k)[1]- 9999999==0:
names.append(k)
for i in range(len(names)):
if names[i] in tsamap.keys():
del tsamap[names[i]]
ccagate = 0
tsagate = 0
optmgate = 0
fidslgate = 0
ori = 0
sabregate = 0
it = tsamap.keys()
for k in it:
tsagate += tsamap.get(k)[4] * 3
fidslgate += fidslmap.get(k)[1] * 3
ori += tsamap.get(k)[1]
optmgate += optmmap.get(k)[1][3] * 3
ccagate += ccamap.get(k)[4] * 3
sabregate += sabremap.get(k)[1]
print("number of case: ", len(tsamap), "ori: ", ori, ", GA: ", optmgate, ", SABRE: ", sabregate,
", FiDSL: ", fidslgate, ", TSA_num: ", tsagate, ", TSA_cca: ", ccagate)
print("(GA-TSA_num)/GA: ", (optmgate - tsagate + 0.0) / optmgate)
print("(SABRE-TSA_num)/SABRE: ", (sabregate - tsagate + 0.0) / sabregate)
print("(FiDSL-TSA_num)/FiDSL: ", (fidslgate - tsagate + 0.0) / fidslgate)
print("(TSA_cca-TSA_num)/TSA_cca: ",(ccagate-tsagate+0.0)/ccagate)
def caculateSABRE_():
print('________________comparison of <SABRE_*>__________________')
topgraph = "./results/data/fidsl/sabre_fidsl"
fidslmap = read_topgraph_files(topgraph)
optmStr = "./results/data/optm/sabre_optm"
optmmap = read_sabre_files(optmStr)
sabrestr = "./results/data/sabre/sabre"
sabremap = read_sabre_files(sabrestr)
tsamap = read_tabu_files1("./results/data/tsa/sabre_tsa")
ccamap = read_tabu_files1("./results/data/cca/sabre_cca")
names = list()
it = tsamap.keys()
for k in it:
if (optmmap.get(k) == None or optmmap.get(k)[2]- 9999999==0):
names.append(k)
if fidslmap.get(k) == None or fidslmap.get(k)[1] - 9999999==0:
names.append(k)
if sabremap.get(k) == None or sabremap.get(k)[1] -9999999==0:
names.append(k)
for i in range(len(names)):
if names[i] in tsamap.keys():
del tsamap[names[i]]
ccagate = 0
tsagate = 0
optmgate = 0
fidslgate = 0
ori = 0
sabregate = 0
it = tsamap.keys()
for k in it:
tsagate += tsamap.get(k)[4] * 3
fidslgate += fidslmap.get(k)[1] * 3
ori += tsamap.get(k)[1]
optmgate += optmmap.get(k)[2] * 3
ccagate += ccamap.get(k)[4] * 3
sabregate += sabremap.get(k)[1]
print("number of case: ", len(tsamap), "ori: ", ori,
", GA: ", optmgate, ", SABRE: ", sabregate, ", FiDSL: ", fidslgate, ", TSA_num: ", tsagate, ", TSA_cca: ", ccagate)
print("(GA-TSA_num)/GA: ", (optmgate - tsagate + 0.0) / optmgate)
print("(SABRE-TSA_num)/SABRE: ", (sabregate - tsagate + 0.0) / sabregate)
print("(FiDSL-TSA_num)/FiDSL: ", (fidslgate - tsagate + 0.0) / fidslgate)
print("(TSA_cca-TSA_num)/TSA_cca: ",(ccagate-tsagate+0.0)/ccagate)
def compareSABRE_TSA(sabrepath,tsapath,type):
sabremap = read_sabre_files(sabrepath)
tsamap = read_tabu_files1(tsapath)
greater_top_gql = 0
greater_gql_top = 0
eq_gql_top = 0
gate_top = 0
gate_gql = 0
gate_gql_all = 0
pub_res = 0
pro_top_gql = 0
pro_gql_top = 0
sabretime=0
tsatime=0
it=tsamap.keys()
for k in it:
v1list=tsamap.get(k)
gate_gql_all += v1list[1]
if type=='small':
if v1list[1]>100:
continue
elif type=='medium':
if v1list[1]<=100 or v1list[1]>1000:
continue
elif type=='large':
if v1list[1]<=1000:
continue
        if sabremap.get(k) is None:
continue
        # compare the number of inserted SWAP gates
v1 = v1list[4] * 3
v2 = sabremap.get(k)[1]
pub_res += 1
gate_gql += v1
gate_top += v2
tsatime+=v1list[5]
sabretime+=sabremap.get(k)[2]
if v1 < v2:
greater_gql_top +=1
pro_gql_top += v2 - v1
elif (v2 < v1) :
# print(k)
# print(v2 , " ", v1)
greater_top_gql += 1
pro_top_gql += v1 - v2
else:
eq_gql_top += 1
print("number of successful SABRE case:", len(sabremap), ", TSA case:", len(tsamap))
print("both successful case:", pub_res)
print("number of gates inserted: SABRE:", gate_top ,", TSA:", gate_gql)
print("number of case: SABRE < TSA: ", greater_top_gql, ", TSA < SABRE: ", greater_gql_top, ", equal: ", eq_gql_top)
print("(SABRE-TSA)/SABRE:", (gate_top - gate_gql + 0.0) / gate_top * 100, "% ")
    print('TSA time: %s, SABRE time: %s' % (tsatime, sabretime))
def compareFiDSL_TSA(fidslpath,tsapath,type):
fidslmap = read_topgraph_files(fidslpath)
tsamap = read_tabu_files1(tsapath)
greater_top_gql = 0
greater_gql_top = 0
eq_gql_top = 0
gate_top = 0
gate_gql = 0
gate_gql_all = 0
pub_res = 0
pro_top_gql = 0
pro_gql_top = 0
tsatime=0
it=tsamap.keys()
for k in it:
v1list=tsamap.get(k)
if type=='small':
if v1list[1]>100:
continue
elif type=='medium':
if v1list[1]<=100 or v1list[1]>1000:
continue
elif type=='large':
if v1list[1]<=1000:
continue
        if fidslmap.get(k) is None:
continue
        # compare the number of inserted SWAP gates
v1 = v1list[4]*3
tsatime+=v1list[5]
v2 = fidslmap.get(k)[1]*3
pub_res += 1
gate_gql += v1
gate_top += v2
if v1 < v2:
greater_gql_top +=1
pro_gql_top += v2 - v1
elif (v2 < v1) :
# print(k)
# print(v2 , " ", v1)
greater_top_gql += 1
pro_top_gql += v1 - v2
else:
eq_gql_top += 1
print("number of successful FiDSL case:", len(fidslmap), ", TSA case:", len(tsamap))
print("both successful case:", pub_res)
print("number of gates inserted: FiDSL:", gate_top, " TSA:", gate_gql)
print("number of case: FiDSL < TSA: ", greater_top_gql, ", TSA < FiDSL: ", greater_gql_top, ", equal: ", eq_gql_top)
    print("(FiDSL-TSA)/FiDSL:", (gate_top - gate_gql + 0.0) / gate_top * 100, "% ")
print('TSA time: %s'%tsatime)
def comparenumCCA_TSA(ccapath,tsapath,type='all'):
ccamap = read_tabu_files1(ccapath)
tsamap = read_tabu_files1(tsapath)
greater_top_gql = 0
greater_gql_top = 0
eq_gql_top = 0
gate_top = 0
gate_gql = 0
pub_res = 0
pro_top_gql = 0
pro_gql_top = 0
tsatime=0
ccatime=0
it=tsamap.keys()
for k in it:
v1list=tsamap.get(k)
if type=='small':
if v1list[1]>100:
continue
elif type=='medium':
if v1list[1]<=100 or v1list[1]>1000:
continue
elif type=='large':
if v1list[1]<=1000:
continue
        if ccamap.get(k) is None:
continue
        # compare the number of inserted SWAP gates
v1 = v1list[4]*3
v2 = ccamap.get(k)[4]*3
pub_res += 1
tsatime+=v1list[5]
ccatime+=ccamap.get(k)[5]
gate_gql += v1
gate_top += v2
if v1 < v2:
greater_gql_top +=1
pro_gql_top += v2 - v1
elif (v2 < v1) :
greater_top_gql += 1
pro_top_gql += v1 - v2
else:
eq_gql_top += 1
print("number of successful TSA_cca case:", len(ccamap), ", TSA_num case:", len(tsamap))
print("both successful case:", pub_res)
print("number of gates inserted: TSA_cca:", gate_top, " TSA_num:", gate_gql)
print("number of case: TSA_cca < TSA_num: ", greater_top_gql, ", TSA_num < TSA_cca: ", greater_gql_top, ", equal: ", eq_gql_top)
print("(TSA_cca-TSA_num)/TSA_cca:", (gate_top - gate_gql + 0.0) / gate_top * 100, "% ")
print('TSA time: %s, CCA time: %s' %(tsatime,ccatime))
def comparenumDepth_TSA(depthpath,tsapath):
depthmap = read_tabu_files1(depthpath)
tsamap = read_tabu_files1(tsapath)
greater_top_gql = 0
greater_gql_top = 0
eq_gql_top = 0
gate_top = 0
gate_gql = 0
pub_res = 0
pro_top_gql = 0
pro_gql_top = 0
it=tsamap.keys()
for k in it:
v1list=tsamap.get(k)
        if depthmap.get(k) is None:
continue
        # compare the number of inserted SWAP gates
v1 = v1list[4]*3
v2 = depthmap.get(k)[4]*3
pub_res += 1
gate_gql += v1
gate_top += v2
if v1 < v2:
greater_gql_top +=1
pro_gql_top += v2 - v1
elif (v2 < v1) :
greater_top_gql += 1
pro_top_gql += v1 - v2
else:
eq_gql_top += 1
print("number of successful TSA_depth case:", len(depthmap), ", TSA_num case:", len(tsamap))
print("both successful case:", pub_res)
print("number of gates inserted: TSA_depth:", gate_top, " TSA_num:", gate_gql)
print("case: TSA_depth < TSA_num: ", greater_top_gql, ", TSA_num < TSA_depth: ", greater_gql_top, ", equal: ", eq_gql_top)
print("(TSA_depth-TSA_num)/TSA_depth:", (gate_top - gate_gql + 0.0) / gate_top * 100, "% ")
def comparenumCCA_Depth(ccapath,depthpath):
ccamap = read_tabu_files1(ccapath)
depthmap = read_tabu_files1(depthpath)
greater_top_gql = 0
greater_gql_top = 0
eq_gql_top = 0
gate_top = 0
gate_gql = 0
pub_res = 0
pro_top_gql = 0
pro_gql_top = 0
it=depthmap.keys()
for k in it:
v1list=depthmap.get(k)
        if ccamap.get(k) is None:
continue
        # compare the number of inserted SWAP gates
v1 = v1list[4]*3
v2 = ccamap.get(k)[4]*3
pub_res += 1
gate_gql += v1
gate_top += v2
if v1 < v2:
greater_gql_top +=1
pro_gql_top += v2 - v1
elif (v2 < v1) :
greater_top_gql += 1
pro_top_gql += v1 - v2
else:
eq_gql_top += 1
print("number of successful TSA_cca case:", len(ccamap), " TSA_depth case:", len(depthmap))
print("both successful case:", pub_res)
print("number of gates inserted: TSA_cca:", gate_top, " TSA_depth:", gate_gql)
print("case: TSA_cca < TSA_depth: ", greater_top_gql, ", TSA_depth < TSA_cca: ", greater_gql_top, ", equal: ", eq_gql_top)
print("(TSA_depth-TSA_cca)/TSA_depth:", (gate_gql - gate_top + 0.0) / gate_gql * 100, "% ")
def comparedepthCCA_TSA(ccapath, tsapath, type='all'):
ccamap = read_tabu_files1(ccapath)
tsamap = read_tabu_files1(tsapath)
greater_top_gql = 0
greater_gql_top = 0
eq_gql_top = 0
gate_top = 0
gate_gql = 0
pub_res = 0
pro_top_gql = 0
pro_gql_top = 0
tsatime = 0
ccatime = 0
it = tsamap.keys()
for k in it:
v1list = tsamap.get(k)
if type == 'small':
if v1list[1] > 100:
continue
elif type == 'medium':
if v1list[1] <= 100 or v1list[1] > 1000:
continue
elif type == 'large':
if v1list[1] <= 1000:
continue
        if ccamap.get(k) is None:
continue
        # compare the number of inserted SWAP gates
v1 = v1list[3]
v2 = ccamap.get(k)[3]
pub_res += 1
tsatime += v1list[5]
ccatime += ccamap.get(k)[5]
gate_gql += v1
gate_top += v2
if v1 < v2:
greater_gql_top += 1
pro_gql_top += v2 - v1
elif (v2 < v1):
greater_top_gql += 1
pro_top_gql += v1 - v2
else:
eq_gql_top += 1
print("number of successful TSA_cca case:", len(ccamap), ", TSA_num case:", len(tsamap))
print("both successful case:", pub_res)
print("depth of gates inserted: TSA_cca:", gate_top, " TSA_num:", gate_gql)
print("number of case: TSA_cca < TSA_num: ", greater_top_gql, ", TSA_num < TSA_cca: ", greater_gql_top, ", equal: ",
eq_gql_top)
print("(TSA_cca-TSA_num)/TSA_cca:", (gate_top - gate_gql + 0.0) / gate_top * 100, "% ")
print('TSA time: %s, CCA time: %s' % (tsatime, ccatime))
def comparedepthDepth_TSA(depthpath, tsapath):
depthmap = read_tabu_files1(depthpath)
tsamap = read_tabu_files1(tsapath)
greater_top_gql = 0
greater_gql_top = 0
eq_gql_top = 0
gate_top = 0
gate_gql = 0
pub_res = 0
pro_top_gql = 0
pro_gql_top = 0
it = tsamap.keys()
for k in it:
v1list = tsamap.get(k)
        if depthmap.get(k) is None:
continue
        # compare the number of inserted SWAP gates
v1 = v1list[3]
v2 = depthmap.get(k)[3]
pub_res += 1
gate_gql += v1
gate_top += v2
if v1 < v2:
greater_gql_top += 1
pro_gql_top += v2 - v1
elif (v2 < v1):
greater_top_gql += 1
pro_top_gql += v1 - v2
else:
eq_gql_top += 1
print("number of successful TSA_depth case:", len(depthmap), ", TSA_num case:", len(tsamap))
print("both successful case:", pub_res)
print("depth of gates inserted: TSA_depth:", gate_top, " TSA_num:", gate_gql)
print("case: TSA_depth < TSA_num: ", greater_top_gql, ", TSA_num < TSA_depth: ", greater_gql_top, ", equal: ",
eq_gql_top)
print("(TSA_depth-TSA_num)/TSA_depth:", (gate_top - gate_gql + 0.0) / gate_top * 100, "% ")
def comparedepthCCA_Depth(ccapath, depthpath):
ccamap = read_tabu_files1(ccapath)
depthmap = read_tabu_files1(depthpath)
greater_top_gql = 0
greater_gql_top = 0
eq_gql_top = 0
gate_top = 0
gate_gql = 0
pub_res = 0
pro_top_gql = 0
pro_gql_top = 0
it = depthmap.keys()
for k in it:
v1list = depthmap.get(k)
        if ccamap.get(k) is None:
continue
        # compare the number of inserted SWAP gates
v1 = v1list[3]
v2 = ccamap.get(k)[3]
pub_res += 1
gate_gql += v1
gate_top += v2
if v1 < v2:
greater_gql_top += 1
pro_gql_top += v2 - v1
elif (v2 < v1):
greater_top_gql += 1
pro_top_gql += v1 - v2
else:
eq_gql_top += 1
print("number of successful TSA_cca case:", len(ccamap), " TSA_depth case:", len(depthmap))
print("both successful case:", pub_res)
print("depth of gates inserted: TSA_cca:", gate_top, " TSA_depth:", gate_gql)
print("case: TSA_cca < TSA_depth: ", greater_top_gql, ", TSA_depth < TSA_cca: ", greater_gql_top, ", equal: ",
eq_gql_top)
print("(TSA_depth-TSA_cca)/TSA_depth:", (gate_gql - gate_top + 0.0) / gate_gql * 100, "% ")
def compareGA_TSA(gamap,tsamap):
tsagate = 0
optmgate = 0
ori = 0
count = 0
tsagreat=0
optmgreat=0
eq=0
it = tsamap.keys()
for k in it:
optm = gamap.get(k)
        if optm is not None and optm[1][3] != 9999999:
count += 1
ori += tsamap.get(k)[1]
tsagate += tsamap.get(k)[4] * 3
optmgate += optm[1][3] * 3
if tsamap.get(k)[4]<optm[1][3]:
tsagreat+=1
elif tsamap.get(k)[4]>optm[1][3]:
optmgreat+=1
else:
eq+=1
    print("number of successful GA case:", len(gamap), ", TSA case:", len(tsamap))
    print("both successful case:", count)
    print("case: GA < TSA: ", optmgreat, ", TSA < GA: ", tsagreat, ", equal: ", eq)
    print("ORI: ", ori, " number of gates inserted: TSA_num: ", tsagate,
          "GA: ", optmgate)
    print("(GA-TSA_num)/GA: ", (optmgate - tsagate + 0.0) / optmgate)
#evaldepth ./results/test/tsa_ccamindepth ./results/test/tsa_depthmindepth ./results/test/tsamindepth
if __name__ == '__main__':
    if sys.argv[1] == 'best':
if sys.argv[2]!='':
selectTheBestResultFromFiles(sys.argv[2])
else:
print('please input the correct parameters')
    elif sys.argv[1] == 'minigate':
if sys.argv[2]!='' and sys.argv[3]!='' and sys.argv[4]!='':
selectTheMinimalGatesFromFiles(sys.argv[2],sys.argv[3],sys.argv[4])
else:
print('please input the correct parameters')
    elif sys.argv[1] == 'minidepth':
if sys.argv[2]!='' and sys.argv[3]!='' and sys.argv[4]!='':
selectTheMinimalDepthFromFiles(sys.argv[2],sys.argv[3],sys.argv[4])
else:
print('please input the correct parameters')
    elif sys.argv[1] == 'ini':
caculateTheAdjustOptm()
caculateTheAdjustSABRE()
caculateTheAdjustFiDSL()
caculateTheAdjustTSA()
caculateTheAdjustCCA()
    elif sys.argv[1] == 'adj':
caculateOptm_()
caculateSABRE_()
caculateFiDSL_()
caculateTSA_()
    elif sys.argv[1] == 'pairwise':
        # pairwise type sabre fidsl tsa cca
print("--------------------SABRE VS TSA_num--------------------")
compareSABRE_TSA(sys.argv[3], sys.argv[5], sys.argv[2])
print("--------------------FiDSL VS TSA_num--------------------")
compareFiDSL_TSA(sys.argv[4], sys.argv[5], sys.argv[2])
print("--------------------TSA_cca VS TSA_num--------------------")
comparenumCCA_TSA(sys.argv[6], sys.argv[5], sys.argv[2])
    elif sys.argv[1] == 'evalnum':
# eval cca depth tsa
print("--------------------TSA_cca VS TSA_num--------------------")
comparenumCCA_TSA(sys.argv[2], sys.argv[4])
print("--------------------TSA_depth VS TSA_num--------------------")
comparenumDepth_TSA(sys.argv[3], sys.argv[4])
print("--------------------TSA_cca VS TSA_depth--------------------")
comparenumCCA_Depth(sys.argv[2], sys.argv[3])
    elif sys.argv[1] == 'evaldepth':
# eval cca depth tsa
print("--------------------TSA_cca VS TSA_num--------------------")
comparedepthCCA_TSA(sys.argv[2], sys.argv[4])
print("--------------------TSA_depth VS TSA_num--------------------")
comparedepthDepth_TSA(sys.argv[3], sys.argv[4])
print("--------------------TSA_cca VS TSA_depth--------------------")
comparedepthCCA_Depth(sys.argv[2], sys.argv[3])
else:
        print('please input the correct parameters')
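The comparison routines above all report the same headline metric, e.g. `(SABRE-TSA)/SABRE`: the relative reduction in inserted gates achieved by one mapper over a baseline. A minimal standalone sketch of that computation (the function name `improvement_percent` is illustrative, not from the script):

```python
def improvement_percent(baseline_gates, candidate_gates):
    """Relative gate-count reduction of the candidate over the baseline, in percent.

    Mirrors expressions such as (gate_top - gate_gql + 0.0) / gate_top * 100
    used by the compare* routines above.
    """
    if baseline_gates == 0:
        raise ValueError("baseline gate count must be non-zero")
    return (baseline_gates - candidate_gates) / baseline_gates * 100.0


# e.g. the baseline inserted 200 gates, the candidate 150 on the same circuits
print(improvement_percent(200, 150))  # -> 25.0
```

A positive value means the candidate inserted fewer gates than the baseline; a negative value means it did worse.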
540ddce64801df803e11924e0f69af88ff491ad0 | 59 | py | Python | utime/preprocessing/dataset_preparation/sleep_edf_153/__init__.py | aluquecerp/U-Time | c792259825b57e49544684ce2997f3ac8db84c6e | ["MIT"]
from .download_sleep_edf_153 import download_sleep_edf_153
5888becc6b64300475ef8ccac6c4560d44a0855a | 83 | py | Python | notebook/pass_with_open.py | vhn0912/python-snippets | 80b2e1d6b2b8f12ae30d6dbe86d25bb2b3a02038 | ["MIT"]
with open('temp/empty.txt', 'w'):
pass
with open('temp/empty.txt', 'w'): pass
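Both forms of the snippet above create (and truncate) `temp/empty.txt`. An alternative sketch using `pathlib` — note the behavioral difference that, unlike `open(..., 'w')`, `Path.touch()` does not truncate a file that already has content:

```python
import tempfile
from pathlib import Path

tmp = Path(tempfile.mkdtemp())  # throwaway directory for the demo
path = tmp / 'empty.txt'
path.touch()                    # create the file if it does not exist
print(path.exists(), path.stat().st_size)  # -> True 0
```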
545d7b111f1ab15bb8b92eb6cfc0c652840bdc29 | 3612 | py | Python | tests/view/core/test_stylesheet.py | IonTeLOS/wasf | 2e77dd65afffbbf1545e9ced2296dcbd0ab3c8e4 | ["Zlib"]
from unittest import TestCase
from waffles import stylesheet
class StylesheetTest(TestCase):
def test__process_var_of_vars__it_should_remove_vars_pointing_to_themselves(self):
var_map = {
'abc': 'aaa',
'xxx': '@xxx'
}
stylesheet.process_var_of_vars(var_map)
self.assertEqual(1, len(var_map))
self.assertIn('abc', var_map)
self.assertEqual('aaa', var_map['abc'])
def test__process_var_of_vars__it_should_remove_vars_pointing_to_unknown_vars(self):
var_map = {
'abc': 'aaa',
'xxx': '@def'
}
stylesheet.process_var_of_vars(var_map)
self.assertEqual(1, len(var_map))
self.assertIn('abc', var_map)
self.assertEqual('aaa', var_map['abc'])
def test__process_var_of_vars__it_should_not_replace_invalid_expressions(self):
var_map = {
'abc': 'aaa',
'bcd': '@ xpto' # has a space between @ and 'xpto'
        }
        stylesheet.process_var_of_vars(var_map)
        self.assertEqual(2, len(var_map))
self.assertIn('abc', var_map)
self.assertEqual('aaa', var_map['abc'])
self.assertIn('bcd', var_map)
self.assertEqual('@ xpto', var_map['bcd'])
def test__process_var_of_vars__it_should_replace_value_at_first_iteration(self):
var_map = {
'abc': 'aaa',
'xxx': '@abc'
}
stylesheet.process_var_of_vars(var_map)
self.assertEqual(2, len(var_map))
self.assertIn('abc', var_map)
self.assertEqual('aaa', var_map['abc'])
self.assertIn('xxx', var_map)
self.assertEqual('aaa', var_map['xxx'])
def test__process_var_of_vars__it_should_replace_value_at_second_iteration(self):
var_map = {
'abc': 'aaa',
'def': '@abc',
'xxx': '@def'
}
stylesheet.process_var_of_vars(var_map)
self.assertEqual(3, len(var_map))
self.assertIn('abc', var_map)
self.assertEqual('aaa', var_map['abc'])
self.assertIn('def', var_map)
self.assertEqual('aaa', var_map['def'])
self.assertIn('xxx', var_map)
self.assertEqual('aaa', var_map['xxx'])
def test__process_var_of_vars__it_should_replace_value_at_third_iteration(self):
var_map = {
'abc': 'aaa',
'def': '@abc',
'fgh': '@def',
'xxx': '@fgh'
}
stylesheet.process_var_of_vars(var_map)
self.assertEqual(4, len(var_map))
self.assertIn('abc', var_map)
self.assertEqual('aaa', var_map['abc'])
self.assertIn('def', var_map)
self.assertEqual('aaa', var_map['def'])
self.assertIn('fgh', var_map)
self.assertEqual('aaa', var_map['fgh'])
self.assertIn('xxx', var_map)
self.assertEqual('aaa', var_map['xxx'])
def test__process_var_of_vars__it_should_replace_multiple_vars(self):
var_map = {
'abc': 'aaa',
'def': '@abc',
'fgh': 'bbb',
'ijk': '@fgh',
'lmn': '@ijk'
}
stylesheet.process_var_of_vars(var_map)
self.assertEqual(5, len(var_map))
self.assertIn('abc', var_map)
self.assertEqual('aaa', var_map['abc'])
self.assertIn('def', var_map)
self.assertEqual('aaa', var_map['def'])
self.assertIn('fgh', var_map)
self.assertEqual('bbb', var_map['fgh'])
self.assertIn('ijk', var_map)
self.assertEqual('bbb', var_map['ijk'])
self.assertIn('lmn', var_map)
self.assertEqual('bbb', var_map['lmn'])
| 30.871795 | 88 | 0.581118 | 449 | 3,612 | 4.311804 | 0.124722 | 0.173554 | 0.160124 | 0.260331 | 0.858988 | 0.841426 | 0.804752 | 0.764463 | 0.717975 | 0.652893 | 0 | 0.00267 | 0.274086 | 3,612 | 116 | 89 | 31.137931 | 0.735698 | 0.008859 | 0 | 0.612903 | 0 | 0 | 0.083566 | 0 | 0 | 0 | 0 | 0 | 0.462366 | 1 | 0.075269 | false | 0 | 0.021505 | 0 | 0.107527 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
548f91b27b5815858e9a71ef7d931e623f0525e7 | 505 | py | Python | utils/board.py | Mordrog/pyarcanoid | 0f5ef282edd2557e32bc4fc0aadd8c53ac825a33 | ["MIT"]
import settings
def board_left():
return settings.BOARD_POSITION[0]
def board_right():
return settings.BOARD_POSITION[0] + settings.BOARD_WIDTH
def board_top():
return settings.BOARD_POSITION[1]
def board_bottom():
return settings.BOARD_POSITION[1] + settings.BOARD_HEIGHT
def board_center_x():
    return settings.BOARD_POSITION[0] + settings.BOARD_WIDTH / 2

def board_center_y():
    return settings.BOARD_POSITION[1] + settings.BOARD_HEIGHT / 2
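A sketch of how these helpers compose, with a stand-in for the game's `settings` module (the numeric values and the `point_on_board` consumer are illustrative assumptions, not part of the repo):

```python
import types

# Stand-in for the game's `settings` module (assumed attribute names/values).
settings = types.SimpleNamespace(
    BOARD_POSITION=(40, 20),  # (x, y) of the board's top-left corner
    BOARD_WIDTH=800,
    BOARD_HEIGHT=600,
)

def board_left():
    return settings.BOARD_POSITION[0]

def board_right():
    return settings.BOARD_POSITION[0] + settings.BOARD_WIDTH

def board_top():
    return settings.BOARD_POSITION[1]

def board_bottom():
    return settings.BOARD_POSITION[1] + settings.BOARD_HEIGHT

def point_on_board(x, y):
    """Hypothetical consumer: is (x, y) inside the playfield rectangle?"""
    return board_left() <= x < board_right() and board_top() <= y < board_bottom()

print(point_on_board(100, 100), point_on_board(900, 100))  # -> True False
```

Keeping the geometry behind these accessors means a collision check never hard-codes board dimensions from `settings` directly.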
549c70e039532c8fe60d4bc6e168cc5ba6c45647 | 99568 | bzl | Python | third_party/xnnpack.buck.bzl | stungkit/pytorch | 0f05e398705bf15406bce79f7ee57d3935ad2abd | ["Intel"]
load("//tools/build_defs:glob_defs.bzl", "subdir_glob")
def define_xnnpack():
cxx_library(
name = "XNNPACK",
srcs = ["XNNPACK/src/allocator.c", "XNNPACK/src/init.c", "XNNPACK/src/memory-planner.c", "XNNPACK/src/operator-delete.c", "XNNPACK/src/runtime.c", "XNNPACK/src/subgraph.c", "XNNPACK/src/tensor.c", "XNNPACK/src/datatype-strings.c", "XNNPACK/src/operator-strings.c", "XNNPACK/src/subgraph-strings.c"],
deps = [":operators", ":subgraph", ":tables", ":ukernels_scalar", "//third_party:cpuinfo", "//third_party:pthreadpool", "//third_party:pthreadpool_header", ":arm_lib", ":x86_and_x86_64_lib"],
exported_deps = [],
compiler_flags = ["-w"],
preferred_linkage = "static",
exported_headers = {"xnnpack.h": "XNNPACK/include/xnnpack.h"},
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0", "-DXNN_NO_Q8_OPERATORS", "-DXNN_NO_F16_OPERATORS", "-DXNN_NO_NCHW_OPERATORS", "-DXNN_NO_QU8_OPERATORS", "-DXNN_NO_S8_OPERATORS", "-DXNN_NO_U8_OPERATORS", "-DXNN_NO_VCVT_OPERATORS", "-DXNN_NO_X32_OPERATORS", "-DXNN_NO_X8_OPERATORS", "-DXNN_NO_XX_OPERATORS"],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "ukernels_scalar",
srcs = ["XNNPACK/wrappers/params-init.c", "XNNPACK/wrappers/u8-lut32norm/scalar.c", "XNNPACK/wrappers/xx-copy/memcpy.c", "XNNPACK/wrappers/x8-lut/gen/lut-scalar-x4.c", "XNNPACK/wrappers/x32-depthtospace2d-chw2hwc/scalar.c"],
deps = [":interface", "//third_party:FP16", "//third_party:FXdiv"],
exported_deps = [],
compiler_flags = ["-w", "-O2"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0"],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "operators",
srcs = ["XNNPACK/src/operators/argmax-pooling-nhwc.c", "XNNPACK/src/operators/average-pooling-nhwc.c", "XNNPACK/src/operators/binary-elementwise-nd.c", "XNNPACK/src/operators/channel-shuffle-nc.c", "XNNPACK/src/operators/constant-pad-nd.c", "XNNPACK/src/operators/convolution-nchw.c", "XNNPACK/src/operators/convolution-nhwc.c", "XNNPACK/src/operators/deconvolution-nhwc.c", "XNNPACK/src/operators/depth-to-space-nchw2nhwc.c", "XNNPACK/src/operators/depth-to-space-nhwc.c", "XNNPACK/src/operators/fully-connected-nc.c", "XNNPACK/src/operators/global-average-pooling-ncw.c", "XNNPACK/src/operators/global-average-pooling-nwc.c", "XNNPACK/src/operators/lut-elementwise-nc.c", "XNNPACK/src/operators/max-pooling-nhwc.c", "XNNPACK/src/operators/prelu-nc.c", "XNNPACK/src/operators/resize-bilinear-nchw.c", "XNNPACK/src/operators/resize-bilinear-nhwc.c", "XNNPACK/src/operators/softmax-nc.c", "XNNPACK/src/operators/unary-elementwise-nc.c", "XNNPACK/src/operators/unpooling-nhwc.c", "XNNPACK/src/indirection.c", "XNNPACK/src/operator-run.c", "XNNPACK/src/packing.c"],
deps = [":interface", "//third_party:cpuinfo", "//third_party:FP16", "//third_party:FXdiv", "//third_party:clog"],
exported_deps = [],
compiler_flags = ["-w", "-Os"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0"],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "arm_lib",
srcs = [],
deps = [":jit_memory", ":ukernels_asm_aarch32", ":ukernels_asm_aarch64", ":ukernels_neon", ":ukernels_neon_aarch64", ":ukernels_neon_dot", ":ukernels_neon_fma", ":ukernels_neon_fp16", ":ukernels_neon_fp16arith_aarch64", ":ukernels_neon_v8", ":ukernels_scalar_aarch32"],
exported_deps = [],
compiler_flags = ["-w"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "third-party/XNNPACK",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
preprocessor_flags = [],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "x86_and_x86_64_lib",
srcs = [],
deps = [":ukernels_avx", ":ukernels_avx2", ":ukernels_avx512", ":ukernels_avx512skx", ":ukernels_f16c", ":ukernels_fma3", ":ukernels_sse", ":ukernels_sse2", ":ukernels_sse41", ":ukernels_ssse3", ":ukernels_xop"],
exported_deps = [],
compiler_flags = ["-w"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "third-party/XNNPACK",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
preprocessor_flags = [],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "tables",
srcs = ["XNNPACK/src/tables/exp2-k-over-64.c", "XNNPACK/src/tables/exp2-k-over-2048.c", "XNNPACK/src/tables/exp2minus-k-over-4.c", "XNNPACK/src/tables/exp2minus-k-over-8.c", "XNNPACK/src/tables/exp2minus-k-over-16.c", "XNNPACK/src/tables/exp2minus-k-over-64.c", "XNNPACK/src/tables/exp2minus-k-over-2048.c"],
deps = [":interface", "//third_party:FP16", "//third_party:FXdiv", "//third_party:clog"],
exported_deps = [],
compiler_flags = ["-w", "-O2"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0"],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "subgraph",
srcs = ["XNNPACK/src/subgraph/abs.c", "XNNPACK/src/subgraph/add2.c", "XNNPACK/src/subgraph/argmax-pooling-2d.c", "XNNPACK/src/subgraph/average-pooling-2d.c", "XNNPACK/src/subgraph/bankers-rounding.c", "XNNPACK/src/subgraph/ceiling.c", "XNNPACK/src/subgraph/clamp.c", "XNNPACK/src/subgraph/convert.c", "XNNPACK/src/subgraph/convolution-2d.c", "XNNPACK/src/subgraph/deconvolution-2d.c", "XNNPACK/src/subgraph/depth-to-space.c", "XNNPACK/src/subgraph/depthwise-convolution-2d.c", "XNNPACK/src/subgraph/divide.c", "XNNPACK/src/subgraph/elu.c", "XNNPACK/src/subgraph/floor.c", "XNNPACK/src/subgraph/fully-connected.c", "XNNPACK/src/subgraph/global-average-pooling-2d.c", "XNNPACK/src/subgraph/hardswish.c", "XNNPACK/src/subgraph/leaky-relu.c", "XNNPACK/src/subgraph/max-pooling-2d.c", "XNNPACK/src/subgraph/maximum2.c", "XNNPACK/src/subgraph/minimum2.c", "XNNPACK/src/subgraph/multiply2.c", "XNNPACK/src/subgraph/negate.c", "XNNPACK/src/subgraph/prelu.c", "XNNPACK/src/subgraph/sigmoid.c", "XNNPACK/src/subgraph/softmax.c", "XNNPACK/src/subgraph/square-root.c", "XNNPACK/src/subgraph/square.c", "XNNPACK/src/subgraph/squared-difference.c", "XNNPACK/src/subgraph/static-constant-pad.c", "XNNPACK/src/subgraph/static-reshape.c", "XNNPACK/src/subgraph/static-resize-bilinear-2d.c", "XNNPACK/src/subgraph/subtract.c", "XNNPACK/src/subgraph/unpooling-2d.c"],
deps = [":interface", "//third_party:FP16", "//third_party:FXdiv", "//third_party:clog"],
exported_deps = [],
compiler_flags = ["-w", "-O2"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0"],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "ukernels_avx512",
srcs = [],
deps = [":interface"],
exported_deps = [],
compiler_flags = ["-w", "-O2", "-mavx512f"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [["x86", ["-mavx512f"]]],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
platform_srcs = [["x86|x86_64|platform009", ["XNNPACK/wrappers/f32-dwconv/gen/up16x3-minmax-avx512f.c", "XNNPACK/wrappers/f32-dwconv/gen/up16x4-minmax-avx512f.c", "XNNPACK/wrappers/f32-dwconv/gen/up16x9-minmax-avx512f.c", "XNNPACK/wrappers/f32-dwconv/gen/up16x25-minmax-avx512f.c", "XNNPACK/wrappers/f32-gemm/gen/1x16-minmax-avx512f-broadcast.c", "XNNPACK/wrappers/f32-gemm/gen/7x16-minmax-avx512f-broadcast.c", "XNNPACK/wrappers/f32-igemm/gen/1x16-minmax-avx512f-broadcast.c", "XNNPACK/wrappers/f32-igemm/gen/7x16-minmax-avx512f-broadcast.c", "XNNPACK/wrappers/f32-prelu/gen/avx512f-2x16.c", "XNNPACK/wrappers/f32-vbinary/gen/vadd-minmax-avx512f-x32.c", "XNNPACK/wrappers/f32-vbinary/gen/vaddc-minmax-avx512f-x32.c", "XNNPACK/wrappers/f32-vbinary/gen/vdiv-minmax-avx512f-x32.c", "XNNPACK/wrappers/f32-vbinary/gen/vdivc-minmax-avx512f-x32.c", "XNNPACK/wrappers/f32-vbinary/gen/vmax-avx512f-x32.c", "XNNPACK/wrappers/f32-vbinary/gen/vmaxc-avx512f-x32.c", "XNNPACK/wrappers/f32-vbinary/gen/vmin-avx512f-x32.c", "XNNPACK/wrappers/f32-vbinary/gen/vminc-avx512f-x32.c", "XNNPACK/wrappers/f32-vbinary/gen/vmul-minmax-avx512f-x32.c", "XNNPACK/wrappers/f32-vbinary/gen/vmulc-minmax-avx512f-x32.c", "XNNPACK/wrappers/f32-vbinary/gen/vrdivc-minmax-avx512f-x32.c", "XNNPACK/wrappers/f32-vbinary/gen/vrsubc-minmax-avx512f-x32.c", "XNNPACK/wrappers/f32-vbinary/gen/vsqrdiff-avx512f-x32.c", "XNNPACK/wrappers/f32-vbinary/gen/vsqrdiffc-avx512f-x32.c", "XNNPACK/wrappers/f32-vbinary/gen/vsub-minmax-avx512f-x32.c", "XNNPACK/wrappers/f32-vbinary/gen/vsubc-minmax-avx512f-x32.c", "XNNPACK/wrappers/f32-vclamp/gen/vclamp-avx512f-x16.c", "XNNPACK/wrappers/f32-velu/gen/velu-avx512f-rr1-lut16-p3-perm-x64.c", "XNNPACK/wrappers/f32-vhswish/gen/vhswish-avx512f-x16.c", "XNNPACK/wrappers/f32-vlrelu/gen/vlrelu-avx512f-x16.c", "XNNPACK/wrappers/f32-vrnd/gen/vrndd-avx512f-x16.c", "XNNPACK/wrappers/f32-vrnd/gen/vrndne-avx512f-x16.c", "XNNPACK/wrappers/f32-vrnd/gen/vrndu-avx512f-x16.c", 
"XNNPACK/wrappers/f32-vrnd/gen/vrndz-avx512f-x16.c", "XNNPACK/wrappers/f32-vsigmoid/gen/vsigmoid-avx512f-rr2-lut32-p2-perm2-scalef-div-x64.c", "XNNPACK/wrappers/f32-vunary/gen/vabs-avx512f-x16.c", "XNNPACK/wrappers/f32-vunary/gen/vneg-avx512f-x16.c", "XNNPACK/wrappers/f32-vunary/gen/vsqr-avx512f-x16.c"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0"],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "ukernels_neon_fp16arith_aarch64",
srcs = ["XNNPACK/wrappers/f16-dwconv/gen/up8x25-minmax-neonfp16arith-acc2.c", "XNNPACK/wrappers/f16-dwconv/gen/up16x3-minmax-neonfp16arith.c", "XNNPACK/wrappers/f16-dwconv/gen/up16x4-minmax-neonfp16arith.c", "XNNPACK/wrappers/f16-dwconv/gen/up16x9-minmax-neonfp16arith.c", "XNNPACK/wrappers/f16-gavgpool/gen/7p7x-minmax-neonfp16arith-c8.c", "XNNPACK/wrappers/f16-gavgpool/gen/7x-minmax-neonfp16arith-c8.c", "XNNPACK/wrappers/f16-gemm/gen/1x16-minmax-neonfp16arith-ld64.c", "XNNPACK/wrappers/f16-gemm/gen/6x16-minmax-neonfp16arith-ld64.c", "XNNPACK/wrappers/f16-ibilinear/gen/neonfp16arith-c8.c", "XNNPACK/wrappers/f16-igemm/gen/1x16-minmax-neonfp16arith-ld64.c", "XNNPACK/wrappers/f16-igemm/gen/6x16-minmax-neonfp16arith-ld64.c", "XNNPACK/wrappers/f16-maxpool/9p8x-minmax-neonfp16arith-c8.c", "XNNPACK/wrappers/f16-prelu/gen/neonfp16arith-2x16.c", "XNNPACK/wrappers/f16-vbinary/gen/vadd-minmax-neonfp16arith-x16.c", "XNNPACK/wrappers/f16-vbinary/gen/vaddc-minmax-neonfp16arith-x16.c", "XNNPACK/wrappers/f16-vbinary/gen/vmul-minmax-neonfp16arith-x16.c", "XNNPACK/wrappers/f16-vbinary/gen/vmulc-minmax-neonfp16arith-x16.c", "XNNPACK/wrappers/f16-vclamp/gen/vclamp-neonfp16arith-x16.c", "XNNPACK/wrappers/f16-vhswish/gen/vhswish-neonfp16arith-x16.c", "XNNPACK/wrappers/f16-vlrelu/gen/vlrelu-neonfp16arith-x16.c", "XNNPACK/wrappers/f16-vmulcaddc/gen/c8-minmax-neonfp16arith-2x.c"],
deps = [":interface", "//third_party:FP16"],
exported_deps = [],
compiler_flags = ["-w", "-O2"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [["(aarch64|arm64)", ["-march=armv8.2-a+fp16"]]],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0"],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "ukernels_avx",
srcs = [],
deps = [":interface"],
exported_deps = [],
compiler_flags = ["-w", "-O2", "-mavx"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [["x86", ["-mavx"]]],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
platform_srcs = [["x86|x86_64|platform009", ["XNNPACK/wrappers/f16-f32-vcvt/gen/vcvt-avx-int16-x16.c", "XNNPACK/wrappers/f32-dwconv/gen/up8x25-minmax-avx.c", "XNNPACK/wrappers/f32-dwconv/gen/up16x3-minmax-avx.c", "XNNPACK/wrappers/f32-dwconv/gen/up16x4-minmax-avx.c", "XNNPACK/wrappers/f32-dwconv/gen/up16x9-minmax-avx.c", "XNNPACK/wrappers/f32-f16-vcvt/gen/vcvt-avx-x24.c", "XNNPACK/wrappers/f32-gemm/gen/1x16-minmax-avx-broadcast.c", "XNNPACK/wrappers/f32-gemm/gen/5x16-minmax-avx-broadcast.c", "XNNPACK/wrappers/f32-igemm/gen/1x16-minmax-avx-broadcast.c", "XNNPACK/wrappers/f32-igemm/gen/5x16-minmax-avx-broadcast.c", "XNNPACK/wrappers/f32-prelu/gen/avx-2x16.c", "XNNPACK/wrappers/f32-qs8-vcvt/gen/vcvt-avx-x32.c", "XNNPACK/wrappers/f32-qu8-vcvt/gen/vcvt-avx-x32.c", "XNNPACK/wrappers/f32-vbinary/gen/vadd-minmax-avx-x16.c", "XNNPACK/wrappers/f32-vbinary/gen/vaddc-minmax-avx-x16.c", "XNNPACK/wrappers/f32-vbinary/gen/vdiv-minmax-avx-x16.c", "XNNPACK/wrappers/f32-vbinary/gen/vdivc-minmax-avx-x16.c", "XNNPACK/wrappers/f32-vbinary/gen/vmax-avx-x16.c", "XNNPACK/wrappers/f32-vbinary/gen/vmaxc-avx-x16.c", "XNNPACK/wrappers/f32-vbinary/gen/vmin-avx-x16.c", "XNNPACK/wrappers/f32-vbinary/gen/vminc-avx-x16.c", "XNNPACK/wrappers/f32-vbinary/gen/vmul-minmax-avx-x16.c", "XNNPACK/wrappers/f32-vbinary/gen/vmulc-minmax-avx-x16.c", "XNNPACK/wrappers/f32-vbinary/gen/vrdivc-minmax-avx-x16.c", "XNNPACK/wrappers/f32-vbinary/gen/vrsubc-minmax-avx-x16.c", "XNNPACK/wrappers/f32-vbinary/gen/vsqrdiff-avx-x16.c", "XNNPACK/wrappers/f32-vbinary/gen/vsqrdiffc-avx-x16.c", "XNNPACK/wrappers/f32-vbinary/gen/vsub-minmax-avx-x16.c", "XNNPACK/wrappers/f32-vbinary/gen/vsubc-minmax-avx-x16.c", "XNNPACK/wrappers/f32-vclamp/gen/vclamp-avx-x16.c", "XNNPACK/wrappers/f32-velu/gen/velu-avx-rr2-lut4-p4-perm-x32.c", "XNNPACK/wrappers/f32-vhswish/gen/vhswish-avx-x16.c", "XNNPACK/wrappers/f32-vlrelu/gen/vlrelu-avx-x16.c", "XNNPACK/wrappers/f32-vrnd/gen/vrndd-avx-x16.c", "XNNPACK/wrappers/f32-vrnd/gen/vrndne-avx-x16.c", 
"XNNPACK/wrappers/f32-vrnd/gen/vrndu-avx-x16.c", "XNNPACK/wrappers/f32-vrnd/gen/vrndz-avx-x16.c", "XNNPACK/wrappers/f32-vsigmoid/gen/vsigmoid-avx-rr2-p5-nr2-x40.c", "XNNPACK/wrappers/f32-vsqrt/gen/avx-sqrt-x8.c", "XNNPACK/wrappers/f32-vunary/gen/vabs-avx-x16.c", "XNNPACK/wrappers/f32-vunary/gen/vneg-avx-x16.c", "XNNPACK/wrappers/f32-vunary/gen/vsqr-avx-x16.c", "XNNPACK/wrappers/qc8-dwconv/gen/up16x9-minmax-fp32-avx-mul16-add16.c", "XNNPACK/wrappers/qc8-dwconv/gen/up16x25-minmax-fp32-avx-mul16-add16.c", "XNNPACK/wrappers/qc8-gemm/gen/1x4c8-minmax-fp32-avx-ld128.c", "XNNPACK/wrappers/qc8-gemm/gen/2x4c8-minmax-fp32-avx-ld128.c", "XNNPACK/wrappers/qc8-igemm/gen/1x4c8-minmax-fp32-avx-ld128.c", "XNNPACK/wrappers/qc8-igemm/gen/2x4c8-minmax-fp32-avx-ld128.c", "XNNPACK/wrappers/qs8-dwconv/gen/up16x9-minmax-fp32-avx-mul16-add16.c", "XNNPACK/wrappers/qs8-dwconv/gen/up16x25-minmax-fp32-avx-mul16-add16.c", "XNNPACK/wrappers/qs8-f32-vcvt/gen/vcvt-avx-x32.c", "XNNPACK/wrappers/qs8-gemm/gen/1x4c8-minmax-fp32-avx-ld128.c", "XNNPACK/wrappers/qs8-gemm/gen/2x4c8-minmax-fp32-avx-ld128.c", "XNNPACK/wrappers/qs8-igemm/gen/1x4c8-minmax-fp32-avx-ld128.c", "XNNPACK/wrappers/qs8-igemm/gen/2x4c8-minmax-fp32-avx-ld128.c", "XNNPACK/wrappers/qs8-vadd/gen/minmax-avx-mul32-ld32-x8.c", "XNNPACK/wrappers/qs8-vaddc/gen/minmax-avx-mul32-ld32-x8.c", "XNNPACK/wrappers/qs8-vmul/gen/minmax-fp32-avx-mul16-ld64-x16.c", "XNNPACK/wrappers/qs8-vmulc/gen/minmax-fp32-avx-mul16-ld64-x16.c", "XNNPACK/wrappers/qu8-dwconv/gen/up16x9-minmax-fp32-avx-mul16.c", "XNNPACK/wrappers/qu8-dwconv/gen/up16x25-minmax-fp32-avx-mul16.c", "XNNPACK/wrappers/qu8-f32-vcvt/gen/vcvt-avx-x32.c", "XNNPACK/wrappers/qu8-gemm/gen/1x4c8-minmax-fp32-avx-ld128.c", "XNNPACK/wrappers/qu8-gemm/gen/2x4c8-minmax-fp32-avx-ld128.c", "XNNPACK/wrappers/qu8-igemm/gen/1x4c8-minmax-fp32-avx-ld128.c", "XNNPACK/wrappers/qu8-igemm/gen/2x4c8-minmax-fp32-avx-ld128.c", "XNNPACK/wrappers/qu8-vadd/gen/minmax-avx-mul32-ld32-x8.c", 
"XNNPACK/wrappers/qu8-vaddc/gen/minmax-avx-mul32-ld32-x8.c", "XNNPACK/wrappers/qu8-vmul/gen/minmax-fp32-avx-mul16-ld64-x16.c", "XNNPACK/wrappers/qu8-vmulc/gen/minmax-fp32-avx-mul16-ld64-x16.c", "XNNPACK/wrappers/x8-lut/gen/lut-avx-x64.c"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0"],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "ukernels_sse41",
srcs = [],
deps = [":interface", "//third_party:FP16"],
exported_deps = [],
compiler_flags = ["-w", "-O2"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [["x86", ["-msse4.1"]]],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
platform_srcs = [["x86|x86_64|platform009", ["XNNPACK/wrappers/f16-f32-vcvt/gen/vcvt-sse41-int16-x16.c", "XNNPACK/wrappers/f32-f16-vcvt/gen/vcvt-sse41-x8.c", "XNNPACK/wrappers/f32-prelu/gen/sse41-2x8.c", "XNNPACK/wrappers/f32-qs8-vcvt/gen/vcvt-sse41-x32.c", "XNNPACK/wrappers/f32-vlrelu/gen/vlrelu-sse41-x8.c", "XNNPACK/wrappers/f32-vrnd/gen/vrndd-sse41-x8.c", "XNNPACK/wrappers/f32-vrnd/gen/vrndne-sse41-x8.c", "XNNPACK/wrappers/f32-vrnd/gen/vrndu-sse41-x8.c", "XNNPACK/wrappers/f32-vrnd/gen/vrndz-sse41-x8.c", "XNNPACK/wrappers/f32-vsigmoid/gen/vsigmoid-sse41-rr2-lut64-p2-div-x8.c", "XNNPACK/wrappers/qc8-dwconv/gen/up8x9-minmax-fp32-sse41-mul16.c", "XNNPACK/wrappers/qc8-dwconv/gen/up8x25-minmax-fp32-sse41-mul16.c", "XNNPACK/wrappers/qc8-gemm/gen/1x4c8-minmax-fp32-sse41-ld64.c", "XNNPACK/wrappers/qc8-gemm/gen/3x4c8-minmax-fp32-sse41-ld64.c", "XNNPACK/wrappers/qc8-igemm/gen/1x4c8-minmax-fp32-sse41-ld64.c", "XNNPACK/wrappers/qc8-igemm/gen/3x4c8-minmax-fp32-sse41-ld64.c", "XNNPACK/wrappers/qs8-dwconv/gen/up8x9-minmax-fp32-sse41-mul16-add16.c", "XNNPACK/wrappers/qs8-dwconv/gen/up8x25-minmax-fp32-sse41-mul16-add16.c", "XNNPACK/wrappers/qs8-f32-vcvt/gen/vcvt-sse41-x16.c", "XNNPACK/wrappers/qs8-gavgpool/gen/7p7x-minmax-fp32-sse41-c8.c", "XNNPACK/wrappers/qs8-gavgpool/gen/7x-minmax-fp32-sse41-c8.c", "XNNPACK/wrappers/qs8-gemm/gen/1x4c8-minmax-fp32-sse41-ld64.c", "XNNPACK/wrappers/qs8-gemm/gen/3x4c8-minmax-fp32-sse41-ld64.c", "XNNPACK/wrappers/qs8-igemm/gen/1x4c8-minmax-fp32-sse41-ld64.c", "XNNPACK/wrappers/qs8-igemm/gen/3x4c8-minmax-fp32-sse41-ld64.c", "XNNPACK/wrappers/qs8-vadd/gen/minmax-sse41-mul16-ld64-x8.c", "XNNPACK/wrappers/qs8-vaddc/gen/minmax-sse41-mul16-ld64-x8.c", "XNNPACK/wrappers/qs8-vmul/gen/minmax-fp32-sse41-mul16-ld64-x16.c", "XNNPACK/wrappers/qs8-vmulc/gen/minmax-fp32-sse41-mul16-ld64-x16.c", "XNNPACK/wrappers/qu8-dwconv/gen/up8x9-minmax-fp32-sse41-mul16.c", "XNNPACK/wrappers/qu8-dwconv/gen/up8x25-minmax-fp32-sse41-mul16.c", 
"XNNPACK/wrappers/qu8-f32-vcvt/gen/vcvt-sse41-x16.c", "XNNPACK/wrappers/qu8-gavgpool/gen/7p7x-minmax-fp32-sse41-c8.c", "XNNPACK/wrappers/qu8-gavgpool/gen/7x-minmax-fp32-sse41-c8.c", "XNNPACK/wrappers/qu8-gemm/gen/1x4c8-minmax-fp32-sse41-ld64.c", "XNNPACK/wrappers/qu8-gemm/gen/3x4c8-minmax-fp32-sse41-ld64.c", "XNNPACK/wrappers/qu8-igemm/gen/1x4c8-minmax-fp32-sse41-ld64.c", "XNNPACK/wrappers/qu8-igemm/gen/3x4c8-minmax-fp32-sse41-ld64.c", "XNNPACK/wrappers/qu8-vadd/gen/minmax-sse41-mul16-ld64-x8.c", "XNNPACK/wrappers/qu8-vaddc/gen/minmax-sse41-mul16-ld64-x8.c", "XNNPACK/wrappers/qu8-vmul/gen/minmax-fp32-sse41-mul16-ld64-x16.c", "XNNPACK/wrappers/qu8-vmulc/gen/minmax-fp32-sse41-mul16-ld64-x16.c", "XNNPACK/wrappers/s8-ibilinear/gen/sse41-c16.c", "XNNPACK/wrappers/s8-maxpool/9p8x-minmax-sse41-c16.c", "XNNPACK/wrappers/s8-vclamp/sse41-x64.c", "XNNPACK/wrappers/u8-ibilinear/gen/sse41-c16.c"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0"],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "ukernels_neon",
srcs = ["XNNPACK/wrappers/f16-f32-vcvt/gen/vcvt-neon-int16-x16.c", "XNNPACK/wrappers/f32-argmaxpool/4x-neon-c4.c", "XNNPACK/wrappers/f32-argmaxpool/9p8x-neon-c4.c", "XNNPACK/wrappers/f32-argmaxpool/9x-neon-c4.c", "XNNPACK/wrappers/f32-avgpool/9p8x-minmax-neon-c4.c", "XNNPACK/wrappers/f32-avgpool/9x-minmax-neon-c4.c", "XNNPACK/wrappers/f32-conv-hwc2chw/3x3s2p1c3x4-neon-2x2.c", "XNNPACK/wrappers/f32-dwconv/gen/up8x3-minmax-neon.c", "XNNPACK/wrappers/f32-dwconv/gen/up8x4-minmax-neon.c", "XNNPACK/wrappers/f32-dwconv/gen/up8x9-minmax-neon.c", "XNNPACK/wrappers/f32-dwconv/gen/up8x25-minmax-neon-acc2.c", "XNNPACK/wrappers/f32-dwconv2d-chw/gen/3x3p1-minmax-neon-2x4.c", "XNNPACK/wrappers/f32-dwconv2d-chw/gen/3x3s2p1-minmax-neon-1x4.c", "XNNPACK/wrappers/f32-dwconv2d-chw/gen/5x5p2-minmax-neon-1x4.c", "XNNPACK/wrappers/f32-dwconv2d-chw/gen/5x5s2p2-minmax-neon-1x4.c", "XNNPACK/wrappers/f32-f16-vcvt/gen/vcvt-neon-x8.c", "XNNPACK/wrappers/f32-gavgpool-cw/neon-x4.c", "XNNPACK/wrappers/f32-gavgpool/7p7x-minmax-neon-c4.c", "XNNPACK/wrappers/f32-gavgpool/7x-minmax-neon-c4.c", "XNNPACK/wrappers/f32-gemm/gen/1x8-minmax-neon-lane-ld64.c", "XNNPACK/wrappers/f32-gemm/gen/4x2-minmax-neon-lane-ld64.c", "XNNPACK/wrappers/f32-gemm/gen/4x8-minmax-neon-lane-ld64.c", "XNNPACK/wrappers/f32-gemm/gen/4x8-minmax-neon-lane-ld128.c", "XNNPACK/wrappers/f32-ibilinear-chw/gen/neon-p8.c", "XNNPACK/wrappers/f32-ibilinear/gen/neon-c8.c", "XNNPACK/wrappers/f32-igemm/gen/1x8-minmax-neon-lane-ld64.c", "XNNPACK/wrappers/f32-igemm/gen/4x2-minmax-neon-lane-ld64.c", "XNNPACK/wrappers/f32-igemm/gen/4x8-minmax-neon-lane-ld64.c", "XNNPACK/wrappers/f32-igemm/gen/4x8-minmax-neon-lane-ld128.c", "XNNPACK/wrappers/f32-maxpool/9p8x-minmax-neon-c4.c", "XNNPACK/wrappers/f32-pavgpool/9p8x-minmax-neon-c4.c", "XNNPACK/wrappers/f32-pavgpool/9x-minmax-neon-c4.c", "XNNPACK/wrappers/f32-prelu/gen/neon-2x8.c", "XNNPACK/wrappers/f32-qs8-vcvt/gen/vcvt-neon-x32.c", "XNNPACK/wrappers/f32-qu8-vcvt/gen/vcvt-neon-x32.c", 
"XNNPACK/wrappers/f32-raddstoreexpminusmax/gen/neon-rr2-lut64-p2-x8.c", "XNNPACK/wrappers/f32-rmax/neon.c", "XNNPACK/wrappers/f32-spmm/gen/32x1-minmax-neon.c", "XNNPACK/wrappers/f32-vbinary/gen/vadd-minmax-neon-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vaddc-minmax-neon-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vmax-neon-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vmaxc-neon-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vmin-neon-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vminc-neon-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vmul-minmax-neon-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vmulc-minmax-neon-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vrsubc-minmax-neon-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vsqrdiff-neon-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vsqrdiffc-neon-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vsub-minmax-neon-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vsubc-minmax-neon-x8.c", "XNNPACK/wrappers/f32-vclamp/gen/vclamp-neon-x8.c", "XNNPACK/wrappers/f32-velu/gen/velu-neon-rr2-lut16-p3-x8.c", "XNNPACK/wrappers/f32-vhswish/gen/vhswish-neon-x16.c", "XNNPACK/wrappers/f32-vlrelu/gen/vlrelu-neon-x8.c", "XNNPACK/wrappers/f32-vmulcaddc/gen/c4-minmax-neon-2x.c", "XNNPACK/wrappers/f32-vrnd/gen/vrndd-neon-x8.c", "XNNPACK/wrappers/f32-vrnd/gen/vrndne-neon-x8.c", "XNNPACK/wrappers/f32-vrnd/gen/vrndu-neon-x8.c", "XNNPACK/wrappers/f32-vrnd/gen/vrndz-neon-x8.c", "XNNPACK/wrappers/f32-vsigmoid/gen/vsigmoid-neon-rr2-lut64-p2-nr2recps-x8.c", "XNNPACK/wrappers/f32-vunary/gen/vabs-neon-x8.c", "XNNPACK/wrappers/f32-vunary/gen/vneg-neon-x8.c", "XNNPACK/wrappers/f32-vunary/gen/vsqr-neon-x8.c", "XNNPACK/wrappers/qc8-dwconv/gen/up8x25-minmax-fp32-neon-mla8-ld64.c", "XNNPACK/wrappers/qc8-dwconv/gen/up16x9-minmax-fp32-neon-mla8-ld64.c", "XNNPACK/wrappers/qc8-dwconv/gen/up16x25-minmax-fp32-neon-mla8-ld64.c", "XNNPACK/wrappers/qc8-gemm/gen/1x8c2s4-minmax-fp32-neon-mlal.c", "XNNPACK/wrappers/qc8-gemm/gen/2x8c2s4-minmax-fp32-neon-mlal.c", 
"XNNPACK/wrappers/qc8-igemm/gen/1x8c2s4-minmax-fp32-neon-mlal.c", "XNNPACK/wrappers/qc8-igemm/gen/2x8c2s4-minmax-fp32-neon-mlal.c", "XNNPACK/wrappers/qs8-dwconv/gen/up8x25-minmax-rndnu-neon-mla8-ld64.c", "XNNPACK/wrappers/qs8-dwconv/gen/up16x9-minmax-rndnu-neon-mla8-ld64.c", "XNNPACK/wrappers/qs8-dwconv/gen/up16x25-minmax-rndnu-neon-mla8-ld64.c", "XNNPACK/wrappers/qs8-f32-vcvt/gen/vcvt-neon-x32.c", "XNNPACK/wrappers/qs8-gavgpool/gen/7p7x-minmax-rndnu-neon-c8.c", "XNNPACK/wrappers/qs8-gavgpool/gen/7x-minmax-rndnu-neon-c8.c", "XNNPACK/wrappers/qs8-gemm/gen/1x8-minmax-rndnu-neon-mlal-lane.c", "XNNPACK/wrappers/qs8-gemm/gen/1x8c2s4-minmax-rndnu-neon-mlal.c", "XNNPACK/wrappers/qs8-gemm/gen/1x16-minmax-rndnu-neon-mlal-lane.c", "XNNPACK/wrappers/qs8-gemm/gen/2x8c2s4-minmax-rndnu-neon-mlal.c", "XNNPACK/wrappers/qs8-igemm/gen/1x8-minmax-rndnu-neon-mlal-lane.c", "XNNPACK/wrappers/qs8-igemm/gen/1x8c2s4-minmax-rndnu-neon-mlal.c", "XNNPACK/wrappers/qs8-igemm/gen/1x16-minmax-rndnu-neon-mlal-lane.c", "XNNPACK/wrappers/qs8-igemm/gen/2x8c2s4-minmax-rndnu-neon-mlal.c", "XNNPACK/wrappers/qs8-vadd/gen/minmax-neon-ld64-x16.c", "XNNPACK/wrappers/qs8-vadd/gen/minmax-neon-ld64-x32.c", "XNNPACK/wrappers/qs8-vaddc/gen/minmax-neon-ld64-x16.c", "XNNPACK/wrappers/qs8-vaddc/gen/minmax-neon-ld64-x32.c", "XNNPACK/wrappers/qs8-vmul/gen/minmax-rndnu-neon-ld64-x16.c", "XNNPACK/wrappers/qs8-vmulc/gen/minmax-rndnu-neon-ld64-x16.c", "XNNPACK/wrappers/qu8-avgpool/9p8x-minmax-neon-c8.c", "XNNPACK/wrappers/qu8-avgpool/9x-minmax-neon-c8.c", "XNNPACK/wrappers/qu8-dwconv/gen/up8x25-minmax-rndnu-neon-mul8.c", "XNNPACK/wrappers/qu8-dwconv/gen/up16x9-minmax-rndnu-neon-mul8.c", "XNNPACK/wrappers/qu8-f32-vcvt/gen/vcvt-neon-x32.c", "XNNPACK/wrappers/qu8-gavgpool/gen/7p7x-minmax-rndnu-neon-c8.c", "XNNPACK/wrappers/qu8-gavgpool/gen/7x-minmax-rndnu-neon-c8.c", "XNNPACK/wrappers/qu8-gemm/gen/1x8-minmax-rndnu-neon-mlal-lane.c", "XNNPACK/wrappers/qu8-gemm/gen/1x16-minmax-rndnu-neon-mlal-lane.c", 
"XNNPACK/wrappers/qu8-gemm/gen/3x8-minmax-rndnu-neon-mlal-lane.c", "XNNPACK/wrappers/qu8-gemm/gen/4x16-minmax-rndnu-neon-mlal-lane.c", "XNNPACK/wrappers/qu8-igemm/gen/1x8-minmax-rndnu-neon-mlal-lane.c", "XNNPACK/wrappers/qu8-igemm/gen/1x16-minmax-rndnu-neon-mlal-lane.c", "XNNPACK/wrappers/qu8-igemm/gen/3x8-minmax-rndnu-neon-mlal-lane.c", "XNNPACK/wrappers/qu8-igemm/gen/4x16-minmax-rndnu-neon-mlal-lane.c", "XNNPACK/wrappers/qu8-vadd/gen/minmax-neon-ld64-x16.c", "XNNPACK/wrappers/qu8-vadd/gen/minmax-neon-ld64-x32.c", "XNNPACK/wrappers/qu8-vaddc/gen/minmax-neon-ld64-x16.c", "XNNPACK/wrappers/qu8-vaddc/gen/minmax-neon-ld64-x32.c", "XNNPACK/wrappers/qu8-vmul/gen/minmax-rndnu-neon-ld64-x16.c", "XNNPACK/wrappers/qu8-vmulc/gen/minmax-rndnu-neon-ld64-x16.c", "XNNPACK/wrappers/s8-ibilinear/gen/neon-c8.c", "XNNPACK/wrappers/s8-ibilinear/gen/neon-c16.c", "XNNPACK/wrappers/s8-maxpool/9p8x-minmax-neon-c16.c", "XNNPACK/wrappers/s8-vclamp/neon-x64.c", "XNNPACK/wrappers/u8-ibilinear/gen/neon-c8.c", "XNNPACK/wrappers/u8-ibilinear/gen/neon-c16.c", "XNNPACK/wrappers/u8-maxpool/9p8x-minmax-neon-c16.c", "XNNPACK/wrappers/u8-rmax/neon.c", "XNNPACK/wrappers/u8-vclamp/neon-x64.c", "XNNPACK/wrappers/xx-fill/neon-x64.c", "XNNPACK/wrappers/xx-pad/neon.c", "XNNPACK/wrappers/x8-zip/xm-neon.c", "XNNPACK/wrappers/x8-zip/x2-neon.c", "XNNPACK/wrappers/x8-zip/x3-neon.c", "XNNPACK/wrappers/x8-zip/x4-neon.c", "XNNPACK/wrappers/x32-packx/x4-neon-st4.c", "XNNPACK/wrappers/x32-unpool/neon.c", "XNNPACK/wrappers/x32-zip/xm-neon.c", "XNNPACK/wrappers/x32-zip/x2-neon.c", "XNNPACK/wrappers/x32-zip/x3-neon.c", "XNNPACK/wrappers/x32-zip/x4-neon.c"],
deps = [":interface", "//third_party:FP16"],
exported_deps = [],
compiler_flags = ["-w", "-O2"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [["^(android-armv7|iphoneos-armv7)$", ["-march=armv7-a", "-mfpu=neon", "-mfloat-abi=softfp"]]],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0"],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "ukernels_neon_dot",
srcs = [],
deps = [":interface", "//third_party:FP16"],
exported_deps = [],
compiler_flags = ["-w", "-O2"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [["(aarch64|arm64)", ["-march=armv8.2-a+dotprod"]], ["^android-armv7$", ["-march=armv8.2-a+dotprod", "-mfpu=neon-fp-armv8", "-mfloat-abi=softfp"]]],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
platform_srcs = [["^((?!iphoneos-armv7).)*$", ["XNNPACK/wrappers/qc8-gemm/gen/1x8c4-minmax-fp32-neondot.c", "XNNPACK/wrappers/qc8-gemm/gen/1x16c4-minmax-fp32-neondot.c", "XNNPACK/wrappers/qc8-gemm/gen/4x8c4-minmax-fp32-neondot.c", "XNNPACK/wrappers/qc8-gemm/gen/4x16c4-minmax-fp32-neondot.c", "XNNPACK/wrappers/qc8-igemm/gen/1x8c4-minmax-fp32-neondot.c", "XNNPACK/wrappers/qc8-igemm/gen/1x16c4-minmax-fp32-neondot.c", "XNNPACK/wrappers/qc8-igemm/gen/4x8c4-minmax-fp32-neondot.c", "XNNPACK/wrappers/qc8-igemm/gen/4x16c4-minmax-fp32-neondot.c", "XNNPACK/wrappers/qs8-gemm/gen/1x8c4-minmax-rndnu-neondot.c", "XNNPACK/wrappers/qs8-gemm/gen/1x16c4-minmax-rndnu-neondot.c", "XNNPACK/wrappers/qs8-gemm/gen/4x8c4-minmax-rndnu-neondot.c", "XNNPACK/wrappers/qs8-gemm/gen/4x16c4-minmax-rndnu-neondot.c", "XNNPACK/wrappers/qs8-igemm/gen/1x8c4-minmax-rndnu-neondot.c", "XNNPACK/wrappers/qs8-igemm/gen/1x16c4-minmax-rndnu-neondot.c", "XNNPACK/wrappers/qs8-igemm/gen/4x8c4-minmax-rndnu-neondot.c", "XNNPACK/wrappers/qs8-igemm/gen/4x16c4-minmax-rndnu-neondot.c", "XNNPACK/wrappers/qu8-gemm/gen/1x8c4-minmax-rndnu-neondot.c", "XNNPACK/wrappers/qu8-gemm/gen/1x16c4-minmax-rndnu-neondot.c", "XNNPACK/wrappers/qu8-gemm/gen/4x8c4-minmax-rndnu-neondot.c", "XNNPACK/wrappers/qu8-gemm/gen/4x16c4-minmax-rndnu-neondot.c", "XNNPACK/wrappers/qu8-igemm/gen/1x8c4-minmax-rndnu-neondot.c", "XNNPACK/wrappers/qu8-igemm/gen/1x16c4-minmax-rndnu-neondot.c", "XNNPACK/wrappers/qu8-igemm/gen/4x8c4-minmax-rndnu-neondot.c", "XNNPACK/wrappers/qu8-igemm/gen/4x16c4-minmax-rndnu-neondot.c"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0"],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "ukernels_neon_aarch64",
srcs = ["XNNPACK/wrappers/f32-conv-hwc2chw/3x3s2p1c3x4-neonfma-2x2.c", "XNNPACK/wrappers/f32-dwconv2d-chw/gen/3x3p1-minmax-neonfma-3x4.c", "XNNPACK/wrappers/f32-dwconv2d-chw/gen/3x3s2p1-minmax-neonfma-2x4-acc2.c", "XNNPACK/wrappers/f32-dwconv2d-chw/gen/5x5p2-minmax-neonfma-4x4.c", "XNNPACK/wrappers/f32-dwconv2d-chw/gen/5x5s2p2-minmax-neonfma-1x4-acc2.c", "XNNPACK/wrappers/f32-gemm/gen/1x8-minmax-neonfma-lane-ld64.c", "XNNPACK/wrappers/f32-gemm/gen/4x2-minmax-neonfma-lane-ld64.c", "XNNPACK/wrappers/f32-gemm/gen/6x8-minmax-neonfma-lane-ld64.c", "XNNPACK/wrappers/f32-igemm/gen/1x8-minmax-neonfma-lane-ld64.c", "XNNPACK/wrappers/f32-igemm/gen/4x2-minmax-neonfma-lane-ld64.c", "XNNPACK/wrappers/f32-igemm/gen/6x8-minmax-neonfma-lane-ld64.c", "XNNPACK/wrappers/f32-spmm/gen/32x2-minmax-neonfma.c", "XNNPACK/wrappers/f32-spmm/gen/32x4-minmax-neonfma.c", "XNNPACK/wrappers/f32-vbinary/gen/vdiv-minmax-neon-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vdivc-minmax-neon-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vrdivc-minmax-neon-x8.c", "XNNPACK/wrappers/f32-vsqrt/gen/neon-sqrt-x4.c", "XNNPACK/wrappers/x8-lut/gen/lut-neon-tbx128x4-x64.c"],
deps = [":interface", "//third_party:FP16"],
exported_deps = [],
compiler_flags = ["-w", "-O2"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [["(aarch64|arm64)", ["-mfpu=neon-vfpv4"]]],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0"],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "ukernels_neon_v8",
srcs = ["XNNPACK/wrappers/f32-qs8-vcvt/gen/vcvt-neonv8-x32.c", "XNNPACK/wrappers/f32-qu8-vcvt/gen/vcvt-neonv8-x32.c", "XNNPACK/wrappers/f32-vrnd/gen/vrndd-neonv8-x8.c", "XNNPACK/wrappers/f32-vrnd/gen/vrndne-neonv8-x8.c", "XNNPACK/wrappers/f32-vrnd/gen/vrndu-neonv8-x8.c", "XNNPACK/wrappers/f32-vrnd/gen/vrndz-neonv8-x8.c", "XNNPACK/wrappers/qc8-dwconv/gen/up8x25-minmax-fp32-neonv8-mla8-ld64.c", "XNNPACK/wrappers/qc8-dwconv/gen/up16x9-minmax-fp32-neonv8-mla8-ld64.c", "XNNPACK/wrappers/qc8-dwconv/gen/up16x25-minmax-fp32-neonv8-mla8-ld64.c", "XNNPACK/wrappers/qc8-gemm/gen/1x8-minmax-fp32-neonv8-mlal-lane-prfm.c", "XNNPACK/wrappers/qc8-gemm/gen/1x8-minmax-fp32-neonv8-mlal-lane.c", "XNNPACK/wrappers/qc8-gemm/gen/1x8c2s4-minmax-fp32-neonv8-mlal.c", "XNNPACK/wrappers/qc8-gemm/gen/1x8c8-minmax-fp32-neonv8-mlal.c", "XNNPACK/wrappers/qc8-gemm/gen/1x16-minmax-fp32-neonv8-mlal-lane.c", "XNNPACK/wrappers/qc8-gemm/gen/2x8c2s4-minmax-fp32-neonv8-mlal.c", "XNNPACK/wrappers/qc8-gemm/gen/2x8c8-minmax-fp32-neonv8-mlal.c", "XNNPACK/wrappers/qc8-gemm/gen/4x16-minmax-fp32-neonv8-mlal-lane.c", "XNNPACK/wrappers/qc8-igemm/gen/1x8-minmax-fp32-neonv8-mlal-lane-prfm.c", "XNNPACK/wrappers/qc8-igemm/gen/1x8-minmax-fp32-neonv8-mlal-lane.c", "XNNPACK/wrappers/qc8-igemm/gen/1x8c2s4-minmax-fp32-neonv8-mlal.c", "XNNPACK/wrappers/qc8-igemm/gen/1x8c8-minmax-fp32-neonv8-mlal.c", "XNNPACK/wrappers/qc8-igemm/gen/1x16-minmax-fp32-neonv8-mlal-lane.c", "XNNPACK/wrappers/qc8-igemm/gen/2x8c2s4-minmax-fp32-neonv8-mlal.c", "XNNPACK/wrappers/qc8-igemm/gen/2x8c8-minmax-fp32-neonv8-mlal.c", "XNNPACK/wrappers/qc8-igemm/gen/4x16-minmax-fp32-neonv8-mlal-lane.c"],
deps = [":interface", "//third_party:FP16"],
exported_deps = [],
compiler_flags = ["-w", "-O2"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [["(aarch64|arm64)", ["-march=armv8-a", "-mfpu=neon-fp-armv8"]], ["^android-armv7$", ["-march=armv8-a", "-mfpu=neon-fp-armv8", "-mfloat-abi=softfp"]], ["^iphoneos-armv7$", ["-mcpu=cyclone", "-mtune=generic"]]],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0"],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "ukernels_avx512skx",
srcs = [],
deps = [":interface"],
exported_deps = [],
compiler_flags = ["-w", "-O2", "-mavx512f", "-mavx512cd", "-mavx512bw", "-mavx512dq", "-mavx512vl"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [["^(i[3-6]86|x86|x86_64|AMD64)$", ["-mavx512f", "-mavx512cd", "-mavx512bw", "-mavx512dq", "-mavx512vl"]]],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
platform_srcs = [["x86|x86_64|platform009", ["XNNPACK/wrappers/f16-f32-vcvt/gen/vcvt-avx512skx-x16.c", "XNNPACK/wrappers/f32-f16-vcvt/gen/vcvt-avx512skx-x16.c", "XNNPACK/wrappers/f32-qs8-vcvt/gen/vcvt-avx512skx-x128.c", "XNNPACK/wrappers/f32-qu8-vcvt/gen/vcvt-avx512skx-x128.c", "XNNPACK/wrappers/qc8-dwconv/gen/up32x9-minmax-fp32-avx512skx-mul32.c", "XNNPACK/wrappers/qc8-dwconv/gen/up32x25-minmax-fp32-avx512skx-mul32.c", "XNNPACK/wrappers/qc8-gemm/gen/1x16c8-minmax-fp32-avx512skx.c", "XNNPACK/wrappers/qc8-gemm/gen/4x16c8-minmax-fp32-avx512skx.c", "XNNPACK/wrappers/qc8-igemm/gen/1x16c8-minmax-fp32-avx512skx.c", "XNNPACK/wrappers/qc8-igemm/gen/4x16c8-minmax-fp32-avx512skx.c", "XNNPACK/wrappers/qs8-dwconv/gen/up32x9-minmax-fp32-avx512skx-mul32.c", "XNNPACK/wrappers/qs8-dwconv/gen/up32x25-minmax-fp32-avx512skx-mul32.c", "XNNPACK/wrappers/qs8-f32-vcvt/gen/vcvt-avx512skx-x32.c", "XNNPACK/wrappers/qs8-gemm/gen/1x16c8-minmax-fp32-avx512skx.c", "XNNPACK/wrappers/qs8-gemm/gen/4x16c8-minmax-fp32-avx512skx.c", "XNNPACK/wrappers/qs8-igemm/gen/1x16c8-minmax-fp32-avx512skx.c", "XNNPACK/wrappers/qs8-igemm/gen/4x16c8-minmax-fp32-avx512skx.c", "XNNPACK/wrappers/qs8-vadd/gen/minmax-avx512skx-mul32-ld128-x16.c", "XNNPACK/wrappers/qs8-vaddc/gen/minmax-avx512skx-mul32-ld128-x16.c", "XNNPACK/wrappers/qu8-dwconv/gen/up32x9-minmax-fp32-avx512skx-mul32.c", "XNNPACK/wrappers/qu8-dwconv/gen/up32x25-minmax-fp32-avx512skx-mul32.c", "XNNPACK/wrappers/qu8-f32-vcvt/gen/vcvt-avx512skx-x32.c", "XNNPACK/wrappers/qu8-gemm/gen/1x16c8-minmax-fp32-avx512skx.c", "XNNPACK/wrappers/qu8-gemm/gen/4x16c8-minmax-fp32-avx512skx.c", "XNNPACK/wrappers/qu8-igemm/gen/1x16c8-minmax-fp32-avx512skx.c", "XNNPACK/wrappers/qu8-igemm/gen/4x16c8-minmax-fp32-avx512skx.c", "XNNPACK/wrappers/qu8-vadd/gen/minmax-avx512skx-mul32-ld128-x16.c", "XNNPACK/wrappers/qu8-vaddc/gen/minmax-avx512skx-mul32-ld128-x16.c", "XNNPACK/wrappers/x8-lut/gen/lut-avx512skx-vpshufb-x64.c"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0"],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "ukernels_neon_fp16",
srcs = ["XNNPACK/wrappers/f16-f32-vcvt/gen/vcvt-neonfp16-x16.c", "XNNPACK/wrappers/f32-f16-vcvt/gen/vcvt-neonfp16-x16.c"],
deps = [":interface"],
exported_deps = [],
compiler_flags = ["-w", "-O2"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [["arm", ["-mfpu=neon-fp16"]]],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0"],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "interface",
srcs = [],
deps = [],
exported_deps = ["//third_party:pthreadpool_header"],
compiler_flags = ["-w"],
preferred_linkage = "static",
exported_headers = {"xnnpack.h": "XNNPACK/include/xnnpack.h"},
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0"],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "ukernels_fma3",
srcs = [],
deps = [":interface"],
exported_deps = [],
compiler_flags = ["-w", "-O2", "-mfma", "-mf16c"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [["^(i[3-6]86|x86|x86_64|AMD64)$", ["-mfma", "-mf16c"]]],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
platform_srcs = [["x86|x86_64|platform009", ["XNNPACK/wrappers/f16-dwconv/gen/up8x25-minmax-fma3-acc2.c", "XNNPACK/wrappers/f16-dwconv/gen/up16x3-minmax-fma3.c", "XNNPACK/wrappers/f16-dwconv/gen/up16x4-minmax-fma3.c", "XNNPACK/wrappers/f16-dwconv/gen/up16x9-minmax-fma3.c", "XNNPACK/wrappers/f16-ibilinear/gen/fma3-c8.c", "XNNPACK/wrappers/f16-vmulcaddc/gen/c8-minmax-fma3-2x.c", "XNNPACK/wrappers/f32-dwconv/gen/up8x25-minmax-fma3.c", "XNNPACK/wrappers/f32-dwconv/gen/up16x3-minmax-fma3.c", "XNNPACK/wrappers/f32-dwconv/gen/up16x4-minmax-fma3.c", "XNNPACK/wrappers/f32-dwconv/gen/up16x9-minmax-fma3.c", "XNNPACK/wrappers/f32-gemm/gen/1x16-minmax-fma3-broadcast.c", "XNNPACK/wrappers/f32-gemm/gen/1x16s4-minmax-fma3-broadcast.c", "XNNPACK/wrappers/f32-gemm/gen/4x16s4-minmax-fma3-broadcast.c", "XNNPACK/wrappers/f32-gemm/gen/5x16-minmax-fma3-broadcast.c", "XNNPACK/wrappers/f32-igemm/gen/1x16-minmax-fma3-broadcast.c", "XNNPACK/wrappers/f32-igemm/gen/1x16s4-minmax-fma3-broadcast.c", "XNNPACK/wrappers/f32-igemm/gen/4x16s4-minmax-fma3-broadcast.c", "XNNPACK/wrappers/f32-igemm/gen/5x16-minmax-fma3-broadcast.c", "XNNPACK/wrappers/f32-vhswish/gen/vhswish-fma3-x16.c"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0"],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "jit_memory",
srcs = ["XNNPACK/src/jit/aarch32-assembler.cc", "XNNPACK/src/jit/aarch64-assembler.cc", "XNNPACK/src/jit/assembler.cc", "XNNPACK/src/jit/memory.c"],
deps = [":interface", "//third_party:clog"],
exported_deps = [],
compiler_flags = ["-w", "-Os"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0"],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "ukernels_sse2",
srcs = [],
deps = [":interface", "//third_party:FP16"],
exported_deps = [],
compiler_flags = ["-w", "-O2"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [["x86", ["-msse2"]]],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
platform_srcs = [["x86|x86_64|platform009", ["XNNPACK/wrappers/f16-f32-vcvt/gen/vcvt-sse2-int16-x32.c", "XNNPACK/wrappers/f32-argmaxpool/4x-sse2-c4.c", "XNNPACK/wrappers/f32-argmaxpool/9p8x-sse2-c4.c", "XNNPACK/wrappers/f32-argmaxpool/9x-sse2-c4.c", "XNNPACK/wrappers/f32-f16-vcvt/gen/vcvt-sse2-x16.c", "XNNPACK/wrappers/f32-prelu/gen/sse2-2x8.c", "XNNPACK/wrappers/f32-qs8-vcvt/gen/vcvt-sse2-x32.c", "XNNPACK/wrappers/f32-qu8-vcvt/gen/vcvt-sse2-x32.c", "XNNPACK/wrappers/f32-raddstoreexpminusmax/gen/sse2-rr2-p5-x20-acc2.c", "XNNPACK/wrappers/f32-velu/gen/velu-sse2-rr2-lut16-p3-x12.c", "XNNPACK/wrappers/f32-vlrelu/gen/vlrelu-sse2-x8.c", "XNNPACK/wrappers/f32-vrnd/gen/vrndd-sse2-x8.c", "XNNPACK/wrappers/f32-vrnd/gen/vrndne-sse2-x8.c", "XNNPACK/wrappers/f32-vrnd/gen/vrndu-sse2-x8.c", "XNNPACK/wrappers/f32-vrnd/gen/vrndz-sse2-x8.c", "XNNPACK/wrappers/f32-vsigmoid/gen/vsigmoid-sse2-rr2-lut64-p2-div-x8.c", "XNNPACK/wrappers/qc8-dwconv/gen/up8x9-minmax-fp32-sse2-mul16.c", "XNNPACK/wrappers/qc8-dwconv/gen/up8x25-minmax-fp32-sse2-mul16.c", "XNNPACK/wrappers/qc8-gemm/gen/1x4c8-minmax-fp32-sse2-ld64.c", "XNNPACK/wrappers/qc8-gemm/gen/3x4c8-minmax-fp32-sse2-ld64.c", "XNNPACK/wrappers/qc8-igemm/gen/1x4c8-minmax-fp32-sse2-ld64.c", "XNNPACK/wrappers/qc8-igemm/gen/3x4c8-minmax-fp32-sse2-ld64.c", "XNNPACK/wrappers/qs8-dwconv/gen/up8x9-minmax-fp32-sse2-mul16-add16.c", "XNNPACK/wrappers/qs8-dwconv/gen/up8x25-minmax-fp32-sse2-mul16-add16.c", "XNNPACK/wrappers/qs8-f32-vcvt/gen/vcvt-sse2-x32.c", "XNNPACK/wrappers/qs8-gavgpool/gen/7p7x-minmax-fp32-sse2-c8.c", "XNNPACK/wrappers/qs8-gavgpool/gen/7x-minmax-fp32-sse2-c8.c", "XNNPACK/wrappers/qs8-gemm/gen/1x4c8-minmax-fp32-sse2-ld64.c", "XNNPACK/wrappers/qs8-gemm/gen/3x4c8-minmax-fp32-sse2-ld64.c", "XNNPACK/wrappers/qs8-igemm/gen/1x4c8-minmax-fp32-sse2-ld64.c", "XNNPACK/wrappers/qs8-igemm/gen/3x4c8-minmax-fp32-sse2-ld64.c", "XNNPACK/wrappers/qs8-vadd/gen/minmax-sse2-mul16-ld64-x8.c", "XNNPACK/wrappers/qs8-vaddc/gen/minmax-sse2-mul16-ld64-x8.c", 
"XNNPACK/wrappers/qs8-vmul/gen/minmax-fp32-sse2-mul16-ld64-x8.c", "XNNPACK/wrappers/qs8-vmulc/gen/minmax-fp32-sse2-mul16-ld64-x8.c", "XNNPACK/wrappers/qu8-avgpool/9p8x-minmax-sse2-c8.c", "XNNPACK/wrappers/qu8-avgpool/9x-minmax-sse2-c8.c", "XNNPACK/wrappers/qu8-dwconv/gen/up8x9-minmax-fp32-sse2-mul16.c", "XNNPACK/wrappers/qu8-dwconv/gen/up8x25-minmax-fp32-sse2-mul16.c", "XNNPACK/wrappers/qu8-f32-vcvt/gen/vcvt-sse2-x32.c", "XNNPACK/wrappers/qu8-gavgpool/gen/7p7x-minmax-fp32-sse2-c8.c", "XNNPACK/wrappers/qu8-gavgpool/gen/7x-minmax-fp32-sse2-c8.c", "XNNPACK/wrappers/qu8-gemm/gen/1x4c8-minmax-fp32-sse2-ld64.c", "XNNPACK/wrappers/qu8-gemm/gen/3x4c8-minmax-fp32-sse2-ld64.c", "XNNPACK/wrappers/qu8-igemm/gen/1x4c8-minmax-fp32-sse2-ld64.c", "XNNPACK/wrappers/qu8-igemm/gen/3x4c8-minmax-fp32-sse2-ld64.c", "XNNPACK/wrappers/qu8-vadd/gen/minmax-sse2-mul16-ld64-x8.c", "XNNPACK/wrappers/qu8-vaddc/gen/minmax-sse2-mul16-ld64-x8.c", "XNNPACK/wrappers/qu8-vmul/gen/minmax-fp32-sse2-mul16-ld64-x8.c", "XNNPACK/wrappers/qu8-vmulc/gen/minmax-fp32-sse2-mul16-ld64-x8.c", "XNNPACK/wrappers/s8-ibilinear/gen/sse2-c8.c", "XNNPACK/wrappers/s8-maxpool/9p8x-minmax-sse2-c16.c", "XNNPACK/wrappers/s8-vclamp/sse2-x64.c", "XNNPACK/wrappers/u8-ibilinear/gen/sse2-c8.c", "XNNPACK/wrappers/u8-maxpool/9p8x-minmax-sse2-c16.c", "XNNPACK/wrappers/u8-rmax/sse2.c", "XNNPACK/wrappers/u8-vclamp/sse2-x64.c", "XNNPACK/wrappers/xx-fill/sse2-x64.c", "XNNPACK/wrappers/xx-pad/sse2.c", "XNNPACK/wrappers/x8-zip/xm-sse2.c", "XNNPACK/wrappers/x8-zip/x2-sse2.c", "XNNPACK/wrappers/x8-zip/x3-sse2.c", "XNNPACK/wrappers/x8-zip/x4-sse2.c", "XNNPACK/wrappers/x32-unpool/sse2.c", "XNNPACK/wrappers/x32-zip/xm-sse2.c", "XNNPACK/wrappers/x32-zip/x2-sse2.c", "XNNPACK/wrappers/x32-zip/x3-sse2.c", "XNNPACK/wrappers/x32-zip/x4-sse2.c"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0"],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "ukernels_sse",
srcs = [],
deps = [":interface"],
exported_deps = [],
compiler_flags = ["-w", "-O2"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [["x86", ["-msse"]]],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
platform_srcs = [["x86|x86_64|platform009", ["XNNPACK/wrappers/f32-avgpool/9p8x-minmax-sse-c4.c", "XNNPACK/wrappers/f32-avgpool/9x-minmax-sse-c4.c", "XNNPACK/wrappers/f32-conv-hwc2chw/3x3s2p1c3x4-sse-2x2.c", "XNNPACK/wrappers/f32-dwconv/gen/up8x3-minmax-sse.c", "XNNPACK/wrappers/f32-dwconv/gen/up8x4-minmax-sse.c", "XNNPACK/wrappers/f32-dwconv/gen/up8x9-minmax-sse.c", "XNNPACK/wrappers/f32-dwconv/gen/up8x25-minmax-sse.c", "XNNPACK/wrappers/f32-dwconv2d-chw/gen/3x3p1-minmax-sse-2x4-acc2.c", "XNNPACK/wrappers/f32-dwconv2d-chw/gen/3x3s2p1-minmax-sse-1x4-acc3.c", "XNNPACK/wrappers/f32-dwconv2d-chw/gen/5x5p2-minmax-sse-4x4.c", "XNNPACK/wrappers/f32-dwconv2d-chw/gen/5x5s2p2-minmax-sse-2x4.c", "XNNPACK/wrappers/f32-gavgpool-cw/sse-x4.c", "XNNPACK/wrappers/f32-gavgpool/7p7x-minmax-sse-c4.c", "XNNPACK/wrappers/f32-gavgpool/7x-minmax-sse-c4.c", "XNNPACK/wrappers/f32-gemm/gen/1x8-minmax-sse-load1.c", "XNNPACK/wrappers/f32-gemm/gen/4x2c4-minmax-sse.c", "XNNPACK/wrappers/f32-gemm/gen/4x8-minmax-sse-load1.c", "XNNPACK/wrappers/f32-ibilinear-chw/gen/sse-p8.c", "XNNPACK/wrappers/f32-ibilinear/gen/sse-c8.c", "XNNPACK/wrappers/f32-igemm/gen/1x8-minmax-sse-load1.c", "XNNPACK/wrappers/f32-igemm/gen/4x2c4-minmax-sse.c", "XNNPACK/wrappers/f32-igemm/gen/4x8-minmax-sse-load1.c", "XNNPACK/wrappers/f32-maxpool/9p8x-minmax-sse-c4.c", "XNNPACK/wrappers/f32-pavgpool/9p8x-minmax-sse-c4.c", "XNNPACK/wrappers/f32-pavgpool/9x-minmax-sse-c4.c", "XNNPACK/wrappers/f32-rmax/sse.c", "XNNPACK/wrappers/f32-spmm/gen/32x1-minmax-sse.c", "XNNPACK/wrappers/f32-vbinary/gen/vadd-minmax-sse-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vaddc-minmax-sse-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vdiv-minmax-sse-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vdivc-minmax-sse-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vmax-sse-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vmaxc-sse-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vmin-sse-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vminc-sse-x8.c", 
"XNNPACK/wrappers/f32-vbinary/gen/vmul-minmax-sse-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vmulc-minmax-sse-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vrdivc-minmax-sse-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vrsubc-minmax-sse-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vsqrdiff-sse-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vsqrdiffc-sse-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vsub-minmax-sse-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vsubc-minmax-sse-x8.c", "XNNPACK/wrappers/f32-vclamp/gen/vclamp-sse-x8.c", "XNNPACK/wrappers/f32-vhswish/gen/vhswish-sse-x8.c", "XNNPACK/wrappers/f32-vlrelu/gen/vlrelu-sse-x8.c", "XNNPACK/wrappers/f32-vmulcaddc/gen/c4-minmax-sse-2x.c", "XNNPACK/wrappers/f32-vsqrt/gen/sse-sqrt-x4.c", "XNNPACK/wrappers/f32-vunary/gen/vabs-sse-x8.c", "XNNPACK/wrappers/f32-vunary/gen/vneg-sse-x8.c", "XNNPACK/wrappers/f32-vunary/gen/vsqr-sse-x8.c", "XNNPACK/wrappers/x32-packx/x4-sse.c"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0"],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "ukernels_asm_aarch32",
srcs = ["XNNPACK/wrappers/f32-gemm/gen/4x8-minmax-aarch32-neon-cortex-a7.S", "XNNPACK/wrappers/f32-gemm/gen/4x8-minmax-aarch32-neon-cortex-a53.S", "XNNPACK/wrappers/f32-gemm/gen/4x8-minmax-aarch32-neon-cortex-a75.S", "XNNPACK/wrappers/f32-gemm/gen/4x8-minmax-aarch32-neon-ld64.S", "XNNPACK/wrappers/f32-gemm/gen/4x8-minmax-aarch32-neon-prfm-cortex-a53.S", "XNNPACK/wrappers/f32-gemm/gen/4x8-minmax-aarch32-neon-prfm-cortex-a75.S", "XNNPACK/wrappers/f32-gemm/4x4-aarch32-vfp-ld64.S", "XNNPACK/wrappers/f32-gemm/4x4-minmax-aarch32-vfp-ld64.S", "XNNPACK/wrappers/f32-gemm/4x8-minmax-aarch32-neon-cortex-a55.S", "XNNPACK/wrappers/f32-igemm/gen/4x8-minmax-aarch32-neon-cortex-a7.S", "XNNPACK/wrappers/f32-igemm/gen/4x8-minmax-aarch32-neon-cortex-a53.S", "XNNPACK/wrappers/f32-igemm/gen/4x8-minmax-aarch32-neon-cortex-a75.S", "XNNPACK/wrappers/f32-igemm/gen/4x8-minmax-aarch32-neon-ld64.S", "XNNPACK/wrappers/f32-igemm/gen/4x8-minmax-aarch32-neon-prfm-cortex-a53.S", "XNNPACK/wrappers/f32-igemm/gen/4x8-minmax-aarch32-neon-prfm-cortex-a75.S", "XNNPACK/wrappers/f32-igemm/4x8-minmax-aarch32-neon-cortex-a55.S", "XNNPACK/wrappers/qc8-gemm/gen/4x8-minmax-fp32-aarch32-neon-mlal-lane-cortex-a7.S", "XNNPACK/wrappers/qc8-gemm/gen/4x8-minmax-fp32-aarch32-neon-mlal-lane-cortex-a53.S", "XNNPACK/wrappers/qc8-gemm/gen/4x8-minmax-fp32-aarch32-neon-mlal-lane-ld64.S", "XNNPACK/wrappers/qc8-gemm/gen/4x8-minmax-fp32-aarch32-neon-mlal-lane-prfm-cortex-a7.S", "XNNPACK/wrappers/qc8-gemm/gen/4x8-minmax-fp32-aarch32-neon-mlal-lane-prfm-cortex-a53.S", "XNNPACK/wrappers/qc8-gemm/gen/4x8-minmax-fp32-aarch32-neon-mlal-lane-prfm-ld64.S", "XNNPACK/wrappers/qc8-gemm/gen/4x8-minmax-fp32-aarch32-neonv8-mlal-lane-cortex-a53.S", "XNNPACK/wrappers/qc8-gemm/gen/4x8-minmax-fp32-aarch32-neonv8-mlal-lane-ld64.S", "XNNPACK/wrappers/qc8-gemm/gen/4x8-minmax-fp32-aarch32-neonv8-mlal-lane-prfm-cortex-a53.S", "XNNPACK/wrappers/qc8-gemm/gen/4x8-minmax-fp32-aarch32-neonv8-mlal-lane-prfm-ld64.S", 
"XNNPACK/wrappers/qc8-gemm/gen/4x8c4-minmax-fp32-aarch32-neondot-cortex-a55.S", "XNNPACK/wrappers/qc8-gemm/gen/4x8c4-minmax-fp32-aarch32-neondot-ld64.S", "XNNPACK/wrappers/qc8-igemm/gen/4x8-minmax-fp32-aarch32-neon-mlal-lane-cortex-a53.S", "XNNPACK/wrappers/qc8-igemm/gen/4x8-minmax-fp32-aarch32-neon-mlal-lane-ld64.S", "XNNPACK/wrappers/qc8-igemm/gen/4x8-minmax-fp32-aarch32-neon-mlal-lane-prfm-cortex-a53.S", "XNNPACK/wrappers/qc8-igemm/gen/4x8-minmax-fp32-aarch32-neon-mlal-lane-prfm-ld64.S", "XNNPACK/wrappers/qc8-igemm/gen/4x8-minmax-fp32-aarch32-neonv8-mlal-lane-cortex-a53.S", "XNNPACK/wrappers/qc8-igemm/gen/4x8-minmax-fp32-aarch32-neonv8-mlal-lane-ld64.S", "XNNPACK/wrappers/qc8-igemm/gen/4x8-minmax-fp32-aarch32-neonv8-mlal-lane-prfm-cortex-a53.S", "XNNPACK/wrappers/qc8-igemm/gen/4x8-minmax-fp32-aarch32-neonv8-mlal-lane-prfm-ld64.S", "XNNPACK/wrappers/qc8-igemm/gen/4x8c4-minmax-fp32-aarch32-neondot-cortex-a55.S", "XNNPACK/wrappers/qc8-igemm/gen/4x8c4-minmax-fp32-aarch32-neondot-ld64.S", "XNNPACK/wrappers/qs8-gemm/gen/4x8-minmax-rndnu-aarch32-neon-mlal-lane-cortex-a7.S", "XNNPACK/wrappers/qs8-gemm/gen/4x8-minmax-rndnu-aarch32-neon-mlal-lane-cortex-a53.S", "XNNPACK/wrappers/qs8-gemm/gen/4x8-minmax-rndnu-aarch32-neon-mlal-lane-ld64.S", "XNNPACK/wrappers/qs8-gemm/gen/4x8-minmax-rndnu-aarch32-neon-mlal-lane-prfm-cortex-a7.S", "XNNPACK/wrappers/qs8-gemm/gen/4x8-minmax-rndnu-aarch32-neon-mlal-lane-prfm-cortex-a53.S", "XNNPACK/wrappers/qs8-gemm/gen/4x8-minmax-rndnu-aarch32-neon-mlal-lane-prfm-ld64.S", "XNNPACK/wrappers/qs8-gemm/gen/4x8c4-minmax-rndnu-aarch32-neondot-cortex-a55.S", "XNNPACK/wrappers/qs8-gemm/gen/4x8c4-minmax-rndnu-aarch32-neondot-ld64.S", "XNNPACK/wrappers/qs8-igemm/gen/4x8-minmax-rndnu-aarch32-neon-mlal-lane-cortex-a53.S", "XNNPACK/wrappers/qs8-igemm/gen/4x8-minmax-rndnu-aarch32-neon-mlal-lane-ld64.S", "XNNPACK/wrappers/qs8-igemm/gen/4x8-minmax-rndnu-aarch32-neon-mlal-lane-prfm-cortex-a53.S", 
"XNNPACK/wrappers/qs8-igemm/gen/4x8-minmax-rndnu-aarch32-neon-mlal-lane-prfm-ld64.S", "XNNPACK/wrappers/qs8-igemm/gen/4x8c4-minmax-rndnu-aarch32-neondot-cortex-a55.S", "XNNPACK/wrappers/qs8-igemm/gen/4x8c4-minmax-rndnu-aarch32-neondot-ld64.S", "XNNPACK/wrappers/qu8-gemm/gen/4x8-minmax-rndnu-aarch32-neon-mlal-lane-cortex-a7.S", "XNNPACK/wrappers/qu8-gemm/gen/4x8-minmax-rndnu-aarch32-neon-mlal-lane-cortex-a53.S", "XNNPACK/wrappers/qu8-gemm/gen/4x8-minmax-rndnu-aarch32-neon-mlal-lane-ld64.S", "XNNPACK/wrappers/qu8-gemm/gen/4x8-minmax-rndnu-aarch32-neon-mlal-lane-prfm-cortex-a7.S", "XNNPACK/wrappers/qu8-gemm/gen/4x8-minmax-rndnu-aarch32-neon-mlal-lane-prfm-cortex-a53.S", "XNNPACK/wrappers/qu8-gemm/gen/4x8-minmax-rndnu-aarch32-neon-mlal-lane-prfm-ld64.S", "XNNPACK/wrappers/qu8-igemm/gen/4x8-minmax-rndnu-aarch32-neon-mlal-lane-cortex-a53.S", "XNNPACK/wrappers/qu8-igemm/gen/4x8-minmax-rndnu-aarch32-neon-mlal-lane-ld64.S", "XNNPACK/wrappers/qu8-igemm/gen/4x8-minmax-rndnu-aarch32-neon-mlal-lane-prfm-cortex-a53.S", "XNNPACK/wrappers/qu8-igemm/gen/4x8-minmax-rndnu-aarch32-neon-mlal-lane-prfm-ld64.S"],
deps = [":interface", ":jit_memory", "//third_party:FP16"],
exported_deps = [],
compiler_flags = ["-w", "-O2"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [["^android-armv7$", ["-march=armv8.2-a+dotprod", "-mfpu=neon-fp-armv8", "-mfloat-abi=softfp"]]],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0"],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "ukernels_ssse3",
srcs = [],
deps = [":interface", "//third_party:FP16"],
exported_deps = [],
compiler_flags = ["-w", "-O2"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [["x86", ["-mssse3"]]],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
platform_srcs = [["x86|x86_64|platform009", ["XNNPACK/wrappers/f32-dwconv2d-chw/gen/3x3p1-minmax-ssse3-2x4-acc2.c"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0"],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "ukernels_f16c",
srcs = [],
deps = [":interface"],
exported_deps = [],
compiler_flags = ["-w", "-O2", "-mf16c"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [["x86", ["-mf16c"]]],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
platform_srcs = [["x86|x86_64|platform009", ["XNNPACK/wrappers/f16-f32-vcvt/gen/vcvt-f16c-x16.c", "XNNPACK/wrappers/f16-gavgpool/gen/7p7x-minmax-f16c-c8.c", "XNNPACK/wrappers/f16-gavgpool/gen/7x-minmax-f16c-c8.c", "XNNPACK/wrappers/f16-maxpool/9p8x-minmax-f16c-c8.c", "XNNPACK/wrappers/f16-prelu/gen/f16c-2x16.c", "XNNPACK/wrappers/f16-vbinary/gen/vadd-minmax-f16c-x16.c", "XNNPACK/wrappers/f16-vbinary/gen/vaddc-minmax-f16c-x16.c", "XNNPACK/wrappers/f16-vbinary/gen/vmul-minmax-f16c-x16.c", "XNNPACK/wrappers/f16-vbinary/gen/vmulc-minmax-f16c-x16.c", "XNNPACK/wrappers/f16-vclamp/gen/vclamp-f16c-x16.c", "XNNPACK/wrappers/f16-vhswish/gen/vhswish-f16c-x16.c", "XNNPACK/wrappers/f16-vlrelu/gen/vlrelu-f16c-x16.c", "XNNPACK/wrappers/f32-f16-vcvt/gen/vcvt-f16c-x16.c"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0"],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "ukernels_xop",
srcs = [],
deps = [":interface"],
exported_deps = [],
compiler_flags = ["-w", "-O2", "-mxop"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows-x86_64", ["-Drestrict="]], ["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
platform_srcs = [["x86|x86_64|platform009", ["XNNPACK/wrappers/qc8-dwconv/gen/up16x9-minmax-fp32-xop-mul16-add16.c", "XNNPACK/wrappers/qc8-dwconv/gen/up16x25-minmax-fp32-xop-mul16-add16.c", "XNNPACK/wrappers/qc8-gemm/gen/1x4c8-minmax-fp32-xop-ld64.c", "XNNPACK/wrappers/qc8-gemm/gen/2x4c8-minmax-fp32-xop-ld64.c", "XNNPACK/wrappers/qc8-igemm/gen/1x4c8-minmax-fp32-xop-ld64.c", "XNNPACK/wrappers/qc8-igemm/gen/2x4c8-minmax-fp32-xop-ld64.c", "XNNPACK/wrappers/qs8-dwconv/gen/up16x9-minmax-fp32-xop-mul16-add16.c", "XNNPACK/wrappers/qs8-dwconv/gen/up16x25-minmax-fp32-xop-mul16-add16.c", "XNNPACK/wrappers/qs8-gemm/gen/1x4c8-minmax-fp32-xop-ld64.c", "XNNPACK/wrappers/qs8-gemm/gen/2x4c8-minmax-fp32-xop-ld64.c", "XNNPACK/wrappers/qs8-igemm/gen/1x4c8-minmax-fp32-xop-ld64.c", "XNNPACK/wrappers/qs8-igemm/gen/2x4c8-minmax-fp32-xop-ld64.c", "XNNPACK/wrappers/qs8-vadd/gen/minmax-xop-mul32-ld32-x8.c", "XNNPACK/wrappers/qs8-vaddc/gen/minmax-xop-mul32-ld32-x8.c", "XNNPACK/wrappers/qu8-dwconv/gen/up16x9-minmax-fp32-xop-mul32.c", "XNNPACK/wrappers/qu8-dwconv/gen/up16x25-minmax-fp32-xop-mul32.c", "XNNPACK/wrappers/qu8-gemm/gen/1x4c8-minmax-fp32-xop-ld64.c", "XNNPACK/wrappers/qu8-gemm/gen/2x4c8-minmax-fp32-xop-ld64.c", "XNNPACK/wrappers/qu8-igemm/gen/1x4c8-minmax-fp32-xop-ld64.c", "XNNPACK/wrappers/qu8-igemm/gen/2x4c8-minmax-fp32-xop-ld64.c", "XNNPACK/wrappers/qu8-vadd/gen/minmax-xop-mul32-ld32-x8.c", "XNNPACK/wrappers/qu8-vaddc/gen/minmax-xop-mul32-ld32-x8.c"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0"],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "ukernels_scalar_aarch32",
srcs = ["XNNPACK/wrappers/f16-f32-vcvt/gen/vcvt-scalar-x4.c", "XNNPACK/wrappers/f32-argmaxpool/4x-scalar-c1.c", "XNNPACK/wrappers/f32-argmaxpool/9p8x-scalar-c1.c", "XNNPACK/wrappers/f32-argmaxpool/9x-scalar-c1.c", "XNNPACK/wrappers/f32-avgpool/9p8x-minmax-scalar-c1.c", "XNNPACK/wrappers/f32-avgpool/9x-minmax-scalar-c1.c", "XNNPACK/wrappers/f32-conv-hwc/3x3s2p0p1c3x4-scalar-1x1.c", "XNNPACK/wrappers/f32-conv-hwc/3x3s2p1c3x4-scalar-1x1.c", "XNNPACK/wrappers/f32-conv-hwc2chw/3x3s2p1c3x4-scalar-1x1.c", "XNNPACK/wrappers/f32-dwconv/gen/up1x3-minmax-scalar-acc2.c", "XNNPACK/wrappers/f32-dwconv/gen/up1x3-scalar-acc2.c", "XNNPACK/wrappers/f32-dwconv/gen/up1x4-minmax-scalar-acc2.c", "XNNPACK/wrappers/f32-dwconv/gen/up1x4-scalar-acc2.c", "XNNPACK/wrappers/f32-dwconv/gen/up1x9-minmax-scalar-acc2.c", "XNNPACK/wrappers/f32-dwconv/gen/up1x9-scalar-acc2.c", "XNNPACK/wrappers/f32-dwconv/gen/up1x25-minmax-scalar-acc2.c", "XNNPACK/wrappers/f32-dwconv/gen/up1x25-scalar-acc2.c", "XNNPACK/wrappers/f32-dwconv2d-chw/gen/3x3p1-minmax-scalar-4x1.c", "XNNPACK/wrappers/f32-dwconv2d-chw/gen/3x3s2p1-minmax-scalar-2x1-acc2.c", "XNNPACK/wrappers/f32-dwconv2d-chw/gen/5x5p2-minmax-scalar-2x1-acc2.c", "XNNPACK/wrappers/f32-dwconv2d-chw/gen/5x5s2p2-minmax-scalar-2x1-acc2.c", "XNNPACK/wrappers/f32-f16-vcvt/gen/vcvt-scalar-fabsf-x2.c", "XNNPACK/wrappers/f32-gavgpool-cw/scalar-x1.c", "XNNPACK/wrappers/f32-gavgpool/7p7x-minmax-scalar-c1.c", "XNNPACK/wrappers/f32-gavgpool/7x-minmax-scalar-c1.c", "XNNPACK/wrappers/f32-gemm/gen/1x4-minmax-scalar.c", "XNNPACK/wrappers/f32-gemm/gen/1x4-relu-scalar.c", "XNNPACK/wrappers/f32-gemm/gen/1x4-scalar.c", "XNNPACK/wrappers/f32-gemm/gen/4x2-minmax-scalar.c", "XNNPACK/wrappers/f32-gemm/gen/4x2-scalar.c", "XNNPACK/wrappers/f32-gemm/gen/4x4-minmax-scalar.c", "XNNPACK/wrappers/f32-gemm/gen/4x4-relu-scalar.c", "XNNPACK/wrappers/f32-gemm/gen/4x4-scalar.c", "XNNPACK/wrappers/f32-ibilinear-chw/gen/scalar-p4.c", "XNNPACK/wrappers/f32-ibilinear/gen/scalar-c2.c", 
"XNNPACK/wrappers/f32-igemm/gen/1x4-minmax-scalar.c", "XNNPACK/wrappers/f32-igemm/gen/1x4-relu-scalar.c", "XNNPACK/wrappers/f32-igemm/gen/1x4-scalar.c", "XNNPACK/wrappers/f32-igemm/gen/4x2-minmax-scalar.c", "XNNPACK/wrappers/f32-igemm/gen/4x2-scalar.c", "XNNPACK/wrappers/f32-igemm/gen/4x4-minmax-scalar.c", "XNNPACK/wrappers/f32-igemm/gen/4x4-relu-scalar.c", "XNNPACK/wrappers/f32-igemm/gen/4x4-scalar.c", "XNNPACK/wrappers/f32-maxpool/9p8x-minmax-scalar-c1.c", "XNNPACK/wrappers/f32-pavgpool/9p8x-minmax-scalar-c1.c", "XNNPACK/wrappers/f32-pavgpool/9x-minmax-scalar-c1.c", "XNNPACK/wrappers/f32-prelu/gen/scalar-2x4.c", "XNNPACK/wrappers/f32-qs8-vcvt/gen/vcvt-scalar-imagic-x4.c", "XNNPACK/wrappers/f32-qu8-vcvt/gen/vcvt-scalar-imagic-x4.c", "XNNPACK/wrappers/f32-raddstoreexpminusmax/gen/scalar-rr2-p5-x4-acc2.c", "XNNPACK/wrappers/f32-rmax/scalar.c", "XNNPACK/wrappers/f32-spmm/gen/8x1-minmax-scalar.c", "XNNPACK/wrappers/f32-spmm/gen/8x2-minmax-scalar.c", "XNNPACK/wrappers/f32-spmm/gen/8x4-minmax-scalar.c", "XNNPACK/wrappers/f32-vbinary/gen/vadd-minmax-scalar-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vaddc-minmax-scalar-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vdiv-minmax-scalar-x2.c", "XNNPACK/wrappers/f32-vbinary/gen/vdivc-minmax-scalar-x2.c", "XNNPACK/wrappers/f32-vbinary/gen/vmax-scalar-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vmaxc-scalar-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vmin-scalar-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vminc-scalar-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vmul-minmax-scalar-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vmulc-minmax-scalar-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vrdivc-minmax-scalar-x2.c", "XNNPACK/wrappers/f32-vbinary/gen/vrsubc-minmax-scalar-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vsqrdiff-scalar-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vsqrdiffc-scalar-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vsub-minmax-scalar-x8.c", "XNNPACK/wrappers/f32-vbinary/gen/vsubc-minmax-scalar-x8.c", 
"XNNPACK/wrappers/f32-vclamp/gen/vclamp-scalar-x4.c", "XNNPACK/wrappers/f32-velu/gen/velu-scalar-rr2-lut16-p3-x4.c", "XNNPACK/wrappers/f32-vhswish/gen/vhswish-scalar-x4.c", "XNNPACK/wrappers/f32-vlrelu/gen/vlrelu-scalar-x4.c", "XNNPACK/wrappers/f32-vmulcaddc/gen/c1-minmax-scalar-2x.c", "XNNPACK/wrappers/f32-vrelu/gen/vrelu-scalar-x8.c", "XNNPACK/wrappers/f32-vrnd/gen/vrndd-scalar-libm-x1.c", "XNNPACK/wrappers/f32-vrnd/gen/vrndne-scalar-libm-x1.c", "XNNPACK/wrappers/f32-vrnd/gen/vrndu-scalar-libm-x1.c", "XNNPACK/wrappers/f32-vrnd/gen/vrndz-scalar-libm-x1.c", "XNNPACK/wrappers/f32-vsigmoid/gen/vsigmoid-scalar-rr2-lut64-p2-div-x2.c", "XNNPACK/wrappers/f32-vsqrt/gen/scalar-sqrt-x1.c", "XNNPACK/wrappers/f32-vunary/gen/vabs-scalar-x4.c", "XNNPACK/wrappers/f32-vunary/gen/vneg-scalar-x4.c", "XNNPACK/wrappers/f32-vunary/gen/vsqr-scalar-x4.c", "XNNPACK/wrappers/qc8-dwconv/gen/up2x9-minmax-fp32-scalar-fmagic.c", "XNNPACK/wrappers/qc8-dwconv/gen/up2x25-minmax-fp32-scalar-fmagic.c", "XNNPACK/wrappers/qc8-gemm/gen/1x2-minmax-fp32-scalar-fmagic.c", "XNNPACK/wrappers/qc8-gemm/gen/1x8-minmax-fp32-neon-mlal-lane.c", "XNNPACK/wrappers/qc8-gemm/gen/2x2-minmax-fp32-scalar-fmagic.c", "XNNPACK/wrappers/qc8-igemm/gen/1x2-minmax-fp32-scalar-fmagic.c", "XNNPACK/wrappers/qc8-igemm/gen/1x8-minmax-fp32-neon-mlal-lane.c", "XNNPACK/wrappers/qc8-igemm/gen/2x2-minmax-fp32-scalar-fmagic.c", "XNNPACK/wrappers/qs8-dwconv/gen/up1x9-minmax-fp32-scalar-fmagic.c", "XNNPACK/wrappers/qs8-dwconv/gen/up1x25-minmax-fp32-scalar-fmagic.c", "XNNPACK/wrappers/qs8-f32-vcvt/gen/vcvt-scalar-x4.c", "XNNPACK/wrappers/qs8-gavgpool/gen/7p7x-minmax-fp32-scalar-imagic-c1.c", "XNNPACK/wrappers/qs8-gavgpool/gen/7x-minmax-fp32-scalar-imagic-c1.c", "XNNPACK/wrappers/qs8-gemm/gen/1x2-minmax-fp32-scalar-fmagic.c", "XNNPACK/wrappers/qs8-gemm/gen/2x2-minmax-fp32-scalar-fmagic.c", "XNNPACK/wrappers/qs8-igemm/gen/1x2-minmax-fp32-scalar-fmagic.c", "XNNPACK/wrappers/qs8-igemm/gen/2x2-minmax-fp32-scalar-fmagic.c", 
"XNNPACK/wrappers/qs8-vadd/gen/minmax-scalar-x1.c", "XNNPACK/wrappers/qs8-vaddc/gen/minmax-scalar-x1.c", "XNNPACK/wrappers/qs8-vmul/gen/minmax-fp32-scalar-x4.c", "XNNPACK/wrappers/qs8-vmulc/gen/minmax-fp32-scalar-x4.c", "XNNPACK/wrappers/qu8-avgpool/9p8x-minmax-scalar-c1.c", "XNNPACK/wrappers/qu8-avgpool/9x-minmax-scalar-c1.c", "XNNPACK/wrappers/qu8-dwconv/gen/up1x9-minmax-fp32-scalar-fmagic.c", "XNNPACK/wrappers/qu8-dwconv/gen/up1x25-minmax-fp32-scalar-fmagic.c", "XNNPACK/wrappers/qu8-f32-vcvt/gen/vcvt-scalar-x4.c", "XNNPACK/wrappers/qu8-gavgpool/gen/7p7x-minmax-fp32-scalar-imagic-c1.c", "XNNPACK/wrappers/qu8-gavgpool/gen/7x-minmax-fp32-scalar-imagic-c1.c", "XNNPACK/wrappers/qu8-gemm/gen/1x2-minmax-fp32-scalar-fmagic.c", "XNNPACK/wrappers/qu8-gemm/gen/2x2-minmax-fp32-scalar-fmagic.c", "XNNPACK/wrappers/qu8-igemm/gen/1x2-minmax-fp32-scalar-fmagic.c", "XNNPACK/wrappers/qu8-igemm/gen/2x2-minmax-fp32-scalar-fmagic.c", "XNNPACK/wrappers/qu8-vadd/gen/minmax-scalar-x1.c", "XNNPACK/wrappers/qu8-vaddc/gen/minmax-scalar-x1.c", "XNNPACK/wrappers/qu8-vmul/gen/minmax-fp32-scalar-x4.c", "XNNPACK/wrappers/qu8-vmulc/gen/minmax-fp32-scalar-x4.c", "XNNPACK/wrappers/s8-ibilinear/gen/scalar-c1.c", "XNNPACK/wrappers/s8-maxpool/9p8x-minmax-scalar-c1.c", "XNNPACK/wrappers/s8-vclamp/scalar-x4.c", "XNNPACK/wrappers/u8-ibilinear/gen/scalar-c1.c", "XNNPACK/wrappers/u8-maxpool/9p8x-minmax-scalar-c1.c", "XNNPACK/wrappers/u8-rmax/scalar.c", "XNNPACK/wrappers/u8-vclamp/scalar-x4.c", "XNNPACK/wrappers/xx-fill/scalar-x16.c", "XNNPACK/wrappers/xx-pad/scalar.c", "XNNPACK/wrappers/x8-zip/xm-scalar.c", "XNNPACK/wrappers/x8-zip/x2-scalar.c", "XNNPACK/wrappers/x8-zip/x3-scalar.c", "XNNPACK/wrappers/x8-zip/x4-scalar.c", "XNNPACK/wrappers/x32-packx/x2-scalar.c", "XNNPACK/wrappers/x32-packx/x3-scalar.c", "XNNPACK/wrappers/x32-packx/x4-scalar.c", "XNNPACK/wrappers/x32-unpool/scalar.c", "XNNPACK/wrappers/x32-zip/xm-scalar.c", "XNNPACK/wrappers/x32-zip/x2-scalar.c", "XNNPACK/wrappers/x32-zip/x3-scalar.c", 
"XNNPACK/wrappers/x32-zip/x4-scalar.c"],
deps = [":interface", "//third_party:FP16"],
exported_deps = [],
compiler_flags = ["-w", "-O2"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [["^(android-armv7|iphoneos-armv7)$", ["-march=armv7-a", "-mfpu=neon", "-mfloat-abi=softfp"]]],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0"],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "ukernels_neon_fma",
srcs = ["XNNPACK/wrappers/f32-dwconv/gen/up8x3-minmax-neonfma.c", "XNNPACK/wrappers/f32-dwconv/gen/up8x4-minmax-neonfma.c", "XNNPACK/wrappers/f32-dwconv/gen/up8x9-minmax-neonfma.c", "XNNPACK/wrappers/f32-dwconv/gen/up8x25-minmax-neonfma-acc2.c", "XNNPACK/wrappers/f32-gemm/gen/1x8s4-minmax-neonfma.c", "XNNPACK/wrappers/f32-gemm/gen/6x8s4-minmax-neonfma.c", "XNNPACK/wrappers/f32-ibilinear-chw/gen/neonfma-p8.c", "XNNPACK/wrappers/f32-ibilinear/gen/neonfma-c8.c", "XNNPACK/wrappers/f32-igemm/gen/1x8s4-minmax-neonfma.c", "XNNPACK/wrappers/f32-igemm/gen/6x8s4-minmax-neonfma.c", "XNNPACK/wrappers/f32-raddstoreexpminusmax/gen/neonfma-rr1-lut64-p2-x16.c", "XNNPACK/wrappers/f32-spmm/gen/32x1-minmax-neonfma-pipelined.c", "XNNPACK/wrappers/f32-velu/gen/velu-neonfma-rr1-lut16-p3-x16.c", "XNNPACK/wrappers/f32-velu/gen/velu-neonfma-rr1-p6-x8.c", "XNNPACK/wrappers/f32-vmulcaddc/gen/c4-minmax-neonfma-2x.c", "XNNPACK/wrappers/f32-vsigmoid/gen/vsigmoid-neonfma-rr1-lut64-p2-nr2recps-x16.c"],
deps = [":interface", "//third_party:FP16"],
exported_deps = [],
compiler_flags = ["-w", "-O2"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [["arm", ["-mfpu=neon-vfpv4"]]],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0"],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "ukernels_avx2",
srcs = [],
deps = [":interface"],
exported_deps = [],
compiler_flags = ["-w", "-O2", "-mavx2", "-mfma", "-mf16c"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [["x86", ["-mavx2", "-mfma", "-mf16c"]]],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
platform_srcs = [["x86|x86_64|platform009", ["XNNPACK/wrappers/f16-gemm/gen/1x16-minmax-avx2-broadcast.c", "XNNPACK/wrappers/f16-gemm/gen/4x16-minmax-avx2-broadcast.c", "XNNPACK/wrappers/f16-igemm/gen/1x16-minmax-avx2-broadcast.c", "XNNPACK/wrappers/f16-igemm/gen/4x16-minmax-avx2-broadcast.c", "XNNPACK/wrappers/f32-qs8-vcvt/gen/vcvt-avx2-x64.c", "XNNPACK/wrappers/f32-qu8-vcvt/gen/vcvt-avx2-x64.c", "XNNPACK/wrappers/f32-velu/gen/velu-avx2-rr1-lut4-p4-perm-x56.c", "XNNPACK/wrappers/f32-vsigmoid/gen/vsigmoid-avx2-rr1-p5-div-x40.c", "XNNPACK/wrappers/qc8-dwconv/gen/up16x9-minmax-fp32-avx2-mul32.c", "XNNPACK/wrappers/qc8-dwconv/gen/up16x25-minmax-fp32-avx2-mul32.c", "XNNPACK/wrappers/qc8-gemm/gen/1x8c8-minmax-fp32-avx2.c", "XNNPACK/wrappers/qc8-gemm/gen/3x8c8-minmax-fp32-avx2.c", "XNNPACK/wrappers/qc8-igemm/gen/1x8c8-minmax-fp32-avx2.c", "XNNPACK/wrappers/qc8-igemm/gen/3x8c8-minmax-fp32-avx2.c", "XNNPACK/wrappers/qs8-dwconv/gen/up16x9-minmax-fp32-avx2-mul32.c", "XNNPACK/wrappers/qs8-dwconv/gen/up16x25-minmax-fp32-avx2-mul32.c", "XNNPACK/wrappers/qs8-f32-vcvt/gen/vcvt-avx2-x16.c", "XNNPACK/wrappers/qs8-gemm/gen/1x8c8-minmax-fp32-avx2.c", "XNNPACK/wrappers/qs8-gemm/gen/3x8c8-minmax-fp32-avx2.c", "XNNPACK/wrappers/qs8-igemm/gen/1x8c8-minmax-fp32-avx2.c", "XNNPACK/wrappers/qs8-igemm/gen/3x8c8-minmax-fp32-avx2.c", "XNNPACK/wrappers/qs8-vadd/gen/minmax-avx2-mul32-ld64-x16.c", "XNNPACK/wrappers/qs8-vaddc/gen/minmax-avx2-mul32-ld64-x16.c", "XNNPACK/wrappers/qu8-dwconv/gen/up16x9-minmax-fp32-avx2-mul32.c", "XNNPACK/wrappers/qu8-dwconv/gen/up16x25-minmax-fp32-avx2-mul32.c", "XNNPACK/wrappers/qu8-f32-vcvt/gen/vcvt-avx2-x16.c", "XNNPACK/wrappers/qu8-gemm/gen/1x8c8-minmax-fp32-avx2.c", "XNNPACK/wrappers/qu8-gemm/gen/3x8c8-minmax-fp32-avx2.c", "XNNPACK/wrappers/qu8-igemm/gen/1x8c8-minmax-fp32-avx2.c", "XNNPACK/wrappers/qu8-igemm/gen/3x8c8-minmax-fp32-avx2.c", "XNNPACK/wrappers/qu8-vadd/gen/minmax-avx2-mul32-ld64-x16.c", "XNNPACK/wrappers/qu8-vaddc/gen/minmax-avx2-mul32-ld64-x16.c", 
"XNNPACK/wrappers/x8-lut/gen/lut-avx2-x128.c"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0"],
soname = "",
visibility = ["PUBLIC"],
)
cxx_library(
name = "ukernels_asm_aarch64",
srcs = ["XNNPACK/wrappers/f16-gemm/gen-inc/1x8inc-minmax-aarch64-neonfp16arith-ld64.S", "XNNPACK/wrappers/f16-gemm/gen-inc/1x16inc-minmax-aarch64-neonfp16arith-ld32.S", "XNNPACK/wrappers/f16-gemm/gen-inc/4x8inc-minmax-aarch64-neonfp16arith-ld64.S", "XNNPACK/wrappers/f16-gemm/gen-inc/4x16inc-minmax-aarch64-neonfp16arith-ld32.S", "XNNPACK/wrappers/f16-gemm/gen-inc/6x8inc-minmax-aarch64-neonfp16arith-ld64.S", "XNNPACK/wrappers/f16-gemm/gen-inc/6x16inc-minmax-aarch64-neonfp16arith-cortex-a55.S", "XNNPACK/wrappers/f16-gemm/gen-inc/6x16inc-minmax-aarch64-neonfp16arith-cortex-a75.S", "XNNPACK/wrappers/f16-gemm/gen-inc/6x16inc-minmax-aarch64-neonfp16arith-ld32.S", "XNNPACK/wrappers/f16-gemm/gen-inc/8x8inc-minmax-aarch64-neonfp16arith-ld64.S", "XNNPACK/wrappers/f16-gemm/gen/1x8-minmax-aarch64-neonfp16arith-ld64.S", "XNNPACK/wrappers/f16-gemm/gen/1x16-minmax-aarch64-neonfp16arith-ld32.S", "XNNPACK/wrappers/f16-gemm/gen/4x8-minmax-aarch64-neonfp16arith-ld64.S", "XNNPACK/wrappers/f16-gemm/gen/4x16-minmax-aarch64-neonfp16arith-ld32.S", "XNNPACK/wrappers/f16-gemm/gen/6x8-minmax-aarch64-neonfp16arith-ld64.S", "XNNPACK/wrappers/f16-gemm/gen/6x16-minmax-aarch64-neonfp16arith-cortex-a55.S", "XNNPACK/wrappers/f16-gemm/gen/6x16-minmax-aarch64-neonfp16arith-cortex-a75.S", "XNNPACK/wrappers/f16-gemm/gen/6x16-minmax-aarch64-neonfp16arith-ld32.S", "XNNPACK/wrappers/f16-gemm/gen/8x8-minmax-aarch64-neonfp16arith-ld64.S", "XNNPACK/wrappers/f16-igemm/4x16-minmax-aarch64-neonfp16arith-ld32.S", "XNNPACK/wrappers/f32-dwconv/up4x9-minmax-aarch64-neonfma-cortex-a55.S", "XNNPACK/wrappers/f32-dwconv/up4x9-minmax-aarch64-neonfma.S", "XNNPACK/wrappers/f32-gemm/gen-inc/1x8inc-minmax-aarch64-neonfma-cortex-a53.S", "XNNPACK/wrappers/f32-gemm/gen-inc/1x8inc-minmax-aarch64-neonfma-cortex-a75.S", "XNNPACK/wrappers/f32-gemm/gen-inc/1x8inc-minmax-aarch64-neonfma-ld64.S", "XNNPACK/wrappers/f32-gemm/gen-inc/1x8inc-minmax-aarch64-neonfma-prfm-cortex-a75.S", 
"XNNPACK/wrappers/f32-gemm/gen-inc/1x12inc-minmax-aarch64-neonfma-cortex-a53.S", "XNNPACK/wrappers/f32-gemm/gen-inc/4x8inc-minmax-aarch64-neonfma-cortex-a53.S", "XNNPACK/wrappers/f32-gemm/gen-inc/4x8inc-minmax-aarch64-neonfma-cortex-a55.S", "XNNPACK/wrappers/f32-gemm/gen-inc/4x8inc-minmax-aarch64-neonfma-cortex-a75.S", "XNNPACK/wrappers/f32-gemm/gen-inc/4x8inc-minmax-aarch64-neonfma-ld64.S", "XNNPACK/wrappers/f32-gemm/gen-inc/4x8inc-minmax-aarch64-neonfma-ld128.S", "XNNPACK/wrappers/f32-gemm/gen-inc/4x8inc-minmax-aarch64-neonfma-prfm-cortex-a75.S", "XNNPACK/wrappers/f32-gemm/gen-inc/4x12inc-minmax-aarch64-neonfma-cortex-a53.S", "XNNPACK/wrappers/f32-gemm/gen-inc/5x8inc-minmax-aarch64-neonfma-cortex-a75.S", "XNNPACK/wrappers/f32-gemm/gen-inc/5x8inc-minmax-aarch64-neonfma-prfm-cortex-a75.S", "XNNPACK/wrappers/f32-gemm/gen-inc/6x8inc-minmax-aarch64-neonfma-cortex-a53.S", "XNNPACK/wrappers/f32-gemm/gen-inc/6x8inc-minmax-aarch64-neonfma-cortex-a55.S", "XNNPACK/wrappers/f32-gemm/gen-inc/6x8inc-minmax-aarch64-neonfma-cortex-a73.S", "XNNPACK/wrappers/f32-gemm/gen-inc/6x8inc-minmax-aarch64-neonfma-cortex-a75.S", "XNNPACK/wrappers/f32-gemm/gen-inc/6x8inc-minmax-aarch64-neonfma-ld64.S", "XNNPACK/wrappers/f32-gemm/gen-inc/6x8inc-minmax-aarch64-neonfma-ld128.S", "XNNPACK/wrappers/f32-gemm/gen-inc/6x8inc-minmax-aarch64-neonfma-prfm-cortex-a75.S", "XNNPACK/wrappers/f32-gemm/gen/1x8-minmax-aarch64-neonfma-cortex-a53.S", "XNNPACK/wrappers/f32-gemm/gen/1x8-minmax-aarch64-neonfma-cortex-a75.S", "XNNPACK/wrappers/f32-gemm/gen/1x8-minmax-aarch64-neonfma-ld64.S", "XNNPACK/wrappers/f32-gemm/gen/1x8-minmax-aarch64-neonfma-prfm-cortex-a75.S", "XNNPACK/wrappers/f32-gemm/gen/1x12-minmax-aarch64-neonfma-cortex-a53.S", "XNNPACK/wrappers/f32-gemm/gen/4x8-minmax-aarch64-neonfma-cortex-a53.S", "XNNPACK/wrappers/f32-gemm/gen/4x8-minmax-aarch64-neonfma-cortex-a55.S", "XNNPACK/wrappers/f32-gemm/gen/4x8-minmax-aarch64-neonfma-cortex-a75.S", 
"XNNPACK/wrappers/f32-gemm/gen/4x8-minmax-aarch64-neonfma-ld64.S", "XNNPACK/wrappers/f32-gemm/gen/4x8-minmax-aarch64-neonfma-ld128.S", "XNNPACK/wrappers/f32-gemm/gen/4x8-minmax-aarch64-neonfma-prfm-cortex-a53.S", "XNNPACK/wrappers/f32-gemm/gen/4x8-minmax-aarch64-neonfma-prfm-cortex-a75.S", "XNNPACK/wrappers/f32-gemm/gen/4x12-minmax-aarch64-neonfma-cortex-a53.S", "XNNPACK/wrappers/f32-gemm/gen/5x8-minmax-aarch64-neonfma-cortex-a75.S", "XNNPACK/wrappers/f32-gemm/gen/5x8-minmax-aarch64-neonfma-prfm-cortex-a75.S", "XNNPACK/wrappers/f32-gemm/gen/6x8-minmax-aarch64-neonfma-cortex-a53.S", "XNNPACK/wrappers/f32-gemm/gen/6x8-minmax-aarch64-neonfma-cortex-a55.S", "XNNPACK/wrappers/f32-gemm/gen/6x8-minmax-aarch64-neonfma-cortex-a73.S", "XNNPACK/wrappers/f32-gemm/gen/6x8-minmax-aarch64-neonfma-cortex-a75.S", "XNNPACK/wrappers/f32-gemm/gen/6x8-minmax-aarch64-neonfma-ld64.S", "XNNPACK/wrappers/f32-gemm/gen/6x8-minmax-aarch64-neonfma-ld128.S", "XNNPACK/wrappers/f32-gemm/gen/6x8-minmax-aarch64-neonfma-prfm-cortex-a53.S", "XNNPACK/wrappers/f32-gemm/gen/6x8-minmax-aarch64-neonfma-prfm-cortex-a75.S", "XNNPACK/wrappers/f32-igemm/gen/1x8-minmax-aarch64-neonfma-cortex-a75.S", "XNNPACK/wrappers/f32-igemm/gen/1x8-minmax-aarch64-neonfma-prfm-cortex-a75.S", "XNNPACK/wrappers/f32-igemm/gen/4x8-minmax-aarch64-neonfma-cortex-a53.S", "XNNPACK/wrappers/f32-igemm/gen/4x8-minmax-aarch64-neonfma-cortex-a75.S", "XNNPACK/wrappers/f32-igemm/gen/4x8-minmax-aarch64-neonfma-ld64.S", "XNNPACK/wrappers/f32-igemm/gen/4x8-minmax-aarch64-neonfma-ld128.S", "XNNPACK/wrappers/f32-igemm/gen/4x8-minmax-aarch64-neonfma-prfm-cortex-a53.S", "XNNPACK/wrappers/f32-igemm/gen/4x8-minmax-aarch64-neonfma-prfm-cortex-a75.S", "XNNPACK/wrappers/f32-igemm/gen/5x8-minmax-aarch64-neonfma-cortex-a75.S", "XNNPACK/wrappers/f32-igemm/gen/5x8-minmax-aarch64-neonfma-prfm-cortex-a75.S", "XNNPACK/wrappers/f32-igemm/gen/6x8-minmax-aarch64-neonfma-cortex-a53.S", "XNNPACK/wrappers/f32-igemm/gen/6x8-minmax-aarch64-neonfma-cortex-a75.S", 
"XNNPACK/wrappers/f32-igemm/gen/6x8-minmax-aarch64-neonfma-ld64.S", "XNNPACK/wrappers/f32-igemm/gen/6x8-minmax-aarch64-neonfma-ld128.S", "XNNPACK/wrappers/f32-igemm/gen/6x8-minmax-aarch64-neonfma-prfm-cortex-a53.S", "XNNPACK/wrappers/f32-igemm/gen/6x8-minmax-aarch64-neonfma-prfm-cortex-a75.S", "XNNPACK/wrappers/f32-igemm/1x8-minmax-aarch64-neonfma-cortex-a53.S", "XNNPACK/wrappers/f32-igemm/1x12-minmax-aarch64-neonfma-cortex-a53.S", "XNNPACK/wrappers/f32-igemm/4x8-minmax-aarch64-neonfma-cortex-a55.S", "XNNPACK/wrappers/f32-igemm/4x12-minmax-aarch64-neonfma-cortex-a53.S", "XNNPACK/wrappers/f32-igemm/6x8-minmax-aarch64-neonfma-cortex-a55.S", "XNNPACK/wrappers/f32-igemm/6x8-minmax-aarch64-neonfma-cortex-a73.S", "XNNPACK/wrappers/qc8-gemm/gen/1x8c8-minmax-fp32-aarch64-neon-mlal-cortex-a53.S", "XNNPACK/wrappers/qc8-gemm/gen/1x8c8-minmax-fp32-aarch64-neon-mlal-prfm-cortex-a53.S", "XNNPACK/wrappers/qc8-gemm/gen/1x8c8-minmax-fp32-aarch64-neon-mlal-prfm.S", "XNNPACK/wrappers/qc8-gemm/gen/1x8c8-minmax-fp32-aarch64-neon-mlal.S", "XNNPACK/wrappers/qc8-gemm/gen/1x16c4-minmax-fp32-aarch64-neondot-ld32.S", "XNNPACK/wrappers/qc8-gemm/gen/1x16c4-minmax-fp32-aarch64-neondot-ld64.S", "XNNPACK/wrappers/qc8-gemm/gen/2x8c8-minmax-fp32-aarch64-neon-mlal-cortex-a53.S", "XNNPACK/wrappers/qc8-gemm/gen/2x8c8-minmax-fp32-aarch64-neon-mlal-prfm-cortex-a53.S", "XNNPACK/wrappers/qc8-gemm/gen/2x8c8-minmax-fp32-aarch64-neon-mlal-prfm.S", "XNNPACK/wrappers/qc8-gemm/gen/2x8c8-minmax-fp32-aarch64-neon-mlal.S", "XNNPACK/wrappers/qc8-gemm/gen/2x8c8-minmax-fp32-aarch64-neon-mull.S", "XNNPACK/wrappers/qc8-gemm/gen/2x8c16-minmax-fp32-aarch64-neon-mlal.S", "XNNPACK/wrappers/qc8-gemm/gen/4x16-minmax-fp32-aarch64-neon-mlal-lane-cortex-a53.S", "XNNPACK/wrappers/qc8-gemm/gen/4x16-minmax-fp32-aarch64-neon-mlal-lane-ld64.S", "XNNPACK/wrappers/qc8-gemm/gen/4x16-minmax-fp32-aarch64-neon-mlal-lane-prfm-cortex-a53.S", "XNNPACK/wrappers/qc8-gemm/gen/4x16-minmax-fp32-aarch64-neon-mlal-lane-prfm-ld64.S", 
"XNNPACK/wrappers/qc8-gemm/gen/4x16c4-minmax-fp32-aarch64-neondot-cortex-a55.S", "XNNPACK/wrappers/qc8-gemm/gen/4x16c4-minmax-fp32-aarch64-neondot-ld32.S", "XNNPACK/wrappers/qc8-gemm/gen/4x16c4-minmax-fp32-aarch64-neondot-ld64.S", "XNNPACK/wrappers/qc8-gemm/gen/4x16c4-minmax-fp32-aarch64-neondot-ld128.S", "XNNPACK/wrappers/qc8-igemm/gen/1x8c8-minmax-fp32-aarch64-neon-mlal-cortex-a53.S", "XNNPACK/wrappers/qc8-igemm/gen/1x8c8-minmax-fp32-aarch64-neon-mlal-prfm-cortex-a53.S", "XNNPACK/wrappers/qc8-igemm/gen/1x8c8-minmax-fp32-aarch64-neon-mlal-prfm.S", "XNNPACK/wrappers/qc8-igemm/gen/1x8c8-minmax-fp32-aarch64-neon-mlal.S", "XNNPACK/wrappers/qc8-igemm/gen/2x8c8-minmax-fp32-aarch64-neon-mlal-cortex-a53.S", "XNNPACK/wrappers/qc8-igemm/gen/2x8c8-minmax-fp32-aarch64-neon-mlal-prfm-cortex-a53.S", "XNNPACK/wrappers/qc8-igemm/gen/2x8c8-minmax-fp32-aarch64-neon-mlal-prfm.S", "XNNPACK/wrappers/qc8-igemm/gen/2x8c8-minmax-fp32-aarch64-neon-mlal.S", "XNNPACK/wrappers/qc8-igemm/gen/2x8c16-minmax-fp32-aarch64-neon-mlal.S", "XNNPACK/wrappers/qc8-igemm/gen/4x16-minmax-fp32-aarch64-neon-mlal-lane-cortex-a53.S", "XNNPACK/wrappers/qc8-igemm/gen/4x16-minmax-fp32-aarch64-neon-mlal-lane-ld64.S", "XNNPACK/wrappers/qc8-igemm/gen/4x16-minmax-fp32-aarch64-neon-mlal-lane-prfm-cortex-a53.S", "XNNPACK/wrappers/qc8-igemm/gen/4x16-minmax-fp32-aarch64-neon-mlal-lane-prfm-ld64.S", "XNNPACK/wrappers/qc8-igemm/gen/4x16c4-minmax-fp32-aarch64-neondot-cortex-a55.S", "XNNPACK/wrappers/qc8-igemm/gen/4x16c4-minmax-fp32-aarch64-neondot-ld64.S", "XNNPACK/wrappers/qc8-igemm/gen/4x16c4-minmax-fp32-aarch64-neondot-ld128.S", "XNNPACK/wrappers/qs8-gemm/gen/1x8c8-minmax-fp32-aarch64-neon-mlal-cortex-a53.S", "XNNPACK/wrappers/qs8-gemm/gen/1x8c8-minmax-fp32-aarch64-neon-mlal-prfm-cortex-a53.S", "XNNPACK/wrappers/qs8-gemm/gen/1x8c8-minmax-fp32-aarch64-neon-mlal-prfm.S", "XNNPACK/wrappers/qs8-gemm/gen/1x8c8-minmax-fp32-aarch64-neon-mlal.S", "XNNPACK/wrappers/qs8-gemm/gen/1x8c8-minmax-rndnu-aarch64-neon-mlal-cortex-a53.S", 
"XNNPACK/wrappers/qs8-gemm/gen/1x8c8-minmax-rndnu-aarch64-neon-mlal-prfm-cortex-a53.S", "XNNPACK/wrappers/qs8-gemm/gen/1x8c8-minmax-rndnu-aarch64-neon-mlal-prfm.S", "XNNPACK/wrappers/qs8-gemm/gen/1x8c8-minmax-rndnu-aarch64-neon-mlal.S", "XNNPACK/wrappers/qs8-gemm/gen/1x16c4-minmax-fp32-aarch64-neondot-ld32.S", "XNNPACK/wrappers/qs8-gemm/gen/1x16c4-minmax-fp32-aarch64-neondot-ld64.S", "XNNPACK/wrappers/qs8-gemm/gen/1x16c4-minmax-rndnu-aarch64-neondot-ld32.S", "XNNPACK/wrappers/qs8-gemm/gen/1x16c4-minmax-rndnu-aarch64-neondot-ld64.S", "XNNPACK/wrappers/qs8-gemm/gen/2x8c8-minmax-fp32-aarch64-neon-mlal-cortex-a53.S", "XNNPACK/wrappers/qs8-gemm/gen/2x8c8-minmax-fp32-aarch64-neon-mlal-prfm-cortex-a53.S", "XNNPACK/wrappers/qs8-gemm/gen/2x8c8-minmax-fp32-aarch64-neon-mlal-prfm.S", "XNNPACK/wrappers/qs8-gemm/gen/2x8c8-minmax-fp32-aarch64-neon-mlal.S", "XNNPACK/wrappers/qs8-gemm/gen/2x8c8-minmax-fp32-aarch64-neon-mull.S", "XNNPACK/wrappers/qs8-gemm/gen/2x8c8-minmax-rndnu-aarch64-neon-mlal-cortex-a53.S", "XNNPACK/wrappers/qs8-gemm/gen/2x8c8-minmax-rndnu-aarch64-neon-mlal-prfm-cortex-a53.S", "XNNPACK/wrappers/qs8-gemm/gen/2x8c8-minmax-rndnu-aarch64-neon-mlal-prfm.S", "XNNPACK/wrappers/qs8-gemm/gen/2x8c8-minmax-rndnu-aarch64-neon-mlal.S", "XNNPACK/wrappers/qs8-gemm/gen/2x8c8-minmax-rndnu-aarch64-neon-mull.S", "XNNPACK/wrappers/qs8-gemm/gen/2x8c16-minmax-fp32-aarch64-neon-mlal.S", "XNNPACK/wrappers/qs8-gemm/gen/2x8c16-minmax-rndnu-aarch64-neon-mlal.S", "XNNPACK/wrappers/qs8-gemm/gen/4x8-minmax-rndnu-aarch64-neon-mlal-lane-ld64.S", "XNNPACK/wrappers/qs8-gemm/gen/4x8-minmax-rndnu-aarch64-neon-mlal-lane-prfm-ld64.S", "XNNPACK/wrappers/qs8-gemm/gen/4x16-minmax-fp32-aarch64-neon-mlal-lane-cortex-a53.S", "XNNPACK/wrappers/qs8-gemm/gen/4x16-minmax-fp32-aarch64-neon-mlal-lane-ld64.S", "XNNPACK/wrappers/qs8-gemm/gen/4x16-minmax-fp32-aarch64-neon-mlal-lane-prfm-cortex-a53.S", "XNNPACK/wrappers/qs8-gemm/gen/4x16-minmax-fp32-aarch64-neon-mlal-lane-prfm-ld64.S", 
"XNNPACK/wrappers/qs8-gemm/gen/4x16-minmax-rndnu-aarch64-neon-mlal-lane-cortex-a53.S", "XNNPACK/wrappers/qs8-gemm/gen/4x16-minmax-rndnu-aarch64-neon-mlal-lane-ld64.S", "XNNPACK/wrappers/qs8-gemm/gen/4x16-minmax-rndnu-aarch64-neon-mlal-lane-prfm-cortex-a53.S", "XNNPACK/wrappers/qs8-gemm/gen/4x16-minmax-rndnu-aarch64-neon-mlal-lane-prfm-ld64.S", "XNNPACK/wrappers/qs8-gemm/gen/4x16c4-minmax-fp32-aarch64-neondot-cortex-a55.S", "XNNPACK/wrappers/qs8-gemm/gen/4x16c4-minmax-fp32-aarch64-neondot-ld32.S", "XNNPACK/wrappers/qs8-gemm/gen/4x16c4-minmax-fp32-aarch64-neondot-ld64.S", "XNNPACK/wrappers/qs8-gemm/gen/4x16c4-minmax-fp32-aarch64-neondot-ld128.S", "XNNPACK/wrappers/qs8-gemm/gen/4x16c4-minmax-rndnu-aarch64-neondot-cortex-a55.S", "XNNPACK/wrappers/qs8-gemm/gen/4x16c4-minmax-rndnu-aarch64-neondot-ld32.S", "XNNPACK/wrappers/qs8-gemm/gen/4x16c4-minmax-rndnu-aarch64-neondot-ld64.S", "XNNPACK/wrappers/qs8-gemm/gen/4x16c4-minmax-rndnu-aarch64-neondot-ld128.S", "XNNPACK/wrappers/qs8-igemm/gen/1x8c8-minmax-fp32-aarch64-neon-mlal-cortex-a53.S", "XNNPACK/wrappers/qs8-igemm/gen/1x8c8-minmax-fp32-aarch64-neon-mlal-prfm-cortex-a53.S", "XNNPACK/wrappers/qs8-igemm/gen/1x8c8-minmax-fp32-aarch64-neon-mlal-prfm.S", "XNNPACK/wrappers/qs8-igemm/gen/1x8c8-minmax-fp32-aarch64-neon-mlal.S", "XNNPACK/wrappers/qs8-igemm/gen/1x8c8-minmax-rndnu-aarch64-neon-mlal-cortex-a53.S", "XNNPACK/wrappers/qs8-igemm/gen/1x8c8-minmax-rndnu-aarch64-neon-mlal-prfm-cortex-a53.S", "XNNPACK/wrappers/qs8-igemm/gen/1x8c8-minmax-rndnu-aarch64-neon-mlal-prfm.S", "XNNPACK/wrappers/qs8-igemm/gen/1x8c8-minmax-rndnu-aarch64-neon-mlal.S", "XNNPACK/wrappers/qs8-igemm/gen/2x8c8-minmax-fp32-aarch64-neon-mlal-cortex-a53.S", "XNNPACK/wrappers/qs8-igemm/gen/2x8c8-minmax-fp32-aarch64-neon-mlal-prfm-cortex-a53.S", "XNNPACK/wrappers/qs8-igemm/gen/2x8c8-minmax-fp32-aarch64-neon-mlal-prfm.S", "XNNPACK/wrappers/qs8-igemm/gen/2x8c8-minmax-fp32-aarch64-neon-mlal.S", 
"XNNPACK/wrappers/qs8-igemm/gen/2x8c8-minmax-rndnu-aarch64-neon-mlal-cortex-a53.S", "XNNPACK/wrappers/qs8-igemm/gen/2x8c8-minmax-rndnu-aarch64-neon-mlal-prfm-cortex-a53.S", "XNNPACK/wrappers/qs8-igemm/gen/2x8c8-minmax-rndnu-aarch64-neon-mlal-prfm.S", "XNNPACK/wrappers/qs8-igemm/gen/2x8c8-minmax-rndnu-aarch64-neon-mlal.S", "XNNPACK/wrappers/qs8-igemm/gen/2x8c16-minmax-fp32-aarch64-neon-mlal.S", "XNNPACK/wrappers/qs8-igemm/gen/2x8c16-minmax-rndnu-aarch64-neon-mlal.S", "XNNPACK/wrappers/qs8-igemm/gen/4x8-minmax-rndnu-aarch64-neon-mlal-lane-ld64.S", "XNNPACK/wrappers/qs8-igemm/gen/4x8-minmax-rndnu-aarch64-neon-mlal-lane-prfm-ld64.S", "XNNPACK/wrappers/qs8-igemm/gen/4x16-minmax-fp32-aarch64-neon-mlal-lane-cortex-a53.S", "XNNPACK/wrappers/qs8-igemm/gen/4x16-minmax-fp32-aarch64-neon-mlal-lane-ld64.S", "XNNPACK/wrappers/qs8-igemm/gen/4x16-minmax-fp32-aarch64-neon-mlal-lane-prfm-cortex-a53.S", "XNNPACK/wrappers/qs8-igemm/gen/4x16-minmax-fp32-aarch64-neon-mlal-lane-prfm-ld64.S", "XNNPACK/wrappers/qs8-igemm/gen/4x16-minmax-rndnu-aarch64-neon-mlal-lane-cortex-a53.S", "XNNPACK/wrappers/qs8-igemm/gen/4x16-minmax-rndnu-aarch64-neon-mlal-lane-ld64.S", "XNNPACK/wrappers/qs8-igemm/gen/4x16-minmax-rndnu-aarch64-neon-mlal-lane-prfm-cortex-a53.S", "XNNPACK/wrappers/qs8-igemm/gen/4x16-minmax-rndnu-aarch64-neon-mlal-lane-prfm-ld64.S", "XNNPACK/wrappers/qs8-igemm/gen/4x16c4-minmax-fp32-aarch64-neondot-cortex-a55.S", "XNNPACK/wrappers/qs8-igemm/gen/4x16c4-minmax-fp32-aarch64-neondot-ld64.S", "XNNPACK/wrappers/qs8-igemm/gen/4x16c4-minmax-fp32-aarch64-neondot-ld128.S", "XNNPACK/wrappers/qs8-igemm/gen/4x16c4-minmax-rndnu-aarch64-neondot-cortex-a55.S", "XNNPACK/wrappers/qs8-igemm/gen/4x16c4-minmax-rndnu-aarch64-neondot-ld64.S", "XNNPACK/wrappers/qs8-igemm/gen/4x16c4-minmax-rndnu-aarch64-neondot-ld128.S", "XNNPACK/wrappers/qu8-gemm/gen/4x8c4-minmax-rndnu-aarch64-neondot-cortex-a55.S", "XNNPACK/wrappers/qu8-gemm/gen/4x8c4-minmax-rndnu-aarch64-neondot-ld128.S", 
"XNNPACK/wrappers/qu8-gemm/gen/4x16-minmax-rndnu-aarch64-neon-mlal-lane-cortex-a53.S", "XNNPACK/wrappers/qu8-gemm/gen/4x16-minmax-rndnu-aarch64-neon-mlal-lane-cortex-a75.S", "XNNPACK/wrappers/qu8-gemm/gen/4x16-minmax-rndnu-aarch64-neon-mlal-lane-ld64.S", "XNNPACK/wrappers/qu8-gemm/gen/4x16-minmax-rndnu-aarch64-neon-mlal-lane-prfm-cortex-a53.S", "XNNPACK/wrappers/qu8-gemm/gen/4x16-minmax-rndnu-aarch64-neon-mlal-lane-prfm-cortex-a75.S", "XNNPACK/wrappers/qu8-gemm/gen/4x16-minmax-rndnu-aarch64-neon-mlal-lane-prfm-ld64.S", "XNNPACK/wrappers/qu8-gemm/gen/4x16c4-minmax-fp32-aarch64-neondot-cortex-a55.S", "XNNPACK/wrappers/qu8-gemm/gen/4x16c4-minmax-fp32-aarch64-neondot-ld128.S", "XNNPACK/wrappers/qu8-gemm/gen/4x16c4-minmax-rndnu-aarch64-neondot-cortex-a55.S", "XNNPACK/wrappers/qu8-gemm/gen/4x16c4-minmax-rndnu-aarch64-neondot-ld128.S", "XNNPACK/wrappers/qu8-igemm/gen/4x8c4-minmax-rndnu-aarch64-neondot-cortex-a55.S", "XNNPACK/wrappers/qu8-igemm/gen/4x8c4-minmax-rndnu-aarch64-neondot-ld128.S", "XNNPACK/wrappers/qu8-igemm/gen/4x16-minmax-rndnu-aarch64-neon-mlal-lane-cortex-a53.S", "XNNPACK/wrappers/qu8-igemm/gen/4x16-minmax-rndnu-aarch64-neon-mlal-lane-cortex-a75.S", "XNNPACK/wrappers/qu8-igemm/gen/4x16-minmax-rndnu-aarch64-neon-mlal-lane-ld64.S", "XNNPACK/wrappers/qu8-igemm/gen/4x16-minmax-rndnu-aarch64-neon-mlal-lane-prfm-cortex-a53.S", "XNNPACK/wrappers/qu8-igemm/gen/4x16-minmax-rndnu-aarch64-neon-mlal-lane-prfm-cortex-a75.S", "XNNPACK/wrappers/qu8-igemm/gen/4x16-minmax-rndnu-aarch64-neon-mlal-lane-prfm-ld64.S", "XNNPACK/wrappers/qu8-igemm/gen/4x16c4-minmax-fp32-aarch64-neondot-cortex-a55.S", "XNNPACK/wrappers/qu8-igemm/gen/4x16c4-minmax-fp32-aarch64-neondot-ld128.S", "XNNPACK/wrappers/qu8-igemm/gen/4x16c4-minmax-rndnu-aarch64-neondot-cortex-a55.S", "XNNPACK/wrappers/qu8-igemm/gen/4x16c4-minmax-rndnu-aarch64-neondot-ld128.S"],
deps = [":interface", ":jit_memory", "//third_party:FP16"],
exported_deps = [],
compiler_flags = ["-w", "-O2"],
preferred_linkage = "static",
exported_preprocessor_flags = [],
header_namespace = "",
headers = subdir_glob([("XNNPACK/src", "**/*.S"), ("XNNPACK/src", "**/*.c"), ("XNNPACK/src", "**/*.h"), ("XNNPACK/include", "**/*.h")]),
linker_flags = [],
platform_compiler_flags = [["(aarch64|arm64)", ["-march=armv8.2-a+fp16+dotprod"]]],
platform_linker_flags = [],
platform_preprocessor_flags = [["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32", "-DNOMINMAX", "-D_CRT_SECURE_NO_WARNINGS", "-D_USE_MATH_DEFINES"]], ["windows.*64$", ["-D_WIN64"]]],
preprocessor_flags = ["-DXNN_LOG_LEVEL=0"],
soname = "",
visibility = ["PUBLIC"],
)
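The `platform_*` attributes above pair a platform regex with a list of flags. A minimal sketch, outside of Buck and purely illustrative, of how such regexes could select flags for a platform string (assuming unanchored matching, which the trailing `$` anchor in the second pattern suggests):

```python
import re

# The same pattern/flags pairs as in the target above (abbreviated).
platform_preprocessor_flags = [
    ["windows", ["-D_WINDOWS", "-D_WIN32", "-DWIN32"]],
    ["windows.*64$", ["-D_WIN64"]],
]

def flags_for(platform):
    """Collect the flags of every entry whose regex matches the platform string."""
    out = []
    for pattern, flags in platform_preprocessor_flags:
        if re.search(pattern, platform):  # unanchored search
            out.extend(flags)
    return out

print(flags_for("windows-x86_64"))
```

For `"windows-x86_64"` both patterns match, so the generic Windows defines and `-D_WIN64` are combined.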
| 169.621806 | 17,584 | 0.717379 | 14,668 | 99,568 | 4.800995 | 0.031361 | 0.226211 | 0.171994 | 0.096051 | 0.955184 | 0.939478 | 0.904233 | 0.860326 | 0.746766 | 0.602704 | 0 | 0.083361 | 0.075315 | 99,568 | 586 | 17,585 | 169.911263 | 0.681511 | 0 | 0 | 0.776978 | 0 | 0.982014 | 0.764633 | 0.686255 | 0 | 0 | 0 | 0 | 0 | 1 | 0.001799 | true | 0 | 0 | 0 | 0.001799 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 14 |
54ae5eba0b5005bfd8211cb3aa35392b90793b41 | 1,638 | py | Python | tests/gfx_data.py | rofferom/seiken_densetsu_3 | 8d5ba61da90df277b8e19cd86ccf475ed5d78da6 | [
"MIT"
] | 5 | 2017-12-06T21:39:58.000Z | 2019-08-09T13:06:03.000Z | tests/gfx_data.py | rofferom/seiken_densetsu_3 | 8d5ba61da90df277b8e19cd86ccf475ed5d78da6 | [
"MIT"
] | 1 | 2018-02-24T13:22:55.000Z | 2018-02-24T13:22:55.000Z | tests/gfx_data.py | rofferom/seiken_densetsu_3 | 8d5ba61da90df277b8e19cd86ccf475ed5d78da6 | [
"MIT"
] | null | null | null | from collections import namedtuple
CharDump = namedtuple("CharDump", ["idx", "decoded"])
char_list = [
CharDump(
idx=0x47,
decoded=[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
),
CharDump(
idx=0x0103,
decoded=[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0,
0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
)
]
decode_dump = {4081739: bytearray(b'\x02\x01\x7f\xf8H"~\xf9\\pj\xaaO\xe0O\xe0I O\xe0D\x88\x98\xf8\x00'), 4077039: bytearray(b'\x00\x00\x00\x01\x00\x02\x00\x00\x00\x03\x00\x02\x07`0`0c0`\x18\xe0\x07c\x00')}
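The `decoded` lists above are flat bit arrays. A minimal sketch, assuming each list is a square, row-major bitmap (the function name and rendering characters below are illustrative, not from the repository), of turning one into ASCII art:

```python
import math

def render_bitmap(bits):
    """Render a flat, row-major square bitmap as lines of ASCII art."""
    side = math.isqrt(len(bits))
    if side * side != len(bits):
        raise ValueError("bitmap is not square")
    return ["".join("#" if b else "." for b in bits[row * side:(row + 1) * side])
            for row in range(side)]

# Tiny illustrative bitmap; char_list[0].decoded would be rendered the
# same way, provided its length is a perfect square.
print("\n".join(render_bitmap([1, 0, 0, 1])))
```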
| 81.9 | 499 | 0.404151 | 453 | 1,638 | 1.456954 | 0.0883 | 0.639394 | 0.804545 | 0.915152 | 0.615152 | 0.615152 | 0.615152 | 0.615152 | 0.607576 | 0.601515 | 0 | 0.411817 | 0.307692 | 1,638 | 19 | 500 | 86.210526 | 0.170194 | 0 | 0 | 0.133333 | 0 | 0.133333 | 0.09707 | 0.08547 | 0 | 0 | 0.006105 | 0 | 0 | 1 | 0 | false | 0 | 0.066667 | 0 | 0.066667 | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
49bae485b695b23c7bf5f9c48520ad204c96e681 | 2,162 | py | Python | disprcnn/data/datasets/evaluation/kitti/__init__.py | reinforcementdriving/disprcnn | 3597cfaf1c0cf9cae6f1784369588d95367e1448 | [
"Apache-2.0"
] | 191 | 2020-03-27T07:29:21.000Z | 2022-02-22T13:36:47.000Z | disprcnn/data/datasets/evaluation/kitti/__init__.py | reinforcementdriving/disprcnn | 3597cfaf1c0cf9cae6f1784369588d95367e1448 | [
"Apache-2.0"
] | 39 | 2020-04-16T09:40:41.000Z | 2022-03-22T08:19:57.000Z | disprcnn/data/datasets/evaluation/kitti/__init__.py | reinforcementdriving/disprcnn | 3597cfaf1c0cf9cae6f1784369588d95367e1448 | [
"Apache-2.0"
] | 35 | 2020-04-08T09:47:32.000Z | 2022-01-18T08:19:46.000Z | from .kitti_eval import do_kitti_evaluation, do_kitti_pedestrian_evaluation, do_kitti_cyclist_evaluation
def kitti_evaluation(
dataset,
left_predictions,
right_predictions,
output_folder,
class2type,
box_only,
iou_types,
expected_results,
expected_results_sigma_tol,
eval_bbox3d,
):
return do_kitti_evaluation(
dataset=dataset,
left_predictions=left_predictions,
right_predictions=right_predictions,
box_only=box_only,
output_folder=output_folder,
class2type=class2type,
iou_types=iou_types,
expected_results=expected_results,
expected_results_sigma_tol=expected_results_sigma_tol,
eval_bbox3d=eval_bbox3d,
)
def kitti_pedestrian_evaluation(
dataset,
left_predictions,
right_predictions,
output_folder,
class2type,
box_only,
iou_types,
expected_results,
expected_results_sigma_tol,
eval_bbox3d,
):
return do_kitti_pedestrian_evaluation(
dataset=dataset,
left_predictions=left_predictions,
right_predictions=right_predictions,
box_only=box_only,
output_folder=output_folder,
class2type=class2type,
iou_types=iou_types,
expected_results=expected_results,
expected_results_sigma_tol=expected_results_sigma_tol,
eval_bbox3d=eval_bbox3d,
)
def kitti_cyclist_evaluation(
dataset,
left_predictions,
right_predictions,
output_folder,
class2type,
box_only,
iou_types,
expected_results,
expected_results_sigma_tol,
eval_bbox3d,
):
return do_kitti_cyclist_evaluation(
dataset=dataset,
left_predictions=left_predictions,
right_predictions=right_predictions,
box_only=box_only,
output_folder=output_folder,
class2type=class2type,
iou_types=iou_types,
expected_results=expected_results,
expected_results_sigma_tol=expected_results_sigma_tol,
eval_bbox3d=eval_bbox3d,
) | 28.077922 | 104 | 0.676688 | 220 | 2,162 | 6.140909 | 0.118182 | 0.199852 | 0.179867 | 0.199852 | 0.904515 | 0.904515 | 0.904515 | 0.904515 | 0.904515 | 0.904515 | 0 | 0.011458 | 0.273358 | 2,162 | 77 | 105 | 28.077922 | 0.848504 | 0 | 0 | 0.863014 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041096 | false | 0 | 0.013699 | 0.041096 | 0.09589 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
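The three `kitti_*_evaluation` wrappers above are identical except for the delegate they call. A hedged sketch (the factory and demo names here are illustrative, not from the repository; note it also changes the wrappers from positional to keyword-only signatures) of collapsing them with a factory:

```python
def make_evaluation_wrapper(do_eval):
    """Return a thin wrapper that forwards every keyword argument to do_eval."""
    def wrapper(**kwargs):
        return do_eval(**kwargs)
    # e.g. "do_kitti_evaluation" -> "kitti_evaluation"
    wrapper.__name__ = do_eval.__name__.replace("do_", "", 1)
    return wrapper

# Illustrative delegate standing in for do_kitti_evaluation and friends,
# which live in .kitti_eval.
def do_demo_evaluation(**kwargs):
    return ("demo", kwargs)

demo_evaluation = make_evaluation_wrapper(do_demo_evaluation)
print(demo_evaluation(dataset="kitti", box_only=False))
```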
49d722c7195a8052c20789c9a13960c38909a5c4 | 109 | py | Python | modelscript/scripts/permissions/all.py | ScribesZone/ModelScribes | a36be1047283f2e470dc2dd4353f2a714377bb7d | [
"MIT"
] | 1 | 2019-02-22T14:27:06.000Z | 2019-02-22T14:27:06.000Z | modelscript/scripts/permissions/all.py | ScribesZone/ModelScribes | a36be1047283f2e470dc2dd4353f2a714377bb7d | [
"MIT"
] | 4 | 2019-02-12T07:49:15.000Z | 2019-02-12T07:50:12.000Z | modelscript/scripts/permissions/all.py | ScribesZone/ModelScribes | a36be1047283f2e470dc2dd4353f2a714377bb7d | [
"MIT"
] | null | null | null | # coding=utf-8
import modelscript.scripts.permissions.parser
import modelscript.scripts.permissions.printer
| 21.8 | 46 | 0.853211 | 13 | 109 | 7.153846 | 0.692308 | 0.365591 | 0.516129 | 0.752688 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009804 | 0.06422 | 109 | 4 | 47 | 27.25 | 0.901961 | 0.110092 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 8 |
3fa6717441a414c32d80f1902de7103a037d1876 | 243 | py | Python | abupy/SlippageBu/ABuSlippage.py | luqin/firefly | 2e5ab17f2d20deb3c68c927f6208ea89db7c639d | [
"MIT"
] | 1 | 2019-05-28T05:54:42.000Z | 2019-05-28T05:54:42.000Z | abupy/SlippageBu/ABuSlippage.py | momantang/cobrass | f11435d4836aa29078a3cd4beb4ca88967300c84 | [
"Apache-2.0"
] | 9 | 2020-03-24T16:45:25.000Z | 2022-03-11T23:40:51.000Z | abupy/SlippageBu/ABuSlippage.py | luqin/firefly | 2e5ab17f2d20deb3c68c927f6208ea89db7c639d | [
"MIT"
] | 1 | 2021-09-08T17:39:58.000Z | 2021-09-08T17:39:58.000Z | # -*- encoding:utf-8 -*-
from __future__ import absolute_import
# noinspection all
from . import ABuSlippageBuyBase as sbb
# noinspection all
from . import ABuSlippageSellBase as ssb
# noinspection all
from . import ABuSlippageBuyMean as sbm
| 24.3 | 40 | 0.790123 | 29 | 243 | 6.448276 | 0.551724 | 0.240642 | 0.304813 | 0.40107 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004831 | 0.148148 | 243 | 9 | 41 | 27 | 0.898551 | 0.300412 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
3fdef1f25f3ca49155c8a720d4d58b13959c6c9f | 17,127 | py | Python | backend/webapp/api/discounts_views.py | MarcoLagalla/marette_backend | d01458028da151d6ee583a080b1146792e81f01b | [
"MIT"
] | 1 | 2020-06-26T14:34:03.000Z | 2020-06-26T14:34:03.000Z | backend/webapp/api/discounts_views.py | MarcoLagalla/marette_backend | d01458028da151d6ee583a080b1146792e81f01b | [
"MIT"
] | 26 | 2020-06-12T14:36:59.000Z | 2020-07-10T08:39:53.000Z | backend/webapp/api/discounts_views.py | MarcoLagalla/marette_backend | d01458028da151d6ee583a080b1146792e81f01b | [
"MIT"
] | null | null | null | import json
from django.db import transaction
from rest_framework import status
from rest_framework.authentication import SessionAuthentication, TokenAuthentication
from rest_framework.authtoken.models import Token
from rest_framework.permissions import IsAuthenticated, AllowAny
from rest_framework.response import Response
from rest_framework.views import APIView
from backend.account.permissions import IsBusiness, BusinessActivated
from .products_serializers import ProductDiscountSerializer
from .serializers import RestaurantDiscountSerializer
from ..models.models import Restaurant, Product, ProductDiscount, RestaurantDiscount
class ListDiscounts(APIView):
def get(self, request, id):
restaurant_id = id
try:
restaurant = Restaurant.objects.all().get(id=restaurant_id)
except Restaurant.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
discounts = ProductDiscount.objects.filter(restaurant=restaurant)
serializer = ProductDiscountSerializer(discounts, many=True)
return Response(serializer.data, status=status.HTTP_200_OK)
class DetailsDiscounts(APIView):
permission_classes = [AllowAny]
def get(self, request, id, d_id):
try:
discount = ProductDiscount.objects.all().filter(restaurant_id=id).get(id=d_id)
except ProductDiscount.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
serializer = ProductDiscountSerializer(discount)
return Response(serializer.data, status=status.HTTP_200_OK)
class AddDiscounts(APIView):
authentication_classes = [SessionAuthentication, TokenAuthentication]
permission_classes = [IsAuthenticated, IsBusiness, BusinessActivated]
@transaction.atomic()
def post(self, request, id):
        # verify that the requesting user is the owner of the restaurant
try:
restaurant = Restaurant.objects.all().get(id=id)
except Restaurant.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
try:
token = Token.objects.all().get(user=restaurant.owner.user).key
except Token.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
if token == request.user.auth_token.key:
            # the user is logged in with the correct token
            # look up the restaurant with the requested id and verify ownership
try:
restaurant = Restaurant.objects.all().get(id=id)
                # check that the user owns the restaurant
if restaurant.owner.user == request.user:
                    # the owner may add a discount
serializer = ProductDiscountSerializer(data=request.data)
if serializer.is_valid():
discount = serializer.save(restaurant)
ret_data = ProductDiscountSerializer(instance=discount)
return Response(ret_data.data, status=status.HTTP_201_CREATED)
else:
return Response(serializer.errors,
status=status.HTTP_400_BAD_REQUEST)
else:
return Response(status=status.HTTP_401_UNAUTHORIZED)
except Restaurant.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
else:
return Response(status=status.HTTP_401_UNAUTHORIZED)
class EditDiscounts(APIView):
authentication_classes = [SessionAuthentication, TokenAuthentication]
permission_classes = [IsAuthenticated, IsBusiness, BusinessActivated]
@transaction.atomic()
def post(self, request, id, d_id):
        # verify that the requesting user is the owner of the restaurant
try:
restaurant = Restaurant.objects.all().get(id=id)
except Restaurant.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
try:
token = Token.objects.all().get(user=restaurant.owner.user).key
except Token.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
if token == request.user.auth_token.key:
            # the user is logged in with the correct token
            # look up the restaurant with the requested id and verify ownership
try:
restaurant = Restaurant.objects.all().get(id=id)
                # check that the user owns the restaurant
if restaurant.owner.user == request.user:
                    # check that the product discount belongs to this restaurant
try:
discount = ProductDiscount.objects.all() \
.filter(restaurant=restaurant).get(id=d_id)
if discount:
serializer = ProductDiscountSerializer(data=request.data)
if serializer.is_valid():
for key in serializer.validated_data:
setattr(discount, key, serializer.validated_data[key])
discount.restaurant = restaurant
discount.save()
return Response(status=status.HTTP_200_OK)
else:
return Response(serializer.errors,
status=status.HTTP_400_BAD_REQUEST)
except ProductDiscount.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
else:
return Response(status=status.HTTP_401_UNAUTHORIZED)
except Restaurant.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
else:
return Response(status=status.HTTP_401_UNAUTHORIZED)
class DeleteDiscounts(APIView):
authentication_classes = [SessionAuthentication, TokenAuthentication]
permission_classes = [IsAuthenticated, IsBusiness, BusinessActivated]
@transaction.atomic()
def post(self, request, id, d_id):
# Check that the requesting user owns this restaurant.
try:
restaurant = Restaurant.objects.all().get(id=id)
except Restaurant.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
try:
token = Token.objects.all().get(user=restaurant.owner.user).key
except Token.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
if token == request.user.auth_token.key:
# The user is authenticated with the owner's token.
# Fetch the restaurant with the requested id and verify ownership.
try:
restaurant = Restaurant.objects.all().get(id=id)
# Verify that the user is the restaurant owner.
if restaurant.owner.user == request.user:
# Check that the product discount belongs to this restaurant.
try:
discount = ProductDiscount.objects.all() \
.filter(restaurant=restaurant).get(id=d_id)
if discount:
discount.delete()
return Response(status=status.HTTP_200_OK)
except ProductDiscount.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
else:
return Response(status=status.HTTP_401_UNAUTHORIZED)
except Restaurant.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
else:
return Response(status=status.HTTP_401_UNAUTHORIZED)
class AddRestaurantDiscounts(APIView):
authentication_classes = [SessionAuthentication, TokenAuthentication]
permission_classes = [IsAuthenticated, IsBusiness, BusinessActivated]
@transaction.atomic()
def post(self, request, id):
# Check that the requesting user owns this restaurant.
try:
restaurant = Restaurant.objects.all().get(id=id)
except Restaurant.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
try:
token = Token.objects.all().get(user=restaurant.owner.user).key
except Token.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
if token == request.user.auth_token.key:
# The user is authenticated with the owner's token.
# Fetch the restaurant with the requested id and verify ownership.
try:
restaurant = Restaurant.objects.all().get(id=id)
# Verify that the user is the restaurant owner.
if restaurant.owner.user == request.user:
# Create the discount.
serializer = RestaurantDiscountSerializer(data=request.data)
if serializer.is_valid():
serializer.save(restaurant)
return Response(status=status.HTTP_201_CREATED)
else:
return Response(serializer.errors,
status=status.HTTP_400_BAD_REQUEST)
else:
return Response(status=status.HTTP_401_UNAUTHORIZED)
except Restaurant.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
else:
return Response(status=status.HTTP_401_UNAUTHORIZED)
class ListRestaurantDiscounts(APIView):
def get(self, request, id):
try:
restaurant = Restaurant.objects.all().get(id=id)
except Restaurant.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
try:
discounts = RestaurantDiscount.objects.all().filter(restaurant=restaurant)
except RestaurantDiscount.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
serializer = RestaurantDiscountSerializer(discounts, many=True)
return Response(serializer.data, status=status.HTTP_200_OK)
class DetailsRestaurantDiscounts(APIView):
def get(self, request, id, d_id):
try:
restaurant = Restaurant.objects.all().get(id=id)
except Restaurant.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
try:
discounts = RestaurantDiscount.objects.all().filter(restaurant=restaurant).get(id=d_id)
except RestaurantDiscount.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
serializer = RestaurantDiscountSerializer(discounts)
return Response(serializer.data, status=status.HTTP_200_OK)
class EditRestaurantDiscounts(APIView):
authentication_classes = [SessionAuthentication, TokenAuthentication]
permission_classes = [IsAuthenticated, IsBusiness, BusinessActivated]
@transaction.atomic()
def post(self, request, id, d_id):
# Check that the requesting user owns this restaurant.
try:
restaurant = Restaurant.objects.all().get(id=id)
except Restaurant.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
try:
token = Token.objects.all().get(user=restaurant.owner.user).key
except Token.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
if token == request.user.auth_token.key:
# The user is authenticated with the owner's token.
# Fetch the restaurant with the requested id and verify ownership.
try:
restaurant = Restaurant.objects.all().get(id=id)
# Verify that the user is the restaurant owner.
if restaurant.owner.user == request.user:
# Check that the restaurant discount belongs to this restaurant.
try:
discount = RestaurantDiscount.objects.all() \
.filter(restaurant=restaurant).get(id=d_id)
if discount:
serializer = RestaurantDiscountSerializer(data=request.data)
if serializer.is_valid():
for key in serializer.validated_data:
setattr(discount, key, serializer.validated_data[key])
discount.restaurant = restaurant
discount.save()
return Response(status=status.HTTP_200_OK)
else:
return Response(serializer.errors,
status=status.HTTP_400_BAD_REQUEST)
except RestaurantDiscount.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
else:
return Response(status=status.HTTP_401_UNAUTHORIZED)
except Restaurant.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
else:
return Response(status=status.HTTP_401_UNAUTHORIZED)
class DeleteRestaurantDiscounts(APIView):
authentication_classes = [SessionAuthentication, TokenAuthentication]
permission_classes = [IsAuthenticated, IsBusiness, BusinessActivated]
@transaction.atomic()
def post(self, request, id, d_id):
# Check that the requesting user owns this restaurant.
try:
restaurant = Restaurant.objects.all().get(id=id)
except Restaurant.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
try:
token = Token.objects.all().get(user=restaurant.owner.user).key
except Token.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
if token == request.user.auth_token.key:
# The user is authenticated with the owner's token.
# Fetch the restaurant with the requested id and verify ownership.
try:
restaurant = Restaurant.objects.all().get(id=id)
# Verify that the user is the restaurant owner.
if restaurant.owner.user == request.user:
# Check that the discount belongs to this restaurant.
try:
discount = RestaurantDiscount.objects.all() \
.filter(restaurant=restaurant).get(id=d_id)
if discount:
discount.delete()
return Response(status=status.HTTP_200_OK)
except RestaurantDiscount.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
else:
return Response(status=status.HTTP_401_UNAUTHORIZED)
except Restaurant.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
else:
return Response(status=status.HTTP_401_UNAUTHORIZED)
class SetDiscounts(APIView):
authentication_classes = [SessionAuthentication, TokenAuthentication]
permission_classes = [IsAuthenticated, IsBusiness, BusinessActivated]
@transaction.atomic()
def post(self, request, id, p_id):
try:
restaurant = Restaurant.objects.all().get(id=id)
except Restaurant.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
try:
token = Token.objects.all().get(user=restaurant.owner.user).key
except Token.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
try:
product = Product.objects.all().filter(restaurant=restaurant).get(id=p_id)
except Product.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
if token == request.user.auth_token.key and request.user == restaurant.owner.user:
try:
data = request.data
except json.JSONDecodeError as err:
return Response(err, status=status.HTTP_400_BAD_REQUEST)
# Validate every requested discount id before touching the relation;
# the loop variable is renamed so it no longer shadows the id parameter.
for discount_id in data['discounts']:
try:
ProductDiscount.objects.get(id=discount_id)
except ProductDiscount.DoesNotExist:
return Response(status=status.HTTP_400_BAD_REQUEST)
product.discounts.clear()
for discount_id in data['discounts']:
try:
discount = ProductDiscount.objects.all().filter(restaurant=restaurant).get(id=discount_id)
product.discounts.add(discount)
except ProductDiscount.DoesNotExist:
return Response({'error': 'Discount does not exist'}, status=status.HTTP_404_NOT_FOUND)
product.save()
data = {'final_price': product.get_price_with_discount(),
'message': 'Discount added successfully'}
return Response(data, status=status.HTTP_201_CREATED)
else:
return Response({'error': 'Unauthorized access'}, status=status.HTTP_401_UNAUTHORIZED)
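# The id-validation step in SetDiscounts above (reject the request if any
# requested discount id is unknown, before clearing and re-adding the
# many-to-many relation) can be sketched as a pure function. Illustration
# only; resolve_discount_ids is not part of this codebase:
def resolve_discount_ids(requested_ids, existing_ids):
    """Return the ids to assign, or None if any id is unknown."""
    existing = set(existing_ids)
    if any(d_id not in existing for d_id in requested_ids):
        # Reject the whole request before anything has been mutated.
        return None
    return list(requested_ids)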
# =========================================================================
# File: control/paths/__init__.py (repo: BrancoLab/LocomotionControl, MIT)
# =========================================================================
from control.paths.bspline import BSpline
from control.paths.dubins_path import DubinPath
from control.paths.waypoints import Waypoints, Waypoint
# =========================================================================
# File: simplepbi/dashboards/__init__.py (repo: ladataweb/SimplePBI, MIT)
# =========================================================================
'''
/¯¯¯¯¯¯¯¯¯\
/ \
| | __ | *********************************************
| | | \ | Code written by Ignacio and Martin.
| | | | |
| |__|_ | | La Data Web
| |__/ | *********************************************
\ /
\__________/
'''
import json
import requests
from simplepbi import utils
import pandas as pd
class Dashboards():
"""Simple library to use the Power BI api and obtain dashboards from it.
"""
def __init__(self, token):
"""Create a simplePBI object to request admin API
Args:
token: String
Bearer Token to use the Power Bi Rest API
"""
self.token = token
def get_dashboard(self, dashboard_id):
"""Returns the specified dashboard from My workspace.
### Parameters
----
dashboard_id: str uuid
The Power Bi dashboard id. You can take it from PBI Service URL
### Returns
----
Dict:
A dictionary containing a dashboard in My workspace.
"""
try:
url = "https://api.powerbi.com/v1.0/myorg/dashboards/{}".format(dashboard_id)
res = requests.get(url, headers={'Content-Type': 'application/json', "Authorization": "Bearer {}".format(self.token)})
res.raise_for_status()
return res.json()
except requests.exceptions.HTTPError as ex:
print("HTTP Error: ", ex, "\nText: ", ex.response.text)
except requests.exceptions.RequestException as e:
print("Request exception: ", e)
def get_dashboard_in_group(self, workspace_id, dashboard_id):
"""Returns the specified dashboard from the specified workspace.
### Parameters
----
workspace_id: str uuid
The Power Bi workspace id. You can take it from PBI Service URL
dashboard_id: str uuid
The Power Bi dashboard id. You can take it from PBI Service URL
### Returns
----
Dict:
A dictionary containing a dashboard in the workspace.
"""
try:
url = "https://api.powerbi.com/v1.0/myorg/groups/{}/dashboards/{}".format(workspace_id, dashboard_id)
res = requests.get(url, headers={'Content-Type': 'application/json', "Authorization": "Bearer {}".format(self.token)})
res.raise_for_status()
return res.json()
except requests.exceptions.HTTPError as ex:
print("HTTP Error: ", ex, "\nText: ", ex.response.text)
except requests.exceptions.RequestException as e:
print("Request exception: ", e)
def get_dashboards(self):
"""Returns a list of dashboards from My workspace.
### Parameters
----
None
### Returns
----
Dict:
A dictionary containing all the dashboards in My workspace.
"""
try:
url = "https://api.powerbi.com/v1.0/myorg/dashboards"
res = requests.get(url, headers={'Content-Type': 'application/json', "Authorization": "Bearer {}".format(self.token)})
res.raise_for_status()
return res.json()
except requests.exceptions.HTTPError as ex:
print("HTTP Error: ", ex, "\nText: ", ex.response.text)
except requests.exceptions.RequestException as e:
print("Request exception: ", e)
def get_dashboards_in_group(self, workspace_id):
"""Returns a list of dashboards from the specified workspace.
### Parameters
----
workspace_id: str uuid
The Power Bi workspace id. You can take it from PBI Service URL
### Returns
----
Dict:
A dictionary containing all the dashboards in the workspace.
"""
try:
url = "https://api.powerbi.com/v1.0/myorg/groups/{}/dashboards".format(workspace_id)
res = requests.get(url, headers={'Content-Type': 'application/json', "Authorization": "Bearer {}".format(self.token)})
res.raise_for_status()
return res.json()
except requests.exceptions.HTTPError as ex:
print("HTTP Error: ", ex, "\nText: ", ex.response.text)
except requests.exceptions.RequestException as e:
print("Request exception: ", e)
def get_tile(self, dashboard_id, tile_id):
"""Returns the specified tile within the specified dashboard from My workspace.
Supported tiles include datasets and live tiles that contain an entire report page.
### Parameters
----
dashboard_id: str uuid
The Power Bi dashboard id. You can take it from PBI Service URL
tile_id: str uuid
The tile id
### Returns
----
Dict:
A dictionary containing a tile in dashboard from My workspace.
"""
try:
url = "https://api.powerbi.com/v1.0/myorg/dashboards/{}/tiles/{}".format(dashboard_id, tile_id)
res = requests.get(url, headers={'Content-Type': 'application/json', "Authorization": "Bearer {}".format(self.token)})
res.raise_for_status()
return res.json()
except requests.exceptions.HTTPError as ex:
print("HTTP Error: ", ex, "\nText: ", ex.response.text)
except requests.exceptions.RequestException as e:
print("Request exception: ", e)
def get_tile_in_group(self, workspace_id, dashboard_id, tile_id):
"""Returns the specified tile within the specified dashboard from the specified workspace.
Supported tiles include datasets and live tiles that contain an entire report page.
### Parameters
----
workspace_id: str uuid
The Power Bi workspace id. You can take it from PBI Service URL
dashboard_id: str uuid
The Power Bi dashboard id. You can take it from PBI Service URL
tile_id: str
The tile id
### Returns
----
Dict:
A dictionary containing a tile in a dashboard from the workspace.
"""
try:
url = "https://api.powerbi.com/v1.0/myorg/groups/{}/dashboards/{}/tiles/{}".format(workspace_id, dashboard_id, tile_id)
res = requests.get(url, headers={'Content-Type': 'application/json', "Authorization": "Bearer {}".format(self.token)})
res.raise_for_status()
return res.json()
except requests.exceptions.HTTPError as ex:
print("HTTP Error: ", ex, "\nText: ", ex.response.text)
except requests.exceptions.RequestException as e:
print("Request exception: ", e)
def get_tiles(self, dashboard_id):
"""Returns a list of tiles within the specified dashboard from My workspace.
Supported tiles include datasets and live tiles that contain an entire report page.
### Parameters
----
dashboard_id: str uuid
The Power Bi dashboard id. You can take it from PBI Service URL
### Returns
----
Dict:
A dictionary containing all the tiles in a dashboard from My workspace.
"""
try:
url = "https://api.powerbi.com/v1.0/myorg/dashboards/{}/tiles".format(dashboard_id)
res = requests.get(url, headers={'Content-Type': 'application/json', "Authorization": "Bearer {}".format(self.token)})
res.raise_for_status()
return res.json()
except requests.exceptions.HTTPError as ex:
print("HTTP Error: ", ex, "\nText: ", ex.response.text)
except requests.exceptions.RequestException as e:
print("Request exception: ", e)
def get_tiles_in_group(self, workspace_id, dashboard_id):
"""Returns a list of tiles within the specified dashboard from the specified workspace.
Supported tiles include datasets and live tiles that contain an entire report page.
### Parameters
----
workspace_id: str uuid
The Power Bi workspace id. You can take it from PBI Service URL
dashboard_id: str uuid
The Power Bi dashboard id. You can take it from PBI Service URL
### Returns
----
Dict:
A dictionary containing all the tiles in a dashboard from a workspace.
"""
try:
url = "https://api.powerbi.com/v1.0/myorg/groups/{}/dashboards/{}/tiles".format(workspace_id, dashboard_id)
res = requests.get(url, headers={'Content-Type': 'application/json', "Authorization": "Bearer {}".format(self.token)})
res.raise_for_status()
return res.json()
except requests.exceptions.HTTPError as ex:
print("HTTP Error: ", ex, "\nText: ", ex.response.text)
except requests.exceptions.RequestException as e:
print("Request exception: ", e)
def clone_tile_in_dashboard(self, dashboard_id, tile_id, target_dashboard_id, target_dataset_id = None, target_report_id = None, target_workspace_id = None):
"""Clones the specified tile from My workspace.
When a tile is cloned to another workspace and bound to another report and dataset, it's cloned as is with its underlying query containing the original report filters.
If the target report ID and target dataset are missing, errors can occur.
### Parameters
----
dashboard_id: str uuid
The Power Bi dashboard id. You can take it from PBI Service URL
tile_id: str
The tile id
### Request Body
----
target_dashboard_id: str
The target dashboard ID
target_dataset_id: str uuid
(Optional) A parameter for specifying a target model ID. When cloning a tile linked to a dataset, pass the target model ID to rebind the new tile to a different dataset.
target_report_id: str uuid
(Optional) A parameter for specifying a target report ID. When cloning a tile linked to a report, pass the target report ID to rebind the new tile to a different report.
target_workspace_id: str uuid
(Optional) A parameter for specifying a target workspace ID. An empty GUID (00000000-0000-0000-0000-000000000000) indicates 'My Workspace'. If this parameter isn't provided, the tile will be cloned within the same workspace as the source tile.
### Returns
----
Response object from requests library. 200 OK
"""
try:
url= "https://api.powerbi.com/v1.0/myorg/dashboards/{}/tiles/{}/Clone".format(dashboard_id, tile_id)
body = {
"targetDashboardId": target_dashboard_id
}
if target_report_id != None:
body["targetReportId"]=target_report_id
if target_dataset_id != None:
body["targetModelId"]=target_dataset_id
if target_workspace_id != None:
body["targetWorkspaceId"] = target_workspace_id
headers={'Content-Type': 'application/json', "Authorization": "Bearer {}".format(self.token)}
res = requests.post(url, data = json.dumps(body), headers = headers)
res.raise_for_status()
return res
except requests.exceptions.HTTPError as ex:
print("HTTP Error: ", ex, "\nText: ", ex.response.text)
except requests.exceptions.RequestException as e:
print("Request exception: ", e)
def clone_tile_in_dashboard_in_group(self, workspace_id, dashboard_id, tile_id, target_dashboard_id, target_dataset_id = None, target_report_id = None, target_workspace_id = None):
"""Clones the specified tile from the specified workspace.
When a tile is cloned to another workspace and bound to another report and dataset, it's cloned as is with its underlying query containing the original report filters.
If the target report ID and target dataset are missing, errors can occur.
### Parameters
----
workspace_id: str uuid
The Power Bi workspace id. You can take it from PBI Service URL
dashboard_id: str uuid
The Power Bi dashboard id. You can take it from PBI Service URL
tile_id: str
The tile id
### Request Body
----
target_dashboard_id: str
The target dashboard ID
target_dataset_id: str uuid
(Optional) A parameter for specifying a target model ID. When cloning a tile linked to a dataset, pass the target model ID to rebind the new tile to a different dataset.
target_report_id: str uuid
(Optional) A parameter for specifying a target report ID. When cloning a tile linked to a report, pass the target report ID to rebind the new tile to a different report.
target_workspace_id: str uuid
(Optional) A parameter for specifying a target workspace ID. An empty GUID (00000000-0000-0000-0000-000000000000) indicates 'My Workspace'. If this parameter isn't provided, the tile will be cloned within the same workspace as the source tile.
### Returns
----
Response object from requests library. 200 OK
"""
try:
url= "https://api.powerbi.com/v1.0/myorg/groups/{}/dashboards/{}/tiles/{}/Clone".format(workspace_id, dashboard_id, tile_id)
body = {
"targetDashboardId": target_dashboard_id
}
if target_report_id != None:
body["targetReportId"]=target_report_id
if target_dataset_id != None:
body["targetModelId"]=target_dataset_id
if target_workspace_id != None:
body["targetWorkspaceId"] = target_workspace_id
headers={'Content-Type': 'application/json', "Authorization": "Bearer {}".format(self.token)}
res = requests.post(url, data = json.dumps(body), headers = headers)
res.raise_for_status()
return res
except requests.exceptions.HTTPError as ex:
print("HTTP Error: ", ex, "\nText: ", ex.response.text)
except requests.exceptions.RequestException as e:
print("Request exception: ", e)
def add_dashboard(self, workspace_name):
"""Creates a new empty dashboard in My workspace.
### Parameters
----
None
### Request Body
----
workspace_name: str
The name of the new dashboard
### Returns
----
Response object from requests library. 200 OK
"""
try:
url= "https://api.powerbi.com/v1.0/myorg/dashboards"
body = {
"name": workspace_name
}
headers={'Content-Type': 'application/json', "Authorization": "Bearer {}".format(self.token)}
res = requests.post(url, data = json.dumps(body), headers = headers)
res.raise_for_status()
return res
except requests.exceptions.HTTPError as ex:
print("HTTP Error: ", ex, "\nText: ", ex.response.text)
except requests.exceptions.RequestException as e:
print("Request exception: ", e)
def add_dashboard_in_group(self, workspace_id, workspace_name):
"""Creates a new empty dashboard in the specified workspace.
### Parameters
----
workspace_id: str uuid
The Power Bi workspace id. You can take it from PBI Service URL
### Request Body
----
workspace_name: str
The name of the new dashboard
### Returns
----
Response object from requests library. 200 OK
"""
try:
url= "https://api.powerbi.com/v1.0/myorg/groups/{}/dashboards".format(workspace_id)
body = {
"name": workspace_name
}
headers={'Content-Type': 'application/json', "Authorization": "Bearer {}".format(self.token)}
res = requests.post(url, data = json.dumps(body), headers = headers)
res.raise_for_status()
return res
except requests.exceptions.HTTPError as ex:
print("HTTP Error: ", ex, "\nText: ", ex.response.text)
except requests.exceptions.RequestException as e:
print("Request exception: ", e)
# ==============================================================================
# File: thelper/nn/segmentation/deeplabv3.py (repo: fmigneault/thelper, Apache-2.0)
# ==============================================================================
import torchvision
import thelper.nn.segmentation.base
class DeepLabV3ResNet50(thelper.nn.segmentation.base.SegmModelBase):
"""
This class is a thin wrapper for :func:`torchvision.models.segmentation.deeplabv3_resnet50`
(``torchvision > 0.6``).
.. note::
Contributed by Mario Beaulieu <mario.beaulieu@crim.ca>.
.. seealso::
| Liang-Chieh et al., `Rethinking Atrous Convolution for Semantic Image Segmentation
<https://arxiv.org/abs/1706.05587>`_ [arXiv], 2017.
"""
def __init__(self, task, pretrained=False):
self.model_cls = torchvision.models.segmentation.deeplabv3_resnet50
self.in_channels = 256
super().__init__(task, pretrained=pretrained)
class DeepLabV3ResNet101(thelper.nn.segmentation.base.SegmModelBase):
"""
This class is a thin wrapper for :func:`torchvision.models.segmentation.deeplabv3_resnet101`
(``torchvision > 0.6``).
.. note::
Contributed by Mario Beaulieu <mario.beaulieu@crim.ca>.
.. seealso::
| Liang-Chieh et al., `Rethinking Atrous Convolution for Semantic Image Segmentation
<https://arxiv.org/abs/1706.05587>`_ [arXiv], 2017.
"""
def __init__(self, task, pretrained=False):
self.model_cls = torchvision.models.segmentation.deeplabv3_resnet101
self.in_channels = 256
super().__init__(task, pretrained=pretrained)
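# The wrapper pattern above keeps shared behaviour in SegmModelBase and lets
# each subclass pick a constructor (model_cls) and decoder width (in_channels)
# before delegating to super().__init__. A torch-free sketch of that dispatch
# (the Sketch* names and fake_ctor are illustrative, not thelper internals):
class SketchBase:
    def __init__(self, pretrained=False):
        # model_cls was assigned by the subclass before delegating here.
        self.model = self.model_cls(pretrained=pretrained)

def fake_ctor(pretrained=False):
    return {"pretrained": pretrained}

class SketchResNet50(SketchBase):
    def __init__(self, pretrained=False):
        self.model_cls = fake_ctor
        self.in_channels = 256
        super().__init__(pretrained=pretrained)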
# =========================================================================
# File: tests/test_maze.py (repo: pji/imggen, MIT)
# =========================================================================
"""
test_maze
~~~~~~~~~~
Unit tests for the imggen.maze module.
"""
import numpy as np
from imggen import maze as m
from tests.common import SourceTestCase
# Test cases.
class MazeTestCase(SourceTestCase):
def test_maze_fill(self):
"""When given the size of an array, return an array of that
size filled with a maze.
"""
# Expected value.
exp = np.array([
[
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
],
[
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
],
], dtype=np.uint8)
# Test data and state.
cls = m.Maze
kwargs = {
'width': .34,
'unit': (1, 3, 3),
'seed': 'spam',
}
# Run test and determine result.
self.fill_test(exp, cls, kwargs)
def test_maze_fill_origin_br_zero_inset(self):
"""When given that the origin should be in the middle of
the fill, the maze's path should start being drawn from
the bottom right of the fill.
"""
# Expected value.
exp = np.array([
[
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0xff],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0xff],
[0x00, 0x00, 0xff, 0xff, 0x00, 0x00, 0x00, 0x00, 0xff],
[0x00, 0x00, 0xff, 0xff, 0x00, 0xff, 0xff, 0xff, 0xff],
[0x00, 0x00, 0xff, 0xff, 0x00, 0xff, 0xff, 0xff, 0xff],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff],
],
], dtype=np.uint8)
# Test data and state.
cls = m.Maze
kwargs = {
'width': .34,
'inset': (0, 0, 0),
'origin': 'br',
'unit': (1, 3, 3),
'seed': 'spam',
}
# Run test and determine result.
self.fill_test(exp, cls, kwargs)
def test_maze_fill_origin_mm_zero_inset(self):
"""When given that the origin should be in the middle of
the fill, the maze's path should start being drawn from
the middle of the fill.
"""
# Expected value.
exp = np.array([
[
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0xff],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0xff],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff],
[0x00, 0x00, 0xff, 0xff, 0x00, 0xff, 0xff, 0x00, 0xff],
[0x00, 0x00, 0xff, 0xff, 0x00, 0xff, 0xff, 0x00, 0xff],
[0x00, 0x00, 0xff, 0xff, 0x00, 0xff, 0xff, 0x00, 0xff],
[0x00, 0x00, 0xff, 0xff, 0x00, 0xff, 0xff, 0xff, 0xff],
],
], dtype=np.uint8)
# Test data and state.
cls = m.Maze
kwargs = {
'width': .34,
'inset': (0, 0, 0),
'origin': 'mm',
'unit': (1, 3, 3),
'seed': 'spam',
}
# Run test and determine result.
self.fill_test(exp, cls, kwargs)
def test_maze_fill_origin_tl_zero_inset(self):
"""When given that the origin should be in the middle of
the fill, the maze's path should start being drawn from
the top left of the fill.
"""
# Expected value.
exp = np.array([
[
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff],
[0x00, 0x00, 0xff, 0xff, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0x00, 0xff, 0xff, 0xff, 0xff],
[0x00, 0x00, 0xff, 0xff, 0x00, 0xff, 0xff, 0xff, 0xff],
[0x00, 0x00, 0xff, 0xff, 0x00, 0x00, 0x00, 0x00, 0xff],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff],
],
], dtype=np.uint8)
# Test data and state.
cls = m.Maze
kwargs = {
'width': .34,
'inset': (0, 0, 0),
'origin': 'tl',
'unit': (1, 3, 3),
'seed': 'spam',
}
# Run test and determine result.
self.fill_test(exp, cls, kwargs)
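# Each test above delegates to SourceTestCase.fill_test, which builds the
# source, fills an array of exp.shape, and compares the result exactly.
# A minimal sketch of that comparison step (the real helper may differ):
import numpy as np

def arrays_match(exp, act):
    """Exact shape and value equality, as the fill tests require."""
    return exp.shape == act.shape and bool(np.array_equal(exp, act))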
class AnimatedMazeTestCase(SourceTestCase):
def test_animatedmaze_fill(self):
"""When given the size of an array, return an array of that
size filled with the animation of a maze being created.
"""
# Expected value.
exp = np.array([
[
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
],
[
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
],
[
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
],
[
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
],
], dtype=np.uint8)
# Test data and state.
cls = m.AnimatedMaze
kwargs = {
'width': .34,
'inset': (0, 1, 1),
'unit': (1, 3, 3),
'origin': 'mm',
'seed': 'spam',
}
# Run test and determine result.
self.fill_test(exp, cls, kwargs)
def test_animatedmaze_fill_with_delay(self):
"""If a delay is given, add that number of empty frames at
the beginning of the image data.
"""
# Expected value.
exp = np.array([
[
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
],
[
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
],
[
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
],
[
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
],
[
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
],
[
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
],
], dtype=np.uint8)
# Test data and state.
cls = m.AnimatedMaze
kwargs = {
'delay': 2,
'width': .34,
'inset': (0, 1, 1),
'unit': (1, 3, 3),
'origin': 'mm',
'seed': 'spam',
}
size = (4, 9, 9)
# Run test and determine result.
self.maxDiff = None
self.fill_test(exp, cls, kwargs, size)
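The delay behavior above (and the linger behavior in the next test) comes down to padding along the frame axis: `delay` prepends that many blank frames, while `linger` appends copies of the final frame. A NumPy sketch of the padding the expected arrays encode — an illustration, not `AnimatedMaze`'s actual implementation:

```python
import numpy as np

def pad_frames(frames, delay=0, linger=0):
    """Prepend `delay` blank frames and append `linger` copies of the
    last frame along axis 0, matching what the two tests expect."""
    blank = np.zeros((delay, *frames.shape[1:]), dtype=frames.dtype)
    tail = np.repeat(frames[-1:], linger, axis=0)
    return np.concatenate([blank, frames, tail])
```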
def test_animatedmaze_fill_with_linger(self):
"""If a linger is given, add that number of copies of the
last frame at the end of the image data.
"""
# Expected value.
exp = np.array([
[
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
],
[
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
],
[
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
],
[
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
],
[
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
],
[
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
],
], dtype=np.uint8)
# Test data and state.
cls = m.AnimatedMaze
kwargs = {
'linger': 2,
'width': .34,
'inset': (0, 1, 1),
'unit': (1, 3, 3),
'origin': 'mm',
'seed': 'spam',
}
size = (4, 9, 9)
# Run test and determine result.
self.maxDiff = None
        self.fill_test(exp, cls, kwargs, size)


class SolvedMazeTestCase(SourceTestCase):
def test_maze_fill(self):
"""When given the size of an array, return an array of that
size filled with the solution for a maze.
"""
# Expected value.
exp = np.array([
[
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
],
], dtype=np.uint8)
# Test data and state.
cls = m.SolvedMaze
kwargs = {
'width': .34,
'unit': (1, 3, 3),
'seed': 'spam',
}
# Run test and determine result.
self.fill_test(exp, cls, kwargs)
def test_maze_fill_with_end(self):
"""When given a end location, end the maze solution in
that location in the maze.
"""
# Expected value.
exp = np.array([
[
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
],
], dtype=np.uint8)
# Test data and state.
cls = m.SolvedMaze
kwargs = {
'end': 'bl',
'width': .34,
'unit': (1, 3, 3),
'seed': 'spam',
}
# Run test and determine result.
self.fill_test(exp, cls, kwargs)
def test_maze_fill_with_start(self):
"""When given a start location, begin the maze solution in
that location in the maze.
"""
# Expected value.
exp = np.array([
[
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
[0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
],
], dtype=np.uint8)
# Test data and state.
cls = m.SolvedMaze
kwargs = {
'start': 'bl',
'width': .34,
'unit': (1, 3, 3),
'seed': 'spam',
}
# Run test and determine result.
self.fill_test(exp, cls, kwargs)
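Throughout these tests, `origin`, `start`, and `end` are given as two-letter codes ('tl', 'mm', 'bl'), reading as a vertical position (t/m/b) followed by a horizontal one (l/r, with 'm' for middle). A plausible decoding into grid indices — a hypothetical helper; the real mapping lives in the source classes:

```python
def loc_to_index(code, shape):
    """Map a two-letter location code to a (row, col) grid index."""
    rows = {'t': 0, 'm': shape[0] // 2, 'b': shape[0] - 1}
    cols = {'l': 0, 'm': shape[1] // 2, 'r': shape[1] - 1}
    return rows[code[0]], cols[code[1]]
```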
| 45.342052 | 77 | 0.483781 | 2,708 | 22,535 | 4.007016 | 0.038405 | 1.076398 | 1.497374 | 1.860842 | 0.957239 | 0.952355 | 0.9483 | 0.9483 | 0.9483 | 0.9483 | 0 | 0.373182 | 0.374529 | 22,535 | 496 | 78 | 45.433468 | 0.396665 | 0.080053 | 0 | 0.857143 | 0 | 0 | 0.013252 | 0 | 0 | 0 | 0.387696 | 0 | 0 | 1 | 0.02551 | false | 0 | 0.007653 | 0 | 0.040816 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 12 |
b793bfcce05b4ee2c76c0e50d89624c65b7b3ff8 | 3,862 | py | Python | munactives/migrations/0001_initial.py | VaJau2/MunactivesMap-Django | 3fee9f77de8a73244cc85e38786cf3860214daf4 | [
"MIT"
] | null | null | null | munactives/migrations/0001_initial.py | VaJau2/MunactivesMap-Django | 3fee9f77de8a73244cc85e38786cf3860214daf4 | [
"MIT"
] | 1 | 2020-04-07T14:29:11.000Z | 2020-04-07T14:29:11.000Z | munactives/migrations/0001_initial.py | VaJau2/MunactivesMap-Django | 3fee9f77de8a73244cc85e38786cf3860214daf4 | [
"MIT"
] | null | null | null | # Generated by Django 3.0.3 on 2020-02-09 10:48
import django.contrib.gis.db.models.fields
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='munitipal_land',
fields=[
('id', models.IntegerField(primary_key=True, serialize=False)),
('name', models.CharField(max_length=100)),
('purpose', models.TextField()),
('holder', models.TextField()),
('square', models.FloatField()),
('describe', models.TextField()),
('coordinates', django.contrib.gis.db.models.fields.PolygonField(srid=4326)),
('holder_num', models.IntegerField()),
],
),
migrations.CreateModel(
name='spr_type_company',
fields=[
('type', models.CharField(max_length=50, primary_key=True, serialize=False)),
('description', models.TextField()),
],
),
migrations.CreateModel(
name='spr_type_foundation',
fields=[
('type', models.CharField(max_length=50, primary_key=True, serialize=False)),
('description', models.TextField()),
('purpose', models.TextField()),
],
),
migrations.CreateModel(
name='spr_type_stock',
fields=[
('type', models.CharField(max_length=50, primary_key=True, serialize=False)),
('description', models.TextField()),
],
),
migrations.CreateModel(
name='housing_stock',
fields=[
('id', models.IntegerField(primary_key=True, serialize=False)),
('address', models.TextField()),
('holder', models.TextField()),
('square', models.FloatField()),
('describe', models.TextField()),
('coordinates', django.contrib.gis.db.models.fields.PointField(srid=4326)),
('floors', models.IntegerField()),
('holder_num', models.IntegerField()),
('type', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, to='munactives.spr_type_stock')),
],
),
migrations.CreateModel(
name='foundation',
fields=[
('id', models.IntegerField(primary_key=True, serialize=False)),
('name', models.CharField(max_length=100)),
('address', models.TextField()),
('holder', models.TextField()),
('square', models.FloatField()),
('describe', models.TextField()),
('coordinates', django.contrib.gis.db.models.fields.PointField(srid=4326)),
('holder_num', models.IntegerField()),
('type', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, to='munactives.spr_type_foundation')),
],
),
migrations.CreateModel(
name='company',
fields=[
('id', models.IntegerField(primary_key=True, serialize=False)),
('name', models.CharField(max_length=100)),
('address', models.TextField()),
('holder', models.TextField()),
('square', models.FloatField()),
('describe', models.TextField()),
('coordinates', django.contrib.gis.db.models.fields.PointField(srid=4326)),
('holder_num', models.IntegerField()),
('type', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, to='munactives.spr_type_company')),
],
),
]
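The `PolygonField`/`PointField` columns above store geometries in WGS 84 (SRID 4326) so that views can run spatial lookups such as point-in-polygon. Outside GeoDjango, the core test those lookups perform can be sketched with plain ray casting — a simplified stand-in for GEOS, not what Django actually calls:

```python
def point_in_polygon(point, ring):
    """Even-odd ray casting: count edge crossings of a ray running
    left from `point`; an odd count means the point is inside."""
    x, y = point
    inside = False
    for i in range(len(ring)):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % len(ring)]
        if (y1 > y) != (y2 > y):                      # edge spans the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```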
| 41.085106 | 126 | 0.534697 | 328 | 3,862 | 6.192073 | 0.204268 | 0.118168 | 0.086164 | 0.079271 | 0.813392 | 0.797637 | 0.765633 | 0.739045 | 0.739045 | 0.712457 | 0 | 0.017437 | 0.316934 | 3,862 | 93 | 127 | 41.526882 | 0.752464 | 0.011652 | 0 | 0.755814 | 1 | 0 | 0.11979 | 0.021494 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.034884 | 0 | 0.081395 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
b7c4413967cd896ba74b1a3094e30c437d69b384 | 7,904 | py | Python | tests/test_xero_api_purchase_orders.py | aracnid/i-xero2 | 449bb8bccb9e76935cedc49eff776bb8804d756b | [
"MIT"
] | null | null | null | tests/test_xero_api_purchase_orders.py | aracnid/i-xero2 | 449bb8bccb9e76935cedc49eff776bb8804d756b | [
"MIT"
] | null | null | null | tests/test_xero_api_purchase_orders.py | aracnid/i-xero2 | 449bb8bccb9e76935cedc49eff776bb8804d756b | [
"MIT"
] | null | null | null | """Tests Xero API PurchaseOrders.
"""
from datetime import datetime, date
from xero_python.accounting import Contact
from xero_python.accounting import LineItem
from xero_python.accounting import PurchaseOrder
from xero_python.exceptions import NotFoundException
from i_xero2 import XeroInterface
import pytest
@pytest.fixture(name='xero')
def fixture_xero_interface():
"""Pytest fixture to initialize and return the XeroInterface object.
"""
return XeroInterface()
def test_create_purchase_orders(xero):
date_value = datetime.now().astimezone()
contact = Contact(
contact_id = 'c7127731-d324-4e26-a03e-854ce9a3a269')
line_item = LineItem(
description = "Foobar",
quantity = 1.0,
unit_amount = 20.0,
account_code = '000'
)
line_items = []
line_items.append(line_item)
purchase_order = PurchaseOrder(
contact = contact,
date = date_value,
line_items = line_items,
reference = "test_create_purchase_orders()",
status = "DRAFT")
purchase_order_list_created = xero.create_purchase_orders(
purchase_order_list=[purchase_order]
)
assert purchase_order_list_created
assert len(purchase_order_list_created) == 1
def test_read_purchase_order_by_id(xero):
purchase_order_number = 'PO-0001'
purchase_order_id = '818032de-c60f-4bf6-98d5-b74df380e1d2'
purchase_order = xero.read_purchase_orders(id=purchase_order_id)
assert purchase_order.purchase_order_id == purchase_order_id
assert purchase_order.purchase_order_number == purchase_order_number
def test_read_purchase_order_by_number(xero):
purchase_order_number = 'PO-0001'
purchase_order_id = '818032de-c60f-4bf6-98d5-b74df380e1d2'
purchase_order = xero.read_purchase_orders(number=purchase_order_number)
assert purchase_order.purchase_order_id == purchase_order_id
assert purchase_order.purchase_order_number == purchase_order_number
def test_read_purchase_order_by_number_indirect(xero):
purchase_order_number = 'PO-0001'
purchase_order_id = '818032de-c60f-4bf6-98d5-b74df380e1d2'
purchase_order = xero.read_purchase_orders_indirect(number=purchase_order_number)
assert purchase_order.purchase_order_id == purchase_order_id
assert purchase_order.purchase_order_number == purchase_order_number
def test_read_purchase_order_by_number_indirect_on_exception(xero, monkeypatch):
# create monkeypatch
def mock_get_purchase_order_by_number(*args, **kwargs):
raise NotFoundException
monkeypatch.setattr(xero.accounting_api, 'get_purchase_order_by_number', mock_get_purchase_order_by_number)
purchase_order_number = 'PO-0001'
purchase_order_id = '818032de-c60f-4bf6-98d5-b74df380e1d2'
purchase_order = xero.read_purchase_orders(number=purchase_order_number)
assert purchase_order.purchase_order_id == purchase_order_id
assert purchase_order.purchase_order_number == purchase_order_number
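The monkeypatched test above relies on `read_purchase_orders` falling back to a slower indirect lookup when the direct by-number endpoint raises `NotFoundException`. The control flow being exercised reduces to the following sketch (names are illustrative, not `XeroInterface`'s actual internals):

```python
class NotFound(Exception):
    """Stand-in for xero_python's NotFoundException."""

def read_by_number(number, direct, indirect):
    """Try the direct by-number endpoint first; on NotFound, fall
    back to the indirect scan -- the path the monkeypatch forces."""
    try:
        return direct(number)
    except NotFound:
        return indirect(number)
```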
def test_read_purchase_orders(xero):
status = 'DRAFT'
sort = 'Date ASC'
purchase_order_list = xero.read_purchase_orders(
status=status,
order=sort
)
assert purchase_order_list
assert len(purchase_order_list) > 0
def test_read_purchase_orders_by_date_range(xero):
start = date.fromisoformat('2021-11-23')
end = date.today()
sort = 'Date ASC'
purchase_order_list = xero.read_purchase_orders(
date_from=start.isoformat(),
date_to=end.isoformat(),
order=sort
)
assert purchase_order_list
assert len(purchase_order_list) > 0
def test_update_purchase_orders(xero):
date_value = datetime.now().astimezone()
contact = Contact(
contact_id = 'c7127731-d324-4e26-a03e-854ce9a3a269')
line_item = LineItem(
description="Foobar",
quantity=1.0,
unit_amount=20.0,
account_code='400'
)
line_items = []
line_items.append(line_item)
purchase_order = PurchaseOrder(
contact=contact,
date=date_value,
line_items=line_items,
reference="test_update_purchase_orders(): created",
status="DRAFT")
purchase_order_list_created = xero.create_purchase_orders(
purchase_order_list=[purchase_order]
)
purchase_order = purchase_order_list_created[0]
    # update purchase order
purchase_order.reference = "test_update_purchase_orders()"
purchase_order_list_updated = xero.update_purchase_orders(
purchase_order_list=[purchase_order]
)
# verify
assert purchase_order_list_updated[0].reference == purchase_order.reference
def test_delete_purchase_orders_by_id(xero):
reference = "test_delete_purchase_orders_by_id(): created"
date_value = datetime.now().astimezone()
contact = Contact(
contact_id = 'c7127731-d324-4e26-a03e-854ce9a3a269')
line_item = LineItem(
description="Foobar",
quantity=1.0,
unit_amount=20.0,
account_code='400'
)
line_items = []
line_items.append(line_item)
purchase_order = PurchaseOrder(
contact=contact,
date=date_value,
line_items=line_items,
reference=reference,
status="DRAFT")
purchase_order_list_created = xero.create_purchase_orders(
purchase_order_list=[purchase_order]
)
purchase_order = purchase_order_list_created[0]
    # delete purchase order
purchase_order_id = purchase_order.purchase_order_id
purchase_order_deleted = xero.delete_purchase_orders(
id=purchase_order_id
)[0]
assert purchase_order_deleted.purchase_order_id == purchase_order_id
assert purchase_order_deleted.reference == reference
def test_delete_purchase_orders_by_filter(xero):
date_value = datetime.now().astimezone()
contact = Contact(
contact_id = 'c7127731-d324-4e26-a03e-854ce9a3a269')
line_item = LineItem(
description="Foobar",
quantity=1.0,
unit_amount=20.0,
account_code='400'
)
line_items = []
line_items.append(line_item)
purchase_order = PurchaseOrder(
contact=contact,
date=date_value,
line_items=line_items,
reference="test_delete_purchase_orders_by_filter(): created",
status="DRAFT")
purchase_order_list_created = xero.create_purchase_orders(
purchase_order_list=[purchase_order]
)
purchase_order = purchase_order_list_created[0]
start = date.today()
sort = 'Date ASC'
purchase_orders_deleted = xero.delete_purchase_orders(
date_from=start.isoformat(),
status='DRAFT',
order=sort
)
assert purchase_orders_deleted
assert len(purchase_orders_deleted) > 0
def test_delete_purchase_orders_by_list_of_objects(xero):
date_value = datetime.now().astimezone()
contact = Contact(
contact_id = 'c7127731-d324-4e26-a03e-854ce9a3a269')
line_item = LineItem(
description="Foobar",
quantity=1.0,
unit_amount=20.0,
account_code='400'
)
line_items = []
line_items.append(line_item)
purchase_order = PurchaseOrder(
contact=contact,
date=date_value,
line_items=line_items,
reference="test_delete_purchase_orders_by_list_of_objects(): created",
status="DRAFT")
purchase_order_list_created = xero.create_purchase_orders(
purchase_order_list=[purchase_order]
)
purchase_order = purchase_order_list_created[0]
start = date.today()
sort = 'Date ASC'
purchase_order_list = xero.read_purchase_orders(
date_from=start.isoformat(),
status='DRAFT',
order=sort
)
purchase_orders_deleted = xero.delete_purchase_orders(
purchase_order_list=purchase_order_list
)
assert purchase_orders_deleted
assert len(purchase_orders_deleted) > 0
| 31.742972 | 111 | 0.716726 | 949 | 7,904 | 5.570074 | 0.11275 | 0.248392 | 0.090049 | 0.083617 | 0.853954 | 0.810632 | 0.770904 | 0.721907 | 0.71207 | 0.699584 | 0 | 0.042914 | 0.201037 | 7,904 | 248 | 112 | 31.870968 | 0.794141 | 0.01999 | 0 | 0.671642 | 0 | 0 | 0.097788 | 0.073082 | 0 | 0 | 0 | 0 | 0.104478 | 1 | 0.064677 | false | 0 | 0.034826 | 0 | 0.104478 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b7d2275754a3381e1e1f73c4692d52f23bcfa871 | 7,466 | py | Python | tests/device_tests/test_device_args.py | SX-Aurora/orchespy | 6b85a78831c8e3e05df7143101ca3418817fcbbd | [
"BSD-3-Clause"
] | null | null | null | tests/device_tests/test_device_args.py | SX-Aurora/orchespy | 6b85a78831c8e3e05df7143101ca3418817fcbbd | [
"BSD-3-Clause"
] | null | null | null | tests/device_tests/test_device_args.py | SX-Aurora/orchespy | 6b85a78831c8e3e05df7143101ca3418817fcbbd | [
"BSD-3-Clause"
] | null | null | null | from orchespy import device
from orchespy.devicetype import CUDAGPU, Host, VE
import sys
import pytest
import numpy as np
if "cupy" in sys.modules:
import cupy as cp
if "nlcpy" in sys.modules:
import nlcpy as vp
no_nlcpy = pytest.mark.skipif(
    "nlcpy" not in sys.modules, reason='test requires nlcpy.')
no_cupy = pytest.mark.skipif(
    "cupy" not in sys.modules, reason='test requires cupy.')
# functions under test: sum their arguments on each device type
@device(Host)
def sum_at_host(*args):
return sum(args)
@device(CUDAGPU)
def sum_at_gpu(*args):
return sum(args)
@device(VE)
def sum_at_ve(*args):
return sum(args)
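The `@device(...)` decorator above dispatches the wrapped function to a device, converting array arguments to that device's array module (numpy for `Host`, cupy for `CUDAGPU`, nlcpy for `VE`). A minimal NumPy-only sketch of that dispatch — a simplification that ignores the cross-device memory transfers orchespy actually performs:

```python
import functools
import numpy as np

def device(target_module):
    """Simplified sketch: coerce array arguments to the target
    module's ndarray type before calling the function."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            moved = tuple(target_module.asarray(a) for a in args)
            return func(*moved, **kwargs)
        return wrapper
    return decorator

@device(np)
def total(*arrays):
    return sum(arrays)
```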
@pytest.mark.parametrize('shape', [(2), (2, 2), (2, 2, 2), (2, 3), (2, 3, 4)])
@pytest.mark.parametrize('dtype', [
'i4', 'i8', 'u4', 'u8', 'f4', 'f8', 'c8', 'c16'
])
@pytest.mark.parametrize('order', ['C', 'F'])
class TestDeviceArgs:
def test_device_args_np_host(self, shape, dtype, order):
x1 = np.ones(shape, dtype=dtype, order=order)
x2 = np.ones(shape, dtype=dtype, order=order)
y = sum_at_host(x1, x2)
assert(isinstance(y, np.ndarray))
expected = np.full(shape, 2, dtype=dtype, order=order)
assert((y == expected).all())
@no_cupy
def test_device_args_cp_host(self, shape, dtype, order):
x1 = cp.ones(shape, dtype=dtype, order=order)
x2 = cp.ones(shape, dtype=dtype, order=order)
y = sum_at_host(x1, x2)
assert(isinstance(y, np.ndarray))
expected = np.full(shape, 2, dtype=dtype, order=order)
assert((y == expected).all())
@no_nlcpy
def test_device_args_vp_host(self, shape, dtype, order):
x1 = vp.ones(shape, dtype=dtype, order=order)
x2 = vp.ones(shape, dtype=dtype, order=order)
y = sum_at_host(x1, x2)
assert(isinstance(y, np.ndarray))
expected = np.full(shape, 2, dtype=dtype, order=order)
assert((y == expected).all())
@no_cupy
def test_device_args_np_cp_host(self, shape, dtype, order):
x1 = np.ones(shape, dtype=dtype, order=order)
x2 = cp.ones(shape, dtype=dtype, order=order)
y = sum_at_host(x1, x2)
assert(isinstance(y, np.ndarray))
expected = np.full(shape, 2, dtype=dtype, order=order)
assert((y == expected).all())
@no_nlcpy
def test_device_args_np_vp_host(self, shape, dtype, order):
x1 = np.ones(shape, dtype=dtype, order=order)
x2 = vp.ones(shape, dtype=dtype, order=order)
y = sum_at_host(x1, x2)
assert(isinstance(y, np.ndarray))
expected = np.full(shape, 2, dtype=dtype, order=order)
assert((y == expected).all())
@no_cupy
@no_nlcpy
def test_device_args_cp_vp_host(self, shape, dtype, order):
x1 = cp.ones(shape, dtype=dtype, order=order)
x2 = vp.ones(shape, dtype=dtype, order=order)
y = sum_at_host(x1, x2)
assert(isinstance(y, np.ndarray))
expected = np.full(shape, 2, dtype=dtype, order=order)
assert((y == expected).all())
@no_cupy
def test_device_args_np_gpu(self, shape, dtype, order):
x1 = np.ones(shape, dtype=dtype, order=order)
x2 = np.ones(shape, dtype=dtype, order=order)
y = sum_at_gpu(x1, x2)
assert(isinstance(y, cp.ndarray))
expected = cp.full(shape, 2, dtype=dtype, order=order)
assert((y == expected).all())
@no_cupy
def test_device_args_cp_gpu(self, shape, dtype, order):
x1 = cp.ones(shape, dtype=dtype, order=order)
x2 = cp.ones(shape, dtype=dtype, order=order)
y = sum_at_gpu(x1, x2)
assert(isinstance(y, cp.ndarray))
expected = cp.full(shape, 2, dtype=dtype, order=order)
assert((y == expected).all())
@no_cupy
@no_nlcpy
def test_device_args_vp_gpu(self, shape, dtype, order):
x1 = vp.ones(shape, dtype=dtype, order=order)
x2 = vp.ones(shape, dtype=dtype, order=order)
y = sum_at_gpu(x1, x2)
assert(isinstance(y, cp.ndarray))
expected = cp.full(shape, 2, dtype=dtype, order=order)
assert((y == expected).all())
@no_cupy
def test_device_args_np_cp_gpu(self, shape, dtype, order):
x1 = np.ones(shape, dtype=dtype, order=order)
x2 = cp.ones(shape, dtype=dtype, order=order)
y = sum_at_gpu(x1, x2)
assert(isinstance(y, cp.ndarray))
expected = cp.full(shape, 2, dtype=dtype, order=order)
assert((y == expected).all())
@no_cupy
@no_nlcpy
def test_device_args_np_vp_gpu(self, shape, dtype, order):
x1 = np.ones(shape, dtype=dtype, order=order)
x2 = vp.ones(shape, dtype=dtype, order=order)
y = sum_at_gpu(x1, x2)
assert(isinstance(y, cp.ndarray))
expected = cp.full(shape, 2, dtype=dtype, order=order)
assert((y == expected).all())
@no_cupy
@no_nlcpy
def test_device_args_cp_vp_gpu(self, shape, dtype, order):
x1 = cp.ones(shape, dtype=dtype, order=order)
x2 = vp.ones(shape, dtype=dtype, order=order)
y = sum_at_gpu(x1, x2)
assert(isinstance(y, cp.ndarray))
expected = cp.full(shape, 2, dtype=dtype, order=order)
assert((y == expected).all())
@no_nlcpy
def test_device_args_np_ve(self, shape, dtype, order):
x1 = np.ones(shape, dtype=dtype, order=order)
x2 = np.ones(shape, dtype=dtype, order=order)
y = sum_at_ve(x1, x2)
assert(isinstance(y, vp.ndarray))
expected = vp.full(shape, 2, dtype=dtype, order=order)
assert((y == expected).all())
@no_cupy
@no_nlcpy
def test_device_args_cp_ve(self, shape, dtype, order):
x1 = cp.ones(shape, dtype=dtype, order=order)
x2 = cp.ones(shape, dtype=dtype, order=order)
y = sum_at_ve(x1, x2)
assert(isinstance(y, vp.ndarray))
expected = vp.full(shape, 2, dtype=dtype, order=order)
assert((y == expected).all())
@no_nlcpy
def test_device_args_vp_ve(self, shape, dtype, order):
x1 = vp.ones(shape, dtype=dtype, order=order)
x2 = vp.ones(shape, dtype=dtype, order=order)
y = sum_at_ve(x1, x2)
assert(isinstance(y, vp.ndarray))
expected = vp.full(shape, 2, dtype=dtype, order=order)
assert((y == expected).all())
@no_cupy
@no_nlcpy
def test_device_args_np_cp_ve(self, shape, dtype, order):
x1 = np.ones(shape, dtype=dtype, order=order)
x2 = cp.ones(shape, dtype=dtype, order=order)
y = sum_at_ve(x1, x2)
assert(isinstance(y, vp.ndarray))
expected = vp.full(shape, 2, dtype=dtype, order=order)
assert((y == expected).all())
@no_nlcpy
def test_device_args_np_vp_ve(self, shape, dtype, order):
x1 = np.ones(shape, dtype=dtype, order=order)
x2 = vp.ones(shape, dtype=dtype, order=order)
y = sum_at_ve(x1, x2)
assert(isinstance(y, vp.ndarray))
expected = vp.full(shape, 2, dtype=dtype, order=order)
assert((y == expected).all())
@no_cupy
@no_nlcpy
def test_device_args_cp_vp_ve(self, shape, dtype, order):
x1 = cp.ones(shape, dtype=dtype, order=order)
x2 = vp.ones(shape, dtype=dtype, order=order)
y = sum_at_ve(x1, x2)
assert(isinstance(y, vp.ndarray))
expected = vp.full(shape, 2, dtype=dtype, order=order)
assert((y == expected).all())
| 35.552381 | 78 | 0.615189 | 1,100 | 7,466 | 4.04 | 0.062727 | 0.166517 | 0.182268 | 0.243024 | 0.900765 | 0.885014 | 0.883438 | 0.866112 | 0.865662 | 0.865662 | 0 | 0.019404 | 0.240691 | 7,466 | 209 | 79 | 35.722488 | 0.764509 | 0.003482 | 0 | 0.761111 | 0 | 0 | 0.012503 | 0 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.116667 | false | 0 | 0.038889 | 0.016667 | 0.177778 | 0.011111 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
4d00d3faadd22ee93466e81e485daaee1b773136 | 5,242 | py | Python | dbservice/apps/homes/migrations/0007_auto_20150902_0549.py | aagaard/dbservice | 47daadab307e6744ef151dd4e0aacff27dcda881 | [
"MIT"
] | 1 | 2020-04-27T16:30:50.000Z | 2020-04-27T16:30:50.000Z | dbservice/apps/homes/migrations/0007_auto_20150902_0549.py | aagaard/dbservice | 47daadab307e6744ef151dd4e0aacff27dcda881 | [
"MIT"
] | null | null | null | dbservice/apps/homes/migrations/0007_auto_20150902_0549.py | aagaard/dbservice | 47daadab307e6744ef151dd4e0aacff27dcda881 | [
"MIT"
] | 1 | 2021-01-13T02:16:56.000Z | 2021-01-13T02:16:56.000Z | # -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import models, migrations
import datetime


class Migration(migrations.Migration):

    dependencies = [
        ('homes', '0006_auto_20150315_1012'),
    ]

    operations = [
        migrations.AddField(
            model_name='appliance',
            name='created',
            field=models.DateTimeField(default=datetime.datetime(1, 1, 1, 0, 0), auto_now_add=True),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='appliance',
            name='last_modified',
            field=models.DateTimeField(auto_now=True, default=datetime.datetime(1, 1, 1, 0, 0)),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='energyconsumptionperiod',
            name='created',
            field=models.DateTimeField(default=datetime.datetime(1, 1, 1, 0, 0), auto_now_add=True),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='energyconsumptionperiod',
            name='last_modified',
            field=models.DateTimeField(auto_now=True, default=datetime.datetime(1, 1, 1, 0, 0)),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='energyproductionperiod',
            name='created',
            field=models.DateTimeField(default=datetime.datetime(1, 1, 1, 0, 0), auto_now_add=True),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='energyproductionperiod',
            name='last_modified',
            field=models.DateTimeField(auto_now=True, default=datetime.datetime(1, 1, 1, 0, 0)),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='fixedvaluemeterport',
            name='created',
            field=models.DateTimeField(default=datetime.datetime(1, 1, 1, 0, 0), auto_now_add=True),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='fixedvaluemeterport',
            name='last_modified',
            field=models.DateTimeField(auto_now=True, default=datetime.datetime(1, 1, 1, 0, 0)),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='mainmeter',
            name='created',
            field=models.DateTimeField(default=datetime.datetime(1, 1, 1, 0, 0), auto_now_add=True),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='mainmeter',
            name='last_modified',
            field=models.DateTimeField(auto_now=True, default=datetime.datetime(1, 1, 1, 0, 0)),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='measurement',
            name='created',
            field=models.DateTimeField(default=datetime.datetime(1, 1, 1, 0, 0), auto_now_add=True),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='measurement',
            name='last_modified',
            field=models.DateTimeField(auto_now=True, default=datetime.datetime(1, 1, 1, 0, 0)),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='meterport',
            name='created',
            field=models.DateTimeField(default=datetime.datetime(1, 1, 1, 0, 0), auto_now_add=True),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='meterport',
            name='last_modified',
            field=models.DateTimeField(auto_now=True, default=datetime.datetime(1, 1, 1, 0, 0)),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='residentialhome',
            name='last_modified',
            field=models.DateTimeField(auto_now=True, default=datetime.datetime(1, 1, 1, 0, 0)),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='submeter',
            name='created',
            field=models.DateTimeField(default=datetime.datetime(1, 1, 1, 0, 0), auto_now_add=True),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='submeter',
            name='last_modified',
            field=models.DateTimeField(auto_now=True, default=datetime.datetime(1, 1, 1, 0, 0)),
            preserve_default=False,
        ),
    ]
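The operations above repeat one `created`/`last_modified` pair per model, plus `last_modified` alone for `residentialhome`. As a side note, such a repetitive list can be generated programmatically; a minimal sketch using plain dicts as stand-ins for the `migrations.AddField` kwargs (the helper name is hypothetical, not part of the original migration):

```python
import datetime

# Models that receive both timestamp fields in the migration above.
MODELS = [
    "appliance", "energyconsumptionperiod", "energyproductionperiod",
    "fixedvaluemeterport", "mainmeter", "measurement", "meterport", "submeter",
]

SENTINEL = datetime.datetime(1, 1, 1, 0, 0)  # placeholder default for existing rows

def timestamp_ops(model_name):
    """Yield dicts mirroring the AddField kwargs for one model."""
    yield {"model_name": model_name, "name": "created",
           "default": SENTINEL, "auto_now_add": True}
    yield {"model_name": model_name, "name": "last_modified",
           "default": SENTINEL, "auto_now": True}

ops = [op for m in MODELS for op in timestamp_ops(m)]
# residentialhome only gains last_modified in the original migration.
ops.append({"model_name": "residentialhome", "name": "last_modified",
            "default": SENTINEL, "auto_now": True})
```

Each dict could then be expanded into a real `migrations.AddField(...)` call; keeping the list explicit, as the migration does, is also a perfectly reasonable choice since migrations are written once and rarely edited.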
# File: tests/test_nunitresults.py
# Repo: RichieBzzzt/ru-neatest (MIT)

import pytest
import json

from runeatest import nunitresults
from runeatest import pysparkconnect
from runeatest import utils
from runeatest import testreporter


def test_get_nunit_header(mocker):
    x = '{"tags": {"opId": "ServerBackend-f421e441fa310430","browserUserAgent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36","orgId": "1009391617598028","userAgent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36","clusterId": "0216-124733-lone970","user": "eter.natus@galar.com","principalIdpObjectId": "71b45910-e7b4-44d8-82f7-bf6fac4630d0","browserHostName": "uksouth.azuredatabricks.net","parentOpId": "RPCClient-bb9b9591c29c01f7","jettyRpcType": "InternalDriverBackendMessages$DriverBackendRequest"},"extraContext":{"notebook_path":"/Users/lorem.ipsum@fake.io/runeatest"}}'
    context = json.loads(x)
    t = ("2020-9-13", "13:20:16")
    mocker.patch("runeatest.pysparkconnect.get_context", return_value=context)
    mocker.patch("runeatest.utils.get_date_and_time", return_value=t)
    results = []
    results.append(testreporter.add_testcase("test name", False))
    results.append(testreporter.add_testcase("test name 2", True))
    expected = '<test-results name="/Users/lorem.ipsum@fake.io/runeatest" total="2" date="2020-9-13" time="13:20:16">\n<environment nunit-version="2.6.0.12035" clr-version="2.0.50727.4963" os-version="uksouth.azuredatabricks.net" platform="Win32NT" cwd="C:\\Program Files\\NUnit 2.6\\bin\\" machine-name="0216-124733-lone970" user="eter.natus@galar.com" user-domain="1009391617598028"/>\n<culture-info current-culture="en-US" current-uiculture="en-US"/>'
    actual = nunitresults.get_nunit_header(results, context)
    assert expected == actual

def test_get_nunit_footer():
    expected = "</results>\n</test-suite>\n</test-results>"
    actual = nunitresults.get_nunit_footer()
    assert expected == actual

def test_get_test_suite_result_one_passed(mocker):
    x = '{"extraContext":{"notebook_path":"/Users/lorem.ipsum@fake.io/runeatest"}}'
    context = json.loads(x)
    mocker.patch("runeatest.pysparkconnect.get_context", return_value=context)
    results = []
    results.append(testreporter.add_testcase("test name", True))
    expected = '<test-suite type="TestFixture" name="/Users/lorem.ipsum@fake.io/runeatest" executed="True" result="success" success="True" time="0.000" asserts="0"><results>'
    actual = nunitresults.get_test_suite_results(results, context)
    assert expected == actual

def test_get_test_suite_result_one_failed(mocker):
    x = '{"extraContext":{"notebook_path":"/Users/lorem.ipsum@fake.io/runeatest"}}'
    context = json.loads(x)
    mocker.patch("runeatest.pysparkconnect.get_context", return_value=context)
    results = []
    results.append(testreporter.add_testcase("test name", False))
    expected = '<test-suite type="TestFixture" name="/Users/lorem.ipsum@fake.io/runeatest" executed="True" result="failure" success="False" time="0.000" asserts="0"><results>'
    actual = nunitresults.get_test_suite_results(results, context)
    assert expected == actual

def test_get_test_suite_result_one_failed_one_passed(mocker):
    x = '{"extraContext":{"notebook_path":"/Users/lorem.ipsum@fake.io/runeatest"}}'
    context = json.loads(x)
    mocker.patch("runeatest.pysparkconnect.get_context", return_value=context)
    results = []
    results.append(testreporter.add_testcase("test name", False))
    results.append(testreporter.add_testcase("test name 2", True))
    expected = '<test-suite type="TestFixture" name="/Users/lorem.ipsum@fake.io/runeatest" executed="True" result="failure" success="False" time="0.000" asserts="0"><results>'
    actual = nunitresults.get_test_suite_results(results, context)
    assert expected == actual

def test_get_test_suite_result_one_passed_one_failed(mocker):
    x = '{"extraContext":{"notebook_path":"/Users/lorem.ipsum@fake.io/runeatest"}}'
    context = json.loads(x)
    mocker.patch("runeatest.pysparkconnect.get_context", return_value=context)
    results = []
    # passed first, failed second, matching the test name
    results.append(testreporter.add_testcase("test name", True))
    results.append(testreporter.add_testcase("test name 2", False))
    expected = '<test-suite type="TestFixture" name="/Users/lorem.ipsum@fake.io/runeatest" executed="True" result="failure" success="False" time="0.000" asserts="0"><results>'
    actual = nunitresults.get_test_suite_results(results, context)
    assert expected == actual

def test_get_test_suite_result_all_failed(mocker):
    x = '{"extraContext":{"notebook_path":"/Users/lorem.ipsum@fake.io/runeatest"}}'
    context = json.loads(x)
    mocker.patch("runeatest.pysparkconnect.get_context", return_value=context)
    results = []
    results.append(testreporter.add_testcase("test name", False))
    results.append(testreporter.add_testcase("test name 2", False))
    expected = '<test-suite type="TestFixture" name="/Users/lorem.ipsum@fake.io/runeatest" executed="True" result="failure" success="False" time="0.000" asserts="0"><results>'
    actual = nunitresults.get_test_suite_results(results, context)
    assert expected == actual

def test_get_test_suite_result_all_passed(mocker):
    x = '{"extraContext":{"notebook_path":"/Users/lorem.ipsum@fake.io/runeatest"}}'
    context = json.loads(x)
    mocker.patch("runeatest.pysparkconnect.get_context", return_value=context)
    results = []
    results.append(testreporter.add_testcase("test name", True))
    results.append(testreporter.add_testcase("test name 2", True))
    expected = '<test-suite type="TestFixture" name="/Users/lorem.ipsum@fake.io/runeatest" executed="True" result="success" success="True" time="0.000" asserts="0"><results>'
    actual = nunitresults.get_test_suite_results(results, context)
    assert expected == actual

def test_get_test_suite_result_all_passed_with_descriptions(mocker):
    # Renamed: this test previously duplicated the name of the test above,
    # so pytest only ever collected one of the two.
    x = '{"extraContext":{"notebook_path":"/Users/lorem.ipsum@fake.io/runeatest"}}'
    context = json.loads(x)
    mocker.patch("runeatest.pysparkconnect.get_context", return_value=context)
    results = []
    results.append(
        testreporter.add_testcase(
            "test name",
            True,
            "test may also have failed but because it did not this is not included in output",
        )
    )
    results.append(
        testreporter.add_testcase(
            "test name 2",
            True,
            "test may have failed but because it did not this is not included in output",
        )
    )
    expected = '<test-suite type="TestFixture" name="/Users/lorem.ipsum@fake.io/runeatest" executed="True" result="success" success="True" time="0.000" asserts="0"><results>'
    actual = nunitresults.get_test_suite_results(results, context)
    assert expected == actual

def test_get_test_case_results_one_failure(mocker):
    x = '{"extraContext":{"notebook_path":"/Users/lorem.ipsum@fake.io/runeatest"}}'
    context = json.loads(x)
    mocker.patch("runeatest.pysparkconnect.get_context", return_value=context)
    results = []
    results.append(
        testreporter.add_testcase(
            "test name", False, "this description describes the test", "oh dear"
        )
    )
    expected = '<test-case name="test name" description="this description describes the test" classname="/Users/lorem.ipsum@fake.io/runeatest" executed="True" result="failure" success="False" time="0.000" asserts="1">\n<failure><message>oh dear\n</message></failure>\n</test-case>'
    actual = nunitresults.get_test_case_results(results)
    assert expected == actual[0]

def test_get_test_case_results_one_pass(mocker):
    x = '{"extraContext":{"notebook_path":"/Users/lorem.ipsum@fake.io/runeatest"}}'
    context = json.loads(x)
    mocker.patch("runeatest.pysparkconnect.get_context", return_value=context)
    results = []
    results.append(
        testreporter.add_testcase(
            "test name", True, "this description describes the test"
        )
    )
    expected = '<test-case name="test name" description="this description describes the test" classname="/Users/lorem.ipsum@fake.io/runeatest" executed="True" result="success" success="True" time="0.000" asserts="1"/>'
    actual = nunitresults.get_test_case_results(results)
    assert expected == actual[0]

def test_get_test_case_results_one_pass_one_fail(mocker):
    x = '{"extraContext":{"notebook_path":"/Users/lorem.ipsum@fake.io/runeatest"}}'
    context = json.loads(x)
    mocker.patch("runeatest.pysparkconnect.get_context", return_value=context)
    results = []
    results.append(
        testreporter.add_testcase(
            "test name", True, "this description describes the test"
        )
    )
    results.append(
        testreporter.add_testcase(
            "test name 2", False, "this description describes the test", "oops"
        )
    )
    expected0 = '<test-case name="test name" description="this description describes the test" classname="/Users/lorem.ipsum@fake.io/runeatest" executed="True" result="success" success="True" time="0.000" asserts="1"/>'
    expected1 = '<test-case name="test name 2" description="this description describes the test" classname="/Users/lorem.ipsum@fake.io/runeatest" executed="True" result="failure" success="False" time="0.000" asserts="1">\n<failure><message>oops\n</message></failure>\n</test-case>'
    actual = nunitresults.get_test_case_results(results)
    assert expected0 == actual[0]
    assert expected1 == actual[1]

def test_get_test_case_results_all_pass(mocker):
    x = '{"extraContext":{"notebook_path":"/Users/lorem.ipsum@fake.io/runeatest"}}'
    context = json.loads(x)
    mocker.patch("runeatest.pysparkconnect.get_context", return_value=context)
    results = []
    results.append(
        testreporter.add_testcase(
            "test name", True, "this description describes the test"
        )
    )
    results.append(
        testreporter.add_testcase(
            "test name 2", True, "this description describes the test"
        )
    )
    expected0 = '<test-case name="test name" description="this description describes the test" classname="/Users/lorem.ipsum@fake.io/runeatest" executed="True" result="success" success="True" time="0.000" asserts="1"/>'
    expected1 = '<test-case name="test name 2" description="this description describes the test" classname="/Users/lorem.ipsum@fake.io/runeatest" executed="True" result="success" success="True" time="0.000" asserts="1"/>'
    actual = nunitresults.get_test_case_results(results)
    assert expected0 == actual[0]
    assert expected1 == actual[1]

def test_get_test_case_results_all_fail(mocker):
    x = '{"extraContext":{"notebook_path":"/Users/lorem.ipsum@fake.io/runeatest"}}'
    context = json.loads(x)
    mocker.patch("runeatest.pysparkconnect.get_context", return_value=context)
    results = []
    results.append(
        testreporter.add_testcase(
            "test name", False, "this description describes the test", "why"
        )
    )
    results.append(
        testreporter.add_testcase(
            "test name 2", False, "this description describes the test", "why oh why"
        )
    )
    expected0 = '<test-case name="test name" description="this description describes the test" classname="/Users/lorem.ipsum@fake.io/runeatest" executed="True" result="failure" success="False" time="0.000" asserts="1">\n<failure><message>why\n</message></failure>\n</test-case>'
    expected1 = '<test-case name="test name 2" description="this description describes the test" classname="/Users/lorem.ipsum@fake.io/runeatest" executed="True" result="failure" success="False" time="0.000" asserts="1">\n<failure><message>why oh why\n</message></failure>\n</test-case>'
    actual = nunitresults.get_test_case_results(results)
    assert expected0 == actual[0]
    assert expected1 == actual[1]

def test_convert_to_nunit_results_format(mocker):
    x = '{"tags": {"opId": "ServerBackend-f421e441fa310430","browserUserAgent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36","orgId": "1009391617598028","userAgent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36","clusterId": "0216-124733-lone970","user": "eter.natus@galar.com","principalIdpObjectId": "71b45910-e7b4-44d8-82f7-bf6fac4630d0","browserHostName": "uksouth.azuredatabricks.net","parentOpId": "RPCClient-bb9b9591c29c01f7","jettyRpcType": "InternalDriverBackendMessages$DriverBackendRequest"},"extraContext":{"notebook_path":"/Users/lorem.ipsum@fake.io/runeatest"}}'
    context = json.loads(x)
    mocker.patch("runeatest.pysparkconnect.get_context", return_value=context)
    t = ("2020-9-13", "13:20:16")
    mocker.patch("runeatest.utils.get_date_and_time", return_value=t)
    results = []
    results.append(
        testreporter.add_testcase(
            "test name",
            False,
            "this description describes the test",
            "this test has failed",
        )
    )
    results.append(
        testreporter.add_testcase(
            "test name 2",
            False,
            "this description describes the test",
            "this test has also failed",
        )
    )
    expected = '<test-results name="/Users/lorem.ipsum@fake.io/runeatest" total="2" date="2020-9-13" time="13:20:16">\n<environment nunit-version="2.6.0.12035" clr-version="2.0.50727.4963" os-version="uksouth.azuredatabricks.net" platform="Win32NT" cwd="C:\\Program Files\\NUnit 2.6\\bin\\" machine-name="0216-124733-lone970" user="eter.natus@galar.com" user-domain="1009391617598028"/>\n<culture-info current-culture="en-US" current-uiculture="en-US"/>\n<test-suite type="TestFixture" name="/Users/lorem.ipsum@fake.io/runeatest" executed="True" result="failure" success="False" time="0.000" asserts="0"><results>\n<test-case name="test name" description="this description describes the test" classname="/Users/lorem.ipsum@fake.io/runeatest" executed="True" result="failure" success="False" time="0.000" asserts="1">\n<failure><message>this test has failed\n</message></failure>\n</test-case>\n<test-case name="test name 2" description="this description describes the test" classname="/Users/lorem.ipsum@fake.io/runeatest" executed="True" result="failure" success="False" time="0.000" asserts="1">\n<failure><message>this test has also failed\n</message></failure>\n</test-case>\n</results>\n</test-suite>\n</test-results>'
    actual = nunitresults.convert_to_nunit_results_format(results)
    assert expected == actual

def test_get_test_case_results_all_pass_different_notebookpaths(mocker):
    x = '{"extraContext":{"notebook_path":"/Users/lorem.ipsum@fake.io/runeatest"}}'
    context = json.loads(x)
    mocker.patch("runeatest.pysparkconnect.get_context", return_value=context)
    results = []
    results.append(
        testreporter.add_testcase(
            "test name", True, "this description describes the test"
        )
    )
    y = '{"extraContext":{"notebook_path":"/Users/lorem.ipsum@fake.io/eternatus"}}'
    context = json.loads(y)
    mocker.patch("runeatest.pysparkconnect.get_context", return_value=context)
    results.append(
        testreporter.add_testcase(
            "test name 2", True, "this description describes the test"
        )
    )
    expected0 = '<test-case name="test name" description="this description describes the test" classname="/Users/lorem.ipsum@fake.io/runeatest" executed="True" result="success" success="True" time="0.000" asserts="1"/>'
    expected1 = '<test-case name="test name 2" description="this description describes the test" classname="/Users/lorem.ipsum@fake.io/eternatus" executed="True" result="success" success="True" time="0.000" asserts="1"/>'
    actual = nunitresults.get_test_case_results(results)
    assert expected0 == actual[0]
    assert expected1 == actual[1]
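The expected values in these tests are NUnit 2.x XML fragments; whenever a fragment is a complete element, it can also be sanity-checked with the standard-library XML parser instead of raw string comparison. A small sketch (the sample string is copied from the expectations above):

```python
import xml.etree.ElementTree as ET

# A complete <test-case> element taken from the expected values above.
case_xml = (
    '<test-case name="test name" description="this description describes the test" '
    'classname="/Users/lorem.ipsum@fake.io/runeatest" executed="True" '
    'result="success" success="True" time="0.000" asserts="1"/>'
)

node = ET.fromstring(case_xml)        # raises ParseError if the XML is malformed
assert node.tag == "test-case"
assert node.get("result") == "success"
assert node.get("asserts") == "1"
```

Exact string comparison is stricter (it pins attribute order and whitespace), which is why the tests above use it; the parser approach is useful when only the attribute values matter.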
# File: dataloader/definitions/labels_file_incremental_learn.py
# Repo: ifnspaml/IFN_Dataloader (MIT)

from collections import namedtuple
import numpy as np

Label = namedtuple('Label', [
    'name',          # The identifier of this label.
                     # We use them to uniquely name a class.
    'id',            # An integer ID that is associated with this label.
                     # The IDs are used to represent the label in ground truth images.
                     # An ID of -1 means that this label does not have an ID and thus
                     # is ignored when creating ground truth images (e.g. license plate).
                     # Do not modify these IDs, since exactly these IDs are expected by the
                     # evaluation server.
    'trainId',       # Feel free to modify these IDs as suitable for your method. Then create
                     # ground truth images with train IDs, using the tools provided in the
                     # 'preparation' folder. However, make sure to validate or submit results
                     # to our evaluation server using the regular IDs above!
                     # For trainIds, multiple labels might have the same ID. Then, these labels
                     # are mapped to the same class in the ground truth images. For the inverse
                     # mapping, we use the label that is defined first in the list below.
                     # For example, mapping all void-type classes to the same ID in training
                     # might make sense for some approaches.
                     # Max value is 255!
    'category',      # The name of the category that this label belongs to.
    'categoryId',    # The ID of this category. Used to create ground truth images
                     # on category level.
    'hasInstances',  # Whether this label distinguishes between single instances or not.
    'ignoreInEval',  # Whether pixels having this class as ground truth label are ignored
                     # during evaluations or not.
    'color',         # The color of this label.
])
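The trainId remapping described in the comments above is typically applied to a ground-truth id mask with a lookup table. A minimal sketch of that assumed workflow (the two ids shown are taken from the Cityscapes list below; everything else defaults to the 255 "ignore" id):

```python
import numpy as np

# 256-entry lookup table: every id maps to 255 ("ignore") unless listed.
id2trainid = np.full(256, 255, dtype=np.uint8)
id2trainid[7] = 0   # road
id2trainid[8] = 1   # sidewalk

gt = np.array([[7, 8], [0, 7]], dtype=np.uint8)  # toy ground-truth id mask
train_mask = id2trainid[gt]                      # fancy indexing does the remap
```

The same table can be built from any of the `ClassDefinitions` lists below by iterating over `label.id` and `label.trainId`.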

class ClassDefinitions(object):
    """This class contains the class definitions for the segmentation masks
    and the procedures to work with them."""

    def __init__(self, classlabels):
        self.labels = classlabels
        for i, label in zip(range(len(self.labels)), self.labels):
            if isinstance(label.color, int):
                self.labels[i] = label._replace(color=tuple([int(label.color / (256.0 ** 2)) % 256,
                                                             int(label.color / 256.0) % 256,
                                                             int(label.color) % 256]))

    def getlabels(self):
        return self.labels

    def getname2label(self):
        name2label = {label.name: label for label in self.labels}
        return name2label

    def getid2label(self):
        id2label = {label.id: label for label in self.labels}
        return id2label

    def gettrainid2label(self):
        trainid2label = {label.trainId: label for label in reversed(self.labels)}
        return trainid2label

    def getcategory2label(self):
        category2labels = {}
        for label in self.labels:
            category = label.category
            if category in category2labels:
                category2labels[category].append(label)
            else:
                category2labels[category] = [label]
        # the original version built this dict but never returned it
        return category2labels

    def assureSingleInstanceName(self, name):
        # if the name is known, it is not a group
        name2label = self.getname2label()
        if name in name2label:
            return name
        # test if the name actually denotes a group
        if not name.endswith("group"):
            return None
        # remove group
        name = name[:-len("group")]
        # test if the new name exists
        if name not in name2label:
            return None
        # test if the new name denotes a label that actually has instances
        if not name2label[name].hasInstances:
            return None
        # all good then
        return name
#--------------------------------------------------------------------------------
# A list of all labels
#--------------------------------------------------------------------------------
# Please adapt the train IDs as appropriate for your approach.
# Note that you might want to ignore labels with ID 255 during training.
# Further note that the current train IDs are only a suggestion. You can use whatever you like.
# Make sure to provide your results using the original IDs and not the training IDs.
# Note that many IDs are ignored in evaluation and thus you never need to predict these!
labels_apollo_lanes = ClassDefinitions([
# name id trainId category catId hasInstances ignoreInEval color
Label( 'void' , 0 , 0, 'void' , 0 , False , False , ( 0, 0, 0) ),
Label( 's_w_d' , 200 , 1 , 'dividing' , 1 , False , False , ( 70, 130, 180) ),
Label( 's_y_d' , 204 , 2 , 'dividing' , 1 , False , False , (220, 20, 60) ),
Label( 'ds_w_dn' , 213 , 3 , 'dividing' , 1 , False , True , (128, 0, 128) ),
Label( 'ds_y_dn' , 209 , 4 , 'dividing' , 1 , False , False , (255, 0, 0) ),
Label( 'sb_w_do' , 206 , 5 , 'dividing' , 1 , False , True , ( 0, 0, 60) ),
Label( 'sb_y_do' , 207 , 6 , 'dividing' , 1 , False , True , ( 0, 60, 100) ),
Label( 'b_w_g' , 201 , 7 , 'guiding' , 2 , False , False , ( 0, 0, 142) ),
Label( 'b_y_g' , 203 , 8 , 'guiding' , 2 , False , False , (119, 11, 32) ),
Label( 'db_w_g' , 211 , 9 , 'guiding' , 2 , False , True , (244, 35, 232) ),
Label( 'db_y_g' , 208 , 10 , 'guiding' , 2 , False , True , ( 0, 0, 160) ),
Label( 'db_w_s' , 216 , 11 , 'stopping' , 3 , False , True , (153, 153, 153) ),
Label( 's_w_s' , 217 , 12 , 'stopping' , 3 , False , False , (220, 220, 0) ),
Label( 'ds_w_s' , 215 , 13 , 'stopping' , 3 , False , True , (250, 170, 30) ),
Label( 's_w_c' , 218 , 14 , 'chevron' , 4 , False , True , (102, 102, 156) ),
Label( 's_y_c' , 219 , 15 , 'chevron' , 4 , False , True , (128, 0, 0) ),
Label( 's_w_p' , 210 , 16 , 'parking' , 5 , False , False , (128, 64, 128) ),
Label( 's_n_p' , 232 , 17 , 'parking' , 5 , False , True , (238, 232, 170) ),
Label( 'c_wy_z' , 214 , 18 , 'zebra' , 6 , False , False , (190, 153, 153) ),
Label( 'a_w_u' , 202 , 19 , 'thru/turn' , 7 , False , True , ( 0, 0, 230) ),
Label( 'a_w_t' , 220 , 20 , 'thru/turn' , 7 , False , False , (128, 128, 0) ),
Label( 'a_w_tl' , 221 , 21 , 'thru/turn' , 7 , False , False , (128, 78, 160) ),
Label( 'a_w_tr' , 222 , 22 , 'thru/turn' , 7 , False , False , (150, 100, 100) ),
Label( 'a_w_tlr' , 231 , 23 , 'thru/turn' , 7 , False , True , (255, 165, 0) ),
Label( 'a_w_l' , 224 , 24 , 'thru/turn' , 7 , False , False , (180, 165, 180) ),
Label( 'a_w_r' , 225 , 25 , 'thru/turn' , 7 , False , False , (107, 142, 35) ),
Label( 'a_w_lr' , 226 , 26 , 'thru/turn' , 7 , False , False , (201, 255, 229) ),
Label( 'a_n_lu' , 230 , 27 , 'thru/turn' , 7 , False , True , (0, 191, 255) ),
Label( 'a_w_tu' , 228 , 28 , 'thru/turn' , 7 , False , True , ( 51, 255, 51) ),
Label( 'a_w_m' , 229 , 29 , 'thru/turn' , 7 , False , True , (250, 128, 114) ),
Label( 'a_y_t' , 233 , 30 , 'thru/turn' , 7 , False , True , (127, 255, 0) ),
Label( 'b_n_sr' , 205 , 31 , 'reduction' , 8 , False , False , (255, 128, 0) ),
Label( 'd_wy_za' , 212 , 32 , 'attention' , 9 , False , True , ( 0, 255, 255) ),
Label( 'r_wy_np' , 227 , 33 , 'no parking' , 10 , False , False , (178, 132, 190) ),
Label( 'vom_wy_n' , 223 , 34 , 'others' , 11 , False , True , (128, 128, 64) ),
Label( 'om_n_n' , 250 , 35 , 'others' , 11 , False , False , (102, 0, 204) ),
Label( 'noise' , 249 , 255 , 'ignored' , 255 , False , True , ( 0, 153, 153) ),
Label( 'ignored' , 255 , 255 , 'ignored' , 255 , False , True , (255, 255, 255) ),
])
labels_cityscape_seg = ClassDefinitions([
# name id trainId category catId hasInstances ignoreInEval color
Label( 'unlabeled' , 0 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'ego vehicle' , 1 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'rectification border' , 2 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'out of roi' , 3 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'static' , 4 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'dynamic' , 5 , 255 , 'void' , 0 , False , True , (111, 74, 0) ),
Label( 'ground' , 6 , 255 , 'void' , 0 , False , True , ( 81, 0, 81) ),
Label( 'road' , 7 , 0 , 'flat' , 1 , False , False , (128, 64,128) ),
Label( 'sidewalk' , 8 , 1 , 'flat' , 1 , False , False , (244, 35,232) ),
Label( 'parking' , 9 , 255 , 'flat' , 1 , False , True , (250,170,160) ),
Label( 'rail track' , 10 , 255 , 'flat' , 1 , False , True , (230,150,140) ),
Label( 'building' , 11 , 2 , 'construction' , 2 , False , False , ( 70, 70, 70) ),
Label( 'wall' , 12 , 3 , 'construction' , 2 , False , False , (102,102,156) ),
Label( 'fence' , 13 , 4 , 'construction' , 2 , False , False , (190,153,153) ),
Label( 'guard rail' , 14 , 255 , 'construction' , 2 , False , True , (180,165,180) ),
Label( 'bridge' , 15 , 255 , 'construction' , 2 , False , True , (150,100,100) ),
Label( 'tunnel' , 16 , 255 , 'construction' , 2 , False , True , (150,120, 90) ),
Label( 'pole' , 17 , 5 , 'object' , 3 , False , False , (153,153,153) ),
Label( 'polegroup' , 18 , 255 , 'object' , 3 , False , True , (153,153,153) ),
Label( 'traffic light' , 19 , 6 , 'object' , 3 , False , False , (250,170, 30) ),
Label( 'traffic sign' , 20 , 7 , 'object' , 3 , False , False , (220,220, 0) ),
Label( 'vegetation' , 21 , 8 , 'nature' , 4 , False , False , (107,142, 35) ),
Label( 'terrain' , 22 , 9 , 'nature' , 4 , False , False , (152,251,152) ),
Label( 'sky' , 23 , 10 , 'sky' , 5 , False , False , ( 70,130,180) ),
Label( 'person' , 24 , 11 , 'human' , 6 , True , False , (220, 20, 60) ),
Label( 'rider' , 25 , 12 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'car' , 26 , 13 , 'vehicle' , 7 , True , False , ( 0, 0,142) ),
Label( 'truck' , 27 , 14 , 'vehicle' , 7 , True , False , ( 0, 0, 70) ),
Label( 'bus' , 28 , 15 , 'vehicle' , 7 , True , False , ( 0, 60,100) ),
Label( 'caravan' , 29 , 255 , 'vehicle' , 7 , True , True , ( 0, 0, 90) ),
Label( 'trailer' , 30 , 255 , 'vehicle' , 7 , True , True , ( 0, 0,110) ),
Label( 'train' , 31 , 16 , 'vehicle' , 7 , True , False , ( 0, 80,100) ),
Label( 'motorcycle' , 32 , 17 , 'vehicle' , 7 , True , False , ( 0, 0,230) ),
Label( 'bicycle' , 33 , 18 , 'vehicle' , 7 , True , False , (119, 11, 32) ),
Label( 'license plate' , -1 , 255 , 'vehicle' , 7 , False , True , ( 0, 0,142) ),
])
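A typical use of these definitions is building lookup structures from a label list, e.g. a trainId-to-color palette for colorizing predicted segmentation masks. A self-contained sketch with a reduced stand-in for `labels_cityscape_seg` (the three entries below are copied from it):

```python
from collections import namedtuple

Label = namedtuple("Label", ["name", "id", "trainId", "color"])

# Reduced stand-in for the full labels_cityscape_seg list above.
labels = [
    Label("road",     7, 0,   (128, 64, 128)),
    Label("sidewalk", 8, 1,   (244, 35, 232)),
    Label("dynamic",  5, 255, (111, 74, 0)),   # trainId 255 marks "ignore"
]

# trainId -> color, skipping the 255 "ignore" id.
palette = {l.trainId: l.color for l in labels if l.trainId != 255}
```

With the real class, the same palette would come from `labels_cityscape_seg.gettrainid2label()`, dropping key 255.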
labels_cityscape_seg_train1 = ClassDefinitions([
# name id trainId category catId hasInstances ignoreInEval color
Label( 'unlabeled' , 0 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'ego vehicle' , 1 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'rectification border' , 2 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'out of roi' , 3 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'static' , 4 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'dynamic' , 5 , 255 , 'void' , 0 , False , True , (111, 74, 0) ),
Label( 'ground' , 6 , 255 , 'void' , 0 , False , True , ( 81, 0, 81) ),
Label( 'road' , 7 , 0 , 'flat' , 1 , False , False , (128, 64,128) ),
Label( 'sidewalk' , 8 , 1 , 'flat' , 1 , False , False , (244, 35,232) ),
Label( 'parking' , 9 , 255 , 'flat' , 1 , False , True , (250,170,160) ),
Label( 'rail track' , 10 , 255 , 'flat' , 1 , False , True , (230,150,140) ),
Label( 'building' , 11 , 255 , 'construction' , 2 , False , False , ( 70, 70, 70) ),
Label( 'wall' , 12 , 255 , 'construction' , 2 , False , False , (102,102,156) ),
Label( 'fence' , 13 , 255 , 'construction' , 2 , False , False , (190,153,153) ),
Label( 'guard rail' , 14 , 255 , 'construction' , 2 , False , True , (180,165,180) ),
Label( 'bridge' , 15 , 255 , 'construction' , 2 , False , True , (150,100,100) ),
Label( 'tunnel' , 16 , 255 , 'construction' , 2 , False , True , (150,120, 90) ),
Label( 'pole' , 17 , 255 , 'object' , 3 , False , False , (153,153,153) ),
Label( 'polegroup' , 18 , 255 , 'object' , 3 , False , True , (153,153,153) ),
Label( 'traffic light' , 19 , 255 , 'object' , 3 , False , False , (250,170, 30) ),
Label( 'traffic sign' , 20 , 255 , 'object' , 3 , False , False , (220,220, 0) ),
Label( 'vegetation' , 21 , 2 , 'nature' , 4 , False , False , (107,142, 35) ),
Label( 'terrain' , 22 , 3 , 'nature' , 4 , False , False , (152,251,152) ),
Label( 'sky' , 23 , 4 , 'sky' , 5 , False , False , ( 70,130,180) ),
Label( 'person' , 24 , 255 , 'human' , 6 , True , False , (220, 20, 60) ),
Label( 'rider' , 25 , 255 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'car' , 26 , 255 , 'vehicle' , 7 , True , False , ( 0, 0,142) ),
Label( 'truck' , 27 , 255 , 'vehicle' , 7 , True , False , ( 0, 0, 70) ),
Label( 'bus' , 28 , 255 , 'vehicle' , 7 , True , False , ( 0, 60,100) ),
Label( 'caravan' , 29 , 255 , 'vehicle' , 7 , True , True , ( 0, 0, 90) ),
Label( 'trailer' , 30 , 255 , 'vehicle' , 7 , True , True , ( 0, 0,110) ),
Label( 'train' , 31 , 255 , 'vehicle' , 7 , True , False , ( 0, 80,100) ),
Label( 'motorcycle' , 32 , 255 , 'vehicle' , 7 , True , False , ( 0, 0,230) ),
Label( 'bicycle' , 33 , 255 , 'vehicle' , 7 , True , False , (119, 11, 32) ),
Label( 'license plate' , -1 , 255 , 'vehicle' , 7 , False , True , ( 0, 0,142) ),
])
labels_cityscape_seg_train2 = ClassDefinitions([
# name id trainId category catId hasInstances ignoreInEval color
Label( 'unlabeled' , 0 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'ego vehicle' , 1 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'rectification border' , 2 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'out of roi' , 3 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'static' , 4 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'dynamic' , 5 , 255 , 'void' , 0 , False , True , (111, 74, 0) ),
Label( 'ground' , 6 , 255 , 'void' , 0 , False , True , ( 81, 0, 81) ),
Label( 'road' , 7 , 255 , 'flat' , 1 , False , False , (128, 64,128) ),
Label( 'sidewalk' , 8 , 255 , 'flat' , 1 , False , False , (244, 35,232) ),
Label( 'parking' , 9 , 255 , 'flat' , 1 , False , True , (250,170,160) ),
Label( 'rail track' , 10 , 255 , 'flat' , 1 , False , True , (230,150,140) ),
Label( 'building' , 11 , 5 , 'construction' , 2 , False , False , ( 70, 70, 70) ),
Label( 'wall' , 12 , 6 , 'construction' , 2 , False , False , (102,102,156) ),
Label( 'fence' , 13 , 7 , 'construction' , 2 , False , False , (190,153,153) ),
Label( 'guard rail' , 14 , 255 , 'construction' , 2 , False , True , (180,165,180) ),
Label( 'bridge' , 15 , 255 , 'construction' , 2 , False , True , (150,100,100) ),
Label( 'tunnel' , 16 , 255 , 'construction' , 2 , False , True , (150,120, 90) ),
Label( 'pole' , 17 , 8 , 'object' , 3 , False , False , (153,153,153) ),
Label( 'polegroup' , 18 , 255 , 'object' , 3 , False , True , (153,153,153) ),
Label( 'traffic light' , 19 , 9 , 'object' , 3 , False , False , (250,170, 30) ),
Label( 'traffic sign' , 20 , 10 , 'object' , 3 , False , False , (220,220, 0) ),
Label( 'vegetation' , 21 , 255 , 'nature' , 4 , False , False , (107,142, 35) ),
Label( 'terrain' , 22 , 255 , 'nature' , 4 , False , False , (152,251,152) ),
Label( 'sky' , 23 , 255 , 'sky' , 5 , False , False , ( 70,130,180) ),
Label( 'person' , 24 , 255 , 'human' , 6 , True , False , (220, 20, 60) ),
Label( 'rider' , 25 , 255 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'car' , 26 , 255 , 'vehicle' , 7 , True , False , ( 0, 0,142) ),
Label( 'truck' , 27 , 255 , 'vehicle' , 7 , True , False , ( 0, 0, 70) ),
Label( 'bus' , 28 , 255 , 'vehicle' , 7 , True , False , ( 0, 60,100) ),
Label( 'caravan' , 29 , 255 , 'vehicle' , 7 , True , True , ( 0, 0, 90) ),
Label( 'trailer' , 30 , 255 , 'vehicle' , 7 , True , True , ( 0, 0,110) ),
Label( 'train' , 31 , 255 , 'vehicle' , 7 , True , False , ( 0, 80,100) ),
Label( 'motorcycle' , 32 , 255 , 'vehicle' , 7 , True , False , ( 0, 0,230) ),
Label( 'bicycle' , 33 , 255 , 'vehicle' , 7 , True , False , (119, 11, 32) ),
Label( 'license plate' , -1 , 255 , 'vehicle' , 7 , False , True , ( 0, 0,142) ),
])
labels_cityscape_seg_train2_only = ClassDefinitions([
# name id trainId category catId hasInstances ignoreInEval color
Label( 'unlabeled' , 0 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'ego vehicle' , 1 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'rectification border' , 2 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'out of roi' , 3 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'static' , 4 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'dynamic' , 5 , 255 , 'void' , 0 , False , True , (111, 74, 0) ),
Label( 'ground' , 6 , 255 , 'void' , 0 , False , True , ( 81, 0, 81) ),
Label( 'road' , 7 , 255 , 'flat' , 1 , False , False , (128, 64,128) ),
Label( 'sidewalk' , 8 , 255 , 'flat' , 1 , False , False , (244, 35,232) ),
Label( 'parking' , 9 , 255 , 'flat' , 1 , False , True , (250,170,160) ),
Label( 'rail track' , 10 , 255 , 'flat' , 1 , False , True , (230,150,140) ),
Label( 'building' , 11 , 0 , 'construction' , 2 , False , False , ( 70, 70, 70) ),
Label( 'wall' , 12 , 1 , 'construction' , 2 , False , False , (102,102,156) ),
Label( 'fence' , 13 , 2 , 'construction' , 2 , False , False , (190,153,153) ),
Label( 'guard rail' , 14 , 255 , 'construction' , 2 , False , True , (180,165,180) ),
Label( 'bridge' , 15 , 255 , 'construction' , 2 , False , True , (150,100,100) ),
Label( 'tunnel' , 16 , 255 , 'construction' , 2 , False , True , (150,120, 90) ),
Label( 'pole' , 17 , 3 , 'object' , 3 , False , False , (153,153,153) ),
Label( 'polegroup' , 18 , 255 , 'object' , 3 , False , True , (153,153,153) ),
Label( 'traffic light' , 19 , 4 , 'object' , 3 , False , False , (250,170, 30) ),
Label( 'traffic sign' , 20 , 5 , 'object' , 3 , False , False , (220,220, 0) ),
Label( 'vegetation' , 21 , 255 , 'nature' , 4 , False , False , (107,142, 35) ),
Label( 'terrain' , 22 , 255 , 'nature' , 4 , False , False , (152,251,152) ),
Label( 'sky' , 23 , 255 , 'sky' , 5 , False , False , ( 70,130,180) ),
Label( 'person' , 24 , 255 , 'human' , 6 , True , False , (220, 20, 60) ),
Label( 'rider' , 25 , 255 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'car' , 26 , 255 , 'vehicle' , 7 , True , False , ( 0, 0,142) ),
Label( 'truck' , 27 , 255 , 'vehicle' , 7 , True , False , ( 0, 0, 70) ),
Label( 'bus' , 28 , 255 , 'vehicle' , 7 , True , False , ( 0, 60,100) ),
Label( 'caravan' , 29 , 255 , 'vehicle' , 7 , True , True , ( 0, 0, 90) ),
Label( 'trailer' , 30 , 255 , 'vehicle' , 7 , True , True , ( 0, 0,110) ),
Label( 'train' , 31 , 255 , 'vehicle' , 7 , True , False , ( 0, 80,100) ),
Label( 'motorcycle' , 32 , 255 , 'vehicle' , 7 , True , False , ( 0, 0,230) ),
Label( 'bicycle' , 33 , 255 , 'vehicle' , 7 , True , False , (119, 11, 32) ),
Label( 'license plate' , -1 , 255 , 'vehicle' , 7 , False , True , ( 0, 0,142) ),
])
labels_cityscape_seg_train2_eval = ClassDefinitions([
# name id trainId category catId hasInstances ignoreInEval color
Label( 'unlabeled' , 0 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'ego vehicle' , 1 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'rectification border' , 2 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'out of roi' , 3 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'static' , 4 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'dynamic' , 5 , 255 , 'void' , 0 , False , True , (111, 74, 0) ),
Label( 'ground' , 6 , 255 , 'void' , 0 , False , True , ( 81, 0, 81) ),
Label( 'road' , 7 , 0 , 'flat' , 1 , False , False , (128, 64,128) ),
Label( 'sidewalk' , 8 , 1 , 'flat' , 1 , False , False , (244, 35,232) ),
Label( 'parking' , 9 , 255 , 'flat' , 1 , False , True , (250,170,160) ),
Label( 'rail track' , 10 , 255 , 'flat' , 1 , False , True , (230,150,140) ),
Label( 'building' , 11 , 5 , 'construction' , 2 , False , False , ( 70, 70, 70) ),
Label( 'wall' , 12 , 6 , 'construction' , 2 , False , False , (102,102,156) ),
Label( 'fence' , 13 , 7 , 'construction' , 2 , False , False , (190,153,153) ),
Label( 'guard rail' , 14 , 255 , 'construction' , 2 , False , True , (180,165,180) ),
Label( 'bridge' , 15 , 255 , 'construction' , 2 , False , True , (150,100,100) ),
Label( 'tunnel' , 16 , 255 , 'construction' , 2 , False , True , (150,120, 90) ),
Label( 'pole' , 17 , 8 , 'object' , 3 , False , False , (153,153,153) ),
Label( 'polegroup' , 18 , 255 , 'object' , 3 , False , True , (153,153,153) ),
Label( 'traffic light' , 19 , 9 , 'object' , 3 , False , False , (250,170, 30) ),
Label( 'traffic sign' , 20 , 10 , 'object' , 3 , False , False , (220,220, 0) ),
Label( 'vegetation' , 21 , 2 , 'nature' , 4 , False , False , (107,142, 35) ),
Label( 'terrain' , 22 , 3 , 'nature' , 4 , False , False , (152,251,152) ),
Label( 'sky' , 23 , 4 , 'sky' , 5 , False , False , ( 70,130,180) ),
Label( 'person' , 24 , 255 , 'human' , 6 , True , False , (220, 20, 60) ),
Label( 'rider' , 25 , 255 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'car' , 26 , 255 , 'vehicle' , 7 , True , False , ( 0, 0,142) ),
Label( 'truck' , 27 , 255 , 'vehicle' , 7 , True , False , ( 0, 0, 70) ),
Label( 'bus' , 28 , 255 , 'vehicle' , 7 , True , False , ( 0, 60,100) ),
Label( 'caravan' , 29 , 255 , 'vehicle' , 7 , True , True , ( 0, 0, 90) ),
Label( 'trailer' , 30 , 255 , 'vehicle' , 7 , True , True , ( 0, 0,110) ),
Label( 'train' , 31 , 255 , 'vehicle' , 7 , True , False , ( 0, 80,100) ),
Label( 'motorcycle' , 32 , 255 , 'vehicle' , 7 , True , False , ( 0, 0,230) ),
Label( 'bicycle' , 33 , 255 , 'vehicle' , 7 , True , False , (119, 11, 32) ),
Label( 'license plate' , -1 , 255 , 'vehicle' , 7 , False , True , ( 0, 0,142) ),
])
labels_cityscape_seg_train3 = ClassDefinitions([
# name id trainId category catId hasInstances ignoreInEval color
Label( 'unlabeled' , 0 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'ego vehicle' , 1 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'rectification border' , 2 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'out of roi' , 3 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'static' , 4 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'dynamic' , 5 , 255 , 'void' , 0 , False , True , (111, 74, 0) ),
Label( 'ground' , 6 , 255 , 'void' , 0 , False , True , ( 81, 0, 81) ),
Label( 'road' , 7 , 255 , 'flat' , 1 , False , False , (128, 64,128) ),
Label( 'sidewalk' , 8 , 255 , 'flat' , 1 , False , False , (244, 35,232) ),
Label( 'parking' , 9 , 255 , 'flat' , 1 , False , True , (250,170,160) ),
Label( 'rail track' , 10 , 255 , 'flat' , 1 , False , True , (230,150,140) ),
Label( 'building' , 11 , 255 , 'construction' , 2 , False , False , ( 70, 70, 70) ),
Label( 'wall' , 12 , 255 , 'construction' , 2 , False , False , (102,102,156) ),
Label( 'fence' , 13 , 255 , 'construction' , 2 , False , False , (190,153,153) ),
Label( 'guard rail' , 14 , 255 , 'construction' , 2 , False , True , (180,165,180) ),
Label( 'bridge' , 15 , 255 , 'construction' , 2 , False , True , (150,100,100) ),
Label( 'tunnel' , 16 , 255 , 'construction' , 2 , False , True , (150,120, 90) ),
Label( 'pole' , 17 , 255 , 'object' , 3 , False , False , (153,153,153) ),
Label( 'polegroup' , 18 , 255 , 'object' , 3 , False , True , (153,153,153) ),
Label( 'traffic light' , 19 , 255 , 'object' , 3 , False , False , (250,170, 30) ),
Label( 'traffic sign' , 20 , 255 , 'object' , 3 , False , False , (220,220, 0) ),
Label( 'vegetation' , 21 , 255 , 'nature' , 4 , False , False , (107,142, 35) ),
Label( 'terrain' , 22 , 255 , 'nature' , 4 , False , False , (152,251,152) ),
Label( 'sky' , 23 , 255 , 'sky' , 5 , False , False , ( 70,130,180) ),
Label( 'person' , 24 , 11 , 'human' , 6 , True , False , (220, 20, 60) ),
Label( 'rider' , 25 , 12 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'car' , 26 , 13 , 'vehicle' , 7 , True , False , ( 0, 0,142) ),
Label( 'truck' , 27 , 14 , 'vehicle' , 7 , True , False , ( 0, 0, 70) ),
Label( 'bus' , 28 , 15 , 'vehicle' , 7 , True , False , ( 0, 60,100) ),
Label( 'caravan' , 29 , 255 , 'vehicle' , 7 , True , True , ( 0, 0, 90) ),
Label( 'trailer' , 30 , 255 , 'vehicle' , 7 , True , True , ( 0, 0,110) ),
Label( 'train' , 31 , 16 , 'vehicle' , 7 , True , False , ( 0, 80,100) ),
Label( 'motorcycle' , 32 , 17 , 'vehicle' , 7 , True , False , ( 0, 0,230) ),
Label( 'bicycle' , 33 , 18 , 'vehicle' , 7 , True , False , (119, 11, 32) ),
Label( 'license plate' , -1 , 255 , 'vehicle' , 7 , False , True , ( 0, 0,142) ),
])
labels_cityscape_seg_train3_only = ClassDefinitions([
# name id trainId category catId hasInstances ignoreInEval color
Label( 'unlabeled' , 0 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'ego vehicle' , 1 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'rectification border' , 2 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'out of roi' , 3 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'static' , 4 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'dynamic' , 5 , 255 , 'void' , 0 , False , True , (111, 74, 0) ),
Label( 'ground' , 6 , 255 , 'void' , 0 , False , True , ( 81, 0, 81) ),
Label( 'road' , 7 , 255 , 'flat' , 1 , False , False , (128, 64,128) ),
Label( 'sidewalk' , 8 , 255 , 'flat' , 1 , False , False , (244, 35,232) ),
Label( 'parking' , 9 , 255 , 'flat' , 1 , False , True , (250,170,160) ),
Label( 'rail track' , 10 , 255 , 'flat' , 1 , False , True , (230,150,140) ),
Label( 'building' , 11 , 255 , 'construction' , 2 , False , False , ( 70, 70, 70) ),
Label( 'wall' , 12 , 255 , 'construction' , 2 , False , False , (102,102,156) ),
Label( 'fence' , 13 , 255 , 'construction' , 2 , False , False , (190,153,153) ),
Label( 'guard rail' , 14 , 255 , 'construction' , 2 , False , True , (180,165,180) ),
Label( 'bridge' , 15 , 255 , 'construction' , 2 , False , True , (150,100,100) ),
Label( 'tunnel' , 16 , 255 , 'construction' , 2 , False , True , (150,120, 90) ),
Label( 'pole' , 17 , 255 , 'object' , 3 , False , False , (153,153,153) ),
Label( 'polegroup' , 18 , 255 , 'object' , 3 , False , True , (153,153,153) ),
Label( 'traffic light' , 19 , 255 , 'object' , 3 , False , False , (250,170, 30) ),
Label( 'traffic sign' , 20 , 255 , 'object' , 3 , False , False , (220,220, 0) ),
Label( 'vegetation' , 21 , 255 , 'nature' , 4 , False , False , (107,142, 35) ),
Label( 'terrain' , 22 , 255 , 'nature' , 4 , False , False , (152,251,152) ),
Label( 'sky' , 23 , 255 , 'sky' , 5 , False , False , ( 70,130,180) ),
Label( 'person' , 24 , 0 , 'human' , 6 , True , False , (220, 20, 60) ),
Label( 'rider' , 25 , 1 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'car' , 26 , 2 , 'vehicle' , 7 , True , False , ( 0, 0,142) ),
Label( 'truck' , 27 , 3 , 'vehicle' , 7 , True , False , ( 0, 0, 70) ),
Label( 'bus' , 28 , 4 , 'vehicle' , 7 , True , False , ( 0, 60,100) ),
Label( 'caravan' , 29 , 255 , 'vehicle' , 7 , True , True , ( 0, 0, 90) ),
Label( 'trailer' , 30 , 255 , 'vehicle' , 7 , True , True , ( 0, 0,110) ),
Label( 'train' , 31 , 5 , 'vehicle' , 7 , True , False , ( 0, 80,100) ),
Label( 'motorcycle' , 32 , 6 , 'vehicle' , 7 , True , False , ( 0, 0,230) ),
Label( 'bicycle' , 33 , 7 , 'vehicle' , 7 , True , False , (119, 11, 32) ),
Label( 'license plate' , -1 , 255 , 'vehicle' , 7 , False , True , ( 0, 0,142) ),
])
labels_cityscape_seg_train3_eval = ClassDefinitions([
# name id trainId category catId hasInstances ignoreInEval color
Label( 'unlabeled' , 0 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'ego vehicle' , 1 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'rectification border' , 2 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'out of roi' , 3 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'static' , 4 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'dynamic' , 5 , 255 , 'void' , 0 , False , True , (111, 74, 0) ),
Label( 'ground' , 6 , 255 , 'void' , 0 , False , True , ( 81, 0, 81) ),
Label( 'road' , 7 , 0 , 'flat' , 1 , False , False , (128, 64,128) ),
Label( 'sidewalk' , 8 , 1 , 'flat' , 1 , False , False , (244, 35,232) ),
Label( 'parking' , 9 , 255 , 'flat' , 1 , False , True , (250,170,160) ),
Label( 'rail track' , 10 , 255 , 'flat' , 1 , False , True , (230,150,140) ),
Label( 'building' , 11 , 5 , 'construction' , 2 , False , False , ( 70, 70, 70) ),
Label( 'wall' , 12 , 6 , 'construction' , 2 , False , False , (102,102,156) ),
Label( 'fence' , 13 , 7 , 'construction' , 2 , False , False , (190,153,153) ),
Label( 'guard rail' , 14 , 255 , 'construction' , 2 , False , True , (180,165,180) ),
Label( 'bridge' , 15 , 255 , 'construction' , 2 , False , True , (150,100,100) ),
Label( 'tunnel' , 16 , 255 , 'construction' , 2 , False , True , (150,120, 90) ),
Label( 'pole' , 17 , 8 , 'object' , 3 , False , False , (153,153,153) ),
Label( 'polegroup' , 18 , 255 , 'object' , 3 , False , True , (153,153,153) ),
Label( 'traffic light' , 19 , 9 , 'object' , 3 , False , False , (250,170, 30) ),
Label( 'traffic sign' , 20 , 10 , 'object' , 3 , False , False , (220,220, 0) ),
Label( 'vegetation' , 21 , 2 , 'nature' , 4 , False , False , (107,142, 35) ),
Label( 'terrain' , 22 , 3 , 'nature' , 4 , False , False , (152,251,152) ),
Label( 'sky' , 23 , 4 , 'sky' , 5 , False , False , ( 70,130,180) ),
Label( 'person' , 24 , 11 , 'human' , 6 , True , False , (220, 20, 60) ),
Label( 'rider' , 25 , 12 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'car' , 26 , 13 , 'vehicle' , 7 , True , False , ( 0, 0,142) ),
Label( 'truck' , 27 , 14 , 'vehicle' , 7 , True , False , ( 0, 0, 70) ),
Label( 'bus' , 28 , 15 , 'vehicle' , 7 , True , False , ( 0, 60,100) ),
Label( 'caravan' , 29 , 255 , 'vehicle' , 7 , True , True , ( 0, 0, 90) ),
Label( 'trailer' , 30 , 255 , 'vehicle' , 7 , True , True , ( 0, 0,110) ),
Label( 'train' , 31 , 16 , 'vehicle' , 7 , True , False , ( 0, 80,100) ),
Label( 'motorcycle' , 32 , 17 , 'vehicle' , 7 , True , False , ( 0, 0,230) ),
Label( 'bicycle' , 33 , 18 , 'vehicle' , 7 , True , False , (119, 11, 32) ),
Label( 'license plate' , -1 , 255 , 'vehicle' , 7 , False , True , ( 0, 0,142) ),
])
labels_kitti_seg = ClassDefinitions([
# name id trainId category catId hasInstances ignoreInEval color
Label( 'unlabeled' , 0 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'ego vehicle' , 1 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'rectification border' , 2 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'out of roi' , 3 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'static' , 4 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'dynamic' , 5 , 255 , 'void' , 0 , False , True , (111, 74, 0) ),
Label( 'ground' , 6 , 255 , 'void' , 0 , False , True , ( 81, 0, 81) ),
Label( 'road' , 7 , 0 , 'flat' , 1 , False , False , (128, 64,128) ),
Label( 'sidewalk' , 8 , 1 , 'flat' , 1 , False , False , (244, 35,232) ),
Label( 'parking' , 9 , 255 , 'flat' , 1 , False , True , (250,170,160) ),
Label( 'rail track' , 10 , 255 , 'flat' , 1 , False , True , (230,150,140) ),
Label( 'building' , 11 , 2 , 'construction' , 2 , False , False , ( 70, 70, 70) ),
Label( 'wall' , 12 , 3 , 'construction' , 2 , False , False , (102,102,156) ),
Label( 'fence' , 13 , 4 , 'construction' , 2 , False , False , (190,153,153) ),
Label( 'guard rail' , 14 , 255 , 'construction' , 2 , False , True , (180,165,180) ),
Label( 'bridge' , 15 , 255 , 'construction' , 2 , False , True , (150,100,100) ),
Label( 'tunnel' , 16 , 255 , 'construction' , 2 , False , True , (150,120, 90) ),
Label( 'pole' , 17 , 5 , 'object' , 3 , False , False , (153,153,153) ),
Label( 'polegroup' , 18 , 255 , 'object' , 3 , False , True , (153,153,153) ),
Label( 'traffic light' , 19 , 6 , 'object' , 3 , False , False , (250,170, 30) ),
Label( 'traffic sign' , 20 , 7 , 'object' , 3 , False , False , (220,220, 0) ),
Label( 'vegetation' , 21 , 8 , 'nature' , 4 , False , False , (107,142, 35) ),
Label( 'terrain' , 22 , 9 , 'nature' , 4 , False , False , (152,251,152) ),
Label( 'sky' , 23 , 10 , 'sky' , 5 , False , False , ( 70,130,180) ),
Label( 'person' , 24 , 11 , 'human' , 6 , True , False , (220, 20, 60) ),
Label( 'rider' , 25 , 12 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'car' , 26 , 13 , 'vehicle' , 7 , True , False , ( 0, 0,142) ),
Label( 'truck' , 27 , 14 , 'vehicle' , 7 , True , False , ( 0, 0, 70) ),
Label( 'bus' , 28 , 15 , 'vehicle' , 7 , True , False , ( 0, 60,100) ),
Label( 'caravan' , 29 , 255 , 'vehicle' , 7 , True , True , ( 0, 0, 90) ),
Label( 'trailer' , 30 , 255 , 'vehicle' , 7 , True , True , ( 0, 0,110) ),
Label( 'train' , 31 , 16 , 'vehicle' , 7 , True , False , ( 0, 80,100) ),
Label( 'motorcycle' , 32 , 17 , 'vehicle' , 7 , True , False , ( 0, 0,230) ),
Label( 'bicycle' , 33 , 18 , 'vehicle' , 7 , True , False , (119, 11, 32) ),
Label( 'license plate' , -1 , 255 , 'vehicle' , 7 , False , True , ( 0, 0,142) ),
])
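# Every table above maps a raw label `id` to a `trainId`. A compact way to apply
# such a mapping to a whole segmentation mask is a dense lookup table. The helper
# below is a minimal sketch, not part of the original module; it assumes only
# plain (id, trainId) pairs rather than the `Label`/`ClassDefinitions` API.

```python
def build_id_to_train_id(pairs, max_id=255, ignore_index=255):
    """Build a dense lookup table lut[id] -> trainId.

    pairs: iterable of (id, trainId) tuples. Ids outside [0, max_id]
    (e.g. the -1 used for 'license plate') are skipped, so they fall
    back to ignore_index.
    """
    lut = [ignore_index] * (max_id + 1)
    for label_id, train_id in pairs:
        if 0 <= label_id <= max_id:
            lut[label_id] = train_id
    return lut
```

# With numpy, `np.asarray(lut, dtype=np.uint8)[mask]` then remaps an entire
# raw-id mask to train ids in one vectorized indexing step.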
labels_apollo_seg = ClassDefinitions([
# name                     id    trainId  category            catId  hasInstances  ignoreInEval  color
Label('others'             ,   0 , 255 , 'others'           , 0 , False , True  , 0x000000 ),
Label('rover'              ,   1 , 255 , 'others'           , 0 , False , True  , 0x000000 ),
Label('sky'                ,  17 ,   0 , 'sky'              , 1 , False , False , 0x4682B4 ),
Label('car'                ,  33 ,   1 , 'movable object'   , 2 , True  , False , 0x00008E ),
Label('car_groups'         , 161 ,   1 , 'movable object'   , 2 , True  , False , 0x00008E ),
Label('motorbicycle'       ,  34 ,   2 , 'movable object'   , 2 , True  , False , 0x0000E6 ),
Label('motorbicycle_group' , 162 ,   2 , 'movable object'   , 2 , True  , False , 0x0000E6 ),
Label('bicycle'            ,  35 ,   3 , 'movable object'   , 2 , True  , False , 0x770B20 ),
Label('bicycle_group'      , 163 ,   3 , 'movable object'   , 2 , True  , False , 0x770B20 ),
Label('person'             ,  36 ,   4 , 'movable object'   , 2 , True  , False , 0x0080c0 ),
Label('person_group'       , 164 ,   4 , 'movable object'   , 2 , True  , False , 0x0080c0 ),
Label('rider'              ,  37 ,   5 , 'movable object'   , 2 , True  , False , 0x804080 ),
Label('rider_group'        , 165 ,   5 , 'movable object'   , 2 , True  , False , 0x804080 ),
Label('truck'              ,  38 ,   6 , 'movable object'   , 2 , True  , False , 0x8000c0 ),
Label('truck_group'        , 166 ,   6 , 'movable object'   , 2 , True  , False , 0x8000c0 ),
Label('bus'                ,  39 ,   7 , 'movable object'   , 2 , True  , False , 0xc00040 ),
Label('bus_group'          , 167 ,   7 , 'movable object'   , 2 , True  , False , 0xc00040 ),
Label('tricycle'           ,  40 ,   8 , 'movable object'   , 2 , True  , False , 0x8080c0 ),
Label('tricycle_group'     , 168 ,   8 , 'movable object'   , 2 , True  , False , 0x8080c0 ),
Label('road'               ,  49 ,   9 , 'flat'             , 3 , False , False , 0xc080c0 ),
Label('siderwalk'          ,  50 ,  10 , 'flat'             , 3 , False , False , 0xc08040 ),  # sic: spelling from the upstream ApolloScape label set
Label('traffic_cone'       ,  65 ,  11 , 'road obstacle'    , 4 , False , False , 0x000040 ),
Label('road_pile'          ,  66 ,  12 , 'road obstacle'    , 4 , False , False , 0x0000c0 ),
Label('fence'              ,  67 ,  13 , 'road obstacle'    , 4 , False , False , 0x404080 ),
Label('traffic_light'      ,  81 ,  14 , 'roadside object'  , 5 , False , False , 0xc04080 ),
Label('pole'               ,  82 ,  15 , 'roadside object'  , 5 , False , False , 0xc08080 ),
Label('traffic_sign'       ,  83 ,  16 , 'roadside object'  , 5 , False , False , 0x004040 ),
Label('wall'               ,  84 ,  17 , 'roadside object'  , 5 , False , False , 0xc0c080 ),
Label('dustbin'            ,  85 ,  18 , 'roadside object'  , 5 , False , False , 0x4000c0 ),
Label('billboard'          ,  86 ,  19 , 'roadside object'  , 5 , False , False , 0xc000c0 ),
Label('building'           ,  97 ,  20 , 'building'         , 6 , False , False , 0xc00080 ),
Label('bridge'             ,  98 , 255 , 'building'         , 6 , False , True  , 0x808000 ),
Label('tunnel'             ,  99 , 255 , 'building'         , 6 , False , True  , 0x800000 ),
Label('overpass'           , 100 , 255 , 'building'         , 6 , False , True  , 0x408040 ),
Label('vegatation'         , 113 ,  21 , 'nature'           , 7 , False , False , 0x808040 ),  # sic: spelling from the upstream ApolloScape label set
Label('unlabeled'          , 255 , 255 , 'unlabeled'        , 8 , False , True  , 0xFFFFFF ),
])
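# Unlike the other tables, the ApolloScape colors above are packed 0xRRGGBB
# integers rather than (r, g, b) tuples. A small hypothetical helper (not part
# of the original module) to unpack them into the tuple form used elsewhere:

```python
def hex_to_rgb(value):
    """Split a packed 0xRRGGBB color into an (r, g, b) tuple."""
    return ((value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF)
```

# e.g. hex_to_rgb(0x4682B4) gives (70, 130, 180), the same steel blue the
# Cityscapes tables use for 'sky'.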
labels_mapillary_seg = ClassDefinitions([
# name id trainId category catId hasInstances ignoreInEval color
Label( 'car mount' , 0 , 255 , 'void' , 0 , False , True , ( 32, 32, 32) ),
Label( 'ego vehicle' , 1 , 255 , 'void' , 0 , False , True , (120, 10, 10) ),
Label( 'unlabeled' , 2 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'barrier' , 3 , 0 , 'construction' , 2 , False , False , ( 90,120,150) ),
Label( 'bike lane' , 4 , 1 , 'construction' , 2 , False , False , (128, 64,255) ),
Label( 'bridge' , 5 , 2 , 'construction' , 2 , False , False , (150,100,100) ),
Label( 'building' , 6 , 3 , 'construction' , 2 , False , False , ( 70, 70, 70) ),
Label( 'crosswalk - plain' , 7 , 4 , 'construction' , 2 , True , False , (140,140,200) ),
Label( 'curb' , 8 , 5 , 'construction' , 2 , False , False , (196,196,196) ),
Label( 'curb cut' , 9 , 6 , 'construction' , 2 , False , False , (170,170,170) ),
Label( 'fence' , 10 , 7 , 'construction' , 2 , False , False , (190,153,153) ),
Label( 'guard rail' , 11 , 8 , 'construction' , 2 , False , False , (180,165,180) ),
Label( 'parking' , 12 , 9 , 'construction' , 2 , False , False , (250,170,160) ),
Label( 'pedestrian area' , 13 , 10 , 'construction' , 2 , False , False , ( 96, 96, 96) ),
Label( 'rail track' , 14 , 11 , 'construction' , 2 , False , False , (230,150,140) ),
Label( 'road' , 15 , 12 , 'construction' , 2 , False , False , (128, 64,128) ),
Label( 'service lane' , 16 , 13 , 'construction' , 2 , False , False , (110,110,110) ),
Label( 'sidewalk' , 17 , 14 , 'construction' , 2 , False , False , (244, 35,232) ),
Label( 'tunnel' , 18 , 15 , 'construction' , 2 , False , False , (150,120, 90) ),
Label( 'wall' , 19 , 16 , 'construction' , 2 , False , False , (102,102,156) ),
Label( 'banner' , 20 , 17 , 'object' , 3 , True , False , (255,255,128) ),
Label( 'bench' , 21 , 18 , 'object' , 3 , True , False , (250, 0, 30) ),
Label( 'bicycle' , 22 , 19 , 'object' , 3 , True , False , (119, 11, 32) ),
Label( 'bike rack' , 23 , 20 , 'object' , 3 , True , False , (100,140,180) ),
Label( 'billboard' , 24 , 21 , 'object' , 3 , True , False , (220,220,220) ),
Label( 'boat' , 25 , 22 , 'object' , 3 , True , False , (150, 0,255) ),
Label( 'bus' , 26 , 23 , 'object' , 3 , True , False , ( 0, 60,100) ),
Label( 'cctv camera' , 27 , 24 , 'object' , 3 , True , False , (222, 40, 40) ),
Label( 'car' , 28 , 25 , 'object' , 3 , True , False , ( 0, 0,142) ),
Label( 'caravan' , 29 , 26 , 'object' , 3 , True , False , ( 0, 0, 90) ),
Label( 'catch basin' , 30 , 27 , 'object' , 3 , True , False , (220,128,128) ),
Label( 'fire hydrant' , 31 , 28 , 'object' , 3 , True , False , (100,170, 30) ),
Label( 'junction box' , 32 , 29 , 'object' , 3 , True , False , ( 40, 40, 40) ),
Label( 'mailbox' , 33 , 30 , 'object' , 3 , True , False , ( 33, 33, 33) ),
Label( 'manhole' , 34 , 31 , 'object' , 3 , True , False , (100,128,160) ),
Label( 'motorcycle' , 35 , 32 , 'object' , 3 , True , False , ( 0, 0,230) ),
Label( 'on rails' , 36 , 33 , 'object' , 3 , False , False , ( 0, 80,100) ),
Label( 'other vehicle' , 37 , 34 , 'object' , 3 , True , False , (128, 64, 64) ),
Label( 'phone booth' , 38 , 35 , 'object' , 3 , True , False , (142, 0, 0) ),
Label( 'pole' , 39 , 36 , 'object' , 3 , True , False , (153,153,153) ),
Label( 'pothole' , 40 , 37 , 'object' , 3 , False , False , ( 70,100,150) ),
Label( 'street light' , 41 , 38 , 'object' , 3 , True , False , (210,170,100) ),
Label( 'traffic light' , 42 , 39 , 'object' , 3 , True , False , (250,170, 30) ),
Label( 'traffic sign (back)' , 43 , 40 , 'object' , 3 , True , False , (192,192,192) ),
Label( 'traffic sign (front)' , 44 , 41 , 'object' , 3 , True , False , (220,220, 0) ),
Label( 'traffic sign frame' , 45 , 42 , 'object' , 3 , True , False , (128,128,128) ),
Label( 'trailer' , 46 , 43 , 'object' , 3 , True , False , ( 0, 0,110) ),
Label( 'trash can' , 47 , 44 , 'object' , 3 , True , False , (140,140, 20) ),
Label( 'truck' , 48 , 45 , 'object' , 3 , True , False , ( 0, 0, 70) ),
Label( 'utility pole' , 49 , 46 , 'object' , 3 , True , False , ( 0, 0, 80) ),
Label( 'wheeled slow' , 50 , 47 , 'object' , 3 , True , False , ( 0, 0,192) ),
Label( 'mountain' , 51 , 48 , 'nature' , 4 , False , False , ( 64,170, 64) ),
Label( 'sand' , 52 , 49 , 'nature' , 4 , False , False , (230,160, 50) ),
Label( 'sky' , 53 , 50 , 'nature' , 4 , False , False , ( 70,130,180) ),
Label( 'snow' , 54 , 51 , 'nature' , 4 , False , False , (190,255,255) ),
Label( 'terrain' , 55 , 52 , 'nature' , 4 , False , False , (152,251,152) ),
Label( 'vegetation' , 56 , 53 , 'nature' , 4 , False , False , (107,142, 35) ),
Label( 'water' , 57 , 54 , 'nature' , 4 , False , False , ( 0,170, 30) ),
Label( 'bicyclist' , 58 , 55 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'motorcyclist' , 59 , 56 , 'human' , 6 , True , False , (255, 0,100) ),
Label( 'other rider' , 60 , 57 , 'human' , 6 , True , False , (255, 0,200) ),
Label( 'person' , 61 , 58 , 'human' , 6 , True , False , (220, 20, 60) ),
Label( 'lane marking - crosswalk' , 62 , 59 , 'marking' , 8 , True , False , (200,128,128) ),
Label( 'lane marking - general' , 63 , 60 , 'marking' , 8 , False , False , (255,255,255) ),
Label( 'bird' , 64 , 61 , 'animal' , 9 , True , False , (165, 42, 42) ),
Label( 'ground animal' , 65 , 62 , 'animal' , 9 , True , False , ( 0,192, 0) ),
])
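# For visualizing predictions, the (trainId, color) columns above can be turned
# into a palette keyed by trainId. This is a sketch (not part of the original
# module) that assumes plain (trainId, color) pairs; when several raw labels
# share one train class, the first color seen wins.

```python
def build_train_id_palette(pairs, ignore_index=255):
    """Map trainId -> display color from (trainId, (r, g, b)) pairs.

    Entries whose trainId equals ignore_index are skipped; duplicate
    trainIds keep the first color encountered.
    """
    palette = {}
    for train_id, color in pairs:
        if train_id == ignore_index or train_id in palette:
            continue
        palette[train_id] = color
    return palette
```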
labels_mapillary_seg_cityscapes = ClassDefinitions([
# name id trainId category catId hasInstances ignoreInEval color
Label( 'car mount' , 0 , 255 , 'void' , 0 , False , True , ( 32, 32, 32) ),
Label( 'ego vehicle' , 1 , 255 , 'void' , 0 , False , True , (120, 10, 10) ),
Label( 'unlabeled' , 2 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'barrier' , 3 , 255 , 'construction' , 2 , False , False , ( 90,120,150) ),
Label( 'bike lane' , 4 , 1 , 'construction' , 2 , False , False , (244, 35,232) ), # fits, but not 100% unambiguous, e.g. on-road cycle lanes
Label( 'bridge' , 5 , 255 , 'construction' , 2 , False , False , (150,100,100) ),
Label( 'building' , 6 , 5 , 'construction' , 2 , False , False , ( 70, 70, 70) ),
Label( 'crosswalk - plain' , 7 , 0 , 'construction' , 2 , True , False , (128, 64,128) ),
Label( 'curb' , 8 , 1 , 'construction' , 2 , False , False , (244, 35,232) ), # fits
Label( 'curb cut' , 9 , 1 , 'construction' , 2 , False , False , (244, 35,232) ), # fits
Label( 'fence' , 10 , 7 , 'construction' , 2 , False , False , (190,153,153) ),
Label( 'guard rail' , 11 , 255 , 'construction' , 2 , False , False , (180,165,180) ),
Label( 'parking' , 12 , 255 , 'construction' , 2 , False , False , (250,170,160) ),
Label( 'pedestrian area' , 13 , 255 , 'construction' , 2 , False , False , ( 96, 96, 96) ),
Label( 'rail track' , 14 , 255 , 'construction' , 2 , False , False , (230,150,140) ),
Label( 'road' , 15 , 0 , 'construction' , 2 , False , False , (128, 64,128) ),
Label( 'service lane' , 16 , 0 , 'construction' , 2 , False , False , (128, 64,128) ), # fits, I think
Label( 'sidewalk' , 17 , 1 , 'construction' , 2 , False , False , (244, 35,232) ),
Label( 'tunnel' , 18 , 255 , 'construction' , 2 , False , False , (150,120, 90) ),
Label( 'wall' , 19 , 6 , 'construction' , 2 , False , False , (102,102,156) ),
Label( 'banner' , 20 , 255 , 'object' , 3 , True , False , (255,255,128) ),
Label( 'bench' , 21 , 255 , 'object' , 3 , True , False , (250, 0, 30) ),
Label( 'bicycle' , 22 , 18 , 'object' , 3 , True , False , (119, 11, 32) ),
Label( 'bike rack' , 23 , 255 , 'object' , 3 , True , False , (100,140,180) ),
Label( 'billboard' , 24 , 255 , 'object' , 3 , True , False , (220,220,220) ),
Label( 'boat' , 25 , 255 , 'object' , 3 , True , False , (150, 0,255) ),
Label( 'bus' , 26 , 15 , 'object' , 3 , True , False , ( 0, 60,100) ),
Label( 'cctv camera' , 27 , 255 , 'object' , 3 , True , False , (222, 40, 40) ),
Label( 'car' , 28 , 13 , 'object' , 3 , True , False , ( 0, 0,142) ),
Label( 'caravan' , 29 , 255 , 'object' , 3 , True , False , ( 0, 0, 90) ),
Label( 'catch basin' , 30 , 255 , 'object' , 3 , True , False , (220,128,128) ),
Label( 'fire hydrant' , 31 , 255 , 'object' , 3 , True , False , (100,170, 30) ),
Label( 'junction box' , 32 , 255 , 'object' , 3 , True , False , ( 40, 40, 40) ),
Label( 'mailbox' , 33 , 255 , 'object' , 3 , True , False , ( 33, 33, 33) ),
Label( 'manhole' , 34 , 0 , 'object' , 3 , True , False , (128, 64,128) ),
Label( 'motorcycle' , 35 , 17 , 'object' , 3 , True , False , ( 0, 0,230) ),
Label( 'on rails' , 36 , 16 , 'object' , 3 , False , False , ( 0, 80,100) ),
Label( 'other vehicle' , 37 , 255 , 'object' , 3 , True , False , (128, 64, 64) ),
Label( 'phone booth' , 38 , 255 , 'object' , 3 , True , False , (142, 0, 0) ),
Label( 'pole' , 39 , 8 , 'object' , 3 , True , False , (153,153,153) ),
Label( 'pothole' , 40 , 0 , 'object' , 3 , False , False , (128, 64,128) ),
Label( 'street light' , 41 , 255 , 'object' , 3 , True , False , (210,170,100) ),
Label( 'traffic light' , 42 , 9 , 'object' , 3 , True , False , (250,170, 30) ),
Label( 'traffic sign (back)' , 43 , 255 , 'object' , 3 , True , False , (192,192,192) ),
Label( 'traffic sign (front)' , 44 , 10 , 'object' , 3 , True , False , (220,220, 0) ),
Label( 'traffic sign frame' , 45 , 255 , 'object' , 3 , True , False , (128,128,128) ),
Label( 'trailer' , 46 , 255 , 'object' , 3 , True , False , ( 0, 0,110) ),
Label( 'trash can' , 47 , 255 , 'object' , 3 , True , False , (140,140, 20) ),
Label( 'truck' , 48 , 14 , 'object' , 3 , True , False , ( 0, 0, 70) ),
Label( 'utility pole' , 49 , 255 , 'object' , 3 , True , False , ( 0, 0, 80) ),
Label( 'wheeled slow' , 50 , 255 , 'object' , 3 , True , False , ( 0, 0,192) ),
Label( 'mountain' , 51 , 255 , 'nature' , 4 , False , False , ( 64,170, 64) ),
Label( 'sand' , 52 , 255 , 'nature' , 4 , False , False , (230,160, 50) ),
Label( 'sky' , 53 , 4 , 'nature' , 4 , False , False , ( 70,130,180) ),
Label( 'snow' , 54 , 255 , 'nature' , 4 , False , False , (190,255,255) ),
Label( 'terrain' , 55 , 3 , 'nature' , 4 , False , False , (152,251,152) ),
Label( 'vegetation' , 56 , 2 , 'nature' , 4 , False , False , (107,142, 35) ),
Label( 'water' , 57 , 255 , 'nature' , 4 , False , False , ( 0,170, 30) ),
Label( 'bicyclist' , 58 , 12 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'motorcyclist' , 59 , 12 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'other rider' , 60 , 12 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'person' , 61 , 11 , 'human' , 6 , True , False , (220, 20, 60) ),
Label( 'lane marking - crosswalk' , 62 , 0 , 'marking' , 8 , True , False , (128, 64,128) ),
Label( 'lane marking - general' , 63 , 0 , 'marking' , 8 , False , False , (128, 64,128) ),
Label( 'bird' , 64 , 255 , 'animal' , 9 , True , False , (165, 42, 42) ),
Label( 'ground animal' , 65 , 255 , 'animal' , 9 , True , False , ( 0,192, 0) ),
])
labels_mapillary_seg_cityscapes_train1 = ClassDefinitions([
# name id trainId category catId hasInstances ignoreInEval color
Label( 'car mount' , 0 , 255 , 'void' , 0 , False , True , ( 32, 32, 32) ),
Label( 'ego vehicle' , 1 , 255 , 'void' , 0 , False , True , (120, 10, 10) ),
Label( 'unlabeled' , 2 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'barrier' , 3 , 255 , 'construction' , 2 , False , False , ( 90,120,150) ),
Label( 'bike lane' , 4 , 1 , 'construction' , 2 , False , False , (244, 35,232) ), # fits, but not 100% unambiguous, e.g. an on-road cycle lane
Label( 'bridge' , 5 , 255 , 'construction' , 2 , False , False , (150,100,100) ),
Label( 'building' , 6 , 255 , 'construction' , 2 , False , False , ( 70, 70, 70) ),
Label( 'crosswalk - plain' , 7 , 0 , 'construction' , 2 , True , False , (128, 64,128) ),
Label( 'curb' , 8 , 1 , 'construction' , 2 , False , False , (244, 35,232) ), # fits
Label( 'curb cut' , 9 , 1 , 'construction' , 2 , False , False , (244, 35,232) ), # fits
Label( 'fence' , 10 , 255 , 'construction' , 2 , False , False , (190,153,153) ),
Label( 'guard rail' , 11 , 255 , 'construction' , 2 , False , False , (180,165,180) ),
Label( 'parking' , 12 , 255 , 'construction' , 2 , False , False , (250,170,160) ),
Label( 'pedestrian area' , 13 , 255 , 'construction' , 2 , False , False , ( 96, 96, 96) ),
Label( 'rail track' , 14 , 255 , 'construction' , 2 , False , False , (230,150,140) ),
Label( 'road' , 15 , 0 , 'construction' , 2 , False , False , (128, 64,128) ),
Label( 'service lane' , 16 , 0 , 'construction' , 2 , False , False , (128, 64,128) ), # fits, I think
Label( 'sidewalk' , 17 , 1 , 'construction' , 2 , False , False , (244, 35,232) ),
Label( 'tunnel' , 18 , 255 , 'construction' , 2 , False , False , (150,120, 90) ),
Label( 'wall' , 19 , 255 , 'construction' , 2 , False , False , (102,102,156) ),
Label( 'banner' , 20 , 255 , 'object' , 3 , True , False , (255,255,128) ),
Label( 'bench' , 21 , 255 , 'object' , 3 , True , False , (250, 0, 30) ),
Label( 'bicycle' , 22 , 255 , 'object' , 3 , True , False , (119, 11, 32) ),
Label( 'bike rack' , 23 , 255 , 'object' , 3 , True , False , (100,140,180) ),
Label( 'billboard' , 24 , 255 , 'object' , 3 , True , False , (220,220,220) ),
Label( 'boat' , 25 , 255 , 'object' , 3 , True , False , (150, 0,255) ),
Label( 'bus' , 26 , 255 , 'object' , 3 , True , False , ( 0, 60,100) ),
Label( 'cctv camera' , 27 , 255 , 'object' , 3 , True , False , (222, 40, 40) ),
Label( 'car' , 28 , 255 , 'object' , 3 , True , False , ( 0, 0,142) ),
Label( 'caravan' , 29 , 255 , 'object' , 3 , True , False , ( 0, 0, 90) ),
Label( 'catch basin' , 30 , 255 , 'object' , 3 , True , False , (220,128,128) ),
Label( 'fire hydrant' , 31 , 255 , 'object' , 3 , True , False , (100,170, 30) ),
Label( 'junction box' , 32 , 255 , 'object' , 3 , True , False , ( 40, 40, 40) ),
Label( 'mailbox' , 33 , 255 , 'object' , 3 , True , False , ( 33, 33, 33) ),
Label( 'manhole' , 34 , 0 , 'object' , 3 , True , False , (128, 64,128) ),
Label( 'motorcycle' , 35 , 255 , 'object' , 3 , True , False , ( 0, 0,230) ),
Label( 'on rails' , 36 , 255 , 'object' , 3 , False , False , ( 0, 80,100) ),
Label( 'other vehicle' , 37 , 255 , 'object' , 3 , True , False , (128, 64, 64) ),
Label( 'phone booth' , 38 , 255 , 'object' , 3 , True , False , (142, 0, 0) ),
Label( 'pole' , 39 , 255 , 'object' , 3 , True , False , (153,153,153) ),
Label( 'pothole' , 40 , 0 , 'object' , 3 , False , False , (128, 64,128) ),
Label( 'street light' , 41 , 255 , 'object' , 3 , True , False , (210,170,100) ),
Label( 'traffic light' , 42 , 255 , 'object' , 3 , True , False , (250,170, 30) ),
Label( 'traffic sign (back)' , 43 , 255 , 'object' , 3 , True , False , (192,192,192) ),
Label( 'traffic sign (front)' , 44 , 255 , 'object' , 3 , True , False , (220,220, 0) ),
Label( 'traffic sign frame' , 45 , 255 , 'object' , 3 , True , False , (128,128,128) ),
Label( 'trailer' , 46 , 255 , 'object' , 3 , True , False , ( 0, 0,110) ),
Label( 'trash can' , 47 , 255 , 'object' , 3 , True , False , (140,140, 20) ),
Label( 'truck' , 48 , 255 , 'object' , 3 , True , False , ( 0, 0, 70) ),
Label( 'utility pole' , 49 , 255 , 'object' , 3 , True , False , ( 0, 0, 80) ),
Label( 'wheeled slow' , 50 , 255 , 'object' , 3 , True , False , ( 0, 0,192) ),
Label( 'mountain' , 51 , 255 , 'nature' , 4 , False , False , ( 64,170, 64) ),
Label( 'sand' , 52 , 255 , 'nature' , 4 , False , False , (230,160, 50) ),
Label( 'sky' , 53 , 4 , 'nature' , 4 , False , False , ( 70,130,180) ),
Label( 'snow' , 54 , 255 , 'nature' , 4 , False , False , (190,255,255) ),
Label( 'terrain' , 55 , 3 , 'nature' , 4 , False , False , (152,251,152) ),
Label( 'vegetation' , 56 , 2 , 'nature' , 4 , False , False , (107,142, 35) ),
Label( 'water' , 57 , 255 , 'nature' , 4 , False , False , ( 0,170, 30) ),
Label( 'bicyclist' , 58 , 255 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'motorcyclist' , 59 , 255 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'other rider' , 60 , 255 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'person' , 61 , 255 , 'human' , 6 , True , False , (220, 20, 60) ),
Label( 'lane marking - crosswalk' , 62 , 0 , 'marking' , 8 , True , False , (128, 64,128) ),
Label( 'lane marking - general' , 63 , 0 , 'marking' , 8 , False , False , (128, 64,128) ),
Label( 'bird' , 64 , 255 , 'animal' , 9 , True , False , (165, 42, 42) ),
Label( 'ground animal' , 65 , 255 , 'animal' , 9 , True , False , ( 0,192, 0) ),
])
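Each table above maps a raw label id to a trainId, with 255 marking classes that are ignored during training. A minimal sketch of how such a table can be applied to remap a segmentation mask, assuming NumPy and a simplified stand-in `Label` namedtuple (the field subset and the four sample labels are chosen for illustration only):

```python
from collections import namedtuple

import numpy as np

# Simplified stand-in for the Label tuples used above (hypothetical field subset).
Label = namedtuple('Label', ['name', 'id', 'trainId', 'category', 'catId',
                             'hasInstances', 'ignoreInEval', 'color'])

labels = [
    Label('road',      15, 0,   'construction', 2, False, False, (128,  64, 128)),
    Label('sidewalk',  17, 1,   'construction', 2, False, False, (244,  35, 232)),
    Label('sky',       53, 4,   'nature',       4, False, False, ( 70, 130, 180)),
    Label('unlabeled',  2, 255, 'void',         0, False, True,  (  0,   0,   0)),
]

# Build a dense lookup table: raw label id -> trainId (255 = ignore).
lut = np.full(256, 255, dtype=np.uint8)
for label in labels:
    lut[label.id] = label.trainId

# Remap a raw-id mask to trainIds in a single vectorized indexing step.
raw_mask = np.array([[15, 17], [53, 2]], dtype=np.uint8)
train_mask = lut[raw_mask]  # [[0, 1], [4, 255]]
```

A dense 256-entry lookup table lets the whole mask be remapped with one fancy-indexing operation instead of a per-pixel loop; any id without an entry falls through to the ignore value 255.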
labels_mapillary_seg_cityscapes_train2 = ClassDefinitions([
# name id trainId category catId hasInstances ignoreInEval color
Label( 'car mount' , 0 , 255 , 'void' , 0 , False , True , ( 32, 32, 32) ),
Label( 'ego vehicle' , 1 , 255 , 'void' , 0 , False , True , (120, 10, 10) ),
Label( 'unlabeled' , 2 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'barrier' , 3 , 255 , 'construction' , 2 , False , False , ( 90,120,150) ),
Label( 'bike lane' , 4 , 255 , 'construction' , 2 , False , False , (244, 35,232) ), # fits, but not 100% unambiguous, e.g. an on-road cycle lane
Label( 'bridge' , 5 , 255 , 'construction' , 2 , False , False , (150,100,100) ),
Label( 'building' , 6 , 5 , 'construction' , 2 , False , False , ( 70, 70, 70) ),
Label( 'crosswalk - plain' , 7 , 255 , 'construction' , 2 , True , False , (128, 64,128) ),
Label( 'curb' , 8 , 255 , 'construction' , 2 , False , False , (244, 35,232) ), # fits
Label( 'curb cut' , 9 , 255 , 'construction' , 2 , False , False , (244, 35,232) ), # fits
Label( 'fence' , 10 , 7 , 'construction' , 2 , False , False , (190,153,153) ),
Label( 'guard rail' , 11 , 255 , 'construction' , 2 , False , False , (180,165,180) ),
Label( 'parking' , 12 , 255 , 'construction' , 2 , False , False , (250,170,160) ),
Label( 'pedestrian area' , 13 , 255 , 'construction' , 2 , False , False , ( 96, 96, 96) ),
Label( 'rail track' , 14 , 255 , 'construction' , 2 , False , False , (230,150,140) ),
Label( 'road' , 15 , 255 , 'construction' , 2 , False , False , (128, 64,128) ),
Label( 'service lane' , 16 , 255 , 'construction' , 2 , False , False , (128, 64,128) ), # fits, I think
Label( 'sidewalk' , 17 , 255 , 'construction' , 2 , False , False , (244, 35,232) ),
Label( 'tunnel' , 18 , 255 , 'construction' , 2 , False , False , (150,120, 90) ),
Label( 'wall' , 19 , 6 , 'construction' , 2 , False , False , (102,102,156) ),
Label( 'banner' , 20 , 255 , 'object' , 3 , True , False , (255,255,128) ),
Label( 'bench' , 21 , 255 , 'object' , 3 , True , False , (250, 0, 30) ),
Label( 'bicycle' , 22 , 255 , 'object' , 3 , True , False , (119, 11, 32) ),
Label( 'bike rack' , 23 , 255 , 'object' , 3 , True , False , (100,140,180) ),
Label( 'billboard' , 24 , 255 , 'object' , 3 , True , False , (220,220,220) ),
Label( 'boat' , 25 , 255 , 'object' , 3 , True , False , (150, 0,255) ),
Label( 'bus' , 26 , 255 , 'object' , 3 , True , False , ( 0, 60,100) ),
Label( 'cctv camera' , 27 , 255 , 'object' , 3 , True , False , (222, 40, 40) ),
Label( 'car' , 28 , 255 , 'object' , 3 , True , False , ( 0, 0,142) ),
Label( 'caravan' , 29 , 255 , 'object' , 3 , True , False , ( 0, 0, 90) ),
Label( 'catch basin' , 30 , 255 , 'object' , 3 , True , False , (220,128,128) ),
Label( 'fire hydrant' , 31 , 255 , 'object' , 3 , True , False , (100,170, 30) ),
Label( 'junction box' , 32 , 255 , 'object' , 3 , True , False , ( 40, 40, 40) ),
Label( 'mailbox' , 33 , 255 , 'object' , 3 , True , False , ( 33, 33, 33) ),
Label( 'manhole' , 34 , 255 , 'object' , 3 , True , False , (128, 64,128) ),
Label( 'motorcycle' , 35 , 255 , 'object' , 3 , True , False , ( 0, 0,230) ),
Label( 'on rails' , 36 , 255 , 'object' , 3 , False , False , ( 0, 80,100) ),
Label( 'other vehicle' , 37 , 255 , 'object' , 3 , True , False , (128, 64, 64) ),
Label( 'phone booth' , 38 , 255 , 'object' , 3 , True , False , (142, 0, 0) ),
Label( 'pole' , 39 , 8 , 'object' , 3 , True , False , (153,153,153) ),
Label( 'pothole' , 40 , 255 , 'object' , 3 , False , False , (128, 64,128) ),
Label( 'street light' , 41 , 255 , 'object' , 3 , True , False , (210,170,100) ),
Label( 'traffic light' , 42 , 9 , 'object' , 3 , True , False , (250,170, 30) ),
Label( 'traffic sign (back)' , 43 , 255 , 'object' , 3 , True , False , (192,192,192) ),
Label( 'traffic sign (front)' , 44 , 10 , 'object' , 3 , True , False , (220,220, 0) ),
Label( 'traffic sign frame' , 45 , 255 , 'object' , 3 , True , False , (128,128,128) ),
Label( 'trailer' , 46 , 255 , 'object' , 3 , True , False , ( 0, 0,110) ),
Label( 'trash can' , 47 , 255 , 'object' , 3 , True , False , (140,140, 20) ),
Label( 'truck' , 48 , 255 , 'object' , 3 , True , False , ( 0, 0, 70) ),
Label( 'utility pole' , 49 , 255 , 'object' , 3 , True , False , ( 0, 0, 80) ),
Label( 'wheeled slow' , 50 , 255 , 'object' , 3 , True , False , ( 0, 0,192) ),
Label( 'mountain' , 51 , 255 , 'nature' , 4 , False , False , ( 64,170, 64) ),
Label( 'sand' , 52 , 255 , 'nature' , 4 , False , False , (230,160, 50) ),
Label( 'sky' , 53 , 255 , 'nature' , 4 , False , False , ( 70,130,180) ),
Label( 'snow' , 54 , 255 , 'nature' , 4 , False , False , (190,255,255) ),
Label( 'terrain' , 55 , 255 , 'nature' , 4 , False , False , (152,251,152) ),
Label( 'vegetation' , 56 , 255 , 'nature' , 4 , False , False , (107,142, 35) ),
Label( 'water' , 57 , 255 , 'nature' , 4 , False , False , ( 0,170, 30) ),
Label( 'bicyclist' , 58 , 255 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'motorcyclist' , 59 , 255 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'other rider' , 60 , 255 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'person' , 61 , 255 , 'human' , 6 , True , False , (220, 20, 60) ),
Label( 'lane marking - crosswalk' , 62 , 255 , 'marking' , 8 , True , False , (128, 64,128) ),
Label( 'lane marking - general' , 63 , 255 , 'marking' , 8 , False , False , (128, 64,128) ),
Label( 'bird' , 64 , 255 , 'animal' , 9 , True , False , (165, 42, 42) ),
Label( 'ground animal' , 65 , 255 , 'animal' , 9 , True , False , ( 0,192, 0) ),
])
labels_mapillary_seg_cityscapes_train2_only = ClassDefinitions([
# name id trainId category catId hasInstances ignoreInEval color
Label( 'car mount' , 0 , 255 , 'void' , 0 , False , True , ( 32, 32, 32) ),
Label( 'ego vehicle' , 1 , 255 , 'void' , 0 , False , True , (120, 10, 10) ),
Label( 'unlabeled' , 2 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'barrier' , 3 , 255 , 'construction' , 2 , False , False , ( 90,120,150) ),
Label( 'bike lane' , 4 , 255 , 'construction' , 2 , False , False , (244, 35,232) ), # fits, but not 100% unambiguous, e.g. an on-road cycle lane
Label( 'bridge' , 5 , 255 , 'construction' , 2 , False , False , (150,100,100) ),
Label( 'building' , 6 , 0 , 'construction' , 2 , False , False , ( 70, 70, 70) ),
Label( 'crosswalk - plain' , 7 , 255 , 'construction' , 2 , True , False , (128, 64,128) ),
Label( 'curb' , 8 , 255 , 'construction' , 2 , False , False , (244, 35,232) ), # fits
Label( 'curb cut' , 9 , 255 , 'construction' , 2 , False , False , (244, 35,232) ), # fits
Label( 'fence' , 10 , 2 , 'construction' , 2 , False , False , (190,153,153) ),
Label( 'guard rail' , 11 , 255 , 'construction' , 2 , False , False , (180,165,180) ),
Label( 'parking' , 12 , 255 , 'construction' , 2 , False , False , (250,170,160) ),
Label( 'pedestrian area' , 13 , 255 , 'construction' , 2 , False , False , ( 96, 96, 96) ),
Label( 'rail track' , 14 , 255 , 'construction' , 2 , False , False , (230,150,140) ),
Label( 'road' , 15 , 255 , 'construction' , 2 , False , False , (128, 64,128) ),
Label( 'service lane' , 16 , 255 , 'construction' , 2 , False , False , (128, 64,128) ), # fits, I think
Label( 'sidewalk' , 17 , 255 , 'construction' , 2 , False , False , (244, 35,232) ),
Label( 'tunnel' , 18 , 255 , 'construction' , 2 , False , False , (150,120, 90) ),
Label( 'wall' , 19 , 1 , 'construction' , 2 , False , False , (102,102,156) ),
Label( 'banner' , 20 , 255 , 'object' , 3 , True , False , (255,255,128) ),
Label( 'bench' , 21 , 255 , 'object' , 3 , True , False , (250, 0, 30) ),
Label( 'bicycle' , 22 , 255 , 'object' , 3 , True , False , (119, 11, 32) ),
Label( 'bike rack' , 23 , 255 , 'object' , 3 , True , False , (100,140,180) ),
Label( 'billboard' , 24 , 255 , 'object' , 3 , True , False , (220,220,220) ),
Label( 'boat' , 25 , 255 , 'object' , 3 , True , False , (150, 0,255) ),
Label( 'bus' , 26 , 255 , 'object' , 3 , True , False , ( 0, 60,100) ),
Label( 'cctv camera' , 27 , 255 , 'object' , 3 , True , False , (222, 40, 40) ),
Label( 'car' , 28 , 255 , 'object' , 3 , True , False , ( 0, 0,142) ),
Label( 'caravan' , 29 , 255 , 'object' , 3 , True , False , ( 0, 0, 90) ),
Label( 'catch basin' , 30 , 255 , 'object' , 3 , True , False , (220,128,128) ),
Label( 'fire hydrant' , 31 , 255 , 'object' , 3 , True , False , (100,170, 30) ),
Label( 'junction box' , 32 , 255 , 'object' , 3 , True , False , ( 40, 40, 40) ),
Label( 'mailbox' , 33 , 255 , 'object' , 3 , True , False , ( 33, 33, 33) ),
Label( 'manhole' , 34 , 255 , 'object' , 3 , True , False , (128, 64,128) ),
Label( 'motorcycle' , 35 , 255 , 'object' , 3 , True , False , ( 0, 0,230) ),
Label( 'on rails' , 36 , 255 , 'object' , 3 , False , False , ( 0, 80,100) ),
Label( 'other vehicle' , 37 , 255 , 'object' , 3 , True , False , (128, 64, 64) ),
Label( 'phone booth' , 38 , 255 , 'object' , 3 , True , False , (142, 0, 0) ),
Label( 'pole' , 39 , 3 , 'object' , 3 , True , False , (153,153,153) ),
Label( 'pothole' , 40 , 255 , 'object' , 3 , False , False , (128, 64,128) ),
Label( 'street light' , 41 , 255 , 'object' , 3 , True , False , (210,170,100) ),
Label( 'traffic light' , 42 , 4 , 'object' , 3 , True , False , (250,170, 30) ),
Label( 'traffic sign (back)' , 43 , 255 , 'object' , 3 , True , False , (192,192,192) ),
Label( 'traffic sign (front)' , 44 , 5 , 'object' , 3 , True , False , (220,220, 0) ),
Label( 'traffic sign frame' , 45 , 255 , 'object' , 3 , True , False , (128,128,128) ),
Label( 'trailer' , 46 , 255 , 'object' , 3 , True , False , ( 0, 0,110) ),
Label( 'trash can' , 47 , 255 , 'object' , 3 , True , False , (140,140, 20) ),
Label( 'truck' , 48 , 255 , 'object' , 3 , True , False , ( 0, 0, 70) ),
Label( 'utility pole' , 49 , 255 , 'object' , 3 , True , False , ( 0, 0, 80) ),
Label( 'wheeled slow' , 50 , 255 , 'object' , 3 , True , False , ( 0, 0,192) ),
Label( 'mountain' , 51 , 255 , 'nature' , 4 , False , False , ( 64,170, 64) ),
Label( 'sand' , 52 , 255 , 'nature' , 4 , False , False , (230,160, 50) ),
Label( 'sky' , 53 , 255 , 'nature' , 4 , False , False , ( 70,130,180) ),
Label( 'snow' , 54 , 255 , 'nature' , 4 , False , False , (190,255,255) ),
Label( 'terrain' , 55 , 255 , 'nature' , 4 , False , False , (152,251,152) ),
Label( 'vegetation' , 56 , 255 , 'nature' , 4 , False , False , (107,142, 35) ),
Label( 'water' , 57 , 255 , 'nature' , 4 , False , False , ( 0,170, 30) ),
Label( 'bicyclist' , 58 , 255 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'motorcyclist' , 59 , 255 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'other rider' , 60 , 255 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'person' , 61 , 255 , 'human' , 6 , True , False , (220, 20, 60) ),
Label( 'lane marking - crosswalk' , 62 , 255 , 'marking' , 8 , True , False , (128, 64,128) ),
Label( 'lane marking - general' , 63 , 255 , 'marking' , 8 , False , False , (128, 64,128) ),
Label( 'bird' , 64 , 255 , 'animal' , 9 , True , False , (165, 42, 42) ),
Label( 'ground animal' , 65 , 255 , 'animal' , 9 , True , False , ( 0,192, 0) ),
])
labels_mapillary_seg_cityscapes_train2_eval = ClassDefinitions([
# name id trainId category catId hasInstances ignoreInEval color
Label( 'car mount' , 0 , 255 , 'void' , 0 , False , True , ( 32, 32, 32) ),
Label( 'ego vehicle' , 1 , 255 , 'void' , 0 , False , True , (120, 10, 10) ),
Label( 'unlabeled' , 2 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'barrier' , 3 , 255 , 'construction' , 2 , False , False , ( 90,120,150) ),
Label( 'bike lane' , 4 , 1 , 'construction' , 2 , False , False , (244, 35,232) ), # fits, but not 100% unambiguous, e.g. an on-road cycle lane
Label( 'bridge' , 5 , 255 , 'construction' , 2 , False , False , (150,100,100) ),
Label( 'building' , 6 , 5 , 'construction' , 2 , False , False , ( 70, 70, 70) ),
Label( 'crosswalk - plain' , 7 , 0 , 'construction' , 2 , True , False , (128, 64,128) ),
Label( 'curb' , 8 , 1 , 'construction' , 2 , False , False , (244, 35,232) ), # fits
Label( 'curb cut' , 9 , 1 , 'construction' , 2 , False , False , (244, 35,232) ), # fits
Label( 'fence' , 10 , 7 , 'construction' , 2 , False , False , (190,153,153) ),
Label( 'guard rail' , 11 , 255 , 'construction' , 2 , False , False , (180,165,180) ),
Label( 'parking' , 12 , 255 , 'construction' , 2 , False , False , (250,170,160) ),
Label( 'pedestrian area' , 13 , 255 , 'construction' , 2 , False , False , ( 96, 96, 96) ),
Label( 'rail track' , 14 , 255 , 'construction' , 2 , False , False , (230,150,140) ),
Label( 'road' , 15 , 0 , 'construction' , 2 , False , False , (128, 64,128) ),
Label( 'service lane' , 16 , 0 , 'construction' , 2 , False , False , (128, 64,128) ), # fits, I think
Label( 'sidewalk' , 17 , 1 , 'construction' , 2 , False , False , (244, 35,232) ),
Label( 'tunnel' , 18 , 255 , 'construction' , 2 , False , False , (150,120, 90) ),
Label( 'wall' , 19 , 6 , 'construction' , 2 , False , False , (102,102,156) ),
Label( 'banner' , 20 , 255 , 'object' , 3 , True , False , (255,255,128) ),
Label( 'bench' , 21 , 255 , 'object' , 3 , True , False , (250, 0, 30) ),
Label( 'bicycle' , 22 , 255 , 'object' , 3 , True , False , (119, 11, 32) ),
Label( 'bike rack' , 23 , 255 , 'object' , 3 , True , False , (100,140,180) ),
Label( 'billboard' , 24 , 255 , 'object' , 3 , True , False , (220,220,220) ),
Label( 'boat' , 25 , 255 , 'object' , 3 , True , False , (150, 0,255) ),
Label( 'bus' , 26 , 255 , 'object' , 3 , True , False , ( 0, 60,100) ),
Label( 'cctv camera' , 27 , 255 , 'object' , 3 , True , False , (222, 40, 40) ),
Label( 'car' , 28 , 255 , 'object' , 3 , True , False , ( 0, 0,142) ),
Label( 'caravan' , 29 , 255 , 'object' , 3 , True , False , ( 0, 0, 90) ),
Label( 'catch basin' , 30 , 255 , 'object' , 3 , True , False , (220,128,128) ),
Label( 'fire hydrant' , 31 , 255 , 'object' , 3 , True , False , (100,170, 30) ),
Label( 'junction box' , 32 , 255 , 'object' , 3 , True , False , ( 40, 40, 40) ),
Label( 'mailbox' , 33 , 255 , 'object' , 3 , True , False , ( 33, 33, 33) ),
Label( 'manhole' , 34 , 0 , 'object' , 3 , True , False , (128, 64,128) ),
Label( 'motorcycle' , 35 , 255 , 'object' , 3 , True , False , ( 0, 0,230) ),
Label( 'on rails' , 36 , 255 , 'object' , 3 , False , False , ( 0, 80,100) ),
Label( 'other vehicle' , 37 , 255 , 'object' , 3 , True , False , (128, 64, 64) ),
Label( 'phone booth' , 38 , 255 , 'object' , 3 , True , False , (142, 0, 0) ),
Label( 'pole' , 39 , 8 , 'object' , 3 , True , False , (153,153,153) ),
Label( 'pothole' , 40 , 0 , 'object' , 3 , False , False , (128, 64,128) ),
Label( 'street light' , 41 , 255 , 'object' , 3 , True , False , (210,170,100) ),
Label( 'traffic light' , 42 , 9 , 'object' , 3 , True , False , (250,170, 30) ),
Label( 'traffic sign (back)' , 43 , 255 , 'object' , 3 , True , False , (192,192,192) ),
Label( 'traffic sign (front)' , 44 , 10 , 'object' , 3 , True , False , (220,220, 0) ),
Label( 'traffic sign frame' , 45 , 255 , 'object' , 3 , True , False , (128,128,128) ),
Label( 'trailer' , 46 , 255 , 'object' , 3 , True , False , ( 0, 0,110) ),
Label( 'trash can' , 47 , 255 , 'object' , 3 , True , False , (140,140, 20) ),
Label( 'truck' , 48 , 255 , 'object' , 3 , True , False , ( 0, 0, 70) ),
Label( 'utility pole' , 49 , 255 , 'object' , 3 , True , False , ( 0, 0, 80) ),
Label( 'wheeled slow' , 50 , 255 , 'object' , 3 , True , False , ( 0, 0,192) ),
Label( 'mountain' , 51 , 255 , 'nature' , 4 , False , False , ( 64,170, 64) ),
Label( 'sand' , 52 , 255 , 'nature' , 4 , False , False , (230,160, 50) ),
Label( 'sky' , 53 , 4 , 'nature' , 4 , False , False , ( 70,130,180) ),
Label( 'snow' , 54 , 255 , 'nature' , 4 , False , False , (190,255,255) ),
Label( 'terrain' , 55 , 3 , 'nature' , 4 , False , False , (152,251,152) ),
Label( 'vegetation' , 56 , 2 , 'nature' , 4 , False , False , (107,142, 35) ),
Label( 'water' , 57 , 255 , 'nature' , 4 , False , False , ( 0,170, 30) ),
Label( 'bicyclist' , 58 , 255 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'motorcyclist' , 59 , 255 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'other rider' , 60 , 255 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'person' , 61 , 255 , 'human' , 6 , True , False , (220, 20, 60) ),
Label( 'lane marking - crosswalk' , 62 , 0 , 'marking' , 8 , True , False , (128, 64,128) ),
Label( 'lane marking - general' , 63 , 0 , 'marking' , 8 , False , False , (128, 64,128) ),
Label( 'bird' , 64 , 255 , 'animal' , 9 , True , False , (165, 42, 42) ),
Label( 'ground animal' , 65 , 255 , 'animal' , 9 , True , False , ( 0,192, 0) ),
])
labels_mapillary_seg_cityscapes_train3 = ClassDefinitions([
# name id trainId category catId hasInstances ignoreInEval color
Label( 'car mount' , 0 , 255 , 'void' , 0 , False , True , ( 32, 32, 32) ),
Label( 'ego vehicle' , 1 , 255 , 'void' , 0 , False , True , (120, 10, 10) ),
Label( 'unlabeled' , 2 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'barrier' , 3 , 255 , 'construction' , 2 , False , False , ( 90,120,150) ),
Label( 'bike lane' , 4 , 255 , 'construction' , 2 , False , False , (244, 35,232) ), # fits, but not 100% unambiguous, e.g. an on-road cycle lane
Label( 'bridge' , 5 , 255 , 'construction' , 2 , False , False , (150,100,100) ),
Label( 'building' , 6 , 255 , 'construction' , 2 , False , False , ( 70, 70, 70) ),
Label( 'crosswalk - plain' , 7 , 255 , 'construction' , 2 , True , False , (128, 64,128) ),
Label( 'curb' , 8 , 255 , 'construction' , 2 , False , False , (244, 35,232) ), # fits
Label( 'curb cut' , 9 , 255 , 'construction' , 2 , False , False , (244, 35,232) ), # fits
Label( 'fence' , 10 , 255 , 'construction' , 2 , False , False , (190,153,153) ),
Label( 'guard rail' , 11 , 255 , 'construction' , 2 , False , False , (180,165,180) ),
Label( 'parking' , 12 , 255 , 'construction' , 2 , False , False , (250,170,160) ),
Label( 'pedestrian area' , 13 , 255 , 'construction' , 2 , False , False , ( 96, 96, 96) ),
Label( 'rail track' , 14 , 255 , 'construction' , 2 , False , False , (230,150,140) ),
Label( 'road' , 15 , 255 , 'construction' , 2 , False , False , (128, 64,128) ),
Label( 'service lane' , 16 , 255 , 'construction' , 2 , False , False , (128, 64,128) ), # fits, I think
Label( 'sidewalk' , 17 , 255 , 'construction' , 2 , False , False , (244, 35,232) ),
Label( 'tunnel' , 18 , 255 , 'construction' , 2 , False , False , (150,120, 90) ),
Label( 'wall' , 19 , 255 , 'construction' , 2 , False , False , (102,102,156) ),
Label( 'banner' , 20 , 255 , 'object' , 3 , True , False , (255,255,128) ),
Label( 'bench' , 21 , 255 , 'object' , 3 , True , False , (250, 0, 30) ),
Label( 'bicycle' , 22 , 18 , 'object' , 3 , True , False , (119, 11, 32) ),
Label( 'bike rack' , 23 , 255 , 'object' , 3 , True , False , (100,140,180) ),
Label( 'billboard' , 24 , 255 , 'object' , 3 , True , False , (220,220,220) ),
Label( 'boat' , 25 , 255 , 'object' , 3 , True , False , (150, 0,255) ),
Label( 'bus' , 26 , 15 , 'object' , 3 , True , False , ( 0, 60,100) ),
Label( 'cctv camera' , 27 , 255 , 'object' , 3 , True , False , (222, 40, 40) ),
Label( 'car' , 28 , 13 , 'object' , 3 , True , False , ( 0, 0,142) ),
Label( 'caravan' , 29 , 255 , 'object' , 3 , True , False , ( 0, 0, 90) ),
Label( 'catch basin' , 30 , 255 , 'object' , 3 , True , False , (220,128,128) ),
Label( 'fire hydrant' , 31 , 255 , 'object' , 3 , True , False , (100,170, 30) ),
Label( 'junction box' , 32 , 255 , 'object' , 3 , True , False , ( 40, 40, 40) ),
Label( 'mailbox' , 33 , 255 , 'object' , 3 , True , False , ( 33, 33, 33) ),
Label( 'manhole' , 34 , 255 , 'object' , 3 , True , False , (128, 64,128) ),
Label( 'motorcycle' , 35 , 17 , 'object' , 3 , True , False , ( 0, 0,230) ),
Label( 'on rails' , 36 , 16 , 'object' , 3 , False , False , ( 0, 80,100) ),
Label( 'other vehicle' , 37 , 255 , 'object' , 3 , True , False , (128, 64, 64) ),
Label( 'phone booth' , 38 , 255 , 'object' , 3 , True , False , (142, 0, 0) ),
Label( 'pole' , 39 , 255 , 'object' , 3 , True , False , (153,153,153) ),
Label( 'pothole' , 40 , 255 , 'object' , 3 , False , False , (128, 64,128) ),
Label( 'street light' , 41 , 255 , 'object' , 3 , True , False , (210,170,100) ),
Label( 'traffic light' , 42 , 255 , 'object' , 3 , True , False , (250,170, 30) ),
Label( 'traffic sign (back)' , 43 , 255 , 'object' , 3 , True , False , (192,192,192) ),
Label( 'traffic sign (front)' , 44 , 255 , 'object' , 3 , True , False , (220,220, 0) ),
Label( 'traffic sign frame' , 45 , 255 , 'object' , 3 , True , False , (128,128,128) ),
Label( 'trailer' , 46 , 255 , 'object' , 3 , True , False , ( 0, 0,110) ),
Label( 'trash can' , 47 , 255 , 'object' , 3 , True , False , (140,140, 20) ),
Label( 'truck' , 48 , 14 , 'object' , 3 , True , False , ( 0, 0, 70) ),
Label( 'utility pole' , 49 , 255 , 'object' , 3 , True , False , ( 0, 0, 80) ),
Label( 'wheeled slow' , 50 , 255 , 'object' , 3 , True , False , ( 0, 0,192) ),
Label( 'mountain' , 51 , 255 , 'nature' , 4 , False , False , ( 64,170, 64) ),
Label( 'sand' , 52 , 255 , 'nature' , 4 , False , False , (230,160, 50) ),
Label( 'sky' , 53 , 255 , 'nature' , 4 , False , False , ( 70,130,180) ),
Label( 'snow' , 54 , 255 , 'nature' , 4 , False , False , (190,255,255) ),
Label( 'terrain' , 55 , 255 , 'nature' , 4 , False , False , (152,251,152) ),
Label( 'vegetation' , 56 , 255 , 'nature' , 4 , False , False , (107,142, 35) ),
Label( 'water' , 57 , 255 , 'nature' , 4 , False , False , ( 0,170, 30) ),
Label( 'bicyclist' , 58 , 12 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'motorcyclist' , 59 , 12 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'other rider' , 60 , 12 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'person' , 61 , 11 , 'human' , 6 , True , False , (220, 20, 60) ),
Label( 'lane marking - crosswalk' , 62 , 255 , 'marking' , 8 , True , False , (128, 64,128) ),
Label( 'lane marking - general' , 63 , 255 , 'marking' , 8 , False , False , (128, 64,128) ),
Label( 'bird' , 64 , 255 , 'animal' , 9 , True , False , (165, 42, 42) ),
Label( 'ground animal' , 65 , 255 , 'animal' , 9 , True , False , ( 0,192, 0) ),
])
labels_mapillary_seg_cityscapes_train3_eval = ClassDefinitions([
# name id trainId category catId hasInstances ignoreInEval color
Label( 'car mount' , 0 , 255 , 'void' , 0 , False , True , ( 32, 32, 32) ),
Label( 'ego vehicle' , 1 , 255 , 'void' , 0 , False , True , (120, 10, 10) ),
Label( 'unlabeled' , 2 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'barrier' , 3 , 255 , 'construction' , 2 , False , False , ( 90,120,150) ),
Label( 'bike lane' , 4 , 1 , 'construction' , 2 , False , False , (244, 35,232) ), # fits, but not 100% unambiguous, e.g. an on-road cycle lane
Label( 'bridge' , 5 , 255 , 'construction' , 2 , False , False , (150,100,100) ),
Label( 'building' , 6 , 5 , 'construction' , 2 , False , False , ( 70, 70, 70) ),
Label( 'crosswalk - plain' , 7 , 0 , 'construction' , 2 , True , False , (128, 64,128) ),
Label( 'curb' , 8 , 1 , 'construction' , 2 , False , False , (244, 35,232) ), # fits
Label( 'curb cut' , 9 , 1 , 'construction' , 2 , False , False , (244, 35,232) ), # fits
Label( 'fence' , 10 , 7 , 'construction' , 2 , False , False , (190,153,153) ),
Label( 'guard rail' , 11 , 255 , 'construction' , 2 , False , False , (180,165,180) ),
Label( 'parking' , 12 , 255 , 'construction' , 2 , False , False , (250,170,160) ),
Label( 'pedestrian area' , 13 , 255 , 'construction' , 2 , False , False , ( 96, 96, 96) ),
Label( 'rail track' , 14 , 255 , 'construction' , 2 , False , False , (230,150,140) ),
Label( 'road' , 15 , 0 , 'construction' , 2 , False , False , (128, 64,128) ),
Label( 'service lane' , 16 , 0 , 'construction' , 2 , False , False , (128, 64,128) ), # fits, I think
Label( 'sidewalk' , 17 , 1 , 'construction' , 2 , False , False , (244, 35,232) ),
Label( 'tunnel' , 18 , 255 , 'construction' , 2 , False , False , (150,120, 90) ),
Label( 'wall' , 19 , 6 , 'construction' , 2 , False , False , (102,102,156) ),
Label( 'banner' , 20 , 255 , 'object' , 3 , True , False , (255,255,128) ),
Label( 'bench' , 21 , 255 , 'object' , 3 , True , False , (250, 0, 30) ),
Label( 'bicycle' , 22 , 18 , 'object' , 3 , True , False , (119, 11, 32) ),
Label( 'bike rack' , 23 , 255 , 'object' , 3 , True , False , (100,140,180) ),
Label( 'billboard' , 24 , 255 , 'object' , 3 , True , False , (220,220,220) ),
Label( 'boat' , 25 , 255 , 'object' , 3 , True , False , (150, 0,255) ),
Label( 'bus' , 26 , 15 , 'object' , 3 , True , False , ( 0, 60,100) ),
Label( 'cctv camera' , 27 , 255 , 'object' , 3 , True , False , (222, 40, 40) ),
Label( 'car' , 28 , 13 , 'object' , 3 , True , False , ( 0, 0,142) ),
Label( 'caravan' , 29 , 255 , 'object' , 3 , True , False , ( 0, 0, 90) ),
Label( 'catch basin' , 30 , 255 , 'object' , 3 , True , False , (220,128,128) ),
Label( 'fire hydrant' , 31 , 255 , 'object' , 3 , True , False , (100,170, 30) ),
Label( 'junction box' , 32 , 255 , 'object' , 3 , True , False , ( 40, 40, 40) ),
Label( 'mailbox' , 33 , 255 , 'object' , 3 , True , False , ( 33, 33, 33) ),
Label( 'manhole' , 34 , 0 , 'object' , 3 , True , False , (128, 64,128) ),
Label( 'motorcycle' , 35 , 17 , 'object' , 3 , True , False , ( 0, 0,230) ),
Label( 'on rails' , 36 , 16 , 'object' , 3 , False , False , ( 0, 80,100) ),
Label( 'other vehicle' , 37 , 255 , 'object' , 3 , True , False , (128, 64, 64) ),
Label( 'phone booth' , 38 , 255 , 'object' , 3 , True , False , (142, 0, 0) ),
Label( 'pole' , 39 , 8 , 'object' , 3 , True , False , (153,153,153) ),
Label( 'pothole' , 40 , 0 , 'object' , 3 , False , False , (128, 64,128) ),
Label( 'street light' , 41 , 255 , 'object' , 3 , True , False , (210,170,100) ),
Label( 'traffic light' , 42 , 9 , 'object' , 3 , True , False , (250,170, 30) ),
Label( 'traffic sign (back)' , 43 , 255 , 'object' , 3 , True , False , (192,192,192) ),
Label( 'traffic sign (front)' , 44 , 10 , 'object' , 3 , True , False , (220,220, 0) ),
Label( 'traffic sign frame' , 45 , 255 , 'object' , 3 , True , False , (128,128,128) ), #
Label( 'trailer' , 46 , 255 , 'object' , 3 , True , False , ( 0, 0,110) ),
Label( 'trash can' , 47 , 255 , 'object' , 3 , True , False , (140,140, 20) ),
Label( 'truck' , 48 , 14 , 'object' , 3 , True , False , ( 0, 0, 70) ),
Label( 'utility pole' , 49 , 255 , 'object' , 3 , True , False , ( 0, 0, 80) ),
Label( 'wheeled slow' , 50 , 255 , 'object' , 3 , True , False , ( 0, 0,192) ),
Label( 'mountain' , 51 , 255 , 'nature' , 4 , False , False , ( 64,170, 64) ),
Label( 'sand' , 52 , 255 , 'nature' , 4 , False , False , (230,160, 50) ),
Label( 'sky' , 53 , 4 , 'nature' , 4 , False , False , ( 70,130,180) ),
Label( 'snow' , 54 , 255 , 'nature' , 4 , False , False , (190,255,255) ),
Label( 'terrain' , 55 , 3 , 'nature' , 4 , False , False , (152,251,152) ),
Label( 'vegetation' , 56 , 2 , 'nature' , 4 , False , False , (107,142, 35) ),
Label( 'water' , 57 , 255 , 'nature' , 4 , False , False , ( 0,170, 30) ),
Label( 'bicyclist' , 58 , 12 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'motorcyclist' , 59 , 12 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'other rider' , 60 , 12 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'person' , 61 , 11 , 'human' , 6 , True , False , (220, 20, 60) ),
Label( 'lane marking - crosswalk' , 62 , 0 , 'marking' , 8 , True , False , (128, 64,128) ),
Label( 'lane marking - general' , 63 , 0 , 'marking' , 8 , False , False , (128, 64,128) ),
Label( 'bird' , 64 , 255 , 'animal' , 9 , True , False , (165, 42, 42) ),
Label( 'ground animal' , 65 , 255 , 'animal' , 9 , True , False , ( 0,192, 0) ),
])
labels_mapillary_seg_cityscapes_train4 = ClassDefinitions([
# name id trainId category catId hasInstances ignoreInEval color
Label( 'car mount' , 0 , 255 , 'void' , 0 , False , True , ( 32, 32, 32) ),
Label( 'ego vehicle' , 1 , 255 , 'void' , 0 , False , True , (120, 10, 10) ),
Label( 'unlabeled' , 2 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'barrier' , 3 , 255 , 'construction' , 2 , False , False , ( 90,120,150) ),
Label( 'bike lane' , 4 , 255 , 'construction' , 2 , False , False , (244, 35,232) ), # fits, but not 100% unambiguous, e.g. an on-road cycle lane
Label( 'bridge' , 5 , 255 , 'construction' , 2 , False , False , (150,100,100) ),
Label( 'building' , 6 , 255 , 'construction' , 2 , False , False , ( 70, 70, 70) ),
Label( 'crosswalk - plain' , 7 , 255 , 'construction' , 2 , True , False , (128, 64,128) ),
Label( 'curb' , 8 , 255 , 'construction' , 2 , False , False , (244, 35,232) ), # fits
Label( 'curb cut' , 9 , 255 , 'construction' , 2 , False , False , (244, 35,232) ), # fits
Label( 'fence' , 10 , 255 , 'construction' , 2 , False , False , (190,153,153) ),
Label( 'guard rail' , 11 , 255 , 'construction' , 2 , False , False , (180,165,180) ),
Label( 'parking' , 12 , 255 , 'construction' , 2 , False , False , (250,170,160) ),
Label( 'pedestrian area' , 13 , 255 , 'construction' , 2 , False , False , ( 96, 96, 96) ),
Label( 'rail track' , 14 , 255 , 'construction' , 2 , False , False , (230,150,140) ),
Label( 'road' , 15 , 255 , 'construction' , 2 , False , False , (128, 64,128) ),
Label( 'service lane' , 16 , 255 , 'construction' , 2 , False , False , (128, 64,128) ), # fits, I think
Label( 'sidewalk' , 17 , 255 , 'construction' , 2 , False , False , (244, 35,232) ),
Label( 'tunnel' , 18 , 255 , 'construction' , 2 , False , False , (150,120, 90) ),
Label( 'wall' , 19 , 255 , 'construction' , 2 , False , False , (102,102,156) ),
Label( 'banner' , 20 , 255 , 'object' , 3 , True , False , (255,255,128) ), #
Label( 'bench' , 21 , 255 , 'object' , 3 , True , False , (250, 0, 30) ),
Label( 'bicycle' , 22 , 255 , 'object' , 3 , True , False , (119, 11, 32) ),
Label( 'bike rack' , 23 , 255 , 'object' , 3 , True , False , (100,140,180) ),
Label( 'billboard' , 24 , 255 , 'object' , 3 , True , False , (220,220,220) ),
Label( 'boat' , 25 , 255 , 'object' , 3 , True , False , (150, 0,255) ),
Label( 'bus' , 26 , 255 , 'object' , 3 , True , False , ( 0, 60,100) ),
Label( 'cctv camera' , 27 , 255 , 'object' , 3 , True , False , (222, 40, 40) ),
Label( 'car' , 28 , 255 , 'object' , 3 , True , False , ( 0, 0,142) ),
Label( 'caravan' , 29 , 255 , 'object' , 3 , True , False , ( 0, 0, 90) ),
Label( 'catch basin' , 30 , 255 , 'object' , 3 , True , False , (220,128,128) ),
Label( 'fire hydrant' , 31 , 255 , 'object' , 3 , True , False , (100,170, 30) ),
Label( 'junction box' , 32 , 255 , 'object' , 3 , True , False , ( 40, 40, 40) ),
Label( 'mailbox' , 33 , 255 , 'object' , 3 , True , False , ( 33, 33, 33) ),
Label( 'manhole' , 34 , 5 , 'object' , 3 , True , False , (100,128,160) ),
Label( 'motorcycle' , 35 , 255 , 'object' , 3 , True , False , ( 0, 0,230) ),
Label( 'on rails' , 36 , 255 , 'object' , 3 , False , False , ( 0, 80,100) ),
Label( 'other vehicle' , 37 , 255 , 'object' , 3 , True , False , (128, 64, 64) ),
Label( 'phone booth' , 38 , 255 , 'object' , 3 , True , False , (142, 0, 0) ),
Label( 'pole' , 39 , 255 , 'object' , 3 , True , False , (153,153,153) ),
Label( 'pothole' , 40 , 255 , 'object' , 3 , False , False , (128, 64,128) ),
Label( 'street light' , 41 , 255 , 'object' , 3 , True , False , (210,170,100) ),
Label( 'traffic light' , 42 , 255 , 'object' , 3 , True , False , (250,170, 30) ),
Label( 'traffic sign (back)' , 43 , 255 , 'object' , 3 , True , False , (192,192,192) ),
Label( 'traffic sign (front)' , 44 , 255 , 'object' , 3 , True , False , (220,220, 0) ),
Label( 'traffic sign frame' , 45 , 255 , 'object' , 3 , True , False , (128,128,128) ), #
Label( 'trailer' , 46 , 255 , 'object' , 3 , True , False , ( 0, 0,110) ),
Label( 'trash can' , 47 , 255 , 'object' , 3 , True , False , (140,140, 20) ),
Label( 'truck' , 48 , 255 , 'object' , 3 , True , False , ( 0, 0, 70) ),
Label( 'utility pole' , 49 , 255 , 'object' , 3 , True , False , ( 0, 0, 80) ),
Label( 'wheeled slow' , 50 , 255 , 'object' , 3 , True , False , ( 0, 0,192) ),
Label( 'mountain' , 51 , 255 , 'nature' , 4 , False , False , ( 64,170, 64) ),
Label( 'sand' , 52 , 255 , 'nature' , 4 , False , False , (230,160, 50) ),
Label( 'sky' , 53 , 255 , 'nature' , 4 , False , False , ( 70,130,180) ),
Label( 'snow' , 54 , 255 , 'nature' , 4 , False , False , (190,255,255) ),
Label( 'terrain' , 55 , 255 , 'nature' , 4 , False , False , (152,251,152) ),
Label( 'vegetation' , 56 , 255 , 'nature' , 4 , False , False , (107,142, 35) ),
Label( 'water' , 57 , 255 , 'nature' , 4 , False , False , ( 0,170, 30) ),
Label( 'bicyclist' , 58 , 255 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'motorcyclist' , 59 , 255 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'other rider' , 60 , 255 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'person' , 61 , 255 , 'human' , 6 , True , False , (220, 20, 60) ),
Label( 'lane marking - crosswalk' , 62 , 255 , 'marking' , 8 , True , False , (128, 64,128) ),
Label( 'lane marking - general' , 63 , 6 , 'marking' , 8 , False , False , (255,255,255) ),
Label( 'bird' , 64 , 255 , 'animal' , 9 , True , False , (165, 42, 42) ),
Label( 'ground animal' , 65 , 255 , 'animal' , 9 , True , False , ( 0,192, 0) ),
])
labels_mapillary_seg_cityscapes_train4_eval = ClassDefinitions([
# name id trainId category catId hasInstances ignoreInEval color
Label( 'car mount' , 0 , 255 , 'void' , 0 , False , True , ( 32, 32, 32) ),
Label( 'ego vehicle' , 1 , 255 , 'void' , 0 , False , True , (120, 10, 10) ),
Label( 'unlabeled' , 2 , 255 , 'void' , 0 , False , True , ( 0, 0, 0) ),
Label( 'barrier' , 3 , 255 , 'construction' , 2 , False , False , ( 90,120,150) ),
Label( 'bike lane' , 4 , 1 , 'construction' , 2 , False , False , (244, 35,232) ), # fits, but not 100% unambiguous, e.g. an on-road cycle lane
Label( 'bridge' , 5 , 255 , 'construction' , 2 , False , False , (150,100,100) ),
Label( 'building' , 6 , 255 , 'construction' , 2 , False , False , ( 70, 70, 70) ),
Label( 'crosswalk - plain' , 7 , 0 , 'construction' , 2 , True , False , (128, 64,128) ),
Label( 'curb' , 8 , 1 , 'construction' , 2 , False , False , (244, 35,232) ), # fits
Label( 'curb cut' , 9 , 1 , 'construction' , 2 , False , False , (244, 35,232) ), # fits
Label( 'fence' , 10 , 255 , 'construction' , 2 , False , False , (190,153,153) ),
Label( 'guard rail' , 11 , 255 , 'construction' , 2 , False , False , (180,165,180) ),
Label( 'parking' , 12 , 255 , 'construction' , 2 , False , False , (250,170,160) ),
Label( 'pedestrian area' , 13 , 255 , 'construction' , 2 , False , False , ( 96, 96, 96) ),
Label( 'rail track' , 14 , 255 , 'construction' , 2 , False , False , (230,150,140) ),
Label( 'road' , 15 , 0 , 'construction' , 2 , False , False , (128, 64,128) ),
Label( 'service lane' , 16 , 0 , 'construction' , 2 , False , False , (128, 64,128) ), # fits, I think
Label( 'sidewalk' , 17 , 1 , 'construction' , 2 , False , False , (244, 35,232) ),
Label( 'tunnel' , 18 , 255 , 'construction' , 2 , False , False , (150,120, 90) ),
Label( 'wall' , 19 , 255 , 'construction' , 2 , False , False , (102,102,156) ),
Label( 'banner' , 20 , 255 , 'object' , 3 , True , False , (255,255,128) ), #
Label( 'bench' , 21 , 255 , 'object' , 3 , True , False , (250, 0, 30) ),
Label( 'bicycle' , 22 , 255 , 'object' , 3 , True , False , (119, 11, 32) ),
Label( 'bike rack' , 23 , 255 , 'object' , 3 , True , False , (100,140,180) ),
Label( 'billboard' , 24 , 255 , 'object' , 3 , True , False , (220,220,220) ),
Label( 'boat' , 25 , 255 , 'object' , 3 , True , False , (150, 0,255) ),
Label( 'bus' , 26 , 255 , 'object' , 3 , True , False , ( 0, 60,100) ),
Label( 'cctv camera' , 27 , 255 , 'object' , 3 , True , False , (222, 40, 40) ),
Label( 'car' , 28 , 255 , 'object' , 3 , True , False , ( 0, 0,142) ),
Label( 'caravan' , 29 , 255 , 'object' , 3 , True , False , ( 0, 0, 90) ),
Label( 'catch basin' , 30 , 255 , 'object' , 3 , True , False , (220,128,128) ),
Label( 'fire hydrant' , 31 , 255 , 'object' , 3 , True , False , (100,170, 30) ),
Label( 'junction box' , 32 , 255 , 'object' , 3 , True , False , ( 40, 40, 40) ),
Label( 'mailbox' , 33 , 255 , 'object' , 3 , True , False , ( 33, 33, 33) ),
Label( 'manhole' , 34 , 5 , 'object' , 3 , True , False , (100,128,160) ),
Label( 'motorcycle' , 35 , 255 , 'object' , 3 , True , False , ( 0, 0,230) ),
Label( 'on rails' , 36 , 255 , 'object' , 3 , False , False , ( 0, 80,100) ),
Label( 'other vehicle' , 37 , 255 , 'object' , 3 , True , False , (128, 64, 64) ),
Label( 'phone booth' , 38 , 255 , 'object' , 3 , True , False , (142, 0, 0) ),
Label( 'pole' , 39 , 255 , 'object' , 3 , True , False , (153,153,153) ),
Label( 'pothole' , 40 , 0 , 'object' , 3 , False , False , (128, 64,128) ),
Label( 'street light' , 41 , 255 , 'object' , 3 , True , False , (210,170,100) ),
Label( 'traffic light' , 42 , 255 , 'object' , 3 , True , False , (250,170, 30) ),
Label( 'traffic sign (back)' , 43 , 255 , 'object' , 3 , True , False , (192,192,192) ),
Label( 'traffic sign (front)' , 44 , 255 , 'object' , 3 , True , False , (220,220, 0) ),
Label( 'traffic sign frame' , 45 , 255 , 'object' , 3 , True , False , (128,128,128) ), #
Label( 'trailer' , 46 , 255 , 'object' , 3 , True , False , ( 0, 0,110) ),
Label( 'trash can' , 47 , 255 , 'object' , 3 , True , False , (140,140, 20) ),
Label( 'truck' , 48 , 255 , 'object' , 3 , True , False , ( 0, 0, 70) ),
Label( 'utility pole' , 49 , 255 , 'object' , 3 , True , False , ( 0, 0, 80) ),
Label( 'wheeled slow' , 50 , 255 , 'object' , 3 , True , False , ( 0, 0,192) ),
Label( 'mountain' , 51 , 255 , 'nature' , 4 , False , False , ( 64,170, 64) ),
Label( 'sand' , 52 , 255 , 'nature' , 4 , False , False , (230,160, 50) ),
Label( 'sky' , 53 , 4 , 'nature' , 4 , False , False , ( 70,130,180) ),
Label( 'snow' , 54 , 255 , 'nature' , 4 , False , False , (190,255,255) ),
Label( 'terrain' , 55 , 3 , 'nature' , 4 , False , False , (152,251,152) ),
Label( 'vegetation' , 56 , 2 , 'nature' , 4 , False , False , (107,142, 35) ),
Label( 'water' , 57 , 255 , 'nature' , 4 , False , False , ( 0,170, 30) ),
Label( 'bicyclist' , 58 , 255 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'motorcyclist' , 59 , 255 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'other rider' , 60 , 255 , 'human' , 6 , True , False , (255, 0, 0) ),
Label( 'person' , 61 , 255 , 'human' , 6 , True , False , (220, 20, 60) ),
Label( 'lane marking - crosswalk' , 62 , 0 , 'marking' , 8 , True , False , (128, 64,128) ),
Label( 'lane marking - general' , 63 , 6 , 'marking' , 8 , False , False , (255,255,255) ),
Label( 'bird' , 64 , 255 , 'animal' , 9 , True , False , (165, 42, 42) ),
Label( 'ground animal' , 65 , 255 , 'animal' , 9 , True , False , ( 0,192, 0) ),
])
def decode_labels(mask, num_images=1, database='Cityscapes', mode = 'id'):
"""
Decode batch of segmentation masks.
:param mask: result of inference after taking argmax. 3D numpy array
:param num_images: number of images to decode from the batch.
    :param database: the underlying database of the labels (currently 'Cityscapes' or 'KITTI').
    :param mode: whether the mask values are label ids ('id') or train ids ('trainid').
:return: A batch with num_images RGB images of the same size as the input. 4D numpy array
"""
assert mode in ['id', 'trainid']
assert database in ['Cityscapes', 'KITTI']
n, h, w = mask.shape
assert (n >= num_images), 'Batch size %d should be greater or equal than number of images to save %d.' % (
n, num_images)
database_dict = {'Cityscapes': labels_cityscape_seg, 'KITTI': labels_kitti_seg}
database_def = database_dict[database]
mode_dict = {'id': database_def.getid2label, 'trainid': database_def.gettrainid2label}
label_dict = mode_dict[mode]()
outputs = np.zeros((num_images, h, w, 3), dtype=np.uint8)
for i in range(num_images):
img = mask[i]
new_img = np.zeros((h, w, 3))
for key in label_dict:
new_img[img == key, :] = np.array(label_dict[key].color)
outputs[i] = np.array(new_img)
return outputs
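A minimal, self-contained sketch of the per-pixel colour lookup that `decode_labels` performs. The three-entry colour table below is a made-up stand-in for the `getid2label()` / `gettrainid2label()` mappings built from the Label tables above:

```python
import numpy as np

# Made-up id -> RGB table standing in for getid2label(); the real tables
# are the Label definitions above.
COLORS = {0: (128, 64, 128), 1: (244, 35, 232), 255: (0, 0, 0)}

def decode_single(mask, colors=COLORS):
    """Map a 2D array of label ids to an RGB image of shape (H, W, 3)."""
    h, w = mask.shape
    rgb = np.zeros((h, w, 3), dtype=np.uint8)
    for label_id, color in colors.items():
        rgb[mask == label_id] = color  # vectorised per-label fill
    return rgb

mask = np.array([[0, 1], [255, 0]])
print(decode_single(mask)[0, 0])  # road colour: [128  64 128]
```

The boolean-mask assignment is the same trick used per label in the loop above, so decoding cost scales with the number of labels rather than the number of pixels in Python-level code.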
if __name__ == "__main__":
labels = labels_apollo_lanes.getlabels()
# Print all the labels
print("List of apolloscapes labels:")
print("")
print(" {:>21} | {:>3} | {:>7} | {:>14} | {:>10} | {:>12} | {:>12}".format( 'name', 'id', 'trainId', 'category', 'categoryId', 'hasInstances', 'ignoreInEval' ))
print(" " + ('-' * 98))
for label in labels:
print(" {:>21} | {:>3} | {:>7} | {:>14} | {:>10} | {:>12} | {:>12}".format( label.name, label.id, label.trainId, label.category, label.categoryId, label.hasInstances, label.ignoreInEval ))
print("")
print("Example usages:")
# Map from name to label
name = 'noise'
name2label = labels_apollo_lanes.getname2label()
id = name2label[name].id
print("ID of label '{name}': {id}".format( name=name, id=id ))
# Map from ID to label
id2label = labels_apollo_lanes.getid2label()
category = id2label[id].category
print("Category of label with ID '{id}': {category}".format( id=id, category=category ))
# Map from trainID to label
trainId = 0
trainId2label = labels_apollo_lanes.gettrainid2label()
name = trainId2label[trainId].name
print("Name of label with trainID '{id}': {name}".format( id=trainId, name=name ))
# File: paypalcheckoutsdk/payments/authorizations_get_request.py
# Repo: taqma/Checkout-Python-SDK (license: BSD-Source-Code)
# This class was generated on Tue, 10 Jul 2018 10:40:35 PDT by version 0.1.0-dev+0ee05a-dirty of Braintree SDK Generator
# authorizations_get_request.py
# @version 0.1.0-dev+0ee05a-dirty
# @type request
# @data H4sIAAAAAAAC/+xbbW/juPF///8UA90f6AawrXSf7s7v0t1sN+1lk8beAkUaJLQ4tnihSB05sqMu9rsXJCXbkuxLrmdnW8AvFgsNR9b8Zn6cB4n5En1iGUbDiBWUaiP+xUhoZQczpKgXvUebGJE7UTSMRqleWOBITEgLU22AKajvQw45KzNU1INJCWfvB1Ev+luBprxkhmVIaGw0vL7pRR+RcTRt6QdtsrbsklHakH2JxmXurLVkhJpFvejvzAg2kbgJxa3gUS/6K5bVYgfQOEU4ew96CpTiBiQe4yIVSQqkwaZ6UaN34E6MYWWw57gXXSHjF0qW0XDKpEUn+KUQBnk0JFNgL7o0OkdDAm00VIWUX2+CDloKP+KETmRzrSwG2RLwyTqwLu7HcW5AR4YpyxKn9JvwVIJ1QKvInGuF5YbAZLpQ1DBzKeoamxTGoEpKYIpD0At8g6lQTCWCyXXre2CLJAVmgcGESaYSBG2WOHmBu8O3jXm1ybeJ5tjA2V7pwr2m1CD2k5QZlhAaOBtd9F+//OP3K0e4e29exFwnNhaKcGY8E2IuDCYUG7QU18p9p2zjI6CUEQiOisRUoPUsr5V2weDeo16ZM1k0vVFLul7wK71qv2VilhJMcPjP4vj4VVJI/z+GKynC1YkC7ws0nh0VNIdUinuEu79c/uMuOIEZBKUJqMxFwqQsYWoCd5gchB+N619tPQM4JiJjcnnH5meNP71fe5YtJlzMBUfuLNRAqS4sU5xSu/lxcY3wgzY+TqZyPqgim6BxSao2JJcswSoBNxnSA4sI1+9q2TtHhN9Km53ktidwIzHICG9JZK390pB3ecIZoU8MTqMHQsH1mSI0Cqm55jyUMbp5kRLldhjHpLW0A4E0HWgzi1PKZGymyatXr378zqIPbv/N4O3RAEaYaMWtj+UyEotUSFwjDtg1LZ032DSROrn/pdCE61G2ZLSaBcknTTW743U5jH30Z4VkBvAhN2itY11utCOUhVkhuE9xk4KAa7Se2QZ/xoSASQlCzZkU3DtjSbe2Qb8zIT5x/+NDLgLlunHurh1i/b8c61a35S+7Eb1k5SWT/RkqNIyQuwZsWuW8bo8yeCbT1VyLBNsNY0PchZKhSVKmqM9xKlQTSnVrdS3s/rH9JNQ9rBvZQSmFurcNgLWkie1EAXN2uZpjUPooXX88GZ9enIzA31KXFJaLWM/RzAUu4u9SRqiZ7XuVdhl5u/uWC1XiNRpJZSnbFC8umGsBXHjW+vpikglaFl20PrWwZ6JeanDaQFAJNjTFOsslEgIxM0OCz1c/DWCsIWP3WFkfYuUanJ5TnwgVVjKkVHNYCEoDG68/X53BGLPc3dEPuZOQP5o+3775/vjIc2AArlPJDfZzoxOXt9TMJehEFjw89O7/73pw9+Ku55P03dEdLBtcO/CZ785hvQMROtN7LKFmmcOqlR9SXEvlGeWancoFAWPAw1wArQucIi9+psB5NnX4ty79NQb23LRS541JCddXH97By+PXb1chWCwWqwCYaeL+OY0BPdDRoNrqk6qtdx6qiPFs+B2nWuArURf5x/H4sqbhssjSFvI+EwKDsmF+uN4wnnnnegNdcXbhe3SjvPnxhx+Wfcbro3qssWjmaP2kqupywargOaIXimUTMSt0YWUJvBFiixlTJBJbv7AI23DkOn6f/K8qC22LQ0wxbxuzVsyUKz02dvf2a0jty8GDg3G0jwI1SlLMWDcWtpavwrEUdSOynqfdpL9D9q/Kjp64LmvDqwzORej/zgizZkHtrjWN361HT6SEiym4R20wU8qLZmWpJdurvS0m/eD2KkN7B2eFJfBtpu+4Z0woG7rPdf3fWe7b0FT5K9BU2YZWSXYDTSv//ijTYZLfE8Rt/PJVwbOoya2mfJ+82m5ZjoqH1x4t0xoL+7RtWzKfGjZzqe0KrZZF1QCvLNy4/C18KDo5Q2zMFCcK3MIecvBuBgOvseMdce7aoy2dVrfL
eqzHd11nohXhA/VRJZoLNQO/lZ/hdfBEKGbK0+qxDeM7S5tafUWoumaHYn9eSBJ5YXJtEZavQ86ZkHD6QKisSxHw4vzs/PQILpkhuFA4dP16xsjFbnUPWstmCH/SXKB9tKl5efz6zdEzNWfU7qzp8ab6P/bPeKGH4NkHzqwneeLtbjxx84ScoXTzy0m43mf2ulC4vfxqha3yW0t2V35X+jvOMdv4ljNKR8RM09Pr0hbzNLA8l2WYp4Op4N/0IzgUTCVo/wCfr85sD6z7Cb/krtfmcP/NY/A8lSd3I75Ra3e2kHZWv0V9zLeYt1+7bp46p4QhY9SdVloLh5nlMLMcZpbDzHKYWQ4zy2FmOcwsh5nlMLMcZpbDzLKnmWVrRhIkWympknRzUhhI3PLOk8QIpUQDl0ZT+CS24QOQV7nN11XWvgVtWN2AAOco3c5d6YGeTtEgb39uDcdQoGOY/yJ3Xh3saH1Iy1mZMzlIdBYXNl7ghOW5jbM8jy0mhRFUxsHO/ur5R/sv21zYvCC8TRjhTJtOn7tpeXvSS7QK46BdnWZM9Nz7sD7csv3g8L4ynSVGRRPXUtTEcqa4cFAtLFKkFDsWg7CAUszERIbjOSFma5wZPNfufDoqx+6w9O0PSxU533husyk/nOX77z3Ld/O1F70L7XoVa9cWuG0jtIp/tj69fiTKz8OBjmH059NxFP4eIxpG8fxlXFHOxs2/G4m/tP8C42vUi0b3Il9ac/qQY0LIR57O7zTHaPjy+Pjr//0bAAD//w==
# DO NOT EDIT
import braintreehttp
try:
    from urllib import quote  # Python 2.X
except ImportError:
    from urllib.parse import quote  # Python 3+
class AuthorizationsGetRequest:
"""
Shows details for an authorized payment, by ID.
"""
def __init__(self, authorization_id):
self.verb = "GET"
self.path = "/v2/payments/authorizations/{authorization_id}?".replace("{authorization_id}", quote(str(authorization_id)))
self.headers = {}
self.headers["Content-Type"] = "application/json"
self.body = None
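The path construction in `__init__` above can be exercised on its own. This is an illustrative helper, not part of the SDK, and the authorization id used is a made-up example value; note that `quote()` leaves `/` unescaped by default (`safe='/'`):

```python
from urllib.parse import quote

def build_auth_get_path(authorization_id):
    # Same substitution the generated request performs on its template path.
    return "/v2/payments/authorizations/{authorization_id}?".replace(
        "{authorization_id}", quote(str(authorization_id)))

print(build_auth_get_path("0VF52814937998046"))
# -> /v2/payments/authorizations/0VF52814937998046?
```

In the SDK, the generated request object (with its `verb`, `path`, `headers`, and `body` attributes) is passed to the braintreehttp client's `execute()` call.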
# File: pm/models/situ_social_activity_community.py
# Repo: aosojnik/pipeline-manager-features-models (license: MIT)
##################################################################################
##########--this is an autogenerated python model definition for proDEX--#########
##--original file: Community_Involvement_Assessment_v05.dxi --##
##################################################################################
from .lib.proDEX import *
situ_social_activity_community = Node()
feat_community_habitual_weekly_vs_goal = Atrib()
feat_community_habitual_relative = Atrib()
feat_community_nonhabitual = Atrib()
situ_social_activity_community.setName('situ_social_activity_community')
feat_community_habitual_weekly_vs_goal.setName('feat_community_habitual_weekly_vs_goal')
feat_community_habitual_relative.setName('feat_community_habitual_relative')
feat_community_nonhabitual.setName('feat_community_nonhabitual')
situ_social_activity_community.setValues(['very_low', 'low', 'medium', 'high', 'very_high'])
feat_community_habitual_weekly_vs_goal.setValues(['low', 'medium', 'high'])
feat_community_habitual_relative.setValues(['decrease', 'stable', 'increase'])
feat_community_nonhabitual.setValues(['at_none', 'at_some', 'at_all'])
situ_social_activity_community.addChild(feat_community_habitual_weekly_vs_goal)
feat_community_habitual_weekly_vs_goal.setParent(situ_social_activity_community)
situ_social_activity_community.addChild(feat_community_habitual_relative)
feat_community_habitual_relative.setParent(situ_social_activity_community)
situ_social_activity_community.addChild(feat_community_nonhabitual)
feat_community_nonhabitual.setParent(situ_social_activity_community)
situ_social_activity_community.addFunctionRow([{feat_community_habitual_weekly_vs_goal:'low', feat_community_habitual_relative:'decrease', feat_community_nonhabitual:'at_none'}, 'very_low'])
situ_social_activity_community.addFunctionRow([{feat_community_habitual_weekly_vs_goal:'low', feat_community_habitual_relative:'decrease', feat_community_nonhabitual:'at_some'}, 'very_low'])
situ_social_activity_community.addFunctionRow([{feat_community_habitual_weekly_vs_goal:'low', feat_community_habitual_relative:'decrease', feat_community_nonhabitual:'at_all'}, 'very_low'])
situ_social_activity_community.addFunctionRow([{feat_community_habitual_weekly_vs_goal:'low', feat_community_habitual_relative:'stable', feat_community_nonhabitual:'at_none'}, 'low'])
situ_social_activity_community.addFunctionRow([{feat_community_habitual_weekly_vs_goal:'low', feat_community_habitual_relative:'stable', feat_community_nonhabitual:'at_some'}, 'low'])
situ_social_activity_community.addFunctionRow([{feat_community_habitual_weekly_vs_goal:'low', feat_community_habitual_relative:'stable', feat_community_nonhabitual:'at_all'}, 'medium'])
situ_social_activity_community.addFunctionRow([{feat_community_habitual_weekly_vs_goal:'low', feat_community_habitual_relative:'increase', feat_community_nonhabitual:'at_none'}, 'medium'])
situ_social_activity_community.addFunctionRow([{feat_community_habitual_weekly_vs_goal:'low', feat_community_habitual_relative:'increase', feat_community_nonhabitual:'at_some'}, 'medium'])
situ_social_activity_community.addFunctionRow([{feat_community_habitual_weekly_vs_goal:'low', feat_community_habitual_relative:'increase', feat_community_nonhabitual:'at_all'}, 'high'])
situ_social_activity_community.addFunctionRow([{feat_community_habitual_weekly_vs_goal:'medium', feat_community_habitual_relative:'decrease', feat_community_nonhabitual:'at_none'}, 'low'])
situ_social_activity_community.addFunctionRow([{feat_community_habitual_weekly_vs_goal:'medium', feat_community_habitual_relative:'decrease', feat_community_nonhabitual:'at_some'}, 'medium'])
situ_social_activity_community.addFunctionRow([{feat_community_habitual_weekly_vs_goal:'medium', feat_community_habitual_relative:'decrease', feat_community_nonhabitual:'at_all'}, 'medium'])
situ_social_activity_community.addFunctionRow([{feat_community_habitual_weekly_vs_goal:'medium', feat_community_habitual_relative:'stable', feat_community_nonhabitual:'at_none'}, 'medium'])
situ_social_activity_community.addFunctionRow([{feat_community_habitual_weekly_vs_goal:'medium', feat_community_habitual_relative:'stable', feat_community_nonhabitual:'at_some'}, 'medium'])
situ_social_activity_community.addFunctionRow([{feat_community_habitual_weekly_vs_goal:'medium', feat_community_habitual_relative:'stable', feat_community_nonhabitual:'at_all'}, 'high'])
situ_social_activity_community.addFunctionRow([{feat_community_habitual_weekly_vs_goal:'medium', feat_community_habitual_relative:'increase', feat_community_nonhabitual:'at_none'}, 'medium'])
situ_social_activity_community.addFunctionRow([{feat_community_habitual_weekly_vs_goal:'medium', feat_community_habitual_relative:'increase', feat_community_nonhabitual:'at_some'}, 'high'])
situ_social_activity_community.addFunctionRow([{feat_community_habitual_weekly_vs_goal:'medium', feat_community_habitual_relative:'increase', feat_community_nonhabitual:'at_all'}, 'high'])
situ_social_activity_community.addFunctionRow([{feat_community_habitual_weekly_vs_goal:'high', feat_community_habitual_relative:'decrease', feat_community_nonhabitual:'at_none'}, 'medium'])
situ_social_activity_community.addFunctionRow([{feat_community_habitual_weekly_vs_goal:'high', feat_community_habitual_relative:'decrease', feat_community_nonhabitual:'at_some'}, 'high'])
situ_social_activity_community.addFunctionRow([{feat_community_habitual_weekly_vs_goal:'high', feat_community_habitual_relative:'decrease', feat_community_nonhabitual:'at_all'}, 'high'])
situ_social_activity_community.addFunctionRow([{feat_community_habitual_weekly_vs_goal:'high', feat_community_habitual_relative:'stable', feat_community_nonhabitual:'at_none'}, 'high'])
situ_social_activity_community.addFunctionRow([{feat_community_habitual_weekly_vs_goal:'high', feat_community_habitual_relative:'stable', feat_community_nonhabitual:'at_some'}, 'very_high'])
situ_social_activity_community.addFunctionRow([{feat_community_habitual_weekly_vs_goal:'high', feat_community_habitual_relative:'stable', feat_community_nonhabitual:'at_all'}, 'very_high'])
situ_social_activity_community.addFunctionRow([{feat_community_habitual_weekly_vs_goal:'high', feat_community_habitual_relative:'increase', feat_community_nonhabitual:'at_none'}, 'high'])
situ_social_activity_community.addFunctionRow([{feat_community_habitual_weekly_vs_goal:'high', feat_community_habitual_relative:'increase', feat_community_nonhabitual:'at_some'}, 'very_high'])
situ_social_activity_community.addFunctionRow([{feat_community_habitual_weekly_vs_goal:'high', feat_community_habitual_relative:'increase', feat_community_nonhabitual:'at_all'}, 'very_high'])
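The 27 function rows above form a plain lookup from the three input attributes to the output value. A dict-based sketch of that table (three sample rows only, shown for illustration — the real model evaluates the full 3x3x3 table through proDEX's Node machinery):

```python
# Three sample rows of the decision table above, re-expressed as a dict.
SAMPLE_RULES = {
    ('low', 'decrease', 'at_none'): 'very_low',
    ('medium', 'stable', 'at_all'): 'high',
    ('high', 'increase', 'at_all'): 'very_high',
}

def assess_community(weekly_vs_goal, relative, nonhabitual, rules=SAMPLE_RULES):
    """Look up situ_social_activity_community for one attribute combination."""
    return rules[(weekly_vs_goal, relative, nonhabitual)]

print(assess_community('high', 'increase', 'at_all'))  # very_high
```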
# File: tests/data/test_degenerate_colinear_01.py
# Repo: ideasman42/isect_segments-bentley_ottmann (license: MIT);
# also mirrored as test/data/test_degenerate_colinear_01.py in lolistoy/sweepline
data = (
((-0.900000, 0.900000), (1.000000, -1.000000)),
((-1.000000, 1.000000), (0.900000, -0.900000)),
((-0.900000, 0.800000), (0.900000, -1.000000)),
((-1.000000, 0.900000), (0.800000, -0.900000)),
((-0.900000, 0.700000), (0.800000, -1.000000)),
((-1.000000, 0.800000), (0.700000, -0.900000)),
((-0.900000, 0.600000), (0.700000, -1.000000)),
((-1.000000, 0.700000), (0.600000, -0.900000)),
((0.900000, -0.800000), (-0.900000, 1.000000)),
((1.000000, -0.900000), (-0.800000, 0.900000)),
((0.900000, -0.700000), (-0.800000, 1.000000)),
((1.000000, -0.800000), (-0.700000, 0.900000)),
((0.900000, -0.600000), (-0.700000, 1.000000)),
((1.000000, -0.700000), (-0.600000, 0.900000)),
((0.300000, 0.300000), (1.000000, 1.000000)),
((0.200000, 0.200000), (0.900000, 0.900000)),
((0.400000, 0.300000), (1.000000, 0.900000)),
((0.300000, 0.200000), (0.900000, 0.800000)),
((0.500000, 0.300000), (1.000000, 0.800000)),
((0.400000, 0.200000), (0.900000, 0.700000)),
((0.600000, 0.300000), (1.000000, 0.700000)),
((0.500000, 0.200000), (0.900000, 0.600000)),
((0.800000, 0.900000), (0.200000, 0.300000)),
((0.900000, 1.000000), (0.300000, 0.400000)),
((0.700000, 0.900000), (0.200000, 0.400000)),
((0.800000, 1.000000), (0.300000, 0.500000)),
((0.600000, 0.900000), (0.200000, 0.500000)),
((0.700000, 1.000000), (0.300000, 0.600000)),
((-0.900000, -0.900000), (-0.200000, -0.200000)),
((-1.000000, -1.000000), (-0.300000, -0.300000)),
((-0.800000, -0.900000), (-0.200000, -0.300000)),
((-0.900000, -1.000000), (-0.300000, -0.400000)),
((-0.700000, -0.900000), (-0.200000, -0.400000)),
((-0.800000, -1.000000), (-0.300000, -0.500000)),
((-0.600000, -0.900000), (-0.200000, -0.500000)),
((-0.700000, -1.000000), (-0.300000, -0.600000)),
((-0.400000, -0.300000), (-1.000000, -0.900000)),
((-0.300000, -0.200000), (-0.900000, -0.800000)),
((-0.500000, -0.300000), (-1.000000, -0.800000)),
((-0.400000, -0.200000), (-0.900000, -0.700000)),
((-0.600000, -0.300000), (-1.000000, -0.700000)),
((-0.500000, -0.200000), (-0.900000, -0.600000)),
)
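These segment pairs are test input for a Bentley-Ottmann sweep, and they are deliberately degenerate: many pairs are colinear and overlap. An independent naive orientation-based checker (a sketch, not the repo's implementation) shows why colinear overlap needs its own branch beyond the proper-crossing test:

```python
def orient(a, b, c):
    """Twice the signed area of triangle abc: >0 ccw, <0 cw, 0 colinear."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def on_segment(a, b, p):
    """For a point p colinear with ab: does p lie within ab's bounding box?"""
    return (min(a[0], b[0]) <= p[0] <= max(a[0], b[0])
            and min(a[1], b[1]) <= p[1] <= max(a[1], b[1]))

def segments_intersect(s1, s2):
    (a, b), (c, d) = s1, s2
    o1, o2 = orient(a, b, c), orient(a, b, d)
    o3, o4 = orient(c, d, a), orient(c, d, b)
    if o1 * o2 < 0 and o3 * o4 < 0:
        return True  # proper crossing
    # colinear / touching cases, the ones this test file exercises
    if o1 == 0 and on_segment(a, b, c):
        return True
    if o2 == 0 and on_segment(a, b, d):
        return True
    if o3 == 0 and on_segment(c, d, a):
        return True
    if o4 == 0 and on_segment(c, d, b):
        return True
    return False

# The first two segments in `data` both lie on y = -x and overlap:
print(segments_intersect(((-0.9, 0.9), (1.0, -1.0)), ((-1.0, 1.0), (0.9, -0.9))))
```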
# File: api/generated/python/azure-iiot-opc-registry/azure_opc_registry_client.py
# Repo: jaz230/Industrial-IoT (license: MIT)
# coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator 2.3.33.0
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.service_client import ServiceClient
from msrest import Configuration, Serializer, Deserializer
from msrest.pipeline import ClientRawResponse
from msrest.exceptions import HttpOperationError

from . import models
from .version import VERSION

class AzureOpcRegistryClientConfiguration(Configuration):
"""Configuration for AzureOpcRegistryClient
Note that all parameters used to create this instance are saved as instance
attributes.
:param credentials: Subscription credentials which uniquely identify
client subscription.
:type credentials: None
:param str base_url: Service URL
"""
def __init__(
self, credentials, base_url=None):
if credentials is None:
raise ValueError("Parameter 'credentials' must not be None.")
if not base_url:
base_url = 'http://localhost:9080'
super(AzureOpcRegistryClientConfiguration, self).__init__(base_url)
self.add_user_agent('azureopcregistryclient/{}'.format(VERSION))
self.credentials = credentials

class AzureOpcRegistryClient(object):
"""Azure Industrial IoT OPC UA Registry Service
:ivar config: Configuration for client.
:vartype config: AzureOpcRegistryClientConfiguration
:param credentials: Subscription credentials which uniquely identify
client subscription.
:type credentials: None
:param str base_url: Service URL
"""
def __init__(
self, credentials, base_url=None):
self.config = AzureOpcRegistryClientConfiguration(credentials, base_url)
self._client = ServiceClient(self.config.credentials, self.config)
client_models = {k: v for k, v in models.__dict__.items() if isinstance(v, type)}
self.api_version = 'v2'
self._serialize = Serializer(client_models)
self._deserialize = Deserializer(client_models)
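A minimal construction sketch. It assumes the msrest package is installed, that this module is importable as `azure_opc_registry_client`, and that you have already acquired a bearer token; `BasicTokenAuthentication` simply wraps that token for the pipeline.

```python
def make_registry_client(token, base_url=None):
    """Build an AzureOpcRegistryClient from a raw bearer token string."""
    # Deferred imports so the sketch itself has no hard dependency at load time.
    from msrest.authentication import BasicTokenAuthentication
    from azure_opc_registry_client import AzureOpcRegistryClient
    credentials = BasicTokenAuthentication({"access_token": token})
    # base_url=None falls back to http://localhost:9080 (see the
    # configuration class above).
    return AzureOpcRegistryClient(credentials, base_url=base_url)
```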
def register_server(
self, body, custom_headers=None, raw=False, **operation_config):
"""Register new server.
Registers a server solely using a discovery url. Requires that the
onboarding agent service is running and the server can be located by a
supervisor in its network using the discovery url.
:param body: Server registration request
:type body:
~azure-iiot-opc-registry.models.ServerRegistrationRequestApiModel
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.register_server.metadata['url']
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json-patch+json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct body
body_content = self._serialize.body(body, 'ServerRegistrationRequestApiModel')
# Construct and send request
request = self._client.post(url, query_parameters)
response = self._client.send(
request, header_parameters, body_content, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
register_server.metadata = {'url': '/v2/applications'}
def create_application(
self, body, custom_headers=None, raw=False, **operation_config):
"""Create new application.
The application is registered using the provided information, but it is
not associated with a supervisor. This is useful when you need to
register clients, or when you want to register a server located in a
network that is not reachable through a Twin module.
:param body: Application registration request
:type body:
~azure-iiot-opc-registry.models.ApplicationRegistrationRequestApiModel
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: ApplicationRegistrationResponseApiModel or ClientRawResponse
if raw=true
:rtype:
~azure-iiot-opc-registry.models.ApplicationRegistrationResponseApiModel
or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.create_application.metadata['url']
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json-patch+json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct body
body_content = self._serialize.body(body, 'ApplicationRegistrationRequestApiModel')
# Construct and send request
request = self._client.put(url, query_parameters)
response = self._client.send(
request, header_parameters, body_content, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('ApplicationRegistrationResponseApiModel', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
create_application.metadata = {'url': '/v2/applications'}
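The `raw=True` branch above follows the msrest pattern used by every operation in this client: wrap the deserialized body in a `ClientRawResponse` so callers can also reach the underlying HTTP response. A stub sketch of that contract (the names here are illustrative, not the real pipeline types):

```python
class _RawResponse:
    """Minimal stand-in for msrest.pipeline.ClientRawResponse."""
    def __init__(self, output, response):
        self.output = output        # the deserialized model (or None)
        self.response = response    # the underlying HTTP response

def finish(deserialized, response, raw):
    """Mirror the tail of create_application: raw takes precedence."""
    if raw:
        return _RawResponse(deserialized, response)
    return deserialized
```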
def delete_all_disabled_applications(
self, not_seen_for=None, custom_headers=None, raw=False, **operation_config):
"""Purge applications.
Purges all applications that have not been seen for a specified amount
of time.
:param not_seen_for: A duration in milliseconds
:type not_seen_for: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.delete_all_disabled_applications.metadata['url']
# Construct parameters
query_parameters = {}
if not_seen_for is not None:
query_parameters['notSeenFor'] = self._serialize.query("not_seen_for", not_seen_for, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.delete(url, query_parameters)
response = self._client.send(request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
delete_all_disabled_applications.metadata = {'url': '/v2/applications'}
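The docstring above describes `notSeenFor` as a duration in milliseconds passed as a string. Under that assumption, a small helper can render a `timedelta` in the expected form:

```python
from datetime import timedelta

def not_seen_for_value(delta):
    """Format a timedelta as the millisecond duration string notSeenFor expects."""
    return str(int(delta.total_seconds() * 1000))

# e.g. client.delete_all_disabled_applications(
#          not_seen_for=not_seen_for_value(timedelta(hours=1)))
```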
def get_list_of_applications(
self, continuation_token=None, page_size=None, custom_headers=None, raw=False, **operation_config):
"""Get list of applications.
Get all registered applications in paged form. The returned model can
contain a continuation token if more results are available. Call this
operation again using the token to retrieve more results.
:param continuation_token: Optional continuation token
:type continuation_token: str
:param page_size: Optional number of results to return
:type page_size: int
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: ApplicationInfoListApiModel or ClientRawResponse if raw=true
:rtype: ~azure-iiot-opc-registry.models.ApplicationInfoListApiModel or
~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.get_list_of_applications.metadata['url']
# Construct parameters
query_parameters = {}
if continuation_token is not None:
query_parameters['continuationToken'] = self._serialize.query("continuation_token", continuation_token, 'str')
if page_size is not None:
query_parameters['pageSize'] = self._serialize.query("page_size", page_size, 'int')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('ApplicationInfoListApiModel', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get_list_of_applications.metadata = {'url': '/v2/applications'}
def disable_application(
self, application_id, custom_headers=None, raw=False, **operation_config):
"""Disable an enabled application.
A manager can disable an application.
:param application_id: The application id
:type application_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.disable_application.metadata['url']
path_format_arguments = {
'applicationId': self._serialize.url("application_id", application_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.post(url, query_parameters)
response = self._client.send(request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
disable_application.metadata = {'url': '/v2/applications/{applicationId}/disable'}
def enable_application(
self, application_id, custom_headers=None, raw=False, **operation_config):
"""Re-enable a disabled application.
A manager can enable an application.
:param application_id: The application id
:type application_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.enable_application.metadata['url']
path_format_arguments = {
'applicationId': self._serialize.url("application_id", application_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.post(url, query_parameters)
response = self._client.send(request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
enable_application.metadata = {'url': '/v2/applications/{applicationId}/enable'}
def discover_server(
self, body, custom_headers=None, raw=False, **operation_config):
"""Discover servers.
Registers servers by running a discovery scan in a supervisor's
network. Requires that the onboarding agent service is running.
:param body: Discovery request
:type body: ~azure-iiot-opc-registry.models.DiscoveryRequestApiModel
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.discover_server.metadata['url']
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json-patch+json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct body
body_content = self._serialize.body(body, 'DiscoveryRequestApiModel')
# Construct and send request
request = self._client.post(url, query_parameters)
response = self._client.send(
request, header_parameters, body_content, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
discover_server.metadata = {'url': '/v2/applications/discover'}
def cancel(
self, request_id, custom_headers=None, raw=False, **operation_config):
"""Cancel discovery.
Cancels a discovery request using the request identifier.
:param request_id: Discovery request identifier
:type request_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.cancel.metadata['url']
path_format_arguments = {
'requestId': self._serialize.url("request_id", request_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.delete(url, query_parameters)
response = self._client.send(request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
cancel.metadata = {'url': '/v2/applications/discover/{requestId}'}
def get_application_registration(
self, application_id, custom_headers=None, raw=False, **operation_config):
"""Get application registration.
:param application_id: Application id for the server
:type application_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: ApplicationRegistrationApiModel or ClientRawResponse if
raw=true
:rtype:
~azure-iiot-opc-registry.models.ApplicationRegistrationApiModel or
~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.get_application_registration.metadata['url']
path_format_arguments = {
'applicationId': self._serialize.url("application_id", application_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('ApplicationRegistrationApiModel', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get_application_registration.metadata = {'url': '/v2/applications/{applicationId}'}
def update_application_registration(
self, application_id, body, custom_headers=None, raw=False, **operation_config):
"""Update application registration.
The application information is updated with new properties. Note that
this information might be overridden if the application is
re-discovered during a discovery run (recurring or one-time).
:param application_id: The identifier of the application
:type application_id: str
:param body: Application update request
:type body:
~azure-iiot-opc-registry.models.ApplicationRegistrationUpdateApiModel
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.update_application_registration.metadata['url']
path_format_arguments = {
'applicationId': self._serialize.url("application_id", application_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json-patch+json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct body
body_content = self._serialize.body(body, 'ApplicationRegistrationUpdateApiModel')
# Construct and send request
request = self._client.patch(url, query_parameters)
response = self._client.send(
request, header_parameters, body_content, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
update_application_registration.metadata = {'url': '/v2/applications/{applicationId}'}
def delete_application(
self, application_id, custom_headers=None, raw=False, **operation_config):
"""Unregister application.
Unregisters and deletes application and all its associated endpoints.
:param application_id: The identifier of the application
:type application_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.delete_application.metadata['url']
path_format_arguments = {
'applicationId': self._serialize.url("application_id", application_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.delete(url, query_parameters)
response = self._client.send(request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
delete_application.metadata = {'url': '/v2/applications/{applicationId}'}
def get_list_of_sites(
self, continuation_token=None, page_size=None, custom_headers=None, raw=False, **operation_config):
"""Get list of sites.
List all sites applications are registered in.
:param continuation_token: Optional continuation token
:type continuation_token: str
:param page_size: Optional number of results to return
:type page_size: int
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: ApplicationSiteListApiModel or ClientRawResponse if raw=true
:rtype: ~azure-iiot-opc-registry.models.ApplicationSiteListApiModel or
~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.get_list_of_sites.metadata['url']
# Construct parameters
query_parameters = {}
if continuation_token is not None:
query_parameters['continuationToken'] = self._serialize.query("continuation_token", continuation_token, 'str')
if page_size is not None:
query_parameters['pageSize'] = self._serialize.query("page_size", page_size, 'int')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('ApplicationSiteListApiModel', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get_list_of_sites.metadata = {'url': '/v2/applications/sites'}
def query_applications(
self, body, page_size=None, custom_headers=None, raw=False, **operation_config):
"""Query applications.
List applications that match a query model. The returned model can
contain a continuation token if more results are available. Call the
GetListOfApplications operation using the token to retrieve more
results.
:param body: Application query
:type body:
~azure-iiot-opc-registry.models.ApplicationRegistrationQueryApiModel
:param page_size: Optional number of results to return
:type page_size: int
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: ApplicationInfoListApiModel or ClientRawResponse if raw=true
:rtype: ~azure-iiot-opc-registry.models.ApplicationInfoListApiModel or
~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.query_applications.metadata['url']
# Construct parameters
query_parameters = {}
if page_size is not None:
query_parameters['pageSize'] = self._serialize.query("page_size", page_size, 'int')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json-patch+json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct body
body_content = self._serialize.body(body, 'ApplicationRegistrationQueryApiModel')
# Construct and send request
request = self._client.post(url, query_parameters)
response = self._client.send(
request, header_parameters, body_content, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('ApplicationInfoListApiModel', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
query_applications.metadata = {'url': '/v2/applications/query'}
def get_filtered_list_of_applications(
self, body, page_size=None, custom_headers=None, raw=False, **operation_config):
"""Get filtered list of applications.
Get a list of applications filtered using the specified query
parameters. The returned model can contain a continuation token if more
results are available. Call the GetListOfApplications operation using
the token to retrieve more results.
:param body: Applications Query model
:type body:
~azure-iiot-opc-registry.models.ApplicationRegistrationQueryApiModel
:param page_size: Number of results to return
:type page_size: int
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: ApplicationInfoListApiModel or ClientRawResponse if raw=true
:rtype: ~azure-iiot-opc-registry.models.ApplicationInfoListApiModel or
~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.get_filtered_list_of_applications.metadata['url']
# Construct parameters
query_parameters = {}
if page_size is not None:
query_parameters['pageSize'] = self._serialize.query("page_size", page_size, 'int')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json-patch+json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct body
body_content = self._serialize.body(body, 'ApplicationRegistrationQueryApiModel')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(
request, header_parameters, body_content, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('ApplicationInfoListApiModel', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get_filtered_list_of_applications.metadata = {'url': '/v2/applications/query'}
def subscribe(
self, body=None, custom_headers=None, raw=False, **operation_config):
"""Subscribe for application events.
Register a client to receive application events through SignalR.
:param body: The user that will receive application events.
:type body: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.subscribe.metadata['url']
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json-patch+json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct body
if body is not None:
body_content = self._serialize.body(body, 'str')
else:
body_content = None
# Construct and send request
request = self._client.put(url, query_parameters)
response = self._client.send(
request, header_parameters, body_content, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
subscribe.metadata = {'url': '/v2/applications/events'}
def unsubscribe(
self, user_id, custom_headers=None, raw=False, **operation_config):
"""Unsubscribe from application events.
Unregister a user and stop them from receiving events.
:param user_id: The user id that will not receive any more events
:type user_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.unsubscribe.metadata['url']
path_format_arguments = {
'userId': self._serialize.url("user_id", user_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.delete(url, query_parameters)
response = self._client.send(request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
unsubscribe.metadata = {'url': '/v2/applications/events/{userId}'}
def get_discoverer(
self, discoverer_id, only_server_state=None, custom_headers=None, raw=False, **operation_config):
"""Get discoverer registration information.
Returns a discoverer's registration and connectivity information. A
discoverer id corresponds to the twin module's module identity.
:param discoverer_id: Discoverer identifier
:type discoverer_id: str
:param only_server_state: Whether to include only server state, or
display current client state of the endpoint if available
:type only_server_state: bool
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: DiscovererApiModel or ClientRawResponse if raw=true
:rtype: ~azure-iiot-opc-registry.models.DiscovererApiModel or
~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.get_discoverer.metadata['url']
path_format_arguments = {
'discovererId': self._serialize.url("discoverer_id", discoverer_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
if only_server_state is not None:
query_parameters['onlyServerState'] = self._serialize.query("only_server_state", only_server_state, 'bool')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('DiscovererApiModel', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get_discoverer.metadata = {'url': '/v2/discovery/{discovererId}'}

    def update_discoverer(
            self, discoverer_id, body, custom_headers=None, raw=False, **operation_config):
        """Update discoverer information.

        Allows a caller to configure recurring discovery runs on the twin
        module identified by the discoverer id or update site information.

        :param discoverer_id: discoverer identifier
        :type discoverer_id: str
        :param body: Patch request
        :type body: ~azure-iiot-opc-registry.models.DiscovererUpdateApiModel
        :param dict custom_headers: headers that will be added to the request
        :param bool raw: returns the direct response alongside the
         deserialized response
        :param operation_config: :ref:`Operation configuration
         overrides<msrest:optionsforoperations>`.
        :return: None or ClientRawResponse if raw=true
        :rtype: None or ~msrest.pipeline.ClientRawResponse
        :raises:
         :class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
        """
        # Construct URL
        url = self.update_discoverer.metadata['url']
        path_format_arguments = {
            'discovererId': self._serialize.url("discoverer_id", discoverer_id, 'str')
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}

        # Construct headers
        header_parameters = {}
        header_parameters['Content-Type'] = 'application/json-patch+json; charset=utf-8'
        if custom_headers:
            header_parameters.update(custom_headers)

        # Construct body
        body_content = self._serialize.body(body, 'DiscovererUpdateApiModel')

        # Construct and send request
        request = self._client.patch(url, query_parameters)
        response = self._client.send(
            request, header_parameters, body_content, stream=False, **operation_config)

        if response.status_code not in [200]:
            raise HttpOperationError(self._deserialize, response)

        if raw:
            client_raw_response = ClientRawResponse(None, response)
            return client_raw_response
    update_discoverer.metadata = {'url': '/v2/discovery/{discovererId}'}

    def set_discovery_mode(
            self, discoverer_id, mode, body=None, custom_headers=None, raw=False, **operation_config):
        """Enable server discovery.

        Allows a caller to configure recurring discovery runs on the discovery
        module identified by the module id.

        :param discoverer_id: discoverer identifier
        :type discoverer_id: str
        :param mode: Discovery mode. Possible values include: 'Off', 'Local',
         'Network', 'Fast', 'Scan'
        :type mode: str or ~azure-iiot-opc-registry.models.DiscoveryMode
        :param body: Discovery configuration
        :type body: ~azure-iiot-opc-registry.models.DiscoveryConfigApiModel
        :param dict custom_headers: headers that will be added to the request
        :param bool raw: returns the direct response alongside the
         deserialized response
        :param operation_config: :ref:`Operation configuration
         overrides<msrest:optionsforoperations>`.
        :return: None or ClientRawResponse if raw=true
        :rtype: None or ~msrest.pipeline.ClientRawResponse
        :raises:
         :class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
        """
        # Construct URL
        url = self.set_discovery_mode.metadata['url']
        path_format_arguments = {
            'discovererId': self._serialize.url("discoverer_id", discoverer_id, 'str')
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}
        query_parameters['mode'] = self._serialize.query("mode", mode, 'DiscoveryMode')

        # Construct headers
        header_parameters = {}
        header_parameters['Content-Type'] = 'application/json-patch+json; charset=utf-8'
        if custom_headers:
            header_parameters.update(custom_headers)

        # Construct body
        if body is not None:
            body_content = self._serialize.body(body, 'DiscoveryConfigApiModel')
        else:
            body_content = None

        # Construct and send request
        request = self._client.post(url, query_parameters)
        response = self._client.send(
            request, header_parameters, body_content, stream=False, **operation_config)

        if response.status_code not in [200]:
            raise HttpOperationError(self._deserialize, response)

        if raw:
            client_raw_response = ClientRawResponse(None, response)
            return client_raw_response
    set_discovery_mode.metadata = {'url': '/v2/discovery/{discovererId}'}

    def get_list_of_discoverers(
            self, only_server_state=None, continuation_token=None, page_size=None, custom_headers=None, raw=False, **operation_config):
        """Get list of discoverers.

        Get all registered discoverers and therefore twin modules in paged
        form. The returned model can contain a continuation token if more
        results are available. Call this operation again using the token to
        retrieve more results.

        :param only_server_state: Whether to include only server state, or
         display current client state of the endpoint if available
        :type only_server_state: bool
        :param continuation_token: Optional continuation token
        :type continuation_token: str
        :param page_size: Optional number of results to return
        :type page_size: int
        :param dict custom_headers: headers that will be added to the request
        :param bool raw: returns the direct response alongside the
         deserialized response
        :param operation_config: :ref:`Operation configuration
         overrides<msrest:optionsforoperations>`.
        :return: DiscovererListApiModel or ClientRawResponse if raw=true
        :rtype: ~azure-iiot-opc-registry.models.DiscovererListApiModel or
         ~msrest.pipeline.ClientRawResponse
        :raises:
         :class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
        """
        # Construct URL
        url = self.get_list_of_discoverers.metadata['url']

        # Construct parameters
        query_parameters = {}
        if only_server_state is not None:
            query_parameters['onlyServerState'] = self._serialize.query("only_server_state", only_server_state, 'bool')
        if continuation_token is not None:
            query_parameters['continuationToken'] = self._serialize.query("continuation_token", continuation_token, 'str')
        if page_size is not None:
            query_parameters['pageSize'] = self._serialize.query("page_size", page_size, 'int')

        # Construct headers
        header_parameters = {}
        header_parameters['Content-Type'] = 'application/json; charset=utf-8'
        if custom_headers:
            header_parameters.update(custom_headers)

        # Construct and send request
        request = self._client.get(url, query_parameters)
        response = self._client.send(request, header_parameters, stream=False, **operation_config)

        if response.status_code not in [200]:
            raise HttpOperationError(self._deserialize, response)

        deserialized = None
        if response.status_code == 200:
            deserialized = self._deserialize('DiscovererListApiModel', response)

        if raw:
            client_raw_response = ClientRawResponse(deserialized, response)
            return client_raw_response

        return deserialized
    get_list_of_discoverers.metadata = {'url': '/v2/discovery'}

    def query_discoverers(
            self, body, only_server_state=None, page_size=None, custom_headers=None, raw=False, **operation_config):
        """Query discoverers.

        Get all discoverers that match a specified query. The returned model
        can contain a continuation token if more results are available. Call
        the GetListOfDiscoverers operation using the token to retrieve more
        results.

        :param body: Discoverers query model
        :type body: ~azure-iiot-opc-registry.models.DiscovererQueryApiModel
        :param only_server_state: Whether to include only server state, or
         display current client state of the endpoint if available
        :type only_server_state: bool
        :param page_size: Number of results to return
        :type page_size: int
        :param dict custom_headers: headers that will be added to the request
        :param bool raw: returns the direct response alongside the
         deserialized response
        :param operation_config: :ref:`Operation configuration
         overrides<msrest:optionsforoperations>`.
        :return: DiscovererListApiModel or ClientRawResponse if raw=true
        :rtype: ~azure-iiot-opc-registry.models.DiscovererListApiModel or
         ~msrest.pipeline.ClientRawResponse
        :raises:
         :class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
        """
        # Construct URL
        url = self.query_discoverers.metadata['url']

        # Construct parameters
        query_parameters = {}
        if only_server_state is not None:
            query_parameters['onlyServerState'] = self._serialize.query("only_server_state", only_server_state, 'bool')
        if page_size is not None:
            query_parameters['pageSize'] = self._serialize.query("page_size", page_size, 'int')

        # Construct headers
        header_parameters = {}
        header_parameters['Content-Type'] = 'application/json-patch+json; charset=utf-8'
        if custom_headers:
            header_parameters.update(custom_headers)

        # Construct body
        body_content = self._serialize.body(body, 'DiscovererQueryApiModel')

        # Construct and send request
        request = self._client.post(url, query_parameters)
        response = self._client.send(
            request, header_parameters, body_content, stream=False, **operation_config)

        if response.status_code not in [200]:
            raise HttpOperationError(self._deserialize, response)

        deserialized = None
        if response.status_code == 200:
            deserialized = self._deserialize('DiscovererListApiModel', response)

        if raw:
            client_raw_response = ClientRawResponse(deserialized, response)
            return client_raw_response

        return deserialized
    query_discoverers.metadata = {'url': '/v2/discovery/query'}

    def get_filtered_list_of_discoverers(
            self, site_id=None, discovery=None, connected=None, only_server_state=None, page_size=None, custom_headers=None, raw=False, **operation_config):
        """Get filtered list of discoverers.

        Get a list of discoverers filtered using the specified query
        parameters. The returned model can contain a continuation token if more
        results are available. Call the GetListOfDiscoverers operation using
        the token to retrieve more results.

        :param site_id: Site of the discoverer
        :type site_id: str
        :param discovery: Discovery mode of discoverer. Possible values
         include: 'Off', 'Local', 'Network', 'Fast', 'Scan'
        :type discovery: str or ~azure-iiot-opc-registry.models.DiscoveryMode
        :param connected: Whether to include connected or disconnected
         discoverers
        :type connected: bool
        :param only_server_state: Whether to include only server state, or
         display current client state of the endpoint if available
        :type only_server_state: bool
        :param page_size: Number of results to return
        :type page_size: int
        :param dict custom_headers: headers that will be added to the request
        :param bool raw: returns the direct response alongside the
         deserialized response
        :param operation_config: :ref:`Operation configuration
         overrides<msrest:optionsforoperations>`.
        :return: DiscovererListApiModel or ClientRawResponse if raw=true
        :rtype: ~azure-iiot-opc-registry.models.DiscovererListApiModel or
         ~msrest.pipeline.ClientRawResponse
        :raises:
         :class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
        """
        # Construct URL
        url = self.get_filtered_list_of_discoverers.metadata['url']

        # Construct parameters
        query_parameters = {}
        if site_id is not None:
            query_parameters['siteId'] = self._serialize.query("site_id", site_id, 'str')
        if discovery is not None:
            query_parameters['discovery'] = self._serialize.query("discovery", discovery, 'DiscoveryMode')
        if connected is not None:
            query_parameters['connected'] = self._serialize.query("connected", connected, 'bool')
        if only_server_state is not None:
            query_parameters['onlyServerState'] = self._serialize.query("only_server_state", only_server_state, 'bool')
        if page_size is not None:
            query_parameters['pageSize'] = self._serialize.query("page_size", page_size, 'int')

        # Construct headers
        header_parameters = {}
        header_parameters['Content-Type'] = 'application/json; charset=utf-8'
        if custom_headers:
            header_parameters.update(custom_headers)

        # Construct and send request
        request = self._client.get(url, query_parameters)
        response = self._client.send(request, header_parameters, stream=False, **operation_config)

        if response.status_code not in [200]:
            raise HttpOperationError(self._deserialize, response)

        deserialized = None
        if response.status_code == 200:
            deserialized = self._deserialize('DiscovererListApiModel', response)

        if raw:
            client_raw_response = ClientRawResponse(deserialized, response)
            return client_raw_response

        return deserialized
    get_filtered_list_of_discoverers.metadata = {'url': '/v2/discovery/query'}

    def subscribe1(
            self, body=None, custom_headers=None, raw=False, **operation_config):
        """Subscribe to discoverer registry events.

        Register a user to receive discoverer events through SignalR.

        :param body: The user id that will receive discoverer events.
        :type body: str
        :param dict custom_headers: headers that will be added to the request
        :param bool raw: returns the direct response alongside the
         deserialized response
        :param operation_config: :ref:`Operation configuration
         overrides<msrest:optionsforoperations>`.
        :return: None or ClientRawResponse if raw=true
        :rtype: None or ~msrest.pipeline.ClientRawResponse
        :raises:
         :class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
        """
        # Construct URL
        url = self.subscribe1.metadata['url']

        # Construct parameters
        query_parameters = {}

        # Construct headers
        header_parameters = {}
        header_parameters['Content-Type'] = 'application/json-patch+json; charset=utf-8'
        if custom_headers:
            header_parameters.update(custom_headers)

        # Construct body
        if body is not None:
            body_content = self._serialize.body(body, 'str')
        else:
            body_content = None

        # Construct and send request
        request = self._client.put(url, query_parameters)
        response = self._client.send(
            request, header_parameters, body_content, stream=False, **operation_config)

        if response.status_code not in [200]:
            raise HttpOperationError(self._deserialize, response)

        if raw:
            client_raw_response = ClientRawResponse(None, response)
            return client_raw_response
    subscribe1.metadata = {'url': '/v2/discovery/events'}

    def unsubscribe1(
            self, user_id, custom_headers=None, raw=False, **operation_config):
        """Unsubscribe registry events.

        Unregister a user and stop it from receiving discoverer events.

        :param user_id: The user id that will not receive any more discoverer
         events
        :type user_id: str
        :param dict custom_headers: headers that will be added to the request
        :param bool raw: returns the direct response alongside the
         deserialized response
        :param operation_config: :ref:`Operation configuration
         overrides<msrest:optionsforoperations>`.
        :return: None or ClientRawResponse if raw=true
        :rtype: None or ~msrest.pipeline.ClientRawResponse
        :raises:
         :class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
        """
        # Construct URL
        url = self.unsubscribe1.metadata['url']
        path_format_arguments = {
            'userId': self._serialize.url("user_id", user_id, 'str')
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}

        # Construct headers
        header_parameters = {}
        header_parameters['Content-Type'] = 'application/json; charset=utf-8'
        if custom_headers:
            header_parameters.update(custom_headers)

        # Construct and send request
        request = self._client.delete(url, query_parameters)
        response = self._client.send(request, header_parameters, stream=False, **operation_config)

        if response.status_code not in [200]:
            raise HttpOperationError(self._deserialize, response)

        if raw:
            client_raw_response = ClientRawResponse(None, response)
            return client_raw_response
    unsubscribe1.metadata = {'url': '/v2/discovery/events/{userId}'}

    def subscribe_by_discoverer_id(
            self, discoverer_id, body=None, custom_headers=None, raw=False, **operation_config):
        """Subscribe to discovery progress from discoverer.

        Register a client to receive discovery progress events through SignalR
        from a particular discoverer.

        :param discoverer_id: The discoverer to subscribe to
        :type discoverer_id: str
        :param body: The user id that will receive discovery events.
        :type body: str
        :param dict custom_headers: headers that will be added to the request
        :param bool raw: returns the direct response alongside the
         deserialized response
        :param operation_config: :ref:`Operation configuration
         overrides<msrest:optionsforoperations>`.
        :return: None or ClientRawResponse if raw=true
        :rtype: None or ~msrest.pipeline.ClientRawResponse
        :raises:
         :class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
        """
        # Construct URL
        url = self.subscribe_by_discoverer_id.metadata['url']
        path_format_arguments = {
            'discovererId': self._serialize.url("discoverer_id", discoverer_id, 'str')
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}

        # Construct headers
        header_parameters = {}
        header_parameters['Content-Type'] = 'application/json-patch+json; charset=utf-8'
        if custom_headers:
            header_parameters.update(custom_headers)

        # Construct body
        if body is not None:
            body_content = self._serialize.body(body, 'str')
        else:
            body_content = None

        # Construct and send request
        request = self._client.put(url, query_parameters)
        response = self._client.send(
            request, header_parameters, body_content, stream=False, **operation_config)

        if response.status_code not in [200]:
            raise HttpOperationError(self._deserialize, response)

        if raw:
            client_raw_response = ClientRawResponse(None, response)
            return client_raw_response
    subscribe_by_discoverer_id.metadata = {'url': '/v2/discovery/{discovererId}/events'}

    def subscribe_by_request_id(
            self, request_id, body=None, custom_headers=None, raw=False, **operation_config):
        """Subscribe to discovery progress for a request.

        Register a client to receive discovery progress events through SignalR
        for a particular request.

        :param request_id: The request to monitor
        :type request_id: str
        :param body: The user id that will receive discovery events.
        :type body: str
        :param dict custom_headers: headers that will be added to the request
        :param bool raw: returns the direct response alongside the
         deserialized response
        :param operation_config: :ref:`Operation configuration
         overrides<msrest:optionsforoperations>`.
        :return: None or ClientRawResponse if raw=true
        :rtype: None or ~msrest.pipeline.ClientRawResponse
        :raises:
         :class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
        """
        # Construct URL
        url = self.subscribe_by_request_id.metadata['url']
        path_format_arguments = {
            'requestId': self._serialize.url("request_id", request_id, 'str')
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}

        # Construct headers
        header_parameters = {}
        header_parameters['Content-Type'] = 'application/json-patch+json; charset=utf-8'
        if custom_headers:
            header_parameters.update(custom_headers)

        # Construct body
        if body is not None:
            body_content = self._serialize.body(body, 'str')
        else:
            body_content = None

        # Construct and send request
        request = self._client.put(url, query_parameters)
        response = self._client.send(
            request, header_parameters, body_content, stream=False, **operation_config)

        if response.status_code not in [200]:
            raise HttpOperationError(self._deserialize, response)

        if raw:
            client_raw_response = ClientRawResponse(None, response)
            return client_raw_response
    subscribe_by_request_id.metadata = {'url': '/v2/discovery/requests/{requestId}/events'}

    def unsubscribe_by_request_id(
            self, request_id, user_id, custom_headers=None, raw=False, **operation_config):
        """Unsubscribe from discovery progress for a request.

        Unregister a client and stop it from receiving discovery events for a
        particular request.

        :param request_id: The request to unsubscribe from
        :type request_id: str
        :param user_id: The user id that will not receive any more discovery
         progress
        :type user_id: str
        :param dict custom_headers: headers that will be added to the request
        :param bool raw: returns the direct response alongside the
         deserialized response
        :param operation_config: :ref:`Operation configuration
         overrides<msrest:optionsforoperations>`.
        :return: None or ClientRawResponse if raw=true
        :rtype: None or ~msrest.pipeline.ClientRawResponse
        :raises:
         :class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
        """
        # Construct URL
        url = self.unsubscribe_by_request_id.metadata['url']
        path_format_arguments = {
            'requestId': self._serialize.url("request_id", request_id, 'str'),
            'userId': self._serialize.url("user_id", user_id, 'str')
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}

        # Construct headers
        header_parameters = {}
        header_parameters['Content-Type'] = 'application/json; charset=utf-8'
        if custom_headers:
            header_parameters.update(custom_headers)

        # Construct and send request
        request = self._client.delete(url, query_parameters)
        response = self._client.send(request, header_parameters, stream=False, **operation_config)

        if response.status_code not in [200]:
            raise HttpOperationError(self._deserialize, response)

        if raw:
            client_raw_response = ClientRawResponse(None, response)
            return client_raw_response
    unsubscribe_by_request_id.metadata = {'url': '/v2/discovery/requests/{requestId}/events/{userId}'}

    def unsubscribe_by_discoverer_id(
            self, discoverer_id, user_id, custom_headers=None, raw=False, **operation_config):
        """Unsubscribe from discovery progress from discoverer.

        Unregister a client and stop it from receiving discovery events.

        :param discoverer_id: The discoverer to unsubscribe from
        :type discoverer_id: str
        :param user_id: The user id that will not receive any more discovery
         progress
        :type user_id: str
        :param dict custom_headers: headers that will be added to the request
        :param bool raw: returns the direct response alongside the
         deserialized response
        :param operation_config: :ref:`Operation configuration
         overrides<msrest:optionsforoperations>`.
        :return: None or ClientRawResponse if raw=true
        :rtype: None or ~msrest.pipeline.ClientRawResponse
        :raises:
         :class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
        """
        # Construct URL
        url = self.unsubscribe_by_discoverer_id.metadata['url']
        path_format_arguments = {
            'discovererId': self._serialize.url("discoverer_id", discoverer_id, 'str'),
            'userId': self._serialize.url("user_id", user_id, 'str')
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}

        # Construct headers
        header_parameters = {}
        header_parameters['Content-Type'] = 'application/json; charset=utf-8'
        if custom_headers:
            header_parameters.update(custom_headers)

        # Construct and send request
        request = self._client.delete(url, query_parameters)
        response = self._client.send(request, header_parameters, stream=False, **operation_config)

        if response.status_code not in [200]:
            raise HttpOperationError(self._deserialize, response)

        if raw:
            client_raw_response = ClientRawResponse(None, response)
            return client_raw_response
    unsubscribe_by_discoverer_id.metadata = {'url': '/v2/discovery/{discovererId}/events/{userId}'}

    def activate_endpoint(
            self, endpoint_id, custom_headers=None, raw=False, **operation_config):
        """Activate endpoint.

        Activates an endpoint for subsequent use in twin service. All endpoints
        must be activated using this API or through an activation filter during
        application registration or discovery.

        :param endpoint_id: endpoint identifier
        :type endpoint_id: str
        :param dict custom_headers: headers that will be added to the request
        :param bool raw: returns the direct response alongside the
         deserialized response
        :param operation_config: :ref:`Operation configuration
         overrides<msrest:optionsforoperations>`.
        :return: None or ClientRawResponse if raw=true
        :rtype: None or ~msrest.pipeline.ClientRawResponse
        :raises:
         :class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
        """
        # Construct URL
        url = self.activate_endpoint.metadata['url']
        path_format_arguments = {
            'endpointId': self._serialize.url("endpoint_id", endpoint_id, 'str')
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}

        # Construct headers
        header_parameters = {}
        header_parameters['Content-Type'] = 'application/json; charset=utf-8'
        if custom_headers:
            header_parameters.update(custom_headers)

        # Construct and send request
        request = self._client.post(url, query_parameters)
        response = self._client.send(request, header_parameters, stream=False, **operation_config)

        if response.status_code not in [200]:
            raise HttpOperationError(self._deserialize, response)

        if raw:
            client_raw_response = ClientRawResponse(None, response)
            return client_raw_response
    activate_endpoint.metadata = {'url': '/v2/endpoints/{endpointId}/activate'}

    def get_endpoint(
            self, endpoint_id, only_server_state=None, custom_headers=None, raw=False, **operation_config):
        """Get endpoint information.

        Gets information about an endpoint.

        :param endpoint_id: endpoint identifier
        :type endpoint_id: str
        :param only_server_state: Whether to include only server state, or
         display current client state of the endpoint if available
        :type only_server_state: bool
        :param dict custom_headers: headers that will be added to the request
        :param bool raw: returns the direct response alongside the
         deserialized response
        :param operation_config: :ref:`Operation configuration
         overrides<msrest:optionsforoperations>`.
        :return: EndpointInfoApiModel or ClientRawResponse if raw=true
        :rtype: ~azure-iiot-opc-registry.models.EndpointInfoApiModel or
         ~msrest.pipeline.ClientRawResponse
        :raises:
         :class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
        """
        # Construct URL
        url = self.get_endpoint.metadata['url']
        path_format_arguments = {
            'endpointId': self._serialize.url("endpoint_id", endpoint_id, 'str')
        }
        url = self._client.format_url(url, **path_format_arguments)

        # Construct parameters
        query_parameters = {}
        if only_server_state is not None:
            query_parameters['onlyServerState'] = self._serialize.query("only_server_state", only_server_state, 'bool')

        # Construct headers
        header_parameters = {}
        header_parameters['Content-Type'] = 'application/json; charset=utf-8'
        if custom_headers:
            header_parameters.update(custom_headers)

        # Construct and send request
        request = self._client.get(url, query_parameters)
        response = self._client.send(request, header_parameters, stream=False, **operation_config)

        if response.status_code not in [200]:
            raise HttpOperationError(self._deserialize, response)

        deserialized = None
        if response.status_code == 200:
            deserialized = self._deserialize('EndpointInfoApiModel', response)

        if raw:
            client_raw_response = ClientRawResponse(deserialized, response)
            return client_raw_response

        return deserialized
    get_endpoint.metadata = {'url': '/v2/endpoints/{endpointId}'}

    def get_list_of_endpoints(
            self, only_server_state=None, continuation_token=None, page_size=None, custom_headers=None, raw=False, **operation_config):
        """Get list of endpoints.

        Get all registered endpoints in paged form. The returned model can
        contain a continuation token if more results are available. Call this
        operation again using the token to retrieve more results.

        :param only_server_state: Whether to include only server state, or
         display current client state of the endpoint if available
        :type only_server_state: bool
        :param continuation_token: Optional continuation token
        :type continuation_token: str
        :param page_size: Optional number of results to return
        :type page_size: int
        :param dict custom_headers: headers that will be added to the request
        :param bool raw: returns the direct response alongside the
         deserialized response
        :param operation_config: :ref:`Operation configuration
         overrides<msrest:optionsforoperations>`.
        :return: EndpointInfoListApiModel or ClientRawResponse if raw=true
        :rtype: ~azure-iiot-opc-registry.models.EndpointInfoListApiModel or
         ~msrest.pipeline.ClientRawResponse
        :raises:
         :class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
        """
        # Construct URL
        url = self.get_list_of_endpoints.metadata['url']

        # Construct parameters
        query_parameters = {}
        if only_server_state is not None:
            query_parameters['onlyServerState'] = self._serialize.query("only_server_state", only_server_state, 'bool')
        if continuation_token is not None:
            query_parameters['continuationToken'] = self._serialize.query("continuation_token", continuation_token, 'str')
        if page_size is not None:
            query_parameters['pageSize'] = self._serialize.query("page_size", page_size, 'int')

        # Construct headers
        header_parameters = {}
        header_parameters['Content-Type'] = 'application/json; charset=utf-8'
        if custom_headers:
            header_parameters.update(custom_headers)

        # Construct and send request
        request = self._client.get(url, query_parameters)
        response = self._client.send(request, header_parameters, stream=False, **operation_config)

        if response.status_code not in [200]:
            raise HttpOperationError(self._deserialize, response)

        deserialized = None
        if response.status_code == 200:
            deserialized = self._deserialize('EndpointInfoListApiModel', response)

        if raw:
            client_raw_response = ClientRawResponse(deserialized, response)
            return client_raw_response

        return deserialized
    get_list_of_endpoints.metadata = {'url': '/v2/endpoints'}

    def query_endpoints(
            self, body, only_server_state=None, page_size=None, custom_headers=None, raw=False, **operation_config):
        """Query endpoints.

        Return endpoints that match the specified query. The returned model can
        contain a continuation token if more results are available. Call the
        GetListOfEndpoints operation using the token to retrieve more results.

        :param body: Query to match
        :type body:
         ~azure-iiot-opc-registry.models.EndpointRegistrationQueryApiModel
        :param only_server_state: Whether to include only server state, or
         display current client state of the endpoint if available
        :type only_server_state: bool
        :param page_size: Optional number of results to return
        :type page_size: int
        :param dict custom_headers: headers that will be added to the request
        :param bool raw: returns the direct response alongside the
         deserialized response
        :param operation_config: :ref:`Operation configuration
         overrides<msrest:optionsforoperations>`.
        :return: EndpointInfoListApiModel or ClientRawResponse if raw=true
        :rtype: ~azure-iiot-opc-registry.models.EndpointInfoListApiModel or
         ~msrest.pipeline.ClientRawResponse
        :raises:
         :class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
        """
        # Construct URL
        url = self.query_endpoints.metadata['url']

        # Construct parameters
        query_parameters = {}
        if only_server_state is not None:
            query_parameters['onlyServerState'] = self._serialize.query("only_server_state", only_server_state, 'bool')
        if page_size is not None:
            query_parameters['pageSize'] = self._serialize.query("page_size", page_size, 'int')

        # Construct headers
        header_parameters = {}
        header_parameters['Content-Type'] = 'application/json-patch+json; charset=utf-8'
        if custom_headers:
            header_parameters.update(custom_headers)

        # Construct body
        body_content = self._serialize.body(body, 'EndpointRegistrationQueryApiModel')

        # Construct and send request
        request = self._client.post(url, query_parameters)
        response = self._client.send(
            request, header_parameters, body_content, stream=False, **operation_config)

        if response.status_code not in [200]:
            raise HttpOperationError(self._deserialize, response)

        deserialized = None
        if response.status_code == 200:
            deserialized = self._deserialize('EndpointInfoListApiModel', response)

        if raw:
            client_raw_response = ClientRawResponse(deserialized, response)
            return client_raw_response

        return deserialized
    query_endpoints.metadata = {'url': '/v2/endpoints/query'}
def get_filtered_list_of_endpoints(
self, url=None, certificate=None, security_mode=None, security_policy=None, activated=None, connected=None, endpoint_state=None, include_not_seen_since=None, discoverer_id=None, application_id=None, supervisor_id=None, site_or_gateway_id=None, only_server_state=None, page_size=None, custom_headers=None, raw=False, **operation_config):
"""Get filtered list of endpoints.
Get a list of endpoints filtered using the specified query parameters.
The returned model can contain a continuation token if more results are
available. Call the GetListOfEndpoints operation using the token to
retrieve more results.
:param url: Endoint url for direct server access
:type url: str
:param certificate: Certificate of the endpoint
:type certificate: bytearray
:param security_mode: Security Mode. Possible values include: 'Best',
'Sign', 'SignAndEncrypt', 'None'
:type security_mode: str or
~azure-iiot-opc-registry.models.SecurityMode
:param security_policy: Security policy uri
:type security_policy: str
:param activated: Whether the endpoint was activated
:type activated: bool
:param connected: Whether the endpoint is connected on supervisor.
:type connected: bool
:param endpoint_state: The last state of the activated endpoint.
Possible values include: 'Connecting', 'NotReachable', 'Busy',
'NoTrust', 'CertificateInvalid', 'Ready', 'Error'
:type endpoint_state: str or
~azure-iiot-opc-registry.models.EndpointConnectivityState
:param include_not_seen_since: Whether to include endpoints that were
soft deleted
:type include_not_seen_since: bool
:param discoverer_id: Discoverer id to filter with
:type discoverer_id: str
:param application_id: Application id to filter
:type application_id: str
:param supervisor_id: Supervisor id to filter with
:type supervisor_id: str
:param site_or_gateway_id: Site or gateway id to filter with
:type site_or_gateway_id: str
:param only_server_state: Whether to include only server state, or
display current client state of the endpoint if available
:type only_server_state: bool
:param page_size: Optional number of results to return
:type page_size: int
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: EndpointInfoListApiModel or ClientRawResponse if raw=true
:rtype: ~azure-iiot-opc-registry.models.EndpointInfoListApiModel or
~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.get_filtered_list_of_endpoints.metadata['url']
# Construct parameters
query_parameters = {}
if url is not None:
query_parameters['url'] = self._serialize.query("url", url, 'str')
if certificate is not None:
query_parameters['certificate'] = self._serialize.query("certificate", certificate, 'bytearray')
if security_mode is not None:
query_parameters['securityMode'] = self._serialize.query("security_mode", security_mode, 'SecurityMode')
if security_policy is not None:
query_parameters['securityPolicy'] = self._serialize.query("security_policy", security_policy, 'str')
if activated is not None:
query_parameters['activated'] = self._serialize.query("activated", activated, 'bool')
if connected is not None:
query_parameters['connected'] = self._serialize.query("connected", connected, 'bool')
if endpoint_state is not None:
query_parameters['endpointState'] = self._serialize.query("endpoint_state", endpoint_state, 'EndpointConnectivityState')
if include_not_seen_since is not None:
query_parameters['includeNotSeenSince'] = self._serialize.query("include_not_seen_since", include_not_seen_since, 'bool')
if discoverer_id is not None:
query_parameters['discovererId'] = self._serialize.query("discoverer_id", discoverer_id, 'str')
if application_id is not None:
query_parameters['applicationId'] = self._serialize.query("application_id", application_id, 'str')
if supervisor_id is not None:
query_parameters['supervisorId'] = self._serialize.query("supervisor_id", supervisor_id, 'str')
if site_or_gateway_id is not None:
query_parameters['siteOrGatewayId'] = self._serialize.query("site_or_gateway_id", site_or_gateway_id, 'str')
if only_server_state is not None:
query_parameters['onlyServerState'] = self._serialize.query("only_server_state", only_server_state, 'bool')
if page_size is not None:
query_parameters['pageSize'] = self._serialize.query("page_size", page_size, 'int')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('EndpointInfoListApiModel', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get_filtered_list_of_endpoints.metadata = {'url': '/v2/endpoints/query'}
def deactivate_endpoint(
self, endpoint_id, custom_headers=None, raw=False, **operation_config):
"""Deactivate endpoint.
Deactivates the endpoint and disables access through the twin service.
:param endpoint_id: endpoint identifier
:type endpoint_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.deactivate_endpoint.metadata['url']
path_format_arguments = {
'endpointId': self._serialize.url("endpoint_id", endpoint_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.post(url, query_parameters)
response = self._client.send(request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
deactivate_endpoint.metadata = {'url': '/v2/endpoints/{endpointId}/deactivate'}
def subscribe2(
self, body=None, custom_headers=None, raw=False, **operation_config):
"""Subscribe for endpoint events.
Register a user to receive endpoint events through SignalR.
:param body: The user id that will receive endpoint events.
:type body: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.subscribe2.metadata['url']
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json-patch+json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct body
if body is not None:
body_content = self._serialize.body(body, 'str')
else:
body_content = None
# Construct and send request
request = self._client.put(url, query_parameters)
response = self._client.send(
request, header_parameters, body_content, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
subscribe2.metadata = {'url': '/v2/endpoints/events'}
def unsubscribe2(
self, user_id, custom_headers=None, raw=False, **operation_config):
"""Unsubscribe from endpoint events.
Unregister a user and stop it from receiving endpoint events.
:param user_id: The user id that will not receive any more endpoint
events
:type user_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.unsubscribe2.metadata['url']
path_format_arguments = {
'userId': self._serialize.url("user_id", user_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.delete(url, query_parameters)
response = self._client.send(request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
unsubscribe2.metadata = {'url': '/v2/endpoints/events/{userId}'}
def get_gateway(
self, gateway_id, custom_headers=None, raw=False, **operation_config):
"""Get Gateway registration information.
Returns a Gateway's registration and connectivity information. A
Gateway id corresponds to the twin module's module identity.
:param gateway_id: Gateway identifier
:type gateway_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: GatewayInfoApiModel or ClientRawResponse if raw=true
:rtype: ~azure-iiot-opc-registry.models.GatewayInfoApiModel or
~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.get_gateway.metadata['url']
path_format_arguments = {
'GatewayId': self._serialize.url("gateway_id", gateway_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('GatewayInfoApiModel', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get_gateway.metadata = {'url': '/v2/gateways/{GatewayId}'}
def update_gateway(
self, gateway_id, body, custom_headers=None, raw=False, **operation_config):
"""Update Gateway configuration.
Allows a caller to configure operations on the Gateway module
identified by the Gateway id.
:param gateway_id: Gateway identifier
:type gateway_id: str
:param body: Patch request
:type body: ~azure-iiot-opc-registry.models.GatewayUpdateApiModel
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.update_gateway.metadata['url']
path_format_arguments = {
'GatewayId': self._serialize.url("gateway_id", gateway_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json-patch+json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct body
body_content = self._serialize.body(body, 'GatewayUpdateApiModel')
# Construct and send request
request = self._client.patch(url, query_parameters)
response = self._client.send(
request, header_parameters, body_content, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
update_gateway.metadata = {'url': '/v2/gateways/{GatewayId}'}
def get_list_of_gateway(
self, continuation_token=None, page_size=None, custom_headers=None, raw=False, **operation_config):
"""Get list of Gateways.
Get all registered Gateways (and therefore twin modules) in paged form.
The returned model can contain a continuation token if more results are
available. Call this operation again using the token to retrieve more
results.
:param continuation_token: Optional Continuation token
:type continuation_token: str
:param page_size: Optional number of results to return
:type page_size: int
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: GatewayListApiModel or ClientRawResponse if raw=true
:rtype: ~azure-iiot-opc-registry.models.GatewayListApiModel or
~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.get_list_of_gateway.metadata['url']
# Construct parameters
query_parameters = {}
if continuation_token is not None:
query_parameters['continuationToken'] = self._serialize.query("continuation_token", continuation_token, 'str')
if page_size is not None:
query_parameters['pageSize'] = self._serialize.query("page_size", page_size, 'int')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('GatewayListApiModel', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get_list_of_gateway.metadata = {'url': '/v2/gateways'}
def query_gateway(
self, body, page_size=None, custom_headers=None, raw=False, **operation_config):
"""Query Gateways.
Get all Gateways that match a specified query. The returned model can
contain a continuation token if more results are available. Call the
GetListOfGateway operation using the token to retrieve more results.
:param body: Gateway query model
:type body: ~azure-iiot-opc-registry.models.GatewayQueryApiModel
:param page_size: Number of results to return
:type page_size: int
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: GatewayListApiModel or ClientRawResponse if raw=true
:rtype: ~azure-iiot-opc-registry.models.GatewayListApiModel or
~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.query_gateway.metadata['url']
# Construct parameters
query_parameters = {}
if page_size is not None:
query_parameters['pageSize'] = self._serialize.query("page_size", page_size, 'int')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json-patch+json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct body
body_content = self._serialize.body(body, 'GatewayQueryApiModel')
# Construct and send request
request = self._client.post(url, query_parameters)
response = self._client.send(
request, header_parameters, body_content, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('GatewayListApiModel', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
query_gateway.metadata = {'url': '/v2/gateways/query'}
def get_filtered_list_of_gateway(
self, site_id=None, connected=None, page_size=None, custom_headers=None, raw=False, **operation_config):
"""Get filtered list of Gateways.
Get a list of Gateways filtered using the specified query parameters.
The returned model can contain a continuation token if more results are
available. Call the GetListOfGateway operation using the token to
retrieve more results.
:param site_id: Site of the Gateway
:type site_id: str
:param connected: Whether to include connected or disconnected Gateways
:type connected: bool
:param page_size: Number of results to return
:type page_size: int
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: GatewayListApiModel or ClientRawResponse if raw=true
:rtype: ~azure-iiot-opc-registry.models.GatewayListApiModel or
~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.get_filtered_list_of_gateway.metadata['url']
# Construct parameters
query_parameters = {}
if site_id is not None:
query_parameters['siteId'] = self._serialize.query("site_id", site_id, 'str')
if connected is not None:
query_parameters['connected'] = self._serialize.query("connected", connected, 'bool')
if page_size is not None:
query_parameters['pageSize'] = self._serialize.query("page_size", page_size, 'int')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('GatewayListApiModel', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get_filtered_list_of_gateway.metadata = {'url': '/v2/gateways/query'}
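`query_gateway` (POST with a body) and `get_filtered_list_of_gateway` (GET with query parameters) target the same `/v2/gateways/query` route, and in both cases only non-`None` arguments are serialized into the request. A hypothetical helper mirroring that optional-parameter pattern with the standard library (`build_gateway_query` is illustrative, not part of this SDK):

```python
from urllib.parse import urlencode

def build_gateway_query(site_id=None, connected=None, page_size=None):
    # Mirror the generated pattern above: only parameters that were
    # actually supplied end up in the query string.
    params = {}
    if site_id is not None:
        params['siteId'] = site_id
    if connected is not None:
        # Booleans serialize as lowercase 'true'/'false' on the wire.
        params['connected'] = str(connected).lower()
    if page_size is not None:
        params['pageSize'] = str(page_size)
    return urlencode(params)

qs = build_gateway_query(site_id="site1", connected=True)
```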
def subscribe3(
self, body=None, custom_headers=None, raw=False, **operation_config):
"""Subscribe to Gateway registry events.
Register a user to receive Gateway events through SignalR.
:param body: The user id that will receive Gateway events.
:type body: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.subscribe3.metadata['url']
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json-patch+json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct body
if body is not None:
body_content = self._serialize.body(body, 'str')
else:
body_content = None
# Construct and send request
request = self._client.put(url, query_parameters)
response = self._client.send(
request, header_parameters, body_content, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
subscribe3.metadata = {'url': '/v2/gateways/events'}
def unsubscribe3(
self, user_id, custom_headers=None, raw=False, **operation_config):
"""Unsubscribe registry events.
Unregister a user and stop it from receiving Gateway events.
:param user_id: The user id that will not receive any more Gateway
events
:type user_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.unsubscribe3.metadata['url']
path_format_arguments = {
'userId': self._serialize.url("user_id", user_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.delete(url, query_parameters)
response = self._client.send(request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
unsubscribe3.metadata = {'url': '/v2/gateways/events/{userId}'}
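The subscribe/unsubscribe pairs above bracket a simple registration lifecycle: `subscribe*` PUTs the plain user id as the request body, and `unsubscribe*` DELETEs `/events/{userId}`. An in-memory sketch of that lifecycle, with `_EventRegistry` as a hypothetical stand-in for the SignalR registration endpoints:

```python
class _EventRegistry:
    """In-memory stand-in for the SignalR event registration endpoints."""
    def __init__(self):
        self._subscribers = set()
    def subscribe(self, user_id):
        # Mirrors PUT /v2/gateways/events with the user id as the body;
        # the generated client tolerates a None body.
        if user_id is not None:
            self._subscribers.add(user_id)
    def unsubscribe(self, user_id):
        # Mirrors DELETE /v2/gateways/events/{userId}.
        self._subscribers.discard(user_id)
    def is_subscribed(self, user_id):
        return user_id in self._subscribers

registry = _EventRegistry()
registry.subscribe("user-1")
subscribed = registry.is_subscribed("user-1")
registry.unsubscribe("user-1")
```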
def get_publisher(
self, publisher_id, only_server_state=None, custom_headers=None, raw=False, **operation_config):
"""Get publisher registration information.
Returns a publisher's registration and connectivity information. A
publisher id corresponds to the twin module's module identity.
:param publisher_id: Publisher identifier
:type publisher_id: str
:param only_server_state: Whether to include only server state, or
display current client state of the endpoint if available
:type only_server_state: bool
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: PublisherApiModel or ClientRawResponse if raw=true
:rtype: ~azure-iiot-opc-registry.models.PublisherApiModel or
~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.get_publisher.metadata['url']
path_format_arguments = {
'publisherId': self._serialize.url("publisher_id", publisher_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
if only_server_state is not None:
query_parameters['onlyServerState'] = self._serialize.query("only_server_state", only_server_state, 'bool')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('PublisherApiModel', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get_publisher.metadata = {'url': '/v2/publishers/{publisherId}'}
def update_publisher(
self, publisher_id, body, custom_headers=None, raw=False, **operation_config):
"""Update publisher configuration.
Allows a caller to configure operations on the publisher module
identified by the publisher id.
:param publisher_id: Publisher identifier
:type publisher_id: str
:param body: Patch request
:type body: ~azure-iiot-opc-registry.models.PublisherUpdateApiModel
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.update_publisher.metadata['url']
path_format_arguments = {
'publisherId': self._serialize.url("publisher_id", publisher_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json-patch+json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct body
body_content = self._serialize.body(body, 'PublisherUpdateApiModel')
# Construct and send request
request = self._client.patch(url, query_parameters)
response = self._client.send(
request, header_parameters, body_content, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
update_publisher.metadata = {'url': '/v2/publishers/{publisherId}'}
def get_list_of_publisher(
self, only_server_state=None, continuation_token=None, page_size=None, custom_headers=None, raw=False, **operation_config):
"""Get list of publishers.
Get all registered publishers (and therefore twin modules) in paged form.
The returned model can contain a continuation token if more results are
available. Call this operation again using the token to retrieve more
results.
:param only_server_state: Whether to include only server state, or
display current client state of the endpoint if available
:type only_server_state: bool
:param continuation_token: Optional Continuation token
:type continuation_token: str
:param page_size: Optional number of results to return
:type page_size: int
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: PublisherListApiModel or ClientRawResponse if raw=true
:rtype: ~azure-iiot-opc-registry.models.PublisherListApiModel or
~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.get_list_of_publisher.metadata['url']
# Construct parameters
query_parameters = {}
if only_server_state is not None:
query_parameters['onlyServerState'] = self._serialize.query("only_server_state", only_server_state, 'bool')
if continuation_token is not None:
query_parameters['continuationToken'] = self._serialize.query("continuation_token", continuation_token, 'str')
if page_size is not None:
query_parameters['pageSize'] = self._serialize.query("page_size", page_size, 'int')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('PublisherListApiModel', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get_list_of_publisher.metadata = {'url': '/v2/publishers'}
def query_publisher(
self, body, only_server_state=None, page_size=None, custom_headers=None, raw=False, **operation_config):
"""Query publishers.
Get all publishers that match a specified query. The returned model can
contain a continuation token if more results are available. Call the
GetListOfPublisher operation using the token to retrieve more results.
:param body: Publisher query model
:type body: ~azure-iiot-opc-registry.models.PublisherQueryApiModel
:param only_server_state: Whether to include only server state, or
display current client state of the endpoint if available
:type only_server_state: bool
:param page_size: Number of results to return
:type page_size: int
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: PublisherListApiModel or ClientRawResponse if raw=true
:rtype: ~azure-iiot-opc-registry.models.PublisherListApiModel or
~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.query_publisher.metadata['url']
# Construct parameters
query_parameters = {}
if only_server_state is not None:
query_parameters['onlyServerState'] = self._serialize.query("only_server_state", only_server_state, 'bool')
if page_size is not None:
query_parameters['pageSize'] = self._serialize.query("page_size", page_size, 'int')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json-patch+json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct body
body_content = self._serialize.body(body, 'PublisherQueryApiModel')
# Construct and send request
request = self._client.post(url, query_parameters)
response = self._client.send(
request, header_parameters, body_content, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('PublisherListApiModel', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
query_publisher.metadata = {'url': '/v2/publishers/query'}
def get_filtered_list_of_publisher(
self, site_id=None, connected=None, only_server_state=None, page_size=None, custom_headers=None, raw=False, **operation_config):
"""Get filtered list of publishers.
Get a list of publishers filtered using the specified query parameters.
The returned model can contain a continuation token if more results are
available. Call the GetListOfPublisher operation using the token to
retrieve more results.
:param site_id: Site for the publishers
:type site_id: str
:param connected: Whether to include connected or disconnected publishers
:type connected: bool
:param only_server_state: Whether to include only server state, or
display current client state of the endpoint if available
:type only_server_state: bool
:param page_size: Number of results to return
:type page_size: int
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: PublisherListApiModel or ClientRawResponse if raw=true
:rtype: ~azure-iiot-opc-registry.models.PublisherListApiModel or
~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.get_filtered_list_of_publisher.metadata['url']
# Construct parameters
query_parameters = {}
if site_id is not None:
query_parameters['siteId'] = self._serialize.query("site_id", site_id, 'str')
if connected is not None:
query_parameters['connected'] = self._serialize.query("connected", connected, 'bool')
if only_server_state is not None:
query_parameters['onlyServerState'] = self._serialize.query("only_server_state", only_server_state, 'bool')
if page_size is not None:
query_parameters['pageSize'] = self._serialize.query("page_size", page_size, 'int')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('PublisherListApiModel', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get_filtered_list_of_publisher.metadata = {'url': '/v2/publishers/query'}
def subscribe4(
self, body=None, custom_headers=None, raw=False, **operation_config):
"""Subscribe to publisher registry events.
Register a user to receive publisher events through SignalR.
:param body: The user id that will receive publisher events.
:type body: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.subscribe4.metadata['url']
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json-patch+json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct body
if body is not None:
body_content = self._serialize.body(body, 'str')
else:
body_content = None
# Construct and send request
request = self._client.put(url, query_parameters)
response = self._client.send(
request, header_parameters, body_content, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
subscribe4.metadata = {'url': '/v2/publishers/events'}
def unsubscribe4(
self, user_id, custom_headers=None, raw=False, **operation_config):
"""Unsubscribe registry events.
Unregister a user and stop it from receiving publisher events.
:param user_id: The user id that will not receive any more publisher
events
:type user_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.unsubscribe4.metadata['url']
path_format_arguments = {
'userId': self._serialize.url("user_id", user_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.delete(url, query_parameters)
response = self._client.send(request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
unsubscribe4.metadata = {'url': '/v2/publishers/events/{userId}'}
def get_supervisor(
self, supervisor_id, only_server_state=None, custom_headers=None, raw=False, **operation_config):
"""Get supervisor registration information.
Returns a supervisor's registration and connectivity information. A
supervisor id corresponds to the twin module's module identity.
:param supervisor_id: Supervisor identifier
:type supervisor_id: str
:param only_server_state: Whether to include only server state, or
display current client state of the endpoint if available
:type only_server_state: bool
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: SupervisorApiModel or ClientRawResponse if raw=true
:rtype: ~azure-iiot-opc-registry.models.SupervisorApiModel or
~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.get_supervisor.metadata['url']
path_format_arguments = {
'supervisorId': self._serialize.url("supervisor_id", supervisor_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
if only_server_state is not None:
query_parameters['onlyServerState'] = self._serialize.query("only_server_state", only_server_state, 'bool')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('SupervisorApiModel', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get_supervisor.metadata = {'url': '/v2/supervisors/{supervisorId}'}
def update_supervisor(
self, supervisor_id, body, custom_headers=None, raw=False, **operation_config):
"""Update supervisor information.
Allows a caller to configure recurring discovery runs on the twin
module identified by the supervisor id or update site information.
:param supervisor_id: supervisor identifier
:type supervisor_id: str
:param body: Patch request
:type body: ~azure-iiot-opc-registry.models.SupervisorUpdateApiModel
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.update_supervisor.metadata['url']
path_format_arguments = {
'supervisorId': self._serialize.url("supervisor_id", supervisor_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json-patch+json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct body
body_content = self._serialize.body(body, 'SupervisorUpdateApiModel')
# Construct and send request
request = self._client.patch(url, query_parameters)
response = self._client.send(
request, header_parameters, body_content, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
update_supervisor.metadata = {'url': '/v2/supervisors/{supervisorId}'}
def get_supervisor_status(
self, supervisor_id, custom_headers=None, raw=False, **operation_config):
"""Get runtime status of supervisor.
Allows a caller to get runtime status for a supervisor.
:param supervisor_id: supervisor identifier
:type supervisor_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: SupervisorStatusApiModel or ClientRawResponse if raw=true
:rtype: ~azure-iiot-opc-registry.models.SupervisorStatusApiModel or
~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.get_supervisor_status.metadata['url']
path_format_arguments = {
'supervisorId': self._serialize.url("supervisor_id", supervisor_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('SupervisorStatusApiModel', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get_supervisor_status.metadata = {'url': '/v2/supervisors/{supervisorId}/status'}
def reset_supervisor(
self, supervisor_id, custom_headers=None, raw=False, **operation_config):
"""Reset supervisor.
Allows a caller to reset the twin module using its supervisor identity
identifier.
:param supervisor_id: supervisor identifier
:type supervisor_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.reset_supervisor.metadata['url']
path_format_arguments = {
'supervisorId': self._serialize.url("supervisor_id", supervisor_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.post(url, query_parameters)
response = self._client.send(request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
reset_supervisor.metadata = {'url': '/v2/supervisors/{supervisorId}/reset'}
def get_list_of_supervisors(
self, only_server_state=None, continuation_token=None, page_size=None, custom_headers=None, raw=False, **operation_config):
"""Get list of supervisors.
Get all registered supervisors and therefore twin modules in paged
form. The returned model can contain a continuation token if more
results are available. Call this operation again using the token to
retrieve more results.
:param only_server_state: Whether to include only server state, or
display current client state of the endpoint if available
:type only_server_state: bool
:param continuation_token: Optional Continuation token
:type continuation_token: str
:param page_size: Optional number of results to return
:type page_size: int
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: SupervisorListApiModel or ClientRawResponse if raw=true
:rtype: ~azure-iiot-opc-registry.models.SupervisorListApiModel or
~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.get_list_of_supervisors.metadata['url']
# Construct parameters
query_parameters = {}
if only_server_state is not None:
query_parameters['onlyServerState'] = self._serialize.query("only_server_state", only_server_state, 'bool')
if continuation_token is not None:
query_parameters['continuationToken'] = self._serialize.query("continuation_token", continuation_token, 'str')
if page_size is not None:
query_parameters['pageSize'] = self._serialize.query("page_size", page_size, 'int')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('SupervisorListApiModel', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get_list_of_supervisors.metadata = {'url': '/v2/supervisors'}
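The docstrings above describe continuation-token paging: call the same operation again with the returned token until no token comes back. A standalone sketch of that loop (a simulated two-page service, no real client; all names here are illustrative, not from this SDK):

```python
class Page:
    def __init__(self, items, continuation_token=None):
        self.items = items
        self.continuation_token = continuation_token

def fake_get_list_of_supervisors(continuation_token=None):
    # Two simulated pages keyed by token; None means the first page.
    pages = {None: (["supervisor-1", "supervisor-2"], "tok-1"),
             "tok-1": (["supervisor-3"], None)}
    items, next_token = pages[continuation_token]
    return Page(items, next_token)

supervisors = []
page = fake_get_list_of_supervisors()
supervisors.extend(page.items)
while page.continuation_token:
    # Re-issue the call with the token from the previous page.
    page = fake_get_list_of_supervisors(page.continuation_token)
    supervisors.extend(page.items)
print(supervisors)  # ['supervisor-1', 'supervisor-2', 'supervisor-3']
```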
def query_supervisors(
self, body, only_server_state=None, page_size=None, custom_headers=None, raw=False, **operation_config):
"""Query supervisors.
Get all supervisors that match a specified query. The returned model
can contain a continuation token if more results are available. Call
the GetListOfSupervisors operation using the token to retrieve more
results.
:param body: Supervisors query model
:type body: ~azure-iiot-opc-registry.models.SupervisorQueryApiModel
:param only_server_state: Whether to include only server state, or
display current client state of the endpoint if available
:type only_server_state: bool
:param page_size: Number of results to return
:type page_size: int
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: SupervisorListApiModel or ClientRawResponse if raw=true
:rtype: ~azure-iiot-opc-registry.models.SupervisorListApiModel or
~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.query_supervisors.metadata['url']
# Construct parameters
query_parameters = {}
if only_server_state is not None:
query_parameters['onlyServerState'] = self._serialize.query("only_server_state", only_server_state, 'bool')
if page_size is not None:
query_parameters['pageSize'] = self._serialize.query("page_size", page_size, 'int')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json-patch+json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct body
body_content = self._serialize.body(body, 'SupervisorQueryApiModel')
# Construct and send request
request = self._client.post(url, query_parameters)
response = self._client.send(
request, header_parameters, body_content, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('SupervisorListApiModel', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
query_supervisors.metadata = {'url': '/v2/supervisors/query'}
def get_filtered_list_of_supervisors(
self, site_id=None, connected=None, only_server_state=None, page_size=None, custom_headers=None, raw=False, **operation_config):
"""Get filtered list of supervisors.
Get a list of supervisors filtered using the specified query
parameters. The returned model can contain a continuation token if more
results are available. Call the GetListOfSupervisors operation using
the token to retrieve more results.
:param site_id: Site for the supervisors
:type site_id: str
:param connected: Whether to include connected or disconnected supervisors
:type connected: bool
:param only_server_state: Whether to include only server state, or
display current client state of the endpoint if available
:type only_server_state: bool
:param page_size: Number of results to return
:type page_size: int
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: SupervisorListApiModel or ClientRawResponse if raw=true
:rtype: ~azure-iiot-opc-registry.models.SupervisorListApiModel or
~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.get_filtered_list_of_supervisors.metadata['url']
# Construct parameters
query_parameters = {}
if site_id is not None:
query_parameters['siteId'] = self._serialize.query("site_id", site_id, 'str')
if connected is not None:
query_parameters['connected'] = self._serialize.query("connected", connected, 'bool')
if only_server_state is not None:
query_parameters['onlyServerState'] = self._serialize.query("only_server_state", only_server_state, 'bool')
if page_size is not None:
query_parameters['pageSize'] = self._serialize.query("page_size", page_size, 'int')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('SupervisorListApiModel', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get_filtered_list_of_supervisors.metadata = {'url': '/v2/supervisors/query'}
def subscribe5(
self, body=None, custom_headers=None, raw=False, **operation_config):
"""Subscribe to supervisor registry events.
Register a user to receive supervisor events through SignalR.
:param body: The user id that will receive supervisor events.
:type body: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.subscribe5.metadata['url']
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json-patch+json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct body
if body is not None:
body_content = self._serialize.body(body, 'str')
else:
body_content = None
# Construct and send request
request = self._client.put(url, query_parameters)
response = self._client.send(
request, header_parameters, body_content, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
subscribe5.metadata = {'url': '/v2/supervisors/events'}
def unsubscribe5(
self, user_id, custom_headers=None, raw=False, **operation_config):
"""Unsubscribe registry events.
Unregister a user and stop it from receiving supervisor events.
:param user_id: The user id that will not receive any more supervisor
events
:type user_id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises:
:class:`HttpOperationError<msrest.exceptions.HttpOperationError>`
"""
# Construct URL
url = self.unsubscribe5.metadata['url']
path_format_arguments = {
'userId': self._serialize.url("user_id", user_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if custom_headers:
header_parameters.update(custom_headers)
# Construct and send request
request = self._client.delete(url, query_parameters)
response = self._client.send(request, header_parameters, stream=False, **operation_config)
if response.status_code not in [200]:
raise HttpOperationError(self._deserialize, response)
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
unsubscribe5.metadata = {'url': '/v2/supervisors/events/{userId}'}
# File: pyramda/dictionary/getitem_test.py (repo: sergiors/pyramda, license: MIT)
from .getitem import getitem
from pyramda.private.asserts import assert_equal
def getitem_nocurry_item():
assert_equal(getitem("a", {"a": 1}), 1)
def getitem_curry_item():
assert_equal(getitem("a")({"a": 1}), 1)
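Both call styles above rely on `getitem` being curried. A minimal standalone sketch of that behavior (an illustrative reimplementation; pyramda's actual `getitem` is built on its generic curry helper):

```python
# Curried lookup: with both arguments it returns the value directly;
# with only the key it returns a function awaiting the dict.
def getitem(key, d=None):
    if d is None:
        return lambda d_: d_[key]
    return d[key]

print(getitem("a", {"a": 1}))  # 1 (direct call)
print(getitem("a")({"a": 1}))  # 1 (curried call)
```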
# File: openprocurement/audit/monitoring/tests/test_credentials.py (repo: ProzorroUKR/openprocurement.audit.api, license: Apache-2.0)
from hashlib import sha512
from unittest import mock
from openprocurement_client.exceptions import ResourceError
from openprocurement.audit.monitoring.tests.base import BaseWebTest
from openprocurement.audit.monitoring.tests.utils import get_errors_field_names
class MonitoringCredentialsResourceTest(BaseWebTest):
def setUp(self):
super(MonitoringCredentialsResourceTest, self).setUp()
self.create_monitoring()
def test_credentials_no_access_token(self):
self.app.authorization = ('Basic', (self.broker_name, self.broker_pass))
response = self.app.patch_json(
'/monitorings/{}/credentials'.format(self.monitoring_id),
status=403
)
self.assertEqual(response.status_code, 403)
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(('body', 'data'), next(get_errors_field_names(response, 'No access token was provided.')))
@mock.patch('openprocurement.audit.monitoring.validation.TendersClient')
def test_credentials_query_param_access_token(self, client_class_mock):
client_class_mock.return_value.extract_credentials.return_value = {
'data': {'tender_token': sha512(b'tender_token').hexdigest()}
}
self.app.authorization = ('Basic', (self.broker_name, self.broker_pass))
response = self.app.patch_json(
'/monitorings/{}/credentials?acc_token={}'.format(self.monitoring_id, 'tender_token')
)
self.assertEqual(response.status_code, 200)
self.assertEqual(response.content_type, 'application/json')
self.assertIn('access', response.json)
@mock.patch('openprocurement.audit.monitoring.validation.TendersClient')
def test_credentials_query_param_wrong_access_token(self, client_class_mock):
client_class_mock.return_value.extract_credentials.return_value = {
'data': {'tender_token': sha512(b'tender_token').hexdigest()}
}
self.app.authorization = ('Basic', (self.broker_name, self.broker_pass))
response = self.app.patch_json(
'/monitorings/{}/credentials?acc_token={}'.format(self.monitoring_id, 'wrong_token'),
status=403
)
self.assertEqual(response.status_code, 403)
self.assertEqual(response.content_type, 'application/json')
@mock.patch('openprocurement.audit.monitoring.validation.TendersClient')
def test_credentials_no_tender(self, client_class_mock):
client_class_mock.return_value.extract_credentials.side_effect = ResourceError(mock.Mock(status_code=404))
self.app.authorization = ('Basic', (self.broker_name, self.broker_pass))
response = self.app.patch_json(
'/monitorings/{}/credentials?acc_token={}'.format(self.monitoring_id, 'tender_token'),
status=403
)
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(
('body', 'data'),
next(get_errors_field_names(response, 'Tender {} not found'.format("f" * 32))))
@mock.patch('openprocurement.audit.monitoring.validation.TendersClient')
def test_credentials_tender_error(self, client_class_mock):
client_class_mock.return_value.extract_credentials.side_effect = ResourceError(mock.Mock(status_code=555))
self.app.authorization = ('Basic', (self.broker_name, self.broker_pass))
response = self.app.patch_json(
'/monitorings/{}/credentials?acc_token={}'.format(self.monitoring_id, 'tender_token'),
status=555
)
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(
response.json,
{'status': 'error', 'errors': [
{'location': 'body', 'name': 'data', 'description': 'Unsuccessful tender request'}
]}
)
@mock.patch('openprocurement.audit.monitoring.validation.TendersClient')
def test_credentials_header_access_token(self, client_class_mock):
client_class_mock.return_value.extract_credentials.return_value = {
'data': {'tender_token': sha512(b'tender_token').hexdigest()}
}
self.app.authorization = ('Basic', (self.broker_name, self.broker_pass))
response = self.app.patch_json(
'/monitorings/{}/credentials'.format(self.monitoring_id, 'tender_token'),
headers={'X-access-token': 'tender_token'}
)
self.assertEqual(response.status_code, 200)
self.assertEqual(response.content_type, 'application/json')
self.assertIn('access', response.json)
@mock.patch('openprocurement.audit.monitoring.validation.TendersClient')
def test_credentials_header_wrong_access_token(self, client_class_mock):
client_class_mock.return_value.extract_credentials.return_value = {
'data': {'tender_token': sha512(b'tender_token').hexdigest()}
}
self.app.authorization = ('Basic', (self.broker_name, self.broker_pass))
response = self.app.patch_json(
'/monitorings/{}/credentials'.format(self.monitoring_id, 'tender_token'),
headers={'X-access-token': 'wrong_token'},
status=403
)
self.assertEqual(response.status_code, 403)
self.assertEqual(response.content_type, 'application/json')
@mock.patch('openprocurement.audit.monitoring.validation.TendersClient')
def test_credentials_body_access_token(self, client_class_mock):
client_class_mock.return_value.extract_credentials.return_value = {
'data': {
'tender_token': sha512(b'tender_token').hexdigest()
}
}
self.app.authorization = ('Basic', (self.broker_name, self.broker_pass))
response = self.app.patch_json(
'/monitorings/{}/credentials'.format(self.monitoring_id, 'tender_token'),
{'access': {'token': 'tender_token'}}
)
self.assertEqual(response.status_code, 200)
self.assertEqual(response.content_type, 'application/json')
self.assertIn('access', response.json)
@mock.patch('openprocurement.audit.monitoring.validation.TendersClient')
def test_credentials_body_wrong_access_token(self, client_class_mock):
client_class_mock.return_value.extract_credentials.return_value = {
'data': {'tender_token': sha512(b'tender_token').hexdigest()}
}
self.app.authorization = ('Basic', (self.broker_name, self.broker_pass))
response = self.app.patch_json(
'/monitorings/{}/credentials'.format(self.monitoring_id, 'tender_token'),
{'access': {'token': 'wrong_token'}},
status=403
)
self.assertEqual(response.status_code, 403)
self.assertEqual(response.content_type, 'application/json')
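The mocks in these tests all stub `extract_credentials` to return `sha512(b'tender_token').hexdigest()`, so the service never sees the raw token, only its digest. A simplified standalone sketch of the comparison the right/wrong-token cases exercise:

```python
from hashlib import sha512

# The registry side stores only the digest of the tender token...
stored = sha512(b"tender_token").hexdigest()

# ...and validates a presented token by hashing and comparing.
def is_valid(presented):
    return sha512(presented.encode()).hexdigest() == stored

print(is_valid("tender_token"))  # True  -> 200 in the tests above
print(is_valid("wrong_token"))   # False -> 403 in the tests above
```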
# File: src/util.py (repo: cao5zy/ali_mini_program_component_helper, license: MIT)
import os
import shutil
def init_with_template(root_folder, app_name):
shutil.copytree(os.path.join('./tests/templates', app_name), os.path.join(root_folder, app_name))
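An illustrative run of the same pattern `init_with_template` uses: copy `<templates>/<app_name>` into `<root_folder>/<app_name>`. The paths below are temporary stand-ins, not the project's real `./tests/templates` layout:

```python
import os
import shutil
import tempfile

templates = tempfile.mkdtemp()    # stand-in for './tests/templates'
root_folder = tempfile.mkdtemp()  # stand-in for the target root

# Prepare a fake template app with one file in it.
os.makedirs(os.path.join(templates, "demo_app"))
with open(os.path.join(templates, "demo_app", "app.json"), "w") as f:
    f.write("{}")

# copytree requires that the destination directory not already exist.
shutil.copytree(os.path.join(templates, "demo_app"),
                os.path.join(root_folder, "demo_app"))
print(sorted(os.listdir(os.path.join(root_folder, "demo_app"))))  # ['app.json']
```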
42853bd823551a1a6e171c2dc938ebd9d0aaffa7 | 109 | py | Python | integration/tests_ok/assert_base64.py | jleverenz/hurl | b81ca8ab7e0e409ec0c074fd8e118721ff4d3fb3 | [
"Apache-2.0"
] | null | null | null | integration/tests_ok/assert_base64.py | jleverenz/hurl | b81ca8ab7e0e409ec0c074fd8e118721ff4d3fb3 | [
"Apache-2.0"
] | null | null | null | integration/tests_ok/assert_base64.py | jleverenz/hurl | b81ca8ab7e0e409ec0c074fd8e118721ff4d3fb3 | [
"Apache-2.0"
] | null | null | null | from app import app
@app.route("/assert-base64")
def assert_base64():
return "line1\nline2\r\nline3\n"
| 15.571429 | 36 | 0.706422 | 17 | 109 | 4.470588 | 0.764706 | 0.315789 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074468 | 0.137615 | 109 | 6 | 37 | 18.166667 | 0.734043 | 0 | 0 | 0 | 0 | 0 | 0.33945 | 0.211009 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 8 |
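The `assert_base64` route deliberately mixes `\n` and `\r\n` line endings so a client-side base64 assertion sees the exact bytes. The expected base64 value for that body can be computed with the standard library:

```python
import base64

# The exact bytes the /assert-base64 route returns, mixed line endings included.
body = b"line1\nline2\r\nline3\n"
encoded = base64.b64encode(body).decode('ascii')
```

Because base64 encodes raw bytes, the `\r\n` in the middle survives the round trip, which is what makes the route useful for testing binary-safe assertions.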
c429f7cc7f2fe49769a27e5b6bf29d8d6528bd6d | 17,700 | py | Python | tripleoclient/tests/v1/undercloud/test_backup.py | paramite/python-tripleoclient | 2d7cf1b8061eaaf6fc4b31be4490586a1f80133d | [
"Apache-2.0"
] | 39 | 2015-09-08T14:34:36.000Z | 2022-02-20T21:00:44.000Z | tripleoclient/tests/v1/undercloud/test_backup.py | paramite/python-tripleoclient | 2d7cf1b8061eaaf6fc4b31be4490586a1f80133d | [
"Apache-2.0"
] | 1 | 2021-02-28T06:06:29.000Z | 2021-02-28T06:06:29.000Z | tripleoclient/tests/v1/undercloud/test_backup.py | paramite/python-tripleoclient | 2d7cf1b8061eaaf6fc4b31be4490586a1f80133d | [
"Apache-2.0"
] | 33 | 2015-10-01T17:53:04.000Z | 2022-03-10T11:50:38.000Z | # Copyright 2018 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from unittest import mock
from osc_lib.tests import utils
from tripleoclient import constants
from tripleoclient.tests import fakes
from tripleoclient.v1 import undercloud_backup
from unittest.mock import call
class TestUndercloudBackup(utils.TestCommand):
def setUp(self):
super(TestUndercloudBackup, self).setUp()
# Get the command object to test
app_args = mock.Mock()
app_args.verbose_level = 1
self.app.options = fakes.FakeOptions()
self.cmd = undercloud_backup.BackupUndercloud(self.app, app_args)
self.app.client_manager.workflow_engine = mock.Mock()
self.workflow = self.app.client_manager.workflow_engine
self.inventory = '/tmp/test_inventory.yaml'
        open(self.inventory, 'w').close()  # create an empty inventory file for the tests
@mock.patch('tripleoclient.utils.run_ansible_playbook',
autospec=True)
def test_undercloud_backup_legacy_withargs(self, mock_playbook):
arglist = [
'--add-path',
'/tmp/foo.yaml',
'--add-path',
'/tmp/bar.yaml'
]
verifylist = []
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.cmd.take_action(parsed_args)
mock_playbook.assert_called_once_with(
workdir=mock.ANY,
playbook=mock.ANY,
inventory=mock.ANY,
tags=None,
skip_tags=None,
playbook_dir=constants.ANSIBLE_TRIPLEO_PLAYBOOKS,
verbosity=3,
extra_vars={'sources_path':
'/home/stack/,/tmp/bar.yaml,/tmp/foo.yaml'})
@mock.patch('tripleoclient.utils.run_ansible_playbook',
autospec=True)
def test_undercloud_backup_withargs_remove(self, mock_playbook):
arglist = [
'--add-path',
'/tmp/foo.yaml',
'--exclude-path',
'/tmp/bar.yaml',
'--exclude-path',
'/home/stack/',
'--add-path',
'/tmp/bar.yaml'
]
verifylist = []
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.cmd.take_action(parsed_args)
mock_playbook.assert_called_once_with(
workdir=mock.ANY,
playbook=mock.ANY,
inventory=mock.ANY,
tags=None,
skip_tags=None,
verbosity=3,
playbook_dir=constants.ANSIBLE_TRIPLEO_PLAYBOOKS,
extra_vars={'sources_path':
'/tmp/foo.yaml'})
@mock.patch('tripleoclient.utils.run_ansible_playbook',
autospec=True)
def test_undercloud_backup_withargs_remove_double(self, mock_playbook):
arglist = [
'--add-path',
'/tmp/foo.yaml',
'--add-path',
'/tmp/bar.yaml',
'--exclude-path',
'/tmp/foo.yaml',
'--exclude-path',
'/tmp/foo.yaml'
]
verifylist = []
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.cmd.take_action(parsed_args)
mock_playbook.assert_called_once_with(
workdir=mock.ANY,
playbook=mock.ANY,
inventory=mock.ANY,
tags=None,
skip_tags=None,
playbook_dir=constants.ANSIBLE_TRIPLEO_PLAYBOOKS,
verbosity=3,
extra_vars={'sources_path':
'/home/stack/,/tmp/bar.yaml'})
@mock.patch('tripleoclient.utils.run_ansible_playbook',
autospec=True)
def test_undercloud_backup_withargs_remove_unex(self, mock_playbook):
arglist = [
'--add-path',
'/tmp/foo.yaml',
'--exclude-path',
'/tmp/non-existing-path.yaml'
]
verifylist = []
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.cmd.take_action(parsed_args)
mock_playbook.assert_called_once_with(
workdir=mock.ANY,
playbook=mock.ANY,
inventory=mock.ANY,
tags=None,
skip_tags=None,
playbook_dir=constants.ANSIBLE_TRIPLEO_PLAYBOOKS,
verbosity=3,
extra_vars={'sources_path':
'/home/stack/,/tmp/foo.yaml'})
@mock.patch('os.path.isfile')
@mock.patch('os.access')
@mock.patch('tripleoclient.utils.run_ansible_playbook',
autospec=True)
def test_undercloud_backup_noargs(self,
mock_playbook,
mock_access,
mock_isfile):
arglist = []
verifylist = []
mock_isfile.return_value = True
mock_access.return_value = True
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.cmd.take_action(parsed_args)
mock_playbook.assert_called_once_with(
workdir=mock.ANY,
playbook='cli-undercloud-backup.yaml',
inventory=parsed_args.inventory,
tags='bar_create_recover_image',
skip_tags=None,
playbook_dir=constants.ANSIBLE_TRIPLEO_PLAYBOOKS,
verbosity=3,
extra_vars=None
)
@mock.patch('os.path.isfile')
@mock.patch('os.access')
@mock.patch('tripleoclient.utils.run_ansible_playbook',
autospec=True)
def test_undercloud_backup_init(self,
mock_playbook,
mock_access,
mock_isfile):
arglist = [
'--init'
]
verifylist = []
mock_isfile.return_value = True
mock_access.return_value = True
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.cmd.take_action(parsed_args)
mock_playbook.assert_called_once_with(
workdir=mock.ANY,
playbook='prepare-undercloud-backup.yaml',
inventory=parsed_args.inventory,
tags='bar_setup_rear',
skip_tags=None,
playbook_dir=constants.ANSIBLE_TRIPLEO_PLAYBOOKS,
verbosity=3,
extra_vars=None
)
@mock.patch('os.path.isfile')
@mock.patch('os.access')
@mock.patch('tripleoclient.utils.run_ansible_playbook',
autospec=True)
def test_undercloud_backup_init_nfs(self,
mock_playbook,
mock_access,
mock_isfile):
arglist = [
'--init',
'nfs'
]
verifylist = []
mock_isfile.return_value = True
mock_access.return_value = True
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.cmd.take_action(parsed_args)
mock_playbook.assert_called_once_with(
workdir=mock.ANY,
playbook='prepare-nfs-backup.yaml',
inventory=parsed_args.inventory,
tags='bar_setup_nfs_server',
skip_tags=None,
playbook_dir=constants.ANSIBLE_TRIPLEO_PLAYBOOKS,
verbosity=3,
extra_vars=None
)
@mock.patch('os.path.isfile')
@mock.patch('os.access')
@mock.patch('tripleoclient.utils.run_ansible_playbook',
autospec=True)
def test_undercloud_backup_setup_nfs(self,
mock_playbook,
mock_access,
mock_isfile):
arglist = [
'--setup-nfs'
]
verifylist = []
mock_isfile.return_value = True
mock_access.return_value = True
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.cmd.take_action(parsed_args)
mock_playbook.assert_called_once_with(
workdir=mock.ANY,
playbook='prepare-nfs-backup.yaml',
inventory=parsed_args.inventory,
tags='bar_setup_nfs_server',
skip_tags=None,
playbook_dir=constants.ANSIBLE_TRIPLEO_PLAYBOOKS,
verbosity=3,
extra_vars=None
)
@mock.patch('os.path.isfile')
@mock.patch('os.access')
@mock.patch('tripleoclient.utils.run_ansible_playbook',
autospec=True)
def test_undercloud_backup_db_only(self,
mock_playbook,
mock_access,
mock_isfile):
arglist = [
'--db-only'
]
verifylist = []
mock_isfile.return_value = True
mock_access.return_value = True
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.cmd.take_action(parsed_args)
mock_playbook.assert_called_once_with(
workdir=mock.ANY,
playbook='cli-undercloud-db-backup.yaml',
inventory=parsed_args.inventory,
tags=None,
skip_tags=None,
playbook_dir=constants.ANSIBLE_TRIPLEO_PLAYBOOKS,
verbosity=3,
extra_vars=None
)
@mock.patch('os.path.isfile')
@mock.patch('os.access')
@mock.patch('tripleoclient.utils.run_ansible_playbook',
autospec=True)
def test_undercloud_backup_setup_rear(self,
mock_playbook,
mock_access,
mock_isfile):
arglist = [
'--setup-rear'
]
verifylist = []
mock_isfile.return_value = True
mock_access.return_value = True
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.cmd.take_action(parsed_args)
mock_playbook.assert_called_once_with(
workdir=mock.ANY,
playbook='prepare-undercloud-backup.yaml',
inventory=parsed_args.inventory,
tags='bar_setup_rear',
skip_tags=None,
playbook_dir=constants.ANSIBLE_TRIPLEO_PLAYBOOKS,
verbosity=3,
extra_vars=None
)
@mock.patch('os.path.isfile')
@mock.patch('os.access')
@mock.patch('tripleoclient.utils.run_ansible_playbook',
autospec=True)
def test_undercloud_backup_setup_rear_extra_vars_inline(self,
mock_playbook,
mock_access,
mock_isfile):
arglist = [
'--setup-rear',
'--extra-vars',
'{"tripleo_backup_and_restore_nfs_server": "192.168.24.1"}'
]
verifylist = []
mock_isfile.return_value = True
mock_access.return_value = True
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
extra_vars_dict = {
'tripleo_backup_and_restore_nfs_server': '192.168.24.1'
}
self.cmd.take_action(parsed_args)
mock_playbook.assert_called_once_with(
workdir=mock.ANY,
playbook='prepare-undercloud-backup.yaml',
inventory=parsed_args.inventory,
tags='bar_setup_rear',
skip_tags=None,
playbook_dir=constants.ANSIBLE_TRIPLEO_PLAYBOOKS,
verbosity=3,
extra_vars=extra_vars_dict
)
@mock.patch('os.path.isfile')
@mock.patch('os.access')
@mock.patch('tripleoclient.utils.run_ansible_playbook',
autospec=True)
def test_undercloud_backup_db_only_with_setup_options(self,
mock_playbook,
mock_access,
mock_isfile):
arglist = [
'--db-only',
'--setup-nfs',
'--setup-rear'
]
verifylist = []
mock_isfile.return_value = True
mock_access.return_value = True
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.cmd.take_action(parsed_args)
mock_playbook.assert_called_once_with(
workdir=mock.ANY,
playbook='cli-undercloud-db-backup.yaml',
inventory=parsed_args.inventory,
tags=None,
skip_tags=None,
playbook_dir=constants.ANSIBLE_TRIPLEO_PLAYBOOKS,
verbosity=3,
extra_vars=None
)
@mock.patch('tripleoclient.utils.run_ansible_playbook', autospec=True)
def test_undercloud_backup_setup_nfs_rear_with_inventory(self,
mock_playbook):
arglist = [
'--setup-nfs',
'--setup-rear',
'--inventory',
self.inventory
]
verifylist = []
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.cmd.take_action(parsed_args)
calls = [call(workdir=mock.ANY,
playbook='prepare-nfs-backup.yaml',
inventory=parsed_args.inventory,
tags='bar_setup_nfs_server',
skip_tags=None,
playbook_dir=constants.ANSIBLE_TRIPLEO_PLAYBOOKS,
verbosity=3,
extra_vars=None),
call(workdir=mock.ANY,
playbook='prepare-undercloud-backup.yaml',
inventory=parsed_args.inventory,
tags='bar_setup_rear',
skip_tags=None,
playbook_dir=constants.ANSIBLE_TRIPLEO_PLAYBOOKS,
verbosity=3,
extra_vars=None)]
mock_playbook.assert_has_calls(calls)
@mock.patch('os.path.isfile')
@mock.patch('os.access')
@mock.patch('tripleoclient.utils.run_ansible_playbook',
autospec=True)
def test_undercloud_backup_setup_nfs_with_extra_vars(self,
mock_playbook,
mock_access,
mock_isfile):
arglist = [
'--setup-nfs',
'--extra-vars',
'/tmp/test_vars.yaml'
]
verifylist = []
mock_isfile.return_value = True
mock_access.return_value = True
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.cmd.take_action(parsed_args)
mock_playbook.assert_called_once_with(
workdir=mock.ANY,
playbook='prepare-nfs-backup.yaml',
inventory=parsed_args.inventory,
tags='bar_setup_nfs_server',
skip_tags=None,
playbook_dir=constants.ANSIBLE_TRIPLEO_PLAYBOOKS,
verbosity=3,
extra_vars='/tmp/test_vars.yaml'
)
@mock.patch('tripleoclient.utils.run_ansible_playbook',
autospec=True)
def test_undercloud_backup_inventory(self, mock_playbook):
arglist = [
'--inventory',
self.inventory
]
verifylist = []
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.cmd.take_action(parsed_args)
mock_playbook.assert_called_once_with(
workdir=mock.ANY,
playbook='cli-undercloud-backup.yaml',
inventory=parsed_args.inventory,
tags='bar_create_recover_image',
skip_tags=None,
playbook_dir=constants.ANSIBLE_TRIPLEO_PLAYBOOKS,
verbosity=3,
extra_vars=None
)
@mock.patch('tripleoclient.utils.run_ansible_playbook',
autospec=True)
def test_undercloud_backup_no_inventory(self, mock_playbook):
arglist = [
'--inventory',
'/tmp/no_inventory.yaml'
]
verifylist = []
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.assertRaisesRegex(
RuntimeError,
'The inventory file',
self.cmd.take_action,
parsed_args)
@mock.patch('os.access')
@mock.patch('tripleoclient.utils.run_ansible_playbook',
autospec=True)
def test_undercloud_backup_no_readable_inventory(self,
mock_playbook,
mock_access):
arglist = [
'--inventory',
self.inventory
]
verifylist = []
mock_access.return_value = False
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.assertRaisesRegex(
RuntimeError,
'The inventory file',
self.cmd.take_action,
parsed_args)
| 35.119048 | 77 | 0.551469 | 1,743 | 17,700 | 5.33276 | 0.103844 | 0.049489 | 0.022485 | 0.049381 | 0.869069 | 0.857881 | 0.830446 | 0.827434 | 0.822808 | 0.812372 | 0 | 0.003869 | 0.357571 | 17,700 | 503 | 78 | 35.188867 | 0.813561 | 0.033898 | 0 | 0.791284 | 0 | 0 | 0.133708 | 0.075694 | 0 | 0 | 0 | 0 | 0.038991 | 1 | 0.041284 | false | 0 | 0.013761 | 0 | 0.057339 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
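The add/exclude tests above all pin down the same merging rule for `sources_path`: the default `/home/stack/` plus every `--add-path`, minus every `--exclude-path`, sorted and comma-joined. That rule can be sketched outside tripleoclient; `backup` and its arguments here are hypothetical stand-ins for the real `BackupUndercloud.take_action`, shown only to illustrate the mock-assertion pattern the tests use:

```python
from unittest import mock

def backup(run_playbook, add_paths, exclude_paths):
    # Hypothetical distillation of the path merging the tests exercise:
    # default source plus additions, minus exclusions, sorted, comma-joined.
    sources = sorted(set(['/home/stack/'] + add_paths) - set(exclude_paths))
    run_playbook(extra_vars={'sources_path': ','.join(sources)})

# A Mock stands in for tripleoclient.utils.run_ansible_playbook, exactly as
# @mock.patch injects one in the test class above.
runner = mock.Mock()
backup(runner, ['/tmp/foo.yaml', '/tmp/bar.yaml'], ['/tmp/foo.yaml'])
runner.assert_called_once_with(
    extra_vars={'sources_path': '/home/stack/,/tmp/bar.yaml'})
```

`assert_called_once_with` raises `AssertionError` on any mismatch in call count or arguments, which is why the tests need no explicit `assertEqual` on the playbook invocation.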
c43609c38b53e3d69093bbe82a0d5192272ac43b | 19,312 | py | Python | sdk/timeseriesinsights/azure-mgmt-timeseriesinsights/azure/mgmt/timeseriesinsights/operations/_access_policies_operations.py | anuchandy/azure-sdk-for-python | 589b9890554ebf261aa2184e8f1c6507f01a207c | [
"MIT"
] | null | null | null | sdk/timeseriesinsights/azure-mgmt-timeseriesinsights/azure/mgmt/timeseriesinsights/operations/_access_policies_operations.py | anuchandy/azure-sdk-for-python | 589b9890554ebf261aa2184e8f1c6507f01a207c | [
"MIT"
] | null | null | null | sdk/timeseriesinsights/azure-mgmt-timeseriesinsights/azure/mgmt/timeseriesinsights/operations/_access_policies_operations.py | anuchandy/azure-sdk-for-python | 589b9890554ebf261aa2184e8f1c6507f01a207c | [
"MIT"
] | null | null | null | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
import uuid
from msrest.pipeline import ClientRawResponse
from msrestazure.azure_exceptions import CloudError
from .. import models
class AccessPoliciesOperations(object):
"""AccessPoliciesOperations operations.
    You should not instantiate this class directly; create a Client instance that will create it for you and attach it as an attribute.
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
:ivar api_version: Version of the API to be used with the client request. Constant value: "2017-11-15".
"""
models = models
def __init__(self, client, config, serializer, deserializer):
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self.api_version = "2017-11-15"
self.config = config
def create_or_update(
self, resource_group_name, environment_name, access_policy_name, parameters, custom_headers=None, raw=False, **operation_config):
"""Create or update an access policy in the specified environment.
:param resource_group_name: Name of an Azure Resource group.
:type resource_group_name: str
:param environment_name: The name of the Time Series Insights
environment associated with the specified resource group.
:type environment_name: str
:param access_policy_name: Name of the access policy.
:type access_policy_name: str
:param parameters: Parameters for creating an access policy.
:type parameters:
~azure.mgmt.timeseriesinsights.models.AccessPolicyCreateOrUpdateParameters
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: AccessPolicyResource or ClientRawResponse if raw=true
:rtype: ~azure.mgmt.timeseriesinsights.models.AccessPolicyResource or
~msrest.pipeline.ClientRawResponse
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = self.create_or_update.metadata['url']
path_format_arguments = {
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str'),
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'environmentName': self._serialize.url("environment_name", environment_name, 'str'),
'accessPolicyName': self._serialize.url("access_policy_name", access_policy_name, 'str', max_length=90, min_length=1, pattern=r'^[-\w\._\(\)]+$')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct body
body_content = self._serialize.body(parameters, 'AccessPolicyCreateOrUpdateParameters')
# Construct and send request
request = self._client.put(url, query_parameters, header_parameters, body_content)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200, 201]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('AccessPolicyResource', response)
if response.status_code == 201:
deserialized = self._deserialize('AccessPolicyResource', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
create_or_update.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.TimeSeriesInsights/environments/{environmentName}/accessPolicies/{accessPolicyName}'}
def get(
self, resource_group_name, environment_name, access_policy_name, custom_headers=None, raw=False, **operation_config):
"""Gets the access policy with the specified name in the specified
environment.
:param resource_group_name: Name of an Azure Resource group.
:type resource_group_name: str
:param environment_name: The name of the Time Series Insights
environment associated with the specified resource group.
:type environment_name: str
:param access_policy_name: The name of the Time Series Insights access
policy associated with the specified environment.
:type access_policy_name: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: AccessPolicyResource or ClientRawResponse if raw=true
:rtype: ~azure.mgmt.timeseriesinsights.models.AccessPolicyResource or
~msrest.pipeline.ClientRawResponse
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = self.get.metadata['url']
path_format_arguments = {
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str'),
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'environmentName': self._serialize.url("environment_name", environment_name, 'str'),
'accessPolicyName': self._serialize.url("access_policy_name", access_policy_name, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters, header_parameters)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('AccessPolicyResource', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.TimeSeriesInsights/environments/{environmentName}/accessPolicies/{accessPolicyName}'}
def update(
self, resource_group_name, environment_name, access_policy_name, description=None, roles=None, custom_headers=None, raw=False, **operation_config):
"""Updates the access policy with the specified name in the specified
subscription, resource group, and environment.
:param resource_group_name: Name of an Azure Resource group.
:type resource_group_name: str
:param environment_name: The name of the Time Series Insights
environment associated with the specified resource group.
:type environment_name: str
:param access_policy_name: The name of the Time Series Insights access
policy associated with the specified environment.
:type access_policy_name: str
        :param description: A description of the access policy.
:type description: str
:param roles: The list of roles the principal is assigned on the
environment.
:type roles: list[str or
~azure.mgmt.timeseriesinsights.models.AccessPolicyRole]
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: AccessPolicyResource or ClientRawResponse if raw=true
:rtype: ~azure.mgmt.timeseriesinsights.models.AccessPolicyResource or
~msrest.pipeline.ClientRawResponse
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
access_policy_update_parameters = models.AccessPolicyUpdateParameters(description=description, roles=roles)
# Construct URL
url = self.update.metadata['url']
path_format_arguments = {
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str'),
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'environmentName': self._serialize.url("environment_name", environment_name, 'str'),
'accessPolicyName': self._serialize.url("access_policy_name", access_policy_name, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct body
body_content = self._serialize.body(access_policy_update_parameters, 'AccessPolicyUpdateParameters')
# Construct and send request
request = self._client.patch(url, query_parameters, header_parameters, body_content)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('AccessPolicyResource', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
update.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.TimeSeriesInsights/environments/{environmentName}/accessPolicies/{accessPolicyName}'}
def delete(
self, resource_group_name, environment_name, access_policy_name, custom_headers=None, raw=False, **operation_config):
"""Deletes the access policy with the specified name in the specified
subscription, resource group, and environment.
:param resource_group_name: Name of an Azure Resource group.
:type resource_group_name: str
:param environment_name: The name of the Time Series Insights
environment associated with the specified resource group.
:type environment_name: str
:param access_policy_name: The name of the Time Series Insights access
policy associated with the specified environment.
:type access_policy_name: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = self.delete.metadata['url']
path_format_arguments = {
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str'),
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'environmentName': self._serialize.url("environment_name", environment_name, 'str'),
'accessPolicyName': self._serialize.url("access_policy_name", access_policy_name, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.delete(url, query_parameters, header_parameters)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200, 204]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
delete.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.TimeSeriesInsights/environments/{environmentName}/accessPolicies/{accessPolicyName}'}
def list_by_environment(
self, resource_group_name, environment_name, custom_headers=None, raw=False, **operation_config):
"""Lists all the available access policies associated with the
environment.
:param resource_group_name: Name of an Azure Resource group.
:type resource_group_name: str
:param environment_name: The name of the Time Series Insights
environment associated with the specified resource group.
:type environment_name: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: AccessPolicyListResponse or ClientRawResponse if raw=true
:rtype: ~azure.mgmt.timeseriesinsights.models.AccessPolicyListResponse
or ~msrest.pipeline.ClientRawResponse
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = self.list_by_environment.metadata['url']
path_format_arguments = {
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str'),
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'environmentName': self._serialize.url("environment_name", environment_name, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters, header_parameters)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('AccessPolicyListResponse', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
list_by_environment.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.TimeSeriesInsights/environments/{environmentName}/accessPolicies'}
| 50.422977 | 213 | 0.690918 | 2,075 | 19,312 | 6.239036 | 0.098313 | 0.037154 | 0.032829 | 0.027808 | 0.862197 | 0.853005 | 0.842809 | 0.83045 | 0.827051 | 0.827051 | 0 | 0.004147 | 0.213391 | 19,312 | 382 | 214 | 50.554974 | 0.848068 | 0.326947 | 0 | 0.737705 | 0 | 0.021858 | 0.208038 | 0.107196 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032787 | false | 0 | 0.021858 | 0 | 0.114754 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
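Every operation in the class above builds its request the same way: serialize each path argument, substitute it into a URL template, then attach query and header parameters. The templating step can be illustrated in isolation (`format_url` here is a minimal stand-in for msrest's `_client.format_url`, not the real implementation, and the argument values are made up):

```python
def format_url(template, **path_args):
    # Substitute already-serialized path arguments into the URL template,
    # mirroring what self._client.format_url does with path_format_arguments.
    return template.format(**path_args)

url = format_url(
    '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}'
    '/providers/Microsoft.TimeSeriesInsights/environments/{environmentName}'
    '/accessPolicies/{accessPolicyName}',
    subscriptionId='sub-0000', resourceGroupName='rg1',
    environmentName='env1', accessPolicyName='policy(1)')
```

Note that only `create_or_update` enforces the `accessPolicyName` constraints (`max_length=90`, `pattern=r'^[-\w\._\(\)]+$'`) during serialization; `get`, `update`, and `delete` pass the name through as a plain string.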
# File: gypse/constants.py | repo: aeroxis/gypsy | license: MIT
import os
DEFAULT_LOGGER_NAME = 'gypse'
DEFAULT_CONFIG_HOME = os.path.expanduser('~/.gypse')
# Regular Expressions that will help us
REGEX_URL = "(https?:\/\/(?:www\.|(?!www))[a-zA-Z0-9][a-zA-Z0-9-]+[a-zA-Z0-9]\.[^\s]{2,}|www\.[a-zA-Z0-9][a-zA-Z0-9-]+[a-zA-Z0-9]\.[^\s]{2,}|https?:\/\/(?:www\.|(?!www))[a-zA-Z0-9]\.[^\s]{2,}|www\.[a-zA-Z0-9]\.[^\s]{2,})"
REGEX_PHONE = "[\+]?(\+\d{1,2}\s)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}"
REGEX_EMAIL = """(?:(?:\r\n)?[ \t])*(?:(?:(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t]
)+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:
\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(
?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[
\t]))*"(?:(?:\r\n)?[ \t])*))*@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\0
31]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\
](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+
(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:
(?:\r\n)?[ \t])*))*|(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z
|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)
?[ \t])*)*\<(?:(?:\r\n)?[ \t])*(?:@(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\
r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[
\t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)
?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t]
)*))*(?:,@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[
\t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*
)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t]
)+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*)
*:(?:(?:\r\n)?[ \t])*)?(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+
|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r
\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:
\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t
]))*"(?:(?:\r\n)?[ \t])*))*@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031
]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](
?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?
:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?
:\r\n)?[ \t])*))*\>(?:(?:\r\n)?[ \t])*)|(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?
:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?
[ \t]))*"(?:(?:\r\n)?[ \t])*)*:(?:(?:\r\n)?[ \t])*(?:(?:(?:[^()<>@,;:\\".\[\]
\000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|
\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>
@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"
(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*))*@(?:(?:\r\n)?[ \t]
)*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\
".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?
:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[
\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*|(?:[^()<>@,;:\\".\[\] \000-
\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(
?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*)*\<(?:(?:\r\n)?[ \t])*(?:@(?:[^()<>@,;
:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([
^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\"
.\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\
]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*(?:,@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\
[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\
r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\]
\000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]
|\\.)*\](?:(?:\r\n)?[ \t])*))*)*:(?:(?:\r\n)?[ \t])*)?(?:[^()<>@,;:\\".\[\] \0
00-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\
.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,
;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?
:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*))*@(?:(?:\r\n)?[ \t])*
(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".
\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[
^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]
]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*\>(?:(?:\r\n)?[ \t])*)(?:,\s*(
?:(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\
".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*)(?:\.(?:(
?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[
\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t
])*))*@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t
])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?
:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|
\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*|(?:
[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\
]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*)*\<(?:(?:\r\n)
?[ \t])*(?:@(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["
()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)
?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>
@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*(?:,@(?:(?:\r\n)?[
\t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,
;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t]
)*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\
".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*)*:(?:(?:\r\n)?[ \t])*)?
(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".
\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])*)(?:\.(?:(?:
\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\[
"()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:\r\n)?[ \t])
*))*@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])
+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\
.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z
|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*\>(?:(
?:\r\n)?[ \t])*))*)?;\s*)""".strip()
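A quick sanity check of the phone pattern above with the standard `re` module (the sample strings are illustrative; `REGEX_EMAIL` embeds literal newlines and is not exercised here):

```python
import re

# Same pattern as REGEX_PHONE above.
PHONE = "[\+]?(\+\d{1,2}\s)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}"

def is_phone(text):
    """True when the whole string looks like a US-style phone number."""
    return re.fullmatch(PHONE, text) is not None
```

For example, `is_phone('123-456-7890')` and `is_phone('+1 123-456-7890')` hold, while `is_phone('not a phone')` does not.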
# File: test/components/scheduler/promises/test_2_3_2.py | repo: rerobins/rhobot_framework | license: BSD-3-Clause
"""
"2.3.2: If `x` is a promise, adopt its state"
https://github.com/promises-aplus/promises-tests/blob/2.1.1/lib/tests/2.3.2.js
"""
from sleekxmpp.test import SleekTest
import unittest
import threading

dummy = {'dummy': 'dummy'}
sentinel = {'sentinel': 'sentinel'}


class Promise_2_3_2_1_TestCase(SleekTest):
    """
    2.3.2.1: If `x` is pending, `promise` must remain pending until `x` is fulfilled or rejected.
    """
    dummy = {}

    def setUp(self):
        from rhobot.components import register_core_plugins
        register_core_plugins()
        self.session = {}
        self.stream_start(plugins=['rho_bot_scheduler', ])
        self.scheduler = self.xmpp['rho_bot_scheduler']

    def tearDown(self):
        self.stream_close()

    def test_fulfilled(self):
        """
        The promise shouldn't resolve until the child promise is resolved.
        :return:
        """
        def create_child_promise(value):
            return self.scheduler.promise()

        event = threading.Event()
        start_promise = self.scheduler.promise()
        start_promise.resolved(dummy)
        wait_for_promise = start_promise.then(create_child_promise)

        #### TEST COMPONENT ####
        self.session['rejected'] = False
        self.session['fulfilled'] = False

        def waiting_fulfilled(value):
            self.session['fulfilled'] = True
            event.set()

        def waiting_rejected(value):
            self.session['rejected'] = True
            event.set()

        wait_for_promise.then(waiting_fulfilled, waiting_rejected)
        #### END TEST COMPONENT ####

        self.assertFalse(event.wait(1.0))
        self.assertFalse(self.session['rejected'])
        self.assertFalse(self.session['fulfilled'])

    def test_rejected(self):
        """
        The promise shouldn't resolve until the child promise is resolved.
        :return:
        """
        self.session['rejected'] = False
        self.session['fulfilled'] = False

        def create_child_promise(value):
            return self.scheduler.promise()

        event = threading.Event()
        start_promise = self.scheduler.promise()
        start_promise.rejected(dummy)

        def waiting_fulfilled(value):
            self.session['fulfilled'] = True
            event.set()

        def waiting_rejected(value):
            self.session['rejected'] = True
            event.set()

        wait_for_promise = start_promise.then(None, create_child_promise)
        wait_for_promise.then(waiting_fulfilled, waiting_rejected)

        self.assertFalse(event.wait(1.0))
        self.assertFalse(self.session['rejected'])
        self.assertFalse(self.session['fulfilled'])


class Promise_2_3_2_2_TestCase(SleekTest):
    """
    2.3.2.2: If/when `x` is fulfilled, fulfill `promise` with the same value.
    """

    def setUp(self):
        self.session = {}
        self.stream_start(plugins=[])
        self.xmpp.register_plugin('rho_bot_scheduler', module='rhobot.components')
        self.scheduler = self.xmpp['rho_bot_scheduler']

    def tearDown(self):
        self.stream_close()

    def test_already_fulfilled_fulfilled(self):
        def create_child_promise(value):
            promise = self.scheduler.promise()
            promise.resolved(sentinel)
            return promise

        event = threading.Event()
        start_promise = self.scheduler.promise()
        start_promise.resolved(dummy)
        wait_for_promise = start_promise.then(create_child_promise)

        #### TEST COMPONENT ####
        def waiting_fulfilled(value):
            self.assertIs(value, sentinel)
            event.set()

        wait_for_promise.then(waiting_fulfilled)
        #### END TEST COMPONENT ####

        self.assertTrue(event.wait())

    def test_already_fulfilled_rejected(self):
        def create_child_promise(value):
            promise = self.scheduler.promise()
            promise.resolved(sentinel)
            return promise

        event = threading.Event()
        start_promise = self.scheduler.promise()
        start_promise.rejected(dummy)
        wait_for_promise = start_promise.then(None, create_child_promise)

        #### TEST COMPONENT ####
        def waiting_fulfilled(value):
            self.assertIs(value, sentinel)
            event.set()

        wait_for_promise.then(waiting_fulfilled)
        #### END TEST COMPONENT ####

        self.assertTrue(event.wait())

    def test_eventually_fulfilled_fulfilled(self):
        def create_child_promise(value):
            promise = self.scheduler.promise()
            self.scheduler.schedule_task(lambda: promise.resolved(sentinel), delay=0.05)
            return promise

        event = threading.Event()
        start_promise = self.scheduler.promise()
        start_promise.resolved(dummy)
        wait_for_promise = start_promise.then(create_child_promise)

        #### TEST COMPONENT ####
        def waiting_fulfilled(value):
            self.assertIs(value, sentinel)
            event.set()

        wait_for_promise.then(waiting_fulfilled)
        #### END TEST COMPONENT ####

        self.assertTrue(event.wait())

    def test_eventually_fulfilled_rejected(self):
        def create_child_promise(value):
            promise = self.scheduler.promise()
            self.scheduler.schedule_task(lambda: promise.resolved(sentinel), delay=0.05)
            return promise

        event = threading.Event()
        start_promise = self.scheduler.promise()
        start_promise.rejected(dummy)
        wait_for_promise = start_promise.then(None, create_child_promise)

        #### TEST COMPONENT ####
        def waiting_fulfilled(value):
            self.assertIs(value, sentinel)
            event.set()

        wait_for_promise.then(waiting_fulfilled)
        #### END TEST COMPONENT ####

        self.assertTrue(event.wait())


class Promise_2_3_2_3_TestCase(SleekTest):
    """
    2.3.2.3: If/when `x` is rejected, reject `promise` with the same reason.
    """

    def setUp(self):
        self.session = {}
        self.stream_start(plugins=[])
        self.xmpp.register_plugin('rho_bot_scheduler', module='rhobot.components')
        self.scheduler = self.xmpp['rho_bot_scheduler']

    def tearDown(self):
        self.stream_close()

    def test_already_rejected_fulfilled(self):
        def create_child_promise(value):
            promise = self.scheduler.promise()
            promise.rejected(sentinel)
            return promise

        event = threading.Event()
        start_promise = self.scheduler.promise()
        start_promise.resolved(dummy)
        wait_for_promise = start_promise.then(create_child_promise)

        #### TEST COMPONENT ####
        def waiting_rejected(value):
            self.assertIs(value, sentinel)
            event.set()

        wait_for_promise.then(None, waiting_rejected)
        #### END TEST COMPONENT ####

        self.assertTrue(event.wait())

    def test_already_rejected_rejected(self):
        def create_child_promise(value):
            promise = self.scheduler.promise()
            promise.rejected(sentinel)
            return promise

        event = threading.Event()
        start_promise = self.scheduler.promise()
        start_promise.rejected(dummy)
        wait_for_promise = start_promise.then(None, create_child_promise)

        #### TEST COMPONENT ####
        def waiting_rejected(value):
            self.assertIs(value, sentinel)
            event.set()

        wait_for_promise.then(None, waiting_rejected)
        #### END TEST COMPONENT ####

        self.assertTrue(event.wait())

    def test_eventually_rejected_fulfilled(self):
        def create_child_promise(value):
            promise = self.scheduler.promise()
            self.scheduler.schedule_task(lambda: promise.rejected(sentinel), delay=0.05)
            return promise

        event = threading.Event()
        start_promise = self.scheduler.promise()
        start_promise.resolved(dummy)
        wait_for_promise = start_promise.then(create_child_promise)

        #### TEST COMPONENT ####
        def waiting_rejected(value):
            self.assertIs(value, sentinel)
            event.set()

        wait_for_promise.then(None, waiting_rejected)
        #### END TEST COMPONENT ####

        self.assertTrue(event.wait())

    def test_eventually_fulfilled_rejected(self):
        def create_child_promise(value):
            promise = self.scheduler.promise()
            self.scheduler.schedule_task(lambda: promise.rejected(sentinel), delay=0.05)
            return promise

        event = threading.Event()
        start_promise = self.scheduler.promise()
        start_promise.rejected(dummy)
        wait_for_promise = start_promise.then(None, create_child_promise)

        #### TEST COMPONENT ####
        def waiting_rejected(value):
            self.assertIs(value, sentinel)
            event.set()

        wait_for_promise.then(None, waiting_rejected)
        #### END TEST COMPONENT ####

        self.assertTrue(event.wait())
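The pending-state tests above hinge on `threading.Event.wait(timeout)` returning `False` when the event was never set, and `True` once it has been; that contract can be checked standalone:

```python
import threading

event = threading.Event()

# wait() with a timeout returns False while the event is still unset...
timed_out = event.wait(0.01)

# ...and True once the event has been set.
event.set()
set_result = event.wait(0.01)
```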
# File: database/userdatabase.py | repo: BradleyPelton/StressTestingStore | license: MIT
# UNTESTED. USERDATABASE IS A WORK IN PROGRESS
import psycopg2
import secrets
# I need to connect to a user database that stores credentials
# try:
# connection = psycopg2.connect(user=secrets.DATABASE_USER,
# password=secrets.DATABASE_USER_PASSWORD,
# host=secrets.DATABASE_HOST_IP,
# port=secrets.DATABASE_PORT,
# database="locustDB")
# cursor = connection.cursor()
# # Print PostgreSQL Connection properties
# print(connection.get_dsn_parameters(), "\n")
# # Print PostgreSQL version
# cursor.execute("SELECT * FROM user_credential;")
# # record = cursor.fetchone()
# records = cursor.fetchall()
# print(records)
# except (Exception, psycopg2.Error) as error:
# print("Error while connecting to PostgreSQL", error)
# finally:
# # closing database connection.
# if(connection):
# cursor.close()
# connection.close()
# print("PostgreSQL connection is closed")
def select_all_users():
    """Return a dictionary of all rows in the user_credential table."""
    connection = None
    cursor = None
    try:
        connection = psycopg2.connect(user=secrets.DATABASE_USER,
                                      password=secrets.DATABASE_USER_PASSWORD,
                                      host=secrets.DATABASE_HOST_IP,
                                      port=secrets.DATABASE_PORT,
                                      database="locustDB")
        cursor = connection.cursor()
        # Print PostgreSQL connection properties
        print(connection.get_dsn_parameters(), "\n")
        cursor.execute("SELECT * FROM user_credential;")
        records = cursor.fetchall()
        user_dict = {tup[1]: {'password': tup[2], 'cookies': ''} for tup in records}
        print(user_dict)
        return user_dict
    except (Exception, psycopg2.Error) as error:
        print("Error while connecting to PostgreSQL", error)
    finally:
        # Closing database connection.
        if connection:
            cursor.close()
            connection.close()
            print("PostgreSQL connection is closed")


select_all_users()
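The row-to-dict transformation inside `select_all_users` can be exercised without a database connection, using hypothetical rows shaped like the `user_credential` table (id, username, password):

```python
# Hypothetical rows in (id, username, password) order, matching the tup[1]/tup[2] indexing above.
records = [
    (1, 'alice', 'pw-a'),
    (2, 'bob', 'pw-b'),
]
user_dict = {tup[1]: {'password': tup[2], 'cookies': ''} for tup in records}
print(user_dict)
```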
# File: turbo-codes/tests/channelcoding/test_turbo_integration.py | repo: tripods-xai/isit-2022 | license: MIT
import tensorflow as tf
import numpy as np
import commpy.channelcoding as cc
from src.channelcoding.channels import AWGN
from src.channelcoding.encoder_decoders import TurboNonsystematicEncoderDecoder, TurboSystematicEncoderDecoder
from src.channelcoding.encoders import AffineConvolutionalCode, TrellisCode
from src.channelcoding.bcjr import BCJRDecoder, HazzysTurboDecoder, SystematicTurboRepeater, TurboDecoder
from src.channelcoding.interleavers import PermuteInterleaver
from src.codes import turboae_binary_exact_nonsys
from tests.channelcoding.utils import FixedNPAWGN, FixedNoiseAWGN, NoNoiseAWGN, interleaver_to_commpy, vhazzys_turbo_decode, vsystematic_turbo_encode
# def test_channel_randomness():
# sigma = 1
# seed = 0
# channel = FixedNoiseAWGN(sigma, seed)
# compare_channel = FixedNPAWGN(sigma, seed)
# msg_shape = (10, 20, 1)
# np.testing.assert_array_almost_equal(channel(tf.zeros(msg_shape)).numpy(), channel(tf.zeros(msg_shape)).numpy())
# np.testing.assert_array_almost_equal(channel(tf.zeros(msg_shape)).numpy(), compare_channel.corrupt(np.zeros(msg_shape)))
# def test_turbo_compare_with_commpy_one_iter_without_noise():
# code = AffineConvolutionalCode(tf.constant([[1, 1, 1], [1, 0, 1]]), tf.constant([0, 0])).to_rc()
# sigma = 1.
# block_len = 100
# num_iter = 1
# seed = 0
# tf.random.set_seed(seed)
# np_msg = np.random.default_rng(seed+1).integers(0, 2, size=(100, 100, 1))
# msg = tf.constant(np_msg, dtype=tf.float32)
# # My Code
# channel = NoNoiseAWGN(sigma)
# # enc_dec = TurboSystematicEncoderDecoder(code1, code2, channel, block_len, False, num_iter)
# systematic_code = code.with_systematic()
# interleaved_code = code
# use_max = False
# interleaver = PermuteInterleaver(block_len)
# # [Sys, Straight_1,..., Straight_{n-1}, Interleaved_1, ..., Interleaved_{m-1}]
# encoder = systematic_code \
# .concat(
# interleaver.and_then(interleaved_code)
# )
# non_interleaved_bcjr = BCJRDecoder(
# systematic_code.trellis,
# channel, use_max=use_max
# )
# interleaved_bcjr = BCJRDecoder(
# interleaved_code.trellis.with_systematic(),
# channel, use_max=use_max
# )
# decoder = SystematicTurboRepeater(
# num_noninterleaved_streams=non_interleaved_bcjr.num_input_channels,
# interleaver=interleaver
# ).and_then(HazzysTurboDecoder(
# decoder1=non_interleaved_bcjr,
# decoder2=interleaved_bcjr,
# interleaver=interleaver,
# num_iter=num_iter
# ))
# encoder_decoder = encoder \
# .and_then(channel) \
# .and_then(decoder)
# tf_encoded = encoder(msg)
# tf_received = channel(tf_encoded)
# tf_decode_L = decoder(tf_received)
# # Commpy
# commpy_trellis = cc.Trellis(np.array([2]), np.array([[7, 5]]), feedback=7)
# commpy_interleaver = interleaver_to_commpy(interleaver)
# commpy_out = vsystematic_turbo_encode(np_msg, commpy_trellis, commpy_trellis, commpy_interleaver)
# np_received = 2. * commpy_out - 1.
# np_received_repeated = np.concatenate([np_received[:, :, 0:2], np_received[:, commpy_interleaver.p_array, 0:1], np_received[:, :, 2:3]], axis=-1)
# commpy_L = vhazzys_turbo_decode(np_received_repeated, commpy_trellis, commpy_trellis, sigma ** 2, num_iter, commpy_interleaver)
# np.testing.assert_array_almost_equal(tf_encoded.numpy(), commpy_out)
# np.testing.assert_array_almost_equal(tf_received.numpy(), np_received)
# np.testing.assert_array_almost_equal(tf_decode_L.numpy(), commpy_L, decimal=5)
# np.testing.assert_array_almost_equal(encoder_decoder(msg).numpy(), commpy_L, decimal=5)
# complete_encoder_decoder = TurboSystematicEncoderDecoder(
# systematic_code,
# interleaved_code,
# channel,
# HazzysTurboDecoder,
# block_len,
# use_max,
# num_iter,
# interleaver=interleaver
# )
# assert complete_encoder_decoder.rate == (1, 3)
# np.testing.assert_array_almost_equal(complete_encoder_decoder(msg).numpy(), commpy_L, decimal=5)
# def test_turbo_compare_with_commpy_one_iter_with_noise():
# code = AffineConvolutionalCode(tf.constant([[1, 1, 1], [1, 0, 1]]), tf.constant([0, 0])).to_rc()
# sigma = 1.
# block_len = 100
# num_iter = 1
# seed = 0
# tf.random.set_seed(seed)
# np_msg = np.random.default_rng(seed+1).integers(0, 2, size=(100, 100, 1))
# msg = tf.constant(np_msg, dtype=tf.float32)
# # My Code
# channel = FixedNoiseAWGN(sigma, seed)
# # enc_dec = TurboSystematicEncoderDecoder(code1, code2, channel, block_len, False, num_iter)
# systematic_code = code.with_systematic()
# interleaved_code = code
# use_max = False
# interleaver = PermuteInterleaver(block_len)
# # [Sys, Straight_1,..., Straight_{n-1}, Interleaved_1, ..., Interleaved_{m-1}]
# encoder = systematic_code \
# .concat(
# interleaver.and_then(interleaved_code)
# )
# non_interleaved_bcjr = BCJRDecoder(
# systematic_code.trellis,
# channel, use_max=use_max
# )
# interleaved_bcjr = BCJRDecoder(
# interleaved_code.trellis.with_systematic(),
# channel, use_max=use_max
# )
# decoder = SystematicTurboRepeater(
# num_noninterleaved_streams=non_interleaved_bcjr.num_input_channels,
# interleaver=interleaver
# ).and_then(HazzysTurboDecoder(
# decoder1=non_interleaved_bcjr,
# decoder2=interleaved_bcjr,
# interleaver=interleaver,
# num_iter=num_iter
# ))
# encoder_decoder = encoder \
# .and_then(channel) \
# .and_then(decoder)
# tf_encoded = encoder(msg)
# tf_received = channel(tf_encoded)
# tf_decode_L = decoder(tf_received)
# # Commpy
# np_channel = FixedNPAWGN(sigma, seed)
# commpy_trellis = cc.Trellis(np.array([2]), np.array([[7, 5]]), feedback=7)
# commpy_interleaver = interleaver_to_commpy(interleaver)
# commpy_out = vsystematic_turbo_encode(np_msg, commpy_trellis, commpy_trellis, commpy_interleaver)
# np_received = np_channel.corrupt(commpy_out)
# np_received_repeated = np.concatenate([np_received[:, :, 0:2], np_received[:, commpy_interleaver.p_array, 0:1], np_received[:, :, 2:3]], axis=-1)
# commpy_L = vhazzys_turbo_decode(np_received_repeated, commpy_trellis, commpy_trellis, sigma ** 2, num_iter, commpy_interleaver)
# np.testing.assert_array_almost_equal(tf_encoded.numpy(), commpy_out)
# np.testing.assert_array_almost_equal(tf_received.numpy(), np_received)
# np.testing.assert_array_almost_equal(tf_decode_L.numpy(), commpy_L, decimal=5)
# np.testing.assert_array_almost_equal(encoder_decoder(msg).numpy(), commpy_L, decimal=5)
# complete_encoder_decoder = TurboSystematicEncoderDecoder(
# systematic_code,
# interleaved_code,
# channel,
# HazzysTurboDecoder,
# block_len,
# use_max,
# num_iter,
# interleaver=interleaver
# )
# assert complete_encoder_decoder.rate == (1, 3)
# np.testing.assert_array_almost_equal(complete_encoder_decoder(msg).numpy(), commpy_L, decimal=5)
# def test_turbo_compare_with_commpy_two_iter_without_noise():
# code = AffineConvolutionalCode(tf.constant([[1, 1, 1], [1, 0, 1]]), tf.constant([0, 0])).to_rc()
# sigma = 1.
# block_len = 100
# num_iter = 2
# seed = 0
# tf.random.set_seed(seed)
# np_msg = np.random.default_rng(seed+1).integers(0, 2, size=(100, 100, 1))
# msg = tf.constant(np_msg, dtype=tf.float32)
# # My Code
# channel = NoNoiseAWGN(sigma)
# # enc_dec = TurboSystematicEncoderDecoder(code1, code2, channel, block_len, False, num_iter)
# systematic_code = code.with_systematic()
# interleaved_code = code
# use_max = False
# interleaver = PermuteInterleaver(block_len)
# # [Sys, Straight_1,..., Straight_{n-1}, Interleaved_1, ..., Interleaved_{m-1}]
# encoder = systematic_code \
# .concat(
# interleaver.and_then(interleaved_code)
# )
# non_interleaved_bcjr = BCJRDecoder(
# systematic_code.trellis,
# channel, use_max=use_max
# )
# interleaved_bcjr = BCJRDecoder(
# interleaved_code.trellis.with_systematic(),
# channel, use_max=use_max
# )
# decoder = SystematicTurboRepeater(
# num_noninterleaved_streams=non_interleaved_bcjr.num_input_channels,
# interleaver=interleaver
# ).and_then(HazzysTurboDecoder(
# decoder1=non_interleaved_bcjr,
# decoder2=interleaved_bcjr,
# interleaver=interleaver,
# num_iter=num_iter
# ))
# encoder_decoder = encoder \
# .and_then(channel) \
# .and_then(decoder)
# tf_encoded = encoder(msg)
# tf_received = channel(tf_encoded)
# tf_decode_L = decoder(tf_received)
# # Commpy
# commpy_trellis = cc.Trellis(np.array([2]), np.array([[7, 5]]), feedback=7)
# commpy_interleaver = interleaver_to_commpy(interleaver)
# commpy_out = vsystematic_turbo_encode(np_msg, commpy_trellis, commpy_trellis, commpy_interleaver)
# np_received = 2. * commpy_out - 1.
# np_received_repeated = np.concatenate([np_received[:, :, 0:2], np_received[:, commpy_interleaver.p_array, 0:1], np_received[:, :, 2:3]], axis=-1)
# commpy_L = vhazzys_turbo_decode(np_received_repeated, commpy_trellis, commpy_trellis, sigma ** 2, num_iter, commpy_interleaver)
# np.testing.assert_array_almost_equal(tf_encoded.numpy(), commpy_out)
# np.testing.assert_array_almost_equal(tf_received.numpy(), np_received)
# np.testing.assert_array_almost_equal(tf_decode_L.numpy(), commpy_L, decimal=4)
# np.testing.assert_array_almost_equal(encoder_decoder(msg).numpy(), commpy_L, decimal=4)
# complete_encoder_decoder = TurboSystematicEncoderDecoder(
# systematic_code,
# interleaved_code,
# channel,
# HazzysTurboDecoder,
# block_len,
# use_max,
# num_iter,
# interleaver=interleaver
# )
# assert complete_encoder_decoder.rate == (1, 3)
# np.testing.assert_array_almost_equal(complete_encoder_decoder(msg).numpy(), commpy_L, decimal=4)
# def test_turbo_compare_with_commpy_two_iter_with_noise():
# code = AffineConvolutionalCode(tf.constant([[1, 1, 1], [1, 0, 1]]), tf.constant([0, 0])).to_rc()
# sigma = 1.
# block_len = 100
# num_iter = 2
# seed = 0
# tf.random.set_seed(seed)
# np_msg = np.random.default_rng(seed+1).integers(0, 2, size=(100, 100, 1))
# msg = tf.constant(np_msg, dtype=tf.float32)
# # My Code
# channel = FixedNoiseAWGN(sigma, seed)
# # enc_dec = TurboSystematicEncoderDecoder(code1, code2, channel, block_len, False, num_iter)
# systematic_code = code.with_systematic()
# interleaved_code = code
# use_max = False
# interleaver = PermuteInterleaver(block_len)
# # [Sys, Straight_1,..., Straight_{n-1}, Interleaved_1, ..., Interleaved_{m-1}]
# encoder = systematic_code \
# .concat(
# interleaver.and_then(interleaved_code)
# )
# non_interleaved_bcjr = BCJRDecoder(
# systematic_code.trellis,
# channel, use_max=use_max
# )
# interleaved_bcjr = BCJRDecoder(
# interleaved_code.trellis.with_systematic(),
# channel, use_max=use_max
# )
# decoder = SystematicTurboRepeater(
# num_noninterleaved_streams=non_interleaved_bcjr.num_input_channels,
# interleaver=interleaver
# ).and_then(HazzysTurboDecoder(
# decoder1=non_interleaved_bcjr,
# decoder2=interleaved_bcjr,
# interleaver=interleaver,
# num_iter=num_iter
# ))
# encoder_decoder = encoder \
# .and_then(channel) \
# .and_then(decoder)
# tf_encoded = encoder(msg)
# tf_received = channel(tf_encoded)
# tf_decode_L = decoder(tf_received)
# # Commpy
# np_channel = FixedNPAWGN(sigma, seed)
# commpy_trellis = cc.Trellis(np.array([2]), np.array([[7, 5]]), feedback=7)
# commpy_interleaver = interleaver_to_commpy(interleaver)
# commpy_out = vsystematic_turbo_encode(np_msg, commpy_trellis, commpy_trellis, commpy_interleaver)
# np_received = np_channel.corrupt(commpy_out)
# np_received_repeated = np.concatenate([np_received[:, :, 0:2], np_received[:, commpy_interleaver.p_array, 0:1], np_received[:, :, 2:3]], axis=-1)
# commpy_L = vhazzys_turbo_decode(np_received_repeated, commpy_trellis, commpy_trellis, sigma ** 2, num_iter, commpy_interleaver)
# np.testing.assert_array_almost_equal(tf_encoded.numpy(), commpy_out)
# np.testing.assert_array_almost_equal(tf_received.numpy(), np_received)
# np.testing.assert_array_almost_equal(tf_decode_L.numpy(), commpy_L, decimal=4)
# np.testing.assert_array_almost_equal(encoder_decoder(msg).numpy(), commpy_L, decimal=4)
# complete_encoder_decoder = TurboSystematicEncoderDecoder(
# systematic_code,
# interleaved_code,
# channel,
# HazzysTurboDecoder,
# block_len,
# use_max,
# num_iter,
# interleaver=interleaver
# )
# assert complete_encoder_decoder.rate == (1, 3)
# np.testing.assert_array_almost_equal(complete_encoder_decoder(msg).numpy(), commpy_L, decimal=4)
# def test_turbo_compare_with_commpy_six_iter_without_noise():
#     code = AffineConvolutionalCode(tf.constant([[1, 1, 1], [1, 0, 1]]), tf.constant([0, 0])).to_rc()
#     sigma = 1.
#     block_len = 100
#     num_iter = 6
#     seed = 0
#     tf.random.set_seed(seed)
#     np_msg = np.random.default_rng(seed+1).integers(0, 2, size=(100, 100, 1))
#     msg = tf.constant(np_msg, dtype=tf.float32)
#     # My Code
#     channel = NoNoiseAWGN(sigma)
#     # enc_dec = TurboSystematicEncoderDecoder(code1, code2, channel, block_len, False, num_iter)
#     systematic_code = code.with_systematic()
#     interleaved_code = code
#     use_max = False
#     interleaver = PermuteInterleaver(block_len)
#     # [Sys, Straight_1,..., Straight_{n-1}, Interleaved_1, ..., Interleaved_{m-1}]
#     encoder = systematic_code \
#         .concat(
#             interleaver.and_then(interleaved_code)
#         )
#     non_interleaved_bcjr = BCJRDecoder(
#         systematic_code.trellis,
#         channel, use_max=use_max
#     )
#     interleaved_bcjr = BCJRDecoder(
#         interleaved_code.trellis.with_systematic(),
#         channel, use_max=use_max
#     )
#     decoder = SystematicTurboRepeater(
#         num_noninterleaved_streams=non_interleaved_bcjr.num_input_channels,
#         interleaver=interleaver
#     ).and_then(HazzysTurboDecoder(
#         decoder1=non_interleaved_bcjr,
#         decoder2=interleaved_bcjr,
#         interleaver=interleaver,
#         num_iter=num_iter
#     ))
#     encoder_decoder = encoder \
#         .and_then(channel) \
#         .and_then(decoder)
#     tf_encoded = encoder(msg)
#     tf_received = channel(tf_encoded)
#     tf_decode_L = decoder(tf_received)
#     # Commpy
#     commpy_trellis = cc.Trellis(np.array([2]), np.array([[7, 5]]), feedback=7)
#     commpy_interleaver = interleaver_to_commpy(interleaver)
#     commpy_out = vsystematic_turbo_encode(np_msg, commpy_trellis, commpy_trellis, commpy_interleaver)
#     np_received = 2. * commpy_out - 1.
#     np_received_repeated = np.concatenate([np_received[:, :, 0:2], np_received[:, commpy_interleaver.p_array, 0:1], np_received[:, :, 2:3]], axis=-1)
#     commpy_L = vhazzys_turbo_decode(np_received_repeated, commpy_trellis, commpy_trellis, sigma ** 2, num_iter, commpy_interleaver)
#     np.testing.assert_array_almost_equal(tf_encoded.numpy(), commpy_out)
#     np.testing.assert_array_almost_equal(tf_received.numpy(), np_received)
#     np.testing.assert_array_almost_equal(tf_decode_L.numpy(), commpy_L, decimal=3)
#     np.testing.assert_array_almost_equal(encoder_decoder(msg).numpy(), commpy_L, decimal=3)
#     complete_encoder_decoder = TurboSystematicEncoderDecoder(
#         systematic_code,
#         interleaved_code,
#         channel,
#         HazzysTurboDecoder,
#         block_len,
#         use_max,
#         num_iter,
#         interleaver=interleaver
#     )
#     assert complete_encoder_decoder.rate == (1, 3)
#     np.testing.assert_array_almost_equal(complete_encoder_decoder(msg).numpy(), commpy_L, decimal=3)
# def test_turbo_compare_with_commpy_six_iter_with_noise():
#     code = AffineConvolutionalCode(tf.constant([[1, 1, 1], [1, 0, 1]]), tf.constant([0, 0])).to_rc()
#     sigma = 1.
#     block_len = 100
#     num_iter = 6
#     seed = 3
#     tf.random.set_seed(seed)
#     np_msg = np.random.default_rng(seed+1).integers(0, 2, size=(100, 100, 1))
#     msg = tf.constant(np_msg, dtype=tf.float32)
#     # My Code
#     channel = FixedNoiseAWGN(sigma, seed)
#     # enc_dec = TurboSystematicEncoderDecoder(code1, code2, channel, block_len, False, num_iter)
#     systematic_code = code.with_systematic()
#     interleaved_code = code
#     use_max = False
#     interleaver = PermuteInterleaver(block_len)
#     # [Sys, Straight_1,..., Straight_{n-1}, Interleaved_1, ..., Interleaved_{m-1}]
#     encoder = systematic_code \
#         .concat(
#             interleaver.and_then(interleaved_code)
#         )
#     non_interleaved_bcjr = BCJRDecoder(
#         systematic_code.trellis,
#         channel, use_max=use_max
#     )
#     interleaved_bcjr = BCJRDecoder(
#         interleaved_code.trellis.with_systematic(),
#         channel, use_max=use_max
#     )
#     decoder = SystematicTurboRepeater(
#         num_noninterleaved_streams=non_interleaved_bcjr.num_input_channels,
#         interleaver=interleaver
#     ).and_then(HazzysTurboDecoder(
#         decoder1=non_interleaved_bcjr,
#         decoder2=interleaved_bcjr,
#         interleaver=interleaver,
#         num_iter=num_iter
#     ))
#     encoder_decoder = encoder \
#         .and_then(channel) \
#         .and_then(decoder)
#     tf_encoded = encoder(msg)
#     tf_received = channel(tf_encoded)
#     tf_decode_L = decoder(tf_received)
#     # Commpy
#     np_channel = FixedNPAWGN(sigma, seed)
#     commpy_trellis = cc.Trellis(np.array([2]), np.array([[7, 5]]), feedback=7)
#     commpy_interleaver = interleaver_to_commpy(interleaver)
#     commpy_out = vsystematic_turbo_encode(np_msg, commpy_trellis, commpy_trellis, commpy_interleaver)
#     np_received = np_channel.corrupt(commpy_out)
#     np_received_repeated = np.concatenate([np_received[:, :, 0:2], np_received[:, commpy_interleaver.p_array, 0:1], np_received[:, :, 2:3]], axis=-1)
#     commpy_L = vhazzys_turbo_decode(np_received_repeated, commpy_trellis, commpy_trellis, sigma ** 2, num_iter, commpy_interleaver)
#     np.testing.assert_array_almost_equal(tf_encoded.numpy(), commpy_out)
#     np.testing.assert_array_almost_equal(tf_received.numpy(), np_received)
#     np.testing.assert_array_almost_equal(tf_decode_L.numpy(), commpy_L, decimal=3)
#     np.testing.assert_array_almost_equal(encoder_decoder(msg).numpy(), commpy_L, decimal=3)
#     complete_encoder_decoder = TurboSystematicEncoderDecoder(
#         systematic_code,
#         interleaved_code,
#         channel,
#         HazzysTurboDecoder,
#         block_len,
#         use_max,
#         num_iter,
#         interleaver=interleaver
#     )
#     assert complete_encoder_decoder.rate == (1, 3)
#     np.testing.assert_array_almost_equal(complete_encoder_decoder(msg).numpy(), commpy_L, decimal=2)
# def test_turboae_exact_nonsys_trellis_compare_one_iter():
#     block_len = 100
#     num_iter = 1
#     encoder_spec = turboae_binary_exact_nonsys()
#     tf_interleaver = PermuteInterleaver(block_len)
#     channel = FixedNoiseAWGN(1., 0)
#     conv_enc_dec = TurboNonsystematicEncoderDecoder(
#         encoder_spec.noninterleaved_code,
#         encoder_spec.interleaved_code,
#         channel,
#         TurboDecoder,
#         block_len,
#         False,
#         num_iter,
#         interleaver=tf_interleaver
#     )
#     trellis_enc_dec = TurboNonsystematicEncoderDecoder(
#         TrellisCode(encoder_spec.noninterleaved_code.trellis),
#         TrellisCode(encoder_spec.interleaved_code.trellis),
#         channel,
#         TurboDecoder,
#         block_len,
#         False,
#         num_iter,
#         interleaver=tf_interleaver
#     )
#     msg = tf.random.uniform((100, block_len, 1), dtype=tf.int32, maxval=2)
#     assert tf.reduce_all(conv_enc_dec(msg) == trellis_enc_dec(msg))
# def test_turboae_exact_nonsys_trellis_compare_two_iter():
#     block_len = 100
#     num_iter = 2
#     encoder_spec = turboae_binary_exact_nonsys()
#     tf_interleaver = PermuteInterleaver(block_len)
#     channel = FixedNoiseAWGN(1., 0)
#     conv_enc_dec = TurboNonsystematicEncoderDecoder(
#         encoder_spec.noninterleaved_code,
#         encoder_spec.interleaved_code,
#         channel,
#         TurboDecoder,
#         block_len,
#         False,
#         num_iter,
#         interleaver=tf_interleaver
#     )
#     trellis_enc_dec = TurboNonsystematicEncoderDecoder(
#         TrellisCode(encoder_spec.noninterleaved_code.trellis),
#         TrellisCode(encoder_spec.interleaved_code.trellis),
#         channel,
#         TurboDecoder,
#         block_len,
#         False,
#         num_iter,
#         interleaver=tf_interleaver
#     )
#     msg = tf.random.uniform((100, block_len, 1), dtype=tf.int32, maxval=2)
#     assert tf.reduce_all(conv_enc_dec(msg) == trellis_enc_dec(msg))
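# The PermuteInterleaver used throughout these tests is, conceptually, a fixed
# permutation of block positions, with deinterleaving as the inverse
# permutation. The helpers below are a hedged standalone numpy sketch of that
# idea (illustrative only; the real class operates on TensorFlow tensors).
import numpy as np  # also imported at module top; repeated so the sketch is self-contained


def _interleave_sketch(x, p_array):
    """Reorder the positions of x by the permutation p_array."""
    return x[p_array]


def _deinterleave_sketch(y, p_array):
    """Invert _interleave_sketch by applying the inverse permutation."""
    return y[np.argsort(p_array)]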
@tf.function
def run_enc_dec(enc_dec, msg):
    return enc_dec(msg)
def test_turboae_exact_nonsys_trellis_compare_six_iter():
    block_len = 100
    num_iter = 6
    encoder_spec = turboae_binary_exact_nonsys()
    tf_interleaver = PermuteInterleaver(block_len)
    channel = FixedNoiseAWGN(1., 0, (1000, block_len, 3))
    conv_enc_dec = TurboNonsystematicEncoderDecoder(
        encoder_spec.noninterleaved_code,
        encoder_spec.interleaved_code,
        channel,
        TurboDecoder,
        block_len,
        False,
        num_iter,
        interleaver=tf_interleaver
    )
    trellis_enc_dec = TurboNonsystematicEncoderDecoder(
        TrellisCode(encoder_spec.noninterleaved_code.trellis),
        TrellisCode(encoder_spec.interleaved_code.trellis),
        channel,
        TurboDecoder,
        block_len,
        False,
        num_iter,
        interleaver=tf_interleaver
    )
    msg = tf.random.uniform((1000, block_len, 1), dtype=tf.int32, maxval=2)
    conv_conf = run_enc_dec(conv_enc_dec, msg)
    trellis_conf = run_enc_dec(trellis_enc_dec, msg)
    assert tf.reduce_all(conv_conf == trellis_conf)
    conv_decoded = tf.cast(conv_conf > 0, dtype=tf.int32)
    conv_num_bit_errors = tf.reduce_sum(tf.cast(tf.not_equal(msg, conv_decoded), tf.float32), axis=1)[:, 0]
    trellis_decoded = tf.cast(trellis_conf > 0, dtype=tf.int32)
    trellis_num_bit_errors = tf.reduce_sum(tf.cast(tf.not_equal(msg, trellis_decoded), tf.float32), axis=1)[:, 0]
    # Normalize by the full batch of 1000 blocks.
    print(f"Conv BER: {np.sum(conv_num_bit_errors.numpy()) / (1000 * block_len)}")
    print(f"Trellis BER: {np.sum(trellis_num_bit_errors.numpy()) / (1000 * block_len)}")
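# The hard-decision / BER bookkeeping above can be cross-checked against a
# plain numpy version. Hedged sketch only: `llr` and `bits` are hypothetical
# stand-ins for the decoder confidences and the transmitted message.
import numpy as np  # also imported at module top; repeated so the sketch is self-contained


def _numpy_ber_sketch(llr, bits):
    """Hard-decide LLRs (positive -> bit 1) and return the bit error rate."""
    decoded = (llr > 0).astype(int)
    return float(np.mean(decoded != bits))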
# --- influxdb_client/service/dashboards_service.py (MIT license) ---
] | null | null | null | # coding: utf-8
"""
Influx API Service.
No description provided (generated by Openapi Generator https://github.com/openapitools/openapi-generator) # noqa: E501
OpenAPI spec version: 0.1.0
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from influxdb_client.api_client import ApiClient
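# The generated methods in this module all share three mechanical patterns:
# reject unexpected kwargs, substitute path parameters into the URL template,
# and expand list-valued query parameters with the 'multi' collection format.
# The helpers below are hedged standalone sketches of those patterns; they are
# illustrative only and are not used by ApiClient itself.


def _check_kwargs_sketch(allowed, kwargs):
    """Raise TypeError for any keyword argument not in `allowed`."""
    for key in kwargs:
        if key not in allowed:
            raise TypeError("Got an unexpected keyword argument '%s'" % key)


def _fill_path_sketch(template, path_params):
    """Substitute '{name}' placeholders in a URL template."""
    for name, value in path_params.items():
        template = template.replace('{%s}' % name, str(value))
    return template


def _multi_query_sketch(name, values):
    """Serialize a list-valued query parameter as repeated `name=` pairs."""
    return '&'.join('%s=%s' % (name, v) for v in values)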
class DashboardsService(object):
    """NOTE: This class is auto generated by OpenAPI Generator.

    Ref: https://openapi-generator.tech

    Do not edit the class manually.
    """

    def __init__(self, api_client=None):  # noqa: E501,D401,D403
        """DashboardsService - an operation defined in OpenAPI."""
        if api_client is None:
            api_client = ApiClient()
        self.api_client = api_client
    def delete_dashboards_id(self, dashboard_id, **kwargs):  # noqa: E501,D401,D403
        """Delete a dashboard.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_dashboards_id(dashboard_id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str dashboard_id: The ID of the dashboard to delete. (required)
        :param str zap_trace_span: OpenTracing span context
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """  # noqa: E501
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.delete_dashboards_id_with_http_info(dashboard_id, **kwargs)  # noqa: E501
        else:
            (data) = self.delete_dashboards_id_with_http_info(dashboard_id, **kwargs)  # noqa: E501
            return data

    def delete_dashboards_id_with_http_info(self, dashboard_id, **kwargs):  # noqa: E501,D401,D403
        """Delete a dashboard.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_dashboards_id_with_http_info(dashboard_id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str dashboard_id: The ID of the dashboard to delete. (required)
        :param str zap_trace_span: OpenTracing span context
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """  # noqa: E501
        local_var_params = locals()

        all_params = ['dashboard_id', 'zap_trace_span']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')
        all_params.append('urlopen_kw')

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_dashboards_id" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'dashboard_id' is set
        if ('dashboard_id' not in local_var_params or
                local_var_params['dashboard_id'] is None):
            raise ValueError("Missing the required parameter `dashboard_id` when calling `delete_dashboards_id`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'dashboard_id' in local_var_params:
            path_params['dashboardID'] = local_var_params['dashboard_id']  # noqa: E501

        query_params = []

        header_params = {}
        if 'zap_trace_span' in local_var_params:
            header_params['Zap-Trace-Span'] = local_var_params['zap_trace_span']  # noqa: E501

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        # urlopen optional setting
        urlopen_kw = None
        if 'urlopen_kw' in kwargs:
            urlopen_kw = kwargs['urlopen_kw']

        return self.api_client.call_api(
            '/api/v2/dashboards/{dashboardID}', 'DELETE',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type=None,  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats,
            urlopen_kw=urlopen_kw)
    def delete_dashboards_id_cells_id(self, dashboard_id, cell_id, **kwargs):  # noqa: E501,D401,D403
        """Delete a dashboard cell.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_dashboards_id_cells_id(dashboard_id, cell_id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str dashboard_id: The ID of the dashboard to delete. (required)
        :param str cell_id: The ID of the cell to delete. (required)
        :param str zap_trace_span: OpenTracing span context
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """  # noqa: E501
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.delete_dashboards_id_cells_id_with_http_info(dashboard_id, cell_id, **kwargs)  # noqa: E501
        else:
            (data) = self.delete_dashboards_id_cells_id_with_http_info(dashboard_id, cell_id, **kwargs)  # noqa: E501
            return data

    def delete_dashboards_id_cells_id_with_http_info(self, dashboard_id, cell_id, **kwargs):  # noqa: E501,D401,D403
        """Delete a dashboard cell.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_dashboards_id_cells_id_with_http_info(dashboard_id, cell_id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str dashboard_id: The ID of the dashboard to delete. (required)
        :param str cell_id: The ID of the cell to delete. (required)
        :param str zap_trace_span: OpenTracing span context
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """  # noqa: E501
        local_var_params = locals()

        all_params = ['dashboard_id', 'cell_id', 'zap_trace_span']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')
        all_params.append('urlopen_kw')

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_dashboards_id_cells_id" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'dashboard_id' is set
        if ('dashboard_id' not in local_var_params or
                local_var_params['dashboard_id'] is None):
            raise ValueError("Missing the required parameter `dashboard_id` when calling `delete_dashboards_id_cells_id`")  # noqa: E501
        # verify the required parameter 'cell_id' is set
        if ('cell_id' not in local_var_params or
                local_var_params['cell_id'] is None):
            raise ValueError("Missing the required parameter `cell_id` when calling `delete_dashboards_id_cells_id`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'dashboard_id' in local_var_params:
            path_params['dashboardID'] = local_var_params['dashboard_id']  # noqa: E501
        if 'cell_id' in local_var_params:
            path_params['cellID'] = local_var_params['cell_id']  # noqa: E501

        query_params = []

        header_params = {}
        if 'zap_trace_span' in local_var_params:
            header_params['Zap-Trace-Span'] = local_var_params['zap_trace_span']  # noqa: E501

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        # urlopen optional setting
        urlopen_kw = None
        if 'urlopen_kw' in kwargs:
            urlopen_kw = kwargs['urlopen_kw']

        return self.api_client.call_api(
            '/api/v2/dashboards/{dashboardID}/cells/{cellID}', 'DELETE',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type=None,  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats,
            urlopen_kw=urlopen_kw)
    def delete_dashboards_id_labels_id(self, dashboard_id, label_id, **kwargs):  # noqa: E501,D401,D403
        """Delete a label from a dashboard.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_dashboards_id_labels_id(dashboard_id, label_id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str dashboard_id: The dashboard ID. (required)
        :param str label_id: The ID of the label to delete. (required)
        :param str zap_trace_span: OpenTracing span context
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """  # noqa: E501
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.delete_dashboards_id_labels_id_with_http_info(dashboard_id, label_id, **kwargs)  # noqa: E501
        else:
            (data) = self.delete_dashboards_id_labels_id_with_http_info(dashboard_id, label_id, **kwargs)  # noqa: E501
            return data

    def delete_dashboards_id_labels_id_with_http_info(self, dashboard_id, label_id, **kwargs):  # noqa: E501,D401,D403
        """Delete a label from a dashboard.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_dashboards_id_labels_id_with_http_info(dashboard_id, label_id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str dashboard_id: The dashboard ID. (required)
        :param str label_id: The ID of the label to delete. (required)
        :param str zap_trace_span: OpenTracing span context
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """  # noqa: E501
        local_var_params = locals()

        all_params = ['dashboard_id', 'label_id', 'zap_trace_span']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')
        all_params.append('urlopen_kw')

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_dashboards_id_labels_id" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'dashboard_id' is set
        if ('dashboard_id' not in local_var_params or
                local_var_params['dashboard_id'] is None):
            raise ValueError("Missing the required parameter `dashboard_id` when calling `delete_dashboards_id_labels_id`")  # noqa: E501
        # verify the required parameter 'label_id' is set
        if ('label_id' not in local_var_params or
                local_var_params['label_id'] is None):
            raise ValueError("Missing the required parameter `label_id` when calling `delete_dashboards_id_labels_id`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'dashboard_id' in local_var_params:
            path_params['dashboardID'] = local_var_params['dashboard_id']  # noqa: E501
        if 'label_id' in local_var_params:
            path_params['labelID'] = local_var_params['label_id']  # noqa: E501

        query_params = []

        header_params = {}
        if 'zap_trace_span' in local_var_params:
            header_params['Zap-Trace-Span'] = local_var_params['zap_trace_span']  # noqa: E501

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        # urlopen optional setting
        urlopen_kw = None
        if 'urlopen_kw' in kwargs:
            urlopen_kw = kwargs['urlopen_kw']

        return self.api_client.call_api(
            '/api/v2/dashboards/{dashboardID}/labels/{labelID}', 'DELETE',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type=None,  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats,
            urlopen_kw=urlopen_kw)
    def delete_dashboards_id_members_id(self, user_id, dashboard_id, **kwargs):  # noqa: E501,D401,D403
        """Remove a member from a dashboard.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_dashboards_id_members_id(user_id, dashboard_id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str user_id: The ID of the member to remove. (required)
        :param str dashboard_id: The dashboard ID. (required)
        :param str zap_trace_span: OpenTracing span context
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """  # noqa: E501
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.delete_dashboards_id_members_id_with_http_info(user_id, dashboard_id, **kwargs)  # noqa: E501
        else:
            (data) = self.delete_dashboards_id_members_id_with_http_info(user_id, dashboard_id, **kwargs)  # noqa: E501
            return data

    def delete_dashboards_id_members_id_with_http_info(self, user_id, dashboard_id, **kwargs):  # noqa: E501,D401,D403
        """Remove a member from a dashboard.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_dashboards_id_members_id_with_http_info(user_id, dashboard_id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str user_id: The ID of the member to remove. (required)
        :param str dashboard_id: The dashboard ID. (required)
        :param str zap_trace_span: OpenTracing span context
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """  # noqa: E501
        local_var_params = locals()

        all_params = ['user_id', 'dashboard_id', 'zap_trace_span']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')
        all_params.append('urlopen_kw')

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_dashboards_id_members_id" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'user_id' is set
        if ('user_id' not in local_var_params or
                local_var_params['user_id'] is None):
            raise ValueError("Missing the required parameter `user_id` when calling `delete_dashboards_id_members_id`")  # noqa: E501
        # verify the required parameter 'dashboard_id' is set
        if ('dashboard_id' not in local_var_params or
                local_var_params['dashboard_id'] is None):
            raise ValueError("Missing the required parameter `dashboard_id` when calling `delete_dashboards_id_members_id`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'user_id' in local_var_params:
            path_params['userID'] = local_var_params['user_id']  # noqa: E501
        if 'dashboard_id' in local_var_params:
            path_params['dashboardID'] = local_var_params['dashboard_id']  # noqa: E501

        query_params = []

        header_params = {}
        if 'zap_trace_span' in local_var_params:
            header_params['Zap-Trace-Span'] = local_var_params['zap_trace_span']  # noqa: E501

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        # urlopen optional setting
        urlopen_kw = None
        if 'urlopen_kw' in kwargs:
            urlopen_kw = kwargs['urlopen_kw']

        return self.api_client.call_api(
            '/api/v2/dashboards/{dashboardID}/members/{userID}', 'DELETE',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type=None,  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats,
            urlopen_kw=urlopen_kw)
    def delete_dashboards_id_owners_id(self, user_id, dashboard_id, **kwargs):  # noqa: E501,D401,D403
        """Remove an owner from a dashboard.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_dashboards_id_owners_id(user_id, dashboard_id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str user_id: The ID of the owner to remove. (required)
        :param str dashboard_id: The dashboard ID. (required)
        :param str zap_trace_span: OpenTracing span context
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """  # noqa: E501
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.delete_dashboards_id_owners_id_with_http_info(user_id, dashboard_id, **kwargs)  # noqa: E501
        else:
            (data) = self.delete_dashboards_id_owners_id_with_http_info(user_id, dashboard_id, **kwargs)  # noqa: E501
            return data

    def delete_dashboards_id_owners_id_with_http_info(self, user_id, dashboard_id, **kwargs):  # noqa: E501,D401,D403
        """Remove an owner from a dashboard.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.delete_dashboards_id_owners_id_with_http_info(user_id, dashboard_id, async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str user_id: The ID of the owner to remove. (required)
        :param str dashboard_id: The dashboard ID. (required)
        :param str zap_trace_span: OpenTracing span context
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """  # noqa: E501
        local_var_params = locals()

        all_params = ['user_id', 'dashboard_id', 'zap_trace_span']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')
        all_params.append('urlopen_kw')

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method delete_dashboards_id_owners_id" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'user_id' is set
        if ('user_id' not in local_var_params or
                local_var_params['user_id'] is None):
            raise ValueError("Missing the required parameter `user_id` when calling `delete_dashboards_id_owners_id`")  # noqa: E501
        # verify the required parameter 'dashboard_id' is set
        if ('dashboard_id' not in local_var_params or
                local_var_params['dashboard_id'] is None):
            raise ValueError("Missing the required parameter `dashboard_id` when calling `delete_dashboards_id_owners_id`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'user_id' in local_var_params:
            path_params['userID'] = local_var_params['user_id']  # noqa: E501
        if 'dashboard_id' in local_var_params:
            path_params['dashboardID'] = local_var_params['dashboard_id']  # noqa: E501

        query_params = []

        header_params = {}
        if 'zap_trace_span' in local_var_params:
            header_params['Zap-Trace-Span'] = local_var_params['zap_trace_span']  # noqa: E501

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        # urlopen optional setting
        urlopen_kw = None
        if 'urlopen_kw' in kwargs:
            urlopen_kw = kwargs['urlopen_kw']

        return self.api_client.call_api(
            '/api/v2/dashboards/{dashboardID}/owners/{userID}', 'DELETE',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type=None,  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats,
            urlopen_kw=urlopen_kw)
    def get_dashboards(self, **kwargs):  # noqa: E501,D401,D403
        """Get all dashboards.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_dashboards(async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str zap_trace_span: OpenTracing span context
        :param str owner: The owner ID.
        :param str sort_by: The column to sort by.
        :param list[str] id: List of dashboard IDs to return. If both `id` and `owner` are specified, only `id` is used.
        :param str org_id: The organization ID.
        :param str org: The organization name.
        :return: Dashboards
                 If the method is called asynchronously,
                 returns the request thread.
        """  # noqa: E501
        kwargs['_return_http_data_only'] = True
        if kwargs.get('async_req'):
            return self.get_dashboards_with_http_info(**kwargs)  # noqa: E501
        else:
            (data) = self.get_dashboards_with_http_info(**kwargs)  # noqa: E501
            return data

    def get_dashboards_with_http_info(self, **kwargs):  # noqa: E501,D401,D403
        """Get all dashboards.

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_dashboards_with_http_info(async_req=True)
        >>> result = thread.get()

        :param async_req bool
        :param str zap_trace_span: OpenTracing span context
        :param str owner: The owner ID.
        :param str sort_by: The column to sort by.
        :param list[str] id: List of dashboard IDs to return. If both `id` and `owner` are specified, only `id` is used.
        :param str org_id: The organization ID.
        :param str org: The organization name.
        :return: Dashboards
                 If the method is called asynchronously,
                 returns the request thread.
        """  # noqa: E501
        local_var_params = locals()

        all_params = ['zap_trace_span', 'owner', 'sort_by', 'id', 'org_id', 'org']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')
        all_params.append('urlopen_kw')

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_dashboards" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']

        collection_formats = {}

        path_params = {}

        query_params = []
        if 'owner' in local_var_params:
            query_params.append(('owner', local_var_params['owner']))  # noqa: E501
        if 'sort_by' in local_var_params:
            query_params.append(('sortBy', local_var_params['sort_by']))  # noqa: E501
        if 'id' in local_var_params:
            query_params.append(('id', local_var_params['id']))  # noqa: E501
            collection_formats['id'] = 'multi'  # noqa: E501
        if 'org_id' in local_var_params:
            query_params.append(('orgID', local_var_params['org_id']))  # noqa: E501
        if 'org' in local_var_params:
            query_params.append(('org', local_var_params['org']))  # noqa: E501

        header_params = {}
        if 'zap_trace_span' in local_var_params:
            header_params['Zap-Trace-Span'] = local_var_params['zap_trace_span']  # noqa: E501

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = []  # noqa: E501

        # urlopen optional setting
        urlopen_kw = None
        if 'urlopen_kw' in kwargs:
            urlopen_kw = kwargs['urlopen_kw']

        return self.api_client.call_api(
            '/api/v2/dashboards', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='Dashboards',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats,
            urlopen_kw=urlopen_kw)
def get_dashboards_id(self, dashboard_id, **kwargs): # noqa: E501,D401,D403
"""Get a Dashboard.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_dashboards_id(dashboard_id, async_req=True)
>>> result = thread.get()
:param bool async_req: Whether to execute the request asynchronously.
:param str dashboard_id: The ID of the dashboard to retrieve. (required)
:param str zap_trace_span: OpenTracing span context
:param str include: Includes the cell view properties in the response if set to `properties`.
:return: Dashboard
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_dashboards_id_with_http_info(dashboard_id, **kwargs) # noqa: E501
else:
(data) = self.get_dashboards_id_with_http_info(dashboard_id, **kwargs) # noqa: E501
return data
def get_dashboards_id_with_http_info(self, dashboard_id, **kwargs): # noqa: E501,D401,D403
"""Get a Dashboard.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_dashboards_id_with_http_info(dashboard_id, async_req=True)
>>> result = thread.get()
:param bool async_req: Whether to execute the request asynchronously.
:param str dashboard_id: The ID of the dashboard to retrieve. (required)
:param str zap_trace_span: OpenTracing span context
:param str include: Includes the cell view properties in the response if set to `properties`.
:return: Dashboard
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
local_var_params = locals()
all_params = ['dashboard_id', 'zap_trace_span', 'include'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
all_params.append('urlopen_kw')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_dashboards_id" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'dashboard_id' is set
if ('dashboard_id' not in local_var_params or
local_var_params['dashboard_id'] is None):
raise ValueError("Missing the required parameter `dashboard_id` when calling `get_dashboards_id`") # noqa: E501
collection_formats = {}
path_params = {}
if 'dashboard_id' in local_var_params:
path_params['dashboardID'] = local_var_params['dashboard_id'] # noqa: E501
query_params = []
if 'include' in local_var_params:
query_params.append(('include', local_var_params['include'])) # noqa: E501
header_params = {}
if 'zap_trace_span' in local_var_params:
header_params['Zap-Trace-Span'] = local_var_params['zap_trace_span'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
# urlopen optional setting
urlopen_kw = None
if 'urlopen_kw' in kwargs:
urlopen_kw = kwargs['urlopen_kw']
return self.api_client.call_api(
'/api/v2/dashboards/{dashboardID}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Dashboard', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
urlopen_kw=urlopen_kw)
def get_dashboards_id_cells_id_view(self, dashboard_id, cell_id, **kwargs): # noqa: E501,D401,D403
"""Retrieve the view for a cell.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_dashboards_id_cells_id_view(dashboard_id, cell_id, async_req=True)
>>> result = thread.get()
:param bool async_req: Whether to execute the request asynchronously.
:param str dashboard_id: The dashboard ID. (required)
:param str cell_id: The cell ID. (required)
:param str zap_trace_span: OpenTracing span context
:return: View
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_dashboards_id_cells_id_view_with_http_info(dashboard_id, cell_id, **kwargs) # noqa: E501
else:
(data) = self.get_dashboards_id_cells_id_view_with_http_info(dashboard_id, cell_id, **kwargs) # noqa: E501
return data
def get_dashboards_id_cells_id_view_with_http_info(self, dashboard_id, cell_id, **kwargs): # noqa: E501,D401,D403
"""Retrieve the view for a cell.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_dashboards_id_cells_id_view_with_http_info(dashboard_id, cell_id, async_req=True)
>>> result = thread.get()
:param bool async_req: Whether to execute the request asynchronously.
:param str dashboard_id: The dashboard ID. (required)
:param str cell_id: The cell ID. (required)
:param str zap_trace_span: OpenTracing span context
:return: View
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
local_var_params = locals()
all_params = ['dashboard_id', 'cell_id', 'zap_trace_span'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
all_params.append('urlopen_kw')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_dashboards_id_cells_id_view" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'dashboard_id' is set
if ('dashboard_id' not in local_var_params or
local_var_params['dashboard_id'] is None):
raise ValueError("Missing the required parameter `dashboard_id` when calling `get_dashboards_id_cells_id_view`") # noqa: E501
# verify the required parameter 'cell_id' is set
if ('cell_id' not in local_var_params or
local_var_params['cell_id'] is None):
raise ValueError("Missing the required parameter `cell_id` when calling `get_dashboards_id_cells_id_view`") # noqa: E501
collection_formats = {}
path_params = {}
if 'dashboard_id' in local_var_params:
path_params['dashboardID'] = local_var_params['dashboard_id'] # noqa: E501
if 'cell_id' in local_var_params:
path_params['cellID'] = local_var_params['cell_id'] # noqa: E501
query_params = []
header_params = {}
if 'zap_trace_span' in local_var_params:
header_params['Zap-Trace-Span'] = local_var_params['zap_trace_span'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
# urlopen optional setting
urlopen_kw = None
if 'urlopen_kw' in kwargs:
urlopen_kw = kwargs['urlopen_kw']
return self.api_client.call_api(
'/api/v2/dashboards/{dashboardID}/cells/{cellID}/view', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='View', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
urlopen_kw=urlopen_kw)
def get_dashboards_id_labels(self, dashboard_id, **kwargs): # noqa: E501,D401,D403
"""list all labels for a dashboard.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_dashboards_id_labels(dashboard_id, async_req=True)
>>> result = thread.get()
:param bool async_req: Whether to execute the request asynchronously.
:param str dashboard_id: The dashboard ID. (required)
:param str zap_trace_span: OpenTracing span context
:return: LabelsResponse
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_dashboards_id_labels_with_http_info(dashboard_id, **kwargs) # noqa: E501
else:
(data) = self.get_dashboards_id_labels_with_http_info(dashboard_id, **kwargs) # noqa: E501
return data
def get_dashboards_id_labels_with_http_info(self, dashboard_id, **kwargs): # noqa: E501,D401,D403
"""list all labels for a dashboard.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_dashboards_id_labels_with_http_info(dashboard_id, async_req=True)
>>> result = thread.get()
:param bool async_req: Whether to execute the request asynchronously.
:param str dashboard_id: The dashboard ID. (required)
:param str zap_trace_span: OpenTracing span context
:return: LabelsResponse
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
local_var_params = locals()
all_params = ['dashboard_id', 'zap_trace_span'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
all_params.append('urlopen_kw')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_dashboards_id_labels" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'dashboard_id' is set
if ('dashboard_id' not in local_var_params or
local_var_params['dashboard_id'] is None):
raise ValueError("Missing the required parameter `dashboard_id` when calling `get_dashboards_id_labels`") # noqa: E501
collection_formats = {}
path_params = {}
if 'dashboard_id' in local_var_params:
path_params['dashboardID'] = local_var_params['dashboard_id'] # noqa: E501
query_params = []
header_params = {}
if 'zap_trace_span' in local_var_params:
header_params['Zap-Trace-Span'] = local_var_params['zap_trace_span'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
# urlopen optional setting
urlopen_kw = None
if 'urlopen_kw' in kwargs:
urlopen_kw = kwargs['urlopen_kw']
return self.api_client.call_api(
'/api/v2/dashboards/{dashboardID}/labels', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='LabelsResponse', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
urlopen_kw=urlopen_kw)
def get_dashboards_id_members(self, dashboard_id, **kwargs): # noqa: E501,D401,D403
"""List all dashboard members.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_dashboards_id_members(dashboard_id, async_req=True)
>>> result = thread.get()
:param bool async_req: Whether to execute the request asynchronously.
:param str dashboard_id: The dashboard ID. (required)
:param str zap_trace_span: OpenTracing span context
:return: ResourceMembers
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_dashboards_id_members_with_http_info(dashboard_id, **kwargs) # noqa: E501
else:
(data) = self.get_dashboards_id_members_with_http_info(dashboard_id, **kwargs) # noqa: E501
return data
def get_dashboards_id_members_with_http_info(self, dashboard_id, **kwargs): # noqa: E501,D401,D403
"""List all dashboard members.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_dashboards_id_members_with_http_info(dashboard_id, async_req=True)
>>> result = thread.get()
:param bool async_req: Whether to execute the request asynchronously.
:param str dashboard_id: The dashboard ID. (required)
:param str zap_trace_span: OpenTracing span context
:return: ResourceMembers
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
local_var_params = locals()
all_params = ['dashboard_id', 'zap_trace_span'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
all_params.append('urlopen_kw')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_dashboards_id_members" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'dashboard_id' is set
if ('dashboard_id' not in local_var_params or
local_var_params['dashboard_id'] is None):
raise ValueError("Missing the required parameter `dashboard_id` when calling `get_dashboards_id_members`") # noqa: E501
collection_formats = {}
path_params = {}
if 'dashboard_id' in local_var_params:
path_params['dashboardID'] = local_var_params['dashboard_id'] # noqa: E501
query_params = []
header_params = {}
if 'zap_trace_span' in local_var_params:
header_params['Zap-Trace-Span'] = local_var_params['zap_trace_span'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
# urlopen optional setting
urlopen_kw = None
if 'urlopen_kw' in kwargs:
urlopen_kw = kwargs['urlopen_kw']
return self.api_client.call_api(
'/api/v2/dashboards/{dashboardID}/members', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ResourceMembers', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
urlopen_kw=urlopen_kw)
def get_dashboards_id_owners(self, dashboard_id, **kwargs): # noqa: E501,D401,D403
"""List all dashboard owners.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_dashboards_id_owners(dashboard_id, async_req=True)
>>> result = thread.get()
:param bool async_req: Whether to execute the request asynchronously.
:param str dashboard_id: The dashboard ID. (required)
:param str zap_trace_span: OpenTracing span context
:return: ResourceOwners
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_dashboards_id_owners_with_http_info(dashboard_id, **kwargs) # noqa: E501
else:
(data) = self.get_dashboards_id_owners_with_http_info(dashboard_id, **kwargs) # noqa: E501
return data
def get_dashboards_id_owners_with_http_info(self, dashboard_id, **kwargs): # noqa: E501,D401,D403
"""List all dashboard owners.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_dashboards_id_owners_with_http_info(dashboard_id, async_req=True)
>>> result = thread.get()
:param bool async_req: Whether to execute the request asynchronously.
:param str dashboard_id: The dashboard ID. (required)
:param str zap_trace_span: OpenTracing span context
:return: ResourceOwners
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
local_var_params = locals()
all_params = ['dashboard_id', 'zap_trace_span'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
all_params.append('urlopen_kw')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_dashboards_id_owners" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'dashboard_id' is set
if ('dashboard_id' not in local_var_params or
local_var_params['dashboard_id'] is None):
raise ValueError("Missing the required parameter `dashboard_id` when calling `get_dashboards_id_owners`") # noqa: E501
collection_formats = {}
path_params = {}
if 'dashboard_id' in local_var_params:
path_params['dashboardID'] = local_var_params['dashboard_id'] # noqa: E501
query_params = []
header_params = {}
if 'zap_trace_span' in local_var_params:
header_params['Zap-Trace-Span'] = local_var_params['zap_trace_span'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
# urlopen optional setting
urlopen_kw = None
if 'urlopen_kw' in kwargs:
urlopen_kw = kwargs['urlopen_kw']
return self.api_client.call_api(
'/api/v2/dashboards/{dashboardID}/owners', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ResourceOwners', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
urlopen_kw=urlopen_kw)
def patch_dashboards_id(self, dashboard_id, dashboard, **kwargs): # noqa: E501,D401,D403
"""Update a dashboard.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.patch_dashboards_id(dashboard_id, dashboard, async_req=True)
>>> result = thread.get()
:param bool async_req: Whether to execute the request asynchronously.
:param str dashboard_id: The ID of the dashboard to update. (required)
:param Dashboard dashboard: The dashboard update to apply. (required)
:param str zap_trace_span: OpenTracing span context
:return: Dashboard
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.patch_dashboards_id_with_http_info(dashboard_id, dashboard, **kwargs) # noqa: E501
else:
(data) = self.patch_dashboards_id_with_http_info(dashboard_id, dashboard, **kwargs) # noqa: E501
return data
def patch_dashboards_id_with_http_info(self, dashboard_id, dashboard, **kwargs): # noqa: E501,D401,D403
"""Update a dashboard.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.patch_dashboards_id_with_http_info(dashboard_id, dashboard, async_req=True)
>>> result = thread.get()
:param bool async_req: Whether to execute the request asynchronously.
:param str dashboard_id: The ID of the dashboard to update. (required)
:param Dashboard dashboard: The dashboard update to apply. (required)
:param str zap_trace_span: OpenTracing span context
:return: Dashboard
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
local_var_params = locals()
all_params = ['dashboard_id', 'dashboard', 'zap_trace_span'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
all_params.append('urlopen_kw')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method patch_dashboards_id" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'dashboard_id' is set
if ('dashboard_id' not in local_var_params or
local_var_params['dashboard_id'] is None):
raise ValueError("Missing the required parameter `dashboard_id` when calling `patch_dashboards_id`") # noqa: E501
# verify the required parameter 'dashboard' is set
if ('dashboard' not in local_var_params or
local_var_params['dashboard'] is None):
raise ValueError("Missing the required parameter `dashboard` when calling `patch_dashboards_id`") # noqa: E501
collection_formats = {}
path_params = {}
if 'dashboard_id' in local_var_params:
path_params['dashboardID'] = local_var_params['dashboard_id'] # noqa: E501
query_params = []
header_params = {}
if 'zap_trace_span' in local_var_params:
header_params['Zap-Trace-Span'] = local_var_params['zap_trace_span'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
if 'dashboard' in local_var_params:
body_params = local_var_params['dashboard']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
# urlopen optional setting
urlopen_kw = None
if 'urlopen_kw' in kwargs:
urlopen_kw = kwargs['urlopen_kw']
return self.api_client.call_api(
'/api/v2/dashboards/{dashboardID}', 'PATCH',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Dashboard', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
urlopen_kw=urlopen_kw)
def patch_dashboards_id_cells_id(self, dashboard_id, cell_id, cell_update, **kwargs): # noqa: E501,D401,D403
"""Update the non-positional information related to a cell.
Updates the non-positional information related to a cell. Updates to a single cell's positional data could cause grid conflicts.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.patch_dashboards_id_cells_id(dashboard_id, cell_id, cell_update, async_req=True)
>>> result = thread.get()
:param bool async_req: Whether to execute the request asynchronously.
:param str dashboard_id: The ID of the dashboard to update. (required)
:param str cell_id: The ID of the cell to update. (required)
:param CellUpdate cell_update: (required)
:param str zap_trace_span: OpenTracing span context
:return: Cell
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.patch_dashboards_id_cells_id_with_http_info(dashboard_id, cell_id, cell_update, **kwargs) # noqa: E501
else:
(data) = self.patch_dashboards_id_cells_id_with_http_info(dashboard_id, cell_id, cell_update, **kwargs) # noqa: E501
return data
def patch_dashboards_id_cells_id_with_http_info(self, dashboard_id, cell_id, cell_update, **kwargs): # noqa: E501,D401,D403
"""Update the non-positional information related to a cell.
Updates the non-positional information related to a cell. Updates to a single cell's positional data could cause grid conflicts.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.patch_dashboards_id_cells_id_with_http_info(dashboard_id, cell_id, cell_update, async_req=True)
>>> result = thread.get()
:param bool async_req: Whether to execute the request asynchronously.
:param str dashboard_id: The ID of the dashboard to update. (required)
:param str cell_id: The ID of the cell to update. (required)
:param CellUpdate cell_update: (required)
:param str zap_trace_span: OpenTracing span context
:return: Cell
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
local_var_params = locals()
all_params = ['dashboard_id', 'cell_id', 'cell_update', 'zap_trace_span'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
all_params.append('urlopen_kw')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method patch_dashboards_id_cells_id" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'dashboard_id' is set
if ('dashboard_id' not in local_var_params or
local_var_params['dashboard_id'] is None):
raise ValueError("Missing the required parameter `dashboard_id` when calling `patch_dashboards_id_cells_id`") # noqa: E501
# verify the required parameter 'cell_id' is set
if ('cell_id' not in local_var_params or
local_var_params['cell_id'] is None):
raise ValueError("Missing the required parameter `cell_id` when calling `patch_dashboards_id_cells_id`") # noqa: E501
# verify the required parameter 'cell_update' is set
if ('cell_update' not in local_var_params or
local_var_params['cell_update'] is None):
raise ValueError("Missing the required parameter `cell_update` when calling `patch_dashboards_id_cells_id`") # noqa: E501
collection_formats = {}
path_params = {}
if 'dashboard_id' in local_var_params:
path_params['dashboardID'] = local_var_params['dashboard_id'] # noqa: E501
if 'cell_id' in local_var_params:
path_params['cellID'] = local_var_params['cell_id'] # noqa: E501
query_params = []
header_params = {}
if 'zap_trace_span' in local_var_params:
header_params['Zap-Trace-Span'] = local_var_params['zap_trace_span'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
if 'cell_update' in local_var_params:
body_params = local_var_params['cell_update']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
# urlopen optional setting
urlopen_kw = None
if 'urlopen_kw' in kwargs:
urlopen_kw = kwargs['urlopen_kw']
return self.api_client.call_api(
'/api/v2/dashboards/{dashboardID}/cells/{cellID}', 'PATCH',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Cell', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
urlopen_kw=urlopen_kw)
def patch_dashboards_id_cells_id_view(self, dashboard_id, cell_id, view, **kwargs): # noqa: E501,D401,D403
"""Update the view for a cell.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.patch_dashboards_id_cells_id_view(dashboard_id, cell_id, view, async_req=True)
>>> result = thread.get()
:param bool async_req: Whether to execute the request asynchronously.
:param str dashboard_id: The ID of the dashboard to update. (required)
:param str cell_id: The ID of the cell to update. (required)
:param View view: (required)
:param str zap_trace_span: OpenTracing span context
:return: View
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.patch_dashboards_id_cells_id_view_with_http_info(dashboard_id, cell_id, view, **kwargs) # noqa: E501
else:
(data) = self.patch_dashboards_id_cells_id_view_with_http_info(dashboard_id, cell_id, view, **kwargs) # noqa: E501
return data
def patch_dashboards_id_cells_id_view_with_http_info(self, dashboard_id, cell_id, view, **kwargs): # noqa: E501,D401,D403
"""Update the view for a cell.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.patch_dashboards_id_cells_id_view_with_http_info(dashboard_id, cell_id, view, async_req=True)
>>> result = thread.get()
:param bool async_req: Whether to execute the request asynchronously.
:param str dashboard_id: The ID of the dashboard to update. (required)
:param str cell_id: The ID of the cell to update. (required)
:param View view: (required)
:param str zap_trace_span: OpenTracing span context
:return: View
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
local_var_params = locals()
all_params = ['dashboard_id', 'cell_id', 'view', 'zap_trace_span'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
all_params.append('urlopen_kw')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method patch_dashboards_id_cells_id_view" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'dashboard_id' is set
if ('dashboard_id' not in local_var_params or
local_var_params['dashboard_id'] is None):
raise ValueError("Missing the required parameter `dashboard_id` when calling `patch_dashboards_id_cells_id_view`") # noqa: E501
# verify the required parameter 'cell_id' is set
if ('cell_id' not in local_var_params or
local_var_params['cell_id'] is None):
raise ValueError("Missing the required parameter `cell_id` when calling `patch_dashboards_id_cells_id_view`") # noqa: E501
# verify the required parameter 'view' is set
if ('view' not in local_var_params or
local_var_params['view'] is None):
raise ValueError("Missing the required parameter `view` when calling `patch_dashboards_id_cells_id_view`") # noqa: E501
collection_formats = {}
path_params = {}
if 'dashboard_id' in local_var_params:
path_params['dashboardID'] = local_var_params['dashboard_id'] # noqa: E501
if 'cell_id' in local_var_params:
path_params['cellID'] = local_var_params['cell_id'] # noqa: E501
query_params = []
header_params = {}
if 'zap_trace_span' in local_var_params:
header_params['Zap-Trace-Span'] = local_var_params['zap_trace_span'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
if 'view' in local_var_params:
body_params = local_var_params['view']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
# urlopen optional setting
urlopen_kw = None
if 'urlopen_kw' in kwargs:
urlopen_kw = kwargs['urlopen_kw']
return self.api_client.call_api(
'/api/v2/dashboards/{dashboardID}/cells/{cellID}/view', 'PATCH',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='View', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
urlopen_kw=urlopen_kw)
def post_dashboards(self, create_dashboard_request, **kwargs): # noqa: E501,D401,D403
"""Create a dashboard.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.post_dashboards(create_dashboard_request, async_req=True)
>>> result = thread.get()
:param async_req bool
:param CreateDashboardRequest create_dashboard_request: Dashboard to create (required)
:param str zap_trace_span: OpenTracing span context
:return: Dashboard
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.post_dashboards_with_http_info(create_dashboard_request, **kwargs) # noqa: E501
else:
(data) = self.post_dashboards_with_http_info(create_dashboard_request, **kwargs) # noqa: E501
return data
def post_dashboards_with_http_info(self, create_dashboard_request, **kwargs): # noqa: E501,D401,D403
"""Create a dashboard.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.post_dashboards_with_http_info(create_dashboard_request, async_req=True)
>>> result = thread.get()
:param async_req bool
:param CreateDashboardRequest create_dashboard_request: Dashboard to create (required)
:param str zap_trace_span: OpenTracing span context
:return: Dashboard
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
local_var_params = locals()
all_params = ['create_dashboard_request', 'zap_trace_span'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
all_params.append('urlopen_kw')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method post_dashboards" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'create_dashboard_request' is set
if ('create_dashboard_request' not in local_var_params or
local_var_params['create_dashboard_request'] is None):
raise ValueError("Missing the required parameter `create_dashboard_request` when calling `post_dashboards`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
if 'zap_trace_span' in local_var_params:
header_params['Zap-Trace-Span'] = local_var_params['zap_trace_span'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
if 'create_dashboard_request' in local_var_params:
body_params = local_var_params['create_dashboard_request']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
# urlopen optional setting
urlopen_kw = None
if 'urlopen_kw' in kwargs:
urlopen_kw = kwargs['urlopen_kw']
return self.api_client.call_api(
'/api/v2/dashboards', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Dashboard', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
urlopen_kw=urlopen_kw)
def post_dashboards_id_cells(self, dashboard_id, create_cell, **kwargs): # noqa: E501,D401,D403
"""Create a dashboard cell.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.post_dashboards_id_cells(dashboard_id, create_cell, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str dashboard_id: The ID of the dashboard to update. (required)
:param CreateCell create_cell: Cell that will be added (required)
:param str zap_trace_span: OpenTracing span context
:return: Cell
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.post_dashboards_id_cells_with_http_info(dashboard_id, create_cell, **kwargs) # noqa: E501
else:
(data) = self.post_dashboards_id_cells_with_http_info(dashboard_id, create_cell, **kwargs) # noqa: E501
return data
def post_dashboards_id_cells_with_http_info(self, dashboard_id, create_cell, **kwargs): # noqa: E501,D401,D403
"""Create a dashboard cell.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.post_dashboards_id_cells_with_http_info(dashboard_id, create_cell, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str dashboard_id: The ID of the dashboard to update. (required)
:param CreateCell create_cell: Cell that will be added (required)
:param str zap_trace_span: OpenTracing span context
:return: Cell
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
local_var_params = locals()
all_params = ['dashboard_id', 'create_cell', 'zap_trace_span'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
all_params.append('urlopen_kw')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method post_dashboards_id_cells" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'dashboard_id' is set
if ('dashboard_id' not in local_var_params or
local_var_params['dashboard_id'] is None):
raise ValueError("Missing the required parameter `dashboard_id` when calling `post_dashboards_id_cells`") # noqa: E501
# verify the required parameter 'create_cell' is set
if ('create_cell' not in local_var_params or
local_var_params['create_cell'] is None):
raise ValueError("Missing the required parameter `create_cell` when calling `post_dashboards_id_cells`") # noqa: E501
collection_formats = {}
path_params = {}
if 'dashboard_id' in local_var_params:
path_params['dashboardID'] = local_var_params['dashboard_id'] # noqa: E501
query_params = []
header_params = {}
if 'zap_trace_span' in local_var_params:
header_params['Zap-Trace-Span'] = local_var_params['zap_trace_span'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
if 'create_cell' in local_var_params:
body_params = local_var_params['create_cell']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
# urlopen optional setting
urlopen_kw = None
if 'urlopen_kw' in kwargs:
urlopen_kw = kwargs['urlopen_kw']
return self.api_client.call_api(
'/api/v2/dashboards/{dashboardID}/cells', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Cell', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
urlopen_kw=urlopen_kw)
def post_dashboards_id_labels(self, dashboard_id, label_mapping, **kwargs): # noqa: E501,D401,D403
"""Add a label to a dashboard.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.post_dashboards_id_labels(dashboard_id, label_mapping, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str dashboard_id: The dashboard ID. (required)
:param LabelMapping label_mapping: Label to add (required)
:param str zap_trace_span: OpenTracing span context
:return: LabelResponse
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.post_dashboards_id_labels_with_http_info(dashboard_id, label_mapping, **kwargs) # noqa: E501
else:
(data) = self.post_dashboards_id_labels_with_http_info(dashboard_id, label_mapping, **kwargs) # noqa: E501
return data
def post_dashboards_id_labels_with_http_info(self, dashboard_id, label_mapping, **kwargs): # noqa: E501,D401,D403
"""Add a label to a dashboard.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.post_dashboards_id_labels_with_http_info(dashboard_id, label_mapping, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str dashboard_id: The dashboard ID. (required)
:param LabelMapping label_mapping: Label to add (required)
:param str zap_trace_span: OpenTracing span context
:return: LabelResponse
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
local_var_params = locals()
all_params = ['dashboard_id', 'label_mapping', 'zap_trace_span'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
all_params.append('urlopen_kw')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method post_dashboards_id_labels" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'dashboard_id' is set
if ('dashboard_id' not in local_var_params or
local_var_params['dashboard_id'] is None):
raise ValueError("Missing the required parameter `dashboard_id` when calling `post_dashboards_id_labels`") # noqa: E501
# verify the required parameter 'label_mapping' is set
if ('label_mapping' not in local_var_params or
local_var_params['label_mapping'] is None):
raise ValueError("Missing the required parameter `label_mapping` when calling `post_dashboards_id_labels`") # noqa: E501
collection_formats = {}
path_params = {}
if 'dashboard_id' in local_var_params:
path_params['dashboardID'] = local_var_params['dashboard_id'] # noqa: E501
query_params = []
header_params = {}
if 'zap_trace_span' in local_var_params:
header_params['Zap-Trace-Span'] = local_var_params['zap_trace_span'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
if 'label_mapping' in local_var_params:
body_params = local_var_params['label_mapping']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
# urlopen optional setting
urlopen_kw = None
if 'urlopen_kw' in kwargs:
urlopen_kw = kwargs['urlopen_kw']
return self.api_client.call_api(
'/api/v2/dashboards/{dashboardID}/labels', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='LabelResponse', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
urlopen_kw=urlopen_kw)
def post_dashboards_id_members(self, dashboard_id, add_resource_member_request_body, **kwargs): # noqa: E501,D401,D403
"""Add a member to a dashboard.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.post_dashboards_id_members(dashboard_id, add_resource_member_request_body, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str dashboard_id: The dashboard ID. (required)
:param AddResourceMemberRequestBody add_resource_member_request_body: User to add as member (required)
:param str zap_trace_span: OpenTracing span context
:return: ResourceMember
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.post_dashboards_id_members_with_http_info(dashboard_id, add_resource_member_request_body, **kwargs) # noqa: E501
else:
(data) = self.post_dashboards_id_members_with_http_info(dashboard_id, add_resource_member_request_body, **kwargs) # noqa: E501
return data
def post_dashboards_id_members_with_http_info(self, dashboard_id, add_resource_member_request_body, **kwargs): # noqa: E501,D401,D403
"""Add a member to a dashboard.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.post_dashboards_id_members_with_http_info(dashboard_id, add_resource_member_request_body, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str dashboard_id: The dashboard ID. (required)
:param AddResourceMemberRequestBody add_resource_member_request_body: User to add as member (required)
:param str zap_trace_span: OpenTracing span context
:return: ResourceMember
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
local_var_params = locals()
all_params = ['dashboard_id', 'add_resource_member_request_body', 'zap_trace_span'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
all_params.append('urlopen_kw')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method post_dashboards_id_members" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'dashboard_id' is set
if ('dashboard_id' not in local_var_params or
local_var_params['dashboard_id'] is None):
raise ValueError("Missing the required parameter `dashboard_id` when calling `post_dashboards_id_members`") # noqa: E501
# verify the required parameter 'add_resource_member_request_body' is set
if ('add_resource_member_request_body' not in local_var_params or
local_var_params['add_resource_member_request_body'] is None):
raise ValueError("Missing the required parameter `add_resource_member_request_body` when calling `post_dashboards_id_members`") # noqa: E501
collection_formats = {}
path_params = {}
if 'dashboard_id' in local_var_params:
path_params['dashboardID'] = local_var_params['dashboard_id'] # noqa: E501
query_params = []
header_params = {}
if 'zap_trace_span' in local_var_params:
header_params['Zap-Trace-Span'] = local_var_params['zap_trace_span'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
if 'add_resource_member_request_body' in local_var_params:
body_params = local_var_params['add_resource_member_request_body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
# urlopen optional setting
urlopen_kw = None
if 'urlopen_kw' in kwargs:
urlopen_kw = kwargs['urlopen_kw']
return self.api_client.call_api(
'/api/v2/dashboards/{dashboardID}/members', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ResourceMember', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
urlopen_kw=urlopen_kw)
def post_dashboards_id_owners(self, dashboard_id, add_resource_member_request_body, **kwargs): # noqa: E501,D401,D403
"""Add an owner to a dashboard.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.post_dashboards_id_owners(dashboard_id, add_resource_member_request_body, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str dashboard_id: The dashboard ID. (required)
:param AddResourceMemberRequestBody add_resource_member_request_body: User to add as owner (required)
:param str zap_trace_span: OpenTracing span context
:return: ResourceOwner
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.post_dashboards_id_owners_with_http_info(dashboard_id, add_resource_member_request_body, **kwargs) # noqa: E501
else:
(data) = self.post_dashboards_id_owners_with_http_info(dashboard_id, add_resource_member_request_body, **kwargs) # noqa: E501
return data
def post_dashboards_id_owners_with_http_info(self, dashboard_id, add_resource_member_request_body, **kwargs): # noqa: E501,D401,D403
"""Add an owner to a dashboard.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.post_dashboards_id_owners_with_http_info(dashboard_id, add_resource_member_request_body, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str dashboard_id: The dashboard ID. (required)
:param AddResourceMemberRequestBody add_resource_member_request_body: User to add as owner (required)
:param str zap_trace_span: OpenTracing span context
:return: ResourceOwner
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
local_var_params = locals()
all_params = ['dashboard_id', 'add_resource_member_request_body', 'zap_trace_span'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
all_params.append('urlopen_kw')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method post_dashboards_id_owners" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'dashboard_id' is set
if ('dashboard_id' not in local_var_params or
local_var_params['dashboard_id'] is None):
raise ValueError("Missing the required parameter `dashboard_id` when calling `post_dashboards_id_owners`") # noqa: E501
# verify the required parameter 'add_resource_member_request_body' is set
if ('add_resource_member_request_body' not in local_var_params or
local_var_params['add_resource_member_request_body'] is None):
raise ValueError("Missing the required parameter `add_resource_member_request_body` when calling `post_dashboards_id_owners`") # noqa: E501
collection_formats = {}
path_params = {}
if 'dashboard_id' in local_var_params:
path_params['dashboardID'] = local_var_params['dashboard_id'] # noqa: E501
query_params = []
header_params = {}
if 'zap_trace_span' in local_var_params:
header_params['Zap-Trace-Span'] = local_var_params['zap_trace_span'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
if 'add_resource_member_request_body' in local_var_params:
body_params = local_var_params['add_resource_member_request_body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
# urlopen optional setting
urlopen_kw = None
if 'urlopen_kw' in kwargs:
urlopen_kw = kwargs['urlopen_kw']
return self.api_client.call_api(
'/api/v2/dashboards/{dashboardID}/owners', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ResourceOwner', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
urlopen_kw=urlopen_kw)
def put_dashboards_id_cells(self, dashboard_id, cell, **kwargs): # noqa: E501,D401,D403
"""Replace cells in a dashboard.
Replaces all cells in a dashboard. This is used primarily to update the positional information of all cells.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.put_dashboards_id_cells(dashboard_id, cell, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str dashboard_id: The ID of the dashboard to update. (required)
:param list[Cell] cell: (required)
:param str zap_trace_span: OpenTracing span context
:return: Dashboard
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.put_dashboards_id_cells_with_http_info(dashboard_id, cell, **kwargs) # noqa: E501
else:
(data) = self.put_dashboards_id_cells_with_http_info(dashboard_id, cell, **kwargs) # noqa: E501
return data
def put_dashboards_id_cells_with_http_info(self, dashboard_id, cell, **kwargs): # noqa: E501,D401,D403
"""Replace cells in a dashboard.
Replaces all cells in a dashboard. This is used primarily to update the positional information of all cells.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.put_dashboards_id_cells_with_http_info(dashboard_id, cell, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str dashboard_id: The ID of the dashboard to update. (required)
:param list[Cell] cell: (required)
:param str zap_trace_span: OpenTracing span context
:return: Dashboard
If the method is called asynchronously,
returns the request thread.
""" # noqa: E501
local_var_params = locals()
all_params = ['dashboard_id', 'cell', 'zap_trace_span'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
all_params.append('urlopen_kw')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method put_dashboards_id_cells" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'dashboard_id' is set
if ('dashboard_id' not in local_var_params or
local_var_params['dashboard_id'] is None):
raise ValueError("Missing the required parameter `dashboard_id` when calling `put_dashboards_id_cells`") # noqa: E501
# verify the required parameter 'cell' is set
if ('cell' not in local_var_params or
local_var_params['cell'] is None):
raise ValueError("Missing the required parameter `cell` when calling `put_dashboards_id_cells`") # noqa: E501
collection_formats = {}
path_params = {}
if 'dashboard_id' in local_var_params:
path_params['dashboardID'] = local_var_params['dashboard_id'] # noqa: E501
query_params = []
header_params = {}
if 'zap_trace_span' in local_var_params:
header_params['Zap-Trace-Span'] = local_var_params['zap_trace_span'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
if 'cell' in local_var_params:
body_params = local_var_params['cell']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
# urlopen optional setting
urlopen_kw = None
if 'urlopen_kw' in kwargs:
urlopen_kw = kwargs['urlopen_kw']
return self.api_client.call_api(
'/api/v2/dashboards/{dashboardID}/cells', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Dashboard', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
urlopen_kw=urlopen_kw)
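The generated `post_dashboards` wrapper above dispatches between a synchronous call and an asynchronous one that returns a thread, delegating the real work to `*_with_http_info` and ultimately `api_client.call_api`. A minimal, self-contained sketch of that dispatch pattern, using made-up stand-ins (`FakeThread`, `demo_post_dashboards*`) rather than the real client:

```python
class FakeThread:
    """Stand-in for the thread-like handle the async path returns."""
    def __init__(self, value):
        self._value = value

    def get(self):
        # Blocks in the real client; here the result is already available.
        return self._value


def demo_post_dashboards_with_http_info(request, **kwargs):
    result = {'name': request['name']}      # pretend HTTP response body
    if kwargs.get('async_req'):
        return FakeThread(result)           # async path: hand back a handle
    return result                           # sync path: hand back the data


def demo_post_dashboards(request, **kwargs):
    # Mirrors the generated wrapper: force data-only return, then dispatch.
    kwargs['_return_http_data_only'] = True
    if kwargs.get('async_req'):
        return demo_post_dashboards_with_http_info(request, **kwargs)
    else:
        data = demo_post_dashboards_with_http_info(request, **kwargs)
        return data


print(demo_post_dashboards({'name': 'cpu'}))                        # -> {'name': 'cpu'}
print(demo_post_dashboards({'name': 'cpu'}, async_req=True).get())  # -> {'name': 'cpu'}
```

The real methods differ only in that the "response" comes from an HTTP request against `/api/v2/dashboards`.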
def format_notice_body(notice):
return {
'rss.new': format_rss_notice,
}[notice.change.kind](notice)
def format_rss_notice(notice):
    return '%(title)s updated: %(link)s' % notice.change.art
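`format_notice_body` above dispatches on `notice.change.kind` to pick a formatter. A self-contained sketch with demo names (to avoid shadowing the real functions) and a made-up notice object built from `SimpleNamespace`:

```python
from types import SimpleNamespace


def _demo_rss_notice(notice):
    # Same %-template style as format_rss_notice above, message text in English.
    return '%(title)s updated: %(link)s' % notice.change.art


def demo_notice_body(notice):
    # Mirrors format_notice_body's kind -> formatter dispatch table.
    return {'rss.new': _demo_rss_notice}[notice.change.kind](notice)


notice = SimpleNamespace(
    change=SimpleNamespace(kind='rss.new',
                           art={'title': 'Ep.1', 'link': 'http://example.com/ep1'}))
print(demo_notice_body(notice))  # -> Ep.1 updated: http://example.com/ep1
```

An unknown `kind` raises `KeyError`, same as the original dispatch dict.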
# import jieba
from jieba import lcut,analyse
# import jieba.analyse as analyse
import re
from tqdm import tqdm #, trange
import pandas as pd
# from sklearn.utils import shuffle
# from sklearn.model_selection import train_test_split
# import matplotlib.pyplot as plt
# import warnings
import multiprocessing
import string
import time
# import frozen
multiprocessing.freeze_support()  # no-op unless running as a frozen Windows executable
analyse.set_stop_words('./stopwords/cg_stopwords.txt')
# warnings.filterwarnings("ignore")
# plt.rcParams['font.sans-serif']='SimHei'
# plt.rcParams['axes.unicode_minus']=False
# classdict = {'财经':0,'股票':0,'房产':1,'教育':2,'科技':3,'军事':4,'汽车':5,'体育':6,'综合体育最新':6,'体育焦点':6,\
# '游戏':7,'娱乐':8,'其它':9,'社会':9,'健康':9,'法制':9}
# The categories 世界 (world), 国际 (international) and 历史 (history) are ambiguous
classdict = {'财经':0,'股票':0,'房产':1,'教育':2,'科技':3,'数码': 3,'科普': 3,'军事':4,'汽车':5,'体育':6,'足球': 6,'综合体育最新':6,\
'体育焦点':6,'游戏':7,'娱乐':8,'其它':9,'其他':9,'社会':9,'健康':9,'法制':9,'世界':9,'国际':9,'文化':9,'历史':9,'时尚':9,'情感':9,\
             '旅游':9,'美食':9,'宠物':9,'星座':9,'动漫':9}
iddict={0:'财经',1:'房产',2:'教育',3:'科技',4:'军事',5:'汽车',6:'体育',7:'游戏',8:'娱乐',9:'其他'}
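The two mappings above compose: `classdict` folds the raw site categories into ten class ids, and `iddict` names each id. A small sketch with a made-up subset (demo names are used to avoid shadowing the real dicts):

```python
# Subset of the real mappings, for illustration only.
classdict_demo = {'股票': 0, '足球': 6, '历史': 9}
iddict_demo = {0: '财经', 6: '体育', 9: '其他'}


def label_to_name(raw_label):
    # raw category -> class id -> canonical class name
    return iddict_demo[classdict_demo[raw_label]]


print(label_to_name('股票'))  # -> 财经 (stock news rolls up to the finance class)
```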
class Linedata(object):
def __init__(self):
# self.textinput = textinput
# self.path_txt = path_txt
# self.news_path=news_path
self.classdict = {'财经':0,'房产':1,'教育':2,'科技':3,'军事':4,'汽车':5,'体育':6,'游戏':7,'娱乐':8,'其他':9}
#------------------------读取文件-------------------------#
def read_excel(self,path):
data_xls = pd.ExcelFile(path)
df =pd.read_excel(path,sheet_name=0,usecols=[0,1,2])
print(data_xls.sheet_names)
        for idx in range(1,len(data_xls.sheet_names)-1):  # note: this range skips the last sheet
# print(idx)
df_ =pd.read_excel(path,sheet_name=idx,usecols=[0,1,2])
df_=df_.drop(index=0)
df = pd.concat([df, df_], axis=0)
return df
    def stopwordslist(self,filepath):
        with open(filepath, encoding='UTF-8') as f:
            stopwords = [line.strip() for line in f]
        return stopwords
def chuli(self,stopwords,textinput):
# textinput=self.textinput
        #------------------------------------------ Load stopwords -----------------------------------------------------#
        # strip() removes the given characters (whitespace/newlines by default) from both ends of a string
# stopwords=set(self.stopwordslist('cg_stopwords.txt'))
# # vocab=set(self.stopwordslist('vocab.txt'))
# stopwords=list(stopwords)
        #--------------------------------- Keyword extraction based on TF-IDF ------------------------------------------#
# analyse.set_stop_words('cg_stopwords.txt')
# import zhon.hanzi
# punc = string.punctuation + ';:,、?!‘’“”《》¥%' #zhon.hanzi.punctuation
punc = string.punctuation + '《》¥%'
textinput = textinput.strip()
        #-------------------------------- Title processing ----------------------------------------#
try:
title,text = textinput.split('\t')
title_ = lcut(title)
            title_ = [w for w in title_ if (w not in stopwords or w in punc) and not re.match('^[0-9.]*$',w)]  # drop numeric-only tokens
# print(title_)
newtitle = "".join(title_)
except:
text_ = analyse.extract_tags(textinput,32)
            text_ = [w for w in text_ if not re.match('^[0-9.]*$',w)]  # drop numeric-only tokens
# print(title_)
output= "".join(text_) +'\t'+'9'
print("转换后:{}".format(output[:-2]))
return output
#-------------------clean words whos length<2 and with only numbers and characters-------------------#
# title_1 = jieba.lcut(title.strip())
# title_1 = [w for w in title_1 if len(w)>1 and not re.match('^[0-9|.]*$',w)]
# title = [w for w in title if w in vocab]
# title = "".join(title)
#--------------------------------内容处理----------------------------------------#
# text = [w for w in text if w in vocab] \w 匹配字母或数字或下划线或汉字 等价于 '[^A-Za-z0-9_]'。
# text = "".join(text)
text_ = analyse.extract_tags(text,32)
text_ = [w for w in text_ if w not in title_ and not re.match('^[0-9|.]*$',w)] #'^[a-z|A-Z|0-9|.]*$'
newtext = "".join(text_)
output = newtitle +' '+ newtext +'\t'+'9'
# print(newtitle)
# print(newtext)
print("转换后:{}".format(output[:-2]))
return output
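# Note on the numeric filter used throughout: inside a character class '|' is a
# literal character, not alternation, so '^[0-9|.]*$' also matches tokens such
# as "1|2"; '^[0-9.]*$' expresses the usual intent. A minimal check:

```python
import re

pattern = re.compile(r'^[0-9|.]*$')    # the pattern as written above
print(bool(pattern.match('3.14')))     # True  -> purely numeric, filtered out
print(bool(pattern.match('GDP增速')))   # False -> mixed token, kept
print(bool(pattern.match('1|2')))      # True  -> '|' matched literally
```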
class Excel_preddata(object):
    """Preprocess an Excel file of unlabelled news for prediction."""
    def __init__(self):
        self.classdict = {'财经':0,'房产':1,'教育':2,'科技':3,'军事':4,'汽车':5,'体育':6,'游戏':7,'娱乐':8,'其他':9}

    # ------------------------ read the input file ------------------------ #
    def read_excel(self, path):
        data_xls = pd.ExcelFile(path)
        # only the first sheet is read here; columns 2 and 3 hold title and body
        df = pd.read_excel(path, sheet_name=0, usecols=[2, 3])
        print(data_xls.sheet_names)
        return df, data_xls.sheet_names

    def stopwordslist(self, filepath):
        with open(filepath, encoding='UTF-8') as f:
            stopwords = [line.strip() for line in f.readlines()]
        return stopwords
    def chuli(self, stopwords, news_path):
        """Single-process cleaning: Excel -> "keywords<TAB>label" text file."""
        if news_path.endswith('.xlsx'):
            path_txt = news_path.split('.xlsx')[0] + '.txt'
        elif news_path.endswith('.xls'):
            path_txt = news_path.split('.xls')[0] + '.txt'
        else:
            print('Unsupported input file format, please try again')
            return None
        with open(path_txt, 'w+', encoding='UTF-8') as f:
            f.write('')  # truncate any previous output
        df, sheet_names = self.read_excel(news_path)
        # ---------------- TF-IDF based keyword extraction ---------------- #
        punc = string.punctuation + '《》¥%'
        with open(path_txt, 'a+', encoding='UTF-8') as f:
            for content in tqdm(df.values):  # could be parallelised (see multi_chuli)
                title = content[0]
                text = content[1]
                # ---------------- title processing ---------------- #
                title_ = lcut(title.strip())
                title_ = [w for w in title_ if (w not in stopwords or w in punc) and not re.match('^[0-9|.]*$', w)]
                newtitle = "".join(title_)
                # ---------------- body processing ---------------- #
                try:
                    text_ = analyse.extract_tags(text, 32)
                    text_ = [w for w in text_ if w not in title_ and not re.match('^[0-9|.]*$', w)]
                    newtext = "".join(text_)
                    title_text = " ".join([newtitle, newtext])
                    newcontent = title_text + '\t' + '9' + '\n'
                except Exception:
                    # body missing or not a string (e.g. NaN): fall back to the title only
                    newcontent = newtitle + '\t' + '9' + '\n'
                f.write(newcontent)
                time.sleep(0.01)
        return path_txt, sheet_names
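# The files written above hold one sample per line in the form
# "<title> <keywords>\t<label>\n". A hedged sketch of reading such a line back
# (the sample line is made up for illustration):

```python
line = "央行 降准 流动性 货币政策\t9\n"
text, label = line.rstrip('\n').rsplit('\t', 1)  # split on the LAST tab
print(text)   # 央行 降准 流动性 货币政策
print(label)  # 9
```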
    def worker(self, q1, q2, b1, stopwords, punc):
        """Worker process: tokenise one row at a time."""
        while True:
            content = q1.get()
            if len(content) == 0:
                b1.wait()  # block until every worker has seen its stop sentinel
                break
            title = content[0]
            text = content[1]
            title_ = lcut(title.strip())
            title_ = [w for w in title_ if (w not in stopwords or w in punc) and not re.match('^[0-9|.]*$', w)]
            newtitle = "".join(title_)
            # ---------------- body processing ---------------- #
            try:
                text_ = analyse.extract_tags(text, 32)
                text_ = [w for w in text_ if w not in title_ and not re.match('^[0-9|.]*$', w)]
                newtext = "".join(text_)
                title_text = " ".join([newtitle, newtext])
                newcontent = title_text + '\t' + '9' + '\n'
            except Exception:
                newcontent = newtitle + '\t' + '9' + '\n'
            q2.put(newcontent)
        print("Worker finished!")

    def feed(self, q1, q2, b1, num, df):
        """Producer process: push every DataFrame row into the work queue."""
        for content in tqdm(df.values):
            q1.put(content)
        for itr in range(num):
            q1.put([])  # one empty-list stop sentinel per worker
        b1.wait()       # wait until all workers are done, then stop the writer
        q2.put([])
        print("Feed finished")

    def output(self, q2, path_txt):
        """Consumer process: append cleaned lines to the output file."""
        count = 0
        while True:
            a = q2.get()
            if len(a) == 0:
                break
            with open(path_txt, 'a+', encoding='UTF-8') as f:
                f.write(a)
            count += 1
        print("Output finished", "N input", count)
    def multi_chuli(self, stopwords, news_path):
        """Multi-process cleaning: feed -> n workers -> output."""
        if news_path.endswith('.xlsx'):
            path_txt = news_path.split('.xlsx')[0] + '.txt'
        elif news_path.endswith('.xls'):
            path_txt = news_path.split('.xls')[0] + '.txt'
        else:
            print('Unsupported input file format, please try again')
            return None
        with open(path_txt, 'w+', encoding='UTF-8') as f:
            f.write('')  # truncate any previous output
        df, sheet_names = self.read_excel(news_path)
        punc = string.punctuation + '《》¥%'
        n_process = 8  # number of worker processes
        q1 = multiprocessing.Queue(50)  # rows waiting to be processed
        q2 = multiprocessing.Queue(50)  # cleaned lines waiting to be written
        # Barrier over the workers plus the feeder; Manager/Pipe would also work,
        # but beware of deadlocks caused by blocking calls.
        barrier = multiprocessing.Barrier(n_process + 1)
        mp1 = multiprocessing.Process(target=self.feed, args=(q1, q2, barrier, n_process, df))
        mp1.start()
        wks = []
        stopwords = multiprocessing.Manager().list(stopwords)  # shared between processes
        punc = multiprocessing.Manager().list(punc)
        for p in range(n_process):  # ideally matched to the CPU core count
            w1 = multiprocessing.Process(target=self.worker, args=(q1, q2, barrier, stopwords, punc))
            w1.start()
            wks.append(w1)
        mp2 = multiprocessing.Process(target=self.output, args=(q2, path_txt))
        mp2.start()
        # wait for the pipeline in order: feeder, workers, writer
        mp1.join()
        for w in wks:
            w.join()
        mp2.join()
        return path_txt, sheet_names
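# The feed -> workers -> output hand-off above relies on two things: one
# empty-list sentinel per worker, and a Barrier sized n_process + 1 so the
# feeder only stops the writer after every worker has drained q1. The same
# protocol, sketched with threads (a stand-in for the jieba/TF-IDF work) so
# it can be checked deterministically:

```python
import queue
import threading

def demo_worker(q_in, q_out, barrier):
    # consume until the empty-list stop sentinel, then rendezvous at the barrier
    while True:
        item = q_in.get()
        if isinstance(item, list) and not item:
            barrier.wait()
            break
        q_out.put(item * 2)  # stand-in for the real per-row work

def demo_feed(q_in, q_out, barrier, n_workers, data):
    for x in data:
        q_in.put(x)
    for _ in range(n_workers):
        q_in.put([])      # one sentinel per worker
    barrier.wait()        # all workers finished -> safe to stop the writer
    q_out.put([])

n = 4
q1, q2 = queue.Queue(50), queue.Queue(50)
barrier = threading.Barrier(n + 1)  # n workers + the feeder
workers = [threading.Thread(target=demo_worker, args=(q1, q2, barrier)) for _ in range(n)]
feeder = threading.Thread(target=demo_feed, args=(q1, q2, barrier, n, range(10)))
for t in workers:
    t.start()
feeder.start()

results = []
while True:          # plays the role of output()
    a = q2.get()
    if isinstance(a, list) and not a:
        break
    results.append(a)
for t in workers + [feeder]:
    t.join()
print(sorted(results))  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```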
class Excel_testdata(object):
    """Preprocess a labelled Excel file into training/test text data."""
    def __init__(self):
        self.classdict = {'财经':0,'房产':1,'教育':2,'科技':3,'军事':4,'汽车':5,'体育':6,'游戏':7,'娱乐':8,'其他':9}

    # ------------------------ read the input file ------------------------ #
    def read_excel(self, path):
        data_xls = pd.ExcelFile(path)
        df = pd.read_excel(path, sheet_name=0, usecols=[0, 1, 2])
        print(data_xls.sheet_names)
        for idx in range(1, len(data_xls.sheet_names) - 1):
            df_ = pd.read_excel(path, sheet_name=idx, usecols=[0, 1, 2])
            df_ = df_.drop(index=0)  # drop the repeated header row
            df = pd.concat([df, df_], axis=0)
        return df

    def stopwordslist(self, filepath):
        with open(filepath, encoding='UTF-8') as f:
            stopwords = [line.strip() for line in f.readlines()]
        return stopwords
    def content_process(self, df, stopwords, punc):
        """Clean every row; returns a list of "keywords<TAB>label" lines."""
        # accumulate one output line per row (the original returned only a
        # single string, so the caller's write loop iterated its characters)
        newcontent = []
        for content in tqdm(df.values):
            title = content[2]
            text = content[0]
            # drop stopwords (but keep punctuation tokens) and purely numeric tokens
            title_ = lcut(title.strip())
            title_ = [w for w in title_ if (w not in stopwords or w in punc) and not re.match('^[0-9|.]*$', w)]
            newtitle = "".join(title_)
            try:
                text_ = analyse.extract_tags(text, 32)
                text_ = [w for w in text_ if w not in title_ and not re.match('^[0-9|.]*$', w)]
                newtext = "".join(text_)
                title_text = " ".join([newtitle, newtext])
                newcontent.append(title_text + '\t' + str(classdict[content[1]]) + '\n')
            except Exception:
                # body missing or not a string (e.g. NaN): fall back to the title only
                newcontent.append(newtitle + '\t' + str(classdict[content[1]]) + '\n')
        return newcontent
    def chuli(self, stopwords, news_path):
        """Single-process cleaning of a labelled Excel file."""
        if news_path.endswith('.xlsx'):
            path_txt = news_path.split('.xlsx')[0] + '.txt'
        elif news_path.endswith('.xls'):
            path_txt = news_path.split('.xls')[0] + '.txt'
        else:
            print('Unsupported input file format, please try again')
            return None
        with open(path_txt, 'w+', encoding='UTF-8') as f:
            f.write('')  # truncate any previous output
        df = self.read_excel(news_path)
        # ---------------- TF-IDF based keyword extraction ---------------- #
        punc = string.punctuation + '《》¥%'
        newcontent = self.content_process(df, stopwords, punc)
        with open(path_txt, 'a+', encoding='UTF-8') as f:
            for line in newcontent:
                f.write(line)
                time.sleep(0.01)
        return path_txt
    def worker(self, q1, q2, b1, stopwords, punc):
        """Worker process: tokenise one labelled row at a time."""
        while True:
            content = q1.get()
            if len(content) == 0:
                b1.wait()  # block until every worker has seen its stop sentinel
                break
            title = content[2]
            text = content[0]
            title_ = lcut(title.strip())
            title_ = [w for w in title_ if (w not in stopwords or w in punc) and not re.match('^[0-9|.]*$', w)]
            newtitle = "".join(title_)
            try:
                text_ = analyse.extract_tags(text, 32)
                text_ = [w for w in text_ if w not in title_ and not re.match('^[0-9|.]*$', w)]
                newtext = "".join(text_)
                title_text = " ".join([newtitle, newtext])
                newcontent = title_text + '\t' + str(classdict[content[1]]) + '\n'
            except Exception:
                newcontent = newtitle + '\t' + str(classdict[content[1]]) + '\n'
            q2.put(newcontent)
        print("Worker finished!")

    def feed(self, q1, q2, b1, num, df):
        """Producer process: push every DataFrame row into the work queue."""
        for content in tqdm(df.values):
            q1.put(content)
        for itr in range(num):
            q1.put([])  # one empty-list stop sentinel per worker
        b1.wait()       # wait until all workers are done, then stop the writer
        q2.put([])
        print("Feed finished")

    def output(self, q2, path_txt):
        """Consumer process: append cleaned lines to the output file."""
        count = 0
        while True:
            a = q2.get()
            if len(a) == 0:
                break
            with open(path_txt, 'a+', encoding='UTF-8') as f:
                f.write(a)
            count += 1
        print("Output finished", "N input", count)
    def multi_chuli(self, stopwords, news_path):
        """Multi-process cleaning: feed -> n workers -> output."""
        if news_path.endswith('.xlsx'):
            path_txt = news_path.split('.xlsx')[0] + '.txt'
        elif news_path.endswith('.xls'):
            path_txt = news_path.split('.xls')[0] + '.txt'
        else:
            print('Unsupported input file format, please try again')
            return None
        with open(path_txt, 'w+', encoding='UTF-8') as f:
            f.write('')  # truncate any previous output
        df = self.read_excel(news_path)
        punc = string.punctuation + '《》¥%'
        n_process = 8  # number of worker processes
        q1 = multiprocessing.Queue(50)  # rows waiting to be processed
        q2 = multiprocessing.Queue(50)  # cleaned lines waiting to be written
        # Barrier over the workers plus the feeder; Manager/Pipe would also work,
        # but beware of deadlocks caused by blocking calls.
        barrier = multiprocessing.Barrier(n_process + 1)
        mp1 = multiprocessing.Process(target=self.feed, args=(q1, q2, barrier, n_process, df))
        mp1.start()
        wks = []
        stopwords = multiprocessing.Manager().list(stopwords)  # shared between processes
        punc = multiprocessing.Manager().list(punc)
        for p in range(n_process):  # ideally matched to the CPU core count
            w1 = multiprocessing.Process(target=self.worker, args=(q1, q2, barrier, stopwords, punc))
            w1.start()
            wks.append(w1)
        mp2 = multiprocessing.Process(target=self.output, args=(q2, path_txt))
        mp2.start()
        # wait for the pipeline in order: feeder, workers, writer
        mp1.join()
        for w in wks:
            w.join()
        mp2.join()
        return path_txt
# ==== ydb/public/api/grpc/draft/yql_db_v1_pb2_grpc.py (repo: gridnevvvit/ydb-python-sdk, Apache-2.0) ====

# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
"""Client and server classes corresponding to protobuf-defined services."""
import grpc
from ydb.public.api.protos.draft import yq_private_pb2 as ydb_dot_public_dot_api_dot_protos_dot_draft_dot_yq__private__pb2
class YqPrivateTaskServiceStub(object):
"""Missing associated documentation comment in .proto file."""
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.GetTask = channel.unary_unary(
'/Yq.Private.V1.YqPrivateTaskService/GetTask',
request_serializer=ydb_dot_public_dot_api_dot_protos_dot_draft_dot_yq__private__pb2.GetTaskRequest.SerializeToString,
response_deserializer=ydb_dot_public_dot_api_dot_protos_dot_draft_dot_yq__private__pb2.GetTaskResponse.FromString,
)
self.PingTask = channel.unary_unary(
'/Yq.Private.V1.YqPrivateTaskService/PingTask',
request_serializer=ydb_dot_public_dot_api_dot_protos_dot_draft_dot_yq__private__pb2.PingTaskRequest.SerializeToString,
response_deserializer=ydb_dot_public_dot_api_dot_protos_dot_draft_dot_yq__private__pb2.PingTaskResponse.FromString,
)
self.WriteTaskResult = channel.unary_unary(
'/Yq.Private.V1.YqPrivateTaskService/WriteTaskResult',
request_serializer=ydb_dot_public_dot_api_dot_protos_dot_draft_dot_yq__private__pb2.WriteTaskResultRequest.SerializeToString,
response_deserializer=ydb_dot_public_dot_api_dot_protos_dot_draft_dot_yq__private__pb2.WriteTaskResultResponse.FromString,
)
self.NodesHealthCheck = channel.unary_unary(
'/Yq.Private.V1.YqPrivateTaskService/NodesHealthCheck',
request_serializer=ydb_dot_public_dot_api_dot_protos_dot_draft_dot_yq__private__pb2.NodesHealthCheckRequest.SerializeToString,
response_deserializer=ydb_dot_public_dot_api_dot_protos_dot_draft_dot_yq__private__pb2.NodesHealthCheckResponse.FromString,
)
class YqPrivateTaskServiceServicer(object):
"""Missing associated documentation comment in .proto file."""
def GetTask(self, request, context):
"""gets new task
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def PingTask(self, request, context):
"""pings new task (also can update metadata)
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def WriteTaskResult(self, request, context):
"""writes rows
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def NodesHealthCheck(self, request, context):
"""Nodes
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_YqPrivateTaskServiceServicer_to_server(servicer, server):
rpc_method_handlers = {
'GetTask': grpc.unary_unary_rpc_method_handler(
servicer.GetTask,
request_deserializer=ydb_dot_public_dot_api_dot_protos_dot_draft_dot_yq__private__pb2.GetTaskRequest.FromString,
response_serializer=ydb_dot_public_dot_api_dot_protos_dot_draft_dot_yq__private__pb2.GetTaskResponse.SerializeToString,
),
'PingTask': grpc.unary_unary_rpc_method_handler(
servicer.PingTask,
request_deserializer=ydb_dot_public_dot_api_dot_protos_dot_draft_dot_yq__private__pb2.PingTaskRequest.FromString,
response_serializer=ydb_dot_public_dot_api_dot_protos_dot_draft_dot_yq__private__pb2.PingTaskResponse.SerializeToString,
),
'WriteTaskResult': grpc.unary_unary_rpc_method_handler(
servicer.WriteTaskResult,
request_deserializer=ydb_dot_public_dot_api_dot_protos_dot_draft_dot_yq__private__pb2.WriteTaskResultRequest.FromString,
response_serializer=ydb_dot_public_dot_api_dot_protos_dot_draft_dot_yq__private__pb2.WriteTaskResultResponse.SerializeToString,
),
'NodesHealthCheck': grpc.unary_unary_rpc_method_handler(
servicer.NodesHealthCheck,
request_deserializer=ydb_dot_public_dot_api_dot_protos_dot_draft_dot_yq__private__pb2.NodesHealthCheckRequest.FromString,
response_serializer=ydb_dot_public_dot_api_dot_protos_dot_draft_dot_yq__private__pb2.NodesHealthCheckResponse.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'Yq.Private.V1.YqPrivateTaskService', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
# This class is part of an EXPERIMENTAL API.
class YqPrivateTaskService(object):
"""Missing associated documentation comment in .proto file."""
@staticmethod
def GetTask(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/Yq.Private.V1.YqPrivateTaskService/GetTask',
ydb_dot_public_dot_api_dot_protos_dot_draft_dot_yq__private__pb2.GetTaskRequest.SerializeToString,
ydb_dot_public_dot_api_dot_protos_dot_draft_dot_yq__private__pb2.GetTaskResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def PingTask(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/Yq.Private.V1.YqPrivateTaskService/PingTask',
ydb_dot_public_dot_api_dot_protos_dot_draft_dot_yq__private__pb2.PingTaskRequest.SerializeToString,
ydb_dot_public_dot_api_dot_protos_dot_draft_dot_yq__private__pb2.PingTaskResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def WriteTaskResult(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/Yq.Private.V1.YqPrivateTaskService/WriteTaskResult',
ydb_dot_public_dot_api_dot_protos_dot_draft_dot_yq__private__pb2.WriteTaskResultRequest.SerializeToString,
ydb_dot_public_dot_api_dot_protos_dot_draft_dot_yq__private__pb2.WriteTaskResultResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def NodesHealthCheck(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/Yq.Private.V1.YqPrivateTaskService/NodesHealthCheck',
ydb_dot_public_dot_api_dot_protos_dot_draft_dot_yq__private__pb2.NodesHealthCheckRequest.SerializeToString,
ydb_dot_public_dot_api_dot_protos_dot_draft_dot_yq__private__pb2.NodesHealthCheckResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
# ==== sdk/python/pulumi_aws/amplify/branch.py (repo: RafalSumislawski/pulumi-aws, ECL-2.0/Apache-2.0) ====

# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = ['BranchArgs', 'Branch']
@pulumi.input_type
class BranchArgs:
def __init__(__self__, *,
app_id: pulumi.Input[str],
branch_name: pulumi.Input[str],
backend_environment_arn: Optional[pulumi.Input[str]] = None,
basic_auth_credentials: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
display_name: Optional[pulumi.Input[str]] = None,
enable_auto_build: Optional[pulumi.Input[bool]] = None,
enable_basic_auth: Optional[pulumi.Input[bool]] = None,
enable_notification: Optional[pulumi.Input[bool]] = None,
enable_performance_mode: Optional[pulumi.Input[bool]] = None,
enable_pull_request_preview: Optional[pulumi.Input[bool]] = None,
environment_variables: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
framework: Optional[pulumi.Input[str]] = None,
pull_request_environment_name: Optional[pulumi.Input[str]] = None,
stage: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
ttl: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing a Branch resource.
:param pulumi.Input[str] app_id: The unique ID for an Amplify app.
:param pulumi.Input[str] branch_name: The name for the branch.
:param pulumi.Input[str] backend_environment_arn: The Amazon Resource Name (ARN) for a backend environment that is part of an Amplify app.
:param pulumi.Input[str] basic_auth_credentials: The basic authorization credentials for the branch.
:param pulumi.Input[str] description: The description for the branch.
:param pulumi.Input[str] display_name: The display name for a branch. This is used as the default domain prefix.
:param pulumi.Input[bool] enable_auto_build: Enables auto building for the branch.
:param pulumi.Input[bool] enable_basic_auth: Enables basic authorization for the branch.
:param pulumi.Input[bool] enable_notification: Enables notifications for the branch.
:param pulumi.Input[bool] enable_performance_mode: Enables performance mode for the branch.
:param pulumi.Input[bool] enable_pull_request_preview: Enables pull request previews for this branch.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] environment_variables: The environment variables for the branch.
:param pulumi.Input[str] framework: The framework for the branch.
:param pulumi.Input[str] pull_request_environment_name: The Amplify environment name for the pull request.
:param pulumi.Input[str] stage: Describes the current stage for the branch. Valid values: `PRODUCTION`, `BETA`, `DEVELOPMENT`, `EXPERIMENTAL`, `PULL_REQUEST`.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: Key-value mapping of resource tags. If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider-level.
:param pulumi.Input[str] ttl: The content Time To Live (TTL) for the website in seconds.
"""
pulumi.set(__self__, "app_id", app_id)
pulumi.set(__self__, "branch_name", branch_name)
if backend_environment_arn is not None:
pulumi.set(__self__, "backend_environment_arn", backend_environment_arn)
if basic_auth_credentials is not None:
pulumi.set(__self__, "basic_auth_credentials", basic_auth_credentials)
if description is not None:
pulumi.set(__self__, "description", description)
if display_name is not None:
pulumi.set(__self__, "display_name", display_name)
if enable_auto_build is not None:
pulumi.set(__self__, "enable_auto_build", enable_auto_build)
if enable_basic_auth is not None:
pulumi.set(__self__, "enable_basic_auth", enable_basic_auth)
if enable_notification is not None:
pulumi.set(__self__, "enable_notification", enable_notification)
if enable_performance_mode is not None:
pulumi.set(__self__, "enable_performance_mode", enable_performance_mode)
if enable_pull_request_preview is not None:
pulumi.set(__self__, "enable_pull_request_preview", enable_pull_request_preview)
if environment_variables is not None:
pulumi.set(__self__, "environment_variables", environment_variables)
if framework is not None:
pulumi.set(__self__, "framework", framework)
if pull_request_environment_name is not None:
pulumi.set(__self__, "pull_request_environment_name", pull_request_environment_name)
if stage is not None:
pulumi.set(__self__, "stage", stage)
if tags is not None:
pulumi.set(__self__, "tags", tags)
if ttl is not None:
pulumi.set(__self__, "ttl", ttl)
@property
@pulumi.getter(name="appId")
def app_id(self) -> pulumi.Input[str]:
"""
The unique ID for an Amplify app.
"""
return pulumi.get(self, "app_id")
@app_id.setter
def app_id(self, value: pulumi.Input[str]):
pulumi.set(self, "app_id", value)
@property
@pulumi.getter(name="branchName")
def branch_name(self) -> pulumi.Input[str]:
"""
The name for the branch.
"""
return pulumi.get(self, "branch_name")
@branch_name.setter
def branch_name(self, value: pulumi.Input[str]):
pulumi.set(self, "branch_name", value)
@property
@pulumi.getter(name="backendEnvironmentArn")
def backend_environment_arn(self) -> Optional[pulumi.Input[str]]:
"""
The Amazon Resource Name (ARN) for a backend environment that is part of an Amplify app.
"""
return pulumi.get(self, "backend_environment_arn")
@backend_environment_arn.setter
def backend_environment_arn(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "backend_environment_arn", value)
@property
@pulumi.getter(name="basicAuthCredentials")
def basic_auth_credentials(self) -> Optional[pulumi.Input[str]]:
"""
The basic authorization credentials for the branch.
"""
return pulumi.get(self, "basic_auth_credentials")
@basic_auth_credentials.setter
def basic_auth_credentials(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "basic_auth_credentials", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
The description for the branch.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="displayName")
def display_name(self) -> Optional[pulumi.Input[str]]:
"""
The display name for a branch. This is used as the default domain prefix.
"""
return pulumi.get(self, "display_name")
@display_name.setter
def display_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "display_name", value)
@property
@pulumi.getter(name="enableAutoBuild")
def enable_auto_build(self) -> Optional[pulumi.Input[bool]]:
"""
Enables auto building for the branch.
"""
return pulumi.get(self, "enable_auto_build")
@enable_auto_build.setter
def enable_auto_build(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enable_auto_build", value)
@property
@pulumi.getter(name="enableBasicAuth")
def enable_basic_auth(self) -> Optional[pulumi.Input[bool]]:
"""
Enables basic authorization for the branch.
"""
return pulumi.get(self, "enable_basic_auth")
@enable_basic_auth.setter
def enable_basic_auth(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enable_basic_auth", value)
@property
@pulumi.getter(name="enableNotification")
def enable_notification(self) -> Optional[pulumi.Input[bool]]:
"""
Enables notifications for the branch.
"""
return pulumi.get(self, "enable_notification")
@enable_notification.setter
def enable_notification(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enable_notification", value)
@property
@pulumi.getter(name="enablePerformanceMode")
def enable_performance_mode(self) -> Optional[pulumi.Input[bool]]:
"""
Enables performance mode for the branch.
"""
return pulumi.get(self, "enable_performance_mode")
@enable_performance_mode.setter
def enable_performance_mode(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enable_performance_mode", value)
@property
@pulumi.getter(name="enablePullRequestPreview")
def enable_pull_request_preview(self) -> Optional[pulumi.Input[bool]]:
"""
Enables pull request previews for this branch.
"""
return pulumi.get(self, "enable_pull_request_preview")
@enable_pull_request_preview.setter
def enable_pull_request_preview(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enable_pull_request_preview", value)
@property
@pulumi.getter(name="environmentVariables")
def environment_variables(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
The environment variables for the branch.
"""
return pulumi.get(self, "environment_variables")
@environment_variables.setter
def environment_variables(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "environment_variables", value)
@property
@pulumi.getter
def framework(self) -> Optional[pulumi.Input[str]]:
"""
The framework for the branch.
"""
return pulumi.get(self, "framework")
@framework.setter
def framework(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "framework", value)
@property
@pulumi.getter(name="pullRequestEnvironmentName")
def pull_request_environment_name(self) -> Optional[pulumi.Input[str]]:
"""
The Amplify environment name for the pull request.
"""
return pulumi.get(self, "pull_request_environment_name")
@pull_request_environment_name.setter
def pull_request_environment_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "pull_request_environment_name", value)
@property
@pulumi.getter
def stage(self) -> Optional[pulumi.Input[str]]:
"""
Describes the current stage for the branch. Valid values: `PRODUCTION`, `BETA`, `DEVELOPMENT`, `EXPERIMENTAL`, `PULL_REQUEST`.
"""
return pulumi.get(self, "stage")
@stage.setter
def stage(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "stage", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
Key-value mapping of resource tags. If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider-level.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@property
@pulumi.getter
def ttl(self) -> Optional[pulumi.Input[str]]:
"""
The content Time To Live (TTL) for the website in seconds.
"""
return pulumi.get(self, "ttl")
@ttl.setter
def ttl(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "ttl", value)
@pulumi.input_type
class _BranchState:
def __init__(__self__, *,
app_id: Optional[pulumi.Input[str]] = None,
arn: Optional[pulumi.Input[str]] = None,
associated_resources: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
backend_environment_arn: Optional[pulumi.Input[str]] = None,
basic_auth_credentials: Optional[pulumi.Input[str]] = None,
branch_name: Optional[pulumi.Input[str]] = None,
custom_domains: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
description: Optional[pulumi.Input[str]] = None,
destination_branch: Optional[pulumi.Input[str]] = None,
display_name: Optional[pulumi.Input[str]] = None,
enable_auto_build: Optional[pulumi.Input[bool]] = None,
enable_basic_auth: Optional[pulumi.Input[bool]] = None,
enable_notification: Optional[pulumi.Input[bool]] = None,
enable_performance_mode: Optional[pulumi.Input[bool]] = None,
enable_pull_request_preview: Optional[pulumi.Input[bool]] = None,
environment_variables: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
framework: Optional[pulumi.Input[str]] = None,
pull_request_environment_name: Optional[pulumi.Input[str]] = None,
source_branch: Optional[pulumi.Input[str]] = None,
stage: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
tags_all: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
ttl: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering Branch resources.
:param pulumi.Input[str] app_id: The unique ID for an Amplify app.
:param pulumi.Input[str] arn: The Amazon Resource Name (ARN) for the branch.
:param pulumi.Input[Sequence[pulumi.Input[str]]] associated_resources: A list of custom resources that are linked to this branch.
:param pulumi.Input[str] backend_environment_arn: The Amazon Resource Name (ARN) for a backend environment that is part of an Amplify app.
:param pulumi.Input[str] basic_auth_credentials: The basic authorization credentials for the branch.
:param pulumi.Input[str] branch_name: The name for the branch.
:param pulumi.Input[Sequence[pulumi.Input[str]]] custom_domains: The custom domains for the branch.
:param pulumi.Input[str] description: The description for the branch.
:param pulumi.Input[str] destination_branch: The destination branch if the branch is a pull request branch.
:param pulumi.Input[str] display_name: The display name for a branch. This is used as the default domain prefix.
:param pulumi.Input[bool] enable_auto_build: Enables auto building for the branch.
:param pulumi.Input[bool] enable_basic_auth: Enables basic authorization for the branch.
:param pulumi.Input[bool] enable_notification: Enables notifications for the branch.
:param pulumi.Input[bool] enable_performance_mode: Enables performance mode for the branch.
:param pulumi.Input[bool] enable_pull_request_preview: Enables pull request previews for this branch.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] environment_variables: The environment variables for the branch.
:param pulumi.Input[str] framework: The framework for the branch.
:param pulumi.Input[str] pull_request_environment_name: The Amplify environment name for the pull request.
:param pulumi.Input[str] source_branch: The source branch if the branch is a pull request branch.
:param pulumi.Input[str] stage: Describes the current stage for the branch. Valid values: `PRODUCTION`, `BETA`, `DEVELOPMENT`, `EXPERIMENTAL`, `PULL_REQUEST`.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: Key-value mapping of resource tags. If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider-level.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags_all: A map of tags assigned to the resource, including those inherited from the provider `default_tags` configuration block.
:param pulumi.Input[str] ttl: The content Time To Live (TTL) for the website in seconds.
"""
if app_id is not None:
pulumi.set(__self__, "app_id", app_id)
if arn is not None:
pulumi.set(__self__, "arn", arn)
if associated_resources is not None:
pulumi.set(__self__, "associated_resources", associated_resources)
if backend_environment_arn is not None:
pulumi.set(__self__, "backend_environment_arn", backend_environment_arn)
if basic_auth_credentials is not None:
pulumi.set(__self__, "basic_auth_credentials", basic_auth_credentials)
if branch_name is not None:
pulumi.set(__self__, "branch_name", branch_name)
if custom_domains is not None:
pulumi.set(__self__, "custom_domains", custom_domains)
if description is not None:
pulumi.set(__self__, "description", description)
if destination_branch is not None:
pulumi.set(__self__, "destination_branch", destination_branch)
if display_name is not None:
pulumi.set(__self__, "display_name", display_name)
if enable_auto_build is not None:
pulumi.set(__self__, "enable_auto_build", enable_auto_build)
if enable_basic_auth is not None:
pulumi.set(__self__, "enable_basic_auth", enable_basic_auth)
if enable_notification is not None:
pulumi.set(__self__, "enable_notification", enable_notification)
if enable_performance_mode is not None:
pulumi.set(__self__, "enable_performance_mode", enable_performance_mode)
if enable_pull_request_preview is not None:
pulumi.set(__self__, "enable_pull_request_preview", enable_pull_request_preview)
if environment_variables is not None:
pulumi.set(__self__, "environment_variables", environment_variables)
if framework is not None:
pulumi.set(__self__, "framework", framework)
if pull_request_environment_name is not None:
pulumi.set(__self__, "pull_request_environment_name", pull_request_environment_name)
if source_branch is not None:
pulumi.set(__self__, "source_branch", source_branch)
if stage is not None:
pulumi.set(__self__, "stage", stage)
if tags is not None:
pulumi.set(__self__, "tags", tags)
if tags_all is not None:
pulumi.set(__self__, "tags_all", tags_all)
if ttl is not None:
pulumi.set(__self__, "ttl", ttl)
@property
@pulumi.getter(name="appId")
def app_id(self) -> Optional[pulumi.Input[str]]:
"""
The unique ID for an Amplify app.
"""
return pulumi.get(self, "app_id")
@app_id.setter
def app_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "app_id", value)
@property
@pulumi.getter
def arn(self) -> Optional[pulumi.Input[str]]:
"""
The Amazon Resource Name (ARN) for the branch.
"""
return pulumi.get(self, "arn")
@arn.setter
def arn(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "arn", value)
@property
@pulumi.getter(name="associatedResources")
def associated_resources(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
A list of custom resources that are linked to this branch.
"""
return pulumi.get(self, "associated_resources")
@associated_resources.setter
def associated_resources(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "associated_resources", value)
@property
@pulumi.getter(name="backendEnvironmentArn")
def backend_environment_arn(self) -> Optional[pulumi.Input[str]]:
"""
The Amazon Resource Name (ARN) for a backend environment that is part of an Amplify app.
"""
return pulumi.get(self, "backend_environment_arn")
@backend_environment_arn.setter
def backend_environment_arn(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "backend_environment_arn", value)
@property
@pulumi.getter(name="basicAuthCredentials")
def basic_auth_credentials(self) -> Optional[pulumi.Input[str]]:
"""
The basic authorization credentials for the branch.
"""
return pulumi.get(self, "basic_auth_credentials")
@basic_auth_credentials.setter
def basic_auth_credentials(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "basic_auth_credentials", value)
@property
@pulumi.getter(name="branchName")
def branch_name(self) -> Optional[pulumi.Input[str]]:
"""
The name for the branch.
"""
return pulumi.get(self, "branch_name")
@branch_name.setter
def branch_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "branch_name", value)
@property
@pulumi.getter(name="customDomains")
def custom_domains(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
The custom domains for the branch.
"""
return pulumi.get(self, "custom_domains")
@custom_domains.setter
def custom_domains(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "custom_domains", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
The description for the branch.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="destinationBranch")
def destination_branch(self) -> Optional[pulumi.Input[str]]:
"""
The destination branch if the branch is a pull request branch.
"""
return pulumi.get(self, "destination_branch")
@destination_branch.setter
def destination_branch(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "destination_branch", value)
@property
@pulumi.getter(name="displayName")
def display_name(self) -> Optional[pulumi.Input[str]]:
"""
The display name for a branch. This is used as the default domain prefix.
"""
return pulumi.get(self, "display_name")
@display_name.setter
def display_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "display_name", value)
@property
@pulumi.getter(name="enableAutoBuild")
def enable_auto_build(self) -> Optional[pulumi.Input[bool]]:
"""
Enables auto building for the branch.
"""
return pulumi.get(self, "enable_auto_build")
@enable_auto_build.setter
def enable_auto_build(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enable_auto_build", value)
@property
@pulumi.getter(name="enableBasicAuth")
def enable_basic_auth(self) -> Optional[pulumi.Input[bool]]:
"""
Enables basic authorization for the branch.
"""
return pulumi.get(self, "enable_basic_auth")
@enable_basic_auth.setter
def enable_basic_auth(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enable_basic_auth", value)
@property
@pulumi.getter(name="enableNotification")
def enable_notification(self) -> Optional[pulumi.Input[bool]]:
"""
Enables notifications for the branch.
"""
return pulumi.get(self, "enable_notification")
@enable_notification.setter
def enable_notification(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enable_notification", value)
@property
@pulumi.getter(name="enablePerformanceMode")
def enable_performance_mode(self) -> Optional[pulumi.Input[bool]]:
"""
Enables performance mode for the branch.
"""
return pulumi.get(self, "enable_performance_mode")
@enable_performance_mode.setter
def enable_performance_mode(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enable_performance_mode", value)
@property
@pulumi.getter(name="enablePullRequestPreview")
def enable_pull_request_preview(self) -> Optional[pulumi.Input[bool]]:
"""
Enables pull request previews for this branch.
"""
return pulumi.get(self, "enable_pull_request_preview")
@enable_pull_request_preview.setter
def enable_pull_request_preview(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enable_pull_request_preview", value)
@property
@pulumi.getter(name="environmentVariables")
def environment_variables(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
The environment variables for the branch.
"""
return pulumi.get(self, "environment_variables")
@environment_variables.setter
def environment_variables(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "environment_variables", value)
@property
@pulumi.getter
def framework(self) -> Optional[pulumi.Input[str]]:
"""
The framework for the branch.
"""
return pulumi.get(self, "framework")
@framework.setter
def framework(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "framework", value)
@property
@pulumi.getter(name="pullRequestEnvironmentName")
def pull_request_environment_name(self) -> Optional[pulumi.Input[str]]:
"""
The Amplify environment name for the pull request.
"""
return pulumi.get(self, "pull_request_environment_name")
@pull_request_environment_name.setter
def pull_request_environment_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "pull_request_environment_name", value)
@property
@pulumi.getter(name="sourceBranch")
def source_branch(self) -> Optional[pulumi.Input[str]]:
"""
The source branch if the branch is a pull request branch.
"""
return pulumi.get(self, "source_branch")
@source_branch.setter
def source_branch(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "source_branch", value)
@property
@pulumi.getter
def stage(self) -> Optional[pulumi.Input[str]]:
"""
Describes the current stage for the branch. Valid values: `PRODUCTION`, `BETA`, `DEVELOPMENT`, `EXPERIMENTAL`, `PULL_REQUEST`.
"""
return pulumi.get(self, "stage")
@stage.setter
def stage(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "stage", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
Key-value mapping of resource tags. If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider-level.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@property
@pulumi.getter(name="tagsAll")
def tags_all(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A map of tags assigned to the resource, including those inherited from the provider `default_tags` configuration block.
"""
return pulumi.get(self, "tags_all")
@tags_all.setter
def tags_all(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags_all", value)
@property
@pulumi.getter
def ttl(self) -> Optional[pulumi.Input[str]]:
"""
The content Time To Live (TTL) for the website in seconds.
"""
return pulumi.get(self, "ttl")
@ttl.setter
def ttl(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "ttl", value)
class Branch(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
app_id: Optional[pulumi.Input[str]] = None,
backend_environment_arn: Optional[pulumi.Input[str]] = None,
basic_auth_credentials: Optional[pulumi.Input[str]] = None,
branch_name: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
display_name: Optional[pulumi.Input[str]] = None,
enable_auto_build: Optional[pulumi.Input[bool]] = None,
enable_basic_auth: Optional[pulumi.Input[bool]] = None,
enable_notification: Optional[pulumi.Input[bool]] = None,
enable_performance_mode: Optional[pulumi.Input[bool]] = None,
enable_pull_request_preview: Optional[pulumi.Input[bool]] = None,
environment_variables: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
framework: Optional[pulumi.Input[str]] = None,
pull_request_environment_name: Optional[pulumi.Input[str]] = None,
stage: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
ttl: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
Provides an Amplify Branch resource.
## Example Usage
```python
import pulumi
import pulumi_aws as aws
example = aws.amplify.App("example")
master = aws.amplify.Branch("master",
app_id=example.id,
branch_name="master",
framework="React",
stage="PRODUCTION",
environment_variables={
"REACT_APP_API_SERVER": "https://api.example.com",
})
```
### Notifications
Amplify Console uses EventBridge (formerly known as CloudWatch Events) and SNS for email notifications. To implement the same functionality, set `enable_notification` on an `amplify.Branch` resource and create an EventBridge rule, an SNS topic, and SNS subscriptions.
```python
import pulumi
import json
import pulumi_aws as aws
example = aws.amplify.App("example")
master = aws.amplify.Branch("master",
app_id=example.id,
branch_name="master",
enable_notification=True)
# EventBridge Rule for Amplify notifications
amplify_app_master_event_rule = aws.cloudwatch.EventRule("amplifyAppMasterEventRule",
description=master.branch_name.apply(lambda branch_name: f"AWS Amplify build notifications for : App: {aws_amplify_app['app']['id']} Branch: {branch_name}"),
event_pattern=pulumi.Output.all(example.id, master.branch_name).apply(lambda id, branch_name: json.dumps({
"detail": {
"appId": [id],
"branchName": [branch_name],
"jobStatus": [
"SUCCEED",
"FAILED",
"STARTED",
],
},
"detail-type": ["Amplify Deployment Status Change"],
"source": ["aws.amplify"],
})))
amplify_app_master_topic = aws.sns.Topic("amplifyAppMasterTopic")
amplify_app_master_event_target = aws.cloudwatch.EventTarget("amplifyAppMasterEventTarget",
rule=amplify_app_master_event_rule.name,
arn=amplify_app_master_topic.arn,
input_transformer=aws.cloudwatch.EventTargetInputTransformerArgs(
input_paths={
"jobId": "$.detail.jobId",
"appId": "$.detail.appId",
"region": "$.region",
"branch": "$.detail.branchName",
"status": "$.detail.jobStatus",
},
input_template="\"Build notification from the AWS Amplify Console for app: https://<branch>.<appId>.amplifyapp.com/. Your build status is <status>. Go to https://console.aws.amazon.com/amplify/home?region=<region>#<appId>/<branch>/<jobId> to view details on your build. \"",
))
# SNS Topic for Amplify notifications
amplify_app_master_policy_document = pulumi.Output.all(master.arn, amplify_app_master_topic.arn).apply(lambda args: aws.iam.get_policy_document(statements=[aws.iam.GetPolicyDocumentStatementArgs(
sid=f"Allow_Publish_Events {args[0]}",
effect="Allow",
actions=["SNS:Publish"],
principals=[aws.iam.GetPolicyDocumentStatementPrincipalArgs(
type="Service",
identifiers=["events.amazonaws.com"],
)],
resources=[args[1]],
)]))
amplify_app_master_topic_policy = aws.sns.TopicPolicy("amplifyAppMasterTopicPolicy",
arn=amplify_app_master_topic.arn,
policy=amplify_app_master_policy_document.apply(lambda doc: doc.json))
```
## Import
An Amplify branch can be imported using `app_id` and `branch_name`, e.g.,
```sh
$ pulumi import aws:amplify/branch:Branch master d2ypk4k47z8u6/master
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] app_id: The unique ID for an Amplify app.
:param pulumi.Input[str] backend_environment_arn: The Amazon Resource Name (ARN) for a backend environment that is part of an Amplify app.
:param pulumi.Input[str] basic_auth_credentials: The basic authorization credentials for the branch.
:param pulumi.Input[str] branch_name: The name for the branch.
:param pulumi.Input[str] description: The description for the branch.
:param pulumi.Input[str] display_name: The display name for a branch. This is used as the default domain prefix.
:param pulumi.Input[bool] enable_auto_build: Enables auto building for the branch.
:param pulumi.Input[bool] enable_basic_auth: Enables basic authorization for the branch.
:param pulumi.Input[bool] enable_notification: Enables notifications for the branch.
:param pulumi.Input[bool] enable_performance_mode: Enables performance mode for the branch.
:param pulumi.Input[bool] enable_pull_request_preview: Enables pull request previews for this branch.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] environment_variables: The environment variables for the branch.
:param pulumi.Input[str] framework: The framework for the branch.
:param pulumi.Input[str] pull_request_environment_name: The Amplify environment name for the pull request.
:param pulumi.Input[str] stage: Describes the current stage for the branch. Valid values: `PRODUCTION`, `BETA`, `DEVELOPMENT`, `EXPERIMENTAL`, `PULL_REQUEST`.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: Key-value mapping of resource tags. If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider-level.
:param pulumi.Input[str] ttl: The content Time To Live (TTL) for the website in seconds.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: BranchArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Provides an Amplify Branch resource.
## Example Usage
```python
import pulumi
import pulumi_aws as aws
example = aws.amplify.App("example")
master = aws.amplify.Branch("master",
app_id=example.id,
branch_name="master",
framework="React",
stage="PRODUCTION",
environment_variables={
"REACT_APP_API_SERVER": "https://api.example.com",
})
```
### Notifications
Amplify Console uses EventBridge (formerly known as CloudWatch Events) and SNS for email notifications. To implement the same functionality, set `enable_notification` on an `amplify.Branch` resource and create an EventBridge rule, an SNS topic, and SNS subscriptions.
```python
import pulumi
import json
import pulumi_aws as aws
example = aws.amplify.App("example")
master = aws.amplify.Branch("master",
app_id=example.id,
branch_name="master",
enable_notification=True)
# EventBridge Rule for Amplify notifications
# EventBridge Rule for Amplify notifications.
# Note: Output.all(...).apply(...) passes a single list of resolved values,
# so the lambda takes one parameter and indexes into it.
amplify_app_master_event_rule = aws.cloudwatch.EventRule("amplifyAppMasterEventRule",
description=pulumi.Output.all(example.id, master.branch_name).apply(lambda args: f"AWS Amplify build notifications for App: {args[0]} Branch: {args[1]}"),
event_pattern=pulumi.Output.all(example.id, master.branch_name).apply(lambda args: json.dumps({
"detail": {
"appId": [args[0]],
"branchName": [args[1]],
"jobStatus": [
"SUCCEED",
"FAILED",
"STARTED",
],
},
"detail-type": ["Amplify Deployment Status Change"],
"source": ["aws.amplify"],
})))
amplify_app_master_topic = aws.sns.Topic("amplifyAppMasterTopic")
amplify_app_master_event_target = aws.cloudwatch.EventTarget("amplifyAppMasterEventTarget",
rule=amplify_app_master_event_rule.name,
arn=amplify_app_master_topic.arn,
input_transformer=aws.cloudwatch.EventTargetInputTransformerArgs(
input_paths={
"jobId": "$.detail.jobId",
"appId": "$.detail.appId",
"region": "$.region",
"branch": "$.detail.branchName",
"status": "$.detail.jobStatus",
},
input_template="\"Build notification from the AWS Amplify Console for app: https://<branch>.<appId>.amplifyapp.com/. Your build status is <status>. Go to https://console.aws.amazon.com/amplify/home?region=<region>#<appId>/<branch>/<jobId> to view details on your build. \"",
))
# SNS Topic for Amplify notifications
amplify_app_master_policy_document = pulumi.Output.all(master.arn, amplify_app_master_topic.arn).apply(lambda args: aws.iam.get_policy_document(statements=[aws.iam.GetPolicyDocumentStatementArgs(
sid=f"Allow_Publish_Events {args[0]}",
effect="Allow",
actions=["SNS:Publish"],
principals=[aws.iam.GetPolicyDocumentStatementPrincipalArgs(
type="Service",
identifiers=["events.amazonaws.com"],
)],
resources=[args[1]],
)]))
amplify_app_master_topic_policy = aws.sns.TopicPolicy("amplifyAppMasterTopicPolicy",
arn=amplify_app_master_topic.arn,
policy=amplify_app_master_policy_document.apply(lambda doc: doc.json))
```
## Import
An Amplify branch can be imported using `app_id` and `branch_name`, e.g.,
```sh
$ pulumi import aws:amplify/branch:Branch master d2ypk4k47z8u6/master
```
:param str resource_name: The name of the resource.
:param BranchArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(BranchArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
app_id: Optional[pulumi.Input[str]] = None,
backend_environment_arn: Optional[pulumi.Input[str]] = None,
basic_auth_credentials: Optional[pulumi.Input[str]] = None,
branch_name: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
display_name: Optional[pulumi.Input[str]] = None,
enable_auto_build: Optional[pulumi.Input[bool]] = None,
enable_basic_auth: Optional[pulumi.Input[bool]] = None,
enable_notification: Optional[pulumi.Input[bool]] = None,
enable_performance_mode: Optional[pulumi.Input[bool]] = None,
enable_pull_request_preview: Optional[pulumi.Input[bool]] = None,
environment_variables: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
framework: Optional[pulumi.Input[str]] = None,
pull_request_environment_name: Optional[pulumi.Input[str]] = None,
stage: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
ttl: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = BranchArgs.__new__(BranchArgs)
if app_id is None and not opts.urn:
raise TypeError("Missing required property 'app_id'")
__props__.__dict__["app_id"] = app_id
__props__.__dict__["backend_environment_arn"] = backend_environment_arn
__props__.__dict__["basic_auth_credentials"] = basic_auth_credentials
if branch_name is None and not opts.urn:
raise TypeError("Missing required property 'branch_name'")
__props__.__dict__["branch_name"] = branch_name
__props__.__dict__["description"] = description
__props__.__dict__["display_name"] = display_name
__props__.__dict__["enable_auto_build"] = enable_auto_build
__props__.__dict__["enable_basic_auth"] = enable_basic_auth
__props__.__dict__["enable_notification"] = enable_notification
__props__.__dict__["enable_performance_mode"] = enable_performance_mode
__props__.__dict__["enable_pull_request_preview"] = enable_pull_request_preview
__props__.__dict__["environment_variables"] = environment_variables
__props__.__dict__["framework"] = framework
__props__.__dict__["pull_request_environment_name"] = pull_request_environment_name
__props__.__dict__["stage"] = stage
__props__.__dict__["tags"] = tags
__props__.__dict__["ttl"] = ttl
__props__.__dict__["arn"] = None
__props__.__dict__["associated_resources"] = None
__props__.__dict__["custom_domains"] = None
__props__.__dict__["destination_branch"] = None
__props__.__dict__["source_branch"] = None
__props__.__dict__["tags_all"] = None
super(Branch, __self__).__init__(
'aws:amplify/branch:Branch',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
app_id: Optional[pulumi.Input[str]] = None,
arn: Optional[pulumi.Input[str]] = None,
associated_resources: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
backend_environment_arn: Optional[pulumi.Input[str]] = None,
basic_auth_credentials: Optional[pulumi.Input[str]] = None,
branch_name: Optional[pulumi.Input[str]] = None,
custom_domains: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
description: Optional[pulumi.Input[str]] = None,
destination_branch: Optional[pulumi.Input[str]] = None,
display_name: Optional[pulumi.Input[str]] = None,
enable_auto_build: Optional[pulumi.Input[bool]] = None,
enable_basic_auth: Optional[pulumi.Input[bool]] = None,
enable_notification: Optional[pulumi.Input[bool]] = None,
enable_performance_mode: Optional[pulumi.Input[bool]] = None,
enable_pull_request_preview: Optional[pulumi.Input[bool]] = None,
environment_variables: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
framework: Optional[pulumi.Input[str]] = None,
pull_request_environment_name: Optional[pulumi.Input[str]] = None,
source_branch: Optional[pulumi.Input[str]] = None,
stage: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
tags_all: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
ttl: Optional[pulumi.Input[str]] = None) -> 'Branch':
"""
Get an existing Branch resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] app_id: The unique ID for an Amplify app.
:param pulumi.Input[str] arn: The Amazon Resource Name (ARN) for the branch.
:param pulumi.Input[Sequence[pulumi.Input[str]]] associated_resources: A list of custom resources that are linked to this branch.
:param pulumi.Input[str] backend_environment_arn: The Amazon Resource Name (ARN) for a backend environment that is part of an Amplify app.
:param pulumi.Input[str] basic_auth_credentials: The basic authorization credentials for the branch.
:param pulumi.Input[str] branch_name: The name for the branch.
:param pulumi.Input[Sequence[pulumi.Input[str]]] custom_domains: The custom domains for the branch.
:param pulumi.Input[str] description: The description for the branch.
:param pulumi.Input[str] destination_branch: The destination branch if the branch is a pull request branch.
:param pulumi.Input[str] display_name: The display name for a branch. This is used as the default domain prefix.
:param pulumi.Input[bool] enable_auto_build: Enables auto building for the branch.
:param pulumi.Input[bool] enable_basic_auth: Enables basic authorization for the branch.
:param pulumi.Input[bool] enable_notification: Enables notifications for the branch.
:param pulumi.Input[bool] enable_performance_mode: Enables performance mode for the branch.
:param pulumi.Input[bool] enable_pull_request_preview: Enables pull request previews for this branch.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] environment_variables: The environment variables for the branch.
:param pulumi.Input[str] framework: The framework for the branch.
:param pulumi.Input[str] pull_request_environment_name: The Amplify environment name for the pull request.
:param pulumi.Input[str] source_branch: The source branch if the branch is a pull request branch.
:param pulumi.Input[str] stage: Describes the current stage for the branch. Valid values: `PRODUCTION`, `BETA`, `DEVELOPMENT`, `EXPERIMENTAL`, `PULL_REQUEST`.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: Key-value mapping of resource tags. If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider-level.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags_all: A map of tags assigned to the resource, including those inherited from the provider `default_tags` configuration block.
:param pulumi.Input[str] ttl: The content Time To Live (TTL) for the website in seconds.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _BranchState.__new__(_BranchState)
__props__.__dict__["app_id"] = app_id
__props__.__dict__["arn"] = arn
__props__.__dict__["associated_resources"] = associated_resources
__props__.__dict__["backend_environment_arn"] = backend_environment_arn
__props__.__dict__["basic_auth_credentials"] = basic_auth_credentials
__props__.__dict__["branch_name"] = branch_name
__props__.__dict__["custom_domains"] = custom_domains
__props__.__dict__["description"] = description
__props__.__dict__["destination_branch"] = destination_branch
__props__.__dict__["display_name"] = display_name
__props__.__dict__["enable_auto_build"] = enable_auto_build
__props__.__dict__["enable_basic_auth"] = enable_basic_auth
__props__.__dict__["enable_notification"] = enable_notification
__props__.__dict__["enable_performance_mode"] = enable_performance_mode
__props__.__dict__["enable_pull_request_preview"] = enable_pull_request_preview
__props__.__dict__["environment_variables"] = environment_variables
__props__.__dict__["framework"] = framework
__props__.__dict__["pull_request_environment_name"] = pull_request_environment_name
__props__.__dict__["source_branch"] = source_branch
__props__.__dict__["stage"] = stage
__props__.__dict__["tags"] = tags
__props__.__dict__["tags_all"] = tags_all
__props__.__dict__["ttl"] = ttl
return Branch(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="appId")
def app_id(self) -> pulumi.Output[str]:
"""
The unique ID for an Amplify app.
"""
return pulumi.get(self, "app_id")
@property
@pulumi.getter
def arn(self) -> pulumi.Output[str]:
"""
The Amazon Resource Name (ARN) for the branch.
"""
return pulumi.get(self, "arn")
@property
@pulumi.getter(name="associatedResources")
def associated_resources(self) -> pulumi.Output[Sequence[str]]:
"""
A list of custom resources that are linked to this branch.
"""
return pulumi.get(self, "associated_resources")
@property
@pulumi.getter(name="backendEnvironmentArn")
def backend_environment_arn(self) -> pulumi.Output[Optional[str]]:
"""
The Amazon Resource Name (ARN) for a backend environment that is part of an Amplify app.
"""
return pulumi.get(self, "backend_environment_arn")
@property
@pulumi.getter(name="basicAuthCredentials")
def basic_auth_credentials(self) -> pulumi.Output[Optional[str]]:
"""
The basic authorization credentials for the branch.
"""
return pulumi.get(self, "basic_auth_credentials")
@property
@pulumi.getter(name="branchName")
def branch_name(self) -> pulumi.Output[str]:
"""
The name for the branch.
"""
return pulumi.get(self, "branch_name")
@property
@pulumi.getter(name="customDomains")
def custom_domains(self) -> pulumi.Output[Sequence[str]]:
"""
The custom domains for the branch.
"""
return pulumi.get(self, "custom_domains")
@property
@pulumi.getter
def description(self) -> pulumi.Output[Optional[str]]:
"""
The description for the branch.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter(name="destinationBranch")
def destination_branch(self) -> pulumi.Output[str]:
"""
The destination branch if the branch is a pull request branch.
"""
return pulumi.get(self, "destination_branch")
@property
@pulumi.getter(name="displayName")
def display_name(self) -> pulumi.Output[str]:
"""
The display name for a branch. This is used as the default domain prefix.
"""
return pulumi.get(self, "display_name")
@property
@pulumi.getter(name="enableAutoBuild")
def enable_auto_build(self) -> pulumi.Output[Optional[bool]]:
"""
Enables auto building for the branch.
"""
return pulumi.get(self, "enable_auto_build")
@property
@pulumi.getter(name="enableBasicAuth")
def enable_basic_auth(self) -> pulumi.Output[Optional[bool]]:
"""
Enables basic authorization for the branch.
"""
return pulumi.get(self, "enable_basic_auth")
@property
@pulumi.getter(name="enableNotification")
def enable_notification(self) -> pulumi.Output[Optional[bool]]:
"""
Enables notifications for the branch.
"""
return pulumi.get(self, "enable_notification")
@property
@pulumi.getter(name="enablePerformanceMode")
def enable_performance_mode(self) -> pulumi.Output[Optional[bool]]:
"""
Enables performance mode for the branch.
"""
return pulumi.get(self, "enable_performance_mode")
@property
@pulumi.getter(name="enablePullRequestPreview")
def enable_pull_request_preview(self) -> pulumi.Output[Optional[bool]]:
"""
Enables pull request previews for this branch.
"""
return pulumi.get(self, "enable_pull_request_preview")
@property
@pulumi.getter(name="environmentVariables")
def environment_variables(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
"""
The environment variables for the branch.
"""
return pulumi.get(self, "environment_variables")
@property
@pulumi.getter
def framework(self) -> pulumi.Output[Optional[str]]:
"""
The framework for the branch.
"""
return pulumi.get(self, "framework")
@property
@pulumi.getter(name="pullRequestEnvironmentName")
def pull_request_environment_name(self) -> pulumi.Output[Optional[str]]:
"""
The Amplify environment name for the pull request.
"""
return pulumi.get(self, "pull_request_environment_name")
@property
@pulumi.getter(name="sourceBranch")
def source_branch(self) -> pulumi.Output[str]:
"""
The source branch if the branch is a pull request branch.
"""
return pulumi.get(self, "source_branch")
@property
@pulumi.getter
def stage(self) -> pulumi.Output[Optional[str]]:
"""
Describes the current stage for the branch. Valid values: `PRODUCTION`, `BETA`, `DEVELOPMENT`, `EXPERIMENTAL`, `PULL_REQUEST`.
"""
return pulumi.get(self, "stage")
@property
@pulumi.getter
def tags(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
"""
Key-value mapping of resource tags. If configured with a provider `default_tags` configuration block present, tags with matching keys will overwrite those defined at the provider-level.
"""
return pulumi.get(self, "tags")
@property
@pulumi.getter(name="tagsAll")
def tags_all(self) -> pulumi.Output[Mapping[str, str]]:
"""
A map of tags assigned to the resource, including those inherited from the provider `default_tags` configuration block.
"""
return pulumi.get(self, "tags_all")
@property
@pulumi.getter
def ttl(self) -> pulumi.Output[Optional[str]]:
"""
The content Time To Live (TTL) for the website in seconds.
"""
return pulumi.get(self, "ttl")
| 46.563005 | 297 | 0.65487 | 6,651 | 57,645 | 5.439633 | 0.045407 | 0.092733 | 0.075071 | 0.058376 | 0.945189 | 0.930485 | 0.913652 | 0.904005 | 0.895215 | 0.874955 | 0 | 0.000297 | 0.239726 | 57,645 | 1,237 | 298 | 46.600647 | 0.825218 | 0.347194 | 0 | 0.80581 | 1 | 0 | 0.111811 | 0.047032 | 0 | 0 | 0 | 0 | 0 | 1 | 0.168196 | false | 0.001529 | 0.007645 | 0 | 0.278287 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
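The generated `get` classmethod above follows a common SDK pattern: allocate the state holder with `__new__` (skipping `__init__`), populate its `__dict__` field by field, then hand it to the resource constructor. A minimal pure-Python sketch of that pattern — the `_State`/`Resource` names here are illustrative stand-ins, not the real Pulumi classes:

```python
class _State:
    """Attribute bag for a resource; normally built by the generated SDK."""

class Resource:
    def __init__(self, name, props):
        self.name = name
        self.props = props

def get(name, **attrs):
    # Allocate the state object without running __init__, then fill its
    # __dict__ directly -- mirroring `_BranchState.__new__(_BranchState)`
    # and the `__props__.__dict__[...] = ...` assignments above.
    props = _State.__new__(_State)
    props.__dict__.update(attrs)
    return Resource(name, props)

branch = get("master", app_id="d123", branch_name="master", stage="PRODUCTION")
print(branch.props.stage)  # PRODUCTION
```

Bypassing `__init__` lets the SDK rebuild a resource from remote state without re-running constructor-side validation.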
67ba00472016009edfe015eb36b0298870d2f013 | 32 | py | Python | t.py | kacystocks/programmers_assistant | 31e15cc945ffc4ebcae198f70a60b5e630bd6fb0 | [
"Intel"
] | null | null | null | t.py | kacystocks/programmers_assistant | 31e15cc945ffc4ebcae198f70a60b5e630bd6fb0 | [
"Intel"
] | null | null | null | t.py | kacystocks/programmers_assistant | 31e15cc945ffc4ebcae198f70a60b5e630bd6fb0 | [
"Intel"
] | null | null | null | print("asdfjkbasdlfjbsdljkfvb")
| 16 | 31 | 0.84375 | 2 | 32 | 13.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03125 | 32 | 1 | 32 | 32 | 0.870968 | 0 | 0 | 0 | 0 | 0 | 0.6875 | 0.6875 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 8 |
67c9ee7ee1dea66135b61c0352848bd687af5ead | 3,103 | py | Python | V.BETA/claseEsquinas.py | Muteado/proyecto | 54cf8babe150a33f75851f6686094de8f743d332 | [
"MIT"
] | null | null | null | V.BETA/claseEsquinas.py | Muteado/proyecto | 54cf8babe150a33f75851f6686094de8f743d332 | [
"MIT"
] | null | null | null | V.BETA/claseEsquinas.py | Muteado/proyecto | 54cf8babe150a33f75851f6686094de8f743d332 | [
"MIT"
] | null | null | null | class Datos:
def __init__(self):
pass
    # Blocks
'''
def generar_bloques_esquinas(posIniX,posIniY,largoBloq,altoBloq):
        # BLUE BLOCK
pygame.draw.rect(Pant,Negro,
(posIniX,
posIniY
,largoBloq+2,altoBloq+2))#Angulo N
pygame.draw.rect(Pant,Azul,
(posIniX+1,
posIniY+1
,largoBloq,altoBloq))#Angulo B
pygame.draw.rect(Pant,Negro,
(posIniX,
posIniY+21
,largoBloq+2,altoBloq+2))#Angulo N
pygame.draw.rect(Pant,Azul,
(posIniX+1,
posIniY+22
,largoBloq,altoBloq))#Angulo B
pygame.draw.rect(Pant,Negro,
(posIniX,
posIniY+42
,largoBloq+2,altoBloq+2))#Angulo N
pygame.draw.rect(Pant,Azul,
(posIniX+1,
posIniY+43
,largoBloq,altoBloq))#Angulo B
draw_text('Ángulo : ', font3, Blanco, Pant, posIniX+5, posIniY+1)
draw_text('Velocidad : ', font3, Blanco, Pant, posIniX+5, posIniY+25)
draw_text('Metros : ', font3, Blanco, Pant, posIniX+5, posIniY+46)
        # RED BLOCK
pygame.draw.rect(Pant,Negro,
(Pant.get_width()-largoBloq,
posIniY
,largoBloq+2,altoBloq+2))#Angulo N
pygame.draw.rect(Pant,Rojo,
(Pant.get_width()-largoBloq+1,
posIniY+1
,largoBloq,altoBloq))#Angulo B
pygame.draw.rect(Pant,Negro,
(Pant.get_width()-largoBloq,
posIniY+21
,largoBloq+2,altoBloq+2))#Angulo N
pygame.draw.rect(Pant,Rojo,
(Pant.get_width()-largoBloq+1,
posIniY+22
,largoBloq,altoBloq))#Angulo B
pygame.draw.rect(Pant,Negro,
(Pant.get_width()-largoBloq,
posIniY+42
,largoBloq+2,altoBloq+2))#Angulo N
pygame.draw.rect(Pant,Rojo,
(Pant.get_width()-largoBloq+1,
posIniY+43
,largoBloq,altoBloq))#Angulo B
draw_text('Ángulo : ', font3, Blanco, Pant, Pant.get_width()-largoBloq+5, posIniY+1)
draw_text('Velocidad : ', font3, Blanco, Pant, Pant.get_width()-largoBloq+5, posIniY+25)
draw_text('Metros : ', font3, Blanco, Pant, Pant.get_width()-largoBloq+5, posIniY+46)
'''
    # Data
'''
def texto_esquinas(texto_usuario,rectanguloTXT,color,activo):
if activo == True:
color = Rojo
else:
color = color
#print(texto_usuario)
pygame.draw.rect(Pant,color,rectanguloTXT,2)
textoenPant = font3.render(texto_usuario,True,Blanco)
Pant.blit(textoenPant,(rectanguloTXT.x+5,rectanguloTXT.y+5))
rectanguloTXT.w = textoenPant.get_width()+10
''' | 36.081395 | 93 | 0.515308 | 320 | 3,103 | 4.915625 | 0.18125 | 0.082645 | 0.115702 | 0.14876 | 0.739987 | 0.739987 | 0.721551 | 0.702479 | 0.702479 | 0.555626 | 0 | 0.034236 | 0.36932 | 3,103 | 86 | 94 | 36.081395 | 0.769545 | 0.003867 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.333333 | 0 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 7 |
67ffa8cd51c49a8a850c68ed397034e7b5e60711 | 1,654 | py | Python | grilly_api/hello.py | QualmandDriven/grilly | 0a1d5c95be4dd6f2423b3660ecece4f1ecedba70 | [
"MIT"
] | null | null | null | grilly_api/hello.py | QualmandDriven/grilly | 0a1d5c95be4dd6f2423b3660ecece4f1ecedba70 | [
"MIT"
] | null | null | null | grilly_api/hello.py | QualmandDriven/grilly | 0a1d5c95be4dd6f2423b3660ecece4f1ecedba70 | [
"MIT"
] | null | null | null | from flask import Flask
from flask import jsonify
app = Flask(__name__)
class Barbecue:
Name = ""
@app.route("/")
def hello():
return "Hello World!"
@app.route("/barbecues")
def getBarbecues():
    b = Barbecue()
    b.Name = "Wammerl"  # note: built but never used in the hard-coded response below
    return jsonify([
        {
            "name": "Steak",
            "cookingLevels": [
                {"name": "Raw", "requiredSeconds": 240,
                 "turns": [{"turnAfterSeconds": s} for s in (30, 30, 30, 30, 60)]},
                {"name": "Medium", "requiredSeconds": 360,
                 "turns": [{"turnAfterSeconds": s} for s in (30, 30, 30, 30, 120)]},
                {"name": "Well Done", "requiredSeconds": 480,
                 "turns": [{"turnAfterSeconds": s} for s in (30, 30, 30, 30, 120, 120)]},
            ],
        },
        {
            "name": "Käsegriller",
            "cookingLevels": [
                {"name": "Hell", "requiredSeconds": 240,
                 "turns": [{"turnAfterSeconds": s} for s in (30, 30, 30, 30, 60)]},
                {"name": "Braun", "requiredSeconds": 360,
                 "turns": [{"turnAfterSeconds": s} for s in (30, 30, 30, 30, 120)]},
                {"name": "Dunkel", "requiredSeconds": 480,
                 "turns": [{"turnAfterSeconds": s} for s in (30, 30, 30, 30, 120, 120)]},
            ],
        },
    ])
if __name__ == "__main__":
app.run() | 87.052632 | 1,365 | 0.618501 | 139 | 1,654 | 7.273381 | 0.280576 | 0.4273 | 0.807122 | 0.64095 | 0.727992 | 0.727992 | 0.727992 | 0.727992 | 0.727992 | 0.727992 | 0 | 0.063264 | 0.159008 | 1,654 | 19 | 1,366 | 87.052632 | 0.663551 | 0 | 0 | 0 | 0 | 0 | 0.469486 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.133333 | false | 0 | 0.133333 | 0.066667 | 0.533333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 10 |
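Each cooking level in the response above pairs a `requiredSeconds` total with a list of `turnAfterSeconds` intervals. A small sketch of how a client might turn those intervals into a cumulative turning schedule — the dict below is a trimmed copy of the hard-coded data, not a live request:

```python
steak = {
    "name": "Steak",
    "cookingLevels": [
        {"name": "Raw", "requiredSeconds": 240,
         "turns": [{"turnAfterSeconds": s} for s in (30, 30, 30, 30, 60)]},
    ],
}

def turn_schedule(level):
    """Yield the cumulative seconds at which the food should be turned."""
    total = 0
    for turn in level["turns"]:
        total += turn["turnAfterSeconds"]
        yield total

level = steak["cookingLevels"][0]
print(list(turn_schedule(level)))  # [30, 60, 90, 120, 180]
```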
db2eb2a7abdb6db3e29357e204af7f0571c95c84 | 2,729 | py | Python | examples/complex/entry.py | ryanchao2012/airfly | 230ddd88885defc67485fa0c51f66c4a67ae98a9 | [
"MIT"
] | 7 | 2021-09-27T11:38:48.000Z | 2022-02-01T06:06:24.000Z | examples/complex/entry.py | ryanchao2012/airfly | 230ddd88885defc67485fa0c51f66c4a67ae98a9 | [
"MIT"
] | null | null | null | examples/complex/entry.py | ryanchao2012/airfly | 230ddd88885defc67485fa0c51f66c4a67ae98a9 | [
"MIT"
] | null | null | null | from airfly.model.airflow import AirflowTask
# Create
class create_entry_group(AirflowTask):
operator_class = "BashOperator"
params = dict(bash_command="echo create_entry_group")
class create_entry_group_result(AirflowTask):
operator_class = "BashOperator"
params = dict(bash_command="echo create_entry_group_result")
upstreams = create_entry_group
class create_entry_group_result2(AirflowTask):
operator_class = "BashOperator"
params = dict(bash_command="echo create_entry_group_result2")
upstreams = create_entry_group
class create_entry_gcs(AirflowTask):
operator_class = "BashOperator"
params = dict(bash_command="echo create_entry_gcs")
upstreams = create_entry_group
class create_entry_gcs_result(AirflowTask):
operator_class = "BashOperator"
params = dict(bash_command="echo create_entry_gcs_result")
upstreams = create_entry_gcs
class create_entry_gcs_result2(AirflowTask):
operator_class = "BashOperator"
params = dict(bash_command="echo create_entry_gcs_result")
upstreams = create_entry_gcs
# Delete
class delete_entry(AirflowTask):
operator_class = "BashOperator"
params = dict(bash_command="echo delete_entry")
upstreams = create_entry_gcs
class delete_entry_group(AirflowTask):
operator_class = "BashOperator"
params = dict(bash_command="echo delete_entry_group")
upstreams = create_entry_group
downstreams = delete_entry
# Get
class get_entry_group(AirflowTask):
operator_class = "BashOperator"
params = dict(bash_command="echo get_entry_group")
upstreams = create_entry_group
downstreams = delete_entry_group
class get_entry_group_result(AirflowTask):
operator_class = "BashOperator"
params = dict(bash_command="echo get_entry_group_result")
upstreams = get_entry_group
class get_entry(AirflowTask):
operator_class = "BashOperator"
params = dict(bash_command="echo get_entry")
upstreams = create_entry_gcs
downstreams = delete_entry
class get_entry_result(AirflowTask):
operator_class = "BashOperator"
params = dict(bash_command="echo get_entry_result")
upstreams = get_entry
# Lookup
class lookup_entry(AirflowTask):
operator_class = "BashOperator"
params = dict(bash_command="echo lookup_entry")
upstreams = create_entry_gcs
downstreams = delete_entry
class lookup_entry_result(AirflowTask):
operator_class = "BashOperator"
params = dict(bash_command="echo lookup_entry_result")
upstreams = lookup_entry
# Update
class update_entry(AirflowTask):
operator_class = "BashOperator"
params = dict(bash_command="echo update_entry")
upstreams = create_entry_gcs
downstreams = delete_entry
| 26.240385 | 65 | 0.765115 | 323 | 2,729 | 6.080495 | 0.080495 | 0.128819 | 0.183299 | 0.274949 | 0.881365 | 0.848778 | 0.848778 | 0.819756 | 0.749491 | 0.648167 | 0 | 0.001308 | 0.159399 | 2,729 | 103 | 66 | 26.495146 | 0.854839 | 0.011359 | 0 | 0.492308 | 0 | 0 | 0.193536 | 0.044205 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.015385 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
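The task classes above encode the DAG purely through `upstreams`/`downstreams` class attributes, which may hold either a single class or a tuple of classes. A simplified sketch of walking such declarations into edge pairs — these are minimal stand-ins, not airfly's real internals:

```python
class Task:
    """Minimal stand-in for AirflowTask: dependencies are class attributes."""
    upstreams = ()
    downstreams = ()

class create_entry(Task):
    pass

class get_entry(Task):
    upstreams = create_entry               # a single class...

class delete_entry(Task):
    upstreams = (create_entry, get_entry)  # ...or a tuple of classes

def _as_tuple(value):
    return value if isinstance(value, tuple) else (value,)

def edges(tasks):
    """Collect (upstream_name, downstream_name) pairs from the declarations."""
    out = []
    for task in tasks:
        out += [(up.__name__, task.__name__) for up in _as_tuple(task.upstreams)]
        out += [(task.__name__, down.__name__) for down in _as_tuple(task.downstreams)]
    return out

print(edges([create_entry, get_entry, delete_entry]))
# [('create_entry', 'get_entry'), ('create_entry', 'delete_entry'), ('get_entry', 'delete_entry')]
```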
e1edbc72a20fef563aa82b480409285f18215f25 | 162 | py | Python | FinMind/BackTestSystem/__init__.py | HarshCasper/FinMind | 7b7571e443525edcd52c7f53e7fb0daca42b1f60 | [
"Apache-2.0"
] | 2 | 2021-01-29T07:55:52.000Z | 2021-01-29T07:55:56.000Z | FinMind/BackTestSystem/__init__.py | HarshCasper/FinMind | 7b7571e443525edcd52c7f53e7fb0daca42b1f60 | [
"Apache-2.0"
] | null | null | null | FinMind/BackTestSystem/__init__.py | HarshCasper/FinMind | 7b7571e443525edcd52c7f53e7fb0daca42b1f60 | [
"Apache-2.0"
] | 1 | 2021-01-15T08:29:37.000Z | 2021-01-15T08:29:37.000Z | from FinMind.BackTestSystem.BaseClass import BackTest, Strategy
from FinMind.BackTestSystem import Strategies
from FinMind.BackTestSystem import BaseClass, utils
| 40.5 | 63 | 0.876543 | 18 | 162 | 7.888889 | 0.5 | 0.232394 | 0.528169 | 0.43662 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08642 | 162 | 3 | 64 | 54 | 0.959459 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
c001591212b8ace52c13565037a7f3dc0bc3030b | 7,432 | py | Python | tests/integration/serverless/v1/test_service.py | timgates42/twilio-python | ef29d03a4857b62b616df4a8f4f2b7c294afbb99 | [
"MIT"
] | 1 | 2021-01-07T11:41:11.000Z | 2021-01-07T11:41:11.000Z | twilio-twilio-python-2fb5d37/tests/integration/serverless/v1/test_service.py | RomanPanshin/XClassBack | 6bb8bb5458fd991bcb22ed8c315d10cc5cea9d38 | [
"BSD-2-Clause"
] | 1 | 2021-08-21T22:54:01.000Z | 2021-08-23T19:39:42.000Z | tests/integration/serverless/v1/test_service.py | team-telnyx/twexit-python | 69e11c5c2b5681f9bc410795dda0cf8942219e6f | [
"MIT"
] | 4 | 2021-03-25T09:00:08.000Z | 2021-08-05T06:54:23.000Z | # coding=utf-8
r"""
This code was generated by
\ / _ _ _| _ _
| (_)\/(_)(_|\/| |(/_ v1.0.0
/ /
"""
from tests import IntegrationTestCase
from tests.holodeck import Request
from twilio.base.exceptions import TwilioException
from twilio.http.response import Response
class ServiceTestCase(IntegrationTestCase):
def test_list_request(self):
self.holodeck.mock(Response(500, ''))
with self.assertRaises(TwilioException):
self.client.serverless.v1.services.list()
self.holodeck.assert_has_request(Request(
'get',
'https://serverless.twilio.com/v1/Services',
))
def test_read_empty_response(self):
self.holodeck.mock(Response(
200,
'''
{
"services": [],
"meta": {
"first_page_url": "https://serverless.twilio.com/v1/Services?PageSize=50&Page=0",
"key": "services",
"next_page_url": null,
"page": 0,
"page_size": 50,
"previous_page_url": null,
"url": "https://serverless.twilio.com/v1/Services?PageSize=50&Page=0"
}
}
'''
))
actual = self.client.serverless.v1.services.list()
self.assertIsNotNone(actual)
def test_fetch_request(self):
self.holodeck.mock(Response(500, ''))
with self.assertRaises(TwilioException):
self.client.serverless.v1.services("ZSXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX").fetch()
self.holodeck.assert_has_request(Request(
'get',
'https://serverless.twilio.com/v1/Services/ZSXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX',
))
def test_fetch_response(self):
self.holodeck.mock(Response(
200,
'''
{
"sid": "ZS00000000000000000000000000000000",
"account_sid": "ACaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
"friendly_name": "test-service",
"unique_name": "test-service-1",
"include_credentials": true,
"ui_editable": false,
"date_created": "2018-11-10T20:00:00Z",
"date_updated": "2018-11-10T20:00:00Z",
"url": "https://serverless.twilio.com/v1/Services/ZS00000000000000000000000000000000",
"links": {
"environments": "https://serverless.twilio.com/v1/Services/ZS00000000000000000000000000000000/Environments",
"functions": "https://serverless.twilio.com/v1/Services/ZS00000000000000000000000000000000/Functions",
"assets": "https://serverless.twilio.com/v1/Services/ZS00000000000000000000000000000000/Assets",
"builds": "https://serverless.twilio.com/v1/Services/ZS00000000000000000000000000000000/Builds"
}
}
'''
))
actual = self.client.serverless.v1.services("ZSXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX").fetch()
self.assertIsNotNone(actual)
def test_delete_request(self):
self.holodeck.mock(Response(500, ''))
with self.assertRaises(TwilioException):
self.client.serverless.v1.services("ZSXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX").delete()
self.holodeck.assert_has_request(Request(
'delete',
'https://serverless.twilio.com/v1/Services/ZSXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX',
))
def test_delete_response(self):
self.holodeck.mock(Response(
204,
None,
))
actual = self.client.serverless.v1.services("ZSXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX").delete()
self.assertTrue(actual)
def test_create_request(self):
self.holodeck.mock(Response(500, ''))
with self.assertRaises(TwilioException):
self.client.serverless.v1.services.create(unique_name="unique_name", friendly_name="friendly_name")
values = {'UniqueName': "unique_name", 'FriendlyName': "friendly_name", }
self.holodeck.assert_has_request(Request(
'post',
'https://serverless.twilio.com/v1/Services',
data=values,
))
def test_create_response(self):
self.holodeck.mock(Response(
201,
'''
{
"sid": "ZS00000000000000000000000000000000",
"account_sid": "ACaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
"friendly_name": "service-friendly",
"unique_name": "service-unique",
"include_credentials": true,
"ui_editable": false,
"date_created": "2018-11-10T20:00:00Z",
"date_updated": "2018-11-10T20:00:00Z",
"url": "https://serverless.twilio.com/v1/Services/ZS00000000000000000000000000000000",
"links": {
"environments": "https://serverless.twilio.com/v1/Services/ZS00000000000000000000000000000000/Environments",
"functions": "https://serverless.twilio.com/v1/Services/ZS00000000000000000000000000000000/Functions",
"assets": "https://serverless.twilio.com/v1/Services/ZS00000000000000000000000000000000/Assets",
"builds": "https://serverless.twilio.com/v1/Services/ZS00000000000000000000000000000000/Builds"
}
}
'''
))
actual = self.client.serverless.v1.services.create(unique_name="unique_name", friendly_name="friendly_name")
self.assertIsNotNone(actual)
def test_update_request(self):
self.holodeck.mock(Response(500, ''))
with self.assertRaises(TwilioException):
self.client.serverless.v1.services("ZSXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX").update()
self.holodeck.assert_has_request(Request(
'post',
'https://serverless.twilio.com/v1/Services/ZSXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX',
))
def test_update_response(self):
self.holodeck.mock(Response(
200,
'''
{
"sid": "ZS00000000000000000000000000000000",
"account_sid": "ACaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
"friendly_name": "service-friendly-update",
"unique_name": "service-unique-update",
"include_credentials": true,
"ui_editable": true,
"date_created": "2018-11-10T20:00:00Z",
"date_updated": "2018-11-10T20:00:00Z",
"url": "https://serverless.twilio.com/v1/Services/ZS00000000000000000000000000000000",
"links": {
"environments": "https://serverless.twilio.com/v1/Services/ZS00000000000000000000000000000000/Environments",
"functions": "https://serverless.twilio.com/v1/Services/ZS00000000000000000000000000000000/Functions",
"assets": "https://serverless.twilio.com/v1/Services/ZS00000000000000000000000000000000/Assets",
"builds": "https://serverless.twilio.com/v1/Services/ZS00000000000000000000000000000000/Builds"
}
}
'''
))
actual = self.client.serverless.v1.services("ZSXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX").update()
self.assertIsNotNone(actual)
| 38.910995 | 128 | 0.593918 | 615 | 7,432 | 7.050407 | 0.160976 | 0.073801 | 0.10655 | 0.121771 | 0.862316 | 0.83464 | 0.809502 | 0.789437 | 0.763146 | 0.69857 | 0 | 0.138242 | 0.283638 | 7,432 | 190 | 129 | 39.115789 | 0.676183 | 0.014666 | 0 | 0.582278 | 1 | 0 | 0.180824 | 0.058739 | 0 | 0 | 0 | 0 | 0.189873 | 1 | 0.126582 | false | 0 | 0.050633 | 0 | 0.189873 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c02e03354943c1e636d0da1a38dce8d37c309af7 | 2,066 | py | Python | Hackathon 4.0_2021-01-08_14-05-03.py | ClointFusion-Community/CFC-Projects | c6381738ade07e6e8979bbae37400ec2b4e626c5 | [
"MIT"
] | null | null | null | Hackathon 4.0_2021-01-08_14-05-03.py | ClointFusion-Community/CFC-Projects | c6381738ade07e6e8979bbae37400ec2b4e626c5 | [
"MIT"
] | null | null | null | Hackathon 4.0_2021-01-08_14-05-03.py | ClointFusion-Community/CFC-Projects | c6381738ade07e6e8979bbae37400ec2b4e626c5 | [
"MIT"
] | null | null | null | # This code is generated automatically by ClointFusion BOT Builder Tool.
import ClointFusion as cf
import time
cf.window_show_desktop()
cf.mouse_click(int(cf.pg.size()[0]/2),int(cf.pg.size()[1]/2))
try:
cf.mouse_click(*cf.mouse_search_snip_return_coordinates_x_y(r'C:\Users\mrmay\AppData\Local\Temp\cf_log_3n8u_1nw_generator\Images\Snips\1--338_41.png',conf=0.7, wait=10),left_or_right='left', single_double_triple = 'single')
except:
cf.mouse_click(338,41,left_or_right='left', single_double_triple = 'single')
time.sleep(0)
try:
cf.mouse_click(*cf.mouse_search_snip_return_coordinates_x_y(r'C:\Users\mrmay\AppData\Local\Temp\cf_log_3n8u_1nw_generator\Images\Snips\2--338_41.png',conf=0.7, wait=13),left_or_right='left', single_double_triple = 'double')
except:
cf.mouse_click(338,41,left_or_right='left', single_double_triple = 'double')
time.sleep(3)
try:
cf.mouse_click(*cf.mouse_search_snip_return_coordinates_x_y(r'C:\Users\mrmay\AppData\Local\Temp\cf_log_3n8u_1nw_generator\Images\Snips\3-NewTabGoogleChrome-338_21.png',conf=0.7, wait=13),left_or_right='left', single_double_triple = 'single')
except:
cf.mouse_click(338,21,left_or_right='left', single_double_triple = 'single')
time.sleep(3)
cf.key_write_enter('modiji',key='')
time.sleep(0)
cf.key_press('enter')
time.sleep(29)
try:
cf.mouse_click(*cf.mouse_search_snip_return_coordinates_x_y(r'C:\Users\mrmay\AppData\Local\Temp\cf_log_3n8u_1nw_generator\Images\Snips\4-modijiGoogleSearchGoogleChrome-1889_18.png',conf=0.7, wait=15),left_or_right='left', single_double_triple = 'single')
except:
cf.mouse_click(1889,18,left_or_right='left', single_double_triple = 'single')
time.sleep(5)
try:
cf.mouse_click(*cf.mouse_search_snip_return_coordinates_x_y(r'C:\Users\mrmay\AppData\Local\Temp\cf_log_3n8u_1nw_generator\Images\Snips\5--1078_1043.png',conf=0.7, wait=10),left_or_right='left', single_double_triple = 'single')
except:
cf.mouse_click(1078,1043,left_or_right='left', single_double_triple = 'single')
time.sleep(0)
cf.window_close_windows('nan') | 43.957447 | 258 | 0.782188 | 358 | 2,066 | 4.198324 | 0.226257 | 0.074518 | 0.087824 | 0.0998 | 0.777778 | 0.769128 | 0.769128 | 0.762475 | 0.754491 | 0.754491 | 0 | 0.057054 | 0.066796 | 2,066 | 47 | 259 | 43.957447 | 0.72251 | 0.033882 | 0 | 0.441176 | 1 | 0.147059 | 0.298747 | 0.241604 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.058824 | 0 | 0.058824 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
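Every step in the generated script above tries an image-based lookup (`mouse_search_snip_return_coordinates_x_y`) and, on any failure, falls back to the hard-coded coordinates captured at record time. The same pattern, sketched generically with fakes standing in for the ClointFusion calls:

```python
def click_with_fallback(locate, fallback_xy, click):
    """Prefer a dynamic on-screen lookup; fall back to recorded coordinates."""
    try:
        click(*locate())
    except Exception:
        # Image not found (or lookup timed out): use the captured position.
        click(*fallback_xy)

clicked = []

def fake_locate():
    raise RuntimeError("snip not found on screen")

click_with_fallback(fake_locate, (338, 41), lambda x, y: clicked.append((x, y)))
print(clicked)  # [(338, 41)]
```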
c04fdf840a7b2de64d57993d70c7a6ebbdb303dc | 31 | py | Python | pyfiles/0.py | StevenPZChan/pythonchallenge | 84c0e7458189f6d74e2cfbd169d854dae11d07a9 | [
"MIT"
] | null | null | null | pyfiles/0.py | StevenPZChan/pythonchallenge | 84c0e7458189f6d74e2cfbd169d854dae11d07a9 | [
"MIT"
] | null | null | null | pyfiles/0.py | StevenPZChan/pythonchallenge | 84c0e7458189f6d74e2cfbd169d854dae11d07a9 | [
"MIT"
] | null | null | null | print(2 ** 38) # 274877906944
| 15.5 | 30 | 0.645161 | 4 | 31 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.6 | 0.193548 | 31 | 1 | 31 | 31 | 0.2 | 0.387097 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
c059792f04f0d4df3226a48e8f7701383ff8e148 | 561 | py | Python | sqlite_flask.py | sdfxisme/lesson16 | c481e09dff1a39c243582fc8461fbdaec075daca | [
"MIT"
] | null | null | null | sqlite_flask.py | sdfxisme/lesson16 | c481e09dff1a39c243582fc8461fbdaec075daca | [
"MIT"
] | null | null | null | sqlite_flask.py | sdfxisme/lesson16 | c481e09dff1a39c243582fc8461fbdaec075daca | [
"MIT"
] | null | null | null | import sqlite3 as lite
import sys
connect = None
connect = lite.connect('test.db')
cur = connect.cursor()
sqlite_select_query = """SELECT city, prof, sallary_from, sallary_to from hhotelka"""
cur.execute(sqlite_select_query)
records = cur.fetchall()
print(len(records))
for row in records:
print(row)
connect = lite.connect('test.db')
cur = connect.cursor()
sqlite_select_query = """SELECT city, prof, sallary_from, sallary_to from hh"""
cur.execute(sqlite_select_query)
records = cur.fetchall()
print(len(records))
for row in records:
print(row) | 21.576923 | 85 | 0.743316 | 82 | 561 | 4.939024 | 0.353659 | 0.118519 | 0.167901 | 0.108642 | 0.879012 | 0.879012 | 0.879012 | 0.879012 | 0.879012 | 0.879012 | 0 | 0.002049 | 0.130125 | 561 | 26 | 86 | 21.576923 | 0.827869 | 0 | 0 | 0.736842 | 0 | 0 | 0.217082 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.105263 | 0 | 0.105263 | 0.210526 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
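When a query needs to take values at runtime, sqlite3's `?` placeholders keep those values out of the SQL string itself. A hedged sketch using an in-memory database in place of `test.db`, reusing the `hh`/`sallary_*` names from the script above (the inserted rows are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE hh (city TEXT, prof TEXT, sallary_from INT, sallary_to INT)")
cur.executemany(
    "INSERT INTO hh VALUES (?, ?, ?, ?)",
    [("Moscow", "developer", 100, 200), ("Kazan", "qa", 50, 90)],
)
con.commit()

# '?' placeholders bind values safely instead of formatting them into the SQL.
cur.execute("SELECT city, prof FROM hh WHERE sallary_from >= ?", (60,))
rows = cur.fetchall()
print(rows)  # [('Moscow', 'developer')]
con.close()
```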
c060f6f38491a855f9a841593c73716e4372c2ea | 1,642 | py | Python | testing/bike.py | kylecorry31/Bikeshare | 0167dc6b2e94d7b87975f2db0e3af21cbef85129 | [
"MIT"
] | null | null | null | testing/bike.py | kylecorry31/Bikeshare | 0167dc6b2e94d7b87975f2db0e3af21cbef85129 | [
"MIT"
] | 2 | 2018-09-23T22:29:40.000Z | 2018-09-24T12:56:01.000Z | testing/bike.py | kylecorry31/Bikeshare | 0167dc6b2e94d7b87975f2db0e3af21cbef85129 | [
"MIT"
] | null | null | null | import unittest
from bikeshare.entities import BikeState, Bike
from bikeshare import Logger
from testing import mocks
class TestBikeStateMachine(unittest.TestCase):
def test_states(self):
Logger.on = False
bikeshare = mocks.MockBikeShare()
bike = bikeshare.bike
# Typical use
self.assertEqual(bike.state, BikeState.ACTIVE_NO_USER)
bike.on_swipe(0)
self.assertEqual(bike.state, BikeState.ACTIVE_USER)
bike.on_swipe(1)
self.assertEqual(bike.state, BikeState.ACTIVE_USER)
bike.on_swipe(0)
self.assertEqual(bike.state, BikeState.ACTIVE_NO_USER)
bike.on_swipe(1)
self.assertEqual(bike.state, BikeState.ACTIVE_USER)
bike.on_swipe(1)
self.assertEqual(bike.state, BikeState.ACTIVE_NO_USER)
# Out of order
bike.on_deactivate()
self.assertEqual(bike.state, BikeState.OUT_OF_ORDER_NO_USER)
bike.on_swipe(1)
self.assertEqual(bike.state, BikeState.OUT_OF_ORDER_NO_USER)
bike.on_activate()
self.assertEqual(bike.state, BikeState.ACTIVE_NO_USER)
bike.on_swipe(1)
self.assertEqual(bike.state, BikeState.ACTIVE_USER)
bike.on_deactivate()
self.assertEqual(bike.state, BikeState.OUT_OF_ORDER_USER)
bike.on_swipe(0)
self.assertEqual(bike.state, BikeState.OUT_OF_ORDER_USER)
bike.on_swipe(1)
self.assertEqual(bike.state, BikeState.OUT_OF_ORDER_NO_USER)
bike.on_activate()
self.assertEqual(bike.state, BikeState.ACTIVE_NO_USER)
bike.on_swipe(1)
self.assertEqual(bike.state, BikeState.ACTIVE_USER)
bike.on_deactivate()
self.assertEqual(bike.state, BikeState.OUT_OF_ORDER_USER)
bike.on_activate()
self.assertEqual(bike.state, BikeState.ACTIVE_USER)
| 19.317647 | 62 | 0.774056 | 235 | 1,642 | 5.178723 | 0.148936 | 0.209532 | 0.265407 | 0.335251 | 0.806081 | 0.806081 | 0.806081 | 0.802794 | 0.797864 | 0.797864 | 0 | 0.006944 | 0.123021 | 1,642 | 84 | 63 | 19.547619 | 0.838194 | 0.014616 | 0 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.404762 | 1 | 0.02381 | false | 0 | 0.095238 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
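The tests above exercise a four-state machine (active/out-of-order crossed with user/no-user), where a swipe by the current rider's card releases the bike and any other swipe while in use is ignored. A minimal pure-Python sketch consistent with those transitions — an illustration, not the real `bikeshare` implementation:

```python
from enum import Enum, auto

class BikeState(Enum):
    ACTIVE_NO_USER = auto()
    ACTIVE_USER = auto()
    OUT_OF_ORDER_NO_USER = auto()
    OUT_OF_ORDER_USER = auto()

class Bike:
    def __init__(self):
        self.state = BikeState.ACTIVE_NO_USER
        self._user = None  # card id of the current rider, if any

    def on_swipe(self, card):
        if self.state is BikeState.ACTIVE_NO_USER:
            self.state, self._user = BikeState.ACTIVE_USER, card
        elif self.state is BikeState.ACTIVE_USER and card == self._user:
            self.state, self._user = BikeState.ACTIVE_NO_USER, None
        elif self.state is BikeState.OUT_OF_ORDER_USER and card == self._user:
            self.state, self._user = BikeState.OUT_OF_ORDER_NO_USER, None
        # any other swipe (wrong card, or out-of-order with no rider) is ignored

    def on_deactivate(self):
        self.state = (BikeState.OUT_OF_ORDER_USER
                      if self.state is BikeState.ACTIVE_USER
                      else BikeState.OUT_OF_ORDER_NO_USER)

    def on_activate(self):
        self.state = (BikeState.ACTIVE_USER
                      if self.state is BikeState.OUT_OF_ORDER_USER
                      else BikeState.ACTIVE_NO_USER)

bike = Bike()
bike.on_swipe(7)
print(bike.state.name)  # ACTIVE_USER
bike.on_deactivate()
print(bike.state.name)  # OUT_OF_ORDER_USER
```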
2261a73f96cdcf52a2d0366bd16d85f239d82cdf | 274 | py | Python | __init__.py | tfredian/mdsConnector | 31eda5419a6dc5406222f3792b98d5c1f0e73904 | [
"MIT"
] | 2 | 2019-09-24T19:56:25.000Z | 2020-08-27T19:25:31.000Z | __init__.py | tfredian/mdsConnector | 31eda5419a6dc5406222f3792b98d5c1f0e73904 | [
"MIT"
] | 1 | 2021-09-12T06:03:31.000Z | 2021-09-12T06:03:31.000Z | __init__.py | tfredian/mdsConnector | 31eda5419a6dc5406222f3792b98d5c1f0e73904 | [
"MIT"
] | 1 | 2018-09-26T12:06:38.000Z | 2018-09-26T12:06:38.000Z |
def _import(name, level=1):
    try:
        if not __package__:
            return __import__(name, globals())
        return __import__(name, globals(), level=level)
    except ImportError:
        return __import__(name, globals())


mdsConnector = _import('mdsconnector').mdsConnector
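The helper above tries a package-relative import first and falls back to an absolute one, so the module works both inside a package and when run standalone. A rough modern equivalent of that fallback using `importlib` (a sketch of the same idea, not part of mdsConnector):

```python
import importlib


def flexible_import(name, package=None):
    """Prefer a relative import inside `package`; fall back to absolute."""
    if package:
        try:
            return importlib.import_module("." + name, package=package)
        except ImportError:
            pass
    return importlib.import_module(name)


# With no package context this is just an absolute import:
json_mod = flexible_import("json")
```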
| 27.4 | 55 | 0.653285 | 28 | 274 | 5.75 | 0.464286 | 0.248447 | 0.298137 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004762 | 0.233577 | 274 | 9 | 56 | 30.444444 | 0.761905 | 0 | 0 | 0.25 | 0 | 0 | 0.043956 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.625 | 0 | 1.125 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
3f731be9e243361f8e0d3e5bab1d08e3a6b4da79 | 24,135 | py | Python | apps/node/tests/websocket_api/test_roles_ws.py | gbarvinok/federatedlearning | 2f74ee492c1c6c603f3d78f32406a0c94bf9d8c3 | [
"Apache-2.0"
] | null | null | null | apps/node/tests/websocket_api/test_roles_ws.py | gbarvinok/federatedlearning | 2f74ee492c1c6c603f3d78f32406a0c94bf9d8c3 | [
"Apache-2.0"
] | null | null | null | apps/node/tests/websocket_api/test_roles_ws.py | gbarvinok/federatedlearning | 2f74ee492c1c6c603f3d78f32406a0c94bf9d8c3 | [
"Apache-2.0"
] | null | null | null | from json import loads
import jwt
import pytest
from flask import current_app as app

from src.app.main.events.role_related import *
from src.app.main.core.exceptions import PyGridError
from src.app.main.database import Role, User, create_role, create_user, model_to_json

role = {
    "name": "mario mario",
    "can_triage_requests": False,
    "can_edit_settings": False,
    "can_create_users": True,
    "can_create_groups": True,
    "can_edit_roles": False,
    "can_manage_infrastructure": False,
    "can_upload_data": False,
}

JSON_DECODE_ERR_MSG = (
    "Expecting property name enclosed in " "double quotes: line 1 column 2 (char 1)"
)

owner_role = ("Owner", True, True, True, True, True, True, True)
admin_role = ("Administrator", True, True, True, True, False, False, True)
user_role = ("User", False, False, False, False, False, False, False)
officer_role = ("Compliance Officer", True, False, False, False, False, False, False)

user_1 = (
    "tech@gibberish.com",
    "BDEB6E8EE39B6C70835993486C9E65DC",
    "]GBF[R>GX[9Cmk@DthFT!mhloUc%[f",
    "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
    1,
)

user_2 = (
    "tech@gibberish.com",
    "BDEB6E8EE39B6C70835993486C9E65DC",
    "]GBF[R>GX[9Cmk@DthFT!mhloUc%[f",
    "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
    2,
)


@pytest.fixture
def cleanup(database):
    yield
    try:
        database.session.query(User).delete()
        database.session.query(Role).delete()
        database.session.commit()
    except Exception:
        database.session.rollback()
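In a pytest `yield` fixture like `cleanup`, everything after the `yield` runs as teardown once the test finishes, so every test starts against empty `User` and `Role` tables. The mechanics can be shown self-contained by driving the generator by hand, which is essentially what pytest does (the `FakeSession` is a stand-in, not the real SQLAlchemy session):

```python
class FakeSession:
    """Minimal stand-in for a database session that records calls."""

    def __init__(self):
        self.log = []

    def delete_all(self, model):
        self.log.append(("delete", model))

    def commit(self):
        self.log.append(("commit",))

    def rollback(self):
        self.log.append(("rollback",))


def cleanup(session):
    # pytest runs code before `yield` as setup and code after it as teardown
    yield
    try:
        session.delete_all("User")
        session.delete_all("Role")
        session.commit()
    except Exception:
        session.rollback()


# Driving the generator manually mimics pytest's fixture lifecycle:
s = FakeSession()
gen = cleanup(s)
next(gen)           # setup phase (nothing before the yield here)
# ... the test body would run now ...
try:
    next(gen)       # teardown phase runs the cleanup code
except StopIteration:
    pass
```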
# POST ROLE
def test_post_role_missing_token(client, database, cleanup):
    new_role = create_role(*admin_role)
    database.session.add(new_role)
    new_role = create_role(*owner_role)
    database.session.add(new_role)
    new_user = create_user(*user_2)
    database.session.add(new_user)
    database.session.commit()
    payload = {
        "role": role,
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
    }
    result = create_role_socket(payload)
    result = loads(result)
    assert result["error"] == "Missing request key!"


def test_post_role_missing_key(client, database, cleanup):
    new_role = create_role(*admin_role)
    database.session.add(new_role)
    new_role = create_role(*owner_role)
    database.session.add(new_role)
    new_user = create_user(*user_2)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {"role": role, "token": token.decode("UTF-8")}
    result = create_role_socket(payload)
    result = loads(result)
    assert result["error"] == "Missing request key!"


def test_post_role_invalid_key(client, database, cleanup):
    new_user = create_user(*user_1)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {
        "role": role,
        "private-key": "IdoNotExist",
        "token": token.decode("UTF-8"),
    }
    result = create_role_socket(payload)
    result = loads(result)
    assert result["error"] == "Invalid credentials!"


def test_post_role_invalid_token(client, database, cleanup):
    new_user = create_user(*user_1)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"asdsadad": 124356}, app.config["SECRET_KEY"])
    payload = {
        "role": role,
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
        "token": token.decode("UTF-8"),
    }
    result = create_role_socket(payload)
    result = loads(result)
    assert result["error"] == "Invalid credentials!"


def test_post_role_user_with_missing_role(client, database, cleanup):
    new_user = create_user(*user_1)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {
        "role": role,
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
        "token": token.decode("UTF-8"),
    }
    result = create_role_socket(payload)
    result = loads(result)
    assert result["error"] == "Role ID not found!"


def test_post_role_missing_user(client, database, cleanup):
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {
        "role": role,
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
        "token": token.decode("UTF-8"),
    }
    result = create_role_socket(payload)
    result = loads(result)
    assert result["error"] == "Invalid credentials!"


def test_post_role_unauthorized_user(client, database, cleanup):
    new_role = create_role(*admin_role)
    new_user = create_user(*user_1)
    database.session.add(new_role)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {
        "role": role,
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
        "token": token.decode("UTF-8"),
    }
    result = create_role_socket(payload)
    result = loads(result)
    assert result["error"] == "User is not authorized for this operation!"


def test_post_role_success(client, database, cleanup):
    new_role = create_role(*admin_role)
    database.session.add(new_role)
    new_role = create_role(*owner_role)
    database.session.add(new_role)
    new_user = create_user(*user_2)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {
        "role": role,
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
        "token": token.decode("UTF-8"),
    }
    result = create_role_socket(payload)
    result = loads(result)
    expected_role = role.copy()
    expected_role["id"] = 3  # Two roles already inserted
    assert result["role"] == expected_role
# GET ALL ROLES
def test_get_all_roles_missing_token(client, database, cleanup):
    new_role = create_role(*admin_role)
    database.session.add(new_role)
    new_role = create_role(*owner_role)
    database.session.add(new_role)
    new_user = create_user(*user_2)
    database.session.add(new_user)
    database.session.commit()
    payload = {
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb"
    }
    result = get_all_roles_socket(payload)
    result = loads(result)
    assert result["error"] == "Missing request key!"


def test_get_all_roles_missing_key(client, database, cleanup):
    new_role = create_role(*admin_role)
    database.session.add(new_role)
    new_role = create_role(*owner_role)
    database.session.add(new_role)
    new_user = create_user(*user_2)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {"role": role, "token": token.decode("UTF-8")}
    result = get_all_roles_socket(payload)
    result = loads(result)
    assert result["error"] == "Missing request key!"


def test_get_all_roles_invalid_key(client, database, cleanup):
    new_role = create_role(*admin_role)
    database.session.add(new_role)
    new_role = create_role(*owner_role)
    database.session.add(new_role)
    new_user = create_user(*user_2)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {
        "private-key": "siohfigdadANDVBSIAWE0WI21Y8OR1082ORHFEDNSLCSADIJOKA",
        "token": token.decode("UTF-8"),
    }
    result = get_all_roles_socket(payload)
    result = loads(result)
    assert result["error"] == "Invalid credentials!"


def test_get_all_roles_invalid_token(client, database, cleanup):
    new_role = create_role(*admin_role)
    database.session.add(new_role)
    new_role = create_role(*owner_role)
    database.session.add(new_role)
    new_user = create_user(*user_2)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, "totally a secret, trust me")
    payload = {
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
        "token": token.decode("UTF-8"),
    }
    result = get_all_roles_socket(payload)
    result = loads(result)
    assert result["error"] == "Invalid credentials!"


def test_get_all_roles_user_with_missing_role(client, database, cleanup):
    new_user = create_user(*user_1)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
        "token": token.decode("UTF-8"),
    }
    result = get_all_roles_socket(payload)
    result = loads(result)
    assert result["error"] == "Role ID not found!"


def test_get_all_roles_unauthorized_user(client, database, cleanup):
    new_role = create_role(*user_role)
    new_user = create_user(*user_1)
    database.session.add(new_role)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
        "token": token.decode("UTF-8"),
    }
    result = get_all_roles_socket(payload)
    result = loads(result)
    assert result["error"] == "User is not authorized for this operation!"


def test_get_all_roles_success(client, database, cleanup):
    role1 = create_role(*user_role)
    database.session.add(role1)
    role2 = create_role(*admin_role)
    database.session.add(role2)
    new_user = create_user(*user_2)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
        "token": token.decode("UTF-8"),
    }
    result = get_all_roles_socket(payload)
    result = loads(result)
    expected_roles = [model_to_json(role1), model_to_json(role2)]
    assert result["roles"] == expected_roles
# GET SINGLE ROLE
def test_get_role_missing_key(client, database, cleanup):
    new_role = create_role(*admin_role)
    database.session.add(new_role)
    new_role = create_role(*owner_role)
    database.session.add(new_role)
    new_user = create_user(*user_2)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {"id": 1, "token": token.decode("UTF-8")}
    result = get_role_socket(payload)
    result = loads(result)
    assert result["error"] == "Missing request key!"


def test_get_role_missing_token(client, database, cleanup):
    new_role = create_role(*admin_role)
    database.session.add(new_role)
    new_role = create_role(*owner_role)
    database.session.add(new_role)
    new_user = create_user(*user_2)
    database.session.add(new_user)
    database.session.commit()
    payload = {
        "id": 1,
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
    }
    result = get_role_socket(payload)
    result = loads(result)
    assert result["error"] == "Missing request key!"


def test_get_role_invalid_key(client, database, cleanup):
    new_user = create_user(*user_1)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {"id": 1, "private-key": "IdoNotExist", "token": token.decode("UTF-8")}
    result = get_role_socket(payload)
    result = loads(result)
    assert result["error"] == "Invalid credentials!"


def test_get_role_invalid_token(client, database, cleanup):
    new_user = create_user(*user_1)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"asdsadad": 124356}, app.config["SECRET_KEY"])
    payload = {
        "id": 1,
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
        "token": token.decode("UTF-8"),
    }
    result = get_role_socket(payload)
    result = loads(result)
    assert result["error"] == "Invalid credentials!"


def test_get_role_missing_user(client):
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {
        "id": 2,
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
        "token": token.decode("UTF-8"),
    }
    result = get_role_socket(payload)
    result = loads(result)
    assert result["error"] == "Invalid credentials!"


def test_get_role_missing_role(client, database, cleanup):
    new_user = create_user(*user_1)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {
        "id": 1,
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
        "token": token.decode("UTF-8"),
    }
    result = get_role_socket(payload)
    result = loads(result)
    assert result["error"] == "Role ID not found!"


def test_get_role_unauthorized_user(client, database, cleanup):
    new_role = create_role(*user_role)
    new_user = create_user(*user_1)
    database.session.add(new_role)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {
        "id": 1,
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
        "token": token.decode("UTF-8"),
    }
    result = get_role_socket(payload)
    result = loads(result)
    assert result["error"] == "User is not authorized for this operation!"


def test_get_role_success(client, database, cleanup):
    role1 = create_role(*user_role)
    database.session.add(role1)
    role2 = create_role(*admin_role)
    database.session.add(role2)
    new_user = create_user(*user_2)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {
        "id": 1,
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
        "token": token.decode("UTF-8"),
    }
    result = get_role_socket(payload)
    result = loads(result)
    expected_role = model_to_json(role1)
    assert result["role"] == expected_role
# PUT ROLE
def test_put_role_missing_key(client, database, cleanup):
    new_user = create_user(*user_1)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {"role": role, "id": 1, "token": token.decode("UTF-8")}
    result = put_role_socket(payload)
    result = loads(result)
    assert result["error"] == "Missing request key!"


def test_put_role_missing_token(client, database, cleanup):
    new_user = create_user(*user_1)
    database.session.add(new_user)
    database.session.commit()
    payload = {
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb"
    }
    result = put_role_socket(payload)
    result = loads(result)
    assert result["error"] == "Missing request key!"


def test_put_role_invalid_key(client, database, cleanup):
    new_role = create_role(*owner_role)
    new_user = create_user(*user_1)
    database.session.add(new_role)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {
        "id": 1,
        "role": role,
        "private-key": "dsapksasdp12-04290u83t5r752tyvdwhbsacnxz",
        "token": token.decode("UTF-8"),
    }
    result = put_role_socket(payload)
    result = loads(result)
    assert result["error"] == "Invalid credentials!"


def test_put_role_invalid_token(client, database, cleanup):
    new_role = create_role(*owner_role)
    new_user = create_user(*user_1)
    database.session.add(new_role)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, "1029382trytdfsvcbxz")
    payload = {
        "id": 1,
        "role": role,
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
        "token": token.decode("UTF-8"),
    }
    result = put_role_socket(payload)
    result = loads(result)
    assert result["error"] == "Invalid credentials!"


def test_put_role_user_with_missing_role(client, database, cleanup):
    new_user = create_user(*user_1)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {
        "id": 1,
        "role": role,
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
        "token": token.decode("UTF-8"),
    }
    result = put_role_socket(payload)
    result = loads(result)
    assert result["error"] == "Role ID not found!"


def test_put_role_unauthorized_user(client, database, cleanup):
    new_role = create_role(*admin_role)
    new_user = create_user(*user_1)
    database.session.add(new_role)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {
        "id": 1,
        "role": role,
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
        "token": token.decode("UTF-8"),
    }
    result = put_role_socket(payload)
    result = loads(result)
    assert result["error"] == "User is not authorized for this operation!"


def test_put_over_missing_role(client, database, cleanup):
    new_role = create_role(*admin_role)
    database.session.add(new_role)
    new_role = create_role(*owner_role)
    database.session.add(new_role)
    new_user = create_user(*user_2)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {
        "id": 3,
        "role": role,
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
        "token": token.decode("UTF-8"),
    }
    result = put_role_socket(payload)
    result = loads(result)
    assert result["error"] == "Role ID not found!"


def test_put_role_success(client, database, cleanup):
    new_role = create_role(*admin_role)
    database.session.add(new_role)
    new_role = create_role(*owner_role)
    database.session.add(new_role)
    new_user = create_user(*user_2)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {
        "id": 1,
        "role": role,
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
        "token": token.decode("UTF-8"),
    }
    result = put_role_socket(payload)
    result = loads(result)
    expected_role = role.copy()  # copy so the shared module-level dict is not mutated
    expected_role["id"] = 1
    assert result["role"] == expected_role
# DELETE ROLE
def test_delete_role_missing_key(client, database, cleanup):
    new_role = create_role(*owner_role)
    database.session.add(new_role)
    new_role = create_role(*admin_role)
    database.session.add(new_role)
    new_user = create_user(*user_1)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {"id": 2, "token": token.decode("UTF-8")}
    result = delete_role_socket(payload)
    result = loads(result)
    assert result["error"] == "Missing request key!"


def test_delete_role_missing_token(client, database, cleanup):
    new_role = create_role(*owner_role)
    database.session.add(new_role)
    new_role = create_role(*admin_role)
    database.session.add(new_role)
    new_user = create_user(*user_1)
    database.session.add(new_user)
    database.session.commit()
    payload = {
        "id": 2,
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
    }
    result = delete_role_socket(payload)
    result = loads(result)
    assert result["error"] == "Missing request key!"


def test_delete_role_invalid_key(client, database, cleanup):
    new_role = create_role(*owner_role)
    database.session.add(new_role)
    new_role = create_role(*admin_role)
    database.session.add(new_role)
    new_user = create_user(*user_1)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {
        "id": 2,
        "private-key": "1230896843rtfsvdjb123453212098792171766n",
        "token": token.decode("UTF-8"),
    }
    result = delete_role_socket(payload)
    result = loads(result)
    assert result["error"] == "Invalid credentials!"


def test_delete_role_invalid_token(client, database, cleanup):
    new_role = create_role(*owner_role)
    database.session.add(new_role)
    new_role = create_role(*admin_role)
    database.session.add(new_role)
    new_user = create_user(*user_1)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, "213p4u4trgsvczxnwdaere67yiukyhj")
    payload = {
        "id": 2,
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
        "token": token.decode("UTF-8"),
    }
    result = delete_role_socket(payload)
    result = loads(result)
    assert result["error"] == "Invalid credentials!"


def test_delete_role_missing_role(client, database, cleanup):
    new_user = create_user(*user_1)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {
        "id": 1,
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
        "token": token.decode("UTF-8"),
    }
    result = delete_role_socket(payload)
    result = loads(result)
    assert result["error"] == "Role ID not found!"


def test_delete_role_unauthorized_user(client, database, cleanup):
    new_role = create_role(*admin_role)
    new_user = create_user(*user_1)
    database.session.add(new_role)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {
        "id": 1,
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
        "token": token.decode("UTF-8"),
    }
    result = delete_role_socket(payload)
    result = loads(result)
    assert result["error"] == "User is not authorized for this operation!"


def test_delete_role_user_with_missing_role(client, database, cleanup):
    new_role = create_role(*admin_role)
    database.session.add(new_role)
    new_role = create_role(*owner_role)
    database.session.add(new_role)
    new_user = create_user(*user_2)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {
        "id": 3,
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
        "token": token.decode("UTF-8"),
    }
    result = delete_role_socket(payload)
    result = loads(result)
    assert result["error"] == "Role ID not found!"


def test_delete_role_success(client, database, cleanup):
    new_role = create_role(*admin_role)
    database.session.add(new_role)
    new_role = create_role(*owner_role)
    database.session.add(new_role)
    new_user = create_user(*user_2)
    database.session.add(new_user)
    database.session.commit()
    token = jwt.encode({"id": 1}, app.config["SECRET_KEY"])
    payload = {
        "id": 1,
        "private-key": "3c777d6e1cece1e78aa9c26ae7fa2ecf33a6d3fb1db7c1313e7b79ef3ee884eb",
        "token": token.decode("UTF-8"),
    }
    result = delete_role_socket(payload)
    result = loads(result)
    assert database.session.query(Role).get(1) is None
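Every `*_socket` handler in these tests returns a JSON string rather than raising, which is why each test decodes the result with `loads` and inspects an `error` key. A hypothetical sketch of that request/response contract (the validation order and helper name here are assumptions, not the real PyGrid handler):

```python
import json


def example_role_socket(payload):
    """Hypothetical handler: validate the request dict and reply with a JSON
    envelope carrying either the resource or an 'error' message."""
    required = ("token", "private-key", "role")
    missing = [key for key in required if key not in payload]
    if missing:
        # Mirrors the "Missing request key!" branch exercised above.
        return json.dumps({"error": "Missing request key!"})
    # A real handler would verify the token/key and persist the role here.
    return json.dumps({"role": payload["role"]})
```

Returning the error inside the envelope keeps the websocket protocol uniform: the client always gets a JSON string back, success or failure.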
| 28.697979 | 90 | 0.68295 | 2,859 | 24,135 | 5.539 | 0.049318 | 0.117454 | 0.093205 | 0.103435 | 0.924097 | 0.913804 | 0.891639 | 0.890503 | 0.889555 | 0.884504 | 0 | 0.062484 | 0.187031 | 24,135 | 840 | 91 | 28.732143 | 0.74461 | 0.003605 | 0 | 0.788177 | 0 | 0 | 0.19995 | 0.095462 | 0 | 0 | 0 | 0 | 0.064039 | 1 | 0.065681 | false | 0 | 0.011494 | 0 | 0.077176 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
58cdf7e743941f2d87960a497f1903d845d41b04 | 1,169 | py | Python | root/labels.py | jko0401/IT299-project | 89ddaa13b757ed85b149721a51d7ff496b9a6a14 | [
"MIT"
] | null | null | null | root/labels.py | jko0401/IT299-project | 89ddaa13b757ed85b149721a51d7ff496b9a6a14 | [
"MIT"
] | null | null | null | root/labels.py | jko0401/IT299-project | 89ddaa13b757ed85b149721a51d7ff496b9a6a14 | [
"MIT"
] | null | null | null | FEATURES = dict(
    s_release_date='Spotify Release Date',
    datepublished='YouTube Publish Date',
    tempo='Tempo',
    energy='Energy',
    danceability='Danceability',
    loudness='Loudness',
    instrumentalness='Instrumentalness',
    liveness='Liveness',
    valence='Positiveness',
    speechiness='Speechiness',
    acousticness='Acousticness',
    music_key='Key',
)

SCATTER = dict(
    popularity='Popularity',
    s_release_date='Spotify Release Date',
    datepublished='YouTube Publish Date',
    view_count='YouTube Play Count',
    tempo='Tempo',
    energy='Energy',
    danceability='Danceability',
    loudness='Loudness',
    instrumentalness='Instrumentalness',
    liveness='Liveness',
    valence='Positiveness',
    speechiness='Speechiness',
    acousticness='Acousticness',
    music_key='Key',
)

SUMMARY = dict(
    popularity='Popularity',
    view_count='YouTube Play Count',
    tempo='Tempo',
    energy='Energy',
    danceability='Danceability',
    loudness='Loudness',
    instrumentalness='Instrumentalness',
    liveness='Liveness',
    valence='Positiveness',
    speechiness='Speechiness',
    acousticness='Acousticness',
) | 25.413043 | 42 | 0.685201 | 100 | 1,169 | 7.93 | 0.27 | 0.055486 | 0.06053 | 0.083228 | 0.906683 | 0.906683 | 0.906683 | 0.906683 | 0.906683 | 0.906683 | 0 | 0 | 0.180496 | 1,169 | 46 | 43 | 25.413043 | 0.827766 | 0 | 0 | 0.860465 | 0 | 0 | 0.352137 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
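These mappings read like column-to-display-label tables for dashboard controls. One common way such a dict is consumed is to turn it into dropdown options; this usage is an assumption, not shown in the source, and `FEATURE_LABELS` below is an abridged stand-in for the `FEATURES` dict:

```python
FEATURE_LABELS = {"tempo": "Tempo", "energy": "Energy"}  # abridged stand-in


def to_options(labels):
    """Build Dash-style dropdown options from a column -> display-label map."""
    return [{"label": text, "value": col} for col, text in labels.items()]
```

Keeping the raw column name in `value` and the human-readable text in `label` lets callbacks work with column names while users see friendly titles.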
58f54ebb058f7fbda99b82795286abb45a2864f7 | 43 | py | Python | easy_vector/__init__.py | kunov/easy_vector | 938a61790dc33f14cec0b0b4391a2777dc6d1178 | [
"MIT"
] | null | null | null | easy_vector/__init__.py | kunov/easy_vector | 938a61790dc33f14cec0b0b4391a2777dc6d1178 | [
"MIT"
] | null | null | null | easy_vector/__init__.py | kunov/easy_vector | 938a61790dc33f14cec0b0b4391a2777dc6d1178 | [
"MIT"
] | null | null | null | from easy_vector.easy_vector import Vector
| 21.5 | 42 | 0.883721 | 7 | 43 | 5.142857 | 0.571429 | 0.555556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 43 | 1 | 43 | 43 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
189d49d9894f175458a04c4bcd5c69e02048a292 | 22,592 | py | Python | modules/api/functional_test/live_tests/membership/update_group_test.py | exoego/vinyldns | aac4c2afe4c599ac8c96ad3a826f3a6dff887104 | [
"Apache-2.0"
] | null | null | null | modules/api/functional_test/live_tests/membership/update_group_test.py | exoego/vinyldns | aac4c2afe4c599ac8c96ad3a826f3a6dff887104 | [
"Apache-2.0"
] | null | null | null | modules/api/functional_test/live_tests/membership/update_group_test.py | exoego/vinyldns | aac4c2afe4c599ac8c96ad3a826f3a6dff887104 | [
"Apache-2.0"
] | null | null | null | import pytest
import json
import time

from hamcrest import *
from vinyldns_python import VinylDNSClient


def test_update_group_success(shared_zone_test_context):
    """
    Tests that we can update a group that has been created
    """
    client = shared_zone_test_context.ok_vinyldns_client
    saved_group = None
    try:
        new_group = {
            'name': 'test-update-group-success',
            'email': 'test@test.com',
            'description': 'this is a description',
            'members': [{'id': 'ok'}],
            'admins': [{'id': 'ok'}]
        }
        saved_group = client.create_group(new_group, status=200)

        group = client.get_group(saved_group['id'], status=200)
        assert_that(group['name'], is_(saved_group['name']))
        assert_that(group['email'], is_(saved_group['email']))
        assert_that(group['description'], is_(saved_group['description']))
        assert_that(group['status'], is_(saved_group['status']))
        assert_that(group['created'], is_(saved_group['created']))
        assert_that(group['id'], is_(saved_group['id']))

        time.sleep(1)  # sleep to ensure that the update doesn't change the created time

        update_group = {
            'id': group['id'],
            'name': 'updated-name',
            'email': 'update@test.com',
            'description': 'this is a new description',
            'members': [{'id': 'ok'}],
            'admins': [{'id': 'ok'}]
        }
        group = client.update_group(update_group['id'], update_group, status=200)
        assert_that(group['name'], is_(update_group['name']))
        assert_that(group['email'], is_(update_group['email']))
        assert_that(group['description'], is_(update_group['description']))
        assert_that(group['status'], is_(saved_group['status']))
        assert_that(group['created'], is_(saved_group['created']))
        assert_that(group['id'], is_(saved_group['id']))
        assert_that(group['members'][0]['id'], is_('ok'))
        assert_that(group['admins'][0]['id'], is_('ok'))
    finally:
        if saved_group:
            client.delete_group(saved_group['id'], status=(200, 404))
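These assertions use PyHamcrest's matcher style: `assert_that(actual, matcher)` with composable matchers such as `is_` and `has_length`. A toy reduction of the pattern shows why it composes so well (sketch only; the real library produces rich mismatch descriptions and many more matchers):

```python
def is_(expected):
    """Matcher: value equals `expected`."""
    return lambda actual: actual == expected


def has_length(n):
    """Matcher: len(value) == n."""
    return lambda actual: len(actual) == n


def assert_that(actual, matcher):
    """Apply a matcher to a value, raising on mismatch."""
    if not matcher(actual):
        raise AssertionError("mismatch: %r" % (actual,))
```

Because a matcher is just a predicate over the actual value, new ones can be built and combined without changing `assert_that` itself.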
def test_update_group_without_name(shared_zone_test_context):
"""
Tests that updating a group without a name fails
"""
client = shared_zone_test_context.ok_vinyldns_client
result = None
try:
new_group = {
'name': 'test-update-without-name',
'email': 'test@test.com',
'description': 'this is a description',
'members': [ { 'id': 'ok'} ],
'admins': [ { 'id': 'ok'} ]
}
result = client.create_group(new_group, status=200)
assert_that(result['name'], is_(new_group['name']))
assert_that(result['email'], is_(new_group['email']))
update_group = {
'id': result['id'],
'email': 'update@test.com',
'description': 'this is a new description'
}
errors = client.update_group(update_group['id'], update_group, status=400)['errors']
assert_that(errors[0], is_("Missing Group.name"))
finally:
if result:
client.delete_group(result['id'], status=(200,404))
def test_update_group_without_email(shared_zone_test_context):
"""
Tests that updating a group without an email fails
"""
client = shared_zone_test_context.ok_vinyldns_client
result = None
try:
new_group = {
'name': 'test-update-without-email',
'email': 'test@test.com',
'description': 'this is a description',
'members': [ { 'id': 'ok'} ],
'admins': [ { 'id': 'ok'} ]
}
result = client.create_group(new_group, status=200)
assert_that(result['name'], is_(new_group['name']))
assert_that(result['email'], is_(new_group['email']))
update_group = {
'id': result['id'],
'name': 'without-email',
'description': 'this is a description',
'members': [ { 'id': 'ok'} ],
'admins': [ { 'id': 'ok'} ]
}
errors = client.update_group(update_group['id'], update_group, status=400)['errors']
assert_that(errors[0], is_("Missing Group.email"))
finally:
if result:
client.delete_group(result['id'], status=(200,404))
def test_updating_group_without_name_or_email(shared_zone_test_context):
"""
Tests that updating a group without name or an email fails
"""
client = shared_zone_test_context.ok_vinyldns_client
result = None
try:
new_group = {
'name': 'test-update-without-name-and-email',
'email': 'test@test.com',
'description': 'this is a description',
'members': [ { 'id': 'ok'} ],
'admins': [ { 'id': 'ok'} ]
}
result = client.create_group(new_group, status=200)
assert_that(result['name'], is_(new_group['name']))
assert_that(result['email'], is_(new_group['email']))
update_group = {
'id': result['id'],
'description': 'this is a description',
'members': [ { 'id': 'ok'} ],
'admins': [ { 'id': 'ok'} ]
}
errors = client.update_group(update_group['id'], update_group, status=400)['errors']
assert_that(errors, has_length(2))
assert_that(errors, contains_inanyorder(
"Missing Group.name",
"Missing Group.email"
))
finally:
if result:
client.delete_group(result['id'], status=(200,404))
def test_updating_group_without_members_or_admins(shared_zone_test_context):
"""
Tests that updating a group without members or admins fails
"""
client = shared_zone_test_context.ok_vinyldns_client
result = None
try:
new_group = {
'name': 'test-update-without-members',
'email': 'test@test.com',
'description': 'this is a description',
'members': [ { 'id': 'ok'} ],
'admins': [ { 'id': 'ok'} ]
}
result = client.create_group(new_group, status=200)
assert_that(result['name'], is_(new_group['name']))
assert_that(result['email'], is_(new_group['email']))
update_group = {
'id': result['id'],
'name': 'test-update-without-members',
'email': 'test@test.com',
'description': 'this is a description',
}
errors = client.update_group(update_group['id'], update_group, status=400)['errors']
assert_that(errors, has_length(2))
assert_that(errors, contains_inanyorder(
"Missing Group.members",
"Missing Group.admins"
))
finally:
if result:
client.delete_group(result['id'], status=(200,404))
def test_update_group_adds_admins_as_members(shared_zone_test_context):
"""
Tests that when we add an admin to a group the admin is also a member
"""
client = shared_zone_test_context.ok_vinyldns_client
saved_group = None
try:
new_group = {
'name': 'test-update-group-admins-as-members',
'email': 'test@test.com',
'description': 'this is a description',
'members': [ { 'id': 'ok'} ],
'admins': [ { 'id': 'ok'} ]
}
saved_group = client.create_group(new_group, status=200)
group = client.get_group(saved_group['id'], status=200)
assert_that(group['name'], is_(saved_group['name']))
assert_that(group['email'], is_(saved_group['email']))
assert_that(group['description'], is_(saved_group['description']))
assert_that(group['status'], is_(saved_group['status']))
assert_that(group['created'], is_(saved_group['created']))
assert_that(group['id'], is_(saved_group['id']))
update_group = {
'id': group['id'],
'name': 'test-update-group-admins-as-members',
'email': 'test@test.com',
'description': 'this is a description',
'members': [ { 'id': 'ok'} ],
'admins': [ { 'id': 'ok'}, { 'id': 'dummy' } ]
}
group = client.update_group(update_group['id'], update_group, status=200)
import json
print(json.dumps(group, indent=4))
assert_that(group['members'], has_length(2))
assert_that(['ok', 'dummy'], has_item(group['members'][0]['id']))
assert_that(['ok', 'dummy'], has_item(group['members'][1]['id']))
assert_that(group['admins'], has_length(2))
assert_that(['ok', 'dummy'], has_item(group['admins'][0]['id']))
assert_that(['ok', 'dummy'], has_item(group['admins'][1]['id']))
finally:
if saved_group:
client.delete_group(saved_group['id'], status=(200,404))
def test_update_group_conflict(shared_zone_test_context):
"""
Tests that we can not update a group's name to a name already in use
"""
client = shared_zone_test_context.ok_vinyldns_client
result = None
conflict_group = None
try:
new_group = {
'name': 'test_update_group_conflict',
'email': 'test@test.com',
'description': 'this is a description',
'members': [ { 'id': 'ok'} ],
'admins': [ { 'id': 'ok'} ]
}
conflict_group = client.create_group(new_group, status=200)
assert_that(conflict_group['name'], is_(new_group['name']))
other_group = {
'name': 'change_me',
'email': 'test@test.com',
'description': 'this is a description',
'members': [ { 'id': 'ok'} ],
'admins': [ { 'id': 'ok'} ]
}
result = client.create_group(other_group, status=200)
assert_that(result['name'], is_(other_group['name']))
# change the name of the other_group to the first group (conflict)
update_group = {
'id': result['id'],
'name': 'test_update_group_conflict',
'email': 'test@test.com',
'description': 'this is a description',
'members': [ { 'id': 'ok'} ],
'admins': [ { 'id': 'ok'} ]
}
client.update_group(update_group['id'], update_group, status=409)
finally:
if result:
client.delete_group(result['id'], status=(200,404))
if conflict_group:
client.delete_group(conflict_group['id'], status=(200,404))
def test_update_group_not_found(shared_zone_test_context):
"""
Tests that we can not update a group that has not been created
"""
client = shared_zone_test_context.ok_vinyldns_client
update_group = {
'id': 'test-update-group-not-found',
'name': 'test-update-group-not-found',
'email': 'update@test.com',
'description': 'this is a new description',
'members': [ { 'id': 'ok'} ],
'admins': [ { 'id': 'ok'} ]
}
client.update_group(update_group['id'], update_group, status=404)
def test_update_group_deleted(shared_zone_test_context):
"""
Tests that we can not update a group that has been deleted
"""
client = shared_zone_test_context.ok_vinyldns_client
saved_group = None
try:
new_group = {
'name': 'test-update-group-deleted',
'email': 'test@test.com',
'description': 'this is a description',
'members': [ { 'id': 'ok'} ],
'admins': [ { 'id': 'ok'} ]
}
saved_group = client.create_group(new_group, status=200)
client.delete_group(saved_group['id'], status=200)
update_group = {
'id': saved_group['id'],
'name': 'test-update-group-deleted-updated',
'email': 'update@test.com',
'description': 'this is a new description',
'members': [ { 'id': 'ok'} ],
'admins': [ { 'id': 'ok'} ]
}
client.update_group(update_group['id'], update_group, status=404)
finally:
if saved_group:
client.delete_group(saved_group['id'], status=(200,404))
def test_add_member_via_update_group_success(shared_zone_test_context):
"""
Tests that we can add a member to a group via update successfully
"""
client = shared_zone_test_context.ok_vinyldns_client
saved_group = None
try:
new_group = {
'name': 'test-add-member-to-via-update-group-success',
'email': 'test@test.com',
'members': [ { 'id': 'ok'} ],
'admins': [ { 'id': 'ok'} ]
}
saved_group = client.create_group(new_group, status=200)
updated_group = {
'id': saved_group['id'],
'name': 'test-add-member-to-via-update-group-success',
'email': 'test@test.com',
'members': [ { 'id': 'ok'}, { 'id': 'dummy' } ],
'admins': [ { 'id': 'ok'} ]
}
saved_group = client.update_group(updated_group['id'], updated_group, status=200)
expected_members = ['ok', 'dummy']
assert_that(saved_group['members'], has_length(2))
assert_that(expected_members, has_item(saved_group['members'][0]['id']))
assert_that(expected_members, has_item(saved_group['members'][1]['id']))
finally:
if saved_group:
client.delete_group(saved_group['id'], status=(200,404))
def test_add_member_to_group_twice_via_update_group(shared_zone_test_context):
"""
Tests that we can add a member to a group twice successfully via update group
"""
client = shared_zone_test_context.ok_vinyldns_client
saved_group = None
try:
new_group = {
'name': 'test-add-member-to-group-twice-success-via-update-group',
'email': 'test@test.com',
'members': [ { 'id': 'ok'} ],
'admins': [ { 'id': 'ok'} ]
}
saved_group = client.create_group(new_group, status=200)
updated_group = {
'id': saved_group['id'],
'name': 'test-add-member-to-group-twice-success-via-update-group',
'email': 'test@test.com',
'members': [ { 'id': 'ok'}, { 'id': 'dummy' } ],
'admins': [ { 'id': 'ok'} ]
}
saved_group = client.update_group(updated_group['id'], updated_group, status=200)
saved_group = client.update_group(updated_group['id'], updated_group, status=200)
expected_members = ['ok', 'dummy']
assert_that(saved_group['members'], has_length(2))
assert_that(expected_members, has_item(saved_group['members'][0]['id']))
assert_that(expected_members, has_item(saved_group['members'][1]['id']))
finally:
if saved_group:
client.delete_group(saved_group['id'], status=(200,404))
def test_add_not_found_member_to_group_via_update_group(shared_zone_test_context):
"""
Tests that we can not add a non-existent member to a group via update group
"""
client = shared_zone_test_context.ok_vinyldns_client
saved_group = None
try:
new_group = {
'name': 'test-add-not-found-member-to-group-via-update-group',
'email': 'test@test.com',
'members': [ { 'id': 'ok'} ],
'admins': [ { 'id': 'ok'} ]
}
saved_group = client.create_group(new_group, status=200)
result = client.get_group(saved_group['id'], status=200)
assert_that(result['members'], has_length(1))
updated_group = {
'id': saved_group['id'],
'name': 'test-add-not-found-member-to-group-via-update-group',
'email': 'test@test.com',
'members': [ { 'id': 'ok'}, { 'id': 'not_found' } ],
'admins': [ { 'id': 'ok'} ]
}
client.update_group(updated_group['id'], updated_group, status=404)
finally:
if saved_group:
client.delete_group(saved_group['id'], status=(200,404))
def test_remove_member_via_update_group_success(shared_zone_test_context):
"""
Tests that we can remove a member via update group successfully
"""
client = shared_zone_test_context.ok_vinyldns_client
saved_group = None
try:
new_group = {
'name': 'test-remove-member-via-update-group-success',
'email': 'test@test.com',
'members': [ { 'id': 'ok'}, {'id': 'dummy'} ],
'admins': [ { 'id': 'ok'} ]
}
saved_group = client.create_group(new_group, status=200)
assert_that(saved_group['members'], has_length(2))
updated_group = {
'id': saved_group['id'],
'name': 'test-remove-member-via-update-group-success',
'email': 'test@test.com',
'members': [ { 'id': 'ok'} ],
'admins': [ { 'id': 'ok'} ]
}
saved_group = client.update_group(updated_group['id'], updated_group, status=200)
assert_that(saved_group['members'], has_length(1))
assert_that(saved_group['members'][0]['id'], is_('ok'))
finally:
if saved_group:
client.delete_group(saved_group['id'], status=(200,404))
def test_remove_member_and_admin(shared_zone_test_context):
"""
Tests that if we remove a member who is an admin, the admin is also removed
"""
client = shared_zone_test_context.ok_vinyldns_client
saved_group = None
try:
new_group = {
'name': 'test-remove-member-and-admin',
'email': 'test@test.com',
'members': [ { 'id': 'ok'}, {'id': 'dummy'} ],
'admins': [ { 'id': 'ok'}, {'id': 'dummy'} ]
}
saved_group = client.create_group(new_group, status=200)
assert_that(saved_group['members'], has_length(2))
updated_group = {
'id': saved_group['id'],
'name': 'test-remove-member-and-admin',
'email': 'test@test.com',
'members': [ { 'id': 'ok'} ],
'admins': [ { 'id': 'ok'} ]
}
saved_group = client.update_group(updated_group['id'], updated_group, status=200)
assert_that(saved_group['members'], has_length(1))
assert_that(saved_group['members'][0]['id'], is_('ok'))
assert_that(saved_group['admins'], has_length(1))
assert_that(saved_group['admins'][0]['id'], is_('ok'))
finally:
if saved_group:
client.delete_group(saved_group['id'], status=(200,404))
def test_remove_member_but_not_admin_keeps_member(shared_zone_test_context):
"""
Tests that if we remove a member but do not remove the admin, the admin remains a member
"""
client = shared_zone_test_context.ok_vinyldns_client
saved_group = None
try:
new_group = {
'name': 'test-remove-member-not-admin-keeps-member',
'email': 'test@test.com',
'members': [ { 'id': 'ok'}, {'id': 'dummy'} ],
'admins': [ { 'id': 'ok'}, {'id': 'dummy'} ]
}
saved_group = client.create_group(new_group, status=200)
assert_that(saved_group['members'], has_length(2))
updated_group = {
'id': saved_group['id'],
'name': 'test-remove-member-not-admin-keeps-member',
'email': 'test@test.com',
'members': [ { 'id': 'ok'} ],
'admins': [ { 'id': 'ok'}, {'id': 'dummy'} ]
}
saved_group = client.update_group(updated_group['id'], updated_group, status=200)
expected_members = ['ok', 'dummy']
assert_that(saved_group['members'], has_length(2))
assert_that(expected_members, has_item(saved_group['members'][0]['id']))
assert_that(expected_members, has_item(saved_group['members'][1]['id']))
assert_that(expected_members, has_item(saved_group['admins'][0]['id']))
assert_that(expected_members, has_item(saved_group['admins'][1]['id']))
finally:
if saved_group:
client.delete_group(saved_group['id'], status=(200,404))
def test_remove_admin_keeps_member(shared_zone_test_context):
"""
Tests that if we remove a member from admins, the member still remains part of the group
"""
client = shared_zone_test_context.ok_vinyldns_client
saved_group = None
try:
new_group = {
'name': 'test-remove-admin-keeps-member',
'email': 'test@test.com',
'members': [ { 'id': 'ok'}, {'id': 'dummy'} ],
'admins': [ { 'id': 'ok'}, {'id': 'dummy'} ]
}
saved_group = client.create_group(new_group, status=200)
assert_that(saved_group['members'], has_length(2))
updated_group = {
'id': saved_group['id'],
'name': 'test-remove-admin-keeps-member',
'email': 'test@test.com',
'members': [ { 'id': 'ok'}, {'id': 'dummy'} ],
'admins': [ { 'id': 'ok'} ]
}
saved_group = client.update_group(updated_group['id'], updated_group, status=200)
expected_members = ['ok', 'dummy']
assert_that(saved_group['members'], has_length(2))
assert_that(expected_members, has_item(saved_group['members'][0]['id']))
assert_that(expected_members, has_item(saved_group['members'][1]['id']))
assert_that(saved_group['admins'], has_length(1))
assert_that(saved_group['admins'][0]['id'], is_('ok'))
finally:
if saved_group:
client.delete_group(saved_group['id'], status=(200,404))
def test_update_group_not_authorized(shared_zone_test_context):
"""
Tests that only the admins can update a group
"""
ok_client = shared_zone_test_context.ok_vinyldns_client
not_admin_client = shared_zone_test_context.dummy_vinyldns_client
saved_group = None
try:
new_group = {
'name': 'test-update-group-not-authorized',
'email': 'test@test.com',
'description': 'this is a description',
'members': [ { 'id': 'ok'} ],
'admins': [ { 'id': 'ok'} ]
}
saved_group = ok_client.create_group(new_group, status=200)
update_group = {
'id': saved_group['id'],
'name': 'updated-name',
'email': 'update@test.com',
'description': 'this is a new description',
'members': [ { 'id': 'ok'} ],
'admins': [ { 'id': 'ok'} ]
}
not_admin_client.update_group(update_group['id'], update_group, status=403)
finally:
if saved_group:
ok_client.delete_group(saved_group['id'], status=(200,404))
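The membership checks repeated throughout these tests (a `has_length` assertion followed by per-item `has_item` checks) can be expressed as one plain-Python predicate; a minimal sketch, independent of PyHamcrest (the helper name is hypothetical):

```python
def members_match(group, expected_ids):
    # mirrors assert_that(members, has_length(n)) plus the per-item
    # has_item checks used in the tests above
    member_ids = [m['id'] for m in group['members']]
    return len(member_ids) == len(expected_ids) and all(
        mid in expected_ids for mid in member_ids)
```

For example, `members_match(saved_group, ['ok', 'dummy'])` covers the three-assertion pattern in a single call.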
# File: module/schema_api.py (repo: SamerW/Verifiable-Credentials-Policy, license: Apache-2.0)
from flask import current_app as app, make_response
from .models import db, \
Property, \
Issuer, \
VcType, \
Schema, \
VcProfile, \
association_table, \
SdStatement
from sqlalchemy import or_
import json
import urllib.request
from flask_babel import _
from jsonschema import Draft7Validator, \
SchemaError
def get_schema_by_url(url, extract_nested_properties):
f = urllib.request.urlopen(url)
content = f.read()
json_schema = json.loads(content)
if validate_json_schema(json_schema):
schema_name = json_schema["$schema"]
issuer_name = json_schema["issuer"]
vc_type_name = json_schema["credential_type"]
existing_schema = Schema.query.filter(
Schema.name == schema_name
).first()
if existing_schema:
return _("Schema \"{schema_name}\" already exist").format(schema_name=schema_name)
issuer = Issuer.query.filter(
Issuer.name == issuer_name
).first()
if not issuer:
new_issuer = Issuer(
name=issuer_name
)
issuer = new_issuer
db.session.add(new_issuer)
vc_type = VcType.query.filter(
VcType.name == vc_type_name
).first()
if not vc_type:
new_vc_type = VcType(
name=vc_type_name
)
db.session.add(new_vc_type)
else:
return _("Schema \"{type_name}\" already exist").format(type_name=vc_type_name)
new_schema = Schema(
name=schema_name,
issuer=issuer,
type=new_vc_type
)
db.session.add(new_schema)
properties = json_schema["properties"]
for prop in properties:
new_property = Property(
name=properties[prop]["name"],
schema=new_schema
)
db.session.add(new_property)
if extract_nested_properties:
get_nested_properties(properties[prop], new_schema)
db.session.commit()
print(vc_type_name)
return _("Schema \"{type_name}\" uploaded").format(type_name=vc_type_name)
else:
return _("Schema not valid")
def get_nested_properties(json_data, new_schema):
if "properties" in json_data:
properties = json_data["properties"]
for prop in properties:
new_property = Property(
name=properties[prop]["name"],
schema=new_schema
)
db.session.add(new_property)
get_nested_properties(properties[prop], new_schema)
def list_all_properties(json_data,prop_list):
if "properties" in json_data:
properties = json_data["properties"]
for prop in properties:
prop_list.append(properties[prop]["name"])
list_all_properties(properties[prop], prop_list)
return prop_list
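A self-contained sketch of the nested-property walk these two helpers implement; note the recursion must descend into the child object (`properties[prop]`), not the key string (the sample data below is illustrative):

```python
def collect_property_names(node, out):
    # append each property's "name", then recurse into its nested
    # "properties" object, if any
    for child in node.get("properties", {}).values():
        out.append(child["name"])
        collect_property_names(child, out)
    return out
```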
def get_schema_by_json_data(json_schema, extract_nested_properties):
if validate_json_schema(json_schema):
schema_name = json_schema["$schema"]
issuer_name = json_schema["issuer"]
vc_type_name = json_schema["credential_type"]
existing_schema = Schema.query.filter(
Schema.name == schema_name
).first()
if existing_schema:
return _("Schema \"{schema_name}\" already exist").format(schema_name=schema_name)
issuer = Issuer.query.filter(
Issuer.name == issuer_name
).first()
if not issuer:
new_issuer = Issuer(
name=issuer_name
)
issuer = new_issuer
db.session.add(new_issuer)
vc_type = VcType.query.filter(
VcType.name == vc_type_name
).first()
if not vc_type:
new_vc_type = VcType(
name=vc_type_name
)
db.session.add(new_vc_type)
else:
return _("Schema \"{type_name}\" already exist").format(type_name=vc_type_name)
new_schema = Schema(
name=schema_name,
issuer=issuer,
type=new_vc_type
)
db.session.add(new_schema)
properties = json_schema["properties"]
for prop in properties:
new_property = Property(
name=properties[prop]["name"],
schema=new_schema
)
db.session.add(new_property)
if extract_nested_properties:
get_nested_properties(properties[prop], new_schema)
db.session.commit()
print(vc_type_name)
return _("Schema \"{type_name}\" uploaded").format(type_name=vc_type_name)
else:
return _("Schema not valid")
def replace_schema_by_json_data(json_schema, schema_id):
if validate_json_schema(json_schema):
issuer_name = json_schema["issuer"]
vc_type_name = json_schema["credential_type"]
existing_schema = Schema.query.filter(
Schema.id == schema_id
).first()
vc_type = VcType.query.filter(
VcType.name == vc_type_name
).first()
if not vc_type:
return _("Can't replace schema: schema \"{type_name}\" doesn't exist").format(type_name=vc_type_name)
new_properties = list_all_properties(json_schema, [])
print("new properties:", new_properties)
old_properties = Property.query.filter(
Property.schema_id == schema_id
).all()
prop_to_remove = []
prop_to_add = []
for old_prop in old_properties:
is_in_new = False
for new_prop in new_properties:
if old_prop.name == new_prop:
is_in_new = True
if not is_in_new:
prop_to_remove.append(old_prop)
for new_prop in new_properties:
is_in_old = False
for old_prop in old_properties:
if new_prop == old_prop.name:
is_in_old = True
if not is_in_old:
prop_to_add.append(new_prop)
print("TO REMOVE", prop_to_remove)
print("TO ADD", prop_to_add)
if check_if_schema_is_used_with_properties(prop_to_remove):
return _("Can't replace schema: Some properties you want to replace are used in profiles or statements")
issuer = Issuer.query.filter(
Issuer.name == issuer_name
).first()
if existing_schema.issuer.name != issuer_name:
print(existing_schema.issuer.name)
old_issuer = Issuer.query.filter(
Issuer.name == existing_schema.issuer.name
).first()
existing_profile_with_issuer = VcProfile.query.filter(
VcProfile.issuer_id == old_issuer.id
).first()
if existing_profile_with_issuer:
return _("Can't replace schema: The issuer in the old schema is used in some profiles")
db.session.delete(old_issuer)
if not issuer:
new_issuer = Issuer(
name=issuer_name
)
db.session.add(new_issuer)
existing_schema.issuer = new_issuer
for prop in prop_to_remove:
old_prop = Property.query.filter(
Property.schema_id == schema_id,
Property.name == prop.name
).first()
db.session.delete(old_prop)
for prop in prop_to_add:
new_property = Property(
name=prop,
schema=existing_schema
)
db.session.add(new_property)
db.session.commit()
print(vc_type_name)
return _("Schema \"{type_name}\" replaced").format(type_name=vc_type_name)
else:
return _("Schema not valid")
def delete_schema_by_schema_id(schema_id):
existing_schema = Schema.query.filter(
Schema.id == schema_id
).first()
if existing_schema:
print(existing_schema)
print(existing_schema.issuer)
print(existing_schema.issuer.id)
db.session.delete(existing_schema.type)
for prop in existing_schema.properties:
db.session.delete(prop)
issuer_id = existing_schema.issuer.id
existing_other_schema_with_same_issuer = Schema.query.filter(
Schema.issuer_id == issuer_id,
Schema.name != existing_schema.name
).first()
if not existing_other_schema_with_same_issuer:
db.session.delete(existing_schema.issuer)
db.session.delete(existing_schema)
db.session.commit()
return make_response(_("Schema with id {schema_id} deleted").format(schema_id=str(schema_id)))
else:
return make_response(_("Schema with id {schema_id} doesn't exist").format(schema_id=str(schema_id)))
def replace_schema_by_json_url(url, schema_id):
f = urllib.request.urlopen(url)
content = f.read()
json_schema = json.loads(content)
if validate_json_schema(json_schema):
issuer_name = json_schema["issuer"]
vc_type_name = json_schema["credential_type"]
existing_schema = Schema.query.filter(
Schema.id == schema_id
).first()
vc_type = VcType.query.filter(
VcType.name == vc_type_name
).first()
if not vc_type:
return _("Can't replace schema: schema \"{type_name}\" doesn't exist").format(type_name=vc_type_name)
new_properties = list_all_properties(json_schema, [])
print("new properties:", new_properties)
old_properties = Property.query.filter(
Property.schema_id == schema_id
).all()
prop_to_remove = []
prop_to_add = []
for old_prop in old_properties:
is_in_new = False
for new_prop in new_properties:
if old_prop.name == new_prop:
is_in_new = True
if not is_in_new:
prop_to_remove.append(old_prop)
for new_prop in new_properties:
is_in_old = False
for old_prop in old_properties:
if new_prop == old_prop.name:
is_in_old = True
if not is_in_old:
prop_to_add.append(new_prop)
print("TO REMOVE", prop_to_remove)
print("TO ADD", prop_to_add)
if check_if_schema_is_used_with_properties(prop_to_remove):
return _("Can't replace schema: Some properties you want to replace are used in profiles or statements")
issuer = Issuer.query.filter(
Issuer.name == issuer_name
).first()
if existing_schema.issuer.name != issuer_name:
print(existing_schema.issuer.name)
old_issuer = Issuer.query.filter(
Issuer.name == existing_schema.issuer.name
).first()
existing_profile_with_issuer = VcProfile.query.filter(
VcProfile.issuer_id == old_issuer.id
).first()
if existing_profile_with_issuer:
return _("Can't replace schema: The issuer in the old schema is used in some profiles")
db.session.delete(old_issuer)
if not issuer:
new_issuer = Issuer(
name=issuer_name
)
db.session.add(new_issuer)
existing_schema.issuer = new_issuer
for prop in prop_to_remove:
old_prop = Property.query.filter(
Property.schema_id == schema_id,
Property.name == prop.name
).first()
db.session.delete(old_prop)
for prop in prop_to_add:
new_property = Property(
name=prop,
schema=existing_schema
)
db.session.add(new_property)
db.session.commit()
print(vc_type_name)
return _("Schema \"{type_name}\" replaced").format(type_name=vc_type_name)
else:
return _("Schema not valid")
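`replace_schema_by_json_url` repeats the body of `replace_schema_by_json_data` verbatim (as `get_schema_by_url` does with `get_schema_by_json_data`); a sketch of collapsing that duplication by delegating, with the loader injected so it stays testable (names are illustrative):

```python
def make_url_replacer(load_schema, replace_by_data):
    # build the URL-based variant from the data-based one: fetch the
    # JSON, then delegate, instead of duplicating the whole body
    def replace_by_url(url, schema_id):
        return replace_by_data(load_schema(url), schema_id)
    return replace_by_url
```

In this module, `load_schema` would wrap `urllib.request.urlopen` plus `json.loads`, and `replace_by_data` would be `replace_schema_by_json_data`.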
def validate_json_schema(schema):
try:
Draft7Validator.check_schema(schema)
except SchemaError as schemaError:
print(schemaError)
return False
return True
def read_schema_by_id(schema_id):
schema = Schema.query.filter(
Schema.id == schema_id
).first()
return schema
def check_if_schema_is_used(schema_id):
existing_schema = Schema.query.filter(
Schema.id == schema_id
).first()
type_id = existing_schema.type_id
issuer_id = existing_schema.issuer_id
existing_profiles = VcProfile.query.filter(or_(
VcProfile.type_id == type_id,
VcProfile.issuer_id == issuer_id
)).all()
properties = Property.query.filter(
Property.schema_id == schema_id
).all()
for prop in properties:
prop_in_statement = Property.query.join(association_table).join(SdStatement).filter(
association_table.c.property_id == prop.id
).all()
if len(prop_in_statement) > 0:
return True
if len(existing_profiles) > 0:
return True
return False
def check_if_schema_is_used_with_properties(properties):
for prop in properties:
prop_in_statement = Property.query.join(association_table).join(SdStatement).filter(
association_table.c.property_id == prop.id
).all()
if len(prop_in_statement) > 0:
return True
return False
def search_schema_by_arg(search_str):
schemas = []
existing_schemas = Schema.query.all()
for schema in existing_schemas:
if schema not in schemas and any(search_str.lower() in field.lower() for field in (schema.name, schema.issuer.name, schema.type.name)):
schemas.append(schema)
else:
for prop in schema.properties:
if schema not in schemas and search_str.lower() in prop.name.lower():
schemas.append(schema)
return schemas
test_schema = """{
"$schema": "http://example.com/example3CreditCard",
"issuer": "https://bigbigbank.com/issuer/UK",
"credential_type": "ExampleCreditCard3",
"type": "object",
"properties": {
"creditCardNumber": {
"name": "credentialSubject.ex3credcard.number",
"type": "string",
"pattern": "^[0-9]{4}-[0-9]{4}-[0-9]{4}-[0-9]{4}$",
"example": "1111-2222-3333-4444"
},
"owner": {
"name": "credentialSubject.ex3credcard.owner",
"type": "string",
"maxLength": 64
},
"expiringDate": {
"name": "credentialSubject.ex3credcard.expiringDate",
"format": "date"
}
},
"required": [ "creditCardNumber", "owner", "expiringDate" ]
}"""
@app.route('/get_test_schema', methods=['GET'])
def get_test_schema():
return test_schema
# File: engine/inventory.py (repo: Jawmo/Hope_Pub, license: MIT)
inv = {
"suit": {
"name" : "body",
"worn" : "on",
"wearing" : None,
"wear_max" : 1,
"contents" : None},
"head": {
"name" : "head",
"worn" : "on",
"wearing" : None,
"wear_max" : 1,
"contents" : None},
"chest": {
"name" : "chest",
"worn" : "on",
"wearing" : None,
"wear_max" : 1,
"contents" : None},
"back": {
"name" : "back",
"worn" : "on",
"wearing" : None,
"wear_max" : 1,
"contents" : None},
"wrists": {
"name" : "wrist",
"worn" : "on",
"wearing" : None,
"wear_max" : 1,
"contents" : None},
"l_hand": {
"name" : "left hand",
"worn" : "in",
"wearing" : None,
"wear_max" : 1,
"contents" : None},
"r_hand": {
"name" : "right hand",
"worn" : "in",
"wearing" : None,
"wear_max" : 1,
"contents" : None},
"belt": {
"name" : "belt",
"worn" : "on",
"wearing" : None,
"wear_max" : 1,
"contents" : None},
"feet": {
"name" : "feet",
"worn" : "on",
"wearing" : None,
"wear_max" : 1,
"contents" : None},
}
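A sketch of how a slot table like this might be consumed, for example dressing an item into a free slot using the stored `worn` preposition (the function name and messages are hypothetical):

```python
def wear(inventory, slot, item):
    # occupy the slot if it is free, honouring its "in"/"on" preposition
    entry = inventory[slot]
    if entry["wearing"] is None:
        entry["wearing"] = item
        return "You put the %s %s your %s." % (item, entry["worn"], entry["name"])
    return "Your %s is already occupied." % entry["name"]
```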
# File: stubs/micropython-v1_13-95-pyboard/uio.py (repo: mattytrentini/micropython-stubs, license: MIT)
"""
Module: 'uio' on pyboard 1.13.0-95
"""
# MCU: (sysname='pyboard', nodename='pyboard', release='1.13.0', version='v1.13-95-g0fff2e03f on 2020-10-03', machine='PYBv1.1 with STM32F405RG')
# Stubber: 1.3.4 - updated
from typing import Any
class BytesIO:
""""""
def close(self, *args) -> Any:
pass
def flush(self, *args) -> Any:
pass
def getvalue(self, *args) -> Any:
pass
def read(self, *args) -> Any:
pass
def readinto(self, *args) -> Any:
pass
def readline(self, *args) -> Any:
pass
def seek(self, *args) -> Any:
pass
def tell(self, *args) -> Any:
pass
def write(self, *args) -> Any:
pass
class FileIO:
""""""
def close(self, *args) -> Any:
pass
def flush(self, *args) -> Any:
pass
def read(self, *args) -> Any:
pass
def readinto(self, *args) -> Any:
pass
def readline(self, *args) -> Any:
pass
def readlines(self, *args) -> Any:
pass
def seek(self, *args) -> Any:
pass
def tell(self, *args) -> Any:
pass
def write(self, *args) -> Any:
pass
class IOBase:
""""""
| 16.657534 | 145 | 0.51398 | 154 | 1,216 | 4.058442 | 0.311688 | 0.2304 | 0.3168 | 0.432 | 0.672 | 0.672 | 0.672 | 0.672 | 0.672 | 0.672 | 0 | 0.045122 | 0.325658 | 1,216 | 72 | 146 | 16.888889 | 0.717073 | 0.167763 | 0 | 0.85 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.45 | false | 0.45 | 0.025 | 0 | 0.55 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 8 |
833bbbb7fde91cfedb9e972ca68e95c46624659c | 17,565 | py | Python | neonsrv/tornadoapi.py | DDMAL/Neon.js | bf4fb62f574773422e4275aebdca95056ffed160 | [
"MIT"
] | 18 | 2015-03-09T23:09:38.000Z | 2019-04-10T16:21:02.000Z | neonsrv/tornadoapi.py | DDMAL/Neon.js-Legacy- | bf4fb62f574773422e4275aebdca95056ffed160 | [
"MIT"
] | 199 | 2015-01-07T20:23:15.000Z | 2018-05-11T21:38:19.000Z | neonsrv/tornadoapi.py | DDMAL/Neon.js | bf4fb62f574773422e4275aebdca95056ffed160 | [
"MIT"
] | 3 | 2015-07-23T13:32:52.000Z | 2017-09-04T05:06:53.000Z | import os
from modifymei import ModifyDocument
import tornado.web
import json
import conf
#####################################################
# NEUME HANDLER CLASSES #
#####################################################
class InsertNeumeHandler(tornado.web.RequestHandler):
    def post(self, file):
        name = str(self.get_argument("name", ""))
        inclinatum = self.get_argument("inclinatum", None)
        deminutus = self.get_argument("deminutus", None)
        before_id = self.get_argument("beforeid", None)
        pname = str(self.get_argument("pname", ""))
        oct = str(self.get_argument("oct", ""))
        dot_form = self.get_argument("dotform", None)
        episema_form = self.get_argument("episemaform", None)
        id = self.get_argument("id", None)
        # Bounding box
        lrx = str(self.get_argument("lrx", None))
        lry = str(self.get_argument("lry", None))
        ulx = str(self.get_argument("ulx", None))
        uly = str(self.get_argument("uly", None))
        mei_directory = os.path.abspath(conf.MEI_DIRECTORY)
        fname = os.path.join(mei_directory, file)
        md = ModifyDocument(fname)
        result = md.insert_punctum(name, inclinatum, deminutus, before_id, pname, oct, dot_form, episema_form, ulx, uly, lrx, lry)
        md.write_doc()
        self.write(json.dumps(result))
        self.set_status(200)


class ChangeNeumePitchHandler(tornado.web.RequestHandler):
    def post(self, file):
        data = json.loads(self.get_argument("data", ""))
        id = str(data["id"])
        before_id = str(data["beforeid"])
        # Bounding box
        ulx = str(data["ulx"])
        uly = str(data["uly"])
        lrx = str(data["lrx"])
        lry = str(data["lry"])
        pitch_info = data["pitchInfo"]
        mei_directory = os.path.abspath(conf.MEI_DIRECTORY)
        fname = os.path.join(mei_directory, file)
        md = ModifyDocument(fname)
        md.move_neume(id, before_id, pitch_info, ulx, uly, lrx, lry)
        md.write_doc()
        self.set_status(200)


class DeleteNeumeHandler(tornado.web.RequestHandler):
    def post(self, file):
        ids = str(self.get_argument("ids", ""))
        mei_directory = os.path.abspath(conf.MEI_DIRECTORY)
        fname = os.path.join(mei_directory, file)
        md = ModifyDocument(fname)
        md.delete_neume(ids.split(","))
        md.write_doc()
        self.set_status(200)


class UpdateNeumeHeadShapeHandler(tornado.web.RequestHandler):
    def post(self, file):
        id = str(self.get_argument("id", ""))
        head_shape = str(self.get_argument("shape", ""))
        lrx = str(self.get_argument("lrx", None))
        lry = str(self.get_argument("lry", None))
        ulx = str(self.get_argument("ulx", None))
        uly = str(self.get_argument("uly", None))
        mei_directory = os.path.abspath(conf.MEI_DIRECTORY)
        fname = os.path.join(mei_directory, file)
        md = ModifyDocument(fname)
        md.update_neume_head_shape(id, head_shape, ulx, uly, lrx, lry)
        md.write_doc()
        self.set_status(200)


class NeumifyNeumeHandler(tornado.web.RequestHandler):
    def post(self, file):
        data = json.loads(self.get_argument("data", ""))
        nids = str(data["nids"]).split(",")
        type_id = str(data["typeid"])
        liquescence = str(data.get("liquescence", None))
        head_shapes = data["headShapes"]
        try:
            lrx = str(data["lrx"])
            lry = str(data["lry"])
            ulx = str(data["ulx"])
            uly = str(data["uly"])
        except KeyError:
            ulx = uly = lrx = lry = None
        mei_directory = os.path.abspath(conf.MEI_DIRECTORY)
        fname = os.path.join(mei_directory, file)
        md = ModifyDocument(fname)
        result = md.neumify(nids, type_id, liquescence, head_shapes, ulx, uly, lrx, lry)
        md.write_doc()
        self.write(json.dumps(result))
        self.set_status(200)


class UngroupNeumeHandler(tornado.web.RequestHandler):
    def post(self, file):
        data = json.loads(self.get_argument("data", ""))
        nids = str(data["nids"]).split(",")
        bboxes = data["bbs"]
        mei_directory = os.path.abspath(conf.MEI_DIRECTORY)
        fname = os.path.join(mei_directory, file)
        md = ModifyDocument(fname)
        result = md.ungroup(nids, bboxes)
        md.write_doc()
        self.write(json.dumps(result))
        self.set_status(200)

#####################################################
#              DIVISION HANDLER CLASSES             #
#####################################################
class InsertDivisionHandler(tornado.web.RequestHandler):
    def post(self, file):
        div_type = str(self.get_argument("type", ""))
        before_id = self.get_argument("beforeid", None)
        # bounding box
        lrx = str(self.get_argument("lrx", None))
        lry = str(self.get_argument("lry", None))
        ulx = str(self.get_argument("ulx", None))
        uly = str(self.get_argument("uly", None))
        mei_directory = os.path.abspath(conf.MEI_DIRECTORY)
        fname = os.path.join(mei_directory, file)
        md = ModifyDocument(fname)
        result = md.insert_division(before_id, div_type, ulx, uly, lrx, lry)
        md.write_doc()
        self.write(json.dumps(result))
        self.set_status(200)


class MoveDivisionHandler(tornado.web.RequestHandler):
    def post(self, file):
        id = str(self.get_argument("id", ""))
        before_id = str(self.get_argument("beforeid", None))
        # bounding box
        ulx = str(self.get_argument("ulx", None))
        uly = str(self.get_argument("uly", None))
        lrx = str(self.get_argument("lrx", None))
        lry = str(self.get_argument("lry", None))
        mei_directory = os.path.abspath(conf.MEI_DIRECTORY)
        fname = os.path.join(mei_directory, file)
        md = ModifyDocument(fname)
        md.move_division(id, before_id, ulx, uly, lrx, lry)
        md.write_doc()
        self.set_status(200)


class UpdateDivisionShapeHandler(tornado.web.RequestHandler):
    def post(self, file):
        data = json.loads(self.get_argument("data", ""))
        div_type = str(data["type"])
        id = str(data["id"])
        # bounding box
        lrx = str(data["lrx"])
        lry = str(data["lry"])
        ulx = str(data["ulx"])
        uly = str(data["uly"])
        mei_directory = os.path.abspath(conf.MEI_DIRECTORY)
        fname = os.path.join(mei_directory, file)
        md = ModifyDocument(fname)
        md.update_division_shape(id, div_type, ulx, uly, lrx, lry)
        md.write_doc()
        self.set_status(200)


class DeleteDivisionHandler(tornado.web.RequestHandler):
    def post(self, file):
        ids = str(self.get_argument("ids", ""))
        mei_directory = os.path.abspath(conf.MEI_DIRECTORY)
        fname = os.path.join(mei_directory, file)
        md = ModifyDocument(fname)
        md.delete_division(ids.split(","))
        md.write_doc()
        self.set_status(200)


class AddEpisemaHandler(tornado.web.RequestHandler):
    def post(self, file):
        id = str(self.get_argument("id", ""))
        episema_form = str(self.get_argument("episemaform", ""))
        # Bounding box
        ulx = str(self.get_argument("ulx", None))
        uly = str(self.get_argument("uly", None))
        lrx = str(self.get_argument("lrx", None))
        lry = str(self.get_argument("lry", None))
        mei_directory = os.path.abspath(conf.MEI_DIRECTORY)
        fname = os.path.join(mei_directory, file)
        md = ModifyDocument(fname)
        md.add_episema(id, episema_form, ulx, uly, lrx, lry)
        md.write_doc()
        self.set_status(200)


class DeleteEpisemaHandler(tornado.web.RequestHandler):
    def post(self, file):
        id = str(self.get_argument("id", ""))
        # Bounding box
        ulx = str(self.get_argument("ulx", None))
        uly = str(self.get_argument("uly", None))
        lrx = str(self.get_argument("lrx", None))
        lry = str(self.get_argument("lry", None))
        mei_directory = os.path.abspath(conf.MEI_DIRECTORY)
        fname = os.path.join(mei_directory, file)
        md = ModifyDocument(fname)
        md.delete_episema(id, ulx, uly, lrx, lry)
        md.write_doc()
        self.set_status(200)


class AddDotHandler(tornado.web.RequestHandler):
    def post(self, file):
        id = str(self.get_argument("id", ""))
        dot_form = str(self.get_argument("dotform", ""))
        # Bounding box
        ulx = str(self.get_argument("ulx", None))
        uly = str(self.get_argument("uly", None))
        lrx = str(self.get_argument("lrx", None))
        lry = str(self.get_argument("lry", None))
        mei_directory = os.path.abspath(conf.MEI_DIRECTORY)
        fname = os.path.join(mei_directory, file)
        md = ModifyDocument(fname)
        md.add_dot(id, dot_form, ulx, uly, lrx, lry)
        md.write_doc()
        self.set_status(200)


class DeleteDotHandler(tornado.web.RequestHandler):
    def post(self, file):
        id = str(self.get_argument("id", ""))
        # Bounding box
        ulx = str(self.get_argument("ulx", None))
        uly = str(self.get_argument("uly", None))
        lrx = str(self.get_argument("lrx", None))
        lry = str(self.get_argument("lry", None))
        mei_directory = os.path.abspath(conf.MEI_DIRECTORY)
        fname = os.path.join(mei_directory, file)
        md = ModifyDocument(fname)
        md.delete_dot(id, ulx, uly, lrx, lry)
        md.write_doc()
        self.set_status(200)

#####################################################
#                CLEF HANDLER CLASSES               #
#####################################################
class MoveClefHandler(tornado.web.RequestHandler):
    def post(self, file):
        data = json.loads(self.get_argument("data", ""))
        clef_id = str(data["id"])
        # bounding box
        ulx = str(data["ulx"])
        uly = str(data["uly"])
        lrx = str(data["lrx"])
        lry = str(data["lry"])
        line = str(data["line"])
        mei_directory = os.path.abspath(conf.MEI_DIRECTORY)
        fname = os.path.join(mei_directory, file)
        md = ModifyDocument(fname)
        md.move_clef(clef_id, line, data["pitchInfo"], ulx, uly, lrx, lry)
        md.write_doc()
        self.set_status(200)


class UpdateClefShapeHandler(tornado.web.RequestHandler):
    def post(self, file):
        data = json.loads(self.get_argument("data", ""))
        clef_id = str(data["id"])
        # bounding box
        ulx = str(data["ulx"])
        uly = str(data["uly"])
        lrx = str(data["lrx"])
        lry = str(data["lry"])
        shape = str(data["shape"])
        mei_directory = os.path.abspath(conf.MEI_DIRECTORY)
        fname = os.path.join(mei_directory, file)
        md = ModifyDocument(fname)
        md.update_clef_shape(clef_id, shape, data["pitchInfo"], ulx, uly, lrx, lry)
        md.write_doc()
        self.set_status(200)


class InsertClefHandler(tornado.web.RequestHandler):
    def post(self, file):
        data = json.loads(self.get_argument("data", ""))
        shape = str(data["shape"])
        line = str(data["line"])
        before_id = str(data["beforeid"])
        pitchInfo = data["pitchInfo"]
        # bounding box
        try:
            lrx = str(data["lrx"])
            lry = str(data["lry"])
            ulx = str(data["ulx"])
            uly = str(data["uly"])
        except KeyError:
            ulx = uly = lrx = lry = None
        mei_directory = os.path.abspath(conf.MEI_DIRECTORY)
        fname = os.path.join(mei_directory, file)
        md = ModifyDocument(fname)
        result = md.insert_clef(line, shape, pitchInfo, before_id, ulx, uly, lrx, lry)
        md.write_doc()
        self.write(json.dumps(result))
        self.set_status(200)


class DeleteClefHandler(tornado.web.RequestHandler):
    def post(self, file):
        clefs_to_delete = json.loads(self.get_argument("data", ""))
        mei_directory = os.path.abspath(conf.MEI_DIRECTORY)
        fname = os.path.join(mei_directory, file)
        md = ModifyDocument(fname)
        md.delete_clef(clefs_to_delete)
        md.write_doc()
        self.set_status(200)

#####################################################
#               CUSTOS HANDLER CLASSES              #
#####################################################
class InsertCustosHandler(tornado.web.RequestHandler):
    def post(self, file):
        pname = str(self.get_argument("pname", ""))
        oct = str(self.get_argument("oct", ""))
        before_id = self.get_argument("beforeid", None)
        # bounding box
        ulx = str(self.get_argument("ulx", None))
        uly = str(self.get_argument("uly", None))
        lrx = str(self.get_argument("lrx", None))
        lry = str(self.get_argument("lry", None))
        mei_directory = os.path.abspath(conf.MEI_DIRECTORY)
        fname = os.path.join(mei_directory, file)
        md = ModifyDocument(fname)
        result = md.insert_custos(pname, oct, before_id, ulx, uly, lrx, lry)
        md.write_doc()
        self.write(json.dumps(result))
        self.set_status(200)


class MoveCustosHandler(tornado.web.RequestHandler):
    def post(self, file):
        custos_id = str(self.get_argument("id", ""))
        pname = self.get_argument("pname", "")
        oct = self.get_argument("oct", "")
        # bounding box
        ulx = str(self.get_argument("ulx", None))
        uly = str(self.get_argument("uly", None))
        lrx = str(self.get_argument("lrx", None))
        lry = str(self.get_argument("lry", None))
        mei_directory = os.path.abspath(conf.MEI_DIRECTORY)
        fname = os.path.join(mei_directory, file)
        md = ModifyDocument(fname)
        md.move_custos(custos_id, pname, oct, ulx, uly, lrx, lry)
        md.write_doc()
        self.set_status(200)


class DeleteCustosHandler(tornado.web.RequestHandler):
    def post(self, file):
        custos_ids = str(self.get_argument("ids", "")).split(",")
        mei_directory = os.path.abspath(conf.MEI_DIRECTORY)
        fname = os.path.join(mei_directory, file)
        md = ModifyDocument(fname)
        md.delete_custos(custos_ids)
        md.write_doc()
        self.set_status(200)

#####################################################
#            STAFF/SYSTEM HANDLER CLASSES           #
#####################################################
class InsertSystemHandler(tornado.web.RequestHandler):
    def post(self, file):
        page_id = str(self.get_argument("pageid", None))
        ulx = str(self.get_argument("ulx", None))
        uly = str(self.get_argument("uly", None))
        lrx = str(self.get_argument("lrx", None))
        lry = str(self.get_argument("lry", None))
        mei_directory = os.path.abspath(conf.MEI_DIRECTORY)
        fname = os.path.join(mei_directory, file)
        md = ModifyDocument(fname)
        result = md.insert_system(page_id, ulx, uly, lrx, lry)
        md.write_doc()
        self.write(json.dumps(result))
        self.set_status(200)


class InsertSystemBreakHandler(tornado.web.RequestHandler):
    def post(self, file):
        system_id = self.get_argument("systemid", None)
        order_number = self.get_argument("ordernumber", None)
        next_sb_id = self.get_argument("nextsbid", None)
        mei_directory = os.path.abspath(conf.MEI_DIRECTORY)
        fname = os.path.join(mei_directory, file)
        md = ModifyDocument(fname)
        result = md.insert_system_break(system_id, order_number, next_sb_id)
        md.write_doc()
        self.write(json.dumps(result))
        self.set_status(200)


class ModifySystemBreakHandler(tornado.web.RequestHandler):
    def post(self, file):
        sb_id = str(self.get_argument("sbid"))
        order_number = str(self.get_argument("ordernumber"))
        mei_directory = os.path.abspath(conf.MEI_DIRECTORY)
        fname = os.path.join(mei_directory, file)
        md = ModifyDocument(fname)
        result = md.modify_system_break(sb_id, order_number)
        md.write_doc()
        self.write(json.dumps(result))
        self.set_status(200)


class DeleteSystemBreakHandler(tornado.web.RequestHandler):
    def post(self, file):
        sb_ids = str(self.get_argument("sbids", "")).split(",")
        mei_directory = os.path.abspath(conf.MEI_DIRECTORY)
        fname = os.path.join(mei_directory, file)
        md = ModifyDocument(fname)
        md.delete_system(sb_ids)
        md.write_doc()
        self.set_status(200)


class DeleteSystemHandler(tornado.web.RequestHandler):
    def post(self, file):
        system_ids = str(self.get_argument("sids", "")).split(",")
        mei_directory = os.path.abspath(conf.MEI_DIRECTORY)
        fname = os.path.join(mei_directory, file)
        md = ModifyDocument(fname)
        md.delete_system(system_ids)
        md.write_doc()
        self.set_status(200)


class UpdateSystemZoneHandler(tornado.web.RequestHandler):
    def post(self, file):
        system_id = str(self.get_argument("sid"))
        ulx = str(self.get_argument("ulx"))
        uly = str(self.get_argument("uly"))
        lrx = str(self.get_argument("lrx"))
        lry = str(self.get_argument("lry"))
        mei_directory = os.path.abspath(conf.MEI_DIRECTORY)
        fname = os.path.join(mei_directory, file)
        md = ModifyDocument(fname)
        md.update_system_zone(system_id, ulx, uly, lrx, lry)
        md.write_doc()
        self.set_status(200)
| 32.467652 | 130 | 0.590379 | 2,128 | 17,565 | 4.719925 | 0.06344 | 0.066209 | 0.141876 | 0.132616 | 0.808841 | 0.787137 | 0.768021 | 0.742533 | 0.718041 | 0.698029 | 0 | 0.006121 | 0.24657 | 17,565 | 540 | 131 | 32.527778 | 0.752834 | 0.023399 | 0 | 0.745989 | 0 | 0 | 0.035236 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.072193 | false | 0 | 0.013369 | 0 | 0.157754 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
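A small self-contained illustration (not part of tornadoapi.py): the delete-style handlers above, such as DeleteNeumeHandler and DeleteCustosHandler, receive their "ids" argument as one comma-separated string and split it into a list before calling into ModifyDocument. The id values below are made up for the example.

```python
# Hypothetical client-side value for the "ids" POST argument.
ids_argument = "neume-0001,neume-0002,neume-0003"

# What the handler does with it before calling md.delete_neume(...).
ids = ids_argument.split(",")
print(ids)  # ['neume-0001', 'neume-0002', 'neume-0003']
```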
55dcc189bc1d3bdc2bc2ea9089b4356f88d8c5ce | 9,241 | py | Python | base/site-packages/tencentyun/auth.py | edisonlz/fastor | 342078a18363ac41d3c6b1ab29dbdd44fdb0b7b3 | [
"Apache-2.0"
] | 285 | 2019-12-23T09:50:21.000Z | 2021-12-08T09:08:49.000Z | base/site-packages/tencentyun/auth.py | jeckun/fastor | 342078a18363ac41d3c6b1ab29dbdd44fdb0b7b3 | [
"Apache-2.0"
] | null | null | null | base/site-packages/tencentyun/auth.py | jeckun/fastor | 342078a18363ac41d3c6b1ab29dbdd44fdb0b7b3 | [
"Apache-2.0"
] | 9 | 2019-12-23T12:59:25.000Z | 2022-03-15T05:12:11.000Z |
# -*- coding: utf-8 -*-
import time
import random
import hmac
import hashlib
import binascii
import base64
from urlparse import urlparse

from tencentyun import conf
class Auth(object):

    def __init__(self, secret_id, secret_key):
        self.AUTH_URL_FORMAT_ERROR = -1
        self.AUTH_SECRET_ID_KEY_ERROR = -2
        self._secret_id, self._secret_key = secret_id, secret_key
    def get_app_sign_v2(self, bucket, fileid, expired=0, userid='0'):
        """Get a V2 sign using a fileid.

        The copy and del operations must carry a fileid and set expired=0.

        Args:
            bucket: user bucket
            fileid: user-defined fileid, not urlencoded
            expired: expire time
            userid: user id; please ignore or set to '0'
        """
        if isinstance(fileid, unicode):
            fileid = fileid.encode("utf-8")
        if not self._secret_id or not self._secret_key:
            return self.AUTH_SECRET_ID_KEY_ERROR
        app_info = conf.get_app_info()
        appid = app_info['appid']
        puserid = ''
        if userid != '':
            if len(userid) > 64:
                return self.AUTH_URL_FORMAT_ERROR
            puserid = userid
        now = int(time.time())
        rdm = random.randint(0, 999999999)
        plain_text = 'a=' + appid + '&b=' + bucket + '&k=' + self._secret_id + '&e=' + str(expired) + '&t=' + str(now) + '&r=' + str(rdm) + '&u=' + puserid + '&f=' + fileid
        bin = hmac.new(self._secret_key, plain_text, hashlib.sha1)
        s = bin.hexdigest()
        s = binascii.unhexlify(s)
        s = s + plain_text
        signature = base64.b64encode(s).rstrip()  # generate the signature
        return signature
    def get_info_from_url(self, url):
        app_info = conf.get_app_info()
        end_point = app_info['end_point']
        info = urlparse(url)
        end_point_info = urlparse(end_point)
        if (info.hostname == urlparse(conf.API_IMAGE_END_POINT).hostname
                or info.hostname == urlparse(conf.API_VIDEO_END_POINT).hostname):
            # not a download URL
            if info.path:
                parts = info.path.split('/')
                if len(parts) == 5:
                    cate = parts[1]
                    ver = parts[2]
                    appid = parts[3]
                    userid = parts[4]
                    return {'cate': cate, 'ver': ver, 'appid': appid, 'userid': userid}
                elif len(parts) == 6:
                    cate = parts[1]
                    ver = parts[2]
                    appid = parts[3]
                    userid = parts[4]
                    fileid = parts[5]
                    return {'cate': cate, 'ver': ver, 'appid': appid, 'userid': userid, 'fileid': fileid}
                elif len(parts) == 7:
                    cate = parts[1]
                    ver = parts[2]
                    appid = parts[3]
                    userid = parts[4]
                    fileid = parts[5]
                    oper = parts[6]
                    return {'cate': cate, 'ver': ver, 'appid': appid, 'userid': userid, 'fileid': fileid, 'oper': oper}
                else:
                    return {}
            else:
                return {}
        else:
            if info.path:
                parts = info.path.split('/')
                if len(parts) == 5:
                    appid = parts[1]
                    userid = parts[2]
                    fileid = parts[3]
                    style = parts[4]
                    return {'appid': appid, 'userid': userid, 'fileid': fileid, 'style': style}
                else:
                    return {}
            else:
                return {}
    def get_info_from_url_v2(self, url):
        app_info = conf.get_app_info()
        end_point = app_info['end_point_v2']
        info = urlparse(url)
        end_point_info = urlparse(end_point)
        if info.hostname == urlparse(conf.API_IMAGE_END_POINT_V2).hostname:
            # not a download URL
            if info.path:
                parts = info.path.split('/')
                if len(parts) == 6:
                    cate = parts[1]
                    ver = parts[2]
                    appid = parts[3]
                    bucket = parts[4]
                    userid = parts[5]
                    return {'cate': cate, 'ver': ver, 'appid': appid, 'bucket': bucket, 'userid': userid}
                elif len(parts) == 7:
                    cate = parts[1]
                    ver = parts[2]
                    appid = parts[3]
                    bucket = parts[4]
                    userid = parts[5]
                    fileid = parts[6]
                    return {'cate': cate, 'ver': ver, 'appid': appid, 'bucket': bucket, 'userid': userid, 'fileid': fileid}
                elif len(parts) == 8:
                    cate = parts[1]
                    ver = parts[2]
                    appid = parts[3]
                    bucket = parts[4]
                    userid = parts[5]
                    fileid = parts[6]
                    oper = parts[7]
                    return {'cate': cate, 'ver': ver, 'appid': appid, 'bucket': bucket, 'userid': userid, 'fileid': fileid, 'oper': oper}
                else:
                    return {}
            else:
                return {}
        else:
            if info.path:
                parts = info.path.split('/')
                if len(parts) == 5:
                    arr = parts[1].split('-')
                    if len(arr) != 2:
                        return {}
                    bucket = arr[0]
                    appid = arr[1]
                    userid = parts[2]
                    fileid = parts[3]
                    style = parts[4]
                    return {'appid': appid, 'bucket': bucket, 'userid': userid, 'fileid': fileid, 'style': style}
                else:
                    return {}
            else:
                return {}
    def app_sign_v2(self, url, expired=0):
        if not self._secret_id or not self._secret_key:
            return self.AUTH_SECRET_ID_KEY_ERROR
        url_info = self.get_info_from_url_v2(url)
        if len(url_info) == 0:
            return self.AUTH_URL_FORMAT_ERROR
        if 'cate' in url_info:
            cate = url_info['cate']
        else:
            cate = ''
        if 'ver' in url_info:
            ver = url_info['ver']
        else:
            ver = ''
        appid = url_info['appid']
        bucket = url_info['bucket']
        userid = url_info['userid']
        if 'oper' in url_info:
            oper = url_info['oper']
        else:
            oper = ''
        if 'fileid' in url_info:
            fileid = url_info['fileid']
        else:
            fileid = ''
        if 'style' in url_info:
            style = url_info['style']
        else:
            style = ''
        once_opers = ['del', 'copy']
        if oper in once_opers:
            expired = 0
        if not oper and not style and fileid:
            fileid = ''
        puserid = ''
        if userid != '':
            if len(userid) > 64:
                return self.AUTH_URL_FORMAT_ERROR
            puserid = userid
        now = int(time.time())
        rdm = random.randint(0, 999999999)
        plain_text = 'a=' + appid + '&k=' + self._secret_id + '&e=' + str(expired) + '&t=' + str(now) + '&r=' + str(rdm) + '&u=' + puserid + '&f=' + fileid
        bin = hmac.new(self._secret_key, plain_text, hashlib.sha1)
        s = bin.hexdigest()
        s = binascii.unhexlify(s)
        s = s + plain_text.encode('ascii')
        signature = base64.b64encode(s).rstrip()  # generate the signature
        return signature
    def app_sign(self, url, expired=0):
        if not self._secret_id or not self._secret_key:
            return self.AUTH_SECRET_ID_KEY_ERROR
        url_info = self.get_info_from_url(url)
        if len(url_info) == 0:
            return self.AUTH_URL_FORMAT_ERROR
        if 'cate' in url_info:
            cate = url_info['cate']
        else:
            cate = ''
        if 'ver' in url_info:
            ver = url_info['ver']
        else:
            ver = ''
        appid = url_info['appid']
        userid = url_info['userid']
        if 'oper' in url_info:
            oper = url_info['oper']
        else:
            oper = ''
        if 'fileid' in url_info:
            fileid = url_info['fileid']
        else:
            fileid = ''
        if 'style' in url_info:
            style = url_info['style']
        else:
            style = ''
        once_opers = ['del', 'copy']
        if oper in once_opers:
            expired = 0
        puserid = ''
        if userid != '':
            if len(userid) > 64:
                return self.AUTH_URL_FORMAT_ERROR
            puserid = userid
        now = int(time.time())
        rdm = random.randint(0, 999999999)
        plain_text = 'a=' + appid + '&k=' + self._secret_id + '&e=' + str(expired) + '&t=' + str(now) + '&r=' + str(rdm) + '&u=' + puserid + '&f=' + fileid
        bin = hmac.new(self._secret_key, plain_text, hashlib.sha1)
        s = bin.hexdigest()
        s = binascii.unhexlify(s)
        s = s + plain_text.encode('ascii')
        signature = base64.b64encode(s).rstrip()  # generate the signature
        return signature
| 34.099631 | 171 | 0.466941 | 1,014 | 9,241 | 4.092702 | 0.115385 | 0.048916 | 0.021687 | 0.024578 | 0.839759 | 0.819277 | 0.808434 | 0.805542 | 0.802651 | 0.77759 | 0 | 0.023299 | 0.414782 | 9,241 | 270 | 172 | 34.225926 | 0.744083 | 0.0303 | 0 | 0.786667 | 0 | 0 | 0.050388 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.026667 | false | 0 | 0.031111 | 0 | 0.186667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
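The signing scheme used throughout auth.py can be re-sketched in a few lines: HMAC-SHA1 the query-style plain text with the secret key, prepend the raw 20-byte digest to the plain text, and base64-encode the result. auth.py above is Python 2 code; the sketch below is a Python 3 translation with made-up key and field values, so it is illustrative only.

```python
import base64
import hashlib
import hmac

# Made-up example values; the real appid and secret come from conf.get_app_info()
# and the Auth constructor.
secret_key = b"example-secret-key"
plain_text = b"a=10001&b=mybucket&k=example-id&e=0&t=1400000000&r=12345&u=0&f=photo.jpg"

digest = hmac.new(secret_key, plain_text, hashlib.sha1).digest()  # 20 raw bytes
signature = base64.b64encode(digest + plain_text).rstrip()

# The signature decodes back to digest + plain text, which is what lets the
# server side re-compute the HMAC and verify it.
decoded = base64.b64decode(signature)
assert decoded[:20] == digest and decoded[20:] == plain_text
```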
3621732b35476f2db7e7602186e7335262749aa0 | 1,727 | py | Python | torchsparse/base_utils.py | f-sky/torchsparse | 65466a10c6fa54bff17c6429706b7019a2a59409 | [
"MIT"
] | null | null | null | torchsparse/base_utils.py | f-sky/torchsparse | 65466a10c6fa54bff17c6429706b7019a2a59409 | [
"MIT"
] | null | null | null | torchsparse/base_utils.py | f-sky/torchsparse | 65466a10c6fa54bff17c6429706b7019a2a59409 | [
"MIT"
] | null | null | null |
import numpy as np
import torch
def to_device(x, device, non_blocking=True):
    # Recursively move tensors inside nested lists/dicts onto `device`;
    # scalars and strings pass through unchanged.
    if x is None:
        return x
    elif isinstance(x, list):
        return [to_device(a, device, non_blocking) for a in x]
    elif isinstance(x, dict):
        return {k: to_device(v, device, non_blocking) for k, v in x.items()}
    elif isinstance(x, torch.Tensor):
        return x.to(device=device, non_blocking=non_blocking)
    elif isinstance(x, (int, float, str)):
        return x
    else:
        raise TypeError()


def to_cuda(x):
    # Recursively move tensors inside nested lists/dicts onto the GPU.
    if x is None:
        return x
    elif isinstance(x, list):
        return [to_cuda(a) for a in x]
    elif isinstance(x, dict):
        return {k: to_cuda(v) for k, v in x.items()}
    elif isinstance(x, torch.Tensor):
        return x.cuda()
    elif isinstance(x, (int, float, str)):
        return x
    else:
        raise TypeError()


def to_cpu(x):
    # Recursively move tensors inside nested lists/dicts back onto the CPU.
    if x is None:
        return x
    elif isinstance(x, list):
        return [to_cpu(a) for a in x]
    elif isinstance(x, dict):
        return {k: to_cpu(v) for k, v in x.items()}
    elif isinstance(x, torch.Tensor):
        return x.cpu()
    elif isinstance(x, (int, float, str)):
        return x
    else:
        raise TypeError()


def clone(x):
    # Recursively deep-copy tensors/arrays inside nested containers.
    if x is None:
        return x
    elif isinstance(x, torch.Tensor):
        return x.clone()
    elif isinstance(x, np.ndarray):
        return x.copy()
    elif isinstance(x, list):
        return [clone(el) for el in x]
    elif isinstance(x, tuple):
        # Wrap in tuple() so a tuple input yields a tuple, not a generator.
        return tuple(clone(el) for el in x)
    elif isinstance(x, dict):
        return {k: clone(v) for k, v in x.items()}
    elif isinstance(x, (int, float, str)):
        return x
    else:
        raise RuntimeError(f'{type(x)} cannot be cloned.')
| 25.776119 | 62 | 0.579039 | 264 | 1,727 | 3.742424 | 0.174242 | 0.255061 | 0.273279 | 0.145749 | 0.761134 | 0.741903 | 0.741903 | 0.723684 | 0.712551 | 0.712551 | 0 | 0 | 0.3011 | 1,727 | 66 | 63 | 26.166667 | 0.818558 | 0 | 0 | 0.603448 | 0 | 0 | 0.015634 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068966 | false | 0 | 0.034483 | 0 | 0.482759 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
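All four helpers in base_utils.py share one pattern: recursively walk lists and dicts, transform the leaves, and pass scalars through. A minimal stdlib-only sketch of that dispatch (no torch or numpy required, so the leaf transform here is just a lambda rather than a device move):

```python
def map_nested(fn, x):
    # Recursive dispatch in the style of to_device/to_cuda/to_cpu/clone:
    # containers are rebuilt, numeric leaves are transformed, strings and
    # None pass through, anything else is rejected.
    if x is None:
        return x
    elif isinstance(x, list):
        return [map_nested(fn, a) for a in x]
    elif isinstance(x, dict):
        return {k: map_nested(fn, v) for k, v in x.items()}
    elif isinstance(x, (int, float)):
        return fn(x)
    elif isinstance(x, str):
        return x
    else:
        raise TypeError()

doubled = map_nested(lambda v: v * 2, {"a": [1, 2], "b": {"c": 3}})
print(doubled)  # {'a': [2, 4], 'b': {'c': 6}}
```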
3629de26e34dc71e6f24912fdb82da75cd2245fd | 5,103 | py | Python | src/genie/libs/parser/iosxe/tests/ShowIsisTopology/cli/equal/golden_output1_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 204 | 2018-06-27T00:55:27.000Z | 2022-03-06T21:12:18.000Z | src/genie/libs/parser/iosxe/tests/ShowIsisTopology/cli/equal/golden_output1_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 468 | 2018-06-19T00:33:18.000Z | 2022-03-31T23:23:35.000Z | src/genie/libs/parser/iosxe/tests/ShowIsisTopology/cli/equal/golden_output1_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 309 | 2019-01-16T20:21:07.000Z | 2022-03-31T23:23:35.000Z |
expected_output = {
    "tag": {
"1": {
"level": {
1: {
"hosts": {
"R1-asr1k-43": {},
"R2-asr1k-33": {
"metric": 10,
"interface": {
"Gi0/0/2": {
"next_hop": "R2-asr1k-33",
"snpa": "c47d.4f12.e020"
},
"Gi0/0/3": {
"next_hop": "R2-asr1k-33",
"snpa": "c47d.4f12.e021"
}
}
},
"R3-asr1k-53": {
"metric": 20,
"interface": {
"Gi0/0/2": {
"next_hop": "R2-asr1k-33",
"snpa": "c47d.4f12.e020"
},
"Gi0/0/3": {
"next_hop": "R2-asr1k-33",
"snpa": "c47d.4f12.e021"
}
}
},
"R5-asr1k-11": {
"metric": 16777234,
"interface": {
"Gi0/0/2": {
"next_hop": "R2-asr1k-33",
"snpa": "c47d.4f12.e020"
},
"Gi0/0/3": {
"next_hop": "R2-asr1k-33",
"snpa": "c47d.4f12.e021"
}
}
},
"R6-asr1k-20": {
"metric": 20,
"interface": {
"Gi0/0/2": {
"next_hop": "R2-asr1k-33",
"snpa": "c47d.4f12.e020"
},
"Gi0/0/3": {
"next_hop": "R2-asr1k-33",
"snpa": "c47d.4f12.e021"
}
}
}
}
},
2: {
"hosts": {
"R1-asr1k-43": {},
"R2-asr1k-33": {
"metric": 10,
"interface": {
"Gi0/0/2": {
"next_hop": "R2-asr1k-33",
"snpa": "c47d.4f12.e020"
},
"Gi0/0/3": {
"next_hop": "R2-asr1k-33",
"snpa": "c47d.4f12.e021"
}
}
},
"R3-asr1k-53": {
"metric": 20,
"interface": {
"Gi0/0/2": {
"next_hop": "R2-asr1k-33",
"snpa": "c47d.4f12.e020"
},
"Gi0/0/3": {
"next_hop": "R2-asr1k-33",
"snpa": "c47d.4f12.e021"
}
}
},
"R5-asr1k-11": {
"metric": 16777234,
"interface": {
"Gi0/0/2": {
"next_hop": "R2-asr1k-33",
"snpa": "c47d.4f12.e020"
},
"Gi0/0/3": {
"next_hop": "R2-asr1k-33",
"snpa": "c47d.4f12.e021"
}
}
},
"R6-asr1k-20": {
"metric": 20,
"interface": {
"Gi0/0/2": {
"next_hop": "R2-asr1k-33",
"snpa": "c47d.4f12.e020"
},
"Gi0/0/3": {
"next_hop": "R2-asr1k-33",
"snpa": "c47d.4f12.e021"
}
}
}
}
}
}
}
}
} | 41.827869 | 62 | 0.17284 | 255 | 5,103 | 3.392157 | 0.12549 | 0.145665 | 0.187283 | 0.25896 | 0.971098 | 0.971098 | 0.971098 | 0.971098 | 0.971098 | 0.971098 | 0 | 0.21794 | 0.720361 | 5,103 | 122 | 63 | 41.827869 | 0.388227 | 0 | 0 | 0.622951 | 0 | 0 | 0.186716 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
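Hypothetical illustration (not part of the genieparser fixture): the expected_output schema above nests tag → level → hosts → host → interface, so a test can drill down to a single next hop. Only a tiny subset of the real dict is reproduced here.

```python
# Tiny subset of the fixture above, just to show the nesting.
expected_output = {
    "tag": {"1": {"level": {1: {"hosts": {"R2-asr1k-33": {
        "metric": 10,
        "interface": {"Gi0/0/2": {"next_hop": "R2-asr1k-33",
                                  "snpa": "c47d.4f12.e020"}},
    }}}}}}
}

host = expected_output["tag"]["1"]["level"][1]["hosts"]["R2-asr1k-33"]
print(host["interface"]["Gi0/0/2"]["next_hop"])  # R2-asr1k-33
```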
36522dd28173a511ecfd7b5ee954f06fa6f7990f | 2,550 | py | Python | ignition/script-python/v1/tests/auth-simple/__logic__/code.py | pwalker91/IgnitionSwagger | 7bbc0a1a692a57a483c82d94570ad10f365d6f4e | [
"MIT"
] | null | null | null | ignition/script-python/v1/tests/auth-simple/__logic__/code.py | pwalker91/IgnitionSwagger | 7bbc0a1a692a57a483c82d94570ad10f365d6f4e | [
"MIT"
] | null | null | null | ignition/script-python/v1/tests/auth-simple/__logic__/code.py | pwalker91/IgnitionSwagger | 7bbc0a1a692a57a483c82d94570ad10f365d6f4e | [
"MIT"
] | null | null | null |
import apiAuth
from __swagger2__ import requests as swagRq
from __swagger2__ import responses as swagRsp
from v1 import statics as swagStc

PREFIX = swagStc.IGNITION_SWAGGER_CUSTOM_PREFIX
class GET(swagRq.HttpMethod):
    SWAGGER = {
        # CUSTOM KEYS FOR IA PURPOSES
        PREFIX + 'auth': [
            {
                'method': apiAuth.simple.allowWithApiKeyHeader,
                'extraArgs': {
                    'headerName': 'IS-API-KEY',
                    'keyValue': 'abcd1234',
                },
            },
        ],
        PREFIX + 'hide': False,
        PREFIX + 'validateRequest': False,
        PREFIX + 'validateResponse': False,
        PREFIX + 'tagGroup': 'Tests',
        # ACTUAL SWAGGER DEFINITION
        'operationId': 'tests_validation_auth-simple_get',
        'summary': 'GET Test Simple Auth',
        'description': '''Provides the ability to test a simple endpoint with "authentication" required.
Provide the value `abcd1234` in the Header `IS-API-KEY`.
''',
        'security': [
            {'api_key': []},
        ],
        'tags': [
            'Testing'
        ],
        'consumes': [
            'application/x-www-form-urlencoded',
        ],
        'produces': [
            'application/json',
        ],
        'parameters': [
            {'$ref': '#/parameters/objs_api_key_header'},
        ],
        'responses': {
            '200': swagStc.GENERIC_SUCCESS_RESPONSE,
            'default': swagStc.GENERIC_FAILURE_RESPONSE,
        }
    }

    @staticmethod
    def __do__(wdr, LOGGER):
        return swagRsp.json(success=True, status='SUCCESS', data={'auth': wdr.swag['auth']})
    #END DEF
#END CLASS

class POST(swagRq.HttpMethod):
    SWAGGER = {
        # CUSTOM KEYS FOR IA PURPOSES
        PREFIX + 'auth': [
            {
                'method': apiAuth.simple.allowWithApiKeyHeader,
                'extraArgs': {
                    'headerName': 'IS-API-KEY',
                    'keyValue': 'qwerty123456',
                },
            },
        ],
        PREFIX + 'hide': False,
        PREFIX + 'validateRequest': False,
        PREFIX + 'validateResponse': False,
        PREFIX + 'tagGroup': 'Tests',
        # ACTUAL SWAGGER DEFINITION
        'operationId': 'tests_validation_auth-simple_post',
        'summary': 'POST Test Simple Auth',
        'description': '''Provides the ability to test a simple endpoint with "authentication" required.
Provide the value `qwerty123456` in the Header `IS-API-KEY`.
''',
        'security': [
            {'api_key': []},
        ],
        'tags': [
            'Testing'
        ],
        'consumes': [
            'application/json',
        ],
        'produces': [
            'application/json',
        ],
        'parameters': [
            {'$ref': '#/parameters/objs_api_key_header'},
        ],
        'responses': {
            '200': swagStc.GENERIC_SUCCESS_RESPONSE,
            'default': swagStc.GENERIC_FAILURE_RESPONSE,
        }
    }

    @staticmethod
    def __do__(wdr, LOGGER):
        return swagRsp.json(success=True, status='SUCCESS', data={'auth': wdr.swag['auth']})
    #END DEF
#END CLASS | 23.831776 | 98 | 0.653725 | 271 | 2,550 | 6 | 0.343173 | 0.02952 | 0.01968 | 0.03567 | 0.841328 | 0.841328 | 0.841328 | 0.841328 | 0.841328 | 0.841328 | 0 | 0.013969 | 0.185882 | 2,550 | 107 | 99 | 23.831776 | 0.769268 | 0.05451 | 0 | 0.714286 | 0 | 0 | 0.411569 | 0.067416 | 0 | 0 | 0 | 0 | 0 | 1 | 0.021978 | false | 0 | 0.043956 | 0.021978 | 0.131868 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
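Hypothetical sketch (the real prefix comes from swagStc.IGNITION_SWAGGER_CUSTOM_PREFIX; the value below is made up): the custom IA-specific keys in the SWAGGER definitions above are ordinary dict keys built by concatenating the prefix string, so they sit alongside the standard Swagger fields in the same dictionary.

```python
PREFIX = "x-is-"  # made-up stand-in for IGNITION_SWAGGER_CUSTOM_PREFIX

swagger = {
    PREFIX + "hide": False,
    PREFIX + "tagGroup": "Tests",
    "operationId": "tests_validation_auth-simple_get",
}
print(sorted(swagger))  # ['operationId', 'x-is-hide', 'x-is-tagGroup']
```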
365e630ef1467714fc706b5d2b4ce12bff78be9f | 4,901 | py | Python | NippoKun/report/tests/test_report.py | KIKUYA-Takumi/Nippokun | aa82f97aaf5b61d94b213425f28314a248914eb9 | [
"MIT"
] | null | null | null | NippoKun/report/tests/test_report.py | KIKUYA-Takumi/Nippokun | aa82f97aaf5b61d94b213425f28314a248914eb9 | [
"MIT"
] | 4 | 2016-10-19T00:23:21.000Z | 2016-11-04T01:29:08.000Z | NippoKun/report/tests/test_report.py | KIKUYA-Takumi/NippoKun | aa82f97aaf5b61d94b213425f28314a248914eb9 | [
"MIT"
] | null | null | null |
from django.contrib.auth.models import User
from django.test import TestCase, Client, RequestFactory

from ..models import Report

# Create your tests here.
class CreateReportTest(TestCase):
    def setUp(self):
        self.client = Client()
        self.client.post('/report/user_register/',
                         {'username': 'john', 'password1': 'johnpass', 'password2': 'johnpass'})
        self.client.post('/report/login/', {'username': 'john', 'password': 'johnpass'})

    """
    status_code = 302: a new report was created.
    status_code = 200: no new report was created.
    """
    def test_create_report(self):
        response = self.client.post('/report/report_entries/',
                                    {'report_title': 'test title', 'report_content': 'test'})
        self.assertEqual(response.status_code, 302)

    def test_create_report_no_report_title(self):
        response = self.client.post('/report/report_entries/',
                                    {'report_title': '', 'report_content': 'test'})
        self.assertEqual(response.status_code, 302)

    def test_create_report_no_report_content(self):
        response = self.client.post('/report/report_entries/',
                                    {'report_title': 'test title', 'report_content': ''})
        self.assertEqual(response.status_code, 302)

    def test_create_report_no_report_info(self):
        response = self.client.post('/report/report_entries/',
                                    {'report_title': '', 'report_content': ''})
        self.assertEqual(response.status_code, 302)

class DeleteReportTest(TestCase):
    def setUp(self):
        self.client = Client()
        self.client.post('/report/user_register/',
                         {'username': 'john', 'password1': 'johnpass', 'password2': 'johnpass'})
        self.client.post('/report/login/', {'username': 'john', 'password': 'johnpass'})

    """
    status_code = 404: the report was deleted.
    status_code = otherwise: the report was not deleted.
    """
    def test_delete_report(self):
        report = self.client.post('/report/report_entries/',
                                  {'report_title': 'test title', 'report_content': 'test'})
        before_count = Report.objects.count()
        self.client.delete(report)
        after_count = Report.objects.count()
        self.assertEqual(before_count, after_count + 1)

class UpdateReportContentTest(TestCase):
    def setUp(self):
        self.client = Client()
        self.client.post('/report/user_register/',
                         {'username': 'john',
                          'password1': 'johnpass',
                          'password2': 'johnpass'})
        self.client.login(username='john', password='johnpass')
        request_factory = RequestFactory()
        self.request = request_factory.get('/report/mypage/')

    def test_update_report_content(self):
        self.request.user = User.objects.get(pk=1)
        self.client.post('/report/report_entries/',
                         {'report_author': self.request.user,
                          'report_title': 'test title',
                          'report_content': 'test'})
        self.request.report = Report.objects.get(pk=1)
        report = {
            'report_author': self.request.report.report_author,
            'report_title': self.request.report.report_title,
            'report_content': 'update content'
        }
        self.client.post('/report/1/edition/', report)
        self.request.report = Report.objects.get(pk=1)
        self.assertEqual(self.request.report.report_content, 'update content')

class UpdateReportTitleTest(TestCase):
def setUp(self):
self.client = Client()
self.client.post('/report/user_register/',
{'username': 'john',
'password1': 'johnpass',
'password2': 'johnpass'})
self.client.login(username='john', password='johnpass')
request_factory = RequestFactory()
self.request = request_factory.get('/report/mypage/')
def test_update_report_title(self):
self.request.user = User.objects.get(pk=1)
self.client.post('/report/report_entries/',
{'report_author': self.request.user,
'report_title': 'test title',
'report_content': 'test'
})
self.request.report = Report.objects.get(pk=1)
report = {
'report_author': self.request.report.report_author,
'report_title': 'update title',
'report_content': self.request.report.report_content
}
self.client.post('/report/1/edition/', report)
self.request.report = Report.objects.get(pk=1)
self.assertEqual(self.request.report.report_title, 'update title')
# coding=utf-8
# Copyright 2021 The Google Research Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for input generator functions."""
import math
import numpy as np
import tensorflow as tf
from poem.core import common
from poem.core import input_generator
from poem.core import keypoint_profiles
class InputGeneratorTest(tf.test.TestCase):
def test_preprocess_keypoints_3d(self):
profile = keypoint_profiles.KeypointProfile3D(
name='Dummy',
keypoint_names=[('A', keypoint_profiles.LeftRightType.UNKNOWN),
('B', keypoint_profiles.LeftRightType.UNKNOWN),
('C', keypoint_profiles.LeftRightType.UNKNOWN)],
offset_keypoint_names=['A'],
scale_keypoint_name_pairs=[(['A'], ['B'])],
segment_name_pairs=[])
keypoints_3d = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]
preprocessed_keypoints_3d, side_outputs = (
input_generator.preprocess_keypoints_3d(keypoints_3d, profile))
sqrt_3 = 1.73205080757
self.assertAllClose(
preprocessed_keypoints_3d,
[[0.0, 0.0, 0.0], [1.0 / sqrt_3, 1.0 / sqrt_3, 1.0 / sqrt_3],
[2.0 / sqrt_3, 2.0 / sqrt_3, 2.0 / sqrt_3]])
self.assertCountEqual(
side_outputs,
['offset_points_3d', 'scale_distances_3d', 'preprocessed_keypoints_3d'])
self.assertAllClose(side_outputs['offset_points_3d'], [[1.0, 2.0, 3.0]])
self.assertAllClose(side_outputs['scale_distances_3d'], [[3.0 * sqrt_3]])
self.assertAllClose(
side_outputs['preprocessed_keypoints_3d'],
[[0.0, 0.0, 0.0], [1.0 / sqrt_3, 1.0 / sqrt_3, 1.0 / sqrt_3],
[2.0 / sqrt_3, 2.0 / sqrt_3, 2.0 / sqrt_3]])
def test_preprocess_keypoints_2d_with_projection(self):
# Shape = [4, 2, 17, 3].
keypoints_3d = tf.constant([
[[[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0], [7.0, 8.0, 9.0],
[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0], [7.0, 8.0, 9.0],
[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0], [7.0, 8.0, 9.0],
[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0], [7.0, 8.0, 9.0],
[7.0, 8.0, 9.0]],
[[11.0, 12.0, 13.0], [13.0, 14.0, 15.0], [15.0, 16.0, 17.0],
[17.0, 18.0, 19.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [7.0, 8.0, 9.0]]],
[[[31.0, 32.0, 33.0], [33.0, 34.0, 35.0], [35.0, 36.0, 37.0],
[37.0, 38.0, 39.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [7.0, 8.0, 9.0]],
[[41.0, 42.0, 43.0], [43.0, 44.0, 35.0], [45.0, 46.0, 47.0],
[47.0, 48.0, 49.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [7.0, 8.0, 9.0]]],
[[[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0], [7.0, 8.0, 9.0],
[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0], [7.0, 8.0, 9.0],
[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0], [7.0, 8.0, 9.0],
[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0], [7.0, 8.0, 9.0],
[7.0, 8.0, 9.0]],
[[11.0, 12.0, 13.0], [13.0, 14.0, 15.0], [15.0, 16.0, 17.0],
[17.0, 18.0, 19.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [7.0, 8.0, 9.0]]],
[[[31.0, 32.0, 33.0], [33.0, 34.0, 35.0], [35.0, 36.0, 37.0],
[37.0, 38.0, 39.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [7.0, 8.0, 9.0]],
[[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0], [7.0, 8.0, 9.0],
[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0], [7.0, 8.0, 9.0],
[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0], [7.0, 8.0, 9.0],
[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0], [7.0, 8.0, 9.0],
[7.0, 8.0, 9.0]]],
])
keypoint_profile_3d = (
keypoint_profiles.create_keypoint_profile_or_die('LEGACY_3DH36M17'))
keypoint_profile_2d = (
keypoint_profiles.create_keypoint_profile_or_die('LEGACY_2DCOCO13'))
keypoints_2d, _ = input_generator.preprocess_keypoints_2d(
keypoints_2d=None,
keypoint_masks_2d=None,
keypoints_3d=keypoints_3d,
model_input_keypoint_type=common
.MODEL_INPUT_KEYPOINT_TYPE_3D_PROJECTION,
keypoint_profile_2d=keypoint_profile_2d,
keypoint_profile_3d=keypoint_profile_3d,
azimuth_range=(math.pi / 2.0, math.pi / 2.0),
elevation_range=(-math.pi / 2.0, -math.pi / 2.0),
roll_range=(math.pi, math.pi))
    # Note that the results here were copied from test output; this test is
    # mainly meant to protect executability and exercise batch mixing. The
    # actual projection accuracy is tested separately.
expected_keypoints_2d = [
[[[-0.08777856, -0.08777856], [0., 0.], [-0.08777856, -0.08777856],
[-0.1613905, -0.1613905], [-0.22400928, -0.22400929], [0., 0.],
[-0.08777856, -0.08777856], [-0.22400928, -0.22400929], [0., 0.],
[-0.08777856, -0.08777856], [-0.1613905, -0.1613905],
[-0.22400928, -0.22400929], [-0.22400928, -0.22400929]],
[[-0.03107818, -0.03107818], [0.19100799, 0.19100802],
[0.14718375, 0.14718376], [0.10647015, 0.10647015],
[0.06854735, 0.06854735], [0.19100799, 0.19100802],
[0.14718375, 0.14718376], [0.06854735, 0.06854735],
[0.19100799, 0.19100802], [0.14718375, 0.14718376],
[0.10647015, 0.10647015], [0.06854735, 0.06854735],
[0.06854735, 0.06854735]]],
[[[-0.0098562, -0.0098562], [0.17552288, 0.1755229],
[0.16192658, 0.1619266], [0.14864118, 0.1486412],
[0.13565616, 0.13565616], [0.17552288, 0.1755229],
[0.16192658, 0.1619266], [0.13565616, 0.13565616],
[0.17552288, 0.1755229], [0.16192658, 0.1619266],
[0.14864118, 0.1486412], [0.13565616, 0.13565616],
[0.13565616, 0.13565616]],
[[-0.00734754, 0.02939016], [0.17376201, 0.17376202],
[0.16365208, 0.16365209], [0.15371482, 0.15371484],
[0.14394586, 0.14394587], [0.17376201, 0.17376202],
[0.16365208, 0.16365209], [0.14394586, 0.14394587],
[0.17376201, 0.17376202], [0.16365208, 0.16365209],
[0.15371482, 0.15371484], [0.14394586, 0.14394587],
[0.14394586, 0.14394587]]],
[[[-0.08777856, -0.08777856], [0., 0.], [-0.08777856, -0.08777856],
[-0.1613905, -0.1613905], [-0.22400928, -0.22400929], [0., 0.],
[-0.08777856, -0.08777856], [-0.22400928, -0.22400929], [0., 0.],
[-0.08777856, -0.08777856], [-0.1613905, -0.1613905],
[-0.22400928, -0.22400929], [-0.22400928, -0.22400929]],
[[-0.03107818, -0.03107818], [0.19100799, 0.19100802],
[0.14718375, 0.14718376], [0.10647015, 0.10647015],
[0.06854735, 0.06854735], [0.19100799, 0.19100802],
[0.14718375, 0.14718376], [0.06854735, 0.06854735],
[0.19100799, 0.19100802], [0.14718375, 0.14718376],
[0.10647015, 0.10647015], [0.06854735, 0.06854735],
[0.06854735, 0.06854735]]],
[[[-0.0098562, -0.0098562], [0.17552288, 0.1755229],
[0.16192658, 0.1619266], [0.14864118, 0.1486412],
[0.13565616, 0.13565616], [0.17552288, 0.1755229],
[0.16192658, 0.1619266], [0.13565616, 0.13565616],
[0.17552288, 0.1755229], [0.16192658, 0.1619266],
[0.14864118, 0.1486412], [0.13565616, 0.13565616],
[0.13565616, 0.13565616]],
[[-0.08777856, -0.08777856], [0., 0.], [-0.08777856, -0.08777856],
[-0.1613905, -0.1613905], [-0.22400928, -0.22400929], [0., 0.],
[-0.08777856, -0.08777856], [-0.22400928, -0.22400929], [0., 0.],
[-0.08777856, -0.08777856], [-0.1613905, -0.1613905],
[-0.22400928, -0.22400929], [-0.22400928, -0.22400929]]]
]
self.assertAllClose(keypoints_2d, expected_keypoints_2d)
def test_preprocess_keypoints_2d_with_input_and_projection(self):
# Shape = [4, 2, 13, 2].
keypoints_2d = tf.constant([
[
[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0], [1.0, 2.0],
[3.0, 4.0], [5.0, 6.0], [7.0, 8.0], [1.0, 2.0], [3.0, 4.0],
[5.0, 6.0], [7.0, 8.0], [1.0, 2.0]],
[[11.0, 12.0], [13.0, 14.0], [15.0, 16.0], [17.0, 18.0], [1.0, 2.0],
[3.0, 4.0], [5.0, 6.0], [7.0, 8.0], [1.0, 2.0], [3.0, 4.0],
[5.0, 6.0], [7.0, 8.0], [1.0, 2.0]],
],
[
[[31.0, 32.0], [33.0, 34.0], [35.0, 36.0], [37.0, 38.0], [1.0, 2.0],
[3.0, 4.0], [5.0, 6.0], [7.0, 8.0], [1.0, 2.0], [3.0, 4.0],
[5.0, 6.0], [7.0, 8.0], [1.0, 2.0]],
[[41.0, 42.0], [43.0, 44.0], [45.0, 46.0], [47.0, 48.0], [1.0, 2.0],
[3.0, 4.0], [5.0, 6.0], [7.0, 8.0], [1.0, 2.0], [3.0, 4.0],
[5.0, 6.0], [7.0, 8.0], [1.0, 2.0]],
],
[
[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0], [1.0, 2.0],
[3.0, 4.0], [5.0, 6.0], [7.0, 8.0], [1.0, 2.0], [3.0, 4.0],
[5.0, 6.0], [7.0, 8.0], [1.0, 2.0]],
[[11.0, 12.0], [13.0, 14.0], [15.0, 16.0], [17.0, 18.0], [1.0, 2.0],
[3.0, 4.0], [5.0, 6.0], [7.0, 8.0], [1.0, 2.0], [3.0, 4.0],
[5.0, 6.0], [7.0, 8.0], [1.0, 2.0]],
],
[
[[31.0, 32.0], [33.0, 34.0], [35.0, 36.0], [37.0, 38.0], [1.0, 2.0],
[3.0, 4.0], [5.0, 6.0], [7.0, 8.0], [1.0, 2.0], [3.0, 4.0],
[5.0, 6.0], [7.0, 8.0], [1.0, 2.0]],
[[41.0, 42.0], [43.0, 44.0], [45.0, 46.0], [47.0, 48.0], [1.0, 2.0],
[3.0, 4.0], [5.0, 6.0], [7.0, 8.0], [1.0, 2.0], [3.0, 4.0],
[5.0, 6.0], [7.0, 8.0], [1.0, 2.0]],
],
])
# Shape = [4, 2, 13].
keypoint_masks_2d = tf.constant([
[[
0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.10, 0.11,
0.12, 0.13
],
[
0.14, 0.15, 0.16, 0.17, 0.18, 0.19, 0.20, 0.21, 0.22, 0.23, 0.24,
0.25, 0.26
]],
[[
0.27, 0.28, 0.29, 0.30, 0.31, 0.32, 0.33, 0.34, 0.35, 0.36, 0.37,
0.38, 0.39
],
[
0.40, 0.41, 0.42, 0.43, 0.44, 0.45, 0.46, 0.47, 0.48, 0.49, 0.50,
0.51, 0.52
]],
[[
0.53, 0.54, 0.55, 0.56, 0.57, 0.58, 0.59, 0.60, 0.61, 0.62, 0.63,
0.64, 0.65
],
[
0.66, 0.67, 0.68, 0.69, 0.70, 0.71, 0.72, 0.73, 0.74, 0.75, 0.76,
0.77, 0.78
]],
[[
0.79, 0.80, 0.81, 0.82, 0.83, 0.84, 0.85, 0.86, 0.87, 0.88, 0.89,
0.90, 0.91
],
[
0.92, 0.93, 0.94, 0.95, 0.96, 0.97, 0.98, 0.99, 1.00, 0.99, 0.98,
0.97, 0.96
]],
])
# Shape = [4, 2, 17, 3].
keypoints_3d = tf.constant([
[[[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0], [7.0, 8.0, 9.0],
[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0], [7.0, 8.0, 9.0],
[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0], [7.0, 8.0, 9.0],
[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0], [7.0, 8.0, 9.0],
[7.0, 8.0, 9.0]],
[[11.0, 12.0, 13.0], [13.0, 14.0, 15.0], [15.0, 16.0, 17.0],
[17.0, 18.0, 19.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [7.0, 8.0, 9.0]]],
[[[31.0, 32.0, 33.0], [33.0, 34.0, 35.0], [35.0, 36.0, 37.0],
[37.0, 38.0, 39.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [7.0, 8.0, 9.0]],
[[41.0, 42.0, 43.0], [43.0, 44.0, 35.0], [45.0, 46.0, 47.0],
[47.0, 48.0, 49.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [7.0, 8.0, 9.0]]],
[[[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0], [7.0, 8.0, 9.0],
[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0], [7.0, 8.0, 9.0],
[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0], [7.0, 8.0, 9.0],
[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0], [7.0, 8.0, 9.0],
[7.0, 8.0, 9.0]],
[[11.0, 12.0, 13.0], [13.0, 14.0, 15.0], [15.0, 16.0, 17.0],
[17.0, 18.0, 19.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [7.0, 8.0, 9.0]]],
[[[31.0, 32.0, 33.0], [33.0, 34.0, 35.0], [35.0, 36.0, 37.0],
[37.0, 38.0, 39.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0],
[7.0, 8.0, 9.0], [7.0, 8.0, 9.0]],
[[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0], [7.0, 8.0, 9.0],
[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0], [7.0, 8.0, 9.0],
[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0], [7.0, 8.0, 9.0],
[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0], [7.0, 8.0, 9.0],
[7.0, 8.0, 9.0]]],
])
# Shape = [4, 2].
assignment = tf.constant([[True, True], [False, False], [True, False],
[False, True]])
keypoint_profile_3d = (
keypoint_profiles.create_keypoint_profile_or_die('LEGACY_3DH36M17'))
keypoint_profile_2d = (
keypoint_profiles.create_keypoint_profile_or_die('LEGACY_2DCOCO13'))
keypoints_2d, _ = input_generator.preprocess_keypoints_2d(
keypoints_2d,
keypoint_masks_2d,
keypoints_3d,
model_input_keypoint_type=common
.MODEL_INPUT_KEYPOINT_TYPE_2D_INPUT_AND_3D_PROJECTION,
keypoint_profile_2d=keypoint_profile_2d,
keypoint_profile_3d=keypoint_profile_3d,
azimuth_range=(math.pi / 2.0, math.pi / 2.0),
elevation_range=(-math.pi / 2.0, -math.pi / 2.0),
roll_range=(math.pi, math.pi),
projection_mix_batch_assignment=assignment)
    # Note that the results here were copied from test output; this test is
    # mainly meant to protect executability and exercise batch mixing. The
    # actual projection accuracy is tested separately.
expected_keypoints_2d = [
[
[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0], [1.0, 2.0],
[3.0, 4.0], [5.0, 6.0], [7.0, 8.0], [1.0, 2.0], [3.0, 4.0],
[5.0, 6.0], [7.0, 8.0], [1.0, 2.0]],
[[11.0, 12.0], [13.0, 14.0], [15.0, 16.0], [17.0, 18.0], [1.0, 2.0],
[3.0, 4.0], [5.0, 6.0], [7.0, 8.0], [1.0, 2.0], [3.0, 4.0],
[5.0, 6.0], [7.0, 8.0], [1.0, 2.0]],
],
[
[[-0.0098562, -0.0098562], [0.17552288, 0.1755229],
[0.16192658, 0.1619266], [0.14864118, 0.1486412],
[0.13565616, 0.13565616], [0.17552288, 0.1755229],
[0.16192658, 0.1619266], [0.13565616, 0.13565616],
[0.17552288, 0.1755229], [0.16192658, 0.1619266],
[0.14864118, 0.1486412], [0.13565616, 0.13565616],
[0.13565616, 0.13565616]],
[[-0.00734754, 0.02939016], [0.17376201, 0.17376202],
[0.16365208, 0.16365209], [0.15371482, 0.15371484],
[0.14394586, 0.14394587], [0.17376201, 0.17376202],
[0.16365208, 0.16365209], [0.14394586, 0.14394587],
[0.17376201, 0.17376202], [0.16365208, 0.16365209],
[0.15371482, 0.15371484], [0.14394586, 0.14394587],
[0.14394586, 0.14394587]],
],
[
[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0], [1.0, 2.0],
[3.0, 4.0], [5.0, 6.0], [7.0, 8.0], [1.0, 2.0], [3.0, 4.0],
[5.0, 6.0], [7.0, 8.0], [1.0, 2.0]],
[[-0.03107818, -0.03107818], [0.19100799, 0.19100802],
[0.14718375, 0.14718376], [0.10647015, 0.10647015],
[0.06854735, 0.06854735], [0.19100799, 0.19100802],
[0.14718375, 0.14718376], [0.06854735, 0.06854735],
[0.19100799, 0.19100802], [0.14718375, 0.14718376],
[0.10647015, 0.10647015], [0.06854735, 0.06854735],
[0.06854735, 0.06854735]],
],
[
[[-0.0098562, -0.0098562], [0.17552288, 0.1755229],
[0.16192658, 0.1619266], [0.14864118, 0.1486412],
[0.13565616, 0.13565616], [0.17552288, 0.1755229],
[0.16192658, 0.1619266], [0.13565616, 0.13565616],
[0.17552288, 0.1755229], [0.16192658, 0.1619266],
[0.14864118, 0.1486412], [0.13565616, 0.13565616],
[0.13565616, 0.13565616]],
[[41.0, 42.0], [43.0, 44.0], [45.0, 46.0], [47.0, 48.0], [1.0, 2.0],
[3.0, 4.0], [5.0, 6.0], [7.0, 8.0], [1.0, 2.0], [3.0, 4.0],
[5.0, 6.0], [7.0, 8.0], [1.0, 2.0]],
],
]
self.assertAllClose(keypoints_2d, expected_keypoints_2d)
def test_create_model_keypoints_2d_input(self):
keypoint_profile_2d = keypoint_profiles.KeypointProfile2D(
name='Dummy',
keypoint_names=[('A', keypoint_profiles.LeftRightType.UNKNOWN),
('B', keypoint_profiles.LeftRightType.UNKNOWN),
('C', keypoint_profiles.LeftRightType.UNKNOWN)],
offset_keypoint_names=['A', 'B'],
scale_keypoint_name_pairs=[(['A', 'B'], ['B']), (['A'], ['B', 'C'])],
segment_name_pairs=[],
scale_distance_reduction_fn=tf.math.reduce_sum,
scale_unit=1.0)
# Shape = [2, 3, 2].
keypoints_2d = tf.constant([[[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]],
[[10.0, 11.0], [12.0, 13.0], [14.0, 15.0]]])
keypoint_masks_2d = tf.ones([2, 3], dtype=tf.float32)
# Shape = [2, 6].
features, side_outputs = input_generator.create_model_input(
keypoints_2d,
keypoint_masks_2d,
keypoints_3d=None,
model_input_keypoint_type=common.MODEL_INPUT_KEYPOINT_TYPE_2D_INPUT,
normalize_keypoints_2d=True,
keypoint_profile_2d=keypoint_profile_2d)
sqrt_2 = 1.414213562
self.assertAllClose(features,
[[
-0.25 / sqrt_2, -0.25 / sqrt_2, 0.25 / sqrt_2,
0.25 / sqrt_2, 0.75 / sqrt_2, 0.75 / sqrt_2
],
[
-0.25 / sqrt_2, -0.25 / sqrt_2, 0.25 / sqrt_2,
0.25 / sqrt_2, 0.75 / sqrt_2, 0.75 / sqrt_2
]])
self.assertCountEqual(side_outputs.keys(), [
'preprocessed_keypoints_2d', 'preprocessed_keypoint_masks_2d',
'offset_points_2d', 'scale_distances_2d'
])
self.assertAllClose(
side_outputs['preprocessed_keypoints_2d'],
[[[-0.25 / sqrt_2, -0.25 / sqrt_2], [0.25 / sqrt_2, 0.25 / sqrt_2],
[0.75 / sqrt_2, 0.75 / sqrt_2]],
[[-0.25 / sqrt_2, -0.25 / sqrt_2], [0.25 / sqrt_2, 0.25 / sqrt_2],
[0.75 / sqrt_2, 0.75 / sqrt_2]]])
self.assertAllClose(side_outputs['preprocessed_keypoint_masks_2d'],
[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]])
self.assertAllClose(side_outputs['offset_points_2d'],
[[[1.0, 2.0]], [[11.0, 12.0]]])
self.assertAllClose(side_outputs['scale_distances_2d'],
[[[4.0 * sqrt_2]], [[4.0 * sqrt_2]]])
def test_create_model_masked_keypoints_2d_input(self):
keypoints_2d = tf.constant([[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
[[7.0, 8.0], [9.0, 10.0], [11.0, 12.0]]])
keypoint_masks_2d = tf.constant([[1.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
features, side_outputs = input_generator.create_model_input(
keypoints_2d,
keypoint_masks_2d,
keypoints_3d=None,
model_input_keypoint_type=common.MODEL_INPUT_KEYPOINT_TYPE_2D_INPUT,
model_input_keypoint_mask_type=(
common.MODEL_INPUT_KEYPOINT_MASK_TYPE_MASK_KEYPOINTS),
normalize_keypoints_2d=False,
rescale_features=True)
expected_features = np.array([[1.0, 2.0, 0.0, 0.0, 5.0, 6.0],
[0.0, 0.0, 0.0, 0.0, 11.0, 12.0]])
expected_features *= np.array([[3.0 / 2.0], [3.0 / 1.0]])
self.assertAllClose(features, expected_features)
self.assertCountEqual(
side_outputs.keys(),
{'preprocessed_keypoints_2d', 'preprocessed_keypoint_masks_2d'})
self.assertAllClose(side_outputs['preprocessed_keypoints_2d'],
[[[1.0, 2.0], [0.0, 0.0], [5.0, 6.0]],
[[0.0, 0.0], [0.0, 0.0], [11.0, 12.0]]])
self.assertAllClose(side_outputs['preprocessed_keypoint_masks_2d'],
[[1.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
def test_create_model_keypoints_with_masks_2d_input(self):
keypoints_2d = tf.constant([[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
[[7.0, 8.0], [9.0, 10.0], [11.0, 12.0]]])
keypoint_masks_2d = tf.constant([[1.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
features, side_outputs = input_generator.create_model_input(
keypoints_2d,
keypoint_masks_2d,
keypoints_3d=None,
model_input_keypoint_type=common.MODEL_INPUT_KEYPOINT_TYPE_2D_INPUT,
model_input_keypoint_mask_type=(
common.MODEL_INPUT_KEYPOINT_MASK_TYPE_AS_INPUT),
normalize_keypoints_2d=False,
rescale_features=True)
expected_features = np.array(
[[1.0, 2.0, 1.0, 3.0, 4.0, 0.0, 5.0, 6.0, 1.0],
[7.0, 8.0, 0.0, 9.0, 10.0, 0.0, 11.0, 12.0, 1.0]])
expected_features *= np.array([[3.0 / 2.0], [3.0 / 1.0]])
self.assertAllClose(features, expected_features)
self.assertCountEqual(
side_outputs.keys(),
{'preprocessed_keypoints_2d', 'preprocessed_keypoint_masks_2d'})
self.assertAllClose(side_outputs['preprocessed_keypoints_2d'],
[[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
[[7.0, 8.0], [9.0, 10.0], [11.0, 12.0]]])
self.assertAllClose(side_outputs['preprocessed_keypoint_masks_2d'],
[[1.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
def test_create_model_masked_keypoints_with_masks_2d_input(self):
keypoints_2d = tf.constant([[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
[[7.0, 8.0], [9.0, 10.0], [11.0, 12.0]]])
keypoint_masks_2d = tf.constant([[1.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
features, side_outputs = input_generator.create_model_input(
keypoints_2d,
keypoint_masks_2d,
keypoints_3d=None,
model_input_keypoint_type=common.MODEL_INPUT_KEYPOINT_TYPE_2D_INPUT,
model_input_keypoint_mask_type=(
common.MODEL_INPUT_KEYPOINT_MASK_TYPE_MASK_KEYPOINTS_AND_AS_INPUT),
normalize_keypoints_2d=False,
rescale_features=True)
expected_features = np.array(
[[1.0, 2.0, 1.0, 0.0, 0.0, 0.0, 5.0, 6.0, 1.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 11.0, 12.0, 1.0]])
expected_features *= np.array([[3.0 / 2.0], [3.0 / 1.0]])
self.assertAllClose(features, expected_features)
self.assertCountEqual(
side_outputs.keys(),
{'preprocessed_keypoints_2d', 'preprocessed_keypoint_masks_2d'})
self.assertAllClose(side_outputs['preprocessed_keypoints_2d'],
[[[1.0, 2.0], [0.0, 0.0], [5.0, 6.0]],
[[0.0, 0.0], [0.0, 0.0], [11.0, 12.0]]])
self.assertAllClose(side_outputs['preprocessed_keypoint_masks_2d'],
[[1.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
def test_create_model_input_with_instance_keypoint_dropout(self):
# Shape = [2, 3, 4, 2].
keypoints_2d = tf.constant([
[[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]],
[[11.0, 12.0], [13.0, 14.0], [15.0, 16.0], [17.0, 18.0]],
[[21.0, 22.0], [23.0, 24.0], [25.0, 26.0], [27.0, 28.0]]],
[[[31.0, 32.0], [33.0, 34.0], [35.0, 36.0], [37.0, 38.0]],
[[41.0, 42.0], [43.0, 44.0], [45.0, 46.0], [47.0, 48.0]],
[[51.0, 52.0], [53.0, 54.0], [55.0, 56.0], [57.0, 58.0]]],
])
# Shape = [2, 3, 4].
keypoint_masks_2d = tf.constant([[
[0.0, 0.0, 1.0, 1.0],
[1.0, 1.0, 1.0, 1.0],
[1.0, 1.0, 1.0, 1.0],
], [
[1.0, 1.0, 1.0, 1.0],
[1.0, 1.0, 1.0, 0.0],
[0.0, 0.0, 1.0, 1.0],
]])
# Shape = [2, 3, 12].
features, side_outputs = input_generator.create_model_input(
keypoints_2d,
keypoint_masks_2d,
keypoints_3d=None,
model_input_keypoint_type=common.MODEL_INPUT_KEYPOINT_TYPE_2D_INPUT,
model_input_keypoint_mask_type=(
common.MODEL_INPUT_KEYPOINT_MASK_TYPE_MASK_KEYPOINTS_AND_AS_INPUT),
normalize_keypoints_2d=False,
keypoint_dropout_probs=(0.1, 0.8),
rescale_features=True,
seed=0)
self.assertCountEqual(
side_outputs.keys(),
{'preprocessed_keypoints_2d', 'preprocessed_keypoint_masks_2d'})
expected_features = np.array([
[[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 5.0, 6.0, 1.0, 7.0, 8.0, 1.0],
[11.0, 12.0, 1.0, 13.0, 14.0, 1.0, 15.0, 16.0, 1.0, 17.0, 18.0, 1.0],
[21.0, 22.0, 1.0, 23.0, 24.0, 1.0, 25.0, 26.0, 1.0, 27.0, 28.0, 1.0]],
[[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 35.0, 36.0, 1.0, 0.0, 0.0, 0.0],
[41.0, 42.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 55.0, 56.0, 1.0, 57.0, 58.0, 1.0]],
])
expected_features *= np.array([[[4.0 / 2.0], [4.0 / 4.0], [4.0 / 4.0]],
[[4.0 / 1.0], [4.0 / 1.0], [4.0 / 2.0]]])
self.assertAllClose(features, expected_features)
self.assertAllClose(
side_outputs['preprocessed_keypoints_2d'],
[[[[0.0, 0.0], [0.0, 0.0], [5.0, 6.0], [7.0, 8.0]],
[[11.0, 12.0], [13.0, 14.0], [15.0, 16.0], [17.0, 18.0]],
[[21.0, 22.0], [23.0, 24.0], [25.0, 26.0], [27.0, 28.0]]],
[[[0.0, 0.0], [0.0, 0.0], [35.0, 36.0], [0.0, 0.0]],
[[41.0, 42.0], [0.0, 0.0], [0.0, 0.0], [0.0, 0.0]],
[[0.0, 0.0], [0.0, 0.0], [55.0, 56.0], [57.0, 58.0]]]])
self.assertAllClose(
side_outputs['preprocessed_keypoint_masks_2d'],
[[[0.0, 0.0, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0]],
[[0.0, 0.0, 1.0, 0.0], [1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 1.0, 1.0]]])
def test_create_model_sequence_input_with_instance_keypoint_dropout(self):
# Shape = [2, 3, 4, 2].
keypoints_2d = tf.constant([
[[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]],
[[11.0, 12.0], [13.0, 14.0], [15.0, 16.0], [17.0, 18.0]],
[[21.0, 22.0], [23.0, 24.0], [25.0, 26.0], [27.0, 28.0]]],
[[[31.0, 32.0], [33.0, 34.0], [35.0, 36.0], [37.0, 38.0]],
[[41.0, 42.0], [43.0, 44.0], [45.0, 46.0], [47.0, 48.0]],
[[51.0, 52.0], [53.0, 54.0], [55.0, 56.0], [57.0, 58.0]]],
])
# Shape = [2, 3, 4].
keypoint_masks_2d = tf.constant([[
[0.0, 0.0, 1.0, 1.0],
[1.0, 1.0, 1.0, 1.0],
[1.0, 1.0, 1.0, 1.0],
], [
[1.0, 1.0, 1.0, 1.0],
[1.0, 1.0, 1.0, 0.0],
[0.0, 0.0, 1.0, 1.0],
]])
# Shape = [2, 3, 12].
features, side_outputs = input_generator.create_model_input(
keypoints_2d,
keypoint_masks_2d,
keypoints_3d=None,
model_input_keypoint_type=common.MODEL_INPUT_KEYPOINT_TYPE_2D_INPUT,
model_input_keypoint_mask_type=(
common.MODEL_INPUT_KEYPOINT_MASK_TYPE_MASK_KEYPOINTS_AND_AS_INPUT),
normalize_keypoints_2d=False,
keypoint_dropout_probs=(0.1, 0.8, 0.5),
rescale_features=True,
sequential_inputs=True,
seed=0)
self.assertCountEqual(
side_outputs.keys(),
{'preprocessed_keypoints_2d', 'preprocessed_keypoint_masks_2d'})
self.assertAllEqual(features.shape.as_list(), [2, 3, 12])
self.assertAllEqual(
side_outputs['preprocessed_keypoints_2d'].shape.as_list(), [2, 3, 4, 2])
self.assertAllEqual(
side_outputs['preprocessed_keypoint_masks_2d'].shape.as_list(),
[2, 3, 4])
self.assertAllClose(side_outputs['preprocessed_keypoints_2d'],
[[[[0., 0.], [0., 0.], [5., 6.], [7., 8.]],
[[11., 12.], [13., 14.], [15., 16.], [17., 18.]],
[[21., 22.], [23., 24.], [25., 26.], [27., 28.]]],
[[[31., 32.], [33., 34.], [35., 36.], [0., 0.]],
[[41., 42.], [43., 44.], [45., 46.], [0., 0.]],
[[0., 0.], [0., 0.], [55., 56.], [57., 58.]]]])
self.assertAllClose(
side_outputs['preprocessed_keypoint_masks_2d'],
[[[0., 0., 1., 1.], [1., 1., 1., 1.], [1., 1., 1., 1.]],
[[1., 1., 1., 0.], [1., 1., 1., 0.], [0., 0., 1., 1.]]])
def test_create_model_input_with_on_mask_for_non_anchors(self):
# Shape = [2, 3, 4, 2].
keypoints_2d = tf.constant([
[[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]],
[[11.0, 12.0], [13.0, 14.0], [15.0, 16.0], [17.0, 18.0]],
[[21.0, 22.0], [23.0, 24.0], [25.0, 26.0], [27.0, 28.0]]],
[[[31.0, 32.0], [33.0, 34.0], [35.0, 36.0], [37.0, 38.0]],
[[41.0, 42.0], [43.0, 44.0], [45.0, 46.0], [47.0, 48.0]],
[[51.0, 52.0], [53.0, 54.0], [55.0, 56.0], [57.0, 58.0]]],
])
# Shape = [2, 3, 4].
keypoint_masks_2d = tf.constant([[
[0.0, 0.0, 1.0, 1.0],
[1.0, 1.0, 1.0, 1.0],
[1.0, 1.0, 1.0, 1.0],
], [
[1.0, 1.0, 1.0, 1.0],
[1.0, 1.0, 1.0, 0.0],
[0.0, 0.0, 1.0, 1.0],
]])
# Shape = [2, 3, 12].
features, side_outputs = input_generator.create_model_input(
keypoints_2d,
keypoint_masks_2d,
keypoints_3d=None,
model_input_keypoint_type=common.MODEL_INPUT_KEYPOINT_TYPE_2D_INPUT,
model_input_keypoint_mask_type=(
common.MODEL_INPUT_KEYPOINT_MASK_TYPE_MASK_KEYPOINTS_AND_AS_INPUT),
normalize_keypoints_2d=False,
keypoint_dropout_probs=(0.1, 0.8),
set_on_mask_for_non_anchors=True,
seed=0)
self.assertCountEqual(
side_outputs.keys(),
{'preprocessed_keypoints_2d', 'preprocessed_keypoint_masks_2d'})
expected_features = np.array([
[[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 5.0, 6.0, 1.0, 7.0, 8.0, 1.0],
[11.0, 12.0, 1.0, 13.0, 14.0, 1.0, 15.0, 16.0, 1.0, 17.0, 18.0, 1.0],
[21.0, 22.0, 1.0, 23.0, 24.0, 1.0, 25.0, 26.0, 1.0, 27.0, 28.0, 1.0]],
[[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 35.0, 36.0, 1.0, 0.0, 0.0, 0.0],
[41.0, 42.0, 1.0, 43.0, 44.0, 1.0, 45.0, 46.0, 1.0, 47.0, 48.0, 1.0],
[51.0, 52.0, 1.0, 53.0, 54.0, 1.0, 55.0, 56.0, 1.0, 57.0, 58.0, 1.0]],
])
self.assertAllClose(features, expected_features)
self.assertAllClose(
side_outputs['preprocessed_keypoints_2d'],
[[[[0.0, 0.0], [0.0, 0.0], [5.0, 6.0], [7.0, 8.0]],
[[11.0, 12.0], [13.0, 14.0], [15.0, 16.0], [17.0, 18.0]],
[[21.0, 22.0], [23.0, 24.0], [25.0, 26.0], [27.0, 28.0]]],
[[[0.0, 0.0], [0.0, 0.0], [35.0, 36.0], [0.0, 0.0]],
[[41.0, 42.0], [43.0, 44.0], [45.0, 46.0], [47.0, 48.0]],
[[51.0, 52.0], [53.0, 54.0], [55.0, 56.0], [57.0, 58.0]]]])
self.assertAllClose(
side_outputs['preprocessed_keypoint_masks_2d'],
[[[0.0, 0.0, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0]],
[[0.0, 0.0, 1.0, 0.0], [1.0, 1.0, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0]]])
def test_create_model_input_with_forced_on_masks(self):
# Shape = [2, 13, 2].
    keypoints_2d = tf.ones([2, 13, 2], dtype=tf.float32)
# Shape = [2, 13].
keypoint_masks_2d = tf.constant(
[[1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0],
[1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0]])
# Shape = [2, 39].
_, side_outputs = input_generator.create_model_input(
keypoints_2d,
keypoint_masks_2d,
keypoints_3d=None,
model_input_keypoint_type=common.MODEL_INPUT_KEYPOINT_TYPE_2D_INPUT,
model_input_keypoint_mask_type=(
common.MODEL_INPUT_KEYPOINT_MASK_TYPE_MASK_KEYPOINTS_AND_AS_INPUT),
normalize_keypoints_2d=False,
keypoint_profile_2d=keypoint_profiles.Std13KeypointProfile2D(),
keypoint_dropout_probs=(0.5, 0.5),
forced_mask_on_part_names=['HEAD', 'LEFT_SHOULDER', 'LEFT_ANKLE'],
seed=0)
self.assertCountEqual(
side_outputs.keys(),
{'preprocessed_keypoints_2d', 'preprocessed_keypoint_masks_2d'})
expected_preprocessed_keypoint_masks_2d = np.array(
[[1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0],
[1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0]])
self.assertAllClose(side_outputs['preprocessed_keypoint_masks_2d'],
expected_preprocessed_keypoint_masks_2d)
def test_create_model_input_with_forced_off_masks(self):
# Shape = [2, 13, 2].
    keypoints_2d = tf.ones([2, 13, 2], dtype=tf.float32)
# Shape = [2, 13].
keypoint_masks_2d = tf.constant(
[[1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0],
[1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0]])
# Shape = [2, 39].
_, side_outputs = input_generator.create_model_input(
keypoints_2d,
keypoint_masks_2d,
keypoints_3d=None,
model_input_keypoint_type=common.MODEL_INPUT_KEYPOINT_TYPE_2D_INPUT,
model_input_keypoint_mask_type=(
common.MODEL_INPUT_KEYPOINT_MASK_TYPE_MASK_KEYPOINTS_AND_AS_INPUT),
normalize_keypoints_2d=False,
keypoint_profile_2d=keypoint_profiles.Std13KeypointProfile2D(),
keypoint_dropout_probs=(0.5, 0.5),
forced_mask_off_part_names=['HEAD', 'LEFT_SHOULDER', 'LEFT_ANKLE'],
seed=0)
self.assertCountEqual(
side_outputs.keys(),
{'preprocessed_keypoints_2d', 'preprocessed_keypoint_masks_2d'})
expected_preprocessed_keypoint_masks_2d = np.array(
[[0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0]])
self.assertAllClose(side_outputs['preprocessed_keypoint_masks_2d'],
expected_preprocessed_keypoint_masks_2d)

if __name__ == '__main__':
  tf.test.main()

# egret/thirdparty/pglib_opf_files.py (BSD-3-Clause)

# ___________________________________________________________________________
#
# EGRET: Electrical Grid Research and Engineering Tools
# Copyright 2019 National Technology & Engineering Solutions of Sandia, LLC
# (NTESS). Under the terms of Contract DE-NA0003525 with NTESS, the U.S.
# Government retains certain rights in this software.
# This software is distributed under the Revised BSD License.
# ___________________________________________________________________________
"""
This is the list of expected files in the pglib ZIP archive.
If the archive changes, this list will need to be updated.
"""
pglib_files_to_extract = [
'pglib-opf-master/BASELINE.md',
'pglib-opf-master/CHANGELOG.md',
'pglib-opf-master/LICENSE',
'pglib-opf-master/MODEL.png',
'pglib-opf-master/MODEL.tex',
'pglib-opf-master/README.md',
'pglib-opf-master/api/pglib_opf_case10000_goc__api.m',
'pglib-opf-master/api/pglib_opf_case10480_goc__api.m',
'pglib-opf-master/api/pglib_opf_case118_ieee__api.m',
'pglib-opf-master/api/pglib_opf_case1354_pegase__api.m',
'pglib-opf-master/api/pglib_opf_case13659_pegase__api.m',
'pglib-opf-master/api/pglib_opf_case14_ieee__api.m',
'pglib-opf-master/api/pglib_opf_case162_ieee_dtc__api.m',
'pglib-opf-master/api/pglib_opf_case179_goc__api.m',
'pglib-opf-master/api/pglib_opf_case1888_rte__api.m',
'pglib-opf-master/api/pglib_opf_case19402_goc__api.m',
'pglib-opf-master/api/pglib_opf_case1951_rte__api.m',
'pglib-opf-master/api/pglib_opf_case2000_goc__api.m',
'pglib-opf-master/api/pglib_opf_case200_activ__api.m',
'pglib-opf-master/api/pglib_opf_case2312_goc__api.m',
'pglib-opf-master/api/pglib_opf_case2383wp_k__api.m',
'pglib-opf-master/api/pglib_opf_case240_pserc__api.m',
'pglib-opf-master/api/pglib_opf_case24464_goc__api.m',
'pglib-opf-master/api/pglib_opf_case24_ieee_rts__api.m',
'pglib-opf-master/api/pglib_opf_case2736sp_k__api.m',
'pglib-opf-master/api/pglib_opf_case2737sop_k__api.m',
'pglib-opf-master/api/pglib_opf_case2742_goc__api.m',
'pglib-opf-master/api/pglib_opf_case2746wop_k__api.m',
'pglib-opf-master/api/pglib_opf_case2746wp_k__api.m',
'pglib-opf-master/api/pglib_opf_case2848_rte__api.m',
'pglib-opf-master/api/pglib_opf_case2853_sdet__api.m',
'pglib-opf-master/api/pglib_opf_case2868_rte__api.m',
'pglib-opf-master/api/pglib_opf_case2869_pegase__api.m',
'pglib-opf-master/api/pglib_opf_case30000_goc__api.m',
'pglib-opf-master/api/pglib_opf_case300_ieee__api.m',
'pglib-opf-master/api/pglib_opf_case3012wp_k__api.m',
'pglib-opf-master/api/pglib_opf_case3022_goc__api.m',
'pglib-opf-master/api/pglib_opf_case30_as__api.m',
'pglib-opf-master/api/pglib_opf_case30_ieee__api.m',
'pglib-opf-master/api/pglib_opf_case3120sp_k__api.m',
'pglib-opf-master/api/pglib_opf_case3375wp_k__api.m',
'pglib-opf-master/api/pglib_opf_case3970_goc__api.m',
'pglib-opf-master/api/pglib_opf_case39_epri__api.m',
'pglib-opf-master/api/pglib_opf_case3_lmbd__api.m',
'pglib-opf-master/api/pglib_opf_case4020_goc__api.m',
'pglib-opf-master/api/pglib_opf_case4601_goc__api.m',
'pglib-opf-master/api/pglib_opf_case4619_goc__api.m',
'pglib-opf-master/api/pglib_opf_case4661_sdet__api.m',
'pglib-opf-master/api/pglib_opf_case4837_goc__api.m',
'pglib-opf-master/api/pglib_opf_case4917_goc__api.m',
'pglib-opf-master/api/pglib_opf_case500_goc__api.m',
'pglib-opf-master/api/pglib_opf_case57_ieee__api.m',
'pglib-opf-master/api/pglib_opf_case588_sdet__api.m',
'pglib-opf-master/api/pglib_opf_case5_pjm__api.m',
'pglib-opf-master/api/pglib_opf_case6468_rte__api.m',
'pglib-opf-master/api/pglib_opf_case6470_rte__api.m',
'pglib-opf-master/api/pglib_opf_case6495_rte__api.m',
'pglib-opf-master/api/pglib_opf_case6515_rte__api.m',
'pglib-opf-master/api/pglib_opf_case73_ieee_rts__api.m',
'pglib-opf-master/api/pglib_opf_case793_goc__api.m',
'pglib-opf-master/api/pglib_opf_case89_pegase__api.m',
'pglib-opf-master/api/pglib_opf_case9241_pegase__api.m',
'pglib-opf-master/api/pglib_opf_case9591_goc__api.m',
'pglib-opf-master/pglib_opf_case10000_goc.m',
'pglib-opf-master/pglib_opf_case10480_goc.m',
'pglib-opf-master/pglib_opf_case118_ieee.m',
'pglib-opf-master/pglib_opf_case1354_pegase.m',
'pglib-opf-master/pglib_opf_case13659_pegase.m',
'pglib-opf-master/pglib_opf_case14_ieee.m',
'pglib-opf-master/pglib_opf_case162_ieee_dtc.m',
'pglib-opf-master/pglib_opf_case179_goc.m',
'pglib-opf-master/pglib_opf_case1888_rte.m',
'pglib-opf-master/pglib_opf_case19402_goc.m',
'pglib-opf-master/pglib_opf_case1951_rte.m',
'pglib-opf-master/pglib_opf_case2000_goc.m',
'pglib-opf-master/pglib_opf_case200_activ.m',
'pglib-opf-master/pglib_opf_case2312_goc.m',
'pglib-opf-master/pglib_opf_case2383wp_k.m',
'pglib-opf-master/pglib_opf_case240_pserc.m',
'pglib-opf-master/pglib_opf_case24464_goc.m',
'pglib-opf-master/pglib_opf_case24_ieee_rts.m',
'pglib-opf-master/pglib_opf_case2736sp_k.m',
'pglib-opf-master/pglib_opf_case2737sop_k.m',
'pglib-opf-master/pglib_opf_case2742_goc.m',
'pglib-opf-master/pglib_opf_case2746wop_k.m',
'pglib-opf-master/pglib_opf_case2746wp_k.m',
'pglib-opf-master/pglib_opf_case2848_rte.m',
'pglib-opf-master/pglib_opf_case2853_sdet.m',
'pglib-opf-master/pglib_opf_case2868_rte.m',
'pglib-opf-master/pglib_opf_case2869_pegase.m',
'pglib-opf-master/pglib_opf_case30000_goc.m',
'pglib-opf-master/pglib_opf_case300_ieee.m',
'pglib-opf-master/pglib_opf_case3012wp_k.m',
'pglib-opf-master/pglib_opf_case3022_goc.m',
'pglib-opf-master/pglib_opf_case30_as.m',
'pglib-opf-master/pglib_opf_case30_ieee.m',
'pglib-opf-master/pglib_opf_case3120sp_k.m',
'pglib-opf-master/pglib_opf_case3375wp_k.m',
'pglib-opf-master/pglib_opf_case3970_goc.m',
'pglib-opf-master/pglib_opf_case39_epri.m',
'pglib-opf-master/pglib_opf_case3_lmbd.m',
'pglib-opf-master/pglib_opf_case4020_goc.m',
'pglib-opf-master/pglib_opf_case4601_goc.m',
'pglib-opf-master/pglib_opf_case4619_goc.m',
'pglib-opf-master/pglib_opf_case4661_sdet.m',
'pglib-opf-master/pglib_opf_case4837_goc.m',
'pglib-opf-master/pglib_opf_case4917_goc.m',
'pglib-opf-master/pglib_opf_case500_goc.m',
'pglib-opf-master/pglib_opf_case57_ieee.m',
'pglib-opf-master/pglib_opf_case588_sdet.m',
'pglib-opf-master/pglib_opf_case5_pjm.m',
'pglib-opf-master/pglib_opf_case6468_rte.m',
'pglib-opf-master/pglib_opf_case6470_rte.m',
'pglib-opf-master/pglib_opf_case6495_rte.m',
'pglib-opf-master/pglib_opf_case6515_rte.m',
'pglib-opf-master/pglib_opf_case73_ieee_rts.m',
'pglib-opf-master/pglib_opf_case793_goc.m',
'pglib-opf-master/pglib_opf_case89_pegase.m',
'pglib-opf-master/pglib_opf_case9241_pegase.m',
'pglib-opf-master/pglib_opf_case9591_goc.m',
'pglib-opf-master/sad/pglib_opf_case10000_goc__sad.m',
'pglib-opf-master/sad/pglib_opf_case10480_goc__sad.m',
'pglib-opf-master/sad/pglib_opf_case118_ieee__sad.m',
'pglib-opf-master/sad/pglib_opf_case1354_pegase__sad.m',
'pglib-opf-master/sad/pglib_opf_case13659_pegase__sad.m',
'pglib-opf-master/sad/pglib_opf_case14_ieee__sad.m',
'pglib-opf-master/sad/pglib_opf_case162_ieee_dtc__sad.m',
'pglib-opf-master/sad/pglib_opf_case179_goc__sad.m',
'pglib-opf-master/sad/pglib_opf_case1888_rte__sad.m',
'pglib-opf-master/sad/pglib_opf_case19402_goc__sad.m',
'pglib-opf-master/sad/pglib_opf_case1951_rte__sad.m',
'pglib-opf-master/sad/pglib_opf_case2000_goc__sad.m',
'pglib-opf-master/sad/pglib_opf_case200_activ__sad.m',
'pglib-opf-master/sad/pglib_opf_case2312_goc__sad.m',
'pglib-opf-master/sad/pglib_opf_case2383wp_k__sad.m',
'pglib-opf-master/sad/pglib_opf_case240_pserc__sad.m',
'pglib-opf-master/sad/pglib_opf_case24464_goc__sad.m',
'pglib-opf-master/sad/pglib_opf_case24_ieee_rts__sad.m',
'pglib-opf-master/sad/pglib_opf_case2736sp_k__sad.m',
'pglib-opf-master/sad/pglib_opf_case2737sop_k__sad.m',
'pglib-opf-master/sad/pglib_opf_case2742_goc__sad.m',
'pglib-opf-master/sad/pglib_opf_case2746wop_k__sad.m',
'pglib-opf-master/sad/pglib_opf_case2746wp_k__sad.m',
'pglib-opf-master/sad/pglib_opf_case2848_rte__sad.m',
'pglib-opf-master/sad/pglib_opf_case2853_sdet__sad.m',
'pglib-opf-master/sad/pglib_opf_case2868_rte__sad.m',
'pglib-opf-master/sad/pglib_opf_case2869_pegase__sad.m',
'pglib-opf-master/sad/pglib_opf_case30000_goc__sad.m',
'pglib-opf-master/sad/pglib_opf_case300_ieee__sad.m',
'pglib-opf-master/sad/pglib_opf_case3012wp_k__sad.m',
'pglib-opf-master/sad/pglib_opf_case3022_goc__sad.m',
'pglib-opf-master/sad/pglib_opf_case30_as__sad.m',
'pglib-opf-master/sad/pglib_opf_case30_ieee__sad.m',
'pglib-opf-master/sad/pglib_opf_case3120sp_k__sad.m',
'pglib-opf-master/sad/pglib_opf_case3375wp_k__sad.m',
'pglib-opf-master/sad/pglib_opf_case3970_goc__sad.m',
'pglib-opf-master/sad/pglib_opf_case39_epri__sad.m',
'pglib-opf-master/sad/pglib_opf_case3_lmbd__sad.m',
'pglib-opf-master/sad/pglib_opf_case4020_goc__sad.m',
'pglib-opf-master/sad/pglib_opf_case4601_goc__sad.m',
'pglib-opf-master/sad/pglib_opf_case4619_goc__sad.m',
'pglib-opf-master/sad/pglib_opf_case4661_sdet__sad.m',
'pglib-opf-master/sad/pglib_opf_case4837_goc__sad.m',
'pglib-opf-master/sad/pglib_opf_case4917_goc__sad.m',
'pglib-opf-master/sad/pglib_opf_case500_goc__sad.m',
'pglib-opf-master/sad/pglib_opf_case57_ieee__sad.m',
'pglib-opf-master/sad/pglib_opf_case588_sdet__sad.m',
'pglib-opf-master/sad/pglib_opf_case5_pjm__sad.m',
'pglib-opf-master/sad/pglib_opf_case6468_rte__sad.m',
'pglib-opf-master/sad/pglib_opf_case6470_rte__sad.m',
'pglib-opf-master/sad/pglib_opf_case6495_rte__sad.m',
'pglib-opf-master/sad/pglib_opf_case6515_rte__sad.m',
'pglib-opf-master/sad/pglib_opf_case73_ieee_rts__sad.m',
'pglib-opf-master/sad/pglib_opf_case793_goc__sad.m',
'pglib-opf-master/sad/pglib_opf_case89_pegase__sad.m',
'pglib-opf-master/sad/pglib_opf_case9241_pegase__sad.m',
'pglib-opf-master/sad/pglib_opf_case9591_goc__sad.m',
]
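A consumer of this list will typically check a downloaded archive against it before extracting. A minimal sketch of such a check using the standard `zipfile` module (the helper name `missing_members` and the idea of pre-validating are illustrative assumptions, not part of Egret):

```python
import zipfile

def missing_members(zip_source, expected_members):
    """Return the expected member names that are absent from the archive.

    zip_source may be a filesystem path or a file-like object, as accepted
    by zipfile.ZipFile.
    """
    with zipfile.ZipFile(zip_source) as archive:
        present = set(archive.namelist())
    return [name for name in expected_members if name not in present]
```

An empty result means every expected file is present, and extraction of `pglib_files_to_extract` can proceed.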

# osisoft/pidevclub/piwebapi/api/stream_api.py (Apache-2.0)

# coding: utf-8
"""
Copyright 2018 OSIsoft, LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
<http://www.apache.org/licenses/LICENSE-2.0>
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from __future__ import absolute_import
from six import iteritems


class StreamApi(object):
    def __init__(self, api_client):
        self.api_client = api_client

    def get_channel(self, web_id, heartbeat_rate=None, include_initial_values=None, web_id_type=None, **kwargs):
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.get_channel_with_http_info(web_id, heartbeat_rate, include_initial_values, web_id_type, **kwargs)
        else:
            (data) = self.get_channel_with_http_info(web_id, heartbeat_rate, include_initial_values, web_id_type, **kwargs)
            return data

    def get_channel_with_http_info(self, web_id, heartbeat_rate=None, include_initial_values=None, web_id_type=None, **kwargs):
        all_params = ['web_id', 'heartbeat_rate', 'include_initial_values', 'web_id_type']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_channel_with_http_info" % key
                )
            params[key] = val
        del params['kwargs']

        if ('web_id' not in params) or (params['web_id'] is None):
            raise ValueError("Missing the required parameter `web_id` when calling `get_channel_with_http_info`")

        collection_formats = {}
        query_params = {}
        path_params = {}
        header_params = {}
        form_params = []
        local_var_files = {}
        body_params = None
        if 'web_id' in params:
            if (params['web_id'] is not None):
                path_params['webId'] = params['web_id']
        if 'heartbeat_rate' in params:
            if (params['heartbeat_rate'] is not None):
                query_params['heartbeatRate'] = params['heartbeat_rate']
        if 'include_initial_values' in params:
            if (params['include_initial_values'] is not None):
                query_params['includeInitialValues'] = params['include_initial_values']
        if 'web_id_type' in params:
            if (params['web_id_type'] is not None):
                query_params['webIdType'] = params['web_id_type']
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'text/json', 'text/html', 'application/x-ms-application'])
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type([])

        return self.api_client.call_api('/streams/{webId}/channel', 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='PIItemsStreamValues',
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
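Every `*_with_http_info` method in this generated client repeats the same guard: capture `locals()`, then reject any keyword argument outside an allow-list before dispatching. The pattern can be sketched in isolation (the function name `validate_kwargs` is illustrative, not part of the PI Web API client):

```python
def validate_kwargs(all_params, method_name, **kwargs):
    """Mirror the generated client's keyword-argument guard: any key not in
    the allow-list raises TypeError; otherwise the kwargs pass through."""
    for key in kwargs:
        if key not in all_params:
            raise TypeError(
                "Got an unexpected keyword argument '%s'"
                " to method %s" % (key, method_name))
    return kwargs
```

The guard surfaces typos in optional keyword arguments immediately instead of silently ignoring them.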

    def get_end(self, web_id, desired_units=None, selected_fields=None, **kwargs):
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.get_end_with_http_info(web_id, desired_units, selected_fields, **kwargs)
        else:
            (data) = self.get_end_with_http_info(web_id, desired_units, selected_fields, **kwargs)
            return data

    def get_end_with_http_info(self, web_id, desired_units=None, selected_fields=None, **kwargs):
        all_params = ['web_id', 'desired_units', 'selected_fields']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_end_with_http_info" % key
                )
            params[key] = val
        del params['kwargs']

        if ('web_id' not in params) or (params['web_id'] is None):
            raise ValueError("Missing the required parameter `web_id` when calling `get_end_with_http_info`")

        collection_formats = {}
        query_params = {}
        path_params = {}
        header_params = {}
        form_params = []
        local_var_files = {}
        body_params = None
        if 'web_id' in params:
            if (params['web_id'] is not None):
                path_params['webId'] = params['web_id']
        if 'desired_units' in params:
            if (params['desired_units'] is not None):
                query_params['desiredUnits'] = params['desired_units']
        if 'selected_fields' in params:
            if (params['selected_fields'] is not None):
                query_params['selectedFields'] = params['selected_fields']
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'text/json', 'text/html', 'application/x-ms-application'])
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type([])

        return self.api_client.call_api('/streams/{webId}/end', 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='PITimedValue',
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

    def get_interpolated(self, web_id, desired_units=None, end_time=None, filter_expression=None, include_filtered_values=None, interval=None, selected_fields=None, start_time=None, sync_time=None, sync_time_boundary_type=None, time_zone=None, **kwargs):
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.get_interpolated_with_http_info(web_id, desired_units, end_time, filter_expression, include_filtered_values, interval, selected_fields, start_time, sync_time, sync_time_boundary_type, time_zone, **kwargs)
        else:
            (data) = self.get_interpolated_with_http_info(web_id, desired_units, end_time, filter_expression, include_filtered_values, interval, selected_fields, start_time, sync_time, sync_time_boundary_type, time_zone, **kwargs)
            return data

    def get_interpolated_with_http_info(self, web_id, desired_units=None, end_time=None, filter_expression=None, include_filtered_values=None, interval=None, selected_fields=None, start_time=None, sync_time=None, sync_time_boundary_type=None, time_zone=None, **kwargs):
        all_params = ['web_id', 'desired_units', 'end_time', 'filter_expression', 'include_filtered_values', 'interval', 'selected_fields', 'start_time', 'sync_time', 'sync_time_boundary_type', 'time_zone']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_interpolated_with_http_info" % key
                )
            params[key] = val
        del params['kwargs']

        if ('web_id' not in params) or (params['web_id'] is None):
            raise ValueError("Missing the required parameter `web_id` when calling `get_interpolated_with_http_info`")

        collection_formats = {}
        query_params = {}
        path_params = {}
        header_params = {}
        form_params = []
        local_var_files = {}
        body_params = None
        if 'web_id' in params:
            if (params['web_id'] is not None):
                path_params['webId'] = params['web_id']
        if 'desired_units' in params:
            if (params['desired_units'] is not None):
                query_params['desiredUnits'] = params['desired_units']
        if 'end_time' in params:
            if (params['end_time'] is not None):
                query_params['endTime'] = params['end_time']
        if 'filter_expression' in params:
            if (params['filter_expression'] is not None):
                query_params['filterExpression'] = params['filter_expression']
        if 'include_filtered_values' in params:
            if (params['include_filtered_values'] is not None):
                query_params['includeFilteredValues'] = params['include_filtered_values']
        if 'interval' in params:
            if (params['interval'] is not None):
                query_params['interval'] = params['interval']
        if 'selected_fields' in params:
            if (params['selected_fields'] is not None):
                query_params['selectedFields'] = params['selected_fields']
        if 'start_time' in params:
            if (params['start_time'] is not None):
                query_params['startTime'] = params['start_time']
        if 'sync_time' in params:
            if (params['sync_time'] is not None):
                query_params['syncTime'] = params['sync_time']
        if 'sync_time_boundary_type' in params:
            if (params['sync_time_boundary_type'] is not None):
                query_params['syncTimeBoundaryType'] = params['sync_time_boundary_type']
        if 'time_zone' in params:
            if (params['time_zone'] is not None):
                query_params['timeZone'] = params['time_zone']
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'text/json', 'text/html', 'application/x-ms-application'])
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type([])

        return self.api_client.call_api('/streams/{webId}/interpolated', 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='PITimedValues',
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

    def get_interpolated_at_times(self, web_id, desired_units=None, filter_expression=None, include_filtered_values=None, selected_fields=None, sort_order=None, time=None, time_zone=None, **kwargs):
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.get_interpolated_at_times_with_http_info(web_id, desired_units, filter_expression, include_filtered_values, selected_fields, sort_order, time, time_zone, **kwargs)
        else:
            (data) = self.get_interpolated_at_times_with_http_info(web_id, desired_units, filter_expression, include_filtered_values, selected_fields, sort_order, time, time_zone, **kwargs)
            return data

    def get_interpolated_at_times_with_http_info(self, web_id, desired_units=None, filter_expression=None, include_filtered_values=None, selected_fields=None, sort_order=None, time=None, time_zone=None, **kwargs):
        all_params = ['web_id', 'desired_units', 'filter_expression', 'include_filtered_values', 'selected_fields', 'sort_order', 'time', 'time_zone']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_interpolated_at_times_with_http_info" % key
                )
            params[key] = val
        del params['kwargs']

        if ('web_id' not in params) or (params['web_id'] is None):
            raise ValueError("Missing the required parameter `web_id` when calling `get_interpolated_at_times_with_http_info`")

        collection_formats = {}
        query_params = {}
        path_params = {}
        header_params = {}
        form_params = []
        local_var_files = {}
        body_params = None
        if 'web_id' in params:
            if (params['web_id'] is not None):
                path_params['webId'] = params['web_id']
        if 'desired_units' in params:
            if (params['desired_units'] is not None):
                query_params['desiredUnits'] = params['desired_units']
        if 'filter_expression' in params:
            if (params['filter_expression'] is not None):
                query_params['filterExpression'] = params['filter_expression']
        if 'include_filtered_values' in params:
            if (params['include_filtered_values'] is not None):
                query_params['includeFilteredValues'] = params['include_filtered_values']
        if 'selected_fields' in params:
            if (params['selected_fields'] is not None):
                query_params['selectedFields'] = params['selected_fields']
        if 'sort_order' in params:
            if (params['sort_order'] is not None):
                query_params['sortOrder'] = params['sort_order']
        if 'time' in params:
            if (params['time'] is not None):
                query_params['time'] = params['time']
                collection_formats['time'] = 'multi'
        if 'time_zone' in params:
            if (params['time_zone'] is not None):
                query_params['timeZone'] = params['time_zone']
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'text/json', 'text/html', 'application/x-ms-application'])
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type([])

        return self.api_client.call_api('/streams/{webId}/interpolatedattimes', 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='PITimedValues',
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
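`get_interpolated_at_times` is the one endpoint here that marks a parameter (`time`) with the `'multi'` collection format, meaning the query string repeats the key once per value. A rough illustration of that serialization (this helper is a sketch of the idea, not the client's actual encoder):

```python
from urllib.parse import urlencode

def encode_query(query_params, collection_formats):
    """Expand list-valued parameters marked 'multi' into repeated key=value
    pairs; everything else is encoded as a single pair."""
    pairs = []
    for name, value in query_params.items():
        if collection_formats.get(name) == 'multi' and isinstance(value, (list, tuple)):
            pairs.extend((name, item) for item in value)
        else:
            pairs.append((name, value))
    return urlencode(pairs)
```

So a request for two timestamps would carry `time=...&time=...` rather than a single comma-joined value.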

    def get_plot(self, web_id, desired_units=None, end_time=None, intervals=None, selected_fields=None, start_time=None, time_zone=None, **kwargs):
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.get_plot_with_http_info(web_id, desired_units, end_time, intervals, selected_fields, start_time, time_zone, **kwargs)
        else:
            (data) = self.get_plot_with_http_info(web_id, desired_units, end_time, intervals, selected_fields, start_time, time_zone, **kwargs)
            return data

    def get_plot_with_http_info(self, web_id, desired_units=None, end_time=None, intervals=None, selected_fields=None, start_time=None, time_zone=None, **kwargs):
        all_params = ['web_id', 'desired_units', 'end_time', 'intervals', 'selected_fields', 'start_time', 'time_zone']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_plot_with_http_info" % key
                )
            params[key] = val
        del params['kwargs']

        if ('web_id' not in params) or (params['web_id'] is None):
            raise ValueError("Missing the required parameter `web_id` when calling `get_plot_with_http_info`")

        collection_formats = {}
        query_params = {}
        path_params = {}
        header_params = {}
        form_params = []
        local_var_files = {}
        body_params = None
        if 'web_id' in params:
            if (params['web_id'] is not None):
                path_params['webId'] = params['web_id']
        if 'desired_units' in params:
            if (params['desired_units'] is not None):
                query_params['desiredUnits'] = params['desired_units']
        if 'end_time' in params:
            if (params['end_time'] is not None):
                query_params['endTime'] = params['end_time']
        if 'intervals' in params:
            if (params['intervals'] is not None):
                query_params['intervals'] = params['intervals']
        if 'selected_fields' in params:
            if (params['selected_fields'] is not None):
                query_params['selectedFields'] = params['selected_fields']
        if 'start_time' in params:
            if (params['start_time'] is not None):
                query_params['startTime'] = params['start_time']
        if 'time_zone' in params:
            if (params['time_zone'] is not None):
                query_params['timeZone'] = params['time_zone']
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'text/json', 'text/html', 'application/x-ms-application'])
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type([])

        return self.api_client.call_api('/streams/{webId}/plot', 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='PITimedValues',
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)

    def get_recorded(self, web_id, associations=None, boundary_type=None, desired_units=None, end_time=None, filter_expression=None, include_filtered_values=None, max_count=None, selected_fields=None, start_time=None, time_zone=None, **kwargs):
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.get_recorded_with_http_info(web_id, associations, boundary_type, desired_units, end_time, filter_expression, include_filtered_values, max_count, selected_fields, start_time, time_zone, **kwargs)
        else:
            (data) = self.get_recorded_with_http_info(web_id, associations, boundary_type, desired_units, end_time, filter_expression, include_filtered_values, max_count, selected_fields, start_time, time_zone, **kwargs)
            return data

    def get_recorded_with_http_info(self, web_id, associations=None, boundary_type=None, desired_units=None, end_time=None, filter_expression=None, include_filtered_values=None, max_count=None, selected_fields=None, start_time=None, time_zone=None, **kwargs):
        all_params = ['web_id', 'associations', 'boundary_type', 'desired_units', 'end_time', 'filter_expression', 'include_filtered_values', 'max_count', 'selected_fields', 'start_time', 'time_zone']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_recorded_with_http_info" % key
                )
            params[key] = val
        del params['kwargs']

        if ('web_id' not in params) or (params['web_id'] is None):
            raise ValueError("Missing the required parameter `web_id` when calling `get_recorded_with_http_info`")

        collection_formats = {}
        query_params = {}
        path_params = {}
        header_params = {}
        form_params = []
        local_var_files = {}
        body_params = None
        if 'web_id' in params:
            if (params['web_id'] is not None):
                path_params['webId'] = params['web_id']
        if 'associations' in params:
            if (params['associations'] is not None):
                query_params['associations'] = params['associations']
        if 'boundary_type' in params:
            if (params['boundary_type'] is not None):
                query_params['boundaryType'] = params['boundary_type']
        if 'desired_units' in params:
            if (params['desired_units'] is not None):
                query_params['desiredUnits'] = params['desired_units']
        if 'end_time' in params:
            if (params['end_time'] is not None):
                query_params['endTime'] = params['end_time']
        if 'filter_expression' in params:
            if (params['filter_expression'] is not None):
                query_params['filterExpression'] = params['filter_expression']
        if 'include_filtered_values' in params:
            if (params['include_filtered_values'] is not None):
                query_params['includeFilteredValues'] = params['include_filtered_values']
        if 'max_count' in params:
            if (params['max_count'] is not None):
                query_params['maxCount'] = params['max_count']
        if 'selected_fields' in params:
            if (params['selected_fields'] is not None):
                query_params['selectedFields'] = params['selected_fields']
        if 'start_time' in params:
            if (params['start_time'] is not None):
                query_params['startTime'] = params['start_time']
        if 'time_zone' in params:
            if (params['time_zone'] is not None):
                query_params['timeZone'] = params['time_zone']
        header_params['Accept'] = self.api_client.\
            select_header_accept(['application/json', 'text/json', 'text/html', 'application/x-ms-application'])
        header_params['Content-Type'] = self.api_client.\
            select_header_content_type([])

        return self.api_client.call_api('/streams/{webId}/recorded', 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='PIExtendedTimedValues',
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
    def update_values(self, web_id, values, buffer_option=None, update_option=None, **kwargs):
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.update_values_with_http_info(web_id, values, buffer_option, update_option, **kwargs)
        else:
            data = self.update_values_with_http_info(web_id, values, buffer_option, update_option, **kwargs)
            return data

    def update_values_with_http_info(self, web_id, values, buffer_option=None, update_option=None, **kwargs):
        all_params = ['web_id', 'values', 'buffer_option', 'update_option']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method update_values_with_http_info" % key
                )
            params[key] = val
        del params['kwargs']

        if ('web_id' not in params) or (params['web_id'] is None):
            raise ValueError("Missing the required parameter `web_id` when calling `update_values_with_http_info`")
        if ('values' not in params) or (params['values'] is None):
            raise ValueError("Missing the required parameter `values` when calling `update_values_with_http_info`")

        collection_formats = {}

        query_params = {}
        path_params = {}
        header_params = {}
        form_params = []
        local_var_files = {}
        body_params = None

        if 'web_id' in params:
            if params['web_id'] is not None:
                path_params['webId'] = params['web_id']
        if 'values' in params:
            body_params = params['values']
        if 'buffer_option' in params:
            if params['buffer_option'] is not None:
                query_params['bufferOption'] = params['buffer_option']
        if 'update_option' in params:
            if params['update_option'] is not None:
                query_params['updateOption'] = params['update_option']

        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'text/json', 'text/html', 'application/x-ms-application'])
        header_params['Content-Type'] = self.api_client.select_header_content_type([])

        return self.api_client.call_api(
            '/streams/{webId}/recorded', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='PIItemsSubstatus',
            callback=params.get('callback'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
    def get_recorded_at_time(self, web_id, time, associations=None, desired_units=None, retrieval_mode=None, selected_fields=None, time_zone=None, **kwargs):
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.get_recorded_at_time_with_http_info(web_id, time, associations, desired_units, retrieval_mode, selected_fields, time_zone, **kwargs)
        else:
            data = self.get_recorded_at_time_with_http_info(web_id, time, associations, desired_units, retrieval_mode, selected_fields, time_zone, **kwargs)
            return data

    def get_recorded_at_time_with_http_info(self, web_id, time, associations=None, desired_units=None, retrieval_mode=None, selected_fields=None, time_zone=None, **kwargs):
        all_params = ['web_id', 'time', 'associations', 'desired_units', 'retrieval_mode', 'selected_fields', 'time_zone']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_recorded_at_time_with_http_info" % key
                )
            params[key] = val
        del params['kwargs']

        if ('web_id' not in params) or (params['web_id'] is None):
            raise ValueError("Missing the required parameter `web_id` when calling `get_recorded_at_time_with_http_info`")
        if ('time' not in params) or (params['time'] is None):
            raise ValueError("Missing the required parameter `time` when calling `get_recorded_at_time_with_http_info`")

        collection_formats = {}

        query_params = {}
        path_params = {}
        header_params = {}
        form_params = []
        local_var_files = {}
        body_params = None

        if 'web_id' in params:
            if params['web_id'] is not None:
                path_params['webId'] = params['web_id']
        if 'time' in params:
            if params['time'] is not None:
                query_params['time'] = params['time']
        if 'associations' in params:
            if params['associations'] is not None:
                query_params['associations'] = params['associations']
        if 'desired_units' in params:
            if params['desired_units'] is not None:
                query_params['desiredUnits'] = params['desired_units']
        if 'retrieval_mode' in params:
            if params['retrieval_mode'] is not None:
                query_params['retrievalMode'] = params['retrieval_mode']
        if 'selected_fields' in params:
            if params['selected_fields'] is not None:
                query_params['selectedFields'] = params['selected_fields']
        if 'time_zone' in params:
            if params['time_zone'] is not None:
                query_params['timeZone'] = params['time_zone']

        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'text/json', 'text/html', 'application/x-ms-application'])
        header_params['Content-Type'] = self.api_client.select_header_content_type([])

        return self.api_client.call_api(
            '/streams/{webId}/recordedattime', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='PIExtendedTimedValue',
            callback=params.get('callback'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
    def get_recorded_at_times(self, web_id, associations=None, desired_units=None, retrieval_mode=None, selected_fields=None, sort_order=None, time=None, time_zone=None, **kwargs):
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.get_recorded_at_times_with_http_info(web_id, associations, desired_units, retrieval_mode, selected_fields, sort_order, time, time_zone, **kwargs)
        else:
            data = self.get_recorded_at_times_with_http_info(web_id, associations, desired_units, retrieval_mode, selected_fields, sort_order, time, time_zone, **kwargs)
            return data

    def get_recorded_at_times_with_http_info(self, web_id, associations=None, desired_units=None, retrieval_mode=None, selected_fields=None, sort_order=None, time=None, time_zone=None, **kwargs):
        all_params = ['web_id', 'associations', 'desired_units', 'retrieval_mode', 'selected_fields', 'sort_order', 'time', 'time_zone']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_recorded_at_times_with_http_info" % key
                )
            params[key] = val
        del params['kwargs']

        if ('web_id' not in params) or (params['web_id'] is None):
            raise ValueError("Missing the required parameter `web_id` when calling `get_recorded_at_times_with_http_info`")

        collection_formats = {}

        query_params = {}
        path_params = {}
        header_params = {}
        form_params = []
        local_var_files = {}
        body_params = None

        if 'web_id' in params:
            if params['web_id'] is not None:
                path_params['webId'] = params['web_id']
        if 'associations' in params:
            if params['associations'] is not None:
                query_params['associations'] = params['associations']
        if 'desired_units' in params:
            if params['desired_units'] is not None:
                query_params['desiredUnits'] = params['desired_units']
        if 'retrieval_mode' in params:
            if params['retrieval_mode'] is not None:
                query_params['retrievalMode'] = params['retrieval_mode']
        if 'selected_fields' in params:
            if params['selected_fields'] is not None:
                query_params['selectedFields'] = params['selected_fields']
        if 'sort_order' in params:
            if params['sort_order'] is not None:
                query_params['sortOrder'] = params['sort_order']
        if 'time' in params:
            if params['time'] is not None:
                query_params['time'] = params['time']
                collection_formats['time'] = 'multi'
        if 'time_zone' in params:
            if params['time_zone'] is not None:
                query_params['timeZone'] = params['time_zone']

        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'text/json', 'text/html', 'application/x-ms-application'])
        header_params['Content-Type'] = self.api_client.select_header_content_type([])

        return self.api_client.call_api(
            '/streams/{webId}/recordedattimes', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='PIExtendedTimedValues',
            callback=params.get('callback'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
    def get_summary(self, web_id, calculation_basis=None, end_time=None, filter_expression=None, sample_interval=None, sample_type=None, selected_fields=None, start_time=None, summary_duration=None, summary_type=None, time_type=None, time_zone=None, **kwargs):
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.get_summary_with_http_info(web_id, calculation_basis, end_time, filter_expression, sample_interval, sample_type, selected_fields, start_time, summary_duration, summary_type, time_type, time_zone, **kwargs)
        else:
            data = self.get_summary_with_http_info(web_id, calculation_basis, end_time, filter_expression, sample_interval, sample_type, selected_fields, start_time, summary_duration, summary_type, time_type, time_zone, **kwargs)
            return data

    def get_summary_with_http_info(self, web_id, calculation_basis=None, end_time=None, filter_expression=None, sample_interval=None, sample_type=None, selected_fields=None, start_time=None, summary_duration=None, summary_type=None, time_type=None, time_zone=None, **kwargs):
        all_params = ['web_id', 'calculation_basis', 'end_time', 'filter_expression', 'sample_interval', 'sample_type', 'selected_fields', 'start_time', 'summary_duration', 'summary_type', 'time_type', 'time_zone']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_summary_with_http_info" % key
                )
            params[key] = val
        del params['kwargs']

        if ('web_id' not in params) or (params['web_id'] is None):
            raise ValueError("Missing the required parameter `web_id` when calling `get_summary_with_http_info`")

        collection_formats = {}

        query_params = {}
        path_params = {}
        header_params = {}
        form_params = []
        local_var_files = {}
        body_params = None

        if 'web_id' in params:
            if params['web_id'] is not None:
                path_params['webId'] = params['web_id']
        if 'calculation_basis' in params:
            if params['calculation_basis'] is not None:
                query_params['calculationBasis'] = params['calculation_basis']
        if 'end_time' in params:
            if params['end_time'] is not None:
                query_params['endTime'] = params['end_time']
        if 'filter_expression' in params:
            if params['filter_expression'] is not None:
                query_params['filterExpression'] = params['filter_expression']
        if 'sample_interval' in params:
            if params['sample_interval'] is not None:
                query_params['sampleInterval'] = params['sample_interval']
        if 'sample_type' in params:
            if params['sample_type'] is not None:
                query_params['sampleType'] = params['sample_type']
        if 'selected_fields' in params:
            if params['selected_fields'] is not None:
                query_params['selectedFields'] = params['selected_fields']
        if 'start_time' in params:
            if params['start_time'] is not None:
                query_params['startTime'] = params['start_time']
        if 'summary_duration' in params:
            if params['summary_duration'] is not None:
                query_params['summaryDuration'] = params['summary_duration']
        if 'summary_type' in params:
            if params['summary_type'] is not None:
                query_params['summaryType'] = params['summary_type']
                collection_formats['summaryType'] = 'multi'
        if 'time_type' in params:
            if params['time_type'] is not None:
                query_params['timeType'] = params['time_type']
        if 'time_zone' in params:
            if params['time_zone'] is not None:
                query_params['timeZone'] = params['time_zone']

        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'text/json', 'text/html', 'application/x-ms-application'])
        header_params['Content-Type'] = self.api_client.select_header_content_type([])

        return self.api_client.call_api(
            '/streams/{webId}/summary', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='PIItemsSummaryValue',
            callback=params.get('callback'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
    def register_stream_update(self, web_id, selected_fields=None, web_id_type=None, **kwargs):
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.register_stream_update_with_http_info(web_id, selected_fields, web_id_type, **kwargs)
        else:
            data = self.register_stream_update_with_http_info(web_id, selected_fields, web_id_type, **kwargs)
            return data

    def register_stream_update_with_http_info(self, web_id, selected_fields=None, web_id_type=None, **kwargs):
        all_params = ['web_id', 'selected_fields', 'web_id_type']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method register_stream_update_with_http_info" % key
                )
            params[key] = val
        del params['kwargs']

        if ('web_id' not in params) or (params['web_id'] is None):
            raise ValueError("Missing the required parameter `web_id` when calling `register_stream_update_with_http_info`")

        collection_formats = {}

        query_params = {}
        path_params = {}
        header_params = {}
        form_params = []
        local_var_files = {}
        body_params = None

        if 'web_id' in params:
            if params['web_id'] is not None:
                path_params['webId'] = params['web_id']
        if 'selected_fields' in params:
            if params['selected_fields'] is not None:
                query_params['selectedFields'] = params['selected_fields']
        if 'web_id_type' in params:
            if params['web_id_type'] is not None:
                query_params['webIdType'] = params['web_id_type']

        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'text/json', 'text/html', 'application/x-ms-application'])
        header_params['Content-Type'] = self.api_client.select_header_content_type([])

        return self.api_client.call_api(
            '/streams/{webId}/updates', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='PIStreamUpdatesRegister',
            callback=params.get('callback'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
    def get_value(self, web_id, desired_units=None, selected_fields=None, time=None, time_zone=None, **kwargs):
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.get_value_with_http_info(web_id, desired_units, selected_fields, time, time_zone, **kwargs)
        else:
            data = self.get_value_with_http_info(web_id, desired_units, selected_fields, time, time_zone, **kwargs)
            return data

    def get_value_with_http_info(self, web_id, desired_units=None, selected_fields=None, time=None, time_zone=None, **kwargs):
        all_params = ['web_id', 'desired_units', 'selected_fields', 'time', 'time_zone']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_value_with_http_info" % key
                )
            params[key] = val
        del params['kwargs']

        if ('web_id' not in params) or (params['web_id'] is None):
            raise ValueError("Missing the required parameter `web_id` when calling `get_value_with_http_info`")

        collection_formats = {}

        query_params = {}
        path_params = {}
        header_params = {}
        form_params = []
        local_var_files = {}
        body_params = None

        if 'web_id' in params:
            if params['web_id'] is not None:
                path_params['webId'] = params['web_id']
        if 'desired_units' in params:
            if params['desired_units'] is not None:
                query_params['desiredUnits'] = params['desired_units']
        if 'selected_fields' in params:
            if params['selected_fields'] is not None:
                query_params['selectedFields'] = params['selected_fields']
        if 'time' in params:
            if params['time'] is not None:
                query_params['time'] = params['time']
        if 'time_zone' in params:
            if params['time_zone'] is not None:
                query_params['timeZone'] = params['time_zone']

        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'text/json', 'text/html', 'application/x-ms-application'])
        header_params['Content-Type'] = self.api_client.select_header_content_type([])

        return self.api_client.call_api(
            '/streams/{webId}/value', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='PITimedValue',
            callback=params.get('callback'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
    def update_value(self, web_id, value, buffer_option=None, update_option=None, web_id_type=None, **kwargs):
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.update_value_with_http_info(web_id, value, buffer_option, update_option, web_id_type, **kwargs)
        else:
            data = self.update_value_with_http_info(web_id, value, buffer_option, update_option, web_id_type, **kwargs)
            return data

    def update_value_with_http_info(self, web_id, value, buffer_option=None, update_option=None, web_id_type=None, **kwargs):
        all_params = ['web_id', 'value', 'buffer_option', 'update_option', 'web_id_type']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method update_value_with_http_info" % key
                )
            params[key] = val
        del params['kwargs']

        if ('web_id' not in params) or (params['web_id'] is None):
            raise ValueError("Missing the required parameter `web_id` when calling `update_value_with_http_info`")
        if ('value' not in params) or (params['value'] is None):
            raise ValueError("Missing the required parameter `value` when calling `update_value_with_http_info`")

        collection_formats = {}

        query_params = {}
        path_params = {}
        header_params = {}
        form_params = []
        local_var_files = {}
        body_params = None

        if 'web_id' in params:
            if params['web_id'] is not None:
                path_params['webId'] = params['web_id']
        if 'value' in params:
            body_params = params['value']
        if 'buffer_option' in params:
            if params['buffer_option'] is not None:
                query_params['bufferOption'] = params['buffer_option']
        if 'update_option' in params:
            if params['update_option'] is not None:
                query_params['updateOption'] = params['update_option']
        if 'web_id_type' in params:
            if params['web_id_type'] is not None:
                query_params['webIdType'] = params['web_id_type']

        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'text/json', 'text/html', 'application/x-ms-application'])
        header_params['Content-Type'] = self.api_client.select_header_content_type([])

        return self.api_client.call_api(
            '/streams/{webId}/value', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type=None,
            callback=params.get('callback'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
    def retrieve_stream_update(self, marker, desired_units=None, selected_fields=None, web_id_type=None, **kwargs):
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.retrieve_stream_update_with_http_info(marker, desired_units, selected_fields, web_id_type, **kwargs)
        else:
            data = self.retrieve_stream_update_with_http_info(marker, desired_units, selected_fields, web_id_type, **kwargs)
            return data

    def retrieve_stream_update_with_http_info(self, marker, desired_units=None, selected_fields=None, web_id_type=None, **kwargs):
        all_params = ['marker', 'desired_units', 'selected_fields', 'web_id_type']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method retrieve_stream_update_with_http_info" % key
                )
            params[key] = val
        del params['kwargs']

        if ('marker' not in params) or (params['marker'] is None):
            raise ValueError("Missing the required parameter `marker` when calling `retrieve_stream_update_with_http_info`")

        collection_formats = {}

        query_params = {}
        path_params = {}
        header_params = {}
        form_params = []
        local_var_files = {}
        body_params = None

        if 'marker' in params:
            if params['marker'] is not None:
                path_params['marker'] = params['marker']
        if 'desired_units' in params:
            if params['desired_units'] is not None:
                query_params['desiredUnits'] = params['desired_units']
        if 'selected_fields' in params:
            if params['selected_fields'] is not None:
                query_params['selectedFields'] = params['selected_fields']
        if 'web_id_type' in params:
            if params['web_id_type'] is not None:
                query_params['webIdType'] = params['web_id_type']

        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json', 'text/json', 'text/html', 'application/x-ms-application'])
        header_params['Content-Type'] = self.api_client.select_header_content_type([])

        return self.api_client.call_api(
            '/streams/updates/{marker}', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='PIStreamUpdatesRetrieve',
            callback=params.get('callback'),
            _return_http_data_only=params.get('_return_http_data_only'),
            _preload_content=params.get('_preload_content', True),
            _request_timeout=params.get('_request_timeout'),
            collection_formats=collection_formats)
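Every method above follows the same swagger-codegen pattern: whitelist the accepted keyword arguments, reject anything unexpected, then validate required parameters before building the request. A minimal standalone sketch of that pattern (the function name and parameter list here are illustrative, not part of the generated client):

```python
def demo_call(web_id, **kwargs):
    # Whitelist of accepted keyword arguments, mirroring the generated pattern.
    all_params = ['selected_fields', 'callback', '_return_http_data_only']
    for key in kwargs:
        if key not in all_params:
            raise TypeError(
                "Got an unexpected keyword argument '%s'"
                " to method demo_call" % key
            )
    # Required-parameter check, as in the *_with_http_info methods.
    if web_id is None:
        raise ValueError("Missing the required parameter `web_id` when calling `demo_call`")
    # Substitute the path parameter the way call_api fills in '{webId}'.
    return '/streams/{webId}/value'.replace('{webId}', web_id)
```

The wrapper/`_with_http_info` split lets callers opt into the full HTTP response (headers, status) while the plain method returns only the deserialized body.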
# markdown/deps.bzl (cgrindel/bazel-starlib, Apache-2.0)

"""Dependency Functions for markdown"""
load("//markdown/private:github_markdown_toc_go_repositories.bzl", "github_markdown_toc_go_repositories")
def bazel_starlib_markdown_dependencies():
    # Deps for github-markdown-toc.go
    github_markdown_toc_go_repositories()
# signal_separation/__init__.py (MieszkoP/signal_separation, MIT)

from signal_separation._file_downloader import *
from signal_separation._genetic_alg import *
from signal_separation._neural_network import *
from signal_separation._signal_creator import *
from signal_separation._evaluation import *
# optimization/second_sdEta_mjj_optimization/lumi_and_kin_plots/four_cuts_lum2000/Output/Histos/MadAnalysis5job_0/selection_9.py (sheride/axion_pheno, MIT)

def selection_9():
    # Library import
    import numpy
    import matplotlib
    import matplotlib.pyplot as plt
    import matplotlib.gridspec as gridspec

    # Library version
    matplotlib_version = matplotlib.__version__
    numpy_version = numpy.__version__

    # Histo binning
    xBinning = numpy.linspace(-15.0, 15.0, 101, endpoint=True)

    # Creating data sequence: middle of each bin
    xData = numpy.array([-14.85,-14.55,-14.25,-13.95,-13.65,-13.35,-13.05,-12.75,-12.45,-12.15,-11.85,-11.55,-11.25,-10.95,-10.65,-10.35,-10.05,-9.75,-9.45,-9.15,-8.85,-8.55,-8.25,-7.95,-7.65,-7.35,-7.05,-6.75,-6.45,-6.15,-5.85,-5.55,-5.25,-4.95,-4.65,-4.35,-4.05,-3.75,-3.45,-3.15,-2.85,-2.55,-2.25,-1.95,-1.65,-1.35,-1.05,-0.75,-0.45,-0.15,0.15,0.45,0.75,1.05,1.35,1.65,1.95,2.25,2.55,2.85,3.15,3.45,3.75,4.05,4.35,4.65,4.95,5.25,5.55,5.85,6.15,6.45,6.75,7.05,7.35,7.65,7.95,8.25,8.55,8.85,9.15,9.45,9.75,10.05,10.35,10.65,10.95,11.25,11.55,11.85,12.15,12.45,12.75,13.05,13.35,13.65,13.95,14.25,14.55,14.85])
    # Creating weights for histo: y10_sdETA_0
    y10_sdETA_0_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.204704212375,2.25174593613,8.80227993214,21.6986401118,36.6420464152,53.6324908423,82.905204012,114.020235493,170.518583709,232.953386483,307.465718188,414.321220248,521.995721557,650.959403353,804.692062447,1026.59145906,1284.72342247,1594.85033822,2013.26615471,2484.28972299,3234.94103497,4156.72419009,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,4114.35022893,3217.13105129,2523.18368734,1993.2045731,1604.67612921,1269.16603673,1008.16807595,827.619041433,645.227608607,510.122932439,410.022424188,304.19032119,222.103996427,167.857426148,116.271973429,84.7475423233,48.7195953453,35.2091277285,17.1951522395,8.18816649501,3.88938043513,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
    # Creating weights for histo: y10_sdETA_1
    y10_sdETA_1_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.608524689202,0.0,0.0,0.606569232146,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.60876676669,0.606204111962,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
    # Creating weights for histo: y10_sdETA_2
    y10_sdETA_2_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.50736881326,0.501704653628,0.502286650984,1.50712750232,1.50450179966,1.00388531787,3.51277241183,5.0203335714,5.01970550183,4.01664121963,2.50888792037,0.502662459713,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,3.01140968424,2.50877635538,3.0151533094,4.01494088657,1.00653271373,3.01181255781,5.51779978848,3.01261004087,2.00904929447,3.01094069808,1.00558647734,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
    # Creating weights for histo: y10_sdETA_3
    y10_sdETA_3_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.10026189065,1.10045140987,3.84930204752,6.87611909186,7.69970911604,9.62207798707,14.2923720407,14.8556503999,16.2327340661,23.3839708855,19.2492463834,18.6983852913,14.8475963406,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,15.3950245549,16.2282083551,21.179291494,22.8220352081,16.503096547,16.7778506739,11.824029358,12.3782745768,7.97835722319,4.12516728929,3.02472481708,1.09851681061,0.826740146537,0.275710863831,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
    # Creating weights for histo: y10_sdETA_4
    y10_sdETA_4_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0985701899697,0.296185732578,0.493794400971,1.18454993118,2.12262373267,3.69948679454,6.21838404826,9.52409248191,13.0754208746,16.2833531003,19.0463010678,20.0821913606,21.2150742407,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,19.5893902137,19.834445004,18.109267217,14.3079558422,12.8800327986,8.93105002911,6.71042362012,3.94722914285,2.22083884833,1.43056631006,0.345254810826,0.147886359218,0.0494219279649,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
    # Creating weights for histo: y10_sdETA_5
    y10_sdETA_5_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0378514998332,0.0755934089189,0.214357141265,0.516980647045,1.13450510324,2.25632904858,3.80652398013,5.28123173256,7.61313598495,9.65462675727,10.3478832613,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,10.6881327236,9.45177584426,7.38545810634,5.24336510849,3.40286672774,2.40734741915,1.28528580713,0.655535520793,0.415851873779,0.0630007958634,0.0126041182835,0.0378146494942,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y10_sdETA_6
y10_sdETA_6_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0143411554736,0.100065596626,0.200460397999,0.47301298656,0.901711095362,1.68852393584,3.10671159184,4.19380604765,5.64104279847,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,4.8670482491,3.95155064278,2.41875214873,1.63223322321,1.20249856433,0.443279739695,0.143197066797,0.114620485258,0.0286273868392,0.014258439407,0.0143157628208,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y10_sdETA_7
y10_sdETA_7_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.00215997406046,0.00431397601044,0.010799579031,0.0259288501526,0.0594129020269,0.105840026063,0.204042204777,0.377854801663,0.65747928235,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.579927970181,0.338987677991,0.198650206247,0.11437643506,0.0290956612717,0.0205236375417,0.00755819615887,0.00215868324641,0.00432027082444,0.00107991829857,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y10_sdETA_8
y10_sdETA_8_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.00284489105365,0.0,0.00994095757125,0.0113629027033,0.0141164026728,0.017028113499,0.0780165872664,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0921160626396,0.0339963949335,0.0170338524475,0.00845509703485,0.00425738139803,0.00141082062962,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y10_sdETA_9
y10_sdETA_9_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y10_sdETA_10
y10_sdETA_10_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,52.7314194359,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,52.64790836,0.0,0.0,52.6056914349,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y10_sdETA_11
y10_sdETA_11_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,11.5180092207,0.0,0.0,11.5010091185,57.6344196748,46.0519856772,103.676721749,69.1075673012,103.531217936,92.2040736739,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,46.1034778492,126.720679208,138.132209149,46.0055658683,11.533658614,46.0776164822,23.0208820484,0.0,11.5419281031,11.5309840952,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y10_sdETA_12
y10_sdETA_12_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.38117140177,6.92129884493,2.76644767576,6.91446893152,26.3187472402,35.98510388,45.7183748221,78.9031949697,88.60297991,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,98.3118239582,89.9820800266,59.5519980326,29.0876606858,15.221767048,15.2301529738,8.30849445729,4.15073706475,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y10_sdETA_13
y10_sdETA_13_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.503561896398,0.0,1.00812424448,1.0097770987,3.52699425988,6.55498555153,5.5475061235,18.6474302277,21.1769647485,34.7714160362,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,28.2438222874,26.2134849383,14.1134723929,7.05749328683,5.03883849459,3.52881188367,1.00915048883,1.00622711863,1.00524153565,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y10_sdETA_14
y10_sdETA_14_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.141599040379,0.0,0.141600752492,0.566197133182,0.423849745501,1.55605375571,2.97105094155,6.07935563826,9.19626514586,16.409622513,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,14.4291270824,10.1862935568,5.23233098913,3.3952740816,1.69781902495,0.708094754535,0.141551466721,0.282364377887,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y10_sdETA_15
y10_sdETA_15_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.15271428314,0.151729115483,0.455340992656,0.532849239786,1.21931442957,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.21780987864,0.915017526691,0.229729562326,0.227621122711,0.076910740671,0.0,0.0761303361315,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating weights for histo: y10_sdETA_16
y10_sdETA_16_weights = numpy.array([0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.00902010263406,0.0180103324069,0.0,0.0271034569489,0.0451848632429,0.12637290531,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.144445514712,0.0541432759836,0.0271105599103,0.0270862481481,0.0090377522419,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])
# Creating a new Canvas
fig = plt.figure(figsize=(12,6),dpi=80)
frame = gridspec.GridSpec(1,1,right=0.7)
pad = fig.add_subplot(frame[0])
# Creating a new Stack
pad.hist(x=xData, bins=xBinning, weights=y10_sdETA_0_weights+y10_sdETA_1_weights+y10_sdETA_2_weights+y10_sdETA_3_weights+y10_sdETA_4_weights+y10_sdETA_5_weights+y10_sdETA_6_weights+y10_sdETA_7_weights+y10_sdETA_8_weights+y10_sdETA_9_weights+y10_sdETA_10_weights+y10_sdETA_11_weights+y10_sdETA_12_weights+y10_sdETA_13_weights+y10_sdETA_14_weights+y10_sdETA_15_weights+y10_sdETA_16_weights,\
label=r"$bg\_dip\_1600\_inf$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#e5e5e5", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y10_sdETA_0_weights+y10_sdETA_1_weights+y10_sdETA_2_weights+y10_sdETA_3_weights+y10_sdETA_4_weights+y10_sdETA_5_weights+y10_sdETA_6_weights+y10_sdETA_7_weights+y10_sdETA_8_weights+y10_sdETA_9_weights+y10_sdETA_10_weights+y10_sdETA_11_weights+y10_sdETA_12_weights+y10_sdETA_13_weights+y10_sdETA_14_weights+y10_sdETA_15_weights,\
label=r"$bg\_dip\_1200\_1600$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#f2f2f2", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y10_sdETA_0_weights+y10_sdETA_1_weights+y10_sdETA_2_weights+y10_sdETA_3_weights+y10_sdETA_4_weights+y10_sdETA_5_weights+y10_sdETA_6_weights+y10_sdETA_7_weights+y10_sdETA_8_weights+y10_sdETA_9_weights+y10_sdETA_10_weights+y10_sdETA_11_weights+y10_sdETA_12_weights+y10_sdETA_13_weights+y10_sdETA_14_weights,\
label=r"$bg\_dip\_800\_1200$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#ccc6aa", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y10_sdETA_0_weights+y10_sdETA_1_weights+y10_sdETA_2_weights+y10_sdETA_3_weights+y10_sdETA_4_weights+y10_sdETA_5_weights+y10_sdETA_6_weights+y10_sdETA_7_weights+y10_sdETA_8_weights+y10_sdETA_9_weights+y10_sdETA_10_weights+y10_sdETA_11_weights+y10_sdETA_12_weights+y10_sdETA_13_weights,\
label=r"$bg\_dip\_600\_800$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#ccc6aa", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y10_sdETA_0_weights+y10_sdETA_1_weights+y10_sdETA_2_weights+y10_sdETA_3_weights+y10_sdETA_4_weights+y10_sdETA_5_weights+y10_sdETA_6_weights+y10_sdETA_7_weights+y10_sdETA_8_weights+y10_sdETA_9_weights+y10_sdETA_10_weights+y10_sdETA_11_weights+y10_sdETA_12_weights,\
label=r"$bg\_dip\_400\_600$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#c1bfa8", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y10_sdETA_0_weights+y10_sdETA_1_weights+y10_sdETA_2_weights+y10_sdETA_3_weights+y10_sdETA_4_weights+y10_sdETA_5_weights+y10_sdETA_6_weights+y10_sdETA_7_weights+y10_sdETA_8_weights+y10_sdETA_9_weights+y10_sdETA_10_weights+y10_sdETA_11_weights,\
label=r"$bg\_dip\_200\_400$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#bab5a3", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y10_sdETA_0_weights+y10_sdETA_1_weights+y10_sdETA_2_weights+y10_sdETA_3_weights+y10_sdETA_4_weights+y10_sdETA_5_weights+y10_sdETA_6_weights+y10_sdETA_7_weights+y10_sdETA_8_weights+y10_sdETA_9_weights+y10_sdETA_10_weights,\
label=r"$bg\_dip\_100\_200$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#b2a596", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y10_sdETA_0_weights+y10_sdETA_1_weights+y10_sdETA_2_weights+y10_sdETA_3_weights+y10_sdETA_4_weights+y10_sdETA_5_weights+y10_sdETA_6_weights+y10_sdETA_7_weights+y10_sdETA_8_weights+y10_sdETA_9_weights,\
label=r"$bg\_dip\_0\_100$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#b7a39b", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y10_sdETA_0_weights+y10_sdETA_1_weights+y10_sdETA_2_weights+y10_sdETA_3_weights+y10_sdETA_4_weights+y10_sdETA_5_weights+y10_sdETA_6_weights+y10_sdETA_7_weights+y10_sdETA_8_weights,\
label=r"$bg\_vbf\_1600\_inf$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#ad998c", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y10_sdETA_0_weights+y10_sdETA_1_weights+y10_sdETA_2_weights+y10_sdETA_3_weights+y10_sdETA_4_weights+y10_sdETA_5_weights+y10_sdETA_6_weights+y10_sdETA_7_weights,\
label=r"$bg\_vbf\_1200\_1600$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#9b8e82", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y10_sdETA_0_weights+y10_sdETA_1_weights+y10_sdETA_2_weights+y10_sdETA_3_weights+y10_sdETA_4_weights+y10_sdETA_5_weights+y10_sdETA_6_weights,\
label=r"$bg\_vbf\_800\_1200$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#876656", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y10_sdETA_0_weights+y10_sdETA_1_weights+y10_sdETA_2_weights+y10_sdETA_3_weights+y10_sdETA_4_weights+y10_sdETA_5_weights,\
label=r"$bg\_vbf\_600\_800$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#afcec6", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y10_sdETA_0_weights+y10_sdETA_1_weights+y10_sdETA_2_weights+y10_sdETA_3_weights+y10_sdETA_4_weights,\
label=r"$bg\_vbf\_400\_600$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#84c1a3", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y10_sdETA_0_weights+y10_sdETA_1_weights+y10_sdETA_2_weights+y10_sdETA_3_weights,\
label=r"$bg\_vbf\_200\_400$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#89a8a0", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y10_sdETA_0_weights+y10_sdETA_1_weights+y10_sdETA_2_weights,\
label=r"$bg\_vbf\_100\_200$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#829e8c", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y10_sdETA_0_weights+y10_sdETA_1_weights,\
label=r"$bg\_vbf\_0\_100$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#adbcc6", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
pad.hist(x=xData, bins=xBinning, weights=y10_sdETA_0_weights,\
label=r"$signal$", histtype="step", rwidth=1.0,\
color=None, edgecolor="#7a8e99", linewidth=1, linestyle="solid",\
bottom=None, cumulative=False, density=False, align="mid", orientation="vertical")
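The seventeen overlaid `step` histograms above emulate a stacked plot: each call passes the cumulative sum of progressively fewer weight arrays, so the outermost curve is the grand total and the innermost is the signal alone. A minimal sketch of that slicing trick (the toy arrays below stand in for the `y10_sdETA_*` weights):

```python
import numpy as np

# Toy stand-ins for the per-process weight arrays used above.
components = [np.array([1.0, 2.0]),    # signal
              np.array([0.5, 0.5]),    # first background
              np.array([0.25, 0.25])]  # second background

# One curve per call: sum of the first k components, k = N .. 1.
stack_levels = [sum(components[:k]) for k in range(len(components), 0, -1)]

print(stack_levels[0])   # total of all components: [1.75 2.75]
print(stack_levels[-1])  # innermost curve, signal only: [1. 2.]
```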
# Axis
plt.rc('text',usetex=False)
plt.xlabel(r"$\Delta\eta ( j_{1} , j_{2} )$ ",\
fontsize=16,color="black")
plt.ylabel(r"$\mathrm{Events}$ $(\mathcal{L}_{\mathrm{int}} = 2000.0\ \mathrm{fb}^{-1})$ ",\
fontsize=16,color="black")
# Boundary of y-axis
ymax=(y10_sdETA_0_weights+y10_sdETA_1_weights+y10_sdETA_2_weights+y10_sdETA_3_weights+y10_sdETA_4_weights+y10_sdETA_5_weights+y10_sdETA_6_weights+y10_sdETA_7_weights+y10_sdETA_8_weights+y10_sdETA_9_weights+y10_sdETA_10_weights+y10_sdETA_11_weights+y10_sdETA_12_weights+y10_sdETA_13_weights+y10_sdETA_14_weights+y10_sdETA_15_weights+y10_sdETA_16_weights).max()*1.1
ymin=0 # linear scale
#ymin=min([x for x in (y10_sdETA_0_weights+y10_sdETA_1_weights+y10_sdETA_2_weights+y10_sdETA_3_weights+y10_sdETA_4_weights+y10_sdETA_5_weights+y10_sdETA_6_weights+y10_sdETA_7_weights+y10_sdETA_8_weights+y10_sdETA_9_weights+y10_sdETA_10_weights+y10_sdETA_11_weights+y10_sdETA_12_weights+y10_sdETA_13_weights+y10_sdETA_14_weights+y10_sdETA_15_weights+y10_sdETA_16_weights) if x])/100. # log scale
plt.gca().set_ylim(ymin,ymax)
# Log/Linear scale for X-axis
plt.gca().set_xscale("linear")
#plt.gca().set_xscale("log", nonpositive="clip")
# Log/Linear scale for Y-axis
plt.gca().set_yscale("linear")
#plt.gca().set_yscale("log", nonpositive="clip")
# Legend
plt.legend(bbox_to_anchor=(1.05,1), loc=2, borderaxespad=0.)
# Saving the image
plt.savefig('../../HTML/MadAnalysis5job_0/selection_9.png')
plt.savefig('../../PDF/MadAnalysis5job_0/selection_9.png')
plt.savefig('../../DVI/MadAnalysis5job_0/selection_9.eps')
# Running!
if __name__ == '__main__':
    selection_9()
# Source: unittests/tools/test_github_vulnerability_parser.py
# Repo: mtcolman/django-DefectDojo @ 76175aca446e077884bdb5e1d8e2a671a0840775 (BSD-3-Clause license)
from ..dojo_test_case import DojoTestCase
from dojo.models import Test
from dojo.tools.github_vulnerability.parser import GithubVulnerabilityParser
class TestGithubVulnerabilityParser(DojoTestCase):
def test_parse_file_with_no_vuln_has_no_findings(self):
"""sample with zero vulnerability"""
testfile = open("unittests/scans/github_vulnerability/github-0-vuln.json")
parser = GithubVulnerabilityParser()
findings = parser.get_findings(testfile, Test())
self.assertEqual(0, len(findings))
def test_parse_file_with_one_vuln_has_one_findings(self):
"""sample with one vulnerability"""
testfile = open("unittests/scans/github_vulnerability/github-1-vuln.json")
parser = GithubVulnerabilityParser()
findings = parser.get_findings(testfile, Test())
self.assertEqual(1, len(findings))
for finding in findings:
finding.clean()
with self.subTest(i=0):
finding = findings[0]
self.assertEqual(finding.title, "Critical severity vulnerability that affects package")
self.assertEqual(
finding.description,
"This is a sample description for sample description from Github API.",
)
self.assertEqual(finding.severity, "Critical")
self.assertIsNone(finding.cve)
self.assertEqual(finding.component_name, "package")
self.assertEqual(finding.unique_id_from_tool, "aabbccddeeff1122334401")
def test_parse_file_with_multiple_vuln_has_multiple_findings(self):
"""sample with five vulnerability"""
testfile = open("unittests/scans/github_vulnerability/github-5-vuln.json")
parser = GithubVulnerabilityParser()
findings = parser.get_findings(testfile, Test())
self.assertEqual(5, len(findings))
def test_parse_file_issue2984(self):
testfile = open("unittests/scans/github_vulnerability/github_issue2984.json")
parser = GithubVulnerabilityParser()
findings = parser.get_findings(testfile, Test())
self.assertEqual(4, len(findings))
for finding in findings:
finding.clean()
with self.subTest(i=0):
finding = findings[0]
self.assertEqual(finding.title, "XXXXXXXXXXXXXXX")
self.assertEqual(finding.severity, "Medium")
self.assertIsNone(finding.cve)
self.assertEqual(finding.unique_id_from_tool, "xxxxxxxxx")
with self.subTest(i=1):
finding = findings[1]
self.assertEqual(finding.title, "AMSVNASCMASNCADNNJSADC")
self.assertEqual(finding.severity, "Medium")
self.assertIsNone(finding.cve)
self.assertEqual(finding.unique_id_from_tool, "AFDSFSDAFSDASFDAFSDASFD=")
with self.subTest(i=3):
finding = findings[3]
self.assertEqual(finding.title, "SDKPKÁSMNMKSDANJDOPASJOKNDOSAJ")
self.assertEqual(finding.severity, "Medium")
self.assertIsNone(finding.cve)
self.assertEqual(finding.unique_id_from_tool, "DASFMMFKLNKDSAKFSDLANJKKFDSNJSAKDFNJKDFS=")
def test_parse_file_search(self):
testfile = open("unittests/scans/github_vulnerability/github_search.json")
parser = GithubVulnerabilityParser()
findings = parser.get_findings(testfile, Test())
self.assertEqual(2, len(findings))
for finding in findings:
finding.clean()
with self.subTest(i=0):
finding = findings[0]
self.assertEqual(finding.title, "Deserialization of Untrusted Data in Log4j")
self.assertEqual(finding.severity, "Critical")
self.assertEqual(finding.cve, "CVE-2019-17571")
self.assertEqual(finding.component_name, "log4j:log4j")
self.assertEqual(finding.unique_id_from_tool, "MDI4OlJlcG9zaXRvcnlWdWxuZXJhYmlsaXR5QWxlcnQyMDg2Nzc5NzY=")
with self.subTest(i=1):
finding = findings[1]
self.assertEqual(finding.title, "Deserialization of Untrusted Data in Log4j")
self.assertEqual(finding.severity, "Critical")
self.assertEqual(finding.cve, "CVE-2019-17571")
self.assertEqual(finding.component_name, "log4j:log4j")
self.assertEqual(finding.unique_id_from_tool, "MDI4OlJlcG9zaXRvcnlWdWxuZXJhYmlsaXR5QWxlcnQ1NTE5NTI2OTM=")
def test_parse_file_search2(self):
"""Search result with more data/attributes"""
testfile = open("unittests/scans/github_vulnerability/github_search2.json")
parser = GithubVulnerabilityParser()
findings = parser.get_findings(testfile, Test())
self.assertEqual(2, len(findings))
for finding in findings:
finding.clean()
with self.subTest(i=0):
finding = findings[0]
self.assertEqual(finding.title, "Deserialization of Untrusted Data in Log4j")
self.assertEqual(finding.severity, "Critical")
self.assertEqual(finding.cve, "CVE-2019-17571")
self.assertEqual(finding.component_name, "log4j:log4j")
self.assertEqual(finding.unique_id_from_tool, "MDI4OlJlcG9zaXRvcnlWdWxuZXJhYmlsaXR5QWxlcnQyMDg2Nzc5NzY=")
with self.subTest(i=1):
finding = findings[1]
self.assertEqual(finding.title, "Deserialization of Untrusted Data in Log4j")
self.assertEqual(finding.severity, "Critical")
self.assertEqual(finding.cve, "CVE-2019-17571")
self.assertEqual(finding.component_name, "log4j:log4j")
self.assertEqual(finding.unique_id_from_tool, "MDI4OlJlcG9zaXRvcnlWdWxuZXJhYmlsaXR5QWxlcnQ1NTE5NTI2OTM=")
def test_parse_file_search3(self):
"""Search result with more data/attributes"""
testfile = open("unittests/scans/github_vulnerability/github_search3.json")
parser = GithubVulnerabilityParser()
findings = parser.get_findings(testfile, Test())
self.assertEqual(2, len(findings))
for finding in findings:
finding.clean()
with self.subTest(i=0):
finding = findings[0]
self.assertEqual(finding.title, "Deserialization of Untrusted Data in Log4j")
self.assertEqual(finding.severity, "Critical")
self.assertEqual(finding.cve, "CVE-2019-17571")
self.assertEqual(finding.component_name, "log4j:log4j")
self.assertEqual(finding.cvssv3, "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
self.assertEqual(finding.file_path, "gogoph-crawler/pom.xml")
self.assertEqual(finding.unique_id_from_tool, "MDI4OlJlcG9zaXRvcnlWdWxuZXJhYmlsaXR5QWxlcnQyMDg2Nzc5NzY=")
with self.subTest(i=1):
finding = findings[1]
self.assertEqual(finding.title, "Deserialization of Untrusted Data in Log4j")
self.assertEqual(finding.severity, "Critical")
self.assertEqual(finding.cve, "CVE-2019-17571")
self.assertEqual(finding.component_name, "log4j:log4j")
self.assertEqual(finding.cvssv3, "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
self.assertEqual(finding.file_path, "gogoph/pom.xml")
self.assertEqual(finding.unique_id_from_tool, "MDI4OlJlcG9zaXRvcnlWdWxuZXJhYmlsaXR5QWxlcnQ1NTE5NTI2OTM=")
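Each test above repeats the same open → parse → assert flow, and the fixture file is never closed. A small helper with a context manager captures the pattern; `StubParser` below is a stand-in for the real `GithubVulnerabilityParser`, used only so the sketch runs on its own:

```python
import json
import os
import tempfile

def run_parser(parser, path, test=None):
    # Open the fixture with a context manager so the handle is closed.
    with open(path) as testfile:
        return parser.get_findings(testfile, test)

class StubParser:
    """Stand-in for GithubVulnerabilityParser: one finding per JSON entry."""
    def get_findings(self, fh, test):
        return json.load(fh)

# Write a throwaway fixture, parse it, clean up.
fd, path = tempfile.mkstemp(suffix=".json")
with os.fdopen(fd, "w") as f:
    json.dump(["finding-1", "finding-2"], f)

findings = run_parser(StubParser(), path)
print(len(findings))  # 2
os.remove(path)
```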
# Source: finrl_meta/env_execution_optimizing/order_execution_qlib/trade/action/action_rule.py
# Repo: eitin-infant/FinRL-Meta @ 4c94011e58425796e7e2e5c1bf848afd65c828d6 (MIT license)
import numpy as np
from gym.spaces import Discrete, Box, Tuple, MultiDiscrete
from .base import Base_Action
class Rule_Dynamic(Base_Action):
    """Splits the remaining position evenly over the remaining steps."""

    def get_space(self):
        """Return the action space: a non-negative scalar multiplier."""
        return Box(0, np.inf, shape=(), dtype=np.float32)

    def get_action(self, action, target, position, max_step_num, t, **kargs):
        """
        :param action: scalar multiplier sampled from the action space
        :param target: total quantity to execute (unused by this rule)
        :param position: quantity still left to execute
        :param max_step_num: total number of trading steps
        :param t: zero-based index of the current step
        :param kargs: extra keyword arguments, ignored
        """
        return position / (max_step_num - (t + 1)) * action
class Rule_Static(Base_Action):
    """Splits the total target evenly over all steps."""

    def get_space(self):
        """Return the action space: a non-negative scalar multiplier."""
        return Box(0, np.inf, shape=(), dtype=np.float32)

    def get_action(self, action, target, position, max_step_num, t, **kargs):
        """
        :param action: scalar multiplier sampled from the action space
        :param target: total quantity to execute
        :param position: quantity still left to execute (unused by this rule)
        :param max_step_num: total number of trading steps
        :param t: zero-based index of the current step (unused by this rule)
        :param kargs: extra keyword arguments, ignored
        """
        return target / max_step_num * action
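Stripped of the gym dependency, the arithmetic of the two rules is one line each. With `action == 1.0` both reduce to even (TWAP-style) slicing: the static rule splits the original target, while the dynamic rule re-splits whatever position remains after each step. A self-contained sketch:

```python
# Minimal re-statement of the two scheduling rules, no gym/numpy needed.
def rule_static(action, target, max_step_num):
    # Same fraction of the original target at every step.
    return target / max_step_num * action

def rule_dynamic(action, position, max_step_num, t):
    # Remaining position spread over the steps left after step t.
    return position / (max_step_num - (t + 1)) * action

print(rule_static(1.0, target=100.0, max_step_num=10))         # 10.0
print(rule_dynamic(1.0, position=90.0, max_step_num=10, t=0))  # 10.0
print(rule_dynamic(1.0, position=40.0, max_step_num=10, t=8))  # 40.0 (final slice)
```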
# Source: codec/gen_py/test_enclose/__init__.py
# Repo: jjhesk/TrustVault @ a0e8a9b3f2ae945f6f9f22eecc6a7fd4456cd206 (MIT license)
"""Generated wrapper for TestEnclose Solidity contract."""
# pylint: disable=too-many-arguments
import json
from typing import ( # pylint: disable=unused-import
Any,
List,
Optional,
Tuple,
Union,
)
import time
from eth_utils import to_checksum_address
from mypy_extensions import TypedDict # pylint: disable=unused-import
from hexbytes import HexBytes
from web3 import Web3
from web3.contract import ContractFunction
from web3.datastructures import AttributeDict
from web3.providers.base import BaseProvider
from web3.exceptions import ContractLogicError
from moody.m.bases import ContractMethod, Validator, ContractBase, Signatures
from moody.m.tx_params import TxParams
from moody.libeb import MiliDoS
from moody import Bolors
# Try to import a custom validator class definition; if there isn't one,
# declare one that we can instantiate for the default argument to the
# constructor for TestEnclose below.
try:
# both mypy and pylint complain about what we're doing here, but this
# works just fine, so their messages have been disabled here.
from . import ( # type: ignore # pylint: disable=import-self
TestEncloseValidator,
)
except ImportError:
class TestEncloseValidator( # type: ignore
Validator
):
"""No-op input validator."""
try:
from .middleware import MIDDLEWARE # type: ignore
except ImportError:
pass
class DefaultAdminRoleMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the DEFAULT_ADMIN_ROLE method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address)
self._underlying_method = contract_function
self.sign = validator.getSignature("DEFAULT_ADMIN_ROLE")
    def block_call(self, debug: bool = False) -> Union[bytes, str]:
        """Execute the underlying contract method via eth_call and return its value."""
        _fn = self._underlying_method()
        returned = _fn.call({
            'from': self._operate
        })
        return returned
def block_send(self, _valeth:int=0) -> Union[bytes, str]:
"""Execute underlying contract method via eth_call.
:param tx_params: transaction parameters
:returns: the return value of the underlying method.
"""
_fn = self._underlying_method()
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("default_admin_role", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: default_admin_role")
message = f"Error {er}: default_admin_role"
self._on_fail("default_admin_role", message)
except ValueError as err:
if "message" in err.args[0]:
message = err.args[0]["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, default_admin_role: {message}")
else:
message = "Error Revert , Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, default_admin_role. Reason: Unknown")
self._on_fail("default_admin_role", message)
def send_transaction(self, tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().transact(tx_params.as_dict())
def build_transaction(self, tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().buildTransaction(tx_params.as_dict())
def estimate_gas(self, tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().estimateGas(tx_params.as_dict())
class AddGovernorMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the addGovernor method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address, validator)
self._underlying_method = contract_function
self.sign = validator.getSignature("addGovernor")
def validate_and_normalize_inputs(self, account: str)->any:
"""Validate the inputs to the addGovernor method."""
self.validator.assert_valid(
method_name='addGovernor',
parameter_name='account',
argument_value=account,
)
account = self.validate_and_checksum_address(account)
return (account)
def block_send(self, account: str,_valeth:int=0) -> None:
"""Execute underlying contract method via eth_call.
:param tx_params: transaction parameters
:returns: the return value of the underlying method.
"""
_fn = self._underlying_method(account)
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("add_governor", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: add_governor")
message = f"Error {er}: add_governor"
self._on_fail("add_governor", message)
except ValueError as err:
if "message" in err.args[0]:
message = err.args[0]["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, add_governor: {message}")
else:
message = "Error Revert , Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, add_governor. Reason: Unknown")
self._on_fail("add_governor", message)
def send_transaction(self, account: str, tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
(account) = self.validate_and_normalize_inputs(account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(account).transact(tx_params.as_dict())
def build_transaction(self, account: str, tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
(account) = self.validate_and_normalize_inputs(account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(account).buildTransaction(tx_params.as_dict())
def estimate_gas(self, account: str, tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
(account) = self.validate_and_normalize_inputs(account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(account).estimateGas(tx_params.as_dict())
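# Hypothetical standalone sketch (not part of the generated wrapper) of the
# transaction-dict assembly block_send performs above: fixed base fields, an
# explicit nonce, and a 'value' entry only when ether is attached. The helper
# name assemble_tx is illustrative.

```python
def assemble_tx(sender: str, gas: int, gas_price: int, nonce: int, valeth: int = 0) -> dict:
    """Build the bare transaction dict the way block_send does before signing."""
    tx = {
        'from': sender,
        'gas': gas,
        'gasPrice': gas_price,
        'nonce': nonce,
    }
    # Ether is attached only when a positive value is supplied
    if valeth > 0:
        tx['value'] = valeth
    return tx
```

# In block_send the same dict then gains a 'data' field from buildTransaction
# before being signed and broadcast.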
class AddWhitelistAdminMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the addWhitelistAdmin method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address, validator)
self._underlying_method = contract_function
self.sign = validator.getSignature("addWhitelistAdmin")
    def validate_and_normalize_inputs(self, account: str) -> str:
"""Validate the inputs to the addWhitelistAdmin method."""
self.validator.assert_valid(
method_name='addWhitelistAdmin',
parameter_name='account',
argument_value=account,
)
account = self.validate_and_checksum_address(account)
        return account
    def block_send(self, account: str, _valeth: int = 0) -> None:
        """Sign and send the underlying contract method via eth_sendRawTransaction.

        :param account: address argument forwarded to addWhitelistAdmin
        :param _valeth: optional ether value in wei to attach to the transaction
        """
_fn = self._underlying_method(account)
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("add_whitelist_admin", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: add_whitelist_admin")
message = f"Error {er}: add_whitelist_admin"
self._on_fail("add_whitelist_admin", message)
except ValueError as err:
if "message" in err.args[0]:
message = err.args[0]["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, add_whitelist_admin: {message}")
else:
message = "Error Revert , Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, add_whitelist_admin. Reason: Unknown")
self._on_fail("add_whitelist_admin", message)
def send_transaction(self, account: str, tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
(account) = self.validate_and_normalize_inputs(account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(account).transact(tx_params.as_dict())
def build_transaction(self, account: str, tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
(account) = self.validate_and_normalize_inputs(account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(account).buildTransaction(tx_params.as_dict())
def estimate_gas(self, account: str, tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
(account) = self.validate_and_normalize_inputs(account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(account).estimateGas(tx_params.as_dict())
class CoinUserVaultMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the coin_user_vault method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address, validator)
self._underlying_method = contract_function
self.sign = validator.getSignature("coin_user_vault")
    def validate_and_normalize_inputs(self, index_0: Union[bytes, str]) -> Union[bytes, str]:
"""Validate the inputs to the coin_user_vault method."""
self.validator.assert_valid(
method_name='coin_user_vault',
parameter_name='index_0',
argument_value=index_0,
)
        return index_0
    def block_call(self, index_0: Union[bytes, str], debug: bool = False) -> int:
        """Read the method's return value via eth_call (no transaction is sent)."""
        _fn = self._underlying_method(index_0)
        returned = _fn.call({
            'from': self._operate
        })
        return int(returned)
    def block_send(self, index_0: Union[bytes, str], _valeth: int = 0) -> None:
        """Sign and send the underlying contract method via eth_sendRawTransaction.

        :param index_0: mapping key (bytes32)
        :param _valeth: optional ether value in wei to attach to the transaction
        """
_fn = self._underlying_method(index_0)
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("coin_user_vault", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: coin_user_vault")
message = f"Error {er}: coin_user_vault"
self._on_fail("coin_user_vault", message)
except ValueError as err:
if "message" in err.args[0]:
message = err.args[0]["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, coin_user_vault: {message}")
else:
message = "Error Revert , Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, coin_user_vault. Reason: Unknown")
self._on_fail("coin_user_vault", message)
def send_transaction(self, index_0: Union[bytes, str], tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
(index_0) = self.validate_and_normalize_inputs(index_0)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(index_0).transact(tx_params.as_dict())
def build_transaction(self, index_0: Union[bytes, str], tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
(index_0) = self.validate_and_normalize_inputs(index_0)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(index_0).buildTransaction(tx_params.as_dict())
def estimate_gas(self, index_0: Union[bytes, str], tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
(index_0) = self.validate_and_normalize_inputs(index_0)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(index_0).estimateGas(tx_params.as_dict())
class CoinUserVault2Method(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the coin_user_vault2 method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address, validator)
self._underlying_method = contract_function
self.sign = validator.getSignature("coin_user_vault2")
    def validate_and_normalize_inputs(self, index_0: str, index_1: str) -> tuple:
"""Validate the inputs to the coin_user_vault2 method."""
self.validator.assert_valid(
method_name='coin_user_vault2',
parameter_name='index_0',
argument_value=index_0,
)
index_0 = self.validate_and_checksum_address(index_0)
self.validator.assert_valid(
method_name='coin_user_vault2',
parameter_name='index_1',
argument_value=index_1,
)
index_1 = self.validate_and_checksum_address(index_1)
return (index_0, index_1)
    def block_call(self, index_0: str, index_1: str, debug: bool = False) -> int:
        """Read the method's return value via eth_call (no transaction is sent)."""
        _fn = self._underlying_method(index_0, index_1)
        returned = _fn.call({
            'from': self._operate
        })
        return int(returned)
    def block_send(self, index_0: str, index_1: str, _valeth: int = 0) -> None:
        """Sign and send the underlying contract method via eth_sendRawTransaction.

        :param index_0: first mapping key (address)
        :param index_1: second mapping key (address)
        :param _valeth: optional ether value in wei to attach to the transaction
        """
_fn = self._underlying_method(index_0, index_1)
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("coin_user_vault2", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: coin_user_vault2")
message = f"Error {er}: coin_user_vault2"
self._on_fail("coin_user_vault2", message)
except ValueError as err:
if "message" in err.args[0]:
message = err.args[0]["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, coin_user_vault2: {message}")
else:
message = "Error Revert , Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, coin_user_vault2. Reason: Unknown")
self._on_fail("coin_user_vault2", message)
def send_transaction(self, index_0: str, index_1: str, tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
(index_0, index_1) = self.validate_and_normalize_inputs(index_0, index_1)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(index_0, index_1).transact(tx_params.as_dict())
def build_transaction(self, index_0: str, index_1: str, tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
(index_0, index_1) = self.validate_and_normalize_inputs(index_0, index_1)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(index_0, index_1).buildTransaction(tx_params.as_dict())
def estimate_gas(self, index_0: str, index_1: str, tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
(index_0, index_1) = self.validate_and_normalize_inputs(index_0, index_1)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(index_0, index_1).estimateGas(tx_params.as_dict())
class DepositErc20Method(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the deposit_erc20 method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address, validator)
self._underlying_method = contract_function
self.sign = validator.getSignature("deposit_erc20")
    def validate_and_normalize_inputs(self, coin: str, amount: int) -> tuple:
"""Validate the inputs to the deposit_erc20 method."""
self.validator.assert_valid(
method_name='deposit_erc20',
parameter_name='coin',
argument_value=coin,
)
coin = self.validate_and_checksum_address(coin)
self.validator.assert_valid(
method_name='deposit_erc20',
parameter_name='amount',
argument_value=amount,
)
# safeguard against fractional inputs
amount = int(amount)
return (coin, amount)
    def block_send(self, coin: str, amount: int, _valeth: int = 0) -> None:
        """Sign and send the underlying contract method via eth_sendRawTransaction.

        :param coin: ERC-20 token contract address
        :param amount: deposit amount in the token's base units
        :param _valeth: optional ether value in wei to attach to the transaction
        """
_fn = self._underlying_method(coin, amount)
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("deposit_erc20", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: deposit_erc20")
message = f"Error {er}: deposit_erc20"
self._on_fail("deposit_erc20", message)
except ValueError as err:
if "message" in err.args[0]:
message = err.args[0]["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, deposit_erc20: {message}")
else:
message = "Error Revert , Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, deposit_erc20. Reason: Unknown")
self._on_fail("deposit_erc20", message)
def send_transaction(self, coin: str, amount: int, tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
(coin, amount) = self.validate_and_normalize_inputs(coin, amount)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(coin, amount).transact(tx_params.as_dict())
def build_transaction(self, coin: str, amount: int, tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
(coin, amount) = self.validate_and_normalize_inputs(coin, amount)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(coin, amount).buildTransaction(tx_params.as_dict())
def estimate_gas(self, coin: str, amount: int, tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
(coin, amount) = self.validate_and_normalize_inputs(coin, amount)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(coin, amount).estimateGas(tx_params.as_dict())
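# Hypothetical sketch (not part of the generated wrapper) of why
# validate_and_normalize_inputs coerces amount with int(): token amounts must
# be whole base units, so human-readable amounts are scaled by the token's
# decimals first and any fractional remainder is truncated. The helper name
# to_base_units is illustrative.

```python
def to_base_units(amount: float, decimals: int = 18) -> int:
    """Convert a human-readable token amount to whole base units, truncating."""
    return int(amount * 10 ** decimals)
```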
class GetRoleAdminMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the getRoleAdmin method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address, validator)
self._underlying_method = contract_function
self.sign = validator.getSignature("getRoleAdmin")
    def validate_and_normalize_inputs(self, role: Union[bytes, str]) -> Union[bytes, str]:
"""Validate the inputs to the getRoleAdmin method."""
self.validator.assert_valid(
method_name='getRoleAdmin',
parameter_name='role',
argument_value=role,
)
        return role
    def block_call(self, role: Union[bytes, str], debug: bool = False) -> Union[bytes, str]:
        """Read the method's return value via eth_call (no transaction is sent)."""
        _fn = self._underlying_method(role)
        returned = _fn.call({
            'from': self._operate
        })
        # Union[bytes, str] is a typing construct, not a callable; return the bytes32 value as-is
        return returned
    def block_send(self, role: Union[bytes, str], _valeth: int = 0) -> None:
        """Sign and send the underlying contract method via eth_sendRawTransaction.

        :param role: role identifier (bytes32)
        :param _valeth: optional ether value in wei to attach to the transaction
        """
_fn = self._underlying_method(role)
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("get_role_admin", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: get_role_admin")
message = f"Error {er}: get_role_admin"
self._on_fail("get_role_admin", message)
except ValueError as err:
if "message" in err.args[0]:
message = err.args[0]["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, get_role_admin: {message}")
else:
message = "Error Revert , Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, get_role_admin. Reason: Unknown")
self._on_fail("get_role_admin", message)
def send_transaction(self, role: Union[bytes, str], tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
(role) = self.validate_and_normalize_inputs(role)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(role).transact(tx_params.as_dict())
def build_transaction(self, role: Union[bytes, str], tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
(role) = self.validate_and_normalize_inputs(role)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(role).buildTransaction(tx_params.as_dict())
def estimate_gas(self, role: Union[bytes, str], tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
(role) = self.validate_and_normalize_inputs(role)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(role).estimateGas(tx_params.as_dict())
class GetRoleMemberMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the getRoleMember method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address, validator)
self._underlying_method = contract_function
self.sign = validator.getSignature("getRoleMember")
    def validate_and_normalize_inputs(self, role: Union[bytes, str], index: int) -> tuple:
"""Validate the inputs to the getRoleMember method."""
self.validator.assert_valid(
method_name='getRoleMember',
parameter_name='role',
argument_value=role,
)
self.validator.assert_valid(
method_name='getRoleMember',
parameter_name='index',
argument_value=index,
)
# safeguard against fractional inputs
index = int(index)
return (role, index)
    def block_call(self, role: Union[bytes, str], index: int, debug: bool = False) -> str:
        """Read the method's return value via eth_call (no transaction is sent)."""
        _fn = self._underlying_method(role, index)
        returned = _fn.call({
            'from': self._operate
        })
        return str(returned)
    def block_send(self, role: Union[bytes, str], index: int, _valeth: int = 0) -> None:
        """Sign and send the underlying contract method via eth_sendRawTransaction.

        :param role: role identifier (bytes32)
        :param index: member index within the role
        :param _valeth: optional ether value in wei to attach to the transaction
        """
_fn = self._underlying_method(role, index)
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("get_role_member", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: get_role_member")
message = f"Error {er}: get_role_member"
self._on_fail("get_role_member", message)
except ValueError as err:
if "message" in err.args[0]:
message = err.args[0]["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, get_role_member: {message}")
else:
message = "Error Revert , Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, get_role_member. Reason: Unknown")
self._on_fail("get_role_member", message)
def send_transaction(self, role: Union[bytes, str], index: int, tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
(role, index) = self.validate_and_normalize_inputs(role, index)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(role, index).transact(tx_params.as_dict())
def build_transaction(self, role: Union[bytes, str], index: int, tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
(role, index) = self.validate_and_normalize_inputs(role, index)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(role, index).buildTransaction(tx_params.as_dict())
def estimate_gas(self, role: Union[bytes, str], index: int, tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
(role, index) = self.validate_and_normalize_inputs(role, index)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(role, index).estimateGas(tx_params.as_dict())
class GetRoleMemberCountMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the getRoleMemberCount method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address, validator)
self._underlying_method = contract_function
self.sign = validator.getSignature("getRoleMemberCount")
    def validate_and_normalize_inputs(self, role: Union[bytes, str]) -> Union[bytes, str]:
"""Validate the inputs to the getRoleMemberCount method."""
self.validator.assert_valid(
method_name='getRoleMemberCount',
parameter_name='role',
argument_value=role,
)
        return role
    def block_call(self, role: Union[bytes, str], debug: bool = False) -> int:
        """Read the method's return value via eth_call (no transaction is sent)."""
        _fn = self._underlying_method(role)
        returned = _fn.call({
            'from': self._operate
        })
        return int(returned)
    def block_send(self, role: Union[bytes, str], _valeth: int = 0) -> None:
        """Sign and send the underlying contract method via eth_sendRawTransaction.

        :param role: role identifier (bytes32)
        :param _valeth: optional ether value in wei to attach to the transaction
        """
_fn = self._underlying_method(role)
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("get_role_member_count", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: get_role_member_count")
message = f"Error {er}: get_role_member_count"
self._on_fail("get_role_member_count", message)
except ValueError as err:
if "message" in err.args[0]:
message = err.args[0]["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, get_role_member_count: {message}")
else:
message = "Error Revert , Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, get_role_member_count. Reason: Unknown")
self._on_fail("get_role_member_count", message)
def send_transaction(self, role: Union[bytes, str], tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
(role) = self.validate_and_normalize_inputs(role)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(role).transact(tx_params.as_dict())
def build_transaction(self, role: Union[bytes, str], tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
(role) = self.validate_and_normalize_inputs(role)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(role).buildTransaction(tx_params.as_dict())
def estimate_gas(self, role: Union[bytes, str], tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
(role) = self.validate_and_normalize_inputs(role)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(role).estimateGas(tx_params.as_dict())
class GovernorMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the governor method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
        super().__init__(elib, contract_address, validator)
self._underlying_method = contract_function
self.sign = validator.getSignature("governor")
    def block_call(self, debug: bool = False) -> Union[bytes, str]:
        """Read the method's return value via eth_call (no transaction is sent)."""
        _fn = self._underlying_method()
        returned = _fn.call({
            'from': self._operate
        })
        # Union[bytes, str] is a typing construct, not a callable; return the bytes32 value as-is
        return returned
    def block_send(self, _valeth: int = 0) -> None:
        """Sign and send the underlying contract method via eth_sendRawTransaction.

        :param _valeth: optional ether value in wei to attach to the transaction
        """
_fn = self._underlying_method()
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("governor", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: governor")
message = f"Error {er}: governor"
self._on_fail("governor", message)
except ValueError as err:
if "message" in err.args[0]:
message = err.args[0]["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, governor: {message}")
else:
message = "Error Revert , Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, governor. Reason: Unknown")
self._on_fail("governor", message)
def send_transaction(self, tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().transact(tx_params.as_dict())
def build_transaction(self, tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().buildTransaction(tx_params.as_dict())
def estimate_gas(self, tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().estimateGas(tx_params.as_dict())
class GrantRoleMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the grantRole method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address, validator)
self._underlying_method = contract_function
self.sign = validator.getSignature("grantRole")
    def validate_and_normalize_inputs(self, role: Union[bytes, str], account: str) -> tuple:
"""Validate the inputs to the grantRole method."""
self.validator.assert_valid(
method_name='grantRole',
parameter_name='role',
argument_value=role,
)
self.validator.assert_valid(
method_name='grantRole',
parameter_name='account',
argument_value=account,
)
account = self.validate_and_checksum_address(account)
return (role, account)
    def block_send(self, role: Union[bytes, str], account: str, _valeth: int = 0) -> None:
        """Sign and send the underlying contract method via eth_sendRawTransaction.

        :param role: role identifier (bytes32)
        :param account: address to be granted the role
        :param _valeth: optional ether value in wei to attach to the transaction
        """
_fn = self._underlying_method(role, account)
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("grant_role", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: grant_role")
message = f"Error {er}: grant_role"
self._on_fail("grant_role", message)
except ValueError as err:
if "message" in err.args[0]:
message = err.args[0]["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, grant_role: {message}")
else:
message = "Error Revert , Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, grant_role. Reason: Unknown")
self._on_fail("grant_role", message)
def send_transaction(self, role: Union[bytes, str], account: str, tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
(role, account) = self.validate_and_normalize_inputs(role, account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(role, account).transact(tx_params.as_dict())
def build_transaction(self, role: Union[bytes, str], account: str, tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
(role, account) = self.validate_and_normalize_inputs(role, account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(role, account).buildTransaction(tx_params.as_dict())
def estimate_gas(self, role: Union[bytes, str], account: str, tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
(role, account) = self.validate_and_normalize_inputs(role, account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(role, account).estimateGas(tx_params.as_dict())
class HasRoleMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the hasRole method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address, validator)
self._underlying_method = contract_function
self.sign = validator.getSignature("hasRole")
def validate_and_normalize_inputs(self, role: Union[bytes, str], account: str)->any:
"""Validate the inputs to the hasRole method."""
self.validator.assert_valid(
method_name='hasRole',
parameter_name='role',
argument_value=role,
)
self.validator.assert_valid(
method_name='hasRole',
parameter_name='account',
argument_value=account,
)
account = self.validate_and_checksum_address(account)
return (role, account)
def block_call(self, role: Union[bytes, str], account: str, debug: bool = False) -> bool:
"""Read hasRole via eth_call and return the result as a bool."""
_fn = self._underlying_method(role, account)
returned = _fn.call({
'from': self._operate
})
return bool(returned)
def block_send(self, role: Union[bytes, str], account: str, _valeth: int = 0) -> None:
"""Build, sign and broadcast the underlying hasRole transaction (eth_sendRawTransaction, not eth_call).
:param _valeth: wei value to attach to the transaction (default 0).
:returns: None; the outcome is reported via the receipt/failure handlers.
"""
_fn = self._underlying_method(role, account)
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("has_role", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: has_role")
message = f"Error {er}: has_role"
self._on_fail("has_role", message)
except ValueError as err:
if "message" in err.args[0]:
message = err.args[0]["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, has_role: {message}")
else:
message = "Error Revert , Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, has_role. Reason: Unknown")
self._on_fail("has_role", message)
def send_transaction(self, role: Union[bytes, str], account: str, tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
(role, account) = self.validate_and_normalize_inputs(role, account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(role, account).transact(tx_params.as_dict())
def build_transaction(self, role: Union[bytes, str], account: str, tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
(role, account) = self.validate_and_normalize_inputs(role, account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(role, account).buildTransaction(tx_params.as_dict())
def estimate_gas(self, role: Union[bytes, str], account: str, tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
(role, account) = self.validate_and_normalize_inputs(role, account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(role, account).estimateGas(tx_params.as_dict())
class IsGovernorMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the isGovernor method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address, validator)
self._underlying_method = contract_function
self.sign = validator.getSignature("isGovernor")
def validate_and_normalize_inputs(self, account: str)->any:
"""Validate the inputs to the isGovernor method."""
self.validator.assert_valid(
method_name='isGovernor',
parameter_name='account',
argument_value=account,
)
account = self.validate_and_checksum_address(account)
return account
def block_call(self, account: str, debug: bool = False) -> bool:
"""Read isGovernor via eth_call and return the result as a bool."""
_fn = self._underlying_method(account)
returned = _fn.call({
'from': self._operate
})
return bool(returned)
def block_send(self, account: str, _valeth: int = 0) -> None:
"""Build, sign and broadcast the underlying isGovernor transaction (eth_sendRawTransaction, not eth_call).
:param _valeth: wei value to attach to the transaction (default 0).
:returns: None; the outcome is reported via the receipt/failure handlers.
"""
_fn = self._underlying_method(account)
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("is_governor", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: is_governor")
message = f"Error {er}: is_governor"
self._on_fail("is_governor", message)
except ValueError as err:
if "message" in err.args[0]:
message = err.args[0]["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, is_governor: {message}")
else:
message = "Error Revert , Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, is_governor. Reason: Unknown")
self._on_fail("is_governor", message)
def send_transaction(self, account: str, tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
(account) = self.validate_and_normalize_inputs(account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(account).transact(tx_params.as_dict())
def build_transaction(self, account: str, tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
(account) = self.validate_and_normalize_inputs(account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(account).buildTransaction(tx_params.as_dict())
def estimate_gas(self, account: str, tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
(account) = self.validate_and_normalize_inputs(account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(account).estimateGas(tx_params.as_dict())
class IsLockedMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the isLocked method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address)
self._underlying_method = contract_function
self.sign = validator.getSignature("isLocked")
def block_call(self, debug: bool = False) -> bool:
"""Read isLocked via eth_call and return the result as a bool."""
_fn = self._underlying_method()
returned = _fn.call({
'from': self._operate
})
return bool(returned)
def block_send(self, _valeth: int = 0) -> None:
"""Build, sign and broadcast the underlying isLocked transaction (eth_sendRawTransaction, not eth_call).
:param _valeth: wei value to attach to the transaction (default 0).
:returns: None; the outcome is reported via the receipt/failure handlers.
"""
_fn = self._underlying_method()
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("is_locked", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: is_locked")
message = f"Error {er}: is_locked"
self._on_fail("is_locked", message)
except ValueError as err:
if "message" in err.args[0]:
message = err.args[0]["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, is_locked: {message}")
else:
message = "Error Revert , Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, is_locked. Reason: Unknown")
self._on_fail("is_locked", message)
def send_transaction(self, tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().transact(tx_params.as_dict())
def build_transaction(self, tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().buildTransaction(tx_params.as_dict())
def estimate_gas(self, tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().estimateGas(tx_params.as_dict())
class IsOwnerMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the isOwner method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address)
self._underlying_method = contract_function
self.sign = validator.getSignature("isOwner")
def block_call(self, debug: bool = False) -> bool:
"""Read isOwner via eth_call and return the result as a bool."""
_fn = self._underlying_method()
returned = _fn.call({
'from': self._operate
})
return bool(returned)
def block_send(self, _valeth: int = 0) -> None:
"""Build, sign and broadcast the underlying isOwner transaction (eth_sendRawTransaction, not eth_call).
:param _valeth: wei value to attach to the transaction (default 0).
:returns: None; the outcome is reported via the receipt/failure handlers.
"""
_fn = self._underlying_method()
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("is_owner", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: is_owner")
message = f"Error {er}: is_owner"
self._on_fail("is_owner", message)
except ValueError as err:
if "message" in err.args[0]:
message = err.args[0]["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, is_owner: {message}")
else:
message = "Error Revert , Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, is_owner. Reason: Unknown")
self._on_fail("is_owner", message)
def send_transaction(self, tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().transact(tx_params.as_dict())
def build_transaction(self, tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().buildTransaction(tx_params.as_dict())
def estimate_gas(self, tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().estimateGas(tx_params.as_dict())
class IsPausedMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the isPaused method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address)
self._underlying_method = contract_function
self.sign = validator.getSignature("isPaused")
def block_call(self, debug: bool = False) -> bool:
"""Read isPaused via eth_call and return the result as a bool."""
_fn = self._underlying_method()
returned = _fn.call({
'from': self._operate
})
return bool(returned)
def block_send(self, _valeth: int = 0) -> None:
"""Build, sign and broadcast the underlying isPaused transaction (eth_sendRawTransaction, not eth_call).
:param _valeth: wei value to attach to the transaction (default 0).
:returns: None; the outcome is reported via the receipt/failure handlers.
"""
_fn = self._underlying_method()
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("is_paused", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: is_paused")
message = f"Error {er}: is_paused"
self._on_fail("is_paused", message)
except ValueError as err:
if "message" in err.args[0]:
message = err.args[0]["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, is_paused: {message}")
else:
message = "Error Revert , Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, is_paused. Reason: Unknown")
self._on_fail("is_paused", message)
def send_transaction(self, tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().transact(tx_params.as_dict())
def build_transaction(self, tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().buildTransaction(tx_params.as_dict())
def estimate_gas(self, tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().estimateGas(tx_params.as_dict())
class IsWhitelistAdminMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the isWhitelistAdmin method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address, validator)
self._underlying_method = contract_function
self.sign = validator.getSignature("isWhitelistAdmin")
def validate_and_normalize_inputs(self, account: str)->any:
"""Validate the inputs to the isWhitelistAdmin method."""
self.validator.assert_valid(
method_name='isWhitelistAdmin',
parameter_name='account',
argument_value=account,
)
account = self.validate_and_checksum_address(account)
return account
def block_call(self, account: str, debug: bool = False) -> bool:
"""Read isWhitelistAdmin via eth_call and return the result as a bool."""
_fn = self._underlying_method(account)
returned = _fn.call({
'from': self._operate
})
return bool(returned)
def block_send(self, account: str, _valeth: int = 0) -> None:
"""Build, sign and broadcast the underlying isWhitelistAdmin transaction (eth_sendRawTransaction, not eth_call).
:param _valeth: wei value to attach to the transaction (default 0).
:returns: None; the outcome is reported via the receipt/failure handlers.
"""
_fn = self._underlying_method(account)
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("is_whitelist_admin", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: is_whitelist_admin")
message = f"Error {er}: is_whitelist_admin"
self._on_fail("is_whitelist_admin", message)
except ValueError as err:
if "message" in err.args[0]:
message = err.args[0]["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, is_whitelist_admin: {message}")
else:
message = "Error Revert , Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, is_whitelist_admin. Reason: Unknown")
self._on_fail("is_whitelist_admin", message)
def send_transaction(self, account: str, tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
(account) = self.validate_and_normalize_inputs(account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(account).transact(tx_params.as_dict())
def build_transaction(self, account: str, tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
(account) = self.validate_and_normalize_inputs(account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(account).buildTransaction(tx_params.as_dict())
def estimate_gas(self, account: str, tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
(account) = self.validate_and_normalize_inputs(account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(account).estimateGas(tx_params.as_dict())
class LockMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the lock method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address)
self._underlying_method = contract_function
self.sign = validator.getSignature("lock")
def block_send(self, _valeth: int = 0) -> None:
"""Build, sign and broadcast the underlying lock transaction (eth_sendRawTransaction, not eth_call).
:param _valeth: wei value to attach to the transaction (default 0).
:returns: None; the outcome is reported via the receipt/failure handlers.
"""
_fn = self._underlying_method()
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("lock", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: lock")
message = f"Error {er}: lock"
self._on_fail("lock", message)
except ValueError as err:
if "message" in err.args[0]:
message = err.args[0]["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, lock: {message}")
else:
message = "Error Revert , Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, lock. Reason: Unknown")
self._on_fail("lock", message)
def send_transaction(self, tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().transact(tx_params.as_dict())
def build_transaction(self, tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().buildTransaction(tx_params.as_dict())
def estimate_gas(self, tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().estimateGas(tx_params.as_dict())
class OwnerMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the owner method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address)
self._underlying_method = contract_function
self.sign = validator.getSignature("owner")
def block_call(self, debug: bool = False) -> str:
"""Read owner via eth_call and return the owner address as a str."""
_fn = self._underlying_method()
returned = _fn.call({
'from': self._operate
})
return str(returned)
def block_send(self, _valeth: int = 0) -> None:
"""Build, sign and broadcast the underlying owner transaction (eth_sendRawTransaction, not eth_call).
:param _valeth: wei value to attach to the transaction (default 0).
:returns: None; the outcome is reported via the receipt/failure handlers.
"""
_fn = self._underlying_method()
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("owner", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: owner")
message = f"Error {er}: owner"
self._on_fail("owner", message)
except ValueError as err:
if "message" in err.args[0]:
message = err.args[0]["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, owner: {message}")
else:
message = "Error Revert , Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, owner. Reason: Unknown")
self._on_fail("owner", message)
def send_transaction(self, tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().transact(tx_params.as_dict())
def build_transaction(self, tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().buildTransaction(tx_params.as_dict())
def estimate_gas(self, tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().estimateGas(tx_params.as_dict())
class PauseScMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the pauseSc method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address)
self._underlying_method = contract_function
self.sign = validator.getSignature("pauseSc")
def block_send(self, _valeth: int = 0) -> None:
"""Build, sign and broadcast the underlying pauseSc transaction (eth_sendRawTransaction, not eth_call).
:param _valeth: wei value to attach to the transaction (default 0).
:returns: None; the outcome is reported via the receipt/failure handlers.
"""
_fn = self._underlying_method()
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("pause_sc", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: pause_sc")
message = f"Error {er}: pause_sc"
self._on_fail("pause_sc", message)
except ValueError as err:
if "message" in err.args[0]:
message = err.args[0]["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, pause_sc: {message}")
else:
message = "Error Revert , Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, pause_sc. Reason: Unknown")
self._on_fail("pause_sc", message)
def send_transaction(self, tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().transact(tx_params.as_dict())
def build_transaction(self, tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().buildTransaction(tx_params.as_dict())
def estimate_gas(self, tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().estimateGas(tx_params.as_dict())
class RemoveGovernorMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the removeGovernor method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address, validator)
self._underlying_method = contract_function
self.sign = validator.getSignature("removeGovernor")
def validate_and_normalize_inputs(self, account: str)->any:
"""Validate the inputs to the removeGovernor method."""
self.validator.assert_valid(
method_name='removeGovernor',
parameter_name='account',
argument_value=account,
)
account = self.validate_and_checksum_address(account)
return account
def block_send(self, account: str, _valeth: int = 0) -> None:
"""Build, sign and broadcast the underlying removeGovernor transaction (eth_sendRawTransaction, not eth_call).
:param _valeth: wei value to attach to the transaction (default 0).
:returns: None; the outcome is reported via the receipt/failure handlers.
"""
_fn = self._underlying_method(account)
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("remove_governor", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: remove_governor")
message = f"Error {er}: remove_governor"
self._on_fail("remove_governor", message)
except ValueError as err:
if "message" in err.args[0]:
message = err.args[0]["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, remove_governor: {message}")
else:
message = "Error Revert , Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, remove_governor. Reason: Unknown")
self._on_fail("remove_governor", message)
def send_transaction(self, account: str, tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
(account) = self.validate_and_normalize_inputs(account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(account).transact(tx_params.as_dict())
def build_transaction(self, account: str, tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
(account) = self.validate_and_normalize_inputs(account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(account).buildTransaction(tx_params.as_dict())
def estimate_gas(self, account: str, tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
(account) = self.validate_and_normalize_inputs(account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(account).estimateGas(tx_params.as_dict())
class RemoveWhitelistAdminMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the removeWhitelistAdmin method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address, validator)
self._underlying_method = contract_function
self.sign = validator.getSignature("removeWhitelistAdmin")
def validate_and_normalize_inputs(self, account: str) -> str:
"""Validate the inputs to the removeWhitelistAdmin method."""
self.validator.assert_valid(
method_name='removeWhitelistAdmin',
parameter_name='account',
argument_value=account,
)
account = self.validate_and_checksum_address(account)
return (account)
def block_send(self, account: str, _valeth: int = 0) -> None:
"""Sign and send the underlying contract method as a raw transaction via eth_sendRawTransaction.
:param account: method argument
:param _valeth: optional wei value to attach to the transaction
"""
_fn = self._underlying_method(account)
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("remove_whitelist_admin", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: remove_whitelist_admin")
message = f"Error {er}: remove_whitelist_admin"
self._on_fail("remove_whitelist_admin", message)
except ValueError as err:
payload = err.args[0] if err.args else None
if isinstance(payload, dict) and "message" in payload:
message = payload["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, remove_whitelist_admin: {message}")
else:
message = "Error Revert, Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, remove_whitelist_admin. Reason: Unknown")
self._on_fail("remove_whitelist_admin", message)
def send_transaction(self, account: str, tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
(account) = self.validate_and_normalize_inputs(account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(account).transact(tx_params.as_dict())
def build_transaction(self, account: str, tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
(account) = self.validate_and_normalize_inputs(account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(account).buildTransaction(tx_params.as_dict())
def estimate_gas(self, account: str, tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
(account) = self.validate_and_normalize_inputs(account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(account).estimateGas(tx_params.as_dict())
class RenounceOwnershipMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the renounceOwnership method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address)
self._underlying_method = contract_function
self.sign = validator.getSignature("renounceOwnership")
def block_send(self, _valeth: int = 0) -> None:
"""Sign and send the underlying contract method as a raw transaction via eth_sendRawTransaction.
:param _valeth: optional wei value to attach to the transaction
"""
_fn = self._underlying_method()
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("renounce_ownership", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: renounce_ownership")
message = f"Error {er}: renounce_ownership"
self._on_fail("renounce_ownership", message)
except ValueError as err:
payload = err.args[0] if err.args else None
if isinstance(payload, dict) and "message" in payload:
message = payload["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, renounce_ownership: {message}")
else:
message = "Error Revert, Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, renounce_ownership. Reason: Unknown")
self._on_fail("renounce_ownership", message)
def send_transaction(self, tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().transact(tx_params.as_dict())
def build_transaction(self, tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().buildTransaction(tx_params.as_dict())
def estimate_gas(self, tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().estimateGas(tx_params.as_dict())
class RenounceRoleMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the renounceRole method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address, validator)
self._underlying_method = contract_function
self.sign = validator.getSignature("renounceRole")
def validate_and_normalize_inputs(self, role: Union[bytes, str], account: str) -> tuple:
"""Validate the inputs to the renounceRole method."""
self.validator.assert_valid(
method_name='renounceRole',
parameter_name='role',
argument_value=role,
)
self.validator.assert_valid(
method_name='renounceRole',
parameter_name='account',
argument_value=account,
)
account = self.validate_and_checksum_address(account)
return (role, account)
def block_send(self, role: Union[bytes, str], account: str, _valeth: int = 0) -> None:
"""Sign and send the underlying contract method as a raw transaction via eth_sendRawTransaction.
:param role: method argument
:param account: method argument
:param _valeth: optional wei value to attach to the transaction
"""
_fn = self._underlying_method(role, account)
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("renounce_role", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: renounce_role")
message = f"Error {er}: renounce_role"
self._on_fail("renounce_role", message)
except ValueError as err:
payload = err.args[0] if err.args else None
if isinstance(payload, dict) and "message" in payload:
message = payload["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, renounce_role: {message}")
else:
message = "Error Revert, Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, renounce_role. Reason: Unknown")
self._on_fail("renounce_role", message)
def send_transaction(self, role: Union[bytes, str], account: str, tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
(role, account) = self.validate_and_normalize_inputs(role, account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(role, account).transact(tx_params.as_dict())
def build_transaction(self, role: Union[bytes, str], account: str, tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
(role, account) = self.validate_and_normalize_inputs(role, account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(role, account).buildTransaction(tx_params.as_dict())
def estimate_gas(self, role: Union[bytes, str], account: str, tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
(role, account) = self.validate_and_normalize_inputs(role, account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(role, account).estimateGas(tx_params.as_dict())
class RevokeRoleMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the revokeRole method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address, validator)
self._underlying_method = contract_function
self.sign = validator.getSignature("revokeRole")
def validate_and_normalize_inputs(self, role: Union[bytes, str], account: str) -> tuple:
"""Validate the inputs to the revokeRole method."""
self.validator.assert_valid(
method_name='revokeRole',
parameter_name='role',
argument_value=role,
)
self.validator.assert_valid(
method_name='revokeRole',
parameter_name='account',
argument_value=account,
)
account = self.validate_and_checksum_address(account)
return (role, account)
def block_send(self, role: Union[bytes, str], account: str, _valeth: int = 0) -> None:
"""Sign and send the underlying contract method as a raw transaction via eth_sendRawTransaction.
:param role: method argument
:param account: method argument
:param _valeth: optional wei value to attach to the transaction
"""
_fn = self._underlying_method(role, account)
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("revoke_role", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: revoke_role")
message = f"Error {er}: revoke_role"
self._on_fail("revoke_role", message)
except ValueError as err:
payload = err.args[0] if err.args else None
if isinstance(payload, dict) and "message" in payload:
message = payload["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, revoke_role: {message}")
else:
message = "Error Revert, Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, revoke_role. Reason: Unknown")
self._on_fail("revoke_role", message)
def send_transaction(self, role: Union[bytes, str], account: str, tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
(role, account) = self.validate_and_normalize_inputs(role, account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(role, account).transact(tx_params.as_dict())
def build_transaction(self, role: Union[bytes, str], account: str, tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
(role, account) = self.validate_and_normalize_inputs(role, account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(role, account).buildTransaction(tx_params.as_dict())
def estimate_gas(self, role: Union[bytes, str], account: str, tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
(role, account) = self.validate_and_normalize_inputs(role, account)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(role, account).estimateGas(tx_params.as_dict())
class SigMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the sig method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address, validator)
self._underlying_method = contract_function
self.sign = validator.getSignature("sig")
def validate_and_normalize_inputs(self, user: str, coin: str) -> tuple:
"""Validate the inputs to the sig method."""
self.validator.assert_valid(
method_name='sig',
parameter_name='user',
argument_value=user,
)
user = self.validate_and_checksum_address(user)
self.validator.assert_valid(
method_name='sig',
parameter_name='coin',
argument_value=coin,
)
coin = self.validate_and_checksum_address(coin)
return (user, coin)
def block_call(self, user: str, coin: str, debug: bool = False) -> Union[bytes, str]:
"""Read the method result via eth_call without sending a transaction."""
_fn = self._underlying_method(user, coin)
returned = _fn.call({
'from': self._operate
})
return returned
def block_send(self, user: str, coin: str, _valeth: int = 0) -> None:
"""Sign and send the underlying contract method as a raw transaction via eth_sendRawTransaction.
:param user: method argument
:param coin: method argument
:param _valeth: optional wei value to attach to the transaction
"""
_fn = self._underlying_method(user, coin)
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("sig", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: sig")
message = f"Error {er}: sig"
self._on_fail("sig", message)
except ValueError as err:
payload = err.args[0] if err.args else None
if isinstance(payload, dict) and "message" in payload:
message = payload["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, sig: {message}")
else:
message = "Error Revert, Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, sig. Reason: Unknown")
self._on_fail("sig", message)
def send_transaction(self, user: str, coin: str, tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
(user, coin) = self.validate_and_normalize_inputs(user, coin)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(user, coin).transact(tx_params.as_dict())
def build_transaction(self, user: str, coin: str, tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
(user, coin) = self.validate_and_normalize_inputs(user, coin)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(user, coin).buildTransaction(tx_params.as_dict())
def estimate_gas(self, user: str, coin: str, tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
(user, coin) = self.validate_and_normalize_inputs(user, coin)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(user, coin).estimateGas(tx_params.as_dict())
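The `block_call` read path above differs from `block_send`: it runs the bound method through eth_call and returns the decoded value, with no nonce, signature, or broadcast involved. A minimal sketch of that contract, using the hypothetical `_EthCallFn` as a stand-in for web3's `ContractFunction`:

```python
class _EthCallFn:
    """Stand-in for a web3 ContractFunction already bound to its arguments."""
    def __init__(self, decoded_value):
        self._decoded_value = decoded_value

    def call(self, tx_overrides: dict):
        # eth_call executes locally on the node; nothing is broadcast,
        # so no nonce, gas price, or signature is required.
        return self._decoded_value

fn = _EthCallFn(b"\x12\x34")
result = fn.call({"from": "0x0000000000000000000000000000000000000000"})
```

State-changing methods must instead go through the signing path, which is why both entry points exist on every wrapper class.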
class TransferOwnershipMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the transferOwnership method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address, validator)
self._underlying_method = contract_function
self.sign = validator.getSignature("transferOwnership")
def validate_and_normalize_inputs(self, new_owner: str) -> str:
"""Validate the inputs to the transferOwnership method."""
self.validator.assert_valid(
method_name='transferOwnership',
parameter_name='newOwner',
argument_value=new_owner,
)
new_owner = self.validate_and_checksum_address(new_owner)
return (new_owner)
def block_send(self, new_owner: str, _valeth: int = 0) -> None:
"""Sign and send the underlying contract method as a raw transaction via eth_sendRawTransaction.
:param new_owner: method argument
:param _valeth: optional wei value to attach to the transaction
"""
_fn = self._underlying_method(new_owner)
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("transfer_ownership", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: transfer_ownership")
message = f"Error {er}: transfer_ownership"
self._on_fail("transfer_ownership", message)
except ValueError as err:
payload = err.args[0] if err.args else None
if isinstance(payload, dict) and "message" in payload:
message = payload["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, transfer_ownership: {message}")
else:
message = "Error Revert, Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, transfer_ownership. Reason: Unknown")
self._on_fail("transfer_ownership", message)
def send_transaction(self, new_owner: str, tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
(new_owner) = self.validate_and_normalize_inputs(new_owner)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(new_owner).transact(tx_params.as_dict())
def build_transaction(self, new_owner: str, tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
(new_owner) = self.validate_and_normalize_inputs(new_owner)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(new_owner).buildTransaction(tx_params.as_dict())
def estimate_gas(self, new_owner: str, tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
(new_owner) = self.validate_and_normalize_inputs(new_owner)
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method(new_owner).estimateGas(tx_params.as_dict())
class UnlockMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the unlock method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address)
self._underlying_method = contract_function
self.sign = validator.getSignature("unlock")
def block_send(self, _valeth: int = 0) -> None:
"""Sign and send the underlying contract method as a raw transaction via eth_sendRawTransaction.
:param _valeth: optional wei value to attach to the transaction
"""
_fn = self._underlying_method()
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("unlock", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: unlock")
message = f"Error {er}: unlock"
self._on_fail("unlock", message)
except ValueError as err:
payload = err.args[0] if err.args else None
if isinstance(payload, dict) and "message" in payload:
message = payload["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, unlock: {message}")
else:
message = "Error Revert, Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, unlock. Reason: Unknown")
self._on_fail("unlock", message)
def send_transaction(self, tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().transact(tx_params.as_dict())
def build_transaction(self, tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().buildTransaction(tx_params.as_dict())
def estimate_gas(self, tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().estimateGas(tx_params.as_dict())
class UnpauseScMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the unpauseSc method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address)
self._underlying_method = contract_function
self.sign = validator.getSignature("unpauseSc")
def block_send(self, _valeth: int = 0) -> None:
"""Sign and send the underlying contract method as a raw transaction via eth_sendRawTransaction.
:param _valeth: optional wei value to attach to the transaction
"""
_fn = self._underlying_method()
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("unpause_sc", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: unpause_sc")
message = f"Error {er}: unpause_sc"
self._on_fail("unpause_sc", message)
except ValueError as err:
payload = err.args[0] if err.args else None
if isinstance(payload, dict) and "message" in payload:
message = payload["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, unpause_sc: {message}")
else:
message = "Error Revert, Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, unpause_sc. Reason: Unknown")
self._on_fail("unpause_sc", message)
def send_transaction(self, tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().transact(tx_params.as_dict())
def build_transaction(self, tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().buildTransaction(tx_params.as_dict())
def estimate_gas(self, tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().estimateGas(tx_params.as_dict())
class WhitelistAdminsMethod(ContractMethod): # pylint: disable=invalid-name
"""Various interfaces to the whitelistAdmins method."""
def __init__(self, elib: MiliDoS, contract_address: str, contract_function: ContractFunction, validator: Validator=None):
"""Persist instance data."""
super().__init__(elib, contract_address)
self._underlying_method = contract_function
self.sign = validator.getSignature("whitelistAdmins")
def block_call(self, debug: bool = False) -> Union[bytes, str]:
"""Read the method result via eth_call without sending a transaction."""
_fn = self._underlying_method()
returned = _fn.call({
'from': self._operate
})
return returned
def block_send(self, _valeth: int = 0) -> None:
"""Sign and send the underlying contract method as a raw transaction via eth_sendRawTransaction.
:param _valeth: optional wei value to attach to the transaction
"""
_fn = self._underlying_method()
try:
_t = _fn.buildTransaction({
'from': self._operate,
'gas': self.gas_limit,
'gasPrice': self.gas_price_wei
})
_t['nonce'] = self._web3_eth.getTransactionCount(self._operate)
if _valeth > 0:
_t['value'] = _valeth
if self.debug_method:
print(f"======== Signing ✅ by {self._operate}")
print(f"======== Transaction ✅ check")
print(_t)
if 'data' in _t:
signed = self._web3_eth.account.sign_transaction(_t)
txHash = self._web3_eth.sendRawTransaction(signed.rawTransaction)
tx_receipt = None
if self.auto_reciept is True:
print(f"======== awaiting Confirmation 🚸️ {self.sign}")
tx_receipt = self._web3_eth.wait_for_transaction_receipt(txHash)
if self.debug_method:
print("======== TX Result ✅")
print(tx_receipt)
self._on_receipt_handle("whitelist_admins", tx_receipt, txHash)
if self.auto_reciept is False:
time.sleep(self._wait)
except ContractLogicError as er:
print(f"{Bolors.FAIL}Error {er} {Bolors.RESET}: whitelist_admins")
message = f"Error {er}: whitelist_admins"
self._on_fail("whitelist_admins", message)
except ValueError as err:
payload = err.args[0] if err.args else None
if isinstance(payload, dict) and "message" in payload:
message = payload["message"]
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, whitelist_admins: {message}")
else:
message = "Error Revert, Reason: Unknown"
print(f"{Bolors.FAIL}Error Revert {Bolors.RESET}, whitelist_admins. Reason: Unknown")
self._on_fail("whitelist_admins", message)
def send_transaction(self, tx_params: Optional[TxParams] = None) -> Union[HexBytes, bytes]:
"""Execute underlying contract method via eth_sendTransaction.
:param tx_params: transaction parameters
"""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().transact(tx_params.as_dict())
def build_transaction(self, tx_params: Optional[TxParams] = None) -> dict:
"""Construct calldata to be used as input to the method."""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().buildTransaction(tx_params.as_dict())
def estimate_gas(self, tx_params: Optional[TxParams] = None) -> int:
"""Estimate gas consumption of method call."""
tx_params = super().normalize_tx_params(tx_params)
return self._underlying_method().estimateGas(tx_params.as_dict())
class SignatureGenerator(Signatures):
"""
The signature is generated for this and it is installed.
"""
def __init__(self, abi: any):
super().__init__(abi)
def default_admin_role(self) -> str:
return self._function_signatures["DEFAULT_ADMIN_ROLE"]
def add_governor(self) -> str:
return self._function_signatures["addGovernor"]
def add_whitelist_admin(self) -> str:
return self._function_signatures["addWhitelistAdmin"]
def coin_user_vault(self) -> str:
return self._function_signatures["coin_user_vault"]
def coin_user_vault2(self) -> str:
return self._function_signatures["coin_user_vault2"]
def deposit_erc20(self) -> str:
return self._function_signatures["deposit_erc20"]
def get_role_admin(self) -> str:
return self._function_signatures["getRoleAdmin"]
def get_role_member(self) -> str:
return self._function_signatures["getRoleMember"]
def get_role_member_count(self) -> str:
return self._function_signatures["getRoleMemberCount"]
def governor(self) -> str:
return self._function_signatures["governor"]
def grant_role(self) -> str:
return self._function_signatures["grantRole"]
def has_role(self) -> str:
return self._function_signatures["hasRole"]
def is_governor(self) -> str:
return self._function_signatures["isGovernor"]
def is_locked(self) -> str:
return self._function_signatures["isLocked"]
def is_owner(self) -> str:
return self._function_signatures["isOwner"]
def is_paused(self) -> str:
return self._function_signatures["isPaused"]
def is_whitelist_admin(self) -> str:
return self._function_signatures["isWhitelistAdmin"]
def lock(self) -> str:
return self._function_signatures["lock"]
def owner(self) -> str:
return self._function_signatures["owner"]
def pause_sc(self) -> str:
return self._function_signatures["pauseSc"]
def remove_governor(self) -> str:
return self._function_signatures["removeGovernor"]
def remove_whitelist_admin(self) -> str:
return self._function_signatures["removeWhitelistAdmin"]
def renounce_ownership(self) -> str:
return self._function_signatures["renounceOwnership"]
def renounce_role(self) -> str:
return self._function_signatures["renounceRole"]
def revoke_role(self) -> str:
return self._function_signatures["revokeRole"]
def sig(self) -> str:
return self._function_signatures["sig"]
def transfer_ownership(self) -> str:
return self._function_signatures["transferOwnership"]
def unlock(self) -> str:
return self._function_signatures["unlock"]
def unpause_sc(self) -> str:
return self._function_signatures["unpauseSc"]
def whitelist_admins(self) -> str:
return self._function_signatures["whitelistAdmins"]
# pylint: disable=too-many-public-methods,too-many-instance-attributes
class TestEnclose(ContractBase):
"""Wrapper class for TestEnclose Solidity contract."""
_fn_default_admin_role: DefaultAdminRoleMethod
"""Constructor-initialized instance of
:class:`DefaultAdminRoleMethod`.
"""
_fn_add_governor: AddGovernorMethod
"""Constructor-initialized instance of
:class:`AddGovernorMethod`.
"""
_fn_add_whitelist_admin: AddWhitelistAdminMethod
"""Constructor-initialized instance of
:class:`AddWhitelistAdminMethod`.
"""
_fn_coin_user_vault: CoinUserVaultMethod
"""Constructor-initialized instance of
:class:`CoinUserVaultMethod`.
"""
_fn_coin_user_vault2: CoinUserVault2Method
"""Constructor-initialized instance of
:class:`CoinUserVault2Method`.
"""
_fn_deposit_erc20: DepositErc20Method
"""Constructor-initialized instance of
:class:`DepositErc20Method`.
"""
_fn_get_role_admin: GetRoleAdminMethod
"""Constructor-initialized instance of
:class:`GetRoleAdminMethod`.
"""
_fn_get_role_member: GetRoleMemberMethod
"""Constructor-initialized instance of
:class:`GetRoleMemberMethod`.
"""
_fn_get_role_member_count: GetRoleMemberCountMethod
"""Constructor-initialized instance of
:class:`GetRoleMemberCountMethod`.
"""
_fn_governor: GovernorMethod
"""Constructor-initialized instance of
:class:`GovernorMethod`.
"""
_fn_grant_role: GrantRoleMethod
"""Constructor-initialized instance of
:class:`GrantRoleMethod`.
"""
_fn_has_role: HasRoleMethod
"""Constructor-initialized instance of
:class:`HasRoleMethod`.
"""
_fn_is_governor: IsGovernorMethod
"""Constructor-initialized instance of
:class:`IsGovernorMethod`.
"""
_fn_is_locked: IsLockedMethod
"""Constructor-initialized instance of
:class:`IsLockedMethod`.
"""
_fn_is_owner: IsOwnerMethod
"""Constructor-initialized instance of
:class:`IsOwnerMethod`.
"""
_fn_is_paused: IsPausedMethod
"""Constructor-initialized instance of
:class:`IsPausedMethod`.
"""
_fn_is_whitelist_admin: IsWhitelistAdminMethod
"""Constructor-initialized instance of
:class:`IsWhitelistAdminMethod`.
"""
_fn_lock: LockMethod
"""Constructor-initialized instance of
:class:`LockMethod`.
"""
_fn_owner: OwnerMethod
"""Constructor-initialized instance of
:class:`OwnerMethod`.
"""
_fn_pause_sc: PauseScMethod
"""Constructor-initialized instance of
:class:`PauseScMethod`.
"""
_fn_remove_governor: RemoveGovernorMethod
"""Constructor-initialized instance of
:class:`RemoveGovernorMethod`.
"""
_fn_remove_whitelist_admin: RemoveWhitelistAdminMethod
"""Constructor-initialized instance of
:class:`RemoveWhitelistAdminMethod`.
"""
_fn_renounce_ownership: RenounceOwnershipMethod
"""Constructor-initialized instance of
:class:`RenounceOwnershipMethod`.
"""
_fn_renounce_role: RenounceRoleMethod
"""Constructor-initialized instance of
:class:`RenounceRoleMethod`.
"""
_fn_revoke_role: RevokeRoleMethod
"""Constructor-initialized instance of
:class:`RevokeRoleMethod`.
"""
_fn_sig: SigMethod
"""Constructor-initialized instance of
:class:`SigMethod`.
"""
_fn_transfer_ownership: TransferOwnershipMethod
"""Constructor-initialized instance of
:class:`TransferOwnershipMethod`.
"""
_fn_unlock: UnlockMethod
"""Constructor-initialized instance of
:class:`UnlockMethod`.
"""
_fn_unpause_sc: UnpauseScMethod
"""Constructor-initialized instance of
:class:`UnpauseScMethod`.
"""
_fn_whitelist_admins: WhitelistAdminsMethod
"""Constructor-initialized instance of
:class:`WhitelistAdminsMethod`.
"""
    SIGNATURES: SignatureGenerator = None
def __init__(
self,
core_lib: MiliDoS,
contract_address: str,
validator: TestEncloseValidator = None,
):
"""Get an instance of wrapper for smart contract.
"""
# pylint: disable=too-many-statements
super().__init__(contract_address, TestEnclose.abi())
web3 = core_lib.w3
if not validator:
validator = TestEncloseValidator(web3, contract_address)
# if any middleware was imported, inject it
try:
MIDDLEWARE
except NameError:
pass
else:
try:
for middleware in MIDDLEWARE:
web3.middleware_onion.inject(
middleware['function'], layer=middleware['layer'],
)
            except ValueError as value_error:
                if value_error.args == ("You can't add the same un-named instance twice",):
                    pass
                else:
                    raise
self._web3_eth = web3.eth
functions = self._web3_eth.contract(address=to_checksum_address(contract_address), abi=TestEnclose.abi()).functions
self._signatures = SignatureGenerator(TestEnclose.abi())
validator.bindSignatures(self._signatures)
self._fn_default_admin_role = DefaultAdminRoleMethod(core_lib, contract_address, functions.DEFAULT_ADMIN_ROLE, validator)
self._fn_add_governor = AddGovernorMethod(core_lib, contract_address, functions.addGovernor, validator)
self._fn_add_whitelist_admin = AddWhitelistAdminMethod(core_lib, contract_address, functions.addWhitelistAdmin, validator)
self._fn_coin_user_vault = CoinUserVaultMethod(core_lib, contract_address, functions.coin_user_vault, validator)
self._fn_coin_user_vault2 = CoinUserVault2Method(core_lib, contract_address, functions.coin_user_vault2, validator)
self._fn_deposit_erc20 = DepositErc20Method(core_lib, contract_address, functions.deposit_erc20, validator)
self._fn_get_role_admin = GetRoleAdminMethod(core_lib, contract_address, functions.getRoleAdmin, validator)
self._fn_get_role_member = GetRoleMemberMethod(core_lib, contract_address, functions.getRoleMember, validator)
self._fn_get_role_member_count = GetRoleMemberCountMethod(core_lib, contract_address, functions.getRoleMemberCount, validator)
self._fn_governor = GovernorMethod(core_lib, contract_address, functions.governor, validator)
self._fn_grant_role = GrantRoleMethod(core_lib, contract_address, functions.grantRole, validator)
self._fn_has_role = HasRoleMethod(core_lib, contract_address, functions.hasRole, validator)
self._fn_is_governor = IsGovernorMethod(core_lib, contract_address, functions.isGovernor, validator)
self._fn_is_locked = IsLockedMethod(core_lib, contract_address, functions.isLocked, validator)
self._fn_is_owner = IsOwnerMethod(core_lib, contract_address, functions.isOwner, validator)
self._fn_is_paused = IsPausedMethod(core_lib, contract_address, functions.isPaused, validator)
self._fn_is_whitelist_admin = IsWhitelistAdminMethod(core_lib, contract_address, functions.isWhitelistAdmin, validator)
self._fn_lock = LockMethod(core_lib, contract_address, functions.lock, validator)
self._fn_owner = OwnerMethod(core_lib, contract_address, functions.owner, validator)
self._fn_pause_sc = PauseScMethod(core_lib, contract_address, functions.pauseSc, validator)
self._fn_remove_governor = RemoveGovernorMethod(core_lib, contract_address, functions.removeGovernor, validator)
self._fn_remove_whitelist_admin = RemoveWhitelistAdminMethod(core_lib, contract_address, functions.removeWhitelistAdmin, validator)
self._fn_renounce_ownership = RenounceOwnershipMethod(core_lib, contract_address, functions.renounceOwnership, validator)
self._fn_renounce_role = RenounceRoleMethod(core_lib, contract_address, functions.renounceRole, validator)
self._fn_revoke_role = RevokeRoleMethod(core_lib, contract_address, functions.revokeRole, validator)
self._fn_sig = SigMethod(core_lib, contract_address, functions.sig, validator)
self._fn_transfer_ownership = TransferOwnershipMethod(core_lib, contract_address, functions.transferOwnership, validator)
self._fn_unlock = UnlockMethod(core_lib, contract_address, functions.unlock, validator)
self._fn_unpause_sc = UnpauseScMethod(core_lib, contract_address, functions.unpauseSc, validator)
self._fn_whitelist_admins = WhitelistAdminsMethod(core_lib, contract_address, functions.whitelistAdmins, validator)
def event_ownership_transferred(
self, tx_hash: Union[HexBytes, bytes]
) -> Tuple[AttributeDict]:
"""
Implementation of event ownership_transferred in contract TestEnclose
Get log entry for OwnershipTransferred event.
:param tx_hash: hash of transaction emitting OwnershipTransferred event
"""
tx_receipt = self._web3_eth.getTransactionReceipt(tx_hash)
return self._web3_eth.contract(address=to_checksum_address(self.contract_address), abi=TestEnclose.abi()).events.OwnershipTransferred().processReceipt(tx_receipt)
def event_role_granted(
self, tx_hash: Union[HexBytes, bytes]
) -> Tuple[AttributeDict]:
"""
Implementation of event role_granted in contract TestEnclose
Get log entry for RoleGranted event.
:param tx_hash: hash of transaction emitting RoleGranted event
"""
tx_receipt = self._web3_eth.getTransactionReceipt(tx_hash)
return self._web3_eth.contract(address=to_checksum_address(self.contract_address), abi=TestEnclose.abi()).events.RoleGranted().processReceipt(tx_receipt)
def event_role_revoked(
self, tx_hash: Union[HexBytes, bytes]
) -> Tuple[AttributeDict]:
"""
Implementation of event role_revoked in contract TestEnclose
Get log entry for RoleRevoked event.
:param tx_hash: hash of transaction emitting RoleRevoked event
"""
tx_receipt = self._web3_eth.getTransactionReceipt(tx_hash)
return self._web3_eth.contract(address=to_checksum_address(self.contract_address), abi=TestEnclose.abi()).events.RoleRevoked().processReceipt(tx_receipt)
def event_usr_deposit(
self, tx_hash: Union[HexBytes, bytes]
) -> Tuple[AttributeDict]:
"""
Implementation of event usr_deposit in contract TestEnclose
Get log entry for UsrDeposit event.
:param tx_hash: hash of transaction emitting UsrDeposit event
"""
tx_receipt = self._web3_eth.getTransactionReceipt(tx_hash)
return self._web3_eth.contract(address=to_checksum_address(self.contract_address), abi=TestEnclose.abi()).events.UsrDeposit().processReceipt(tx_receipt)
def event_contract_paused(
self, tx_hash: Union[HexBytes, bytes]
) -> Tuple[AttributeDict]:
"""
Implementation of event contract_paused in contract TestEnclose
Get log entry for contractPaused event.
:param tx_hash: hash of transaction emitting contractPaused event
"""
tx_receipt = self._web3_eth.getTransactionReceipt(tx_hash)
return self._web3_eth.contract(address=to_checksum_address(self.contract_address), abi=TestEnclose.abi()).events.contractPaused().processReceipt(tx_receipt)
def event_traillock(
self, tx_hash: Union[HexBytes, bytes]
) -> Tuple[AttributeDict]:
"""
Implementation of event traillock in contract TestEnclose
Get log entry for traillock event.
:param tx_hash: hash of transaction emitting traillock event
"""
tx_receipt = self._web3_eth.getTransactionReceipt(tx_hash)
return self._web3_eth.contract(address=to_checksum_address(self.contract_address), abi=TestEnclose.abi()).events.traillock().processReceipt(tx_receipt)
def default_admin_role(self) -> Union[bytes, str]:
"""
Implementation of default_admin_role in contract TestEnclose
        Read-only call; queries the contract without sending a transaction.
"""
self._fn_default_admin_role.callback_onfail = self._callback_onfail
self._fn_default_admin_role.callback_onsuccess = self._callback_onsuccess
self._fn_default_admin_role.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_default_admin_role.gas_limit = self.call_contract_fee_amount
self._fn_default_admin_role.gas_price_wei = self.call_contract_fee_price
self._fn_default_admin_role.debug_method = self.call_contract_debug_flag
return self._fn_default_admin_role.block_call()
def add_governor(self, account: str) -> None:
"""
Implementation of add_governor in contract TestEnclose
        Sends a transaction that invokes this function on-chain.
"""
self._fn_add_governor.callback_onfail = self._callback_onfail
self._fn_add_governor.callback_onsuccess = self._callback_onsuccess
self._fn_add_governor.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_add_governor.gas_limit = self.call_contract_fee_amount
self._fn_add_governor.gas_price_wei = self.call_contract_fee_price
self._fn_add_governor.debug_method = self.call_contract_debug_flag
return self._fn_add_governor.block_send(account)
def add_whitelist_admin(self, account: str) -> None:
"""
Implementation of add_whitelist_admin in contract TestEnclose
        Sends a transaction that invokes this function on-chain.
"""
self._fn_add_whitelist_admin.callback_onfail = self._callback_onfail
self._fn_add_whitelist_admin.callback_onsuccess = self._callback_onsuccess
self._fn_add_whitelist_admin.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_add_whitelist_admin.gas_limit = self.call_contract_fee_amount
self._fn_add_whitelist_admin.gas_price_wei = self.call_contract_fee_price
self._fn_add_whitelist_admin.debug_method = self.call_contract_debug_flag
return self._fn_add_whitelist_admin.block_send(account)
def coin_user_vault(self, index_0: Union[bytes, str]) -> int:
"""
Implementation of coin_user_vault in contract TestEnclose
        Read-only call; queries the contract without sending a transaction.
"""
self._fn_coin_user_vault.callback_onfail = self._callback_onfail
self._fn_coin_user_vault.callback_onsuccess = self._callback_onsuccess
self._fn_coin_user_vault.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_coin_user_vault.gas_limit = self.call_contract_fee_amount
self._fn_coin_user_vault.gas_price_wei = self.call_contract_fee_price
self._fn_coin_user_vault.debug_method = self.call_contract_debug_flag
return self._fn_coin_user_vault.block_call(index_0)
def coin_user_vault2(self, index_0: str, index_1: str) -> int:
"""
Implementation of coin_user_vault2 in contract TestEnclose
        Read-only call; queries the contract without sending a transaction.
"""
self._fn_coin_user_vault2.callback_onfail = self._callback_onfail
self._fn_coin_user_vault2.callback_onsuccess = self._callback_onsuccess
self._fn_coin_user_vault2.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_coin_user_vault2.gas_limit = self.call_contract_fee_amount
self._fn_coin_user_vault2.gas_price_wei = self.call_contract_fee_price
self._fn_coin_user_vault2.debug_method = self.call_contract_debug_flag
return self._fn_coin_user_vault2.block_call(index_0, index_1)
def deposit_erc20(self, coin: str, amount: int) -> None:
"""
Implementation of deposit_erc20 in contract TestEnclose
        Sends a transaction that invokes this function on-chain.
"""
self._fn_deposit_erc20.callback_onfail = self._callback_onfail
self._fn_deposit_erc20.callback_onsuccess = self._callback_onsuccess
self._fn_deposit_erc20.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_deposit_erc20.gas_limit = self.call_contract_fee_amount
self._fn_deposit_erc20.gas_price_wei = self.call_contract_fee_price
self._fn_deposit_erc20.debug_method = self.call_contract_debug_flag
return self._fn_deposit_erc20.block_send(coin, amount)
def get_role_admin(self, role: Union[bytes, str]) -> Union[bytes, str]:
"""
Implementation of get_role_admin in contract TestEnclose
        Read-only call; queries the contract without sending a transaction.
"""
self._fn_get_role_admin.callback_onfail = self._callback_onfail
self._fn_get_role_admin.callback_onsuccess = self._callback_onsuccess
self._fn_get_role_admin.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_get_role_admin.gas_limit = self.call_contract_fee_amount
self._fn_get_role_admin.gas_price_wei = self.call_contract_fee_price
self._fn_get_role_admin.debug_method = self.call_contract_debug_flag
return self._fn_get_role_admin.block_call(role)
def get_role_member(self, role: Union[bytes, str], index: int) -> str:
"""
Implementation of get_role_member in contract TestEnclose
        Read-only call; queries the contract without sending a transaction.
"""
self._fn_get_role_member.callback_onfail = self._callback_onfail
self._fn_get_role_member.callback_onsuccess = self._callback_onsuccess
self._fn_get_role_member.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_get_role_member.gas_limit = self.call_contract_fee_amount
self._fn_get_role_member.gas_price_wei = self.call_contract_fee_price
self._fn_get_role_member.debug_method = self.call_contract_debug_flag
return self._fn_get_role_member.block_call(role, index)
def get_role_member_count(self, role: Union[bytes, str]) -> int:
"""
Implementation of get_role_member_count in contract TestEnclose
        Read-only call; queries the contract without sending a transaction.
"""
self._fn_get_role_member_count.callback_onfail = self._callback_onfail
self._fn_get_role_member_count.callback_onsuccess = self._callback_onsuccess
self._fn_get_role_member_count.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_get_role_member_count.gas_limit = self.call_contract_fee_amount
self._fn_get_role_member_count.gas_price_wei = self.call_contract_fee_price
self._fn_get_role_member_count.debug_method = self.call_contract_debug_flag
return self._fn_get_role_member_count.block_call(role)
def governor(self) -> Union[bytes, str]:
"""
Implementation of governor in contract TestEnclose
        Read-only call; queries the contract without sending a transaction.
"""
self._fn_governor.callback_onfail = self._callback_onfail
self._fn_governor.callback_onsuccess = self._callback_onsuccess
self._fn_governor.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_governor.gas_limit = self.call_contract_fee_amount
self._fn_governor.gas_price_wei = self.call_contract_fee_price
self._fn_governor.debug_method = self.call_contract_debug_flag
return self._fn_governor.block_call()
def grant_role(self, role: Union[bytes, str], account: str) -> None:
"""
Implementation of grant_role in contract TestEnclose
        Sends a transaction that invokes this function on-chain.
"""
self._fn_grant_role.callback_onfail = self._callback_onfail
self._fn_grant_role.callback_onsuccess = self._callback_onsuccess
self._fn_grant_role.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_grant_role.gas_limit = self.call_contract_fee_amount
self._fn_grant_role.gas_price_wei = self.call_contract_fee_price
self._fn_grant_role.debug_method = self.call_contract_debug_flag
return self._fn_grant_role.block_send(role, account)
def has_role(self, role: Union[bytes, str], account: str) -> bool:
"""
Implementation of has_role in contract TestEnclose
        Read-only call; queries the contract without sending a transaction.
"""
self._fn_has_role.callback_onfail = self._callback_onfail
self._fn_has_role.callback_onsuccess = self._callback_onsuccess
self._fn_has_role.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_has_role.gas_limit = self.call_contract_fee_amount
self._fn_has_role.gas_price_wei = self.call_contract_fee_price
self._fn_has_role.debug_method = self.call_contract_debug_flag
return self._fn_has_role.block_call(role, account)
def is_governor(self, account: str) -> bool:
"""
Implementation of is_governor in contract TestEnclose
        Read-only call; queries the contract without sending a transaction.
"""
self._fn_is_governor.callback_onfail = self._callback_onfail
self._fn_is_governor.callback_onsuccess = self._callback_onsuccess
self._fn_is_governor.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_is_governor.gas_limit = self.call_contract_fee_amount
self._fn_is_governor.gas_price_wei = self.call_contract_fee_price
self._fn_is_governor.debug_method = self.call_contract_debug_flag
return self._fn_is_governor.block_call(account)
def is_locked(self) -> bool:
"""
Implementation of is_locked in contract TestEnclose
        Read-only call; queries the contract without sending a transaction.
"""
self._fn_is_locked.callback_onfail = self._callback_onfail
self._fn_is_locked.callback_onsuccess = self._callback_onsuccess
self._fn_is_locked.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_is_locked.gas_limit = self.call_contract_fee_amount
self._fn_is_locked.gas_price_wei = self.call_contract_fee_price
self._fn_is_locked.debug_method = self.call_contract_debug_flag
return self._fn_is_locked.block_call()
def is_owner(self) -> bool:
"""
Implementation of is_owner in contract TestEnclose
        Read-only call; queries the contract without sending a transaction.
"""
self._fn_is_owner.callback_onfail = self._callback_onfail
self._fn_is_owner.callback_onsuccess = self._callback_onsuccess
self._fn_is_owner.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_is_owner.gas_limit = self.call_contract_fee_amount
self._fn_is_owner.gas_price_wei = self.call_contract_fee_price
self._fn_is_owner.debug_method = self.call_contract_debug_flag
return self._fn_is_owner.block_call()
def is_paused(self) -> bool:
"""
Implementation of is_paused in contract TestEnclose
        Read-only call; queries the contract without sending a transaction.
"""
self._fn_is_paused.callback_onfail = self._callback_onfail
self._fn_is_paused.callback_onsuccess = self._callback_onsuccess
self._fn_is_paused.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_is_paused.gas_limit = self.call_contract_fee_amount
self._fn_is_paused.gas_price_wei = self.call_contract_fee_price
self._fn_is_paused.debug_method = self.call_contract_debug_flag
return self._fn_is_paused.block_call()
def is_whitelist_admin(self, account: str) -> bool:
"""
Implementation of is_whitelist_admin in contract TestEnclose
        Read-only call; queries the contract without sending a transaction.
"""
self._fn_is_whitelist_admin.callback_onfail = self._callback_onfail
self._fn_is_whitelist_admin.callback_onsuccess = self._callback_onsuccess
self._fn_is_whitelist_admin.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_is_whitelist_admin.gas_limit = self.call_contract_fee_amount
self._fn_is_whitelist_admin.gas_price_wei = self.call_contract_fee_price
self._fn_is_whitelist_admin.debug_method = self.call_contract_debug_flag
return self._fn_is_whitelist_admin.block_call(account)
def lock(self) -> None:
"""
Implementation of lock in contract TestEnclose
        Sends a transaction that invokes this function on-chain.
"""
self._fn_lock.callback_onfail = self._callback_onfail
self._fn_lock.callback_onsuccess = self._callback_onsuccess
self._fn_lock.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_lock.gas_limit = self.call_contract_fee_amount
self._fn_lock.gas_price_wei = self.call_contract_fee_price
self._fn_lock.debug_method = self.call_contract_debug_flag
return self._fn_lock.block_send()
def owner(self) -> str:
"""
Implementation of owner in contract TestEnclose
        Read-only call; queries the contract without sending a transaction.
"""
self._fn_owner.callback_onfail = self._callback_onfail
self._fn_owner.callback_onsuccess = self._callback_onsuccess
self._fn_owner.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_owner.gas_limit = self.call_contract_fee_amount
self._fn_owner.gas_price_wei = self.call_contract_fee_price
self._fn_owner.debug_method = self.call_contract_debug_flag
return self._fn_owner.block_call()
def pause_sc(self) -> None:
"""
Implementation of pause_sc in contract TestEnclose
        Sends a transaction that invokes this function on-chain.
"""
self._fn_pause_sc.callback_onfail = self._callback_onfail
self._fn_pause_sc.callback_onsuccess = self._callback_onsuccess
self._fn_pause_sc.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_pause_sc.gas_limit = self.call_contract_fee_amount
self._fn_pause_sc.gas_price_wei = self.call_contract_fee_price
self._fn_pause_sc.debug_method = self.call_contract_debug_flag
return self._fn_pause_sc.block_send()
def remove_governor(self, account: str) -> None:
"""
Implementation of remove_governor in contract TestEnclose
        Sends a transaction that invokes this function on-chain.
"""
self._fn_remove_governor.callback_onfail = self._callback_onfail
self._fn_remove_governor.callback_onsuccess = self._callback_onsuccess
self._fn_remove_governor.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_remove_governor.gas_limit = self.call_contract_fee_amount
self._fn_remove_governor.gas_price_wei = self.call_contract_fee_price
self._fn_remove_governor.debug_method = self.call_contract_debug_flag
return self._fn_remove_governor.block_send(account)
def remove_whitelist_admin(self, account: str) -> None:
"""
Implementation of remove_whitelist_admin in contract TestEnclose
        Sends a transaction that invokes this function on-chain.
"""
self._fn_remove_whitelist_admin.callback_onfail = self._callback_onfail
self._fn_remove_whitelist_admin.callback_onsuccess = self._callback_onsuccess
self._fn_remove_whitelist_admin.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_remove_whitelist_admin.gas_limit = self.call_contract_fee_amount
self._fn_remove_whitelist_admin.gas_price_wei = self.call_contract_fee_price
self._fn_remove_whitelist_admin.debug_method = self.call_contract_debug_flag
return self._fn_remove_whitelist_admin.block_send(account)
def renounce_ownership(self) -> None:
"""
Implementation of renounce_ownership in contract TestEnclose
        Sends a transaction that invokes this function on-chain.
"""
self._fn_renounce_ownership.callback_onfail = self._callback_onfail
self._fn_renounce_ownership.callback_onsuccess = self._callback_onsuccess
self._fn_renounce_ownership.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_renounce_ownership.gas_limit = self.call_contract_fee_amount
self._fn_renounce_ownership.gas_price_wei = self.call_contract_fee_price
self._fn_renounce_ownership.debug_method = self.call_contract_debug_flag
return self._fn_renounce_ownership.block_send()
def renounce_role(self, role: Union[bytes, str], account: str) -> None:
"""
Implementation of renounce_role in contract TestEnclose
        Sends a transaction that invokes this function on-chain.
"""
self._fn_renounce_role.callback_onfail = self._callback_onfail
self._fn_renounce_role.callback_onsuccess = self._callback_onsuccess
self._fn_renounce_role.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_renounce_role.gas_limit = self.call_contract_fee_amount
self._fn_renounce_role.gas_price_wei = self.call_contract_fee_price
self._fn_renounce_role.debug_method = self.call_contract_debug_flag
return self._fn_renounce_role.block_send(role, account)
def revoke_role(self, role: Union[bytes, str], account: str) -> None:
"""
Implementation of revoke_role in contract TestEnclose
        Sends a transaction that invokes this function on-chain.
"""
self._fn_revoke_role.callback_onfail = self._callback_onfail
self._fn_revoke_role.callback_onsuccess = self._callback_onsuccess
self._fn_revoke_role.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_revoke_role.gas_limit = self.call_contract_fee_amount
self._fn_revoke_role.gas_price_wei = self.call_contract_fee_price
self._fn_revoke_role.debug_method = self.call_contract_debug_flag
return self._fn_revoke_role.block_send(role, account)
def sig(self, user: str, coin: str) -> Union[bytes, str]:
"""
Implementation of sig in contract TestEnclose
        Read-only call; queries the contract without sending a transaction.
"""
self._fn_sig.callback_onfail = self._callback_onfail
self._fn_sig.callback_onsuccess = self._callback_onsuccess
self._fn_sig.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_sig.gas_limit = self.call_contract_fee_amount
self._fn_sig.gas_price_wei = self.call_contract_fee_price
self._fn_sig.debug_method = self.call_contract_debug_flag
return self._fn_sig.block_call(user, coin)
def transfer_ownership(self, new_owner: str) -> None:
"""
Implementation of transfer_ownership in contract TestEnclose
        Sends a transaction that invokes this function on-chain.
"""
self._fn_transfer_ownership.callback_onfail = self._callback_onfail
self._fn_transfer_ownership.callback_onsuccess = self._callback_onsuccess
self._fn_transfer_ownership.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_transfer_ownership.gas_limit = self.call_contract_fee_amount
self._fn_transfer_ownership.gas_price_wei = self.call_contract_fee_price
self._fn_transfer_ownership.debug_method = self.call_contract_debug_flag
return self._fn_transfer_ownership.block_send(new_owner)
def unlock(self) -> None:
"""
Implementation of unlock in contract TestEnclose
        Sends a transaction that invokes this function on-chain.
"""
self._fn_unlock.callback_onfail = self._callback_onfail
self._fn_unlock.callback_onsuccess = self._callback_onsuccess
self._fn_unlock.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_unlock.gas_limit = self.call_contract_fee_amount
self._fn_unlock.gas_price_wei = self.call_contract_fee_price
self._fn_unlock.debug_method = self.call_contract_debug_flag
return self._fn_unlock.block_send()
def unpause_sc(self) -> None:
"""
Implementation of unpause_sc in contract TestEnclose
        Sends a transaction that invokes this function on-chain.
"""
self._fn_unpause_sc.callback_onfail = self._callback_onfail
self._fn_unpause_sc.callback_onsuccess = self._callback_onsuccess
self._fn_unpause_sc.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_unpause_sc.gas_limit = self.call_contract_fee_amount
self._fn_unpause_sc.gas_price_wei = self.call_contract_fee_price
self._fn_unpause_sc.debug_method = self.call_contract_debug_flag
return self._fn_unpause_sc.block_send()
def whitelist_admins(self) -> Union[bytes, str]:
"""
Implementation of whitelist_admins in contract TestEnclose
        Read-only call; queries the contract without sending a transaction.
"""
self._fn_whitelist_admins.callback_onfail = self._callback_onfail
self._fn_whitelist_admins.callback_onsuccess = self._callback_onsuccess
self._fn_whitelist_admins.auto_reciept = self.call_contract_enforce_tx_receipt
self._fn_whitelist_admins.gas_limit = self.call_contract_fee_amount
self._fn_whitelist_admins.gas_price_wei = self.call_contract_fee_price
self._fn_whitelist_admins.debug_method = self.call_contract_debug_flag
return self._fn_whitelist_admins.block_call()
    def CallContractWait(self, t_long: int) -> "TestEnclose":
        """Set the receipt-wait timeout, in seconds, on every bound contract method; returns self for chaining."""
self._fn_default_admin_role.setWait(t_long)
self._fn_add_governor.setWait(t_long)
self._fn_add_whitelist_admin.setWait(t_long)
self._fn_coin_user_vault.setWait(t_long)
self._fn_coin_user_vault2.setWait(t_long)
self._fn_deposit_erc20.setWait(t_long)
self._fn_get_role_admin.setWait(t_long)
self._fn_get_role_member.setWait(t_long)
self._fn_get_role_member_count.setWait(t_long)
self._fn_governor.setWait(t_long)
self._fn_grant_role.setWait(t_long)
self._fn_has_role.setWait(t_long)
self._fn_is_governor.setWait(t_long)
self._fn_is_locked.setWait(t_long)
self._fn_is_owner.setWait(t_long)
self._fn_is_paused.setWait(t_long)
self._fn_is_whitelist_admin.setWait(t_long)
self._fn_lock.setWait(t_long)
self._fn_owner.setWait(t_long)
self._fn_pause_sc.setWait(t_long)
self._fn_remove_governor.setWait(t_long)
self._fn_remove_whitelist_admin.setWait(t_long)
self._fn_renounce_ownership.setWait(t_long)
self._fn_renounce_role.setWait(t_long)
self._fn_revoke_role.setWait(t_long)
self._fn_sig.setWait(t_long)
self._fn_transfer_ownership.setWait(t_long)
self._fn_unlock.setWait(t_long)
self._fn_unpause_sc.setWait(t_long)
self._fn_whitelist_admins.setWait(t_long)
return self
@staticmethod
def abi():
"""Return the ABI to the underlying contract."""
return json.loads(
'[{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"previousOwner","type":"address"},{"indexed":true,"internalType":"address","name":"newOwner","type":"address"}],"name":"OwnershipTransferred","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"bytes32","name":"role","type":"bytes32"},{"indexed":true,"internalType":"address","name":"account","type":"address"},{"indexed":true,"internalType":"address","name":"sender","type":"address"}],"name":"RoleGranted","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"bytes32","name":"role","type":"bytes32"},{"indexed":true,"internalType":"address","name":"account","type":"address"},{"indexed":true,"internalType":"address","name":"sender","type":"address"}],"name":"RoleRevoked","type":"event"},{"anonymous":false,"inputs":[{"indexed":false,"internalType":"address","name":"a","type":"address"},{"indexed":false,"internalType":"uint256","name":"b","type":"uint256"}],"name":"UsrDeposit","type":"event"},{"anonymous":false,"inputs":[{"indexed":false,"internalType":"bool","name":"value","type":"bool"}],"name":"contractPaused","type":"event"},{"anonymous":false,"inputs":[{"indexed":false,"internalType":"uint8","name":"value","type":"uint8"}],"name":"traillock","type":"event"},{"inputs":[],"name":"DEFAULT_ADMIN_ROLE","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"account","type":"address"}],"name":"addGovernor","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"account","type":"address"}],"name":"addWhitelistAdmin","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"bytes32","name":"index_0","type":"bytes32"}],"name":"coin_user_vault","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"intern
alType":"address","name":"index_0","type":"address"},{"internalType":"address","name":"index_1","type":"address"}],"name":"coin_user_vault2","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"coin","type":"address"},{"internalType":"uint256","name":"amount","type":"uint256"}],"name":"deposit_erc20","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"bytes32","name":"role","type":"bytes32"}],"name":"getRoleAdmin","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"bytes32","name":"role","type":"bytes32"},{"internalType":"uint256","name":"index","type":"uint256"}],"name":"getRoleMember","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"bytes32","name":"role","type":"bytes32"}],"name":"getRoleMemberCount","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"governor","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"bytes32","name":"role","type":"bytes32"},{"internalType":"address","name":"account","type":"address"}],"name":"grantRole","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"bytes32","name":"role","type":"bytes32"},{"internalType":"address","name":"account","type":"address"}],"name":"hasRole","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"account","type":"address"}],"name":"isGovernor","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"isLocked","outputs":[{"internalType":
"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"isOwner","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"isPaused","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"address","name":"account","type":"address"}],"name":"isWhitelistAdmin","outputs":[{"internalType":"bool","name":"","type":"bool"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"lock","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"owner","outputs":[{"internalType":"address","name":"","type":"address"}],"stateMutability":"view","type":"function"},{"inputs":[],"name":"pauseSc","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"account","type":"address"}],"name":"removeGovernor","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"account","type":"address"}],"name":"removeWhitelistAdmin","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"renounceOwnership","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"bytes32","name":"role","type":"bytes32"},{"internalType":"address","name":"account","type":"address"}],"name":"renounceRole","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"bytes32","name":"role","type":"bytes32"},{"internalType":"address","name":"account","type":"address"}],"name":"revokeRole","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[{"internalType":"address","name":"user","type":"address"},{"internalType":"address","name":"coin","type":"address"}],"name":"sig","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"stateMutability":"pure","type":"function"},{"inputs":[{"interna
lType":"address","name":"newOwner","type":"address"}],"name":"transferOwnership","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"unlock","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"unpauseSc","outputs":[],"stateMutability":"nonpayable","type":"function"},{"inputs":[],"name":"whitelistAdmins","outputs":[{"internalType":"bytes32","name":"","type":"bytes32"}],"stateMutability":"view","type":"function"}]' # noqa: E501 (line-too-long)
)
# pylint: disable=too-many-lines
| 42.028487 | 6,522 | 0.633341 | 19,163 | 171,140 | 5.360695 | 0.019465 | 0.039795 | 0.032319 | 0.014018 | 0.908612 | 0.861789 | 0.831563 | 0.80014 | 0.771929 | 0.736223 | 0 | 0.004205 | 0.256562 | 171,140 | 4,071 | 6,523 | 42.038811 | 0.802015 | 0.107993 | 0 | 0.677024 | 1 | 0.000409 | 0.145848 | 0.046252 | 0 | 0 | 0 | 0 | 0.01063 | 1 | 0.103843 | false | 0.001226 | 0.007768 | 0.012265 | 0.216271 | 0.110384 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
a128838c1c37be37866b3770d9a22030cc6667d5 | 6,481 | py | Python | tests/fortify/download_test.py | matt-fevold/webbreaker | b500fc620ebba03a27321c8f832ab77bb760b9c5 | [
"MIT"
] | 7 | 2018-12-20T19:18:43.000Z | 2019-12-10T15:03:41.000Z | tests/fortify/download_test.py | matt-fevold/webbreaker | b500fc620ebba03a27321c8f832ab77bb760b9c5 | [
"MIT"
] | 5 | 2019-04-02T17:07:44.000Z | 2020-02-17T07:08:11.000Z | tests/fortify/download_test.py | matt-fevold/webbreaker | b500fc620ebba03a27321c8f832ab77bb760b9c5 | [
"MIT"
] | 7 | 2019-01-10T10:40:55.000Z | 2022-03-13T14:08:37.000Z | import mock
import pytest
from webbreaker.fortify.download import FortifyDownload
def attribute_error_exception(**kwargs):
raise AttributeError('Test Failure')
def unbound_local_error_exception(**kwargs):
raise UnboundLocalError('Test Failure')
def io_error_exception(**kwargs):
raise IOError('Test Failure')
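The three raiser helpers above differ only in the exception class they raise, so they can be generated from one factory. A minimal stdlib-only sketch (`make_raiser` is an illustrative name, not part of webbreaker):

```python
def make_raiser(exc_type, message='Test Failure'):
    """Build a **kwargs-accepting callable that raises exc_type(message)."""
    def _raiser(**kwargs):
        raise exc_type(message)
    return _raiser


# Equivalent one-liners for the three helpers defined above:
attribute_error_exception = make_raiser(AttributeError)
unbound_local_error_exception = make_raiser(UnboundLocalError)
io_error_exception = make_raiser(IOError)
```

Note that `mock` also accepts an exception class or instance directly as a `side_effect` (e.g. `client_mock.side_effect = AttributeError('Test Failure')`), which would make these helper functions unnecessary.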
@mock.patch('webbreaker.fortify.download.FortifyDownload.download')
@mock.patch('webbreaker.fortify.download.FortifyAuth')
@mock.patch('webbreaker.fortify.download.FortifyConfig')
def test_fortify_download_successful_init_application_name_is_not_none(config_mock, auth_mock, download_mock):
expected_username = 'user'
expected_password = 'password'
expected_application = 'Test Application'
expected_version = 'Test Version'
auth_mock.return_value.authenticate.return_value = expected_username, expected_password
fortify_download = FortifyDownload(username=None,
password=None,
application_name=expected_application,
version_name=expected_version)
assert fortify_download.username == expected_username
assert fortify_download.password == expected_password
download_mock.assert_called_once_with(expected_application, expected_version)
assert config_mock.call_count == 1
assert auth_mock.call_count == 1
assert download_mock.call_count == 1
@mock.patch('webbreaker.fortify.download.FortifyDownload.download')
@mock.patch('webbreaker.fortify.download.FortifyAuth')
@mock.patch('webbreaker.fortify.download.FortifyConfig')
def test_fortify_download_successful_init_application_name_is_none(config_mock, auth_mock, download_mock):
expected_username = 'user'
expected_password = 'password'
expected_application = 'Test Application'
expected_version = 'Test Version'
auth_mock.return_value.authenticate.return_value = expected_username, expected_password
config_mock.return_value.application_name = expected_application
fortify_download = FortifyDownload(username=None,
password=None,
application_name=None,
version_name=expected_version)
assert fortify_download.username == expected_username
assert fortify_download.password == expected_password
download_mock.assert_called_once_with(expected_application, expected_version)
assert config_mock.call_count == 1
assert auth_mock.call_count == 1
assert download_mock.call_count == 1
@mock.patch('webbreaker.fortify.download.FortifyDownload.download_scan')
@mock.patch('webbreaker.fortify.download.FortifyAuth')
def test_fortify_download_download_successful_download(auth_mock, download_mock):
expected_username = 'user'
expected_password = 'password'
expected_application = 'Test Application'
expected_version = 'Test Version'
auth_mock.return_value.authenticate.return_value = expected_username, expected_password
download_mock.return_value = None
fortify_download = FortifyDownload(username=expected_username,
password=expected_password,
application_name=expected_application,
version_name=expected_version)
assert fortify_download.username == expected_username
assert fortify_download.password == expected_password
download_mock.assert_called_once_with(expected_application, expected_version)
assert auth_mock.call_count == 1
assert download_mock.call_count == 1
@mock.patch('webbreaker.fortify.download.FortifyHelper')
@mock.patch('webbreaker.fortify.download.FortifyAuth')
@mock.patch('webbreaker.fortify.download.FortifyConfig')
@mock.patch('webbreaker.fortify.download.Logger.app.critical')
def test_fortify_download_download_throws_attribute_error(log_mock, config_mock, auth_mock, client_mock):
expected_username = 'user'
expected_password = 'password'
expected_application = 'Test Application'
expected_version = 'Test Version'
auth_mock.return_value.authenticate.return_value = expected_username, expected_password
client_mock.side_effect = attribute_error_exception
with pytest.raises(SystemExit):
FortifyDownload(username=expected_username,
password=expected_password,
application_name=expected_application,
version_name=expected_version)
assert log_mock.call_count == 1
@mock.patch('webbreaker.fortify.download.FortifyHelper')
@mock.patch('webbreaker.fortify.download.FortifyAuth')
@mock.patch('webbreaker.fortify.download.FortifyConfig')
@mock.patch('webbreaker.fortify.download.Logger.app.critical')
def test_fortify_download_download_throws_unbound_local_error(log_mock, config_mock, auth_mock, client_mock):
expected_username = 'user'
expected_password = 'password'
expected_application = 'Test Application'
expected_version = 'Test Version'
auth_mock.return_value.authenticate.return_value = expected_username, expected_password
client_mock.side_effect = unbound_local_error_exception
with pytest.raises(SystemExit):
FortifyDownload(username=expected_username,
password=expected_password,
application_name=expected_application,
version_name=expected_version)
assert log_mock.call_count == 1
@mock.patch('webbreaker.fortify.download.FortifyHelper')
@mock.patch('webbreaker.fortify.download.FortifyAuth')
@mock.patch('webbreaker.fortify.download.FortifyConfig')
@mock.patch('webbreaker.fortify.download.Logger.app.error')
def test_fortify_download_download_throws_io_error(log_mock, config_mock, auth_mock, client_mock):
expected_username = 'user'
expected_password = 'password'
expected_application = 'Test Application'
expected_version = 'Test Version'
auth_mock.return_value.authenticate.return_value = expected_username, expected_password
client_mock.side_effect = io_error_exception
with pytest.raises(SystemExit):
FortifyDownload(username=expected_username,
password=expected_password,
application_name=expected_application,
version_name=expected_version)
assert log_mock.call_count == 1
| 41.018987 | 110 | 0.739083 | 685 | 6,481 | 6.656934 | 0.089051 | 0.118421 | 0.115132 | 0.114035 | 0.900877 | 0.894298 | 0.876535 | 0.876535 | 0.876535 | 0.849561 | 0 | 0.00208 | 0.184077 | 6,481 | 157 | 111 | 41.280255 | 0.86025 | 0 | 0 | 0.788136 | 0 | 0 | 0.175463 | 0.13287 | 0 | 0 | 0 | 0 | 0.169492 | 1 | 0.076271 | false | 0.177966 | 0.025424 | 0 | 0.101695 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
a1659259eb1c70cebb06aa75acce4e74e5bd314d | 2,750 | py | Python | data/atmosphere_biogeochemistry/ice_cores/viz/generate.py | ilopezgp/human_impacts | b2758245edac0946080a647f1dbfd1098c0f0b27 | [
"MIT"
] | 4 | 2020-08-25T00:52:01.000Z | 2020-11-16T16:57:46.000Z | data/atmosphere_biogeochemistry/ice_cores/viz/generate.py | ilopezgp/human_impacts | b2758245edac0946080a647f1dbfd1098c0f0b27 | [
"MIT"
] | 5 | 2020-10-30T21:22:55.000Z | 2021-12-30T02:07:02.000Z | data/atmosphere_biogeochemistry/ice_cores/viz/generate.py | ilopezgp/human_impacts | b2758245edac0946080a647f1dbfd1098c0f0b27 | [
"MIT"
] | 2 | 2020-08-28T10:11:28.000Z | 2020-11-11T07:58:46.000Z | #%%
import numpy as np
import pandas as pd
import altair as alt
import anthro.io
# Load the ice core data.
data = pd.read_csv('../processed/law2006_by_year_clean.csv')
# %%
# Generate a plot of inferred CO2 concentrations over the last 2k years
agg_data = pd.DataFrame()
agg_data['year'] = data['year_CE']
agg_data['CO2'] = data['CO2_spline_fit']
chart = alt.Chart(agg_data).encode(
x=alt.X(field='year', type='quantitative', title='year',
scale=alt.Scale(domain=(1, 2004))),
y=alt.Y(field=r'CO2', type='quantitative', title=r'Atmospheric CO2 concentration (ppm)',
scale=alt.Scale(zero=False)),
tooltip=[alt.Tooltip(field='year', type='quantitative', title='year'),
alt.Tooltip(field=r'CO2', type='nominal', title=r'[CO2] (ppm)')]
).properties(width='container', height=300)
l = chart.mark_line(color='dodgerblue')
p = chart.mark_point(color='dodgerblue', filled=True)
layer = alt.layer(l, p)
layer.save('CO2_ice_cores.json')
# %%
# Generate a plot of inferred CH4 concentrations over the last 2k years
agg_data = pd.DataFrame()
agg_data['year'] = data['year_CE']
agg_data['CH4'] = data['CH4_spline_fit']
chart = alt.Chart(agg_data).encode(
x=alt.X(field='year', type='quantitative', title='year',
scale=alt.Scale(domain=(1, 2004))),
y=alt.Y(field=r'CH4', type='quantitative', title=r'Atmospheric CH4 concentration (ppb)',
scale=alt.Scale(zero=False)),
tooltip=[alt.Tooltip(field='year', type='quantitative', title='year'),
alt.Tooltip(field=r'CH4', type='nominal', title=r'[CH4] (ppb)')]
).properties(width='container', height=300)
l = chart.mark_line(color='dodgerblue')
p = chart.mark_point(color='dodgerblue', filled=True)
layer = alt.layer(l, p)
layer.save('CH4_ice_cores.json')
# %%
# Generate a plot of inferred N2O concentrations over the last 2k years
agg_data = pd.DataFrame()
agg_data['year'] = data['year_CE']
agg_data['N2O'] = data['N2O_spline_fit']
chart = alt.Chart(agg_data).encode(
x=alt.X(field='year', type='quantitative', title='year',
scale=alt.Scale(domain=(1, 2004))),
y=alt.Y(field=r'N2O', type='quantitative', title=r'Atmospheric N2O concentration (ppb)',
scale=alt.Scale(zero=False)),
tooltip=[alt.Tooltip(field='year', type='quantitative', title='year'),
alt.Tooltip(field=r'N2O', type='nominal', title=r'[N2O] (ppb)')]
).properties(width='container', height=300)
l = chart.mark_line(color='dodgerblue')
p = chart.mark_point(color='dodgerblue', filled=True)
layer = alt.layer(l, p)
layer.save('N2O_ice_cores.json')
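The three chart blocks in this script are identical except for the gas name, the unit, and the output file, so the repeated strings can be derived from two parameters. A hedged sketch of that refactor (`gas_labels` is an illustrative helper, not part of anthro; the Altair calls themselves would stay as written above):

```python
def gas_labels(gas, unit):
    """Derive the data column, axis title, tooltip label, and output filename
    used by one chart block from the gas name and its concentration unit."""
    return {
        'column': f'{gas}_spline_fit',
        'y_title': f'Atmospheric {gas} concentration ({unit})',
        'tooltip': f'[{gas}] ({unit})',
        'filename': f'{gas}_ice_cores.json',
    }


# The three blocks above correspond to:
specs = [gas_labels('CO2', 'ppm'), gas_labels('CH4', 'ppb'), gas_labels('N2O', 'ppb')]
```

A single loop over `specs` could then build and save all three layered charts.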
| 39.855072 | 100 | 0.638545 | 384 | 2,750 | 4.476563 | 0.203125 | 0.048866 | 0.109948 | 0.08726 | 0.819663 | 0.762071 | 0.762071 | 0.723677 | 0.723677 | 0.723677 | 0 | 0.024642 | 0.188364 | 2,750 | 68 | 101 | 40.441176 | 0.74552 | 0.086182 | 0 | 0.66 | 1 | 0 | 0.237924 | 0.01517 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.08 | 0 | 0.08 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
a198523ea6d36247abe7801bf83c1e1b3088f820 | 1,614 | py | Python | Attendance_Management_System/vcet/student/migrations/0017_maths_ostl.py | jainritik153/ATTENDANCE-MANAGEMENT-SYSTEM | 182be2e07f63e6c5d6207e7e7d3c0d9629608e68 | [
"MIT"
] | null | null | null | Attendance_Management_System/vcet/student/migrations/0017_maths_ostl.py | jainritik153/ATTENDANCE-MANAGEMENT-SYSTEM | 182be2e07f63e6c5d6207e7e7d3c0d9629608e68 | [
"MIT"
] | null | null | null | Attendance_Management_System/vcet/student/migrations/0017_maths_ostl.py | jainritik153/ATTENDANCE-MANAGEMENT-SYSTEM | 182be2e07f63e6c5d6207e7e7d3c0d9629608e68 | [
"MIT"
] | null | null | null | # Generated by Django 2.0.2 on 2018-04-03 14:28
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('student', '0016_auto_20180402_1511'),
]
operations = [
migrations.CreateModel(
name='Maths',
fields=[
('roll', models.IntegerField(primary_key=True, serialize=False)),
('_1', models.CharField(default='Not', max_length=20)),
('_2', models.CharField(default='Not', max_length=20)),
('_3', models.CharField(default='Not', max_length=20)),
('_4', models.CharField(default='Not', max_length=10)),
('_5', models.CharField(default='Not', max_length=10)),
('_6', models.CharField(default='Not', max_length=10)),
('_7', models.CharField(default='Not', max_length=10)),
],
),
migrations.CreateModel(
name='Ostl',
fields=[
('roll', models.IntegerField(primary_key=True, serialize=False)),
('_1', models.CharField(default='Not', max_length=20)),
('_2', models.CharField(default='Not', max_length=20)),
('_3', models.CharField(default='Not', max_length=20)),
('_4', models.CharField(default='Not', max_length=10)),
('_5', models.CharField(default='Not', max_length=10)),
('_6', models.CharField(default='Not', max_length=10)),
('_7', models.CharField(default='Not', max_length=10)),
],
),
]
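Both `CreateModel` entries above repeat the same seven `CharField` columns `_1` through `_7` with max lengths 20, 20, 20, 10, 10, 10, 10. The (name, max_length) pairs can be generated rather than hand-typed. A stdlib-only sketch of that idea (`attendance_fields` is illustrative; the migration itself keeps the explicit tuples Django generated):

```python
def attendance_fields(max_lengths=(20, 20, 20, 10, 10, 10, 10)):
    """Return the (column name, max_length) pairs shared by the Maths and Ostl models."""
    return [(f'_{i}', length) for i, length in enumerate(max_lengths, start=1)]
```

Each pair would then map onto one `models.CharField(default='Not', max_length=length)` field.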
| 40.35 | 81 | 0.540273 | 167 | 1,614 | 5.023952 | 0.299401 | 0.250298 | 0.367104 | 0.417163 | 0.750894 | 0.750894 | 0.750894 | 0.750894 | 0.750894 | 0.750894 | 0 | 0.063923 | 0.292441 | 1,614 | 39 | 82 | 41.384615 | 0.670753 | 0.027881 | 0 | 0.727273 | 1 | 0 | 0.074665 | 0.014678 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.030303 | 0 | 0.121212 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
a1f88a771568cca00b751a2e96dc94ba1ada12f4 | 138 | py | Python | progmod/integrerte/__init__.py | NadderudVGS/ProgMod-Pakka | d359f24ac1417890e6c639f46f8c9b3931a925cf | [
"MIT"
] | 1 | 2020-07-15T14:29:40.000Z | 2020-07-15T14:29:40.000Z | progmod/integrerte/__init__.py | NadderudVGS/ProgMod-Pakka | d359f24ac1417890e6c639f46f8c9b3931a925cf | [
"MIT"
] | 5 | 2020-02-13T18:09:30.000Z | 2020-06-09T08:29:37.000Z | progmod/integrerte/__init__.py | NadderudVGS/ProgMod-Pakka | d359f24ac1417890e6c639f46f8c9b3931a925cf | [
"MIT"
] | null | null | null | from progmod.Integrerte.rektangel import rektangel
from progmod.Integrerte.trapes import trapes
from progmod.Integrerte.array import array | 46 | 50 | 0.876812 | 18 | 138 | 6.722222 | 0.388889 | 0.272727 | 0.520661 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.07971 | 138 | 3 | 51 | 46 | 0.952756 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
62a8389b349070841f6a1520e0cd1356a469c92e | 147 | py | Python | src/tespy/connections/__init__.py | anmartens/tespy | 9a543d67cd8266c15cb9940ca640d6a8eda27a28 | [
"MIT"
] | 1 | 2022-03-23T10:25:36.000Z | 2022-03-23T10:25:36.000Z | src/tespy/connections/__init__.py | anmartens/tespy | 9a543d67cd8266c15cb9940ca640d6a8eda27a28 | [
"MIT"
] | null | null | null | src/tespy/connections/__init__.py | anmartens/tespy | 9a543d67cd8266c15cb9940ca640d6a8eda27a28 | [
"MIT"
] | null | null | null | # -*- coding: utf-8
from .bus import Bus # noqa: F401
from .connection import Connection # noqa: F401
from .connection import Ref # noqa: F401
| 24.5 | 48 | 0.70068 | 21 | 147 | 4.904762 | 0.47619 | 0.23301 | 0.23301 | 0.427184 | 0.543689 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084746 | 0.197279 | 147 | 5 | 49 | 29.4 | 0.788136 | 0.340136 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
62b39629828fa81b089f607431282e4887759ff2 | 5,420 | py | Python | icons/mdesign.py | robert-hh/SSD1963-TFT-Library-for-PyBoard | db9786cdd95f9dab5334a9de28bed4e26436815c | [
"MIT"
] | 16 | 2016-02-23T12:20:36.000Z | 2021-02-02T06:41:49.000Z | icons/mdesign.py | robert-hh/SSD1963-TFT-Library-for-PyBoard-and-RP2040 | db9786cdd95f9dab5334a9de28bed4e26436815c | [
"MIT"
] | 2 | 2016-11-26T07:46:58.000Z | 2017-12-10T08:44:38.000Z | icons/mdesign.py | robert-hh/SSD1963-TFT-Library-for-PyBoard | db9786cdd95f9dab5334a9de28bed4e26436815c | [
"MIT"
] | 9 | 2016-06-04T08:22:55.000Z | 2020-04-19T14:40:36.000Z |
# Code generated by bmp_to_icon.py
from uctypes import addressof
_icons = {
0: (
b'\xff\xff\xff\x40\x01\xff\xff\xff\xff\xff\xf0\x00\x00\x07\xff\xff'
b'\xff\xff\x00\x00\x00\x00\x7f\xff\xff\xf0\x00\x00\x00\x00\x0f\xff'
b'\xff\xc0\x00\x00\x00\x00\x03\xff\xff\x00\x00\x00\x00\x00\x00\xff'
b'\xfc\x00\x00\x00\x00\x00\x00\x3f\xfc\x00\x00\x00\x00\x00\x00\x1f'
b'\xf0\x00\x00\x00\x00\x00\xa0\x0f\xf0\x00\x00\x00\x00\x02\xa8\x03'
b'\xc0\x00\x00\x00\x00\x0a\xa4\x03\xc0\x00\x00\x00\x00\x2a\x90\x03'
b'\x40\x00\x00\x00\x00\xaa\x40\x00\x00\x00\x00\x00\x02\xa9\x00\x00'
b'\x00\x06\x00\x00\x0a\xa4\x00\x00\x00\x1a\x80\x00\x2a\x90\x00\x00'
b'\x00\x2a\xa0\x00\xaa\x40\x00\x00\x00\x0a\xa8\x02\xa9\x00\x00\x00'
b'\x00\x02\xaa\x0a\xa4\x00\x00\x00\x40\x00\xaa\xaa\x90\x00\x00\x00'
b'\xc0\x00\x2a\xaa\x40\x00\x00\x01\xc0\x00\x0a\xa9\x00\x00\x00\x03'
b'\xd0\x00\x02\xa4\x00\x00\x00\x03\xf0\x00\x00\x90\x00\x00\x00\x0f'
b'\xf4\x00\x00\x00\x00\x00\x00\x0f\xfc\x00\x00\x00\x00\x00\x00\x3f'
b'\xff\x00\x00\x00\x00\x00\x00\xff\xff\xc0\x00\x00\x00\x00\x03\xff'
b'\xff\xf0\x00\x00\x00\x00\x0f\xff\xff\xfd\x00\x00\x00\x00\x3f\xff'
b'\xff\xff\xc0\x00\x00\x03\xff\xff\xff\xff\xff\x00\x00\x7f\xff\xff'
),
1: (
b'\xff\xff\xff\x40\x01\xff\xff\xff\xff\xff\xf0\x00\x00\x0b\xff\xff'
b'\xff\xff\x00\x00\x00\x00\x7f\xff\xff\xf0\x00\x07\xf8\x00\x0f\xff'
b'\xff\xc0\x03\xff\xff\xd0\x03\xff\xff\x00\x3f\xff\xff\xfc\x00\xff'
b'\xfc\x00\xff\xff\xff\xff\x40\x3f\xfc\x03\xff\xff\xff\xff\xd0\x2f'
b'\xf0\x0f\xff\xda\xa7\xff\xf0\x0f\xd0\x3f\xfd\xaa\xaa\x3f\xfc\x03'
b'\xc0\x3f\xf2\xaa\xaa\xaf\xff\x03\xc0\xff\xda\xaa\xaa\xab\xff\x01'
b'\x40\xff\xea\xaa\xaa\xab\xff\x00\x00\xff\x6a\xaa\xaa\xaa\xff\xc0'
b'\x01\xff\xaa\xaa\xaa\xaa\xff\xc0\x03\xff\xaa\xaa\xaa\xaa\xff\xc0'
b'\x03\xff\xaa\xaa\xaa\xaa\xff\xc0\x03\xff\xaa\xaa\xaa\xaa\xff\xc0'
b'\x02\xff\x6a\xaa\xaa\xaa\xff\xc0\x40\xff\xea\xaa\xaa\xab\xff\x80'
b'\xc0\xff\xca\xaa\xaa\xab\xff\x01\xc0\x7f\xfa\xaa\xaa\xaf\xff\x03'
b'\xe0\x3f\xfe\xaa\xaa\xbf\xfc\x03\xf0\x0f\xff\xea\xab\xff\xf4\x0f'
b'\xf4\x07\xff\xff\xff\xff\xf0\x0f\xfc\x01\xff\xff\xff\xff\xc0\x3f'
b'\xff\x00\x3f\xff\xff\xfd\x00\xff\xff\xc0\x0f\xff\xff\xf0\x03\xff'
b'\xff\xf0\x00\x3f\xfe\x00\x0f\xff\xff\xfe\x00\x00\x00\x00\x3f\xff'
b'\xff\xff\xc0\x00\x00\x03\xff\xff\xff\xff\xfd\x00\x00\x7f\xff\xff'
),
2: (
b'\xff\xff\xff\x40\x01\xff\xff\xff\xff\xff\xd0\x00\x00\x0b\xff\xff'
b'\xff\xff\x00\x00\x00\x00\x7f\xff\xff\xf0\x00\x07\xf8\x00\x0f\xff'
b'\xff\xc0\x03\xff\xff\xd0\x03\xff\xff\x00\x3f\xff\xff\xfc\x00\xff'
b'\xfc\x00\xff\xff\xff\xff\x40\x3f\xfc\x03\xff\xff\xff\xff\xd0\x2f'
b'\xf0\x0f\xff\xff\xff\xff\xf0\x0f\xd0\x3f\xff\xff\xff\xff\xfc\x03'
b'\xc0\x3f\xfe\xab\xfa\xbf\xff\x03\xc0\xff\xfe\xab\xfa\xbf\xff\x01'
b'\x40\xff\xfe\xab\xfa\xbf\xff\x00\x00\xff\xfe\xab\xfa\xbf\xff\xc0'
b'\x01\xff\xfe\xab\xfa\xbf\xff\xc0\x03\xff\xfe\xab\xfa\xbf\xff\xc0'
b'\x03\xff\xfe\xab\xfa\xbf\xff\xc0\x03\xff\xfe\xab\xfa\xbf\xff\xc0'
b'\x02\xff\xfe\xab\xfa\xbf\xff\xc0\x40\xff\xfe\xab\xfa\xbf\xff\x80'
b'\xc0\xff\xfe\xab\xfa\xbf\xff\x01\xc0\xbf\xfe\xab\xfa\xbf\xff\x03'
b'\xe0\x3f\xfe\xa7\xfa\xbf\xfc\x03\xf0\x0f\xff\xff\xff\xff\xf4\x0f'
b'\xf4\x07\xff\xff\xff\xff\xf0\x0f\xfc\x01\xff\xff\xff\xff\xc0\x3f'
b'\xff\x00\x3f\xff\xff\xfd\x00\xff\xff\xc0\x0f\xff\xff\xf0\x03\xff'
b'\xff\xf0\x00\x3f\xfe\x00\x0f\xff\xff\xfe\x00\x00\x00\x00\x3f\xff'
b'\xff\xff\xc0\x00\x00\x03\xff\xff\xff\xff\xfd\x00\x00\x7f\xff\xff'
),
3: (
b'\xff\xff\xff\x40\x01\xff\xff\xff\xff\xff\xd0\x00\x00\x0b\xff\xff'
b'\xff\xff\x00\x00\x00\x00\x7f\xff\xff\xf0\x00\x07\xf8\x00\x0f\xff'
b'\xff\xc0\x03\xff\xff\xd0\x03\xff\xff\x00\x3f\xff\xff\xfc\x00\xff'
b'\xfc\x00\xff\xff\xff\xff\x40\x3f\xfc\x03\xff\xff\xff\xff\xd0\x2f'
b'\xf0\x0f\xff\xff\xff\xff\xf0\x0f\xd0\x3f\xff\xef\xff\xff\xfc\x03'
b'\xc0\x3f\xff\xeb\xff\xff\xff\x03\xc0\xff\xff\xea\xff\xff\xff\x01'
b'\x40\xff\xff\xea\xaf\xff\xff\x00\x00\xff\xff\xea\xab\xff\xff\xc0'
b'\x01\xff\xff\xea\xaa\xff\xff\xc0\x03\xff\xff\xea\xaa\xaf\xff\xc0'
b'\x03\xff\xff\xea\xaa\xaf\xff\xc0\x03\xff\xff\xea\xaa\xbf\xff\xc0'
b'\x02\xff\xff\xea\xab\xff\xff\xc0\x40\xff\xff\xea\xaf\xff\xff\x80'
b'\xc0\xff\xff\xea\xbf\xff\xff\x01\xc0\xbf\xff\xeb\xff\xff\xff\x03'
b'\xe0\x3f\xff\xef\xff\xff\xfc\x03\xf0\x0f\xff\xff\xff\xff\xf4\x0f'
b'\xf4\x07\xff\xff\xff\xff\xf0\x0f\xfc\x01\xff\xff\xff\xff\xc0\x3f'
b'\xff\x00\x3f\xff\xff\xfd\x00\xff\xff\xc0\x0f\xff\xff\xf0\x03\xff'
b'\xff\xf0\x00\x3f\xfe\x00\x0f\xff\xff\xfe\x00\x00\x00\x00\x3f\xff'
b'\xff\xff\xc0\x00\x00\x03\xff\xff\xff\xff\xfd\x00\x00\x7f\xff\xff'
),
}
colortable = {
0: ( b'\x02\x06\x03\x02\x62\x65\x63\x00\x80\x80\x80\x00\xff\xff\xff\x00'),
1: ( b'\x02\x06\x03\x02\x62\x65\x63\x00\x00\x00\xff\x00\xff\xff\xff\x00'),
2: ( b'\x02\x06\x03\x02\x62\x65\x63\x00\x00\xff\x00\x00\xff\xff\xff\x00'),
3: ( b'\x02\x06\x03\x02\x62\x65\x63\x00\x00\xff\xff\x00\xff\xff\xff\x00'),
}
width = 32
height = 32
colors = 2
def get_icon(icon_index=0, color_index=0):
return width, height, addressof(_icons[icon_index]), colors, addressof(colortable[color_index])
def draw(x, y, icon_index, draw_fct, color_index=0):
draw_fct(x - width//2, y - height // 2, width, height, addressof(_icons[icon_index]), colors, addressof(colortable[color_index]))
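Each `_icons` blob above stores a 32x32 image at 2 bits per pixel (`colors = 2`), i.e. 8 bytes per row and 256 bytes per icon, with each 2-bit value indexing the 4-entry colortable. A sketch of expanding one row into palette indices, assuming MSB-first packing (the bit order is an assumption, not stated in this file):

```python
def unpack_row(row_bytes):
    """Expand 2-bit-per-pixel bytes (assumed MSB first) into palette indices 0-3."""
    pixels = []
    for byte in row_bytes:
        for shift in (6, 4, 2, 0):          # four 2-bit pixels per byte
            pixels.append((byte >> shift) & 0b11)
    return pixels
```

With `width = 32`, each consecutive 8-byte slice of an icon blob would decode to one 32-pixel row.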
| 57.052632 | 133 | 0.678598 | 1,234 | 5,420 | 2.967585 | 0.064019 | 0.270344 | 0.174495 | 0.124522 | 0.839159 | 0.751775 | 0.689514 | 0.599672 | 0.515838 | 0.509011 | 0 | 0.189065 | 0.078782 | 5,420 | 94 | 134 | 57.659574 | 0.544362 | 0.005904 | 0 | 0.306818 | 1 | 0.772727 | 0.808171 | 0.808171 | 0 | 1 | 0 | 0 | 0 | 1 | 0.022727 | false | 0 | 0.011364 | 0.011364 | 0.045455 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
62b9dac3fd9a8dce49ab323e4175195b74cc9add | 130 | py | Python | app/scmapp/__init__.py | muhiza/digital.cooperative | f57a749e10796b6e00920b21809ab56b9274d944 | [
"Unlicense"
] | null | null | null | app/scmapp/__init__.py | muhiza/digital.cooperative | f57a749e10796b6e00920b21809ab56b9274d944 | [
"Unlicense"
] | null | null | null | app/scmapp/__init__.py | muhiza/digital.cooperative | f57a749e10796b6e00920b21809ab56b9274d944 | [
"Unlicense"
] | null | null | null | from flask import Blueprint
supply_chain = Blueprint('supply_chain', __name__, template_folder="templates")
from . import views
| 21.666667 | 79 | 0.8 | 16 | 130 | 6.0625 | 0.6875 | 0.309278 | 0.412371 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 130 | 5 | 80 | 26 | 0.843478 | 0 | 0 | 0 | 0 | 0 | 0.161538 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 7 |
62e61562e6967a9a31a5345ff5af7bd109fa19c0 | 28,446 | py | Python | tests/unit/test_pack_unpack.py | andkononykhin/didcomm-python | a211440add554c9eb579536b87c9eed710015680 | [
"Apache-2.0"
] | null | null | null | tests/unit/test_pack_unpack.py | andkononykhin/didcomm-python | a211440add554c9eb579536b87c9eed710015680 | [
"Apache-2.0"
] | null | null | null | tests/unit/test_pack_unpack.py | andkononykhin/didcomm-python | a211440add554c9eb579536b87c9eed710015680 | [
"Apache-2.0"
] | null | null | null | import pytest
from authlib.common.encoding import json_dumps, json_loads
from didcomm.common.algorithms import SignAlg, AnonCryptAlg, AuthCryptAlg
from didcomm.common.resolvers import register_default_secrets_resolver, register_default_did_resolver
from didcomm.common.types import VerificationMethodType, VerificationMaterial, VerificationMaterialFormat
from didcomm.common.utils import parse_base64url_encoded_json
from didcomm.did_doc.did_doc import VerificationMethod
from didcomm.message import Message
from didcomm.pack_encrypted import pack_encrypted, PackEncryptedConfig
from didcomm.pack_signed import pack_signed
from didcomm.secrets.secrets_resolver import Secret
from didcomm.unpack import unpack, Metadata
from tests.common.test_resolvers import TestDIDDoc, TestDIDResolver, TestSecretsResolver
@pytest.mark.asyncio
async def test_pack_signed():
alice_secret = Secret(
kid="did:example:alice#key-1",
type=VerificationMethodType.ED25519_VERIFICATION_KEY_2018,
verification_material=VerificationMaterial(
format=VerificationMaterialFormat.JWK,
value=json_dumps({
"kty": "OKP",
"d": "pFRUKkyzx4kHdJtFSnlPA9WzqkDT1HWV0xZ5OYZd2SY",
"crv": "Ed25519",
"x": "G-boxFB6vOZBu-wXkm-9Lh79I8nf9Z50cILaOgKKGww"
})
)
)
register_default_secrets_resolver(TestSecretsResolver([alice_secret]))
alice_did_doc = TestDIDDoc(
did="did:example:alice",
key_agreement_kids=[],
authentication_kids=["did:example:alice#key-1"],
verification_methods=[VerificationMethod(
id="did:example:alice#key-1",
type=VerificationMethodType.ED25519_VERIFICATION_KEY_2018,
controller="did:example:alice",
verification_material=VerificationMaterial(
format=VerificationMaterialFormat.JWK,
value=json_dumps({
"kty": "OKP",
"crv": "Ed25519",
"x": "G-boxFB6vOZBu-wXkm-9Lh79I8nf9Z50cILaOgKKGww"
})
)
)],
didcomm_services=[]
)
register_default_did_resolver(TestDIDResolver([alice_did_doc]))
message = Message(
id="1234567890",
type="http://example.com/protocols/lets_do_lunch/1.0/proposal",
frm="did:example:alice",
to=[
"did:example:bob"
],
created_time=1516269022,
expires_time=1516385931,
body={
"messagespecificattribute": "and its value"
}
)
pack_result = await pack_signed(message, sign_frm="did:example:alice")
actual_decoded_packed_msg_wo_signature = _decode_and_remove_signatures(pack_result.packed_msg)
expected_packed_msg = json_dumps({
"payload": "eyJpZCI6IjEyMzQ1Njc4OTAiLCJ0eXAiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJ0"
"eXBlIjoiaHR0cDovL2V4YW1wbGUuY29tL3Byb3RvY29scy9sZXRzX2RvX2x1bmNoLzEuMC9wcm9wb3Nh"
"bCIsImZyb20iOiJkaWQ6ZXhhbXBsZTphbGljZSIsInRvIjpbImRpZDpleGFtcGxlOmJvYiJdLCJjcmVh"
"dGVkX3RpbWUiOjE1MTYyNjkwMjIsImV4cGlyZXNfdGltZSI6MTUxNjM4NTkzMSwiYm9keSI6eyJtZXNz"
"YWdlc3BlY2lmaWNhdHRyaWJ1dGUiOiJhbmQgaXRzIHZhbHVlIn19",
"signatures": [
{
"protected": "eyJ0eXAiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXNpZ25lZCtqc29uIiwiYWxnIjoiRWREU0EifQ",
"signature": "FW33NnvOHV0Ted9-F7GZbkia-vYAfBKtH4oBxbrttWAhBZ6UFJMxcGjL3lwOl4YohI3kyyd08LHPWNMgP2EVCQ",
"header": {
"kid": "did:example:alice#key-1"
}
}
]
})
expected_decoded_packed_msg_wo_signature = _decode_and_remove_signatures(expected_packed_msg)
expected_sign_from_kid = "did:example:alice#key-1"
assert actual_decoded_packed_msg_wo_signature == expected_decoded_packed_msg_wo_signature
assert pack_result.sign_from_kid == expected_sign_from_kid
unpack_result = await unpack(pack_result.packed_msg)
assert unpack_result.message == message
expected_metadata = Metadata(
encrypted=False,
authenticated=False,
non_repudiation=True,
anonymous_sender=False,
sign_from=expected_sign_from_kid,
sign_alg=SignAlg.ED25519,
signed_message=pack_result.packed_msg
)
assert unpack_result.metadata == expected_metadata
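The expected `payload` above is just unpadded base64url-encoded JSON of the plaintext message; didcomm exposes `parse_base64url_encoded_json` (imported at the top of this module) for this, but the mechanics are plain stdlib. A minimal sketch (`decode_b64url_json` is an illustrative name, not a didcomm API):

```python
import base64
import json


def decode_b64url_json(segment: str) -> dict:
    """Decode one unpadded base64url JWS segment (payload or protected header)."""
    padded = segment + '=' * (-len(segment) % 4)   # restore the stripped '=' padding
    return json.loads(base64.urlsafe_b64decode(padded))
```

Applied to the `payload` field of `expected_packed_msg`, this yields the plain message dict with `id`, `typ`, `type`, `from`, `to`, and `body`.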
@pytest.mark.asyncio
async def test_unpack_signed():
register_default_secrets_resolver(TestSecretsResolver([]))
alice_did_doc = TestDIDDoc(
did="did:example:alice",
key_agreement_kids=[],
authentication_kids=["did:example:alice#key-1"],
verification_methods=[VerificationMethod(
id="did:example:alice#key-1",
type=VerificationMethodType.ED25519_VERIFICATION_KEY_2018,
controller="did:example:alice",
verification_material=VerificationMaterial(
format=VerificationMaterialFormat.JWK,
value=json_dumps({
"kty": "OKP",
"crv": "Ed25519",
"x": "G-boxFB6vOZBu-wXkm-9Lh79I8nf9Z50cILaOgKKGww"
})
)
)],
didcomm_services=[]
)
register_default_did_resolver(TestDIDResolver([alice_did_doc]))
packed_message = json_dumps({
"payload": "eyJpZCI6IjEyMzQ1Njc4OTAiLCJ0eXAiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJ0"
"eXBlIjoiaHR0cDovL2V4YW1wbGUuY29tL3Byb3RvY29scy9sZXRzX2RvX2x1bmNoLzEuMC9wcm9wb3Nh"
"bCIsImZyb20iOiJkaWQ6ZXhhbXBsZTphbGljZSIsInRvIjpbImRpZDpleGFtcGxlOmJvYiJdLCJjcmVh"
"dGVkX3RpbWUiOjE1MTYyNjkwMjIsImV4cGlyZXNfdGltZSI6MTUxNjM4NTkzMSwiYm9keSI6eyJtZXNz"
"YWdlc3BlY2lmaWNhdHRyaWJ1dGUiOiJhbmQgaXRzIHZhbHVlIn19",
"signatures": [
{
"protected": "eyJ0eXAiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXNpZ25lZCtqc29uIiwiYWxnIjoiRWREU0EifQ",
"signature": "FW33NnvOHV0Ted9-F7GZbkia-vYAfBKtH4oBxbrttWAhBZ6UFJMxcGjL3lwOl4YohI3kyyd08LHPWNMgP2EVCQ",
"header": {
"kid": "did:example:alice#key-1"
}
}
]
})
unpack_result = await unpack(packed_message)
expected_message = Message(
id="1234567890",
typ="application/didcomm-plain+json",
type="http://example.com/protocols/lets_do_lunch/1.0/proposal",
frm="did:example:alice",
to=[
"did:example:bob"
],
created_time=1516269022,
expires_time=1516385931,
body={
"messagespecificattribute": "and its value"
}
)
assert unpack_result.message == expected_message
expected_metadata = Metadata(
encrypted=False,
authenticated=False,
non_repudiation=True,
anonymous_sender=False,
sign_from="did:example:alice#key-1",
sign_alg=SignAlg.ED25519,
signed_message=packed_message
)
assert unpack_result.metadata == expected_metadata
@pytest.mark.asyncio
async def test_pack_encrypted_for_anoncrypt():
bob_secret_1 = Secret(
kid="did:example:bob#key-p256-1",
type=VerificationMethodType.JSON_WEB_KEY_2020,
verification_material=VerificationMaterial(
format=VerificationMaterialFormat.JWK,
value=json_dumps({
"kty": "EC",
"d": "9KIW7dohB1e0IlavGTSmV6nT6l27oNvNnkdoKNcXe88",
"crv": "P-256",
"x": "z2mxGIK8jf_Pk2t3pjwUno3e9s8n8KTyWddQvP9fKas",
"y": "BhwSorIWrU6xAh7qPTG9DmnbuNQhuIlELZoJrnFMnv0"
})
)
)
bob_secret_2 = Secret(
kid="did:example:bob#key-p256-2",
type=VerificationMethodType.JSON_WEB_KEY_2020,
verification_material=VerificationMaterial(
format=VerificationMaterialFormat.JWK,
value=json_dumps({
"kty": "EC",
"d": "RymzxQ6R8Rv04v9cOVM9Ygl2_WSZUw4isPFVDFx2htU",
"crv": "P-256",
"x": "-akiIaFTb8yQFMXuLCEnvi-_oX6uOXBKbeUXk7qRP7k",
"y": "PRqnktHWOk6cBPQI17pXjFVnU6K7JDdUJxeXLE8Y5Yo"
})
)
)
register_default_secrets_resolver(TestSecretsResolver([bob_secret_1, bob_secret_2]))
bob_did_doc = TestDIDDoc(
did="did:example:bob",
key_agreement_kids=["did:example:bob#key-p256-1", "did:example:bob#key-p256-2"],
authentication_kids=[],
verification_methods=[
VerificationMethod(
id="did:example:bob#key-p256-1",
type=VerificationMethodType.JSON_WEB_KEY_2020,
controller="did:example:bob",
verification_material=VerificationMaterial(
format=VerificationMaterialFormat.JWK,
value=json_dumps({
"kty": "EC",
"crv": "P-256",
"x": "z2mxGIK8jf_Pk2t3pjwUno3e9s8n8KTyWddQvP9fKas",
"y": "BhwSorIWrU6xAh7qPTG9DmnbuNQhuIlELZoJrnFMnv0"
})
)
),
VerificationMethod(
id="did:example:bob#key-p256-2",
type=VerificationMethodType.JSON_WEB_KEY_2020,
controller="did:example:bob",
verification_material=VerificationMaterial(
format=VerificationMaterialFormat.JWK,
value=json_dumps({
"kty": "EC",
"crv": "P-256",
"x": "-akiIaFTb8yQFMXuLCEnvi-_oX6uOXBKbeUXk7qRP7k",
"y": "PRqnktHWOk6cBPQI17pXjFVnU6K7JDdUJxeXLE8Y5Yo"
})
)
)
],
didcomm_services=[]
)
register_default_did_resolver(TestDIDResolver([bob_did_doc]))
message = Message(
id="1234567890",
type="http://example.com/protocols/lets_do_lunch/1.0/proposal",
frm="did:example:alice",
to=[
"did:example:bob"
],
created_time=1516269022,
expires_time=1516385931,
body={
"messagespecificattribute": "and its value"
}
)
pack_result = await pack_encrypted(
message,
to="did:example:bob",
pack_config=PackEncryptedConfig(
enc_alg_anon=AnonCryptAlg.XC20P_ECDH_ES_A256KW,
forward=False
)
)
assert pack_result.to_kids == ["did:example:bob#key-p256-1", "did:example:bob#key-p256-2"]
assert pack_result.from_kid is None
assert pack_result.sign_from_kid is None
unpack_result = await unpack(pack_result.packed_msg)
assert unpack_result.message == message
expected_metadata = Metadata(
encrypted=True,
authenticated=False,
non_repudiation=False,
anonymous_sender=True,
encrypted_to=["did:example:bob#key-p256-1", "did:example:bob#key-p256-2"],
enc_alg_anon=AnonCryptAlg.XC20P_ECDH_ES_A256KW
)
assert unpack_result.metadata == expected_metadata
@pytest.mark.asyncio
async def test_unpack_encrypted_for_anoncrypt():
bob_secret_1 = Secret(
kid="did:example:bob#key-p256-1",
type=VerificationMethodType.JSON_WEB_KEY_2020,
verification_material=VerificationMaterial(
format=VerificationMaterialFormat.JWK,
value=json_dumps({
"kty": "EC",
"d": "9KIW7dohB1e0IlavGTSmV6nT6l27oNvNnkdoKNcXe88",
"crv": "P-256",
"x": "z2mxGIK8jf_Pk2t3pjwUno3e9s8n8KTyWddQvP9fKas",
"y": "BhwSorIWrU6xAh7qPTG9DmnbuNQhuIlELZoJrnFMnv0"
})
)
)
bob_secret_2 = Secret(
kid="did:example:bob#key-p256-2",
type=VerificationMethodType.JSON_WEB_KEY_2020,
verification_material=VerificationMaterial(
format=VerificationMaterialFormat.JWK,
value=json_dumps({
"kty": "EC",
"d": "RymzxQ6R8Rv04v9cOVM9Ygl2_WSZUw4isPFVDFx2htU",
"crv": "P-256",
"x": "-akiIaFTb8yQFMXuLCEnvi-_oX6uOXBKbeUXk7qRP7k",
"y": "PRqnktHWOk6cBPQI17pXjFVnU6K7JDdUJxeXLE8Y5Yo"
})
)
)
register_default_secrets_resolver(TestSecretsResolver([bob_secret_1, bob_secret_2]))
bob_did_doc = TestDIDDoc(
did="did:example:bob",
key_agreement_kids=["did:example:bob#key-p256-1", "did:example:bob#key-p256-2"],
authentication_kids=[],
verification_methods=[
VerificationMethod(
id="did:example:bob#key-p256-1",
type=VerificationMethodType.JSON_WEB_KEY_2020,
controller="did:example:bob",
verification_material=VerificationMaterial(
format=VerificationMaterialFormat.JWK,
value=json_dumps({
"kty": "EC",
"crv": "P-256",
"x": "z2mxGIK8jf_Pk2t3pjwUno3e9s8n8KTyWddQvP9fKas",
"y": "BhwSorIWrU6xAh7qPTG9DmnbuNQhuIlELZoJrnFMnv0"
})
)
),
VerificationMethod(
id="did:example:bob#key-p256-2",
type=VerificationMethodType.JSON_WEB_KEY_2020,
controller="did:example:bob",
verification_material=VerificationMaterial(
format=VerificationMaterialFormat.JWK,
value=json_dumps({
"kty": "EC",
"crv": "P-256",
"x": "-akiIaFTb8yQFMXuLCEnvi-_oX6uOXBKbeUXk7qRP7k",
"y": "PRqnktHWOk6cBPQI17pXjFVnU6K7JDdUJxeXLE8Y5Yo"
})
)
)
],
didcomm_services=[]
)
register_default_did_resolver(TestDIDResolver([bob_did_doc]))
packed_message = json_dumps({
"ciphertext": "P9vF3kq-jyDvFXy0GcHk7m1IH3ieJLH8E8enC_ZXYWmdmkGj6F4DT0YXYCLwjU9SAE4fIbIWiz5C6xk-"
"iz7tgQbhoFFL1O5W5NCp2xPUViqs3jI1NyxiJZFbmvIvErvFiUBy49VT7-jJfD22G-6DgrequTu7lLoh"
"nzVbIkf0y9ckK9ycGaDuT6do0dJdxZagFP0ej4qZWJFojv227Qn32My8ohCnXOszj5Mgdbg1ad9E1JNk"
"dwZHkow-drz4f82hccohG2pr4sf_aue2kHLpwfs7dOnujvcNMq6UIVolulk-friOCAtR84nmXDrQcI0L"
"VEUrdgNUCGgcnX95DcbcxtAGvSiqxyWauBt4ZkUuzMBFjOKJOIkW",
"protected": "eyJlcGsiOnsia3R5IjoiRUMiLCJjcnYiOiJQLTI1NiIsIngiOiIxc1luMDM3U2lERnBKMGQzV3VKbks5"
"MzRxV2xUOURabEhhUy1IZTlCNzFzIiwieSI6Ik03SzZPUUNCWGwyZjU3SDFKZlZsRm9Ha3VZTVNGVHBn"
"Mk9Zazh0Y0JXWEEifSwiYXB2Ijoiei1McXB2VlhEYl9zR1luM21qUUxwdXUyQ1FMZXdZdVpvVFdPSVhQ"
"SDNGTSIsInR5cCI6ImFwcGxpY2F0aW9uL2RpZGNvbW0tZW5jcnlwdGVkK2pzb24iLCJlbmMiOiJYQzIw"
"UCIsImFsZyI6IkVDREgtRVMrQTI1NktXIn0",
"recipients": [
{
"encrypted_key": "DEOpYs3EMN0En_5sbWIfzBTsvyeBq5xQU8LgxPJcoaUK1cB9hszdOw",
"header": {
"kid": "did:example:bob#key-p256-1"
}
},
{
"encrypted_key": "VldoeyO8s90A4BHvVAjgUdl7gJyNUoaKG-AjRumvcm40uxQYk7KjsA",
"header": {
"kid": "did:example:bob#key-p256-2"
}
}
],
"tag": "orzOQsbjcwBiR4Pu_CF0bg",
"iv": "UxgEJcKTxP_3Hw_FRC3etaEIYBimlctx"
})
unpack_result = await unpack(packed_message)
expected_message = Message(
id="1234567890",
typ="application/didcomm-plain+json",
type="http://example.com/protocols/lets_do_lunch/1.0/proposal",
frm="did:example:alice",
to=[
"did:example:bob"
],
created_time=1516269022,
expires_time=1516385931,
body={
"messagespecificattribute": "and its value"
}
)
assert unpack_result.message == expected_message
expected_metadata = Metadata(
encrypted=True,
authenticated=False,
non_repudiation=False,
anonymous_sender=True,
encrypted_to=["did:example:bob#key-p256-1", "did:example:bob#key-p256-2"],
enc_alg_anon=AnonCryptAlg.XC20P_ECDH_ES_A256KW
)
assert unpack_result.metadata == expected_metadata
@pytest.mark.asyncio
async def test_pack_encrypted_for_authcrypt():
alice_secret = Secret(
kid="did:example:alice#key-x25519-1",
type=VerificationMethodType.JSON_WEB_KEY_2020,
verification_material=VerificationMaterial(
format=VerificationMaterialFormat.JWK,
value=json_dumps({
"kty": "OKP",
"d": "r-jK2cO3taR8LQnJB1_ikLBTAnOtShJOsHXRUWT-aZA",
"crv": "X25519",
"x": "avH0O2Y4tqLAq8y9zpianr8ajii5m4F_mICrzNlatXs"
})
)
)
alice_secrets_resolver = TestSecretsResolver([alice_secret])
bob_secret_1 = Secret(
kid="did:example:bob#key-x25519-1",
type=VerificationMethodType.JSON_WEB_KEY_2020,
verification_material=VerificationMaterial(
format=VerificationMaterialFormat.JWK,
value=json_dumps({
"kty": "OKP",
"d": "b9NnuOCB0hm7YGNvaE9DMhwH_wjZA1-gWD6dA0JWdL0",
"crv": "X25519",
"x": "GDTrI66K0pFfO54tlCSvfjjNapIs44dzpneBgyx0S3E"
})
)
)
bob_secret_2 = Secret(
kid="did:example:bob#key-x25519-2",
type=VerificationMethodType.JSON_WEB_KEY_2020,
verification_material=VerificationMaterial(
format=VerificationMaterialFormat.JWK,
value=json_dumps({
"kty": "OKP",
"d": "p-vteoF1gopny1HXywt76xz_uC83UUmrgszsI-ThBKk",
"crv": "X25519",
"x": "UT9S3F5ep16KSNBBShU2wh3qSfqYjlasZimn0mB8_VM"
})
)
)
bob_secrets_resolver = TestSecretsResolver([bob_secret_1, bob_secret_2])
alice_did_doc = TestDIDDoc(
did="did:example:alice",
key_agreement_kids=["did:example:alice#key-x25519-1"],
authentication_kids=[],
verification_methods=[
VerificationMethod(
id="did:example:alice#key-x25519-1",
type=VerificationMethodType.JSON_WEB_KEY_2020,
controller="did:example:alice",
verification_material=VerificationMaterial(
format=VerificationMaterialFormat.JWK,
value=json_dumps({
"kty": "OKP",
"crv": "X25519",
"x": "avH0O2Y4tqLAq8y9zpianr8ajii5m4F_mICrzNlatXs"
})
)
)
],
didcomm_services=[]
)
bob_did_doc = TestDIDDoc(
did="did:example:bob",
key_agreement_kids=["did:example:bob#key-x25519-1", "did:example:bob#key-x25519-2"],
authentication_kids=[],
verification_methods=[
VerificationMethod(
id="did:example:bob#key-x25519-1",
type=VerificationMethodType.JSON_WEB_KEY_2020,
controller="did:example:bob",
verification_material=VerificationMaterial(
format=VerificationMaterialFormat.JWK,
value=json_dumps({
"kty": "OKP",
"crv": "X25519",
"x": "GDTrI66K0pFfO54tlCSvfjjNapIs44dzpneBgyx0S3E"
})
)
),
VerificationMethod(
id="did:example:bob#key-x25519-2",
type=VerificationMethodType.JSON_WEB_KEY_2020,
controller="did:example:bob",
verification_material=VerificationMaterial(
format=VerificationMaterialFormat.JWK,
value=json_dumps({
"kty": "OKP",
"crv": "X25519",
"x": "UT9S3F5ep16KSNBBShU2wh3qSfqYjlasZimn0mB8_VM"
})
)
)
],
didcomm_services=[]
)
register_default_did_resolver(TestDIDResolver([alice_did_doc, bob_did_doc]))
register_default_secrets_resolver(alice_secrets_resolver)
message = Message(
id="1234567890",
type="http://example.com/protocols/lets_do_lunch/1.0/proposal",
frm="did:example:alice",
to=[
"did:example:bob"
],
created_time=1516269022,
expires_time=1516385931,
body={
"messagespecificattribute": "and its value"
}
)
pack_result = await pack_encrypted(
message,
to="did:example:bob",
frm="did:example:alice",
pack_config=PackEncryptedConfig(
enc_alg_auth=AuthCryptAlg.A256CBC_HS512_ECDH_1PU_A256KW,
forward=False
)
)
assert pack_result.to_kids == ["did:example:bob#key-x25519-1", "did:example:bob#key-x25519-2"]
assert pack_result.from_kid == "did:example:alice#key-x25519-1"
assert pack_result.sign_from_kid is None
register_default_secrets_resolver(bob_secrets_resolver)
unpack_result = await unpack(pack_result.packed_msg)
assert unpack_result.message == message
expected_metadata = Metadata(
encrypted=True,
authenticated=True,
non_repudiation=False,
anonymous_sender=False,
encrypted_from="did:example:alice#key-x25519-1",
encrypted_to=["did:example:bob#key-x25519-1", "did:example:bob#key-x25519-2"],
enc_alg_auth=AuthCryptAlg.A256CBC_HS512_ECDH_1PU_A256KW
)
assert unpack_result.metadata == expected_metadata
@pytest.mark.asyncio
async def test_unpack_encrypted_for_authcrypt():
bob_secret_1 = Secret(
kid="did:example:bob#key-x25519-1",
type=VerificationMethodType.JSON_WEB_KEY_2020,
verification_material=VerificationMaterial(
format=VerificationMaterialFormat.JWK,
value=json_dumps({
"kty": "OKP",
"d": "b9NnuOCB0hm7YGNvaE9DMhwH_wjZA1-gWD6dA0JWdL0",
"crv": "X25519",
"x": "GDTrI66K0pFfO54tlCSvfjjNapIs44dzpneBgyx0S3E"
})
)
)
bob_secret_2 = Secret(
kid="did:example:bob#key-x25519-2",
type=VerificationMethodType.JSON_WEB_KEY_2020,
verification_material=VerificationMaterial(
format=VerificationMaterialFormat.JWK,
value=json_dumps({
"kty": "OKP",
"d": "p-vteoF1gopny1HXywt76xz_uC83UUmrgszsI-ThBKk",
"crv": "X25519",
"x": "UT9S3F5ep16KSNBBShU2wh3qSfqYjlasZimn0mB8_VM"
})
)
)
register_default_secrets_resolver(TestSecretsResolver([bob_secret_1, bob_secret_2]))
alice_did_doc = TestDIDDoc(
did="did:example:alice",
key_agreement_kids=["did:example:alice#key-x25519-1"],
authentication_kids=[],
verification_methods=[
VerificationMethod(
id="did:example:alice#key-x25519-1",
type=VerificationMethodType.JSON_WEB_KEY_2020,
controller="did:example:alice",
verification_material=VerificationMaterial(
format=VerificationMaterialFormat.JWK,
value=json_dumps({
"kty": "OKP",
"crv": "X25519",
"x": "avH0O2Y4tqLAq8y9zpianr8ajii5m4F_mICrzNlatXs"
})
)
)
],
didcomm_services=[]
)
bob_did_doc = TestDIDDoc(
did="did:example:bob",
key_agreement_kids=["did:example:bob#key-x25519-1", "did:example:bob#key-x25519-2"],
authentication_kids=[],
verification_methods=[
VerificationMethod(
id="did:example:bob#key-x25519-1",
type=VerificationMethodType.JSON_WEB_KEY_2020,
controller="did:example:bob",
verification_material=VerificationMaterial(
format=VerificationMaterialFormat.JWK,
value=json_dumps({
"kty": "OKP",
"crv": "X25519",
"x": "GDTrI66K0pFfO54tlCSvfjjNapIs44dzpneBgyx0S3E"
})
)
),
VerificationMethod(
id="did:example:bob#key-x25519-2",
type=VerificationMethodType.JSON_WEB_KEY_2020,
controller="did:example:bob",
verification_material=VerificationMaterial(
format=VerificationMaterialFormat.JWK,
value=json_dumps({
"kty": "OKP",
"crv": "X25519",
"x": "UT9S3F5ep16KSNBBShU2wh3qSfqYjlasZimn0mB8_VM"
})
)
)
],
didcomm_services=[]
)
register_default_did_resolver(TestDIDResolver([alice_did_doc, bob_did_doc]))
packed_message = json_dumps({
"ciphertext": "MJezmxJ8DzUB01rMjiW6JViSaUhsZBhMvYtezkhmwts1qXWtDB63i4-FHZP6cJSyCI7eU-gqH8lBXO_U"
"VuviWIqnIUrTRLaumanZ4q1dNKAnxNL-dHmb3coOqSvy3ZZn6W17lsVudjw7hUUpMbeMbQ5W8GokK9ZC"
"GaaWnqAzd1ZcuGXDuemWeA8BerQsfQw_IQm-aUKancldedHSGrOjVWgozVL97MH966j3i9CJc3k9jS9x"
"DuE0owoWVZa7SxTmhl1PDetmzLnYIIIt-peJtNYGdpd-FcYxIFycQNRUoFEr77h4GBTLbC-vqbQHJC1v"
"W4O2LEKhnhOAVlGyDYkNbA4DSL-LMwKxenQXRARsKSIMn7z-ZIqTE-VCNj9vbtgR",
"protected": "eyJlcGsiOnsia3R5IjoiT0tQIiwiY3J2IjoiWDI1NTE5IiwieCI6IkdGY01vcEpsamY0cExaZmNoNGFf"
"R2hUTV9ZQWY2aU5JMWRXREd5VkNhdzAifSwiYXB2IjoiTmNzdUFuclJmUEs2OUEtcmtaMEw5WFdVRzRq"
"TXZOQzNaZzc0QlB6NTNQQSIsInNraWQiOiJkaWQ6ZXhhbXBsZTphbGljZSNrZXkteDI1NTE5LTEiLCJh"
"cHUiOiJaR2xrT21WNFlXMXdiR1U2WVd4cFkyVWphMlY1TFhneU5UVXhPUzB4IiwidHlwIjoiYXBwbGlj"
"YXRpb24vZGlkY29tbS1lbmNyeXB0ZWQranNvbiIsImVuYyI6IkEyNTZDQkMtSFM1MTIiLCJhbGciOiJF"
"Q0RILTFQVStBMjU2S1cifQ",
"recipients": [
{
"encrypted_key": "o0FJASHkQKhnFo_rTMHTI9qTm_m2mkJp-wv96mKyT5TP7QjB"
"DuiQ0AMKaPI_RLLB7jpyE-Q80Mwos7CvwbMJDhIEBnk2qHVB",
"header": {
"kid": "did:example:bob#key-x25519-1"
}
},
{
"encrypted_key": "rYlafW0XkNd8kaXCqVbtGJ9GhwBC3lZ9AihHK4B6J6V2kT7v"
"jbSYuIpr1IlAjvxYQOw08yqEJNIwrPpB0ouDzKqk98FVN7rK",
"header": {
"kid": "did:example:bob#key-x25519-2"
}
}
],
"tag": "uYeo7IsZjN7AnvBjUZE5lNryNENbf6_zew_VC-d4b3U",
"iv": "o02OXDQ6_-sKz2PX_6oyJg"
})
unpack_result = await unpack(packed_message)
expected_message = Message(
id="1234567890",
typ="application/didcomm-plain+json",
type="http://example.com/protocols/lets_do_lunch/1.0/proposal",
frm="did:example:alice",
to=[
"did:example:bob"
],
created_time=1516269022,
expires_time=1516385931,
body={
"messagespecificattribute": "and its value"
}
)
assert unpack_result.message == expected_message
expected_metadata = Metadata(
encrypted=True,
authenticated=True,
non_repudiation=False,
anonymous_sender=False,
encrypted_from="did:example:alice#key-x25519-1",
encrypted_to=["did:example:bob#key-x25519-1", "did:example:bob#key-x25519-2"],
enc_alg_auth=AuthCryptAlg.A256CBC_HS512_ECDH_1PU_A256KW
)
assert unpack_result.metadata == expected_metadata
def _decode_and_remove_signatures(jws: str) -> dict:
# avoid shadowing the str parameter with the decoded dict
jws_dict = json_loads(jws)
jws_dict['payload'] = parse_base64url_encoded_json(jws_dict['payload'])
for s in jws_dict['signatures']:
s['protected'] = parse_base64url_encoded_json(s['protected'])
del s['signature']
return jws_dict
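The helper above depends on `parse_base64url_encoded_json`, imported elsewhere in this test module. As a rough standalone sketch (the padding handling is an assumption; JWS serializes base64url without `=` padding), the decoding step amounts to:

```python
import base64
import json

def parse_base64url_encoded_json(b64url: str) -> dict:
    """Decode an unpadded base64url string (JWS style) into a JSON object."""
    # base64url in JWS is serialized without '=' padding; restore it first.
    padded = b64url + "=" * (-len(b64url) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Protected header taken from the signed test vector above.
protected = "eyJ0eXAiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXNpZ25lZCtqc29uIiwiYWxnIjoiRWREU0EifQ"
print(parse_base64url_encoded_json(protected))
# → {'typ': 'application/didcomm-signed+json', 'alg': 'EdDSA'}
```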
"""Common verification functions for chassis"""
# Python
import re
import logging
import operator
# Genie
from genie.utils.timeout import Timeout
from genie.metaparser.util.exceptions import SchemaEmptyParserError
from genie.utils import Dq
log = logging.getLogger(__name__)
def verify_chassis_fpc_slot_state(device, expected_state,
expected_slot=None,
all_slots=False,
environment=False,
max_time=60,
check_interval=10):
""" Verifies slot state via
- show chassis fpc
- show chassis environment fpc
Args:
device (obj): Device object
expected_state (list): Expected state of that slot. For example: ["Offline", "Online"].
expected_slot (str, optional): Expected slot to check. For example: "0".
all_slots(bool, optional): Flag that indicate all slots need to be verified. Defaults to False.
environment(bool, optional): Flag that indicate different show commands. Defaults to False.
max_time (int, optional): Maximum timeout time. Defaults to 60.
check_interval (int, optional): Check interval. Defaults to 10.
Returns:
True/False
"""
timeout = Timeout(max_time, check_interval)
while timeout.iterate():
out = None
try:
if environment:
out = device.parse("show chassis environment fpc")
else:
out = device.parse('show chassis fpc')
except SchemaEmptyParserError:
timeout.sleep()
continue
# Sample outputs:
#
# show chassis fpc:
# 'fpc-information': {
# 'fpc': [{'slot': '0',
# 'state': 'Offline'}]
# show chassis environment fpc:
# {'environment-component-information': {
# 'environment-component-item': [
# {
# 'name': 'FPC 0',
# 'state': 'Online', <-----------------
# ...
fpc_list = out.q.contains('slot|state', regex=True).get_values('fpc')
# Verify all slot have the same expected_state
if all_slots:
states_set = set(out.q.get_values('state'))
if states_set.issubset(set(expected_state)):
return True
# Verify given slot has the expected_state
else:
# show chassis environment fpc
if environment:
items = out.q.get_values("environment-component-item", None)
for i in items:
slot_pattern = re.compile(r'FPC +(?P<slot>\d+)')
slot_match = slot_pattern.match(i['name'])
if slot_match:
if slot_match.groupdict()['slot'] == expected_slot and i['state'] in expected_state:
return True
# show chassis fpc
else:
for fpc in fpc_list:
slot = fpc.get('slot')
state = fpc.get('state')
if slot == expected_slot and state in expected_state:
return True
timeout.sleep()
return False
def verify_chassis_re_state(device,
expected_re_state,
max_time=60,
check_interval=10,):
""" Verify output of show chassis routing-engine ends as expected state
Args:
device (`obj`): Device object
expected_re_state (`str`): Expected end of output state
max_time (`int`): Max time, default: 60 seconds
check_interval (`int`): Check interval, default: 10 seconds
Returns:
result (`bool`): Verified result
Raises:
N/A
"""
timeout = Timeout(max_time, check_interval)
while timeout.iterate():
try:
output = device.parse('show chassis routing-engine')
except SchemaEmptyParserError:
return None
# Sample output
# "route-engine-information": {
# "route-engine": [{
# "mastership-state": "Master",
# ...
# },
# {
# "mastership-state": "Backup",
# }]
# "re-state": {master}
re_state = output.q.get_values('re-state')
if expected_re_state in re_state:
return True
timeout.sleep()
return False
def verify_chassis_slots_present(device,
expected_slots,
invert=False,
max_time=60,
check_interval=10,):
""" Verify slots present in 'show chassis routing-engine'
Args:
device (`obj`): Device object
expected_slots (`list`): Given slots
invert ('bool', 'optional'): Inverts to check if it doesn't exist
max_time (`int`): Max time, default: 60 seconds
check_interval (`int`): Check interval, default: 10 seconds
Returns:
result (`bool`): Verified result
Raises:
N/A
"""
timeout = Timeout(max_time, check_interval)
while timeout.iterate():
try:
output = device.parse('show chassis routing-engine')
except SchemaEmptyParserError:
return None
# Sample output
# {
# "route-engine-information": {
# "route-engine": [{
# ...
# "model": "RE-VMX",
# "slot": "0", <------------------
# "start-time": {
# "#text": "2019-08-29 09:02:22 UTC"
# },
slots = output.q.get_values('slot')
if not invert:
# check if 'slots' has all elements in 'expected'
if all(i in slots for i in expected_slots):
return True
timeout.sleep()
else:
# check that none of the expected slots are present
if not any(str(slot) in expected_slots for slot in slots):
return True
timeout.sleep()
return False
def verify_chassis_slot_state(device,
expected_slots_states_pairs,
max_time=60,
check_interval=10,):
""" Verify slot's state in 'show chassis routing-engine'
Args:
device (`obj`): Device object
expected_slots_states_pairs (`dict`): Expected states with given slots. E.g.,{'slot1':'state1', 'slot2':'state2'}
max_time (`int`): Max time, default: 60 seconds
check_interval (`int`): Check interval, default: 10 seconds
Returns:
result (`bool`): Verified result
Raises:
N/A
"""
timeout = Timeout(max_time, check_interval)
while timeout.iterate():
try:
out = device.parse('show chassis routing-engine')
except SchemaEmptyParserError:
return None
# Sample output
# {
# "route-engine-information": {
# "route-engine": [{
# ...
# "model": "RE-VMX",
# "mastership-state": "Master", <------------------
# "slot": "0", <-----------------
# "start-time": {
# "#text": "2019-08-29 09:02:22 UTC"
# },
rout = Dq(out).contains('slot|mastership-state', regex=True).reconstruct()
# rout example:
# {'route-engine-information': {'route-engine':
# [{'mastership-state': 'Master',
# 'slot': '0'},
# {'mastership-state': 'Backup',
# 'slot': '1'}]}}
route_engines = Dq(rout).get_values('route-engine')
# 'route_engines' example:
# [{'mastership-state': 'Master', 'slot': '0'},
# {'mastership-state': 'Backup', 'slot': '1'}]
# 'expected_slots_states_pairs' example:
# {'0':'master', '1':'backup'}
for i in route_engines:
if i['slot'] in expected_slots_states_pairs and \
i['mastership-state'].lower() == expected_slots_states_pairs[i['slot']].lower():
return True
timeout.sleep()
return False
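The pair matching inside `verify_chassis_slot_state` can be reduced to a plain-Python sketch (the data mirrors the reconstructed `route-engine` example in the comments above; the function name is illustrative):

```python
def slots_match(route_engines, expected):
    """True if any parsed route-engine entry's slot appears in 'expected'
    with the same mastership state (case-insensitive)."""
    for entry in route_engines:
        slot, state = entry['slot'], entry['mastership-state']
        if slot in expected and state.lower() == expected[slot].lower():
            return True
    return False

engines = [{'mastership-state': 'Master', 'slot': '0'},
           {'mastership-state': 'Backup', 'slot': '1'}]
print(slots_match(engines, {'0': 'master', '1': 'backup'}))  # → True
print(slots_match(engines, {'0': 'backup'}))                 # → False
```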
def verify_chassis_fan_tray_present(device,
fan_tray_list,
invert=False,
max_time=60,
check_interval=10,):
""" Verify fan_tray_list is present in 'show chassis hardware'
Args:
device (`obj`): Device object
fan_tray_list (`list`): Given fan tray list
invert (`bool',optional): Check fan tray not present. Defaults to False
max_time (`int`): Max time, default: 60 seconds
check_interval (`int`): Check interval, default: 10 seconds
Returns:
result (`bool`): Verified result
Raises:
N/A
"""
timeout = Timeout(max_time, check_interval)
while timeout.iterate():
try:
output = device.parse('show chassis hardware')
except SchemaEmptyParserError:
return None
# Sample output
# {
# "chassis-inventory": {
# "chassis": {
# "@junos:style": "inventory",
# "chassis-module": [
# {
# "name": "Midplane" <--------------------
# },
# {
# "description": "RE-VMX",
# "name": "Routing Engine 0"
modules = output.q.get_values('chassis-module', None)
if modules:
names = [i['name'] for i in modules]
# check if all items in fan_tray_list appears in names
# >>> l1=[1,2]
# >>> l2=[1,2,3,4]
# >>> all(i in l2 for i in l1)
# True
# >>> all(i in l1 for i in l2)
# False
if invert:
# verify all items in fan_tray_list are not in names
if (set(names)-set(fan_tray_list)) == set(names):
return True
else:
if all(i in names for i in fan_tray_list):
return True
timeout.sleep()
continue
return False
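The presence/absence branches above reduce to set logic; a minimal standalone sketch (the inventory names are illustrative, taken from the sample output in the comments):

```python
def items_present(names, wanted, invert=False):
    """All 'wanted' items appear in 'names' (or none do, when invert=True)."""
    if invert:
        # absence: removing the wanted items must leave 'names' unchanged
        return set(names) - set(wanted) == set(names)
    # presence: every wanted item must be in the inventory
    return all(i in names for i in wanted)

names = ["Midplane", "Routing Engine 0", "Fan Tray 0", "Fan Tray 1"]
print(items_present(names, ["Fan Tray 0", "Fan Tray 1"]))  # → True
print(items_present(names, ["Fan Tray 0"], invert=True))   # → False
print(items_present(names, ["Fan Tray 9"], invert=True))   # → True
```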
def verify_chassis_environment_present(device,
fan_tray_list,
expected_status,
max_time=60,
check_interval=10,):
""" Verify all item in fan_tray_list have expected_status in 'show chassis environment'
Args:
device (`obj`): Device object
fan_tray_list (`list`): Given fan tray list
expected_status (`str`): Expected status
max_time (`int`): Max time, default: 60 seconds
check_interval (`int`): Check interval, default: 10 seconds
Returns:
result (`bool`): Verified result
Raises:
N/A
"""
timeout = Timeout(max_time, check_interval)
while timeout.iterate():
try:
output = device.parse('show chassis environment')
except SchemaEmptyParserError:
timeout.sleep()
continue
# Sample output:
# {'environment-information': {'environment-item': [
# {'class': 'Fans',
# 'comment': '2760 RPM',
# 'name': 'Fan Tray 0 Fan 1', <---------------
# 'status': 'OK'}, <--------------------------
# {'class': 'Fans',
# 'comment': '2520 RPM',
# 'name': 'Fan Tray 0 Fan 2',
# 'status': 'OK'},]}}
environment_items_list = output.q.get_values('environment-item', None)
# Create a {name:status} dictionary
name_status_dict = {}
if environment_items_list:
for item in environment_items_list:
# >>> name
# 'Fan Tray 0 Fan 2'
# >>> m = re.search(r'(.+?) +Fan +\d+', name).group(1)
# >>> m
# 'Fan Tray 0'
res = re.search(r'(.+?) +Fan +\d+', item['name'])
if not res:
continue
name = res.group(1)
name_status_dict[name] = item['status']
# fan_tray_list = ['Fan Tray 0', 'Fan Tray 1', 'Fan Tray 2', 'Fan Tray 3']
#
# name_status_dict = {'Fan Tray 0': 'OK',
# 'Fan Tray 1': 'OK',
# 'Fan Tray 2': 'Not OK',
# 'Fan Tray 3': 'OK'
# }
# Group all names that have expected_status into a list
names_list = []
for name, status in name_status_dict.items():
if status == expected_status:
names_list.append(name)
# names_list = ['Fan Tray 0', 'Fan Tray 1', 'Fan Tray 3']
# Compare if all fan tray item in fan_tray_list appear in the list
if all(i in names_list for i in fan_tray_list):
return True
timeout.sleep()
return False
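The regex grouping above collapses per-fan environment items into a per-tray status before the subset comparison; a standalone sketch with hypothetical parser output (function name is illustrative):

```python
import re

def trays_with_status(items, expected_status):
    """Map 'Fan Tray N Fan M' item names to their tray, keeping trays whose
    last-seen fan reports expected_status (later fans overwrite earlier ones,
    matching the function above)."""
    name_status = {}
    for item in items:
        m = re.search(r'(.+?) +Fan +\d+', item['name'])
        if m:
            name_status[m.group(1)] = item['status']
    return [name for name, status in name_status.items() if status == expected_status]

items = [
    {'name': 'Fan Tray 0 Fan 1', 'status': 'OK'},
    {'name': 'Fan Tray 0 Fan 2', 'status': 'OK'},
    {'name': 'Fan Tray 1 Fan 1', 'status': 'Check'},
]
print(trays_with_status(items, 'OK'))  # → ['Fan Tray 0']
```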
def verify_chassis_no_alarms(device,
max_time=60,
check_interval=10):
""" Verify there are no alarms via 'show chassis alarms'
Args:
device (`obj`): Device object
max_time (`int`): Max time, default: 60 seconds
check_interval (`int`): Check interval, default: 10 seconds
Returns:
result (`bool`): Verified result
Raises:
N/A
"""
timeout = Timeout(max_time, check_interval)
while timeout.iterate():
try:
output = device.parse('show chassis alarms')
except SchemaEmptyParserError:
timeout.sleep()
continue
# {
# "alarm-information": {
# "alarm-summary": {
# "no-active-alarms": True
# }
# },
# }
if output.q.get_values("no-active-alarms", 0):
return True
timeout.sleep()
return False
def verify_chassis_routing_engine(device,
expected_item,
invert=False,
max_time=60,
check_interval=10,):
""" Verify fan_tray_list is present in 'show chassis hardware'
Args:
device (`obj`): Device object
expected_item (`str`): Hardware inventory item expected
invert ('bool'): Invert function
max_time (`int`): Max time, default: 60 seconds
check_interval (`int`): Check interval, default: 10 seconds
Returns:
result (`bool`): Verified result
Raises:
N/A
"""
timeout = Timeout(max_time, check_interval)
while timeout.iterate():
try:
output = device.parse('show chassis hardware')
except SchemaEmptyParserError:
return None
# Sample output
# {
# "chassis-inventory": {
# "chassis": {
# "@junos:style": "inventory",
# "chassis-module": [
# {
# "name": "Midplane" <--------------------
# },
# {
# "description": "RE-VMX",
# "name": "Routing Engine 0"
modules = output.q.get_values('chassis-module', None)
if modules:
names = [i['name'] for i in modules]
if not invert:
for name in names:
if name == expected_item:
return True
else:
# expected_item must be absent from the inventory
if expected_item not in names:
return True
timeout.sleep()
return False
def verify_chassis_hardware_item_present(device,
expected_item,
invert=False,
max_time=60,
check_interval=10):
""" Verify fan_tray_list is present in 'show chassis hardware'
Args:
device (`obj`): Device object
expected_item (`list`): Item name
invert ('bool'): Invert function
max_time (`int`): Max time, default: 60 seconds
check_interval (`int`): Check interval, default: 10 seconds
Returns:
result (`bool`): Verified result
Raises:
N/A
"""
timeout = Timeout(max_time, check_interval)
while timeout.iterate():
try:
output = device.parse('show chassis hardware')
except SchemaEmptyParserError:
timeout.sleep()
continue
# Sample output
# {
# "chassis-inventory": {
# "chassis": {
# "@junos:style": "inventory",
# "chassis-module": [
# {
# "name": "Midplane" <--------------------
# },
# {
# "description": "RE-VMX",
# "name": "Routing Engine 0"
item_list = output.q.contains(expected_item).get_values('name')
if not invert:
if expected_item in item_list:
return True
else:
if expected_item not in item_list:
return True
timeout.sleep()
return False
def verify_chassis_environment_component_present(device,
name,
component_list,
expected_status,
max_time=60,
check_interval=10,):
""" Verify all item in fan_tray_list have expected_status in 'show chassis environment'
Args:
device (`obj`): Device object
fan_tray_list (`list`): Given fan tray list
expected_status (`str`): Expected status
max_time (`int`): Max time, default: 60 seconds
check_interval (`int`): Check interval, default: 10 seconds
Returns:
result (`bool`): Verified result
Raises:
N/A
"""
timeout = Timeout(max_time, check_interval)
while timeout.iterate():
result = True
try:
output = device.parse('show chassis environment {name}'.format(
name=name
))
except SchemaEmptyParserError:
result = False
timeout.sleep()
continue
# fan_tray_list:
# - Fan Tray 0
# - Fan Tray 1
# - Fan Tray 2
# - Fan Tray 3
# Sample output:
# {'environment-information': {'environment-component-item': [
# {'class': 'Fans',
# 'comment': '2760 RPM',
# 'name': 'Fan Tray 0 Fan 1', <---------------
# 'status': 'OK'}, <--------------------------
# {'class': 'Fans',
# 'comment': '2520 RPM',
# 'name': 'Fan Tray 0 Fan 2',
# 'status': 'OK'},]}}
environment_items_list = output.q.get_values('environment-component-item', None)
if environment_items_list:
for item in environment_items_list:
name_ = item.get('name', None)
state_ = item.get('state', None)
if name_ in component_list and state_ != expected_status:
result = False
break
if component_list[0] not in output.q.get_values('name', None):
timeout.sleep()
continue
if result:
return True
timeout.sleep()
return False
def verify_chassis_power_item_present(device,
component_list,
expected_status,
max_time=60,
check_interval=10):
""" Verify all item in fan_tray_list have expected_status in 'show chassis environment'
Args:
device (`obj`): Device object
fan_tray_list (`list`): Given fan tray list
expected_status (`str`): Expected status
max_time (`int`): Max time, default: 60 seconds
check_interval (`int`): Check interval, default: 10 seconds
Returns:
result (`bool`): Verified result
Raises:
N/A
"""
timeout = Timeout(max_time, check_interval)
while timeout.iterate():
result = True
try:
output = device.parse('show chassis power')
except SchemaEmptyParserError:
result = False
timeout.sleep()
continue
power_usage_item_list = output.q.contains('{}|name'.format(
expected_status,
), regex=True).get_values('power-usage-item')
if power_usage_item_list:
for item in power_usage_item_list:
name_ = item.get('name', None)
state_ = item.get('state', None)
if name_ in component_list and state_ != expected_status:
result = False
break
if result:
return True
timeout.sleep()
return False
def verify_chassis_environment_status(device,
expected_item,
expected_status,
max_time=60,
check_interval=10,):
""" Verify specific item in fan_tray_list has expected_status in 'show chassis environment'
Args:
device (`obj`): Device object
expected_item (`str`): Hardware inventory item expected
expected_status (`str`): Expected status
max_time (`int`): Max time, default: 60 seconds
check_interval (`int`): Check interval, default: 10 seconds
Returns:
result (`bool`): Verified result
Raises:
N/A
"""
timeout = Timeout(max_time, check_interval)
while timeout.iterate():
try:
output = device.parse('show chassis environment')
except SchemaEmptyParserError:
return None
# Sample output:
# {'environment-information': {'environment-item': [
# {'class': 'Fans',
# 'comment': '2760 RPM',
# 'name': 'Fan Tray 0 Fan 1', <---------------
# 'status': 'OK'}, <--------------------------
# {'class': 'Fans',
# 'comment': '2520 RPM',
# 'name': 'Fan Tray 0 Fan 2',
# 'status': 'OK'},]}}
environment_items_list = output.q.get_values('environment-item', None)
if environment_items_list:
for item in environment_items_list:
if item['name'] == expected_item and item['status'] == expected_status:
return True
timeout.sleep()
    return False
def verify_chassis_alarm_output(device,
message_topic,
invert=False,
max_time=60,
check_interval=10):
""" Verify message_topic is mentioned via 'show chassis alarms'
Args:
device (`obj`): Device object
message_topic ('str'): Message information that should be in output
invert ('bool'): Invert function
max_time (`int`): Max time, default: 60 seconds
check_interval (`int`): Check interval, default: 10 seconds
Returns:
result (`bool`): Verified result
Raises:
N/A
"""
timeout = Timeout(max_time, check_interval)
while timeout.iterate():
try:
output = device.parse('show chassis alarms')
except SchemaEmptyParserError:
if invert:
return True
timeout.sleep()
continue
#"alarm-detail": {
# "alarm-class": "Major",
# "alarm-description": "PSM 15 Not OK",
# "alarm-short-description": "PSM 15 Not OK",
# "alarm-time": {
# "#text": "2020-07-16 13:38:21 EST",
# },
# "alarm-type": "Chassis"
# },
# "alarm-summary": {
# "active-alarm-count": "1"
# }
# ['PSM 15 Not OK']
alarm_description = output.q.get_values('alarm-description', None)
no_output = output.q.get_values('no-active-alarms', 0)
if not invert:
if no_output:
timeout.sleep()
continue
for single_description in alarm_description:
if message_topic in single_description:
return True
timeout.sleep()
continue
else:
match_flag = False
if no_output:
return True
for single_description in alarm_description:
if message_topic in single_description:
                    match_flag = True
if match_flag:
timeout.sleep()
continue
else:
return True
return False
def verify_chassis_usb_flag_exists(device,
flag,
usb,
invert=False,
max_time=60,
check_interval=10):
""" Verify there is/isn't usb flag in given usb in the routing engine via show chassis hardware detail
Args:
device (`obj`): Device object
        flag (`str`): USB flag description in output
        usb (`str`): USB name in output
invert (`bool`, optional): Used to indicate a reverse verification. default: False
max_time (`int`): Max time, default: 60 seconds
check_interval (`int`): Check interval, default: 10 seconds
Returns:
result (`bool`): Verified result
Raises:
N/A
"""
timeout = Timeout(max_time, check_interval)
while timeout.iterate():
try:
out = device.parse('show chassis hardware detail')
        except SchemaEmptyParserError:
            if invert:
                return True
            timeout.sleep()
            continue
# usb: USB
# usb_insert_flag: umass
#
# {
# "chassis-inventory": {
# "chassis": {
# "chassis-module": [
# {
# 'chassis-re-usb-module': [{'description': 'umass0', <-----------usb_insert_flag
# 'name': 'usb0 ' <------------- usb
# '(addr '
# '1)',
# 'product': 'EHCI '
# 'root '
# 'hub',
# 'product-number': '0',
# 'vendor': 'Intel'}]}
chassis_module_list = out.q.get_values("chassis-module", None)
for module in chassis_module_list:
if 'chassis-re-usb-module' in module:
usb_module_list = module['chassis-re-usb-module']
for usb_module in usb_module_list:
                    # check if the current usb_module name contains the given usb
# usb_name: 'usb0 (addr 1)'
# usb: 'USB'
usb_name = usb_module['name']
if usb.lower() in usb_name:
# check if current usb_module has the usb_insert_flag
usb_description = usb_module['description']
if flag in usb_description:
if invert:
return False
else:
return True
timeout.sleep()
continue
timeout.sleep()
if invert:
return True
else:
return False
def verify_chassis_alarms_no_error(device,
target_fpc,
max_time=60,
check_interval=10):
""" Verify there are no error about target FPC via 'show chassis alarms'
Args:
device (`obj`): Device object
target_fpc (`str`): Target fpc.
max_time (`int`): Max time, default: 60 seconds
check_interval (`int`): Check interval, default: 10 seconds
Returns:
result (`bool`): Verified result
Raises:
N/A
"""
timeout = Timeout(max_time, check_interval)
while timeout.iterate():
try:
output = device.parse('show chassis alarms')
except SchemaEmptyParserError:
return True
errored_pattern = re.compile(r'FPC (?P<slot>\d+) offline due to unreachable destinations')
# Sample output
# {
# "alarm-information": {
# "alarm-detail": [{
# "alarm-class": "Major",
# "alarm-description": "FPC 15 Not OK", <--------------------------
# "alarm-short-description": "FPC 15 Not OK",
# "alarm-time": {
# "#text": "2020-07-16 13:38:21 EST",
# },
# "alarm-type": "Chassis"
# }],
# "alarm-summary": {
# "active-alarm-count": "1"
# }
# },
# }
        description = output.q.get_values('alarm-description', None)
        for d in description:
            match = errored_pattern.match(d)
            if match and match.group('slot') == str(target_fpc):
                # the target FPC still reports the error; retry until timeout
                break
        else:
            return True
        timeout.sleep()
    return False
def verify_chassis_fpc_pic_status(device,
expected_state,
pic,
fpc,
max_time=60,
check_interval=10):
""" Verifies slot state via
- show chassis fpc pic-status
Args:
device (obj): Device object
expected_state (str): Expected state of that slot. For example: "Online"
pic (int): PIC number
fpc (int): FPC number
max_time (int, optional): Maximum timeout time. Defaults to 60.
check_interval (int, optional): Check interval. Defaults to 10.
Returns:
True/False
"""
timeout = Timeout(max_time, check_interval)
while timeout.iterate():
out = None
try:
out = device.parse('show chassis fpc pic-status')
except SchemaEmptyParserError:
timeout.sleep()
continue
# Sample output
# {
# "fpc-information": {
# "fpc": [
# {
# "description": "DPCE 2x 10GE R",
# "pic": [
# {
# "pic-slot": "0",
# "pic-state": "Online",
# "pic-type": "1x 10GE(LAN/WAN)"
# },
# {
# "pic-slot": "1", <----------------------- pic
# "pic-state": "Online", <----------------- expected_state
# "pic-type": "1x 10GE(LAN/WAN)"
# }
# ],
# "slot": "0", <----------------------------------- fpc
# "state": "Online"
# },
fpc_list = out['fpc-information']['fpc']
for fpc_item in fpc_list:
if fpc_item['slot'] == str(fpc):
pic_list = fpc_item['pic']
for pic_item in pic_list:
if pic_item['pic-slot'] == str(pic) and pic_item['pic-state'] == expected_state:
return True
timeout.sleep()
return False
def verify_chassis_fpc_pic_not_exists(device,
pic,
fpc,
max_time=60,
check_interval=10):
""" Verifies PIC slot does not exist via
- show chassis fpc pic-status
Args:
device (obj): Device object
pic (int): PIC number
fpc (int): FPC number
max_time (int, optional): Maximum timeout time. Defaults to 60.
check_interval (int, optional): Check interval. Defaults to 10.
Returns:
True/False
"""
timeout = Timeout(max_time, check_interval)
while timeout.iterate():
out = None
try:
out = device.parse('show chassis fpc pic-status')
except SchemaEmptyParserError:
timeout.sleep()
continue
# Sample output
# {
# "fpc-information": {
# "fpc": [
# {
# "description": "DPCE 2x 10GE R",
# "pic": [
# {
# "pic-slot": "0",
# "pic-state": "Online",
# "pic-type": "1x 10GE(LAN/WAN)"
# },
# {
# "pic-slot": "1", <----------------------- pic
# "pic-state": "Online", <----------------- expected_state
# "pic-type": "1x 10GE(LAN/WAN)"
# }
# ],
# "slot": "0", <----------------------------------- fpc
# "state": "Online"
# },
fpc_list = out['fpc-information']['fpc']
result = True
for fpc_item in fpc_list:
if fpc_item['slot'] == str(fpc):
pic_list = fpc_item['pic']
for pic_item in pic_list:
if pic_item['pic-slot'] == str(pic):
result = False
if result:
return result
timeout.sleep()
return result
def verify_chassis_pic_exists_under_mic(device,
mic,
fpc,
invert=False,
max_time=60,
check_interval=10):
""" Verifies PIC exists under MIC $mic of FPC $fpc via
- show chassis hardware
Args:
device (obj): Device object
mic (int): MIC number
fpc (int): FPC number
invert (bool, optional): True means verifying PIC does not exist
max_time (int, optional): Maximum timeout time. Defaults to 60 seconds.
check_interval (int, optional): Check interval. Defaults to 10 seconds.
Returns:
True/False
"""
timeout = Timeout(max_time, check_interval)
while timeout.iterate():
out = None
try:
out = device.parse('show chassis hardware')
except SchemaEmptyParserError:
timeout.sleep()
continue
# Sample output
# {
# "chassis-inventory": {
# "chassis": {
# "@junos:style": "inventory",
# "chassis-module": [
# {
# "chassis-sub-module": [
# {
# "chassis-sub-sub-module": [{
# "description": "Virtual",
# "name": "PIC 0", <------------------------------------- PIC exists
# "part-number": "BUILTIN",
# "serial-number": "BUILTIN",
# },],
# "description": "Virtual",
# "name": "MIC 0", <-------------------------------------------- mic
# },
# ],
# "description": "Virtual FPC",
# "name": "FPC 0", <---------------------------------------------------- fpc
# },
# ],
# "description": "VMX",
# "name": "Chassis",
# "serial-number": "VM5D4C6B3599",
# }
# }
# }
chassis_module_list = out.q.get_values('chassis-module', None)
for chassis_module in chassis_module_list:
# "name": "FPC 0"
if chassis_module['name'] == 'FPC '+str(fpc):
chassis_sub_module_list = Dq(chassis_module).get_values('chassis-sub-module', None)
for sub in chassis_sub_module_list:
# "name": "MIC 0"
if sub['name'] == 'MIC '+str(mic):
# "chassis-sub-sub-module": [{
# "description": "Virtual",
# "name": "PIC 0",
# "part-number": "BUILTIN",
# "serial-number": "BUILTIN",
# },],
if 'chassis-sub-sub-module' in sub:
# >>> d1={'name':'aa'}
# >>> d2={'name':'bb'}
# >>> l=[d1,d2]
# >>> names=[i['name'] for i in l]
# >>> names
# ['aa', 'bb']
# >>>
names_list = [i['name'] for i in sub['chassis-sub-sub-module']]
# >>> l=['PIC 0', 'ABC']
# >>> ['PIC' in i for i in l]
# [True, False]
# >>>
                            if invert:
                                if not any('PIC' in name for name in names_list):
                                    return True
                            else:
                                if any('PIC' in name for name in names_list):
                                    return True
else:
if invert:
return True
timeout.sleep()
return False
def verify_chassis_mic_exists_under_fpc(device,
mic,
fpc,
invert=False,
max_time=60,
check_interval=10):
""" Verifies MIC $mic exists under FPC $fpc via
- show chassis hardware
Args:
device (obj): Device object
mic (int): MIC number
fpc (int): FPC number
        invert (bool, optional): True means verifying MIC does not exist
max_time (int, optional): Maximum timeout time. Defaults to 60 seconds.
check_interval (int, optional): Check interval. Defaults to 10 seconds.
Returns:
True/False
"""
timeout = Timeout(max_time, check_interval)
while timeout.iterate():
out = None
try:
out = device.parse('show chassis hardware')
except SchemaEmptyParserError:
timeout.sleep()
continue
# Sample output
# {
# "chassis-inventory": {
# "chassis": {
# "@junos:style": "inventory",
# "chassis-module": [
# {
# "chassis-sub-module": [
# {
# "description": "Virtual",
# "name": "MIC 0", <-------------------------------------------- mic
# },
# ],
# "description": "Virtual FPC",
# "name": "FPC 0", <---------------------------------------------------- fpc
# },
# ],
# "description": "VMX",
# "name": "Chassis",
# "serial-number": "VM5D4C6B3599",
# }
# }
# }
chassis_module_list = out.q.get_values('chassis-module', None)
for chassis_module in chassis_module_list:
# "name": "FPC 0"
if chassis_module['name'] == 'FPC '+str(fpc):
chassis_sub_module_list = Dq(chassis_module).get_values('chassis-sub-module', None)
                if invert and not chassis_sub_module_list:
return True
# find all names in chassis-sub-module
names_list = [sub['name'] for sub in chassis_sub_module_list]
# "name": "MIC 0"
current_name = 'MIC '+str(mic)
if invert:
if current_name not in names_list:
return True
else:
if current_name in names_list:
return True
timeout.sleep()
return False
def verify_chassis_no_error_fpc_mic(device,
mic,
fpc,
max_time=60,
check_interval=10):
""" Verifies no errored FPC $fpc MIC $mic
- show chassis alarms
Args:
device (obj): Device object
mic (int): MIC number
fpc (int): FPC number
max_time (int, optional): Maximum timeout time. Defaults to 60 seconds.
check_interval (int, optional): Check interval. Defaults to 10 seconds.
Returns:
True/False
"""
timeout = Timeout(max_time, check_interval)
while timeout.iterate():
out = None
try:
out = device.parse('show chassis alarms')
except SchemaEmptyParserError:
return True
# Sample output
# {
# "alarm-information": {
# "alarm-detail": [{
# "alarm-class": "Major",
# "alarm-description": "PSM 15 Not OK", <------------------
# "alarm-short-description": "PSM 15 Not OK",
# "alarm-time": {
# "#text": "2020-07-16 13:38:21 EST",
# },
# "alarm-type": "Chassis"
# }],
# "alarm-summary": {
# "active-alarm-count": "1"
# }
# },
# }
errored_pattern = re.compile(r'FPC +(?P<fpc>\d+) +PIC +(?P<pic>\d+) +Invalid +port +profile +configuration')
description = out.q.get_values("alarm-description", None)
for d in description:
if errored_pattern.match(d):
return False
timeout.sleep()
return True
def verify_chassis_environment_multiple_status(device,
expected_item,
expected_status,
max_time=60,
check_interval=10):
""" Verify specific items status in 'show chassis environment'
Args:
device (`obj`): Device object
expected_item (`str`): Hardware inventory item expected
expected_status (`str`): Expected status
max_time (`int`): Max time, default: 60 seconds
check_interval (`int`): Check interval, default: 10 seconds
Returns:
result (`bool`): Verified result
Raises:
N/A
"""
timeout = Timeout(max_time, check_interval)
while timeout.iterate():
try:
output = device.parse('show chassis environment')
except SchemaEmptyParserError:
timeout.sleep()
continue
# Sample output:
# {'environment-information': {'environment-item': [
# {'class': 'Fans',
# 'comment': '2760 RPM',
# 'name': 'Fan Tray 0 Fan 1', <---------------
# 'status': 'OK'}, <--------------------------
# {'class': 'Fans',
# 'comment': '2520 RPM',
# 'name': 'Fan Tray 0 Fan 2',
# 'status': 'OK'},]}}
environment_items_list = output.q.get_values('environment-item', None)
if environment_items_list:
for item in environment_items_list:
if re.search(r"{} +([\S\s]+)".format(expected_item), item['name']) and item['status'] == expected_status:
return True
timeout.sleep()
return False
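# The re.search call above interpolates expected_item into the pattern; a
# standalone check with sample names (note that expected_item is interpolated
# unescaped, so names containing regex metacharacters would need re.escape):

```python
import re

expected_item = 'Fan Tray 0'
names = ['Fan Tray 0 Fan 1', 'Fan Tray 0 Fan 2', 'Power Supply 0']
pattern = r"{} +([\S\s]+)".format(expected_item)
matched = [n for n in names if re.search(pattern, n)]
print(matched)  # -> ['Fan Tray 0 Fan 1', 'Fan Tray 0 Fan 2']
```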
def verify_chassis_fabric_summary_status(device,
expected_item,
expected_status,
max_time=60,
check_interval=10):
""" Verify specific items status in 'show chassis fabric summary'
Args:
device (`obj`): Device object
        expected_item (`str` or `int`): Chassis fabric plane slot expected
expected_status (`str`): Expected status
max_time (`int`): Max time, default: 60 seconds
check_interval (`int`): Check interval, default: 10 seconds
Returns:
result (`bool`): Verified result
Raises:
N/A
"""
timeout = Timeout(max_time, check_interval)
while timeout.iterate():
try:
output = device.parse('show chassis fabric summary')
except SchemaEmptyParserError:
timeout.sleep()
continue
# Sample output:
# "fm-state-item": [
# {
# "plane-slot": "0",
# "state": "Online",
# "up-time": "34 days, 18 hours, 43 minutes, 48 seconds"
# },
# {
# "plane-slot": "1",
# "state": "Online",
# "up-time": "34 days, 18 hours, 43 minutes, 47 seconds"
# }
fm_state_item = output.q.get_values('fm-state-item', None)
if fm_state_item:
for item in fm_state_item:
if str(item['plane-slot']) == str(expected_item) and item['state'].lower() == expected_status.lower():
return True
timeout.sleep()
return False
def verify_chassis_fabric_plane_status(device,
expected_item,
expected_status,
max_time=60,
check_interval=10):
""" Verify specific items status in 'show chassis fabric plane'
Args:
device (`obj`): Device object
        expected_item (`str`): Chassis fabric plane slot expected
expected_status (`str`): Expected status
max_time (`int`): Max time, default: 60 seconds
check_interval (`int`): Check interval, default: 10 seconds
Returns:
result (`bool`): Verified result
Raises:
N/A
"""
timeout = Timeout(max_time, check_interval)
while timeout.iterate():
try:
output = device.parse('show chassis fabric plane')
except SchemaEmptyParserError:
timeout.sleep()
continue
# Sample output:
# "fmp-plane": [
# {
# "fru-name": [
# "FPC",
# "FPC"
# ],
# "fru-slot": [
# "0",
# "1"
# ],
# "pfe-link-status": [
# "Links ok",
# "Links ok",
# "Links ok",
# "Links ok"
# ],
# "pfe-slot": [
# "0",
# "1",
# "0",
# "1"
# ],
# "slot": "0",
# "state": "ACTIVE"
# }
fm_state_item = output.q.get_values('fmp-plane', None)
if fm_state_item:
for item in fm_state_item:
if item and item['slot'] == expected_item and item['state'].lower() == expected_status.lower():
return True
timeout.sleep()
return False
def verify_chassis_environment_item(device,
expected_item,
invert=False,
max_time=60,
check_interval=10):
""" Verify specific item in show chassis environment exists or doesn't exist
Args:
device (`obj`): Device object
expected_item (`str`): Hardware inventory item expected
invert ('bool', 'optional'): Inverts to check if it doesn't exist
max_time (`int`): Max time, default: 60 seconds
check_interval (`int`): Check interval, default: 10 seconds
Returns:
result (`bool`): Verified result
Raises:
N/A
"""
timeout = Timeout(max_time, check_interval)
while timeout.iterate():
try:
output = device.parse('show chassis environment')
except SchemaEmptyParserError:
if invert:
return True
timeout.sleep()
continue
# Sample output:
# {'environment-information': {'environment-item': [
# {'class': 'Fans',
# 'comment': '2760 RPM',
# 'name': 'Fan Tray 0 Fan 1', <---------------
# 'status': 'OK'}, <--------------------------
# {'class': 'Fans',
# 'comment': '2520 RPM',
# 'name': 'Fan Tray 0 Fan 2',
# 'status': 'OK'},]}}
environment_items_list = output.q.get_values('name', None)
        if environment_items_list:
            if not invert:
                if expected_item in environment_items_list:
                    return True
            else:
                if expected_item not in environment_items_list:
                    return True
timeout.sleep()
return False
def verify_chassis_fabric_plane_exists(device,
expected_item,
invert=False,
max_time=60,
check_interval=10):
""" Verify specific items status in 'show chassis fabric summary'
Args:
device (`obj`): Device object
expected_item (`list`): Chassis fabric items expected
invert ('bool', 'optional'): Inverts to check if it doesn't exist
max_time (`int`): Max time, default: 60 seconds
check_interval (`int`): Check interval, default: 10 seconds
Returns:
result (`bool`): Verified result
Raises:
N/A
"""
timeout = Timeout(max_time, check_interval)
while timeout.iterate():
try:
output = device.parse('show chassis fabric plane')
except SchemaEmptyParserError:
timeout.sleep()
continue
# Sample output:
# "fmp-plane": [
# {
# "fru-name": [
# "FPC",
# "FPC"
# ],
# "fru-slot": [
# "0",
# "1"
# ],
# "pfe-link-status": [
# "Links ok",
# "Links ok",
# "Links ok",
# "Links ok"
# ],
# "pfe-slot": [
# "0",
# "1",
# "0",
# "1"
# ],
# "slot": "0",
# "state": "ACTIVE"
# }
fm_state_item = output.q.get_values('slot', None)
if not invert:
if expected_item in fm_state_item:
return True
else:
if expected_item not in fm_state_item:
return True
timeout.sleep()
    return False
def verify_chassis_fpc_slot_port(device, fpc_slot, pic_slot, expected_pic_port,
invert=False, max_time=60, check_interval=10):
"""Verifies chassis fpc slot exists
Args:
device (obj): Device object
fpc_slot (str/int): FPC slot number
pic_slot (str/int): PIC slot number
expected_pic_port (str): Expected PIC port
invert (bool): Inverts function
max_time (int, optional): Maximum timeout time. Defaults to 60.
check_interval (int, optional): Check interval. Defaults to 10.
"""
op = operator.truth
if invert:
op = lambda val: operator.not_(operator.truth(val))
timeout = Timeout(max_time, check_interval)
while timeout.iterate():
try:
out = device.parse('show chassis pic fpc-slot {fpc_slot} pic-slot {pic_slot}'.format(
fpc_slot=fpc_slot,
pic_slot=pic_slot,
))
except SchemaEmptyParserError:
timeout.sleep()
continue
"""
schema = {
"fpc-information": {
"fpc": {
"pic-detail": {
"port-information": {
"port": [
{
"port-number": str,
"""
        if op(str(expected_pic_port) in out.q.get_values('port-number')):
            return True
        timeout.sleep()
    return False
import pandas as pd
import pyomo.environ as pyo
import numpy as np
class HybridDeterministic:
"""
Pyomo implementation of LP hybrid, with perfect DAM foresight and certain solar resource
"""
def __init__(self, model, E_price, solar_cf,ASM_price):
self.model = model
self.model.IDX = pyo.RangeSet(0, 8759)
self.model.E_price = E_price
self.model.ASM_price = ASM_price
self.model.solar_cf = solar_cf
def build(self, model, storage_size, eff, s_max, storage_power, solar_plant_size, grid_limit):
# ------------------ Parameters ---------------------------#
#Sizing based on: https://www.nrel.gov/docs/fy19osti/71714.pdf
model.storage_size = storage_size # in [MWh]
model.eff = eff # [0,1] usually 75% to 90%
model.s_max = s_max # AC interconnection limit in [MW]
model.storage_power = storage_power # in [MW]
model.solar_plant_size = solar_plant_size # in [MW]
model.solar_gen = model.solar_plant_size * model.solar_cf # in [MW]
# --------------- Optimization variables ------------------#
model.negcharge = pyo.Var(model.IDX, bounds=(0.00, model.storage_power))
model.poscharge = pyo.Var(model.IDX, bounds=(0.00, model.storage_power))
        model.energy_gen = pyo.Var(model.IDX, bounds=(-np.inf, np.inf))  # grid limits enforced via import/export constraints below
model.soc = pyo.Var(model.IDX, bounds=(0.00, model.storage_size)) # SOC bounds include size of storage system
model.reg = pyo.Var(model.IDX, bounds=(0.00, model.storage_power*2)) # bounds for regulation product provided
        model.E0 = 0  # storage starts empty
        model.grid_limit = grid_limit  # 0 = no grid charging, 1 = grid charging allowed
        model.reg_penalty = 0.2  # fraction of regulation capacity lost as energy throughput
# -------------------- Objective fct. ---------------------#
# Objective function
def Objective_rule(m):
expr = sum([model.E_price[t] * model.energy_gen[t] for t in model.IDX]+ [model.ASM_price[t] * model.reg[t] for t in model.IDX])
return expr
model.Max_Revenue = pyo.Objective(rule=Objective_rule, sense=pyo.maximize)
# ----------------- Constraints ----------------------------#
        # Import limit (grid charging allowed only when grid_limit == 1)
def grid_import_limit(model, t):
return model.energy_gen[t] >= -model.storage_power*model.grid_limit
model.grid_import_limit = pyo.Constraint(model.IDX, rule=grid_import_limit)
        # Export limit (AC interconnection)
def grid_export_limit(model, t):
return model.energy_gen[t] <= model.s_max
model.grid_export_limit = pyo.Constraint(model.IDX, rule=grid_export_limit)
# Grid balance equation
def grid_balance_rule(model, t):
return model.energy_gen[t] == model.negcharge[t] - model.poscharge[t] + model.solar_gen[t]
model.grid_balance_const = pyo.Constraint(model.IDX, rule=grid_balance_rule)
# Battery SOC equation
def storage_soc_rule(model, t):
            if t == model.IDX.first():
                expr = model.soc[t] == model.E0 + model.eff * model.poscharge[t] - model.negcharge[t] / model.eff - model.reg[t] * model.reg_penalty
            else:
                expr = model.soc[t] == model.soc[t - 1] + model.eff * model.poscharge[t] - model.negcharge[t] / model.eff - model.reg[t] * model.reg_penalty
            return expr
model.storage_soc_const = pyo.Constraint(model.IDX, rule=storage_soc_rule)
# Simultaneous rule
def limit_simultaneous_rule(model, t):
return model.negcharge[t] + model.poscharge[t] <= model.storage_power
model.limit_simultaneous_const = pyo.Constraint(model.IDX, rule=limit_simultaneous_rule)
#regulation product
def regulation_product_rule(model, t):
return model.reg[t] <= model.storage_power + model.negcharge[t] - model.poscharge[t]
model.regulation_product = pyo.Constraint(model.IDX, rule=regulation_product_rule)
return model
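# The SOC recursion in storage_soc_rule can be sanity-checked outside pyomo; the
# sketch below replays one common convention (one-way efficiency in each
# direction, regulation treated as a throughput loss) with made-up numbers:

```python
# Hypothetical hourly series; eff and reg_penalty mirror the parameters above.
eff = 0.9
reg_penalty = 0.2

def simulate_soc(poscharge, negcharge, reg, e0=0.0):
    """Replay the SOC recursion hour by hour."""
    soc, prev = [], e0
    for pos, neg, r in zip(poscharge, negcharge, reg):
        prev = prev + eff * pos - neg / eff - r * reg_penalty
        soc.append(prev)
    return soc

soc = simulate_soc(poscharge=[10, 0, 0], negcharge=[0, 0, 5], reg=[0, 1, 0])
print(soc)
```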
class HybridCC_normal():
"""
Pyomo implementation of SOCP ??? Ask Scott"""
def __init__(self, model, E_price ,ASM_price, solar_mean_cf, solar_std_cf, inv_cdf):
self.model = model
self.model.IDX = pyo.RangeSet(0, 8759)
self.model.E_price = E_price
self.model.ASM_price = ASM_price
self.model.solar_mean_cf = solar_mean_cf
self.model.solar_std_cf = solar_std_cf
self.model.invd = inv_cdf
def build(self, model, storage_size, eff, s_max, storage_power, solar_plant_size, grid_limit):
# ------------------ Parameters ---------------------------#
#Sizing based on: https://www.nrel.gov/docs/fy19osti/71714.pdf
model.storage_size = storage_size # in [MWh]
model.eff = eff # [0,1] usually 75% to 90%
model.s_max = s_max # AC interconnection limit in [MW]
model.storage_power = storage_power # in [MW]
model.solar_plant_size = solar_plant_size # in [MW]
model.solar_gen = model.solar_plant_size * model.solar_mean_cf # in [MW]
# --------------- Optimization variables ------------------#
model.negcharge = pyo.Var(model.IDX, bounds=(0.00, model.storage_power))
model.poscharge = pyo.Var(model.IDX, bounds=(0.00, model.storage_power))
model.soc = pyo.Var(model.IDX, bounds=(0.00, model.storage_size)) # SOC bounds include size of storage system
model.reg = pyo.Var(model.IDX, bounds=(0.00, model.storage_power*2)) # bounds for regulation product provided
        model.E0 = 0  # storage starts empty
        model.grid_limit = grid_limit  # 0 = no grid charging, 1 = grid charging allowed
        model.reg_penalty = 0.2  # fraction of regulation capacity lost as energy throughput
# -------------------- Objective fct. ---------------------#
# Objective function
def Objective_rule(m):
expr = sum([model.E_price[t] * (model.negcharge[t] - model.poscharge[t] + model.solar_gen[t]) for t in model.IDX] + [model.ASM_price[t] * model.reg[t] for t in model.IDX])
return expr
model.Max_Revenue = pyo.Objective(rule=Objective_rule, sense=pyo.maximize)
# ----------------- Constraints ----------------------------#
        # Grid import chance constraint - uncertain solar output
        def grid_balance_rule_import(model, t):
            return (-model.poscharge[t] + model.negcharge[t] + model.solar_gen[t] + model.storage_power * model.grid_limit) >= \
                model.invd * model.solar_plant_size * model.solar_std_cf[t]
        model.grid_balance_rule_import = pyo.Constraint(model.IDX, rule=grid_balance_rule_import)
        # Grid export chance constraint - uncertain solar output
        def grid_balance_rule_export(model, t):
            return (model.poscharge[t] - model.negcharge[t] - model.solar_gen[t] + model.s_max) >= \
                model.invd * model.solar_plant_size * model.solar_std_cf[t]
        model.grid_balance_rule_export = pyo.Constraint(model.IDX, rule=grid_balance_rule_export)
# Battery SOC equation
def storage_soc_rule(model, t):
            if t == model.IDX.first():
                expr = model.soc[t] == model.E0 + model.eff * model.poscharge[t] - model.negcharge[t] / model.eff - model.reg[t] * model.reg_penalty
            else:
                expr = model.soc[t] == model.soc[t - 1] + model.eff * model.poscharge[t] - model.negcharge[t] / model.eff - model.reg[t] * model.reg_penalty
            return expr
model.storage_soc_const = pyo.Constraint(model.IDX, rule=storage_soc_rule)
# Simultaneous rule
def limit_simultaneous_rule(model, t):
return model.negcharge[t] + model.poscharge[t] <= model.storage_power
model.limit_simultaneous_const = pyo.Constraint(model.IDX, rule=limit_simultaneous_rule)
#regulation product
def regulation_product_rule(model, t):
return model.reg[t] <= model.storage_power + model.negcharge[t] - model.poscharge[t]
model.regulation_product = pyo.Constraint(model.IDX, rule=regulation_product_rule)
return model
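# The inv_cdf argument of the chance-constrained models is a standard-normal
# quantile; with the stdlib it can be derived from a confidence level (the 0.95
# below is just an example):

```python
from statistics import NormalDist

confidence = 0.95  # hypothetical chance-constraint confidence level
inv_cdf = NormalDist().inv_cdf(confidence)
print(round(inv_cdf, 3))  # -> 1.645
```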
class HybridCC_cdf():
    """
    Pyomo implementation of the SOCP ??? Ask Scott.
    I tried to follow the idea of HW3 PB5, but I am not sure how to model the
    storage component (or how to write the constraint in Pyomo).
    I think Appendix A of this article can help: https://arxiv.org/pdf/1906.04108.pdf

    Pyomo implementation of the LP hybrid, with perfect DAM foresight and the
    solar resource fixed at an inverse-CDF quantile.
    """
    def __init__(self, model, E_price, ASM_price, inv_cdf):
        self.model = model
        self.model.IDX = pyo.RangeSet(0, 8759)  # hourly index over one year
        self.model.E_price = E_price
        self.model.ASM_price = ASM_price
        self.model.inv_cdf = inv_cdf

    def build(self, model, storage_size, eff, s_max, storage_power, solar_plant_size, grid_limit):
        # ------------------ Parameters ---------------------------#
        # Sizing based on: https://www.nrel.gov/docs/fy19osti/71714.pdf
        model.storage_size = storage_size  # in [MWh]
        model.eff = eff  # in [0, 1], usually 75% to 90%
        model.s_max = s_max  # AC interconnection limit in [MW]
        model.storage_power = storage_power  # in [MW]
        model.solar_plant_size = solar_plant_size  # in [MW]
        model.solar_gen = model.solar_plant_size * model.inv_cdf  # in [MW]
        # --------------- Optimization variables ------------------#
        model.negcharge = pyo.Var(model.IDX, bounds=(0.00, model.storage_power))
        model.poscharge = pyo.Var(model.IDX, bounds=(0.00, model.storage_power))
        model.soc = pyo.Var(model.IDX, bounds=(0.00, model.storage_size))  # SOC bounded by the size of the storage system
        model.reg = pyo.Var(model.IDX, bounds=(0.00, model.storage_power * 2))  # bounds for the regulation product provided
        model.E0 = 0  # storage starts empty
        model.grid_limit = grid_limit  # no grid charging = 0, grid charging = 1
        model.reg_penalty = 0.2  # fraction of energy lost by providing regulation service
        # -------------------- Objective fct. ---------------------#
        def Objective_rule(m):
            expr = sum(model.E_price[t] * (model.negcharge[t] - model.poscharge[t] + model.solar_gen[t])
                       + model.ASM_price[t] * model.reg[t] for t in model.IDX)
            return expr
        model.Max_Revenue = pyo.Objective(rule=Objective_rule, sense=pyo.maximize)
        # ----------------- Constraints ----------------------------#
        # Grid balance equation (import side) - UNCERTAIN parameters
        def grid_balance_rule_import(model, t):
            return -model.poscharge[t] + model.negcharge[t] + model.solar_gen[t] >= \
                -model.storage_power * model.grid_limit
        model.grid_balance_rule_import = pyo.Constraint(model.IDX, rule=grid_balance_rule_import)
        # Grid balance equation (export side) - UNCERTAIN parameters
        def grid_balance_rule_export(model, t):
            return model.poscharge[t] - model.negcharge[t] - model.solar_gen[t] >= \
                -model.s_max
        model.grid_balance_rule_export = pyo.Constraint(model.IDX, rule=grid_balance_rule_export)
        # Battery SOC equation
        def storage_soc_rule(model, t):
            if t == model.IDX.first():
                # First hour starts from the initial energy E0; the charge term is
                # derated by eff to match the general-hour equation below
                expr = model.soc[t] == model.E0 + model.eff * model.poscharge[t] - model.negcharge[t] / model.eff + model.reg[t] * model.reg_penalty
            else:
                expr = model.soc[t] == model.soc[t - 1] + model.eff * model.poscharge[t] - model.negcharge[t] / model.eff + model.reg[t] * model.reg_penalty
            return expr
        model.storage_soc_const = pyo.Constraint(model.IDX, rule=storage_soc_rule)
        # No simultaneous charge/discharge beyond the power rating
        def limit_simultaneous_rule(model, t):
            return model.negcharge[t] + model.poscharge[t] <= model.storage_power
        model.limit_simultaneous_const = pyo.Constraint(model.IDX, rule=limit_simultaneous_rule)
        # Regulation product limited by the remaining power headroom
        def regulation_product_rule(model, t):
            return model.reg[t] <= model.storage_power + model.negcharge[t] - model.poscharge[t]
        model.regulation_product = pyo.Constraint(model.IDX, rule=regulation_product_rule)
        return model
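The two power-rating constraints above couple charge, discharge, and regulation for each hour. A plain-Python sketch (hypothetical numbers, no Pyomo) of the same feasibility logic:

```python
# Hypothetical single-hour feasibility check mirroring limit_simultaneous_rule
# and regulation_product_rule; not part of the Pyomo model itself.
def dispatch_feasible(poscharge, negcharge, reg, storage_power):
    """Return True if one hour's dispatch respects the power-rating constraints."""
    # Charging plus discharging cannot exceed the power rating
    simultaneous_ok = negcharge + poscharge <= storage_power
    # Regulation is limited to the headroom left by the net charge position
    regulation_ok = reg <= storage_power + negcharge - poscharge
    return simultaneous_ok and regulation_ok

print(dispatch_feasible(poscharge=10, negcharge=0, reg=5, storage_power=20))  # True
print(dispatch_feasible(poscharge=18, negcharge=0, reg=5, storage_power=20))  # False: only 2 MW headroom
```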
def ResultsAnalysisDet(model, filename):
    # ------------------------ Optimization result ------------------------#
    # Objective value in $
    Max_Revenue = pyo.value(model.Max_Revenue)
    # ---------------------- Optimization variables -----------------------#
    # Save the schedule in a python object
    solar_gen = pd.Series(model.solar_gen)
    energy_price = pd.Series(model.E_price)
    as_price = pd.Series(model.ASM_price)
    energy_gen = pd.Series([round(model.energy_gen[t].value, 3) for t in model.IDX], index=[t for t in model.IDX])
    negcharge = pd.Series([round(model.negcharge[t].value, 3) for t in model.IDX], index=[t for t in model.IDX])
    poscharge = pd.Series([round(model.poscharge[t].value, 3) for t in model.IDX], index=[t for t in model.IDX])
    soc = pd.Series([round(model.soc[t].value, 3) for t in model.IDX], index=[t for t in model.IDX])
    reg = pd.Series([round(model.reg[t].value, 3) for t in model.IDX], index=[t for t in model.IDX])
    schedule = pd.DataFrame({'energy_gen': energy_gen, 'solar_gen': solar_gen, 'energy_price': energy_price,
                             'as_price': as_price, 'storage_out': negcharge, 'storage_in': poscharge,
                             'storage_soc': soc, 'regulation_sell': reg})
    # Save the schedule in an Excel file
    writer = pd.ExcelWriter(filename)
    schedule.to_excel(writer, 'OptVariables')
    # --------------------------- Dual variables --------------------------#
    # Save dual variable values in a python object and in the Excel file
    name = ['AC import', 'AC export', 'Storage SOC rule', 'Simultaneous rule', 'Grid Balance', 'Regulation product']
    i = 0
    for c in model.component_objects(pyo.Constraint, active=True):
        dual_vb = pd.Series([round(model.dual[c[index]], 3) for index in c], index=[t for t in model.IDX])
        constraint = pd.DataFrame({name[i]: dual_vb})
        constraint.to_excel(writer, name[i])
        i = i + 1
    writer.save()
    return Max_Revenue
def ResultsAnalysisML(model, filename, ASM_price, E_price):
    # ---------------------- Optimization variables -----------------------#
    # Save the schedule in a python object
    solar_gen = pd.Series(model.solar_gen)
    energy_price = pd.Series(model.E_price)
    as_price = pd.Series(model.ASM_price)
    energy_gen = pd.Series([round(model.energy_gen[t].value, 3) for t in model.IDX], index=[t for t in model.IDX])
    negcharge = pd.Series([round(model.negcharge[t].value, 3) for t in model.IDX], index=[t for t in model.IDX])
    poscharge = pd.Series([round(model.poscharge[t].value, 3) for t in model.IDX], index=[t for t in model.IDX])
    soc = pd.Series([round(model.soc[t].value, 3) for t in model.IDX], index=[t for t in model.IDX])
    reg = pd.Series([round(model.reg[t].value, 3) for t in model.IDX], index=[t for t in model.IDX])
    schedule = pd.DataFrame({'energy_gen': energy_gen, 'solar_gen': solar_gen, 'energy_price': energy_price,
                             'as_price': as_price, 'storage_out': negcharge, 'storage_in': poscharge,
                             'storage_soc': soc, 'regulation_sell': reg})
    # ------------------------- Comparison result -------------------------#
    # Objective value in $, re-priced against the given price series
    Max_Revenue = sum(E_price[t] * energy_gen[t] + ASM_price[t] * reg[t] for t in range(0, 8760))
    return Max_Revenue
def ResultsAnalysisCC(model, filename, true_solar_gen):
    # ------------------------ Optimization result ------------------------#
    # Objective value in $
    Max_Revenue_opt = pyo.value(model.Max_Revenue)
    # ---------------------- Optimization variables -----------------------#
    # Save the schedule in a python object
    solar_gen = pd.Series(model.solar_gen)
    energy_price = pd.Series(model.E_price)
    as_price = pd.Series(model.ASM_price)
    negcharge = pd.Series([round(model.negcharge[t].value, 3) for t in model.IDX], index=[t for t in model.IDX])
    poscharge = pd.Series([round(model.poscharge[t].value, 3) for t in model.IDX], index=[t for t in model.IDX])
    soc = pd.Series([round(model.soc[t].value, 3) for t in model.IDX], index=[t for t in model.IDX])
    reg = pd.Series([round(model.reg[t].value, 3) for t in model.IDX], index=[t for t in model.IDX])
    schedule = pd.DataFrame({'solar_gen': solar_gen, 'energy_price': energy_price, 'as_price': as_price,
                             'storage_out': negcharge, 'storage_in': poscharge,
                             'storage_soc': soc, 'regulation_sell': reg})
    # Save the schedule in an Excel file
    writer = pd.ExcelWriter(filename)
    schedule.to_excel(writer, 'OptVariables')
    # --------------------------- Dual variables --------------------------#
    # Save dual variable values in a python object and in the Excel file
    name = ['AC import', 'AC export', 'Storage SOC rule', 'Simultaneous rule', 'Regulation product']
    i = 0
    for c in model.component_objects(pyo.Constraint, active=True):
        dual_vb = pd.Series([round(model.dual[c[index]], 3) for index in c], index=[t for t in model.IDX])
        constraint = pd.DataFrame({name[i]: dual_vb})
        constraint.to_excel(writer, name[i])
        i = i + 1
    writer.save()
    # Re-price the optimized schedule against the realized solar generation
    Max_Revenue_for = sum(energy_price[t] * (negcharge[t] - poscharge[t] + true_solar_gen[t]) + as_price[t] * reg[t] for t in range(0, 8760))
    return Max_Revenue_opt, Max_Revenue_for
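The ex-post revenue formula at the end of `ResultsAnalysisCC` can be illustrated with a tiny two-hour example (made-up numbers in place of the 8760-hour series):

```python
# Tiny sketch of the ex-post revenue formula used in ResultsAnalysisCC.
# Two hypothetical hours instead of a full year of data.
energy_price = [30.0, 50.0]    # $/MWh
as_price = [5.0, 8.0]          # $/MW of regulation
negcharge = [0.0, 4.0]         # storage discharge [MW]
poscharge = [3.0, 0.0]         # storage charge [MW]
true_solar_gen = [6.0, 2.0]    # realized solar output [MW]
reg = [1.0, 0.0]               # regulation sold [MW]

# Net grid injection is discharge - charge + realized solar, priced at the
# energy price; regulation is priced at the ancillary-service price.
revenue = sum(
    energy_price[t] * (negcharge[t] - poscharge[t] + true_solar_gen[t]) + as_price[t] * reg[t]
    for t in range(2)
)
print(revenue)  # 30*(0-3+6) + 5*1 + 50*(4-0+2) = 395.0
```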
# ---------------------------------------------------------------------------
# tests/test_lib.py (from nyurik/py-ascii-graph, MIT license)
# ---------------------------------------------------------------------------
] | 20 | 2015-02-05T19:57:47.000Z | 2021-12-10T03:47:58.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import with_statement
from __future__ import unicode_literals
from ascii_graph.colors import *
import pytest
import sys
from ascii_graph import Pyasciigraph
def u(x):
if sys.version < '3':
import codecs
return codecs.unicode_escape_decode(x)[0]
else:
return x
def gprint(res):
for l in res:
print(l)
class TestLib(object):

    def test_unsorted_default_params(self):
        test = [('long_labe☭', 423), ('sl', 1234), ('line3', 531), ('line4', 200), ('line5', 834)]
        graph = Pyasciigraph()
        res = graph.graph('☭test print', test)
        expected = [
            '☭test print',
            '###############################################################################',
            '████████████████████ 423 long_labe☭',
            '█████████████████████████████████████████████████████████████ 1234 sl ',
            '██████████████████████████ 531 line3 ',
            '█████████ 200 line4 ',
            '█████████████████████████████████████████ 834 line5 ',
        ]
        gprint(res)
        gprint(expected)
        assert res == expected
    def test_noansi_info(self):
        test = [('long_labe☭ (\033[92m+\033[0m)', 423), ('sl (\033[91m-\033[0m)', 1234), ('line3 (\033[91m-\033[0m)', 531), ('line4', 200), ('line5', 834)]
        graph = Pyasciigraph()
        res = graph.graph('☭test print', test)
        expected = [u'\u262dtest print', u'###############################################################################', u'\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588 423 long_labe\u262d (\x1b[92m+\x1b[0m)', u'\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588 1234 sl (\x1b[91m-\x1b[0m) ', u'\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588 531 line3 (\x1b[91m-\x1b[0m) ', u'\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588 200 line4 ', u'\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588 834 line5 ']
        gprint(res)
        gprint(expected)
        assert res == expected
    def test_float_format(self):
        test = [('long_labe☭', 423.197), ('sl', 1234.12341), ('line3', 531.11), ('line4', 200), ('line5', 834)]
        graph = Pyasciigraph(float_format='{0:,.2f}')
        res = graph.graph('☭test print', test)
        expected = [
            '☭test print',
            '###############################################################################',
            '███████████████████ 423.20 long_labe☭',
            '█████████████████████████████████████████████████████████ 1,234.12 sl ',
            '████████████████████████ 531.11 line3 ',
            '█████████ 200.00 line4 ',
            '██████████████████████████████████████ 834.00 line5 ',
        ]
        gprint(res)
        gprint(expected)
        assert res == expected
    def test_zeros(self):
        test = [('long_labe☭', 0), ('sl', 0), ('line3', 0), ('line4', 0), ('line5', 0)]
        graph = Pyasciigraph()
        res = graph.graph('☭test print', test)
        expected = [
            '☭test print',
            '###############################################################################',
            ' 0 long_labe☭',
            ' 0 sl ',
            ' 0 line3 ',
            ' 0 line4 ',
            ' 0 line5 ',
        ]
        gprint(res)
        gprint(expected)
        assert res == expected
    def test_human_readable_si(self):
        test = [('long_labe☭', 1234), ('sl', 1231234), ('line3', 1231231234), ('line4', 1231231231234), ('line5', 1231231231231234), ('line6', 1231231231231231234), ('line7', 1231231231231231231234), ('line8', 1231231231231231231231234), ('line9', 123231231231231231231231234)]
        graph = Pyasciigraph(human_readable='si')
        res = graph.graph('☭test print', test)
        expected = [
            '☭test print',
            '###############################################################################',
            ' 1K long_labe☭',
            ' 1M sl ',
            ' 1G line3 ',
            ' 1T line4 ',
            ' 1P line5 ',
            ' 1E line6 ',
            ' 1Z line7 ',
            ' 1Y line8 ',
            '█████████████████████████████████████████████████████████████ 123Y line9 ',
        ]
        gprint(res)
        gprint(expected)
        assert res == expected
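The `'si'` expectations above suggest that values are repeatedly rescaled by 1000 and tagged with K/M/G/T/P/E/Z/Y suffixes. A minimal sketch of that conversion (an assumption about the library's behavior inferred from the test data, not its actual source):

```python
# Hypothetical re-implementation of SI-style shortening, matching the
# suffixes seen in the expected output above.
def si_readable(value, base=1000):
    suffixes = ['', 'K', 'M', 'G', 'T', 'P', 'E', 'Z', 'Y']
    i = 0
    # Divide by 1000 until the value fits, or we run out of suffixes
    while value >= base and i < len(suffixes) - 1:
        value //= base
        i += 1
    return '%d%s' % (value, suffixes[i])

print(si_readable(1234))        # 1K
print(si_readable(1231231234))  # 1G
```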
    def test_human_readable_cs(self):
        test = [('long_labe☭', 1234), ('sl', 1231234), ('line3', 1231231234), ('line4', 1231231231234), ('line5', 1231231231231234), ('line6', 1231231231231231234), ('line7', 1231231231231231231234), ('line8', 1231231231231231231231234)]
        graph = Pyasciigraph(human_readable='cs')
        res = graph.graph('☭test print', test)
        expected = [
            '☭test print',
            '###############################################################################',
            ' 1K long_labe☭',
            ' 1M sl ',
            ' 1G line3 ',
            ' 1T line4 ',
            ' 1P line5 ',
            ' 1E line6 ',
            ' 1Z line7 ',
            '███████████████████████████████████████████████████████████████ 1Y line8 '
        ]
        gprint(res)
        gprint(expected)
        assert res == expected
    def test_type_output(self):
        test = [('long_labe☭', 423), ('sl', 1234), ('line3', 531), ('line4', 200), ('line5', 834)]
        graph = Pyasciigraph()
        res = graph.graph('test print', test)
        if sys.version < '3':
            expected = unicode
        else:
            expected = str
        for line in res:
            assert type(line) == expected
    def test_convert_label(self):
        test = [(1, 423), (2, 1234), (3, 531), ('line4', 200), ('line5', 834)]
        graph = Pyasciigraph()
        res = graph.graph('test print', test)
        expected = [
            'test print',
            '###############################################################################',
            '██████████████████████ 423 1 ',
            '██████████████████████████████████████████████████████████████████ 1234 2 ',
            '████████████████████████████ 531 3 ',
            '██████████ 200 line4',
            '████████████████████████████████████████████ 834 line5',
        ]
        gprint(res)
        gprint(expected)
        assert res == expected

    def test_no_label(self):
        test = [(1, 423), (2, 1234), (3, 531), ('line4', 200), ('line5', 834)]
        graph = Pyasciigraph()
        res = graph.graph(data=test)
        expected = [
            '██████████████████████ 423 1 ',
            '██████████████████████████████████████████████████████████████████ 1234 2 ',
            '████████████████████████████ 531 3 ',
            '██████████ 200 line4',
            '████████████████████████████████████████████ 834 line5',
        ]
        gprint(res)
        gprint(expected)
        assert res == expected
    def test_vcolor(self):
        from ascii_graph.colordata import vcolor
        test = [('testval0', 600),
                ('testval1', 500),
                ('testval2', 400),
                ('testval3', 400),
                ('testval4', 300),
                ('testval5', 200),
                ('testval6', 100),
                ('testval7', 50)]
        expected = [('testval0', 600, Gre),
                    ('testval1', 500, Yel),
                    ('testval2', 400, Red),
                    ('testval3', 400, Gre),
                    ('testval4', 300, Yel),
                    ('testval5', 200, Red),
                    ('testval6', 100, Gre),
                    ('testval7', 50, Yel)]
        pattern = [Gre, Yel, Red]
        data = vcolor(test, pattern)
        assert data == expected
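Judging from the expected output, `vcolor` cycles the colour pattern down the rows. The behaviour exercised above could be sketched like this (a mirror of the observed behaviour, not the library's actual implementation):

```python
from itertools import cycle

# Sketch of the row-wise colour cycling that the vcolor test exercises:
# each (label, value) row is paired with the next colour from the pattern,
# wrapping around when the pattern is exhausted.
def cycle_colors(data, pattern):
    return [(label, value, color)
            for (label, value), color in zip(data, cycle(pattern))]

rows = [('a', 3), ('b', 2), ('c', 1), ('d', 4)]
print(cycle_colors(rows, ['Gre', 'Yel', 'Red']))
```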
    def test_alternate_graphsymbol(self):
        test = [('long_labe☭', 423), ('sl', 1234), ('line3', 531), ('line4', 200), ('line5', 834)]
        graph = Pyasciigraph(graphsymbol='*')
        res = graph.graph('☭test print', test)
        expected = [
            '☭test print',
            '###############################################################################',
            '******************** 423 long_labe☭',
            '************************************************************* 1234 sl ',
            '************************** 531 line3 ',
            '********* 200 line4 ',
            '***************************************** 834 line5 ',
        ]
        gprint(res)
        gprint(expected)
        assert res == expected

    def test_graphsymbol_bad_length(self):
        test = [('long_labe☭', 423), ('sl', 1234), ('line3', 531), ('line4', 200), ('line5', 834)]
        with pytest.raises(Exception) as e:
            graph = Pyasciigraph(graphsymbol='*0')
    def test_color_graphs(self):
        test = [('testval0', 142),
                ('testval1', 204, BPur),
                ('testval2', 501, URed),
                ('testval3', 103, IRed),
                ('testval4', 29, BIGre),
                ('testval5', 19, UYel),
                ('testval6', 99, ICya),
                ('testval7', 404, BBlu)]
        graph = Pyasciigraph()
        res = graph.graph('test graph', test)
        expected = [u'test graph', u'###############################################################################', u'\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588 142 testval0', u'\x1b[1;35m\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0m \x1b[1;35m204\x1b[0m testval1', u'\x1b[4;31m\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0m \x1b[4;31m501\x1b[0m testval2', u'\x1b[0;91m\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0m \x1b[0;91m103\x1b[0m testval3', u'\x1b[1;92m\u2588\u2588\u2588\x1b[0m \x1b[1;92m29\x1b[0m testval4', u'\x1b[4;33m\u2588\u2588\x1b[0m \x1b[4;33m19\x1b[0m testval5', u'\x1b[0;96m\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0m \x1b[0;96m99\x1b[0m testval6', u'\x1b[1;34m\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0m \x1b[1;34m404\x1b[0m testval7']
        gprint(res)
        gprint(expected)
        assert res == expected
    def test_multivalue_color_graphs(self):
        test = [('testval0', 600),
                ('testval1', 400, Red),
                ('testval2', [(300, Gre), (500, Blu)]),
                ('testval3', [(200, Yel), (100,)]),
                ('testval4', 100, Cya),
                ('testval5', 50, Blu),
                ('testval6', [(100, Gre), (150, Red), (200, Yel), (600, Blu)])]
        graph = Pyasciigraph(separator_length=4)
        res = graph.graph('test graph', test)
        expected = [u'test graph', u'#################################################################################', u'\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588 600 testval0', u'\x1b[0;31m\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0m \x1b[0;31m400\x1b[0m testval1', u'\x1b[0;32m\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0m\x1b[0;34m\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0m \x1b[0;32m300\x1b[0m,\x1b[0;34m500\x1b[0m testval2', u'\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0;33m\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0m \x1b[0;33m200\x1b[0m,100 testval3', u'\x1b[0;36m\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0m \x1b[0;36m100\x1b[0m testval4', u'\x1b[0;34m\u2588\u2588\u2588\u2588\x1b[0m \x1b[0;34m50\x1b[0m testval5', u'\x1b[0;32m\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0m\x1b[0;31m\u2588\u2588\u2588\u2588\x1b[0m\x1b[0;33m\u2588\u2588\u2588\u2588\x1b[0m\x1b[0;34m\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0m \x1b[0;32m100\x1b[0m,\x1b[0;31m150\x1b[0m,\x1b[0;33m200\x1b[0m,\x1b[0;34m600\x1b[0m testval6']
        gprint(res)
        gprint(expected)
        assert res == expected

    def test_multivalue_color_graphs_max(self):
        test = [('testval0', 600),
                ('testval1', 400, Red),
                ('testval2', [(300, Gre), (500, Blu)]),
                ('testval3', [(200, Yel), (100,)]),
                ('testval4', 100, Cya),
                ('testval5', 50, Blu),
                ('testval6', [(100, Gre), (150, Red), (200, Yel), (600, Blu)])]
        graph = Pyasciigraph(separator_length=4, multivalue=False)
        res = graph.graph('test graph', test)
        expected = [u'test graph', u'###############################################################################', u'\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588 600 testval0', u'\x1b[0;31m\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0m \x1b[0;31m400\x1b[0m testval1', u'\x1b[0;32m\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0m\x1b[0;34m\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0m \x1b[0;34m500\x1b[0m testval2', u'\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0;33m\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0m \x1b[0;33m200\x1b[0m testval3', u'\x1b[0;36m\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0m \x1b[0;36m100\x1b[0m testval4', u'\x1b[0;34m\u2588\u2588\u2588\u2588\u2588\x1b[0m \x1b[0;34m50\x1b[0m testval5', u'\x1b[0;32m\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0m\x1b[0;31m\u2588\u2588\u2588\u2588\u2588\x1b[0m\x1b[0;33m\u2588\u2588\u2588\u2588\u2588\x1b[0m\x1b[0;34m\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0m \x1b[0;34m600\x1b[0m testval6']
        gprint(res)
        gprint(expected)
        assert res == expected
    def test_force_max_value_param(self):
        test = [('long_labe☭', 423), ('sl', 1234), ('line3', 531), ('line4', 200), ('line5', 834)]
        graph = Pyasciigraph(force_max_value=2000)
        res = graph.graph('☭test print', test)
        expected = [
            '☭test print',
            '###############################################################################',
            '████████████ 423 long_labe☭',
            '█████████████████████████████████████ 1234 sl ',
            '████████████████ 531 line3 ',
            '██████ 200 line4 ',
            '█████████████████████████ 834 line5 '
        ]
        gprint(res)
        gprint(expected)
        assert res == expected
    def test_neg_simple(self):
        test = [('testval0', 600),
                ('testval1', 500),
                ('testval2', -400),
                ('testval3', 400),
                ('testval4', 300),
                ('testval5', 200),
                ('testval6', 100),
                ('testval7', 50)]
        graph = Pyasciigraph()
        res = graph.graph('☭test print', test)
        expected = [
            '☭test print',
            '###############################################################################',
            ' █████████████████████████████████████ 600 testval0',
            ' ███████████████████████████████ 500 testval1',
            '█████████████████████████ -400 testval2',
            ' █████████████████████████ 400 testval3',
            ' ██████████████████ 300 testval4',
            ' ████████████ 200 testval5',
            ' ██████ 100 testval6',
            ' ███ 50 testval7',
        ]
        gprint(res)
        gprint(expected)
        assert res == expected
    def test_neg_multicolor(self):
        test = [('testval0', 600),
                ('testval1', 400, Red),
                ('testval2', [(600, Gre), (500, Blu)]),
                ('testval3', [(200, Yel), (100,)]),
                ('testval4', -170, Cya),
                ('testval5', 50, Blu),
                ('testval6', [(-300, Gre), (-230, Red)]),
                ('testval7', [(-100, Gre), (-230, Red), (200, Yel), (600, Blu)])]
        graph = Pyasciigraph()
        res = graph.graph('☭test print', test)
        expected = [u'\u262dtest print', u'###############################################################################', u' \u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588 600 testval0', u' \x1b[0;31m\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0m \x1b[0;31m400\x1b[0m testval1', u' \x1b[0;34m\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0m\x1b[0;32m\u2588\u2588\u2588\u2588\u2588\x1b[0m \x1b[0;32m600\x1b[0m,\x1b[0;34m500\x1b[0m testval2', u' \u2588\u2588\u2588\u2588\u2588\x1b[0;33m\u2588\u2588\u2588\u2588\u2588\x1b[0m \x1b[0;33m200\x1b[0m,100 testval3', u' \x1b[0;36m\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0m \x1b[0;36m-170\x1b[0m testval4', u' \x1b[0;34m\u2588\u2588\x1b[0m \x1b[0;34m50\x1b[0m testval5', u' \x1b[0;32m\u2588\u2588\u2588\x1b[0m\x1b[0;31m\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0m \x1b[0;32m-300\x1b[0m,\x1b[0;31m-230\x1b[0m testval6', u' \x1b[0;31m\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0m\x1b[0;32m\u2588\u2588\u2588\u2588\u2588\x1b[0m\x1b[0;33m\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0m\x1b[0;34m\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\x1b[0m \x1b[0;32m-100\x1b[0m,\x1b[0;31m-230\x1b[0m,\x1b[0;33m200\x1b[0m,\x1b[0;34m600\x1b[0m testval7']
        gprint(res)
        gprint(expected)
        assert res == expected