"""
This module holds all the neural network models for uatu.
To start, their architecture will be mostly hardcoded, but I may generalize it in the future.
"""
try:
    import tensorflow as tf
except ImportError:
    # TensorFlow is an optional dependency; the network builders below fail lazily without it.
    pass
def standard_convnet_init_fn(inputs, training= False):
#TODO add more customization
initializer = tf.variance_scaling_initializer(scale=2.0)
# TODO gotta be a better way to do this?
#prob = tf.cond(training, lambda : 0.5, lambda : 1.0) #should i do some fancier tf stuff?
axis = -1
# NOTE ask waren if i need separate relus
conv1_out = tf.layers.conv3d(inputs, 2, kernel_size=62, padding='same')
# kernel_initializer=initializer)
bn1_out = tf.layers.batch_normalization(conv1_out, axis = axis, training=training)
lr1_out = tf.nn.leaky_relu(bn1_out, alpha=0.01)
ap1_out = tf.layers.average_pooling3d(lr1_out, pool_size=(31, 31, 31), strides = 2)
conv2_out = tf.layers.conv3d(ap1_out, 12, kernel_size=(28, 28, 28), padding='same')
# kernel_initializer=initializer)
bn2_out = tf.layers.batch_normalization(conv2_out, axis = axis, training=training)
lr2_out = tf.nn.leaky_relu(bn2_out, alpha=0.01)
ap2_out = tf.layers.average_pooling3d(lr2_out, pool_size=(14, 14, 14), strides = 2)
conv3_out = tf.layers.conv3d(ap2_out, 64, kernel_size=(6, 6, 6), padding='same')
# kernel_initializer=initializer)
bn3_out = tf.layers.batch_normalization(conv3_out, axis = axis, training=training)
lr3_out = tf.nn.leaky_relu(bn3_out, alpha=0.01)
conv4_out = tf.layers.conv3d(lr3_out, 64, kernel_size=(4, 4, 4), padding='same')
# kernel_initializer=initializer)
bn4_out = tf.layers.batch_normalization(conv4_out, axis = axis, training=training)
lr4_out = tf.nn.leaky_relu(bn4_out, alpha=0.01)
conv5_out = tf.layers.conv3d(lr4_out, 128, kernel_size=(3, 3, 3), padding='same')
# kernel_initializer=initializer)
bn5_out= tf.layers.batch_normalization(conv5_out, axis = axis, training=training)
lr5_out = tf.nn.leaky_relu(bn5_out, alpha=0.01)
conv6_out = tf.layers.conv3d(lr5_out, 128, kernel_size=(2, 2, 2), padding='same')
# kernel_initializer=initializer)
bn6_out = tf.layers.batch_normalization(conv6_out, axis = axis, training= training)
lr6_out = tf.nn.leaky_relu(bn6_out, alpha=0.01)
flat_out = tf.layers.flatten(lr6_out)
dense1_out = tf.layers.dense(flat_out, 1024)# kernel_initializer=initializer)
#drop1_out = tf.layers.dropout(dense1_out, training=training)
lr7_out = tf.nn.leaky_relu(dense1_out, alpha=0.01)
dense2_out = tf.layers.dense(lr7_out, 256)# kernel_initializer=initializer)
#drop2_out = tf.layers.dropout(dense2_out, training=training)
lr8_out = tf.nn.leaky_relu(dense2_out, alpha=0.01)
dense3_out = tf.layers.dense(lr8_out, 2)# kernel_initializer=initializer)
return dense3_out
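For a 64^3 single-channel input (the shape the Keras variant below assumes), the spatial size through this stack can be traced with simple shape arithmetic: 'same' convolutions preserve size, and the pooling layers use TensorFlow's default 'valid' padding. A minimal sketch, assuming 64x64x64 inputs:

```python
def pool_out(n, pool, stride=2):
    # 'valid' average pooling output size: floor((n - pool) / stride) + 1
    return (n - pool) // stride + 1

n = 64                 # assumed input edge length
n = pool_out(n, 31)    # conv1 ('same') keeps 64; ap1 -> 17
n = pool_out(n, 14)    # conv2 ('same') keeps 17; ap2 -> 2
# conv3..conv6 are all 'same', so the final feature map is n^3 with 128 channels
flat_units = n ** 3 * 128
print(n, flat_units)
```

This is why the first dense layer sees a modest flattened input despite the 3D volumes.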
def bayesian_convnet_init_fn(inputs, bayes_prob=0.95, training= False):
#TODO add more customization
#initializer = tf.variance_scaling_initializer(scale=2.0)
# TODO gotta be a better way to do this?
#prob = tf.cond(training, lambda : 0.5, lambda : 1.0) #should i do some fancier tf stuff?
axis = -1
# NOTE ask waren if i need separate relus
conv1_out = tf.layers.conv3d(inputs, 2, kernel_size=62, padding='same')
# kernel_initializer=initializer)
bd1_out = tf.layers.dropout(conv1_out, rate = 1 - bayes_prob, training = True)
bn1_out = tf.layers.batch_normalization(bd1_out, axis = axis, training=training)
lr1_out = tf.nn.leaky_relu(bn1_out, alpha=0.01)
ap1_out = tf.layers.average_pooling3d(lr1_out, pool_size=(31, 31, 31), strides = 2)
conv2_out = tf.layers.conv3d(ap1_out, 12, kernel_size=(28, 28, 28), padding='same')
# kernel_initializer=initializer)
bd2_out = tf.layers.dropout(conv2_out, rate = 1 - bayes_prob, training = True)
bn2_out = tf.layers.batch_normalization(bd2_out, axis = axis, training=training)
lr2_out = tf.nn.leaky_relu(bn2_out, alpha=0.01)
ap2_out = tf.layers.average_pooling3d(lr2_out, pool_size=(14, 14, 14), strides = 2)
conv3_out = tf.layers.conv3d(ap2_out, 64, kernel_size=(6, 6, 6), padding='same')
# kernel_initializer=initializer)
bd3_out = tf.layers.dropout(conv3_out, rate = 1 - bayes_prob, training = True)
bn3_out = tf.layers.batch_normalization(bd3_out, axis = axis, training=training)
lr3_out = tf.nn.leaky_relu(bn3_out, alpha=0.01)
conv4_out = tf.layers.conv3d(lr3_out, 64, kernel_size=(4, 4, 4), padding='same')
# kernel_initializer=initializer)
bd4_out = tf.layers.dropout(conv4_out, rate = 1 - bayes_prob, training = True)
bn4_out = tf.layers.batch_normalization(bd4_out, axis = axis, training=training)
lr4_out = tf.nn.leaky_relu(bn4_out, alpha=0.01)
conv5_out = tf.layers.conv3d(lr4_out, 128, kernel_size=(3, 3, 3), padding='same')
# kernel_initializer=initializer)
bd5_out = tf.layers.dropout(conv5_out, rate = 1 - bayes_prob, training = True)
bn5_out= tf.layers.batch_normalization(bd5_out, axis = axis, training=training)
lr5_out = tf.nn.leaky_relu(bn5_out, alpha=0.01)
conv6_out = tf.layers.conv3d(lr5_out, 128, kernel_size=(2, 2, 2), padding='same')
# kernel_initializer=initializer)
bd6_out = tf.layers.dropout(conv6_out, rate = 1 - bayes_prob, training = True)
bn6_out = tf.layers.batch_normalization(bd6_out, axis = axis, training= training)
lr6_out = tf.nn.leaky_relu(bn6_out, alpha=0.01)
flat_out = tf.layers.flatten(lr6_out)
dense1_out = tf.layers.dense(flat_out, 1024)# kernel_initializer=initializer)
drop1_out = tf.layers.dropout(dense1_out, training=training)
lr7_out = tf.nn.leaky_relu(drop1_out, alpha=0.01)
dense2_out = tf.layers.dense(lr7_out, 256)# kernel_initializer=initializer)
drop2_out = tf.layers.dropout(dense2_out, training=training)
lr8_out = tf.nn.leaky_relu(drop2_out, alpha=0.01)
dense3_out = tf.layers.dense(lr8_out, 5)# kernel_initializer=initializer)
return dense3_out
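The always-on dropout above (`training=True` even at inference) is what makes this network "Bayesian": repeated stochastic forward passes at test time yield a predictive distribution (Monte-Carlo dropout). A framework-free sketch of the aggregation step, where `stochastic_forward` is a hypothetical stand-in for one pass through the dropout network:

```python
import random
import statistics

def stochastic_forward(x, keep_prob=0.95):
    # stand-in for a dropout network: each unit survives with keep_prob
    # and is rescaled by 1/keep_prob (inverted dropout)
    return sum(v / keep_prob if random.random() < keep_prob else 0.0 for v in x)

random.seed(0)
x = [1.0, 2.0, 3.0]
samples = [stochastic_forward(x) for _ in range(200)]
mean = statistics.mean(samples)   # predictive mean, close to sum(x) = 6
std = statistics.stdev(samples)   # spread across passes = predictive uncertainty
```

In practice one would call the trained model repeatedly on the same input and report the mean and standard deviation of the outputs.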
def shallow_convnet_init_fn(inputs, training=False):
# TODO add more customization
initializer = tf.variance_scaling_initializer(scale=2.0)
# TODO gotta be a better way to do this?
# prob = tf.cond(training, lambda : 0.5, lambda : 1.0) #should i do some fancier tf stuff?
axis = -1
# NOTE ask waren if i need separate relus
conv1_out = tf.layers.conv3d(inputs, 2, kernel_size=32, padding='same',
kernel_initializer=initializer)
bn1_out = tf.layers.batch_normalization(conv1_out, axis=axis, training=training)
lr1_out = tf.nn.leaky_relu(bn1_out, alpha=0.01)
ap1_out = tf.layers.average_pooling3d(lr1_out, pool_size=(24,24,24), strides=2)
conv2_out = tf.layers.conv3d(ap1_out, 12, kernel_size=16, padding='same',
kernel_initializer=initializer)
bn2_out = tf.layers.batch_normalization(conv2_out, axis=axis, training=training)
lr2_out = tf.nn.leaky_relu(bn2_out, alpha=0.01)
ap2_out = tf.layers.average_pooling3d(lr2_out, pool_size=(8, 8, 8), strides=2)
conv3_out = tf.layers.conv3d(ap2_out, 64, kernel_size=4, padding='same',
kernel_initializer=initializer)
bn3_out = tf.layers.batch_normalization(conv3_out, axis=axis, training=training)
lr3_out = tf.nn.leaky_relu(bn3_out, alpha=0.01)
# conv4_out = tf.layers.conv3d(lr3_out, 64, kernel_size=(4, 4, 4), padding='same')
# kernel_initializer=initializer)
# bn4_out = tf.layers.batch_normalization(conv4_out, axis = axis, training=training)
# lr4_out = tf.nn.leaky_relu(bn4_out, alpha=0.01)
# conv5_out = tf.layers.conv3d(lr4_out, 128, kernel_size=(3, 3, 3), padding='same')
# kernel_initializer=initializer)
# bn5_out= tf.layers.batch_normalization(conv5_out, axis = axis, training=training)
# lr5_out = tf.nn.leaky_relu(bn5_out, alpha=0.01)
# conv6_out = tf.layers.conv3d(lr5_out, 128, kernel_size=(2, 2, 2), padding='same')
# kernel_initializer=initializer)
# bn6_out = tf.layers.batch_normalization(conv6_out, axis = axis, training= training)
# lr6_out = tf.nn.leaky_relu(bn6_out, alpha=0.01)
flat_out = tf.layers.flatten(lr3_out)
dense1_out = tf.layers.dense(flat_out, 1024) # kernel_initializer=initializer)
drop1_out = tf.layers.dropout(dense1_out, training=training)
lr7_out = tf.nn.leaky_relu(drop1_out, alpha=0.01)
dense2_out = tf.layers.dense(lr7_out, 256) # kernel_initializer=initializer)
drop2_out = tf.layers.dropout(dense2_out, training=training)
lr8_out = tf.nn.leaky_relu(drop2_out, alpha=0.01)
dense3_out = tf.layers.dense(lr8_out, 2) # kernel_initializer=initializer)
return dense3_out
def shallow_bayesian_convnet_init_fn(inputs, training=False, keep_prob = 0.95):
# TODO add more customization
initializer = tf.variance_scaling_initializer(scale=2.0)
# TODO gotta be a better way to do this?
# prob = tf.cond(training, lambda : 0.5, lambda : 1.0) #should i do some fancier tf stuff?
axis = -1
# NOTE ask waren if i need separate relus
conv1_out = tf.layers.conv3d(inputs, 2, kernel_size=32, padding='same',
kernel_initializer=initializer)
bd1_out = tf.layers.dropout(conv1_out, rate= 1-keep_prob, training = True)
bn1_out = tf.layers.batch_normalization(bd1_out, axis=axis, training=training)
lr1_out = tf.nn.leaky_relu(bn1_out, alpha=0.01)
ap1_out = tf.layers.average_pooling3d(lr1_out, pool_size=(24,24,24), strides=2)
conv2_out = tf.layers.conv3d(ap1_out, 12, kernel_size=16, padding='same',
kernel_initializer=initializer)
bd2_out = tf.layers.dropout(conv2_out, rate= 1-keep_prob, training = True)
bn2_out = tf.layers.batch_normalization(bd2_out, axis=axis, training=training)
lr2_out = tf.nn.leaky_relu(bn2_out, alpha=0.01)
ap2_out = tf.layers.average_pooling3d(lr2_out, pool_size=(8, 8, 8), strides=2)
conv3_out = tf.layers.conv3d(ap2_out, 64, kernel_size=4, padding='same',
kernel_initializer=initializer)
bd3_out = tf.layers.dropout(conv3_out, rate= 1-keep_prob, training = True)
bn3_out = tf.layers.batch_normalization(bd3_out, axis=axis, training=training)
lr3_out = tf.nn.leaky_relu(bn3_out, alpha=0.01)
# conv4_out = tf.layers.conv3d(lr3_out, 64, kernel_size=(4, 4, 4), padding='same')
# kernel_initializer=initializer)
# bn4_out = tf.layers.batch_normalization(conv4_out, axis = axis, training=training)
# lr4_out = tf.nn.leaky_relu(bn4_out, alpha=0.01)
# conv5_out = tf.layers.conv3d(lr4_out, 128, kernel_size=(3, 3, 3), padding='same')
# kernel_initializer=initializer)
# bn5_out= tf.layers.batch_normalization(conv5_out, axis = axis, training=training)
# lr5_out = tf.nn.leaky_relu(bn5_out, alpha=0.01)
# conv6_out = tf.layers.conv3d(lr5_out, 128, kernel_size=(2, 2, 2), padding='same')
# kernel_initializer=initializer)
# bn6_out = tf.layers.batch_normalization(conv6_out, axis = axis, training= training)
# lr6_out = tf.nn.leaky_relu(bn6_out, alpha=0.01)
flat_out = tf.layers.flatten(lr3_out)
dense1_out = tf.layers.dense(flat_out, 1024) # kernel_initializer=initializer)
drop1_out = tf.layers.dropout(dense1_out, training=training)
lr7_out = tf.nn.leaky_relu(drop1_out, alpha=0.01)
dense2_out = tf.layers.dense(lr7_out, 256) # kernel_initializer=initializer)
drop2_out = tf.layers.dropout(dense2_out, training=training)
lr8_out = tf.nn.leaky_relu(drop2_out, alpha=0.01)
dense3_out = tf.layers.dense(lr8_out, 4) # kernel_initializer=initializer)
return dense3_out
def shallow_original_bayesian_convnet_init_fn(inputs, training=False, keep_prob = 0.95):
# TODO add more customization
initializer = tf.variance_scaling_initializer(scale=2.0)
# TODO gotta be a better way to do this?
# prob = tf.cond(training, lambda : 0.5, lambda : 1.0) #should i do some fancier tf stuff?
axis = -1
# NOTE ask waren if i need separate relus
conv1_out = tf.layers.conv3d(inputs, 2, kernel_size=32, padding='same',
kernel_initializer=initializer)
bd1_out = tf.layers.dropout(conv1_out, rate= 1-keep_prob, training = True)
bn1_out = tf.layers.batch_normalization(bd1_out, axis=axis, training=training)
lr1_out = tf.nn.leaky_relu(bn1_out, alpha=0.01)
ap1_out = tf.layers.average_pooling3d(lr1_out, pool_size=(24,24,24), strides=2)
conv2_out = tf.layers.conv3d(ap1_out, 12, kernel_size=16, padding='same',
kernel_initializer=initializer)
bd2_out = tf.layers.dropout(conv2_out, rate= 1-keep_prob, training = True)
bn2_out = tf.layers.batch_normalization(bd2_out, axis=axis, training=training)
lr2_out = tf.nn.leaky_relu(bn2_out, alpha=0.01)
ap2_out = tf.layers.average_pooling3d(lr2_out, pool_size=(8, 8, 8), strides=2)
conv3_out = tf.layers.conv3d(ap2_out, 64, kernel_size=4, padding='same',
kernel_initializer=initializer)
bd3_out = tf.layers.dropout(conv3_out, rate= 1-keep_prob, training = True)
bn3_out = tf.layers.batch_normalization(bd3_out, axis=axis, training=training)
lr3_out = tf.nn.leaky_relu(bn3_out, alpha=0.01)
# conv4_out = tf.layers.conv3d(lr3_out, 64, kernel_size=(4, 4, 4), padding='same')
# kernel_initializer=initializer)
# bn4_out = tf.layers.batch_normalization(conv4_out, axis = axis, training=training)
# lr4_out = tf.nn.leaky_relu(bn4_out, alpha=0.01)
# conv5_out = tf.layers.conv3d(lr4_out, 128, kernel_size=(3, 3, 3), padding='same')
# kernel_initializer=initializer)
# bn5_out= tf.layers.batch_normalization(conv5_out, axis = axis, training=training)
# lr5_out = tf.nn.leaky_relu(bn5_out, alpha=0.01)
# conv6_out = tf.layers.conv3d(lr5_out, 128, kernel_size=(2, 2, 2), padding='same')
# kernel_initializer=initializer)
# bn6_out = tf.layers.batch_normalization(conv6_out, axis = axis, training= training)
# lr6_out = tf.nn.leaky_relu(bn6_out, alpha=0.01)
flat_out = tf.layers.flatten(lr3_out)
dense1_out = tf.layers.dense(flat_out, 1024) # kernel_initializer=initializer)
drop1_out = tf.layers.dropout(dense1_out, training=training)
lr7_out = tf.nn.leaky_relu(drop1_out, alpha=0.01)
dense2_out = tf.layers.dense(lr7_out, 256) # kernel_initializer=initializer)
drop2_out = tf.layers.dropout(dense2_out, training=training)
lr8_out = tf.nn.leaky_relu(drop2_out, alpha=0.01)
dense3_out = tf.layers.dense(lr8_out, 5) # kernel_initializer=initializer)
return dense3_out
def very_shallow_convnet_init_fn(inputs, training=False):
# TODO add more customization
initializer = tf.variance_scaling_initializer(scale=2.0)
# TODO gotta be a better way to do this?
# prob = tf.cond(training, lambda : 0.5, lambda : 1.0) #should i do some fancier tf stuff?
axis = -1
# NOTE ask waren if i need separate relus
conv1_out = tf.layers.conv3d(inputs, 2, kernel_size=32, padding='same',
kernel_initializer=initializer)
bn1_out = tf.layers.batch_normalization(conv1_out, axis=axis, training=training)
lr1_out = tf.nn.leaky_relu(bn1_out, alpha=0.01)
ap1_out = tf.layers.average_pooling3d(lr1_out, pool_size=(24,24,24), strides=2)
flat_out = tf.layers.flatten(ap1_out)
dense1_out = tf.layers.dense(flat_out, 1024) # kernel_initializer=initializer)
drop1_out = tf.layers.dropout(dense1_out, training=training)
lr7_out = tf.nn.leaky_relu(drop1_out, alpha=0.01)
dense2_out = tf.layers.dense(lr7_out, 2) # kernel_initializer=initializer)
return dense2_out
def very_shallow_bayesian_convnet_init_fn(inputs, training=False, keep_prob = 0.95):
# TODO add more customization
initializer = tf.variance_scaling_initializer(scale=2.0)
# TODO gotta be a better way to do this?
# prob = tf.cond(training, lambda : 0.5, lambda : 1.0) #should i do some fancier tf stuff?
axis = -1
# NOTE ask waren if i need separate relus
conv1_out = tf.layers.conv3d(inputs, 2, kernel_size=32, padding='same',
kernel_initializer=initializer)
bd1_out = tf.layers.dropout(conv1_out, rate= 1-keep_prob, training = True)
bn1_out = tf.layers.batch_normalization(bd1_out, axis=axis, training=training)
lr1_out = tf.nn.leaky_relu(bn1_out, alpha=0.01)
ap1_out = tf.layers.average_pooling3d(lr1_out, pool_size=(24,24,24), strides=2)
flat_out = tf.layers.flatten(ap1_out)
dense1_out = tf.layers.dense(flat_out, 1024) # kernel_initializer=initializer)
drop1_out = tf.layers.dropout(dense1_out, training=training)
lr7_out = tf.nn.leaky_relu(drop1_out, alpha=0.01)
dense2_out = tf.layers.dense(lr7_out, 4) # kernel_initializer=initializer)
return dense2_out
def most_shallow_bayesian_convnet_init_fn(inputs, training=False, keep_prob = 0.95):
# TODO add more customization
initializer = tf.variance_scaling_initializer(scale=2.0)
# TODO gotta be a better way to do this?
# prob = tf.cond(training, lambda : 0.5, lambda : 1.0) #should i do some fancier tf stuff?
axis = -1
# NOTE ask waren if i need separate relus
conv1_out = tf.layers.conv3d(inputs, 2, kernel_size=16, padding='same',
kernel_initializer=initializer)
bd1_out = tf.layers.dropout(conv1_out, rate= 1-keep_prob, training = True)
bn1_out = tf.layers.batch_normalization(bd1_out, axis=axis, training=training)
lr1_out = tf.nn.leaky_relu(bn1_out, alpha=0.01)
ap1_out = tf.layers.average_pooling3d(lr1_out, pool_size=(10,10,10), strides=2)
flat_out = tf.layers.flatten(ap1_out)
dense1_out = tf.layers.dense(flat_out, 512) # kernel_initializer=initializer)
drop1_out = tf.layers.dropout(dense1_out, training=training)
lr7_out = tf.nn.leaky_relu(drop1_out, alpha=0.01)
dense2_out = tf.layers.dense(lr7_out, 4) # kernel_initializer=initializer)
return dense2_out
def standard_convnet_init_ob(inputs, training= False):
#TODO add more customization
initializer = tf.variance_scaling_initializer(scale=2.0)
# TODO gotta be a better way to do this?
prob = 0.5  # dropout rate; tf.keras.layers.Dropout is automatically disabled at inference
# NOTE ask waren if i need separate relus
layers = [ tf.keras.layers.Conv3D(2, input_shape=(64, 64, 64, 1), kernel_size=62, padding='same',
kernel_initializer=initializer, name='conv1'),
tf.keras.layers.BatchNormalization(name='bn1'),
tf.keras.layers.LeakyReLU(alpha=0.01),
tf.keras.layers.AveragePooling3D(pool_size=(31, 31, 31), strides = 2),
tf.keras.layers.Conv3D(12, kernel_size=(28, 28, 28), padding='same',
kernel_initializer=initializer),
#tf.keras.layers.BatchNormalization(),
tf.keras.layers.LeakyReLU(alpha=0.01),
tf.keras.layers.AveragePooling3D(pool_size=(14, 14, 14), strides = 2),
tf.keras.layers.Conv3D(64, kernel_size=(6, 6, 6), padding='same',
kernel_initializer=initializer),
tf.keras.layers.BatchNormalization(axis=1,
gamma_initializer=initializer),
tf.keras.layers.LeakyReLU(alpha=0.01),
tf.keras.layers.Conv3D(64, kernel_size=(4, 4, 4), padding='same',
kernel_initializer=initializer),
#tf.keras.layers.BatchNormalization(),
tf.keras.layers.LeakyReLU(alpha=0.01),
tf.keras.layers.Conv3D(128, kernel_size=(3, 3, 3), padding='same',
kernel_initializer=initializer),
#tf.keras.layers.BatchNormalization(),
tf.keras.layers.LeakyReLU(alpha=0.01),
tf.keras.layers.Conv3D(128, kernel_size=(2, 2, 2), padding='same',
kernel_initializer=initializer),
#tf.keras.layers.BatchNormalization(),
tf.keras.layers.LeakyReLU(alpha=0.01),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(1024, kernel_initializer=initializer),
tf.keras.layers.Dropout(rate=prob),
tf.keras.layers.LeakyReLU(alpha=0.01),
tf.keras.layers.Dense(256, kernel_initializer=initializer),
tf.keras.layers.Dropout(rate=prob),
tf.keras.layers.LeakyReLU(alpha=0.01),
tf.keras.layers.Dense(2, kernel_initializer=initializer),]
model = tf.keras.Sequential(layers)
return model(inputs)
def standard_optimizer_init_fn(lr = 0.0005):
return tf.train.AdamOptimizer(learning_rate=lr)
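The optimizer returned above is plain Adam with a 5e-4 learning rate. As a reference for the per-parameter update it applies, here is a scalar sketch of one Adam step (`lr` matches the default above; `eps` follows TensorFlow's 1e-8 default; all other names are local to this sketch):

```python
def adam_step(theta, grad, m, v, t, lr=0.0005, b1=0.9, b2=0.999, eps=1e-8):
    # one Adam update for a single scalar parameter at step t (1-indexed)
    m = b1 * m + (1 - b1) * grad          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment (uncentered variance) estimate
    m_hat = m / (1 - b1 ** t)             # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, m, v

theta, m, v = 1.0, 0.0, 0.0
theta, m, v = adam_step(theta, grad=2.0, m=m, v=v, t=1)
```

On the first step the bias correction makes the update magnitude approximately `lr * sign(grad)`, regardless of the gradient's scale.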
import time
import numpy
import pytest
import tensorflow as tf # noqa: F401
from skimage.data import camera
from skimage.exposure import rescale_intensity
from skimage.metrics import peak_signal_noise_ratio as psnr
from skimage.metrics import structural_similarity as ssim
from tensorflow.python.keras.backend import clear_session
from aydin.io import io
from aydin.io.datasets import normalise, add_noise, examples_single
from aydin.it.cnn import ImageTranslatorCNN
def test_it_cnn_history():
"""
Check if training history is properly recorded.
"""
start = time.time()
max_epochs = 2
data = numpy.zeros((64, 64))
it = ImageTranslatorCNN(
model_architecture="unet",
training_architecture='checkran',
nb_unet_levels=1,
patch_size=64,
batch_size=1,
mask_size=3,
total_num_patches=1,
patience=1,
max_epochs=max_epochs,
)
it.train(data, data)
history = it.loss_history
for key, val in history.history.items():
assert len(val) == max_epochs
assert len(history.epoch) == max_epochs
stop = time.time()
print(f"Total elapsed time: {stop - start} ")
clear_session()
def test_it_cnn_shiftconv_light():
"""
Demo for self-supervised denoising using camera image with synthetic noise
"""
start = time.time()
max_epochs = 30
image_width = 100
image = normalise(camera())
H0, W0 = (numpy.array(image.shape) - image_width) // 2
image = image[H0 : H0 + image_width, W0 : W0 + image_width]
noisy = add_noise(image)
print("noisy shape: ", noisy.shape)
it = ImageTranslatorCNN(
model_architecture="unet",
training_architecture='shiftconv',
nb_unet_levels=2,
batch_norm=None, # 'instance',
max_epochs=max_epochs,
)
it.train(noisy, noisy)
denoised = it.translate(noisy, tile_size=image_width)
image = numpy.clip(image, 0, 1)
noisy = numpy.clip(noisy.reshape(image.shape), 0, 1)
denoised = numpy.clip(denoised, 0, 1)
psnr_noisy = psnr(noisy, image)
ssim_noisy = ssim(noisy, image)
print("noisy", psnr_noisy, ssim_noisy)
psnr_denoised = psnr(denoised, image)
ssim_denoised = ssim(denoised, image)
print("denoised", psnr_denoised, ssim_denoised)
stop = time.time()
print(f"Total elapsed time: {stop - start} ")
assert psnr_denoised > psnr_noisy and ssim_denoised > ssim_noisy
clear_session()
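The PSNR used in these assertions reduces to a one-liner over the mean squared error once the images are clipped to [0, 1], as the tests do. A minimal sketch over flat sequences (not the skimage implementation, which handles arrays and data ranges):

```python
import math

def psnr_01(a, b):
    # peak signal-to-noise ratio for equal-length sequences of values in [0, 1]
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return float('inf') if mse == 0 else -10 * math.log10(mse)

clean = [0.0, 0.5, 1.0, 0.25]
noisy = [0.1, 0.4, 0.9, 0.35]
print(round(psnr_01(clean, noisy), 2))  # prints 20.0
```

Higher is better, so the tests simply require `psnr(denoised) > psnr(noisy)` against the clean image.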
def test_it_cnn_checkerbox_light():
"""
Demo for self-supervised denoising using camera image with synthetic noise
"""
start = time.time()
max_epochs = 5
image_width = 100
image = normalise(camera())
H0, W0 = (numpy.array(image.shape) - image_width) // 2
image = image[H0 : H0 + image_width, W0 : W0 + image_width]
noisy = add_noise(image)
it = ImageTranslatorCNN(
model_architecture="unet",
training_architecture='checkerbox',
nb_unet_levels=2,
mask_size=3,
batch_norm='instance',
max_epochs=max_epochs,
)
it.train(noisy, noisy)
denoised = it.translate(noisy, tile_size=image_width)
image = numpy.clip(image, 0, 1)
noisy = numpy.clip(noisy.reshape(image.shape), 0, 1)
denoised = numpy.clip(denoised, 0, 1)
psnr_noisy = psnr(noisy, image)
ssim_noisy = ssim(noisy, image)
print("noisy", psnr_noisy, ssim_noisy)
psnr_denoised = psnr(denoised, image)
ssim_denoised = ssim(denoised, image)
print("denoised", psnr_denoised, ssim_denoised)
stop = time.time()
print(f"Total elapsed time: {stop - start} ")
assert psnr_denoised > psnr_noisy and ssim_denoised > ssim_noisy
clear_session()
def test_it_cnn_random_light():
"""
Demo for self-supervised denoising using camera image with synthetic noise
"""
start = time.time()
max_epochs = 5
image_width = 100
image = normalise(camera())
H0, W0 = (numpy.array(image.shape) - image_width) // 2
image = image[H0 : H0 + image_width, W0 : W0 + image_width]
noisy = add_noise(image)
it = ImageTranslatorCNN(
model_architecture="unet",
training_architecture='random',
nb_unet_levels=2,
batch_norm='instance',
max_epochs=max_epochs,
)
it.train(noisy, noisy)
denoised = it.translate(noisy, tile_size=image_width)
image = numpy.clip(image, 0, 1)
noisy = numpy.clip(noisy.reshape(image.shape), 0, 1)
denoised = numpy.clip(denoised, 0, 1)
psnr_noisy = psnr(noisy, image)
ssim_noisy = ssim(noisy, image)
print("noisy", psnr_noisy, ssim_noisy)
psnr_denoised = psnr(denoised, image)
ssim_denoised = ssim(denoised, image)
print("denoised", psnr_denoised, ssim_denoised)
stop = time.time()
print(f"Total elapsed time: {stop - start} ")
assert psnr_denoised > psnr_noisy * 0.9 and ssim_denoised > ssim_noisy * 0.9
clear_session()
def test_it_cnn_checkran_light():
"""
Demo for self-supervised denoising using camera image with synthetic noise
"""
start = time.time()
max_epochs = 5
image_width = 100
image = normalise(camera())
H0, W0 = (numpy.array(image.shape) - image_width) // 2
image = image[H0 : H0 + image_width, W0 : W0 + image_width]
# Test with arbitrary input shape
arbitrary_shape = (1, 1) + image.shape
batch_dims = tuple([True if i == 1 else False for i in arbitrary_shape])
image = image.reshape(arbitrary_shape)
noisy = add_noise(image)
it = ImageTranslatorCNN(
model_architecture="unet",
training_architecture='checkran',
nb_unet_levels=2,
mask_size=3,
batch_norm='instance',
max_epochs=max_epochs,
)
it.train(noisy, noisy, batch_axes=batch_dims)
denoised = it.translate(noisy, tile_size=image_width, batch_axes=batch_dims)
assert denoised.shape == noisy.shape
denoised = denoised.squeeze()
noisy = noisy.squeeze()
image = image.squeeze()
image = numpy.clip(image, 0, 1)
noisy = numpy.clip(noisy.reshape(image.shape), 0, 1)
denoised = numpy.clip(denoised, 0, 1)
psnr_noisy = psnr(noisy, image)
ssim_noisy = ssim(noisy, image)
print("noisy", psnr_noisy, ssim_noisy)
psnr_denoised = psnr(denoised, image)
ssim_denoised = ssim(denoised, image)
print("denoised", psnr_denoised, ssim_denoised)
stop = time.time()
print(f"Total elapsed time: {stop - start} ")
assert psnr_denoised > psnr_noisy and ssim_denoised > ssim_noisy
clear_session()
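The `batch_dims` construction in the test above generalizes to any shape: singleton axes are flagged as batch dimensions. A minimal standalone check of that idiom:

```python
def batch_axes_for(shape):
    # flag size-1 axes as batch dimensions, mirroring the tuple comprehension
    # used in test_it_cnn_checkran_light
    return tuple(s == 1 for s in shape)

print(batch_axes_for((1, 1, 100, 100)))  # prints (True, True, False, False)
```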
def test_it_cnn_jinet2D_light():
"""
Demo for self-supervised denoising using camera image with synthetic noise
"""
start = time.time()
max_epochs = 30
image_width = 100
image = normalise(camera())
H0, W0 = (numpy.array(image.shape) - image_width) // 2
image = image[H0 : H0 + image_width, W0 : W0 + image_width]
noisy = add_noise(image)
it = ImageTranslatorCNN(
model_architecture='jinet', patch_size=image_width, max_epochs=max_epochs
)
it.train(noisy, noisy)
denoised = it.translate(noisy, tile_size=image_width)
image = numpy.clip(image, 0, 1)
noisy = numpy.clip(noisy.reshape(image.shape), 0, 1)
denoised = numpy.clip(denoised, 0, 1)
psnr_noisy = psnr(noisy, image)
ssim_noisy = ssim(noisy, image)
print("noisy", psnr_noisy, ssim_noisy)
psnr_denoised = psnr(denoised, image)
ssim_denoised = ssim(denoised, image)
print("denoised", psnr_denoised, ssim_denoised)
stop = time.time()
print(f"Total elapsed time: {stop - start} s")
assert psnr_denoised > psnr_noisy and ssim_denoised > ssim_noisy
clear_session()
def test_it_cnn_jinet2D_supervised_light():
"""
Demo for supervised denoising using camera image with synthetic noise
"""
start = time.time()
max_epochs = 30
image_width = 100
image = normalise(camera())
H0, W0 = (numpy.array(image.shape) - image_width) // 2
image = image[H0 : H0 + image_width, W0 : W0 + image_width]
noisy = add_noise(image)
it = ImageTranslatorCNN(
model_architecture='jinet', patch_size=image_width, max_epochs=max_epochs
)
it.train(noisy, image)
denoised = it.translate(noisy, tile_size=image_width)
image = numpy.clip(image, 0, 1)
noisy = numpy.clip(noisy.reshape(image.shape), 0, 1)
denoised = numpy.clip(denoised, 0, 1)
psnr_noisy = psnr(noisy, image)
ssim_noisy = ssim(noisy, image)
print("noisy", psnr_noisy, ssim_noisy)
psnr_denoised = psnr(denoised, image)
ssim_denoised = ssim(denoised, image)
print("denoised", psnr_denoised, ssim_denoised)
stop = time.time()
print(f"Total elapsed time: {stop - start} s")
assert psnr_denoised > psnr_noisy and ssim_denoised > ssim_noisy
clear_session()
def test_it_cnn_jinet3D_light():
"""
Demo for self-supervised denoising using a 3D HCR image with synthetic noise
"""
start = time.time()
max_epochs = 30
image_width = 64
image_path = examples_single.royerlab_hcr.get_path()
image, metadata = io.imread(image_path)
image = image[10:20, 1:2, 100 : 100 + image_width, 200 : 200 + image_width]
image = rescale_intensity(
image.astype(numpy.float32), in_range='image', out_range=(0, 1)
)
noisy = add_noise(image)
it = ImageTranslatorCNN(
model_architecture='jinet', patch_size=image_width, max_epochs=max_epochs
)
it.train(
noisy, noisy, batch_axes=metadata.batch_axes, channel_axes=metadata.channel_axes
)
denoised = it.translate(
noisy,
tile_size=image_width,
batch_axes=metadata.batch_axes,
channel_axes=metadata.channel_axes,
)
image = numpy.clip(image, 0, 1)
noisy = numpy.clip(noisy.reshape(image.shape), 0, 1)
denoised = numpy.clip(denoised, 0, 1)
noisy = numpy.squeeze(noisy)
image = numpy.squeeze(image)
denoised = numpy.squeeze(denoised)
psnr_noisy = psnr(noisy, image)
ssim_noisy = ssim(noisy, image)
print("noisy", psnr_noisy, ssim_noisy)
psnr_denoised = psnr(denoised, image)
ssim_denoised = ssim(denoised, image)
print("denoised", psnr_denoised, ssim_denoised)
stop = time.time()
print(f"Total elapsed time: {stop - start} s")
assert psnr_denoised > (psnr_noisy * 0.5) and ssim_denoised > (ssim_noisy * 0.5)
clear_session()
def test_it_cnn_jinet3D_supervised_light():
"""
Demo for supervised denoising using a 3D HCR image with synthetic noise
"""
start = time.time()
max_epochs = 30
image_width = 64
image_path = examples_single.royerlab_hcr.get_path()
image, metadata = io.imread(image_path)
image = image[10:20, 1:2, 100 : 100 + image_width, 200 : 200 + image_width]
image = rescale_intensity(
image.astype(numpy.float32), in_range='image', out_range=(0, 1)
)
noisy = add_noise(image)
it = ImageTranslatorCNN(
model_architecture='jinet', patch_size=image_width, max_epochs=max_epochs
)
it.train(
noisy, image, batch_axes=metadata.batch_axes, channel_axes=metadata.channel_axes
)
denoised = it.translate(
noisy,
tile_size=image_width,
batch_axes=metadata.batch_axes,
channel_axes=metadata.channel_axes,
)
image = numpy.clip(image, 0, 1)
noisy = numpy.clip(noisy.reshape(image.shape), 0, 1)
denoised = numpy.clip(denoised, 0, 1)
noisy = numpy.squeeze(noisy)
image = numpy.squeeze(image)
denoised = numpy.squeeze(denoised)
psnr_noisy = psnr(noisy, image)
ssim_noisy = ssim(noisy, image, multichannel=True)
print("noisy", psnr_noisy, ssim_noisy)
psnr_denoised = psnr(denoised, image)
ssim_denoised = ssim(denoised, image, multichannel=True)
print("denoised", psnr_denoised, ssim_denoised)
stop = time.time()
print(f"Total elapsed time: {stop - start} s")
assert psnr_denoised > (psnr_noisy * 0.5) and ssim_denoised > (ssim_noisy * 0.5)
clear_session()
@pytest.mark.heavy
def test_it_cnn_random_patching():
"""
Demo for self-supervised denoising using camera image with synthetic noise
"""
start = time.time()
max_epochs = 16
image_width = 100
image = normalise(camera())
H0, W0 = (numpy.array(image.shape) - image_width) // 2
image = image[H0 : H0 + image_width, W0 : W0 + image_width]
noisy = add_noise(image)
it = ImageTranslatorCNN(
training_architecture='random',
nb_unet_levels=2,
batch_norm='instance',
max_epochs=max_epochs,
patch_size=64,
)
it.train(noisy, noisy)
denoised = it.translate(noisy, tile_size=image_width)
image = numpy.clip(image, 0, 1)
noisy = numpy.clip(noisy.reshape(image.shape), 0, 1)
denoised = numpy.clip(denoised, 0, 1)
psnr_noisy = psnr(noisy, image)
ssim_noisy = ssim(noisy, image)
print("noisy", psnr_noisy, ssim_noisy)
psnr_denoised = psnr(denoised, image)
ssim_denoised = ssim(denoised, image)
print("denoised", psnr_denoised, ssim_denoised)
stop = time.time()
print(f"Total elapsed time: {stop - start} s")
assert psnr_denoised > psnr_noisy and ssim_denoised > ssim_noisy
from microsetta_private_api.model.sample import Sample
from microsetta_private_api.model.source import Source
from microsetta_private_api.model.account import Account
__all__ = ['Sample', 'Source', 'Account']
from .mecab import MeCabError
from .mecab import MeCab
from .add_userdict import update_custom_dictionary
# -*- coding: utf-8 -*-
import entity.cards.BARL_017H.LETL_470
import entity.cards.BARL_017H.LETL_471
import entity.cards.BARL_017H.LETL_472
import entity.cards.BARL_017H.LETL_706
import entity.cards.BARL_017H.LETL_707
import entity.cards.BARL_017H.LETL_709
import unittest
import functools as ft
import itertools as it
from apex import amp
from apex.amp import _amp_state
import torch
from torch import nn
import torch.nn.functional as F
from torch.nn import Parameter
from utils import common_init, HALF, FLOAT,\
ALWAYS_HALF, ALWAYS_FLOAT, MATCH_INPUT
try:
import amp_C
disabled = False
from apex.optimizers import FusedSGD as FusedSGD
except ImportError as err:
print("amp_C fused kernels unavailable, disabling TestMultiTensorApply. ImportError was ", err)
disabled = True
class MyModel(torch.nn.Module):
def __init__(self, unique):
super(MyModel, self).__init__()
self.weight0 = Parameter(unique +
torch.arange(2, device='cuda', dtype=torch.float32))
self.weight1 = Parameter(1. + unique + torch.arange(2, device='cuda', dtype=torch.float16))
@staticmethod
def ops(input, weight0, weight1):
return ((input*(weight0.float()))*(weight1.float())).sum()
def forward(self, input):
return self.ops(input, self.weight0, self.weight1)
# Abandon all hope, ye who enter here.
# This is hands down the ugliest code I have ever written, but it succeeds in testing
# multiple models/optimizers/losses fairly thoroughly. Many of the different test cases
# require slightly divergent code in a way that seems near-impossible to genericize into a simple
# cross product or nested loops.
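# A fully genericized driver would reduce to something like the sketch below
# (hypothetical; `run_case` does not exist, and the per-case divergences are
# exactly what resists this factoring):
#
#     for case in it.product((False, True),                   # materialize_master_grads
#                            ("O0", "O1", "O2", "O3"),        # opt_level
#                            ("none", "model", "optimizer"),  # how_to_zero
#                            (False, True)):                  # use_multiple_loss_scalers
#         run_case(*case)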
class TestMultipleModelsOptimizersLosses(unittest.TestCase):
def setUp(self):
self.x = torch.ones((2), device='cuda', dtype=torch.float32)
common_init(self)
def tearDown(self):
pass
@unittest.skipIf(disabled, "amp_C is unavailable")
def test_2models2losses1optimizer(self):
model0 = MyModel(1)
model1 = MyModel(2)
optimizer = torch.optim.SGD([{'params' : model0.parameters(), 'lr' : 0.25},
{'params' : model1.parameters(), 'lr' : 0.5}],
momentum=0.125)
reference_grads = []
for i in range(2):
optimizer.zero_grad()
loss0 = model0(self.x)
loss1 = model1(self.x)
loss0.backward()
loss1.backward()
reference_grads.append([param.grad.data.clone() for param in model0.parameters()] +
[param.grad.data.clone() for param in model1.parameters()])
optimizer.step()
final_params = [param.data.clone() for param in model0.parameters()] + \
[param.data.clone() for param in model1.parameters()]
for materialize_master_grads in (False, True):
for opt_level in ("O0", "O1", "O2", "O3"):
for how_to_zero in ("none", "model", "optimizer"):
for use_multiple_loss_scalers in (False, True):
if opt_level == "O1" or opt_level == "O2":
inject_inf_iters = (-1, 0, 1)
else:
inject_inf_iters = (-1,)
for inject_inf in inject_inf_iters:
if inject_inf >= 0:
inject_inf_locs = ("fp16", "fp32")
which_backwards = (0, 1)
else:
inject_inf_locs = ("fdsa",)
which_backwards = (None,)
for inject_inf_loc in inject_inf_locs:
for which_backward in which_backwards:
if use_multiple_loss_scalers:
num_losses = 2
loss_ids = [0, 1]
else:
num_losses = 1
loss_ids = [0, 0]
if inject_inf >= 0:
iters = 3
else:
iters = 2
model0 = MyModel(1)
model1 = MyModel(2)
models = [model0, model1]
optimizer = FusedSGD([{'params' : model0.parameters(), 'lr' : 0.25},
{'params' : model1.parameters(), 'lr' : 0.5}],
momentum=0.125,
materialize_master_grads=materialize_master_grads)
_amp_state.allow_incoming_model_not_fp32 = True
[model0, model1], optimizer = amp.initialize(
[model0, model1],
optimizer,
opt_level=opt_level,
verbosity=0,
cast_model_type=False,
num_losses=num_losses)
_amp_state.allow_incoming_model_not_fp32 = False
_amp_state.loss_scalers[0]._loss_scale = 4.0
if use_multiple_loss_scalers:
_amp_state.loss_scalers[1]._loss_scale = 16.0
unskipped = 0
for i in range(iters):
if how_to_zero == "none":
for model in models:
for param in model.parameters():
param.grad = None
elif how_to_zero == "model":
for model in models:
model.zero_grad()
else:
optimizer.zero_grad()
loss0 = model0(self.x)
loss1 = model1(self.x)
with amp.scale_loss(loss0, optimizer, loss_id=loss_ids[0]) as scaled_loss:
scaled_loss.backward()
if i == inject_inf and which_backward == 0:
if inject_inf_loc == "fp32":
model0.weight0.grad[0] = float('inf')
elif inject_inf_loc == "fp16":
model0.weight1.grad[0] = float('inf')
with amp.scale_loss(loss1, optimizer, loss_id=loss_ids[1]) as scaled_loss:
scaled_loss.backward()
if i == inject_inf and which_backward == 1:
if inject_inf_loc == "fp32":
model1.weight0.grad[0] = float('inf')
elif inject_inf_loc == "fp16":
model1.weight1.grad[0] = float('inf')
if i != inject_inf:
master_params = amp.master_params(optimizer)
for param, reference_grad in zip(master_params, reference_grads[unskipped]):
if opt_level == "O2" and not materialize_master_grads:
continue
else:
self.assertTrue(torch.allclose(param.grad.float(), reference_grad.float()),
"opt_level {} i {} inject_inf {} which_backward {} inject_inf_loc {} use_multiple_loss_scalers {}".format(opt_level, i, inject_inf, which_backward, inject_inf_loc, use_multiple_loss_scalers))
unskipped += 1
optimizer.step()
model_params = [p for p in model0.parameters()] + [p for p in model1.parameters()]
for model, master, reference in zip(
model_params,
amp.master_params(optimizer),
final_params):
self.assertTrue(torch.allclose(model, reference))
self.assertTrue(torch.allclose(model, master.to(model.dtype)))
if opt_level == "O1":
_amp_state.handle._deactivate()
@unittest.skipIf(disabled, "amp_C is unavailable")
def test_3models2losses1optimizer(self):
model0 = MyModel(1)
model1 = MyModel(2)
model2 = MyModel(3)
optimizer = torch.optim.SGD([{'params' : model0.parameters(), 'lr' : 0.25},
{'params' : model1.parameters(), 'lr' : 0.5},
{'params' : model2.parameters(), 'lr' : 0.125}],
momentum=0.125)
reference_grads = []
for i in range(2):
optimizer.zero_grad()
loss0 = model0(self.x) + model2(self.x)
loss1 = model1(self.x) + model2(self.x)
loss0.backward()
loss1.backward()
reference_grads.append([param.grad.data.clone() for param in model0.parameters()] +
[param.grad.data.clone() for param in model1.parameters()] +
[param.grad.data.clone() for param in model2.parameters()])
optimizer.step()
final_params = [param.data.clone() for param in model0.parameters()] + \
[param.data.clone() for param in model1.parameters()] + \
[param.data.clone() for param in model2.parameters()]
for materialize_master_grads in (False, True):
for opt_level in ("O0", "O1", "O2", "O3"):
for how_to_zero in ("none", "model", "optimizer"):
for use_multiple_loss_scalers in (False, True):
if opt_level == "O1" or opt_level == "O2":
inject_inf_iters = (-1, 0, 1)
else:
inject_inf_iters = (-1,)
for inject_inf in inject_inf_iters:
if inject_inf >= 0:
inject_inf_locs = ("fp16", "fp32")
which_backwards = (0, 1)
else:
inject_inf_locs = ("fdsa",)
which_backwards = (None,)
for inject_inf_loc in inject_inf_locs:
for which_backward in which_backwards:
if use_multiple_loss_scalers:
num_losses = 2
loss_ids = [0, 1]
else:
num_losses = 1
loss_ids = [0, 0]
if inject_inf >= 0:
iters = 3
if which_backward == 0:
which_models = (0, 2)
elif which_backward == 1:
which_models = (1, 2)
else:
iters = 2
which_models = (None,)
for which_model in which_models:
model0 = MyModel(1)
model1 = MyModel(2)
model2 = MyModel(3)
models = [model0, model1, model2]
optimizer = FusedSGD([{'params' : model0.parameters(), 'lr' : 0.25},
{'params' : model1.parameters(), 'lr' : 0.5},
{'params' : model2.parameters(), 'lr' : 0.125}],
momentum=0.125,
materialize_master_grads=materialize_master_grads)
_amp_state.allow_incoming_model_not_fp32 = True
[model0, model1, model2], optimizer = amp.initialize(
[model0, model1, model2],
optimizer,
opt_level=opt_level,
verbosity=0,
cast_model_type=False,
num_losses=num_losses)
_amp_state.allow_incoming_model_not_fp32 = False
_amp_state.loss_scalers[0]._loss_scale = 4.0
if use_multiple_loss_scalers:
_amp_state.loss_scalers[1]._loss_scale = 16.0
unskipped = 0
for i in range(iters):
if how_to_zero == "none":
for model in models:
for param in model.parameters():
param.grad = None
elif how_to_zero == "model":
for model in models:
model.zero_grad()
else:
optimizer.zero_grad()
loss0 = model0(self.x) + model2(self.x)
loss1 = model1(self.x) + model2(self.x)
with amp.scale_loss(loss0, optimizer, loss_id=loss_ids[0]) as scaled_loss:
scaled_loss.backward()
if i == inject_inf and which_backward == 0:
if which_model == 0:
inj_model = model0
elif which_model == 2:
inj_model = model2
else:
raise RuntimeError("{} invalid for loss 0".format(which_model))
if inject_inf_loc == "fp32":
inj_model.weight0.grad[0] = float('inf')
elif inject_inf_loc == "fp16":
inj_model.weight1.grad[0] = float('inf')
with amp.scale_loss(loss1, optimizer, loss_id=loss_ids[1]) as scaled_loss:
scaled_loss.backward()
if i == inject_inf and which_backward == 1:
if which_model == 1:
inj_model = model1
elif which_model == 2:
inj_model = model2
else:
raise RuntimeError("{} invalid for loss 1".format(which_model))
if inject_inf_loc == "fp32":
inj_model.weight0.grad[0] = float('inf')
elif inject_inf_loc == "fp16":
inj_model.weight1.grad[0] = float('inf')
if i != inject_inf:
master_params = amp.master_params(optimizer)
for param, reference_grad in zip(master_params, reference_grads[unskipped]):
if opt_level == "O2" and not materialize_master_grads:
continue
else:
self.assertTrue(torch.allclose(param.grad.float(), reference_grad.float()),
"opt_level {} i {} inject_inf {} which_backward {} inject_inf_loc {} which_model {} use_multiple_loss_scalers {}".format(opt_level, i, inject_inf, which_backward, inject_inf_loc, which_model, use_multiple_loss_scalers))
unskipped += 1
optimizer.step()
model_params = [p for p in model0.parameters()] + \
[p for p in model1.parameters()] + \
[p for p in model2.parameters()]
for model, master, reference in zip(
model_params,
amp.master_params(optimizer),
final_params):
self.assertTrue(torch.allclose(model, reference))
self.assertTrue(torch.allclose(model, master.to(model.dtype)))
if opt_level == "O1":
_amp_state.handle._deactivate()
@unittest.skipIf(disabled, "amp_C is unavailable")
def test_2models2losses2optimizers(self):
model0 = MyModel(1)
model1 = MyModel(2)
optimizer0 = torch.optim.SGD([{'params' : model0.parameters(), 'lr' : 0.25}],
momentum=0.125)
optimizer1 = torch.optim.SGD([{'params' : model1.parameters(), 'lr' : 0.5}],
momentum=0.25)
# Don't do it like this: reference_grads = [[]]*5
# because then it creates a list of 5 references to the same "[]" and appending
# to any of them effectively makes you append to all of them, which multiplies
# the resulting size of reference_grads by 5x and needless to say makes the test fail.
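# A minimal illustration of that aliasing pitfall (names are illustrative):
#     aliased = [[]] * 2                 # two references to the SAME list
#     aliased[0].append(1)               # aliased is now [[1], [1]]
#     separate = [[] for _ in range(2)]  # two independent lists
#     separate[0].append(1)              # separate is now [[1], []]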
reference_grads = [[], [], [], [], []]
final_params = [None, None, None, None, None]
for i in range(2):
optimizer0.zero_grad()
optimizer1.zero_grad()
loss0 = model0(self.x)
loss1 = model1(self.x)
loss0.backward()
loss1.backward()
reference_grads[0].append([param.grad.data.clone() for param in model0.parameters()] +
[param.grad.data.clone() for param in model1.parameters()])
optimizer0.step()
optimizer1.step()
final_params[0] = [param.data.clone() for param in model0.parameters()] + \
[param.data.clone() for param in model1.parameters()]
def what_got_skipped(which_iter, which_backward):
if which_iter == 0 and which_backward == 0:
return 1
if which_iter == 0 and which_backward == 1:
return 2
if which_iter == 1 and which_backward == 0:
return 3
if which_iter == 1 and which_backward == 1:
return 4
return 0
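# For example, what_got_skipped(1, 0) == 3: reference_grads[3] and final_params[3]
# hold the reference run in which optimizer0's step was skipped on iteration 1.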
for which_iter in (0,1):
for which_backward in (0,1):
model0 = MyModel(1)
model1 = MyModel(2)
optimizer0 = torch.optim.SGD([{'params' : model0.parameters(), 'lr' : 0.25}],
momentum=0.125)
optimizer1 = torch.optim.SGD([{'params' : model1.parameters(), 'lr' : 0.5}],
momentum=0.25)
for i in range(3):
optimizer0.zero_grad()
optimizer1.zero_grad()
loss0 = model0(self.x)
loss1 = model1(self.x)
loss0.backward()
loss1.backward()
if i != which_iter:
reference_grads[what_got_skipped(which_iter, which_backward)].append(
[param.grad.data.clone() for param in model0.parameters()] +
[param.grad.data.clone() for param in model1.parameters()])
if i == which_iter:
if which_backward == 0:
optimizer1.step()
else:
optimizer0.step()
else:
optimizer0.step()
optimizer1.step()
final_params[what_got_skipped(which_iter, which_backward)] = \
[param.data.clone() for param in model0.parameters()] + \
[param.data.clone() for param in model1.parameters()]
for materialize_master_grads in (False, True):
for opt_level in ("O0", "O1", "O2", "O3"):
for how_to_zero in ("none", "model", "optimizer"):
for use_multiple_loss_scalers in (False, True):
if opt_level == "O1" or opt_level == "O2":
inject_inf_iters = (-1, 0, 1)
else:
inject_inf_iters = (-1,)
for inject_inf in inject_inf_iters:
if inject_inf >= 0:
inject_inf_locs = ("fp16", "fp32")
which_backwards = (0, 1)
else:
inject_inf_locs = ("fdsa",)
which_backwards = (None,)
for inject_inf_loc in inject_inf_locs:
for which_backward in which_backwards:
if use_multiple_loss_scalers:
num_losses = 2
loss_ids = [0, 1]
else:
num_losses = 1
loss_ids = [0, 0]
if inject_inf >= 0:
iters = 3
else:
iters = 2
model0 = MyModel(1)
model1 = MyModel(2)
models = [model0, model1]
optimizer0 = FusedSGD([{'params' : model0.parameters(), 'lr' : 0.25}],
momentum=0.125, materialize_master_grads=materialize_master_grads)
optimizer1 = FusedSGD([{'params' : model1.parameters(), 'lr' : 0.5}],
momentum=0.25, materialize_master_grads=materialize_master_grads)
_amp_state.allow_incoming_model_not_fp32 = True
[model0, model1], [optimizer0, optimizer1] = amp.initialize(
[model0, model1],
[optimizer0, optimizer1],
opt_level=opt_level,
verbosity=0,
cast_model_type=False,
num_losses=num_losses)
_amp_state.allow_incoming_model_not_fp32 = False
_amp_state.loss_scalers[0]._loss_scale = 4.0
if use_multiple_loss_scalers:
_amp_state.loss_scalers[1]._loss_scale = 16.0
unskipped = 0
for i in range(iters):
if how_to_zero == "none":
for model in models:
for param in model.parameters():
param.grad = None
elif how_to_zero == "model":
for model in models:
model.zero_grad()
else:
optimizer0.zero_grad()
optimizer1.zero_grad()
loss0 = model0(self.x)
loss1 = model1(self.x)
with amp.scale_loss(loss0, optimizer0, loss_id=loss_ids[0]) as scaled_loss:
scaled_loss.backward()
if i == inject_inf and which_backward == 0:
if inject_inf_loc == "fp32":
model0.weight0.grad[0] = float('inf')
elif inject_inf_loc == "fp16":
model0.weight1.grad[0] = float('inf')
with amp.scale_loss(loss1, optimizer1, loss_id=loss_ids[1]) as scaled_loss:
scaled_loss.backward()
if i == inject_inf and which_backward == 1:
if inject_inf_loc == "fp32":
model1.weight0.grad[0] = float('inf')
elif inject_inf_loc == "fp16":
model1.weight1.grad[0] = float('inf')
# print("opt_level {} i {} inject_inf {} which_backward {} inject_inf_loc {} use_multiple_loss_scalers {}".format(opt_level, i, inject_inf, which_backward, inject_inf_loc, use_multiple_loss_scalers))
if i != inject_inf:
master_params = list(amp.master_params(optimizer0)) + \
list(amp.master_params(optimizer1))
for param, reference_grad in zip(master_params,
reference_grads[what_got_skipped(inject_inf, which_backward)][unskipped]):
if opt_level == "O2" and not materialize_master_grads:
continue
else:
self.assertTrue(torch.allclose(param.grad.float(), reference_grad.float()))
unskipped += 1
optimizer0.step()
optimizer1.step()
model_params = [p for p in model0.parameters()] + [p for p in model1.parameters()]
master_params = [p for p in amp.master_params(optimizer0)] + \
[p for p in amp.master_params(optimizer1)]
for model, master, reference in zip(
model_params,
master_params,
final_params[what_got_skipped(inject_inf, which_backward)]):
self.assertTrue(torch.allclose(model, reference))
self.assertTrue(torch.allclose(model, master.to(model.dtype)))
if opt_level == "O1":
_amp_state.handle._deactivate()
@unittest.skipIf(disabled, "amp_C is unavailable")
def test_3models2losses2optimizers(self):
model0 = MyModel(1)
model1 = MyModel(2)
model2 = MyModel(3)
optimizer0 = torch.optim.SGD([{'params' : model0.parameters(), 'lr' : 0.25},
{'params' : model1.parameters(), 'lr' : 1.0}],
momentum=0.5)
optimizer1 = torch.optim.SGD([{'params' : model2.parameters(), 'lr' : 0.5}],
momentum=0.25)
# Again, can't do this: reference_grads = [[]]*9
reference_grads = [[], [], [], [], [], [], [], [], []]
final_params = [None, None, None, None, None, None, None, None, None]
for i in range(2):
optimizer0.zero_grad()
optimizer1.zero_grad()
loss0 = model0(self.x) + model1(self.x)
loss1 = model2(self.x) + model1(self.x)
loss0.backward()
loss1.backward()
reference_grads[0].append([param.grad.data.clone() for param in model0.parameters()] +
[param.grad.data.clone() for param in model1.parameters()])
optimizer0.step()
optimizer1.step()
final_params[0] = \
[param.data.clone() for param in model0.parameters()] + \
[param.data.clone() for param in model1.parameters()] + \
[param.data.clone() for param in model2.parameters()]
def what_got_skipped(which_iter, which_backward, which_model):
if which_iter == 0:
if which_backward == 0:
if which_model == 0:
return 1
if which_model == 1:
return 2
if which_backward == 1:
if which_model == 2:
return 3
if which_model == 1:
return 4
if which_iter == 1:
if which_backward == 0:
if which_model == 0:
return 5
if which_model == 1:
return 6
if which_backward == 1:
if which_model == 2:
return 7
if which_model == 1:
return 8
return 0
for which_iter in (0,1):
for which_backward in (0,1):
if which_backward == 0:
which_models = (0,1)
if which_backward == 1:
which_models = (2,1)
for which_model in which_models:
model0 = MyModel(1)
model1 = MyModel(2)
model2 = MyModel(3)
optimizer0 = torch.optim.SGD([{'params' : model0.parameters(), 'lr' : 0.25},
{'params' : model1.parameters(), 'lr' : 1.0}],
momentum=0.5)
optimizer1 = torch.optim.SGD([{'params' : model2.parameters(), 'lr' : 0.5}],
momentum=0.25)
for i in range(3):
optimizer0.zero_grad()
optimizer1.zero_grad()
loss0 = model0(self.x) + model1(self.x)
loss1 = model2(self.x) + model1(self.x)
loss0.backward()
loss1.backward()
if i != which_iter:
reference_grads[what_got_skipped(which_iter,
which_backward, which_model)].append(
[param.grad.data.clone() for param in model0.parameters()] +
[param.grad.data.clone() for param in model1.parameters()])
if i == which_iter:
if which_backward == 0:
# if which_model == 0:
optimizer1.step()
# if which_model == 1:
# optimizer1.step()
if which_backward == 1:
# if which_model == 2:
# optimizer0.step()
# if which_model == 1:
continue
else:
optimizer0.step()
optimizer1.step()
final_params[what_got_skipped(which_iter, which_backward, which_model)] = \
[param.data.clone() for param in model0.parameters()] + \
[param.data.clone() for param in model1.parameters()] + \
[param.data.clone() for param in model2.parameters()]
for materialize_master_grads in (False, True):
for opt_level in ("O0", "O1", "O2", "O3"):
for how_to_zero in ("none", "model", "optimizer"):
for use_multiple_loss_scalers in (False, True):
if opt_level == "O1" or opt_level == "O2":
inject_inf_iters = (-1, 0, 1)
else:
inject_inf_iters = (-1,)
for inject_inf in inject_inf_iters:
if inject_inf >= 0:
inject_inf_locs = ("fp16", "fp32")
which_backwards = (0, 1)
else:
inject_inf_locs = ("fdsa",)
which_backwards = (None,)
for inject_inf_loc in inject_inf_locs:
for which_backward in which_backwards:
if use_multiple_loss_scalers:
num_losses = 2
loss_ids = [0, 1]
else:
num_losses = 1
loss_ids = [0, 0]
if inject_inf >= 0:
iters = 3
if which_backward == 0:
which_models = (0, 1)
elif which_backward == 1:
which_models = (2, 1)
else:
iters = 2
which_models = (None,)
for which_model in which_models:
model0 = MyModel(1)
model1 = MyModel(2)
model2 = MyModel(3)
models = [model0, model1, model2]
optimizer0 = FusedSGD([{'params' : model0.parameters(), 'lr' : 0.25},
{'params' : model1.parameters(), 'lr' : 1.0}],
momentum=0.5, materialize_master_grads=materialize_master_grads)
optimizer1 = FusedSGD([{'params' : model2.parameters(), 'lr' : 0.5}],
momentum=0.25, materialize_master_grads=materialize_master_grads)
_amp_state.allow_incoming_model_not_fp32 = True
[model0, model1, model2], [optimizer0, optimizer1] = amp.initialize(
[model0, model1, model2],
[optimizer0, optimizer1],
opt_level=opt_level,
verbosity=0,
cast_model_type=False,
num_losses=num_losses)
_amp_state.allow_incoming_model_not_fp32 = False
_amp_state.loss_scalers[0]._loss_scale = 4.0
if use_multiple_loss_scalers:
_amp_state.loss_scalers[1]._loss_scale = 16.0
unskipped = 0
for i in range(iters):
if how_to_zero == "none":
for model in models:
for param in model.parameters():
param.grad = None
elif how_to_zero == "model":
for model in models:
model.zero_grad()
else:
optimizer0.zero_grad()
optimizer1.zero_grad()
loss0 = model0(self.x) + model1(self.x)
loss1 = model2(self.x) + model1(self.x)
with amp.scale_loss(loss0, optimizer0, loss_id=loss_ids[0]) as scaled_loss:
scaled_loss.backward()
if i == inject_inf and which_backward == 0:
if which_model == 0:
inj_model = model0
elif which_model == 1:
inj_model = model1
else:
raise RuntimeError("{} invalid for loss 0".format(which_model))
if inject_inf_loc == "fp32":
inj_model.weight0.grad[0] = float('inf')
elif inject_inf_loc == "fp16":
inj_model.weight1.grad[0] = float('inf')
with amp.scale_loss(loss1, [optimizer0, optimizer1], loss_id=loss_ids[1]) as scaled_loss:
scaled_loss.backward()
if i == inject_inf and which_backward == 1:
if which_model == 2:
inj_model = model2
elif which_model == 1:
inj_model = model1
else:
raise RuntimeError("{} invalid for loss 1".format(which_model))
if inject_inf_loc == "fp32":
inj_model.weight0.grad[0] = float('inf')
elif inject_inf_loc == "fp16":
inj_model.weight1.grad[0] = float('inf')
if i != inject_inf:
master_params = list(amp.master_params(optimizer0)) + \
list(amp.master_params(optimizer1))
for param, reference_grad in zip(master_params,
reference_grads[what_got_skipped(inject_inf,
which_backward, which_model)][unskipped]):
if opt_level == "O2" and not materialize_master_grads:
continue
else:
self.assertTrue(torch.allclose(param.grad.float(), reference_grad.float()))
unskipped += 1
optimizer0.step()
optimizer1.step()
model_params = [p for p in model0.parameters()] + \
[p for p in model1.parameters()] + \
[p for p in model2.parameters()]
master_params = [p for p in amp.master_params(optimizer0)] + \
[p for p in amp.master_params(optimizer1)]
# print("opt_level {} i {} inject_inf {} which_backward {} inject_inf_loc {} use_multiple_loss_scalers {} which_model {}".format(opt_level, i, inject_inf, which_backward, inject_inf_loc, use_multiple_loss_scalers, which_model))
for model, master, reference in zip(
model_params,
master_params,
final_params[what_got_skipped(inject_inf, which_backward, which_model)]):
self.assertTrue(torch.allclose(model, reference))
self.assertTrue(torch.allclose(model, master.to(model.dtype)))
if opt_level == "O1":
_amp_state.handle._deactivate()
if __name__ == '__main__':
unittest.main()
power = {'BUSES': {'Area': 1.33155,
'Bus/Area': 1.33155,
'Bus/Gate Leakage': 0.00662954,
'Bus/Peak Dynamic': 0.0,
'Bus/Runtime Dynamic': 0.0,
'Bus/Subthreshold Leakage': 0.0691322,
'Bus/Subthreshold Leakage with power gating': 0.0259246,
'Gate Leakage': 0.00662954,
'Peak Dynamic': 0.0,
'Runtime Dynamic': 0.0,
'Subthreshold Leakage': 0.0691322,
'Subthreshold Leakage with power gating': 0.0259246},
'Core': [{'Area': 32.6082,
'Execution Unit/Area': 8.2042,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.161722,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.329713,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.908078,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.122718,
'Execution Unit/Instruction Scheduler/Area': 2.17927,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.328073,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.00115349,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.20978,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.575982,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.017004,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00962066,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00730101,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 1.00996,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00529112,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 2.07911,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.997392,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0800117,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0455351,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 4.84781,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.841232,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.000856399,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.55892,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.572033,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.0178624,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00897339,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 2.14541,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.114878,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.0641291,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.430112,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 7.34003,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.171555,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0208798,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.210121,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.154419,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.381677,
'Execution Unit/Register Files/Runtime Dynamic': 0.175299,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0442632,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00607074,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.552045,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 1.38374,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.0920413,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0345155,
'Execution Unit/Runtime Dynamic': 4.43953,
'Execution Unit/Subthreshold Leakage': 1.83518,
'Execution Unit/Subthreshold Leakage with power gating': 0.709678,
'Gate Leakage': 0.372997,
'Instruction Fetch Unit/Area': 5.86007,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.0024986,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.0024986,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00217213,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.0008386,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00221824,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00938758,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0241046,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0590479,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.148447,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.43323,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.438245,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.504192,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.96874,
'Instruction Fetch Unit/Runtime Dynamic': 1.12438,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932587,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.408542,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0654484,
'L2/Runtime Dynamic': 0.0146108,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80969,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 6.01103,
'Load Store Unit/Data Cache/Runtime Dynamic': 2.30575,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0351387,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.154448,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.154447,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 6.74334,
'Load Store Unit/Runtime Dynamic': 3.22188,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.380841,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.761682,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591622,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283406,
'Memory Management Unit/Area': 0.434579,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.135162,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.136134,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00813591,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.399995,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0718762,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.792162,
'Memory Management Unit/Runtime Dynamic': 0.20801,
'Memory Management Unit/Subthreshold Leakage': 0.0769113,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0399462,
'Peak Dynamic': 28.4714,
'Renaming Unit/Area': 0.369768,
'Renaming Unit/FP Front End RAT/Area': 0.168486,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00489731,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 3.33511,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.598518,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0437281,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.024925,
'Renaming Unit/Free List/Area': 0.0414755,
'Renaming Unit/Free List/Gate Leakage': 4.15911e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0401324,
'Renaming Unit/Free List/Runtime Dynamic': 0.0366547,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000670426,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000377987,
'Renaming Unit/Gate Leakage': 0.00863632,
'Renaming Unit/Int Front End RAT/Area': 0.114751,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.00038343,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.86945,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.289168,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00611897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00348781,
'Renaming Unit/Peak Dynamic': 4.56169,
'Renaming Unit/Runtime Dynamic': 0.924341,
'Renaming Unit/Subthreshold Leakage': 0.070483,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0362779,
'Runtime Dynamic': 9.93275,
'Subthreshold Leakage': 6.21877,
'Subthreshold Leakage with power gating': 2.58311},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.0230561,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.220798,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.139744,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.191082,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.308208,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.155573,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.654863,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.197117,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.4461,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0264006,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.00801483,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.0659696,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0592746,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.0923703,
'Execution Unit/Register Files/Runtime Dynamic': 0.0672894,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.144745,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.368414,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 1.71674,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00203883,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00203883,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00183308,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000740937,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.000851484,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00676222,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0175021,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0569822,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 3.62455,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.189809,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.193537,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 6.01898,
'Instruction Fetch Unit/Runtime Dynamic': 0.464592,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0183571,
'L2/Runtime Dynamic': 0.00401057,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 2.80455,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.755572,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.05071,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0507099,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 3.04401,
'Load Store Unit/Runtime Dynamic': 1.05637,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.125042,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.250084,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0443779,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0446408,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.225361,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0311538,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.457704,
'Memory Management Unit/Runtime Dynamic': 0.0757946,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 17.5746,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.0694477,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.00946625,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.0959537,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.174868,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 3.49237,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.0717681,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.259059,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.388123,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.232449,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.374932,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.189253,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.796634,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.20635,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.91026,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0733247,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.00974996,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.0973508,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0721069,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.170675,
'Execution Unit/Register Files/Runtime Dynamic': 0.0818569,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.223037,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.55886,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 2.10179,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00126511,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00126511,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00110895,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000433146,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00103582,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00467499,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.011878,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0693182,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 4.40923,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.204476,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.235436,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 6.84174,
'Instruction Fetch Unit/Runtime Dynamic': 0.525783,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0389993,
'L2/Runtime Dynamic': 0.00843938,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 4.13056,
'Load Store Unit/Data Cache/Runtime Dynamic': 1.39167,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0936097,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0936096,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 4.57261,
'Load Store Unit/Runtime Dynamic': 1.94693,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.230826,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.461651,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0819207,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0825057,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.27415,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0335227,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.570984,
'Memory Management Unit/Runtime Dynamic': 0.116028,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 20.5241,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.192884,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.0128348,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.115178,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.320897,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 5.01986,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.0626027,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.251859,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.392593,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.268426,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.432961,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.218544,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.91993,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.246811,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.96643,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.0741693,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.011259,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.102637,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0832671,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.176806,
'Execution Unit/Register Files/Runtime Dynamic': 0.094526,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.231882,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.564409,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 2.2361,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00164886,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00164886,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.0014572,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000575617,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00119614,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00595107,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0150573,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0800467,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 5.09166,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.26057,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.271875,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 7.55728,
'Instruction Fetch Unit/Runtime Dynamic': 0.6335,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0359559,
'L2/Runtime Dynamic': 0.0131736,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 3.18966,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.964882,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0631693,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0631693,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 3.48796,
'Load Store Unit/Runtime Dynamic': 1.33958,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.155765,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.311529,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0552814,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0558204,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.316581,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0427194,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.567653,
'Memory Management Unit/Runtime Dynamic': 0.0985398,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 20.2047,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.195105,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.014485,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.133676,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.343266,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 4.66416,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328}],
'DRAM': {'Area': 0,
'Gate Leakage': 0,
'Peak Dynamic': 1.8508442514106407,
'Runtime Dynamic': 1.8508442514106407,
'Subthreshold Leakage': 4.252,
'Subthreshold Leakage with power gating': 4.252},
'L3': [{'Area': 61.9075,
'Gate Leakage': 0.0484137,
'Peak Dynamic': 0.122426,
'Runtime Dynamic': 0.0751825,
'Subthreshold Leakage': 6.80085,
'Subthreshold Leakage with power gating': 3.32364}],
'Processor': {'Area': 191.908,
'Gate Leakage': 1.53485,
'Peak Dynamic': 86.8973,
'Peak Power': 120.009,
'Runtime Dynamic': 23.1843,
'Subthreshold Leakage': 31.5774,
'Subthreshold Leakage with power gating': 13.9484,
'Total Cores/Area': 128.669,
'Total Cores/Gate Leakage': 1.4798,
'Total Cores/Peak Dynamic': 86.7748,
'Total Cores/Runtime Dynamic': 23.1091,
'Total Cores/Subthreshold Leakage': 24.7074,
'Total Cores/Subthreshold Leakage with power gating': 10.2429,
'Total L3s/Area': 61.9075,
'Total L3s/Gate Leakage': 0.0484137,
'Total L3s/Peak Dynamic': 0.122426,
'Total L3s/Runtime Dynamic': 0.0751825,
'Total L3s/Subthreshold Leakage': 6.80085,
'Total L3s/Subthreshold Leakage with power gating': 3.32364,
'Total Leakage': 33.1122,
'Total NoCs/Area': 1.33155,
'Total NoCs/Gate Leakage': 0.00662954,
'Total NoCs/Peak Dynamic': 0.0,
'Total NoCs/Runtime Dynamic': 0.0,
'Total NoCs/Subthreshold Leakage': 0.0691322,
'Total NoCs/Subthreshold Leakage with power gating': 0.0259246}}
#!/usr/bin/env python
__author__ = "Paul B. Manis"
__version__ = "0.4"
import pylibrary.stats.bootstrap
import pylibrary.stats.permutation
import pylibrary.stats.permutation_test
from functools import partial
from io import TextIOBase
from mock import MagicMock, Mock, call, patch
from requests import Response, Session
from unittest2 import TestCase
from serene.api.data_api import DataSetAPI
from serene.api.exceptions import InternalError
from serene.api.model_api import ModelAPI
from serene.api.octopus_api import OctopusAPI
from serene.api.ontology_api import OntologyAPI, OwlFormat
from serene.api.ssd_api import SsdAPI
from serene.elements import DataSet, Ontology, SSD
class TestDataSetAPI(TestCase):
def setUp(self):
self.connection = Mock(Session())
self.root_uri = "http://localhost/"
self.dataset_path = "dataset/"
self.uri = self.root_uri + self.dataset_path
self.api = DataSetAPI(self.root_uri, self.connection)
self.response = Mock(Response())
self.description = "this python file"
self.file_path = __file__
self.type_map = {"a": "int"}
self.update_description = "another python file"
self.update_type_map = {"b": "string"}
def test_keys(self):
keys = [1, 2]
self.response.status_code = 200
self.response.json = Mock(return_value=keys)
self.connection.get = Mock(return_value=self.response)
result = self.api.keys()
self.assertEqual(result, keys)
self.connection.get.assert_called_with(self.uri)
def test_keys_with_connection_exception(self):
self.connection.get = Mock(side_effect=Exception)
self.assertRaises(InternalError, self.api.keys)
def test_post(self):
message = "Created"
self.response.status_code = 200
self.response.json = Mock(return_value=message)
self.connection.post = Mock(return_value=self.response)
result = self.api.post(self.description, self.file_path, self.type_map)
args = self.connection.post.call_args
self.assertEqual(args[0][0], self.uri)
self.assertEqual(
args[1]["data"],
{"description": self.description, "typeMap": self.type_map})
self.assertIsNotNone(args[1]["files"])
self.assertEqual(result, message)
def test_post_with_connection_exception(self):
self.connection.post = Mock(side_effect=Exception)
api_post = partial(
self.api.post, self.description, self.file_path, self.type_map)
self.assertRaises(InternalError, api_post)
def test_update(self):
message = "Updated"
key = 1
self.response.status_code = 200
self.response.json = Mock(return_value=message)
self.connection.post = Mock(return_value=self.response)
result = self.api.update(
key, self.update_description, self.update_type_map)
args = self.connection.post.call_args
self.assertEqual(args[0][0], self.uri + str(key))
self.assertEqual(
args[1]["data"],
{
"description": self.update_description,
"typeMap": self.update_type_map
})
self.assertEqual(result, message)
def test_update_with_connection_exception(self):
self.connection.post = Mock(side_effect=Exception)
api_update = partial(
self.api.update, 1, self.update_description, self.update_type_map)
self.assertRaises(InternalError, api_update)
def test_item(self):
key = 1
item = {
"description": self.description,
"typeMap": self.type_map
}
self.response.status_code = 200
self.response.json = Mock(return_value=item)
self.connection.get = Mock(return_value=self.response)
result = self.api.item(key)
self.assertEqual(result, item)
self.connection.get.assert_called_with(self.uri + str(key))
def test_item_with_connection_exception(self):
self.connection.get = Mock(side_effect=Exception)
api_item = partial(self.api.item, 1)
self.assertRaises(InternalError, api_item)
def test_delete(self):
message = "Deleted"
key = 1
self.response.status_code = 200
self.response.json = Mock(return_value=message)
self.connection.delete = Mock(return_value=self.response)
result = self.api.delete(key)
self.assertEqual(result, message)
self.connection.delete.assert_called_with(self.uri + str(key))
def test_delete_with_connection_exception(self):
self.connection.delete = Mock(side_effect=Exception)
api_delete = partial(self.api.delete, 1)
self.assertRaises(InternalError, api_delete)
class TestModelAPI(TestCase):
def setUp(self):
self.connection = Mock(Session())
self.root_uri = "http://localhost/"
self.model_path = "model/"
self.uri = self.root_uri + self.model_path
self.api = ModelAPI(self.root_uri, self.connection)
self.response = Mock(Response())
self.feature_config = {
"activeFeatures": [
"num-unique-vals"
]
}
self.description = "test model"
self.classes = ["name", "age"]
self.model_type = "randomForest"
self.labels = {
"123": "name",
"666": "age"
}
self.cost_matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
self.resampling_strategy = "ResampleToMean"
self.num_bags = 60
self.bag_size = 90
self.data = {
"features": self.feature_config,
"description": self.description,
"classes": ["unknown"],
"modelType": self.model_type,
"labelData": self.labels,
"costMatrix": self.cost_matrix,
"resamplingStrategy": self.resampling_strategy,
"numBags": self.num_bags,
"bagSize": self.bag_size
}
def test_post(self):
message = "Created"
self.response.status_code = 200
self.response.json = Mock(return_value=message)
self.connection.post = Mock(return_value=self.response)
result = self.api.post(
feature_config=self.feature_config,
description=self.description,
classes=None,
model_type=self.model_type,
labels=self.labels,
cost_matrix=self.cost_matrix,
resampling_strategy=self.resampling_strategy,
num_bags=self.num_bags,
bag_size=self.bag_size)
self.assertEqual(result, message)
self.connection.post.assert_called_with(self.uri, json=self.data)
def test_post_with_connection_exception(self):
self.connection.post = Mock(side_effect=Exception)
api_post = partial(
self.api.post,
feature_config=self.feature_config,
description=self.description,
classes=None,
model_type=self.model_type,
labels=self.labels,
cost_matrix=self.cost_matrix,
resampling_strategy=self.resampling_strategy,
num_bags=self.num_bags)
self.assertRaises(InternalError, api_post)
def test_update(self):
message = "Updated"
key = 1
self.response.status_code = 200
self.response.json = Mock(return_value=message)
self.connection.post = Mock(return_value=self.response)
result = self.api.update(
key,
feature_config=self.feature_config,
description=self.description,
classes=["unknown"],
model_type=self.model_type,
labels=self.labels,
cost_matrix=self.cost_matrix,
resampling_strategy=self.resampling_strategy,
num_bags=self.num_bags,
bag_size=self.bag_size)
self.assertEqual(result, message)
self.connection.post.assert_called_with(
self.uri + str(key),
json=self.data)
def test_update_with_connection_exception(self):
self.connection.post = Mock(side_effect=Exception)
api_update = partial(
self.api.update,
1,
feature_config=self.feature_config,
description=self.description,
classes=["unknown"],
model_type=self.model_type,
labels=self.labels,
cost_matrix=self.cost_matrix,
resampling_strategy=self.resampling_strategy,
num_bags=self.num_bags)
self.assertRaises(InternalError, api_update)
def test_item(self):
key = 1
self.response.status_code = 200
self.response.json = Mock(return_value=self.data)
self.connection.get = Mock(return_value=self.response)
result = self.api.item(key)
self.assertEqual(result, self.data)
self.connection.get.assert_called_with(self.uri + str(key))
def test_item_with_connection_exception(self):
self.connection.get = Mock(side_effect=Exception)
api_item = partial(self.api.item, 1)
self.assertRaises(InternalError, api_item)
def test_delete(self):
message = "Deleted"
key = 1
self.response.status_code = 200
self.response.json = Mock(return_value=message)
self.connection.delete = Mock(return_value=self.response)
result = self.api.delete(key)
self.assertEqual(result, message)
self.connection.delete.assert_called_with(self.uri + str(key))
def test_delete_with_connection_exception(self):
self.connection.delete = Mock(side_effect=Exception)
api_delete = partial(self.api.delete, 1)
self.assertRaises(InternalError, api_delete)
def test_keys(self):
keys = [1, 2]
self.response.status_code = 200
self.response.json = Mock(return_value=keys)
self.connection.get = Mock(return_value=self.response)
result = self.api.keys()
self.assertEqual(result, keys)
self.connection.get.assert_called_with(self.uri)
def test_keys_with_connection_exception(self):
self.connection.get = Mock(side_effect=Exception)
self.assertRaises(InternalError, self.api.keys)
def test_train(self):
key = 1
self.response.status_code = 200
self.connection.post = Mock(return_value=self.response)
result = self.api.train(key)
self.assertEqual(result, True)
self.connection.post.assert_called_with(
self.uri + str(key) + "/train")
def test_train_with_connection_exception(self):
self.connection.post = Mock(side_effect=Exception)
self.assertRaises(InternalError, partial(self.api.train, 1))
def test_predict(self):
modelKey = 1
dataSetKey = 2
predictions = {
"modelID": modelKey,
"dataSetID": dataSetKey,
"predictions": {
"label": "name",
"confidence": 0.6,
"scores": {"name": 0.6},
"features": {}
}
}
self.response.status_code = 200
self.response.json = Mock(return_value=predictions)
self.connection.post = Mock(return_value=self.response)
result = self.api.predict(modelKey, dataSetKey)
self.assertEqual(result, predictions)
self.connection.post.assert_called_with(
self.uri + str(modelKey) + "/predict/" + str(dataSetKey))
def test_predict_with_connection_exception(self):
self.connection.post = Mock(side_effect=Exception)
self.assertRaises(InternalError, partial(self.api.predict, 1, 2))
class TestOntologyAPI(TestCase):
def setUp(self):
self.connection = Mock(Session())
self.root_uri = "http://localhost/"
self.model_path = "owl/"
self.uri = self.root_uri + self.model_path
self.api = OntologyAPI(self.root_uri, self.connection)
self.response = Mock(Response())
self.description = "test ontology"
self.file_path = __file__
self.owl_format = OwlFormat.TURTLE.value
def test_post(self):
message = "Created"
self.response.status_code = 200
self.response.json = Mock(return_value=message)
self.connection.post = Mock(return_value=self.response)
result = self.api.post(
self.description, self.file_path, self.owl_format)
args = self.connection.post.call_args
self.assertEqual(args[0][0], self.uri)
self.assertEqual(
args[1]["data"],
{"description": self.description, "format": self.owl_format})
self.assertIsNotNone(args[1]["files"]["file"])
self.assertEqual(result, message)
def test_post_with_connection_exception(self):
self.connection.post = Mock(side_effect=Exception)
api_post = partial(
self.api.post, self.description, self.file_path, self.owl_format)
self.assertRaises(InternalError, api_post)
def test_post_with_unsupported_format(self):
api_post = partial(
self.api.post, self.description, self.file_path, "unknown")
self.assertRaises(ValueError, api_post)
def test_update(self):
message = "Updated"
key = 1
self.response.status_code = 200
self.response.json = Mock(return_value=message)
self.connection.post = Mock(return_value=self.response)
result = self.api.update(
key, self.description, self.file_path, self.owl_format)
args = self.connection.post.call_args
self.assertEqual(args[0][0], self.uri + str(key))
self.assertEqual(
args[1]["data"],
{"description": self.description, "format": self.owl_format})
self.assertIsNotNone(args[1]["files"]["file"])
self.assertEqual(result, message)
def test_update_with_connection_exception(self):
self.connection.post = Mock(side_effect=Exception)
api_update = partial(
self.api.update,
1,
self.description,
self.file_path,
self.owl_format)
self.assertRaises(InternalError, api_update)
def test_update_with_unsupported_format(self):
api_update = partial(
self.api.update, 1, self.description, self.file_path, "unknown")
self.assertRaises(ValueError, api_update)
def test_delete(self):
message = "Deleted"
key = 1
self.response.status_code = 200
self.response.json = Mock(return_value=message)
self.connection.delete = Mock(return_value=self.response)
result = self.api.delete(key)
self.assertEqual(result, message)
self.connection.delete.assert_called_with(self.uri + str(key))
def test_delete_with_connection_exception(self):
self.connection.delete = Mock(side_effect=Exception)
api_delete = partial(self.api.delete, 1)
self.assertRaises(InternalError, api_delete)
def test_keys(self):
keys = [1, 2]
self.response.status_code = 200
self.response.json = Mock(return_value=keys)
self.connection.get = Mock(return_value=self.response)
result = self.api.keys()
self.assertEqual(result, keys)
self.connection.get.assert_called_with(self.uri)
def test_keys_with_connection_exception(self):
self.connection.get = Mock(side_effect=Exception)
self.assertRaises(InternalError, self.api.keys)
def test_item(self):
key = 1
item = {
"description": self.description,
"format": self.owl_format
}
self.response.status_code = 200
self.response.json = Mock(return_value=item)
self.connection.get = Mock(return_value=self.response)
result = self.api.item(key)
self.assertEqual(result, item)
self.connection.get.assert_called_with(self.uri + str(key))
def test_item_with_connection_exception(self):
self.connection.get = Mock(side_effect=Exception)
api_item = partial(self.api.item, 1)
self.assertRaises(InternalError, api_item)
@patch("requests.get")
def test_owl_file(self, get):
key = 3
self.api.item = Mock(return_value={"name": "test.ttl"})
self.response.status_code = 200
chunks = ["1", "2"]
self.response.iter_content = Mock(return_value=chunks)
get.configure_mock(return_value=self.response)
f = MagicMock(spec=TextIOBase)
f.__enter__ = Mock(return_value=f)
self.api._create_local_owl_file = Mock(return_value=f)
result = self.api.owl_file(key)
get.assert_called_with(self.uri + str(key) + "/file", stream=True)
self.api._create_local_owl_file.assert_called_with(result)
f.write.assert_has_calls([call(chunk) for chunk in chunks])
@patch("requests.get")
def test_owl_file_with_connection_exception(self, get):
self.api.item = Mock(return_value={"name": "test.ttl"})
get.configure_mock(side_effect=Exception)
api_owl_file = partial(self.api.owl_file, 1)
self.assertRaises(InternalError, api_owl_file)
def init_ssd(target):
target.dataset_json = {
"dateCreated": "2017-03-16T15:29:03.388",
"dateModified": "2017-03-16T15:29:03.388",
"description": "",
"filename": "businessInfo.csv",
"id": 2035625835,
"path": "/Users/li151/Dev/serene/./storage/datasets/2035625835/businessinfo.csv",
"typeMap": {},
"columns": [
{
"datasetID": 2035625835,
"id": 1246005714,
"index": 0,
"logicalType": "string",
"name": "company",
"path": "/Users/li151/Dev/serene/./storage/datasets/2035625835/businessinfo.csv",
"sample": ["Data61"],
"size": 59
},
{
"datasetID": 2035625835,
"id": 281689915,
"index": 1,
"logicalType": "string",
"name": "ceo",
"path": "/Users/li151/Dev/serene/./storage/datasets/2035625835/businessinfo.csv",
"sample": ["Garv Mcowen"],
"size": 59
}
]
}
target.dataset = DataSet(target.dataset_json)
target.ontology = Ontology().update({
"name": __file__,
"id": 123,
"description": "test ontology",
"dateCreated": "2017-03-16T15:29:03.388",
"dateModified": "2017-03-16T15:29:03.388"
})
target.ssd = SSD(target.dataset, target.ontology, "test ssd")
target.ssd_json = target.ssd.json
class TestSsdAPI(TestCase):
def setUp(self):
self.connection = Mock(Session())
self.root_uri = "http://localhost/"
self.ssd_path = "ssd/"
self.uri = self.root_uri + self.ssd_path
self.api = SsdAPI(self.root_uri, self.connection)
self.response = Mock(Response())
init_ssd(self)
def test_keys(self):
keys = [1, 2]
self.response.status_code = 200
self.response.json = Mock(return_value=keys)
self.connection.get = Mock(return_value=self.response)
result = self.api.keys()
self.assertEqual(result, keys)
self.connection.get.assert_called_with(self.uri)
def test_keys_with_connection_exception(self):
self.connection.get = Mock(side_effect=Exception)
self.assertRaises(InternalError, self.api.keys)
def test_post(self):
message = "Created"
self.response.status_code = 200
self.response.json = Mock(return_value=message)
self.connection.post = Mock(return_value=self.response)
result = self.api.post(self.ssd_json)
args = self.connection.post.call_args
self.assertEqual(args[0][0], self.uri)
self.assertEqual(
args[1]["data"],
self.ssd_json)
self.assertEqual(result, message)
def test_post_with_connection_exception(self):
self.connection.post = Mock(side_effect=Exception)
api_post = partial(
self.api.post, self.ssd_json)
self.assertRaises(InternalError, api_post)
def test_update(self):
message = "Updated"
key = 1
self.response.status_code = 200
self.response.json = Mock(return_value=message)
self.connection.post = Mock(return_value=self.response)
result = self.api.update(key, self.ssd_json)
args = self.connection.post.call_args
self.assertEqual(args[0][0], self.uri + str(key))
self.assertEqual(args[1]["data"], self.ssd_json)
self.assertEqual(result, message)
def test_update_with_connection_exception(self):
self.connection.post = Mock(side_effect=Exception)
api_update = partial(self.api.update, 1, self.ssd_json)
self.assertRaises(InternalError, api_update)
def test_item(self):
key = 1
self.response.status_code = 200
self.response.json = Mock(return_value=self.ssd_json)
self.connection.get = Mock(return_value=self.response)
result = self.api.item(key)
self.assertEqual(result, self.ssd_json)
self.connection.get.assert_called_with(self.uri + str(key))
def test_item_with_connection_exception(self):
self.connection.get = Mock(side_effect=Exception)
api_item = partial(self.api.item, 1)
self.assertRaises(InternalError, api_item)
def test_delete(self):
message = "Deleted"
key = 1
self.response.status_code = 200
self.response.json = Mock(return_value=message)
self.connection.delete = Mock(return_value=self.response)
result = self.api.delete(key)
self.assertEqual(result, message)
self.connection.delete.assert_called_with(self.uri + str(key))
def test_delete_with_connection_exception(self):
self.connection.delete = Mock(side_effect=Exception)
api_delete = partial(self.api.delete, 1)
self.assertRaises(InternalError, api_delete)
class TestOctopusAPI(TestCase):
def setUp(self):
self.connection = Mock(Session())
self.root_uri = "http://localhost/"
self.octopus_path = "octopus/"
self.uri = self.root_uri + self.octopus_path
self.api = OctopusAPI(self.root_uri, self.connection)
self.response = Mock(Response())
init_ssd(self)
self.name = "test octopus"
self.feature_config = {
"activeFeatures": [
"num-unique-vals"
]
}
self.description = "test model"
self.model_type = "randomForest"
self.resampling_strategy = "ResampleToMean"
self.num_bags = 60
self.bag_size = 90
self.modeling_props = {"topkSteinerTrees": 1}
self.data = {
"ssds": [self.ssd],
"name": self.name,
"description": self.description,
"modelType": self.model_type,
"resamplingStrategy": self.resampling_strategy,
"features": self.feature_config,
"numBags": self.num_bags,
"bagSize": self.bag_size,
"ontologies": [self.ontology],
"modelingProps": self.modeling_props
}
def test_post(self):
message = "Created"
self.response.status_code = 200
self.response.json = Mock(return_value=message)
self.connection.post = Mock(return_value=self.response)
result = self.api.post(
ssds=[self.ssd],
name=self.name,
description=self.description,
feature_config=self.feature_config,
model_type=self.model_type,
resampling_strategy=self.resampling_strategy,
num_bags=self.num_bags,
bag_size=self.bag_size,
ontologies=[self.ontology],
modeling_props=self.modeling_props)
self.assertEqual(result, message)
self.connection.post.assert_called_with(self.uri, json=self.data)
def test_post_with_connection_exception(self):
self.connection.post = Mock(side_effect=Exception)
api_post = partial(
self.api.post,
ssds=[self.ssd],
name=self.name,
description=self.description,
feature_config=self.feature_config,
model_type=self.model_type,
resampling_strategy=self.resampling_strategy,
num_bags=self.num_bags,
bag_size=self.bag_size,
ontologies=[self.ontology],
modeling_props=self.modeling_props)
self.assertRaises(InternalError, api_post)
def test_update(self):
message = "Updated"
key = 1
self.response.status_code = 200
self.response.json = Mock(return_value=message)
self.connection.post = Mock(return_value=self.response)
result = self.api.update(
key,
ssds=[self.ssd],
name=self.name,
description=self.description,
feature_config=self.feature_config,
model_type=self.model_type,
resampling_strategy=self.resampling_strategy,
num_bags=self.num_bags,
bag_size=self.bag_size,
ontologies=[self.ontology],
modeling_props=self.modeling_props)
self.assertEqual(result, message)
self.connection.post.assert_called_with(
self.uri + str(key),
json=self.data)
def test_update_with_connection_exception(self):
self.connection.post = Mock(side_effect=Exception)
api_update = partial(
self.api.update,
1,
ssds=[self.ssd],
name=self.name,
description=self.description,
feature_config=self.feature_config,
model_type=self.model_type,
resampling_strategy=self.resampling_strategy,
num_bags=self.num_bags,
bag_size=self.bag_size,
ontologies=[self.ontology],
modeling_props=self.modeling_props)
self.assertRaises(InternalError, api_update)
def test_item(self):
key = 1
self.response.status_code = 200
self.response.json = Mock(return_value=self.data)
self.connection.get = Mock(return_value=self.response)
result = self.api.item(key)
self.assertEqual(result, self.data)
self.connection.get.assert_called_with(self.uri + str(key))
def test_item_with_connection_exception(self):
self.connection.get = Mock(side_effect=Exception)
api_item = partial(self.api.item, 1)
self.assertRaises(InternalError, api_item)
def test_delete(self):
message = "Deleted"
key = 1
self.response.status_code = 200
self.response.json = Mock(return_value=message)
self.connection.delete = Mock(return_value=self.response)
result = self.api.delete(key)
self.assertEqual(result, message)
self.connection.delete.assert_called_with(self.uri + str(key))
def test_delete_with_connection_exception(self):
self.connection.delete = Mock(side_effect=Exception)
api_delete = partial(self.api.delete, 1)
self.assertRaises(InternalError, api_delete)
def test_keys(self):
keys = [1, 2]
self.response.status_code = 200
self.response.json = Mock(return_value=keys)
self.connection.get = Mock(return_value=self.response)
result = self.api.keys()
self.assertEqual(result, keys)
self.connection.get.assert_called_with(self.uri)
def test_keys_with_connection_exception(self):
self.connection.get = Mock(side_effect=Exception)
self.assertRaises(InternalError, self.api.keys)
def test_train(self):
key = 1
self.response.status_code = 200
self.connection.post = Mock(return_value=self.response)
result = self.api.train(key)
self.assertEqual(result, True)
self.connection.post.assert_called_with(
self.uri + str(key) + "/train")
def test_train_with_connection_exception(self):
self.connection.post = Mock(side_effect=Exception)
self.assertRaises(InternalError, partial(self.api.train, 1))
def test_predict(self):
octopusKey = 1
dataSetKey = 2
predictions = {
"predictions": [{
"ssd": {
"name": "test ssd",
},
"score": {
"linkCost": 0.5
}
}]
}
self.response.status_code = 200
self.response.json = Mock(return_value=predictions)
self.connection.post = Mock(return_value=self.response)
result = self.api.predict(octopusKey, dataSetKey)
self.assertEqual(result, predictions)
self.connection.post.assert_called_with(
self.uri + str(octopusKey) + "/predict/" + str(dataSetKey))
def test_predict_with_connection_exception(self):
self.connection.post = Mock(side_effect=Exception)
self.assertRaises(InternalError, partial(self.api.predict, 1, 2))
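The exception tests above all wrap the call in `functools.partial` before handing it to `assertRaises`. Standard `unittest` offers two equivalent forms that avoid the wrapper; here is a small self-contained sketch (the `fetch` function is hypothetical, standing in for any API call that raises):

```python
import unittest
from functools import partial


def fetch(key):
    """Hypothetical stand-in for an API call that always fails."""
    raise ConnectionError("backend down")


class AssertRaisesStyles(unittest.TestCase):
    def test_partial_style(self):
        # The style used throughout the tests above.
        self.assertRaises(ConnectionError, partial(fetch, 1))

    def test_args_style(self):
        # assertRaises forwards extra positional arguments to the callable,
        # so the partial wrapper is unnecessary.
        self.assertRaises(ConnectionError, fetch, 1)

    def test_context_manager_style(self):
        # The context-manager form also exposes the raised exception.
        with self.assertRaises(ConnectionError) as ctx:
            fetch(1)
        self.assertIn("backend down", str(ctx.exception))


result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(AssertRaisesStyles)
)
```

The context-manager form is generally preferred in modern code because it pinpoints exactly which statement is expected to raise.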
# File: test/test_collapses.py (volfpeter/markyp-bootstrap4, MIT)
from markyp_bootstrap4.collapses import *
def test_a_args_for():
assert a_args_for("foo") == {
"href": "#foo",
"data-toggle": "collapse",
"aria-controls": "foo",
"aria-expanded": False
}
assert a_args_for("foo", expanded=True) == {
"href": "#foo",
"data-toggle": "collapse",
"aria-controls": "foo",
"aria-expanded": True
}
assert a_args_for("foo", foo="foo", bar=42) == {
"foo": "foo",
"bar": 42,
"href": "#foo",
"data-toggle": "collapse",
"aria-controls": "foo",
"aria-expanded": False
}
def test_button_args_for():
assert button_args_for("foo") == {
"data-target": "#foo",
"data-toggle": "collapse",
"aria-controls": "foo",
"aria-expanded": False
}
assert button_args_for("foo", expanded=True) == {
"data-target": "#foo",
"data-toggle": "collapse",
"aria-controls": "foo",
"aria-expanded": True
}
assert button_args_for("foo", foo="foo", bar=42) == {
"foo": "foo",
"bar": 42,
"data-target": "#foo",
"data-toggle": "collapse",
"aria-controls": "foo",
"aria-expanded": False
}
def test_collapse():
assert collapse(identifier="collapse-id").markup ==\
'<div id="collapse-id" class="collapse"></div>'
assert collapse("First", "Second", identifier="collapse-id").markup ==\
'<div id="collapse-id" class="collapse">\nFirst\nSecond\n</div>'
assert collapse("First", "Second", identifier="collapse-id", class_="my-collapse").markup ==\
'<div id="collapse-id" class="collapse my-collapse">\nFirst\nSecond\n</div>'
assert collapse("First", "Second", identifier="collapse-id", class_="my-collapse", show=True).markup ==\
'<div id="collapse-id" class="collapse show my-collapse">\nFirst\nSecond\n</div>'
assert collapse("First", "Second", identifier="collapse-id", class_="my-collapse", show=True, foo="foo", bar="bar").markup ==\
'<div id="collapse-id" foo="foo" bar="bar" class="collapse show my-collapse">\nFirst\nSecond\n</div>'
assert collapse("First", "Second", identifier="collapse-id", accordion_id="acc-1", class_="my-collapse", show=True, foo="foo", bar="bar").markup ==\
'<div id="collapse-id" foo="foo" bar="bar" data-parent="#acc-1" class="collapse show my-collapse">\nFirst\nSecond\n</div>'
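The assertions above pin down the contract of `a_args_for`: fixed collapse-trigger attributes merged over arbitrary caller kwargs. A hypothetical reimplementation consistent with those assertions (not the actual markyp-bootstrap4 source) makes the contract explicit:

```python
def a_args_for(identifier, expanded=False, **kwargs):
    """Hypothetical sketch of the helper under test: merge caller kwargs
    with the fixed Bootstrap collapse trigger attributes for an anchor."""
    args = dict(kwargs)
    args.update({
        "href": f"#{identifier}",
        "data-toggle": "collapse",
        "aria-controls": identifier,
        "aria-expanded": expanded,
    })
    return args
```

Updating the fixed attributes last ensures a caller cannot accidentally override `data-toggle` or the ARIA wiring via kwargs, which matches the third assertion in `test_a_args_for`.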
# File: tests/test_Counter.py (mouckatron/pyprogress, MIT)
from . import TestStdoutReader
import pyprogress
class TestCounter(TestStdoutReader):
def tearDown(self):
self.c.stop()
self.c.join()
TestStdoutReader.tearDown(self)
def test_counter_no_total(self):
output = ['0', '\b1', '\b2', '\b3', '\b4', '\b5']
self.c = pyprogress.Counter()
self.c.start()
assert self.stdout.getvalue().strip() == output[0]
self.stdout.truncate(0)
for x in range(1, 6):
self.c.inc()
self.c.write() # force write output
assert self.stdout.getvalue().strip('\x00').strip() == output[x]
self.stdout.truncate(0)
def test_counter_with_total(self):
output = ['0/5', '\b\b\b1/5', '\b\b\b2/5', '\b\b\b3/5', '\b\b\b4/5', '\b\b\b5/5']
self.c = pyprogress.Counter(total=5)
self.c.start()
assert self.stdout.getvalue().strip() == output[0]
self.stdout.truncate(0)
for x in range(1, 6):
self.c.inc()
self.c.write() # force write output
assert self.stdout.getvalue().strip('\x00').strip() == output[x]
self.stdout.truncate(0)
def test_counter_initial(self):
output = ['2', '\b3', '\b4', '\b5']
self.c = pyprogress.Counter(initial=2)
self.c.start()
assert self.stdout.getvalue().strip() == output[0]
self.stdout.truncate(0)
for x in range(1, 4):
self.c.inc()
self.c.write() # force write output
assert self.stdout.getvalue().strip('\x00').strip() == output[x]
self.stdout.truncate(0)
def test_counter_inc_2(self):
output = ['0/10',
'\b\b\b\b2/10',
'\b\b\b\b4/10',
'\b\b\b\b6/10',
'\b\b\b\b8/10',
'\b\b\b\b10/10']
self.c = pyprogress.Counter(total=10)
self.c.start()
assert self.stdout.getvalue().strip() == output[0]
self.stdout.truncate(0)
for x in range(1, 6):
self.c.inc(2)
self.c.write()
assert self.stdout.getvalue().strip('\x00').strip() == output[x]
self.stdout.truncate(0)
class TestCounterContextManager(TestStdoutReader):
def test_counter_no_total(self):
output = ['0', '\b1', '\b2', '\b3', '\b4', '\b5']
with pyprogress.Counter() as c:
assert self.stdout.getvalue().strip() == output[0]
self.stdout.truncate(0)
for x in range(1, 6):
c.inc()
c.write() # force write output
assert self.stdout.getvalue().strip('\x00').strip() == output[x]
self.stdout.truncate(0)
def test_counter_with_total(self):
output = ['0/5', '\b\b\b1/5', '\b\b\b2/5', '\b\b\b3/5', '\b\b\b4/5', '\b\b\b5/5']
with pyprogress.Counter(total=5) as c:
assert self.stdout.getvalue().strip() == output[0]
self.stdout.truncate(0)
for x in range(1, 6):
c.inc()
c.write() # force write output
assert self.stdout.getvalue().strip('\x00').strip() == output[x]
self.stdout.truncate(0)
def test_counter_initial(self):
output = ['2', '\b3', '\b4', '\b5']
with pyprogress.Counter(initial=2) as c:
assert self.stdout.getvalue().strip() == output[0]
self.stdout.truncate(0)
for x in range(1, 4):
c.inc()
c.write() # force write output
assert self.stdout.getvalue().strip('\x00').strip() == output[x]
self.stdout.truncate(0)
def test_counter_inc_2(self):
output = ['0/10',
'\b\b\b\b2/10',
'\b\b\b\b4/10',
'\b\b\b\b6/10',
'\b\b\b\b8/10',
'\b\b\b\b10/10']
with pyprogress.Counter(total=10) as c:
assert self.stdout.getvalue().strip() == output[0]
self.stdout.truncate(0)
for x in range(1, 6):
c.inc(2)
c.write()
assert self.stdout.getvalue().strip('\x00').strip() == output[x]
self.stdout.truncate(0)
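The expected-output lists above imply the counter protocol: render the count (or `count/total`), backspace over the previous value on each `write()`, and support both explicit `start()`/`stop()` and `with` usage. A minimal sketch of that protocol, assuming the API shape the tests exercise (not the actual pyprogress implementation, which runs a background thread):

```python
import sys


class SketchCounter:
    """Minimal, single-threaded sketch of the counter the tests exercise."""

    def __init__(self, total=None, initial=0):
        self.count = initial
        self.total = total
        self._last = ""  # previously rendered text, used to backspace

    def _render(self):
        if self.total is None:
            return str(self.count)
        return "{}/{}".format(self.count, self.total)

    def inc(self, by=1):
        self.count += by

    def write(self):
        text = self._render()
        # Erase the previous value with backspaces, then print the new one.
        sys.stdout.write("\b" * len(self._last) + text)
        self._last = text

    def __enter__(self):
        self.write()
        return self

    def __exit__(self, *exc_info):
        sys.stdout.write("\n")
        return False
```

The backspace count must track the length of the *previous* render, which is why the tests expect four `\b` characters once the display reaches `2/10` but only three while it reads `0/5`.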
# File: samplesheets/tests/test_views_ajax_taskflow.py (bihealth/sodar-server, MIT)
"""Tests for Ajax API views in the samplesheets app with Taskflow enabled"""
import os
from django.conf import settings
from django.urls import reverse
from unittest.case import skipIf
from samplesheets.models import IrodsDataRequest
from samplesheets.tests.test_views import (
IRODS_BACKEND_ENABLED,
IRODS_BACKEND_SKIP_MSG,
)
from samplesheets.tests.test_views_taskflow import (
TestIrodsRequestViewsBase,
TEST_FILE_NAME2,
)
# Local constants
IRODS_NON_PROJECT_PATH = (
'/' + settings.IRODS_ZONE + '/home/' + settings.IRODS_USER
)
IRODS_FAIL_COLL = 'xeiJ1Vie'
@skipIf(not IRODS_BACKEND_ENABLED, IRODS_BACKEND_SKIP_MSG)
class TestIrodsRequestCreateAjaxView(TestIrodsRequestViewsBase):
"""Tests for IrodsRequestCreateAjaxView"""
def test_create_request(self):
"""Test creating a delete request on a data object"""
self.assertEqual(IrodsDataRequest.objects.count(), 0)
self.assertEqual(self._get_create_alert_count(self.user), 0)
self.assertEqual(self._get_create_alert_count(self.user_delegate), 0)
with self.login(self.user):
response = self.client.post(
reverse(
'samplesheets:ajax_irods_request_create',
kwargs={'project': self.project.sodar_uuid},
),
data={'path': self.path},
)
self.assertEqual(IrodsDataRequest.objects.count(), 1)
# Assert response
self.assertEqual(response.status_code, 200)
self.assertEqual(response.data['detail'], 'ok')
self.assertEqual(response.data['status'], 'ACTIVE')
self.assertEqual(self._get_create_alert_count(self.user), 1)
self.assertEqual(self._get_create_alert_count(self.user_delegate), 1)
def test_create_exists_same_user(self):
"""Test creating delete request if request for same user exists"""
with self.login(self.user_contrib):
self.client.post(
reverse(
'samplesheets:ajax_irods_request_create',
kwargs={'project': self.project.sodar_uuid},
),
data={'path': self.path},
)
self.assertEqual(IrodsDataRequest.objects.count(), 1)
self.assertEqual(self._get_create_alert_count(self.user), 1)
self.assertEqual(self._get_create_alert_count(self.user_delegate), 1)
with self.login(self.user_contrib):
response = self.client.post(
reverse(
'samplesheets:ajax_irods_request_create',
kwargs={'project': self.project.sodar_uuid},
),
data={'path': self.path},
)
# Assert response
self.assertEqual(response.status_code, 400)
self.assertEqual(
response.data['detail'], 'active request for path already exists'
)
self.assertEqual(self._get_create_alert_count(self.user), 1)
self.assertEqual(self._get_create_alert_count(self.user_delegate), 1)
def test_create_exists_as_admin_by_contributor(self):
"""Test creating request as admin if request from contributor exists"""
with self.login(self.user_contrib):
self.client.post(
reverse(
'samplesheets:ajax_irods_request_create',
kwargs={'project': self.project.sodar_uuid},
),
data={'path': self.path},
)
self.assertEqual(IrodsDataRequest.objects.count(), 1)
with self.login(self.user):
response = self.client.post(
reverse(
'samplesheets:ajax_irods_request_create',
kwargs={'project': self.project.sodar_uuid},
),
data={'path': self.path},
)
# Assert response
self.assertEqual(response.status_code, 400)
self.assertEqual(
response.data['detail'], 'active request for path already exists'
)
def test_create_exists_as_contributor_by_contributor2(self):
"""Test creating request as contributor if request by contributor2 exists"""
with self.login(self.user_contrib):
self.client.post(
reverse(
'samplesheets:ajax_irods_request_create',
kwargs={'project': self.project.sodar_uuid},
),
data={'path': self.path},
)
self.assertEqual(IrodsDataRequest.objects.count(), 1)
with self.login(self.user_contrib2):
response = self.client.post(
reverse(
'samplesheets:ajax_irods_request_create',
kwargs={'project': self.project.sodar_uuid},
),
data={'path': self.path},
)
# Assert response
self.assertEqual(response.status_code, 400)
self.assertEqual(
response.data['detail'], 'active request for path already exists'
)
def test_create_multiple(self):
"""Test creating multiple delete requests"""
path2 = os.path.join(self.assay_path, TEST_FILE_NAME2)
path2_md5 = os.path.join(self.assay_path, TEST_FILE_NAME2 + '.md5')
self.irods_session.data_objects.create(path2)
self.irods_session.data_objects.create(path2_md5)
self.assertEqual(IrodsDataRequest.objects.count(), 0)
self.assertEqual(self._get_create_alert_count(self.user), 0)
self.assertEqual(self._get_create_alert_count(self.user_delegate), 0)
with self.login(self.user):
self.client.post(
reverse(
'samplesheets:ajax_irods_request_create',
kwargs={'project': self.project.sodar_uuid},
),
data={'path': self.path},
)
with self.login(self.user):
self.client.post(
reverse(
'samplesheets:ajax_irods_request_create',
kwargs={'project': self.project.sodar_uuid},
),
data={'path': path2},
)
self.assertEqual(IrodsDataRequest.objects.count(), 2)
self.assertEqual(self._get_create_alert_count(self.user), 1)
self.assertEqual(self._get_create_alert_count(self.user_delegate), 1)
@skipIf(not IRODS_BACKEND_ENABLED, IRODS_BACKEND_SKIP_MSG)
class TestIrodsRequestDeleteAjaxView(TestIrodsRequestViewsBase):
"""Tests for IrodsRequestDeleteAjaxView"""
def test_delete_request(self):
"""Test GET request for deleting an existing delete request"""
# Create request
with self.login(self.user_contrib):
self.client.post(
reverse(
'samplesheets:ajax_irods_request_create',
kwargs={'project': self.project.sodar_uuid},
),
data={'path': self.path},
)
self.assertEqual(IrodsDataRequest.objects.count(), 1)
self.assertEqual(self._get_create_alert_count(self.user), 1)
self.assertEqual(self._get_create_alert_count(self.user_delegate), 1)
# Delete request
with self.login(self.user_contrib):
response = self.client.post(
reverse(
'samplesheets:ajax_irods_request_delete',
kwargs={'project': self.project.sodar_uuid},
),
data={'path': self.path},
)
self.assertEqual(IrodsDataRequest.objects.count(), 0)
# Assert response
self.assertEqual(response.status_code, 200)
self.assertEqual(response.data['detail'], 'ok')
self.assertEqual(response.data['status'], None)
self.assertEqual(self._get_create_alert_count(self.user), 0)
self.assertEqual(self._get_create_alert_count(self.user_delegate), 0)
def test_delete_request_as_admin_by_contributor(self):
"""Test deleting an existing delete request"""
# Create request
with self.login(self.user_contrib):
self.client.post(
reverse(
'samplesheets:ajax_irods_request_create',
kwargs={'project': self.project.sodar_uuid},
),
data={'path': self.path},
)
self.assertEqual(IrodsDataRequest.objects.count(), 1)
# Delete request
with self.login(self.user):
response = self.client.post(
reverse(
'samplesheets:ajax_irods_request_delete',
kwargs={'project': self.project.sodar_uuid},
),
data={'path': self.path},
)
self.assertEqual(IrodsDataRequest.objects.count(), 0)
# Assert response
self.assertEqual(response.status_code, 200)
self.assertEqual(response.data['detail'], 'ok')
self.assertEqual(response.data['status'], None)
def test_delete_request_as_contributor_by_contributor2(self):
"""Test GET request for deleting an existing delete request"""
# Create request
with self.login(self.user_contrib):
self.client.post(
reverse(
'samplesheets:ajax_irods_request_create',
kwargs={'project': self.project.sodar_uuid},
),
data={'path': self.path},
)
self.assertEqual(IrodsDataRequest.objects.count(), 1)
# Delete request
with self.login(self.user_contrib2):
response = self.client.post(
reverse(
'samplesheets:ajax_irods_request_delete',
kwargs={'project': self.project.sodar_uuid},
),
data={'path': self.path},
)
self.assertEqual(IrodsDataRequest.objects.count(), 1)
# Assert response
self.assertEqual(response.status_code, 403)
self.assertEqual(
response.data['detail'], 'User not allowed to delete request'
)
def test_delete_request_doesnt_exist(self):
"""Test deleting a delete request that doesn't exist"""
self.assertEqual(IrodsDataRequest.objects.count(), 0)
# Delete request
with self.login(self.user):
response = self.client.post(
reverse(
'samplesheets:ajax_irods_request_delete',
kwargs={'project': self.project.sodar_uuid},
),
data={'path': self.path},
)
# Assert response
self.assertEqual(response.status_code, 404)
self.assertEqual(response.data['detail'], 'Request not found')
def test_delete_one_of_multiple(self):
"""Test deleting one of multiple requests"""
path2 = os.path.join(self.assay_path, TEST_FILE_NAME2)
path2_md5 = os.path.join(self.assay_path, TEST_FILE_NAME2 + '.md5')
self.irods_session.data_objects.create(path2)
self.irods_session.data_objects.create(path2_md5)
self.assertEqual(IrodsDataRequest.objects.count(), 0)
self.assertEqual(self._get_create_alert_count(self.user), 0)
self.assertEqual(self._get_create_alert_count(self.user_delegate), 0)
with self.login(self.user_contrib):
self.client.post(
reverse(
'samplesheets:ajax_irods_request_create',
kwargs={'project': self.project.sodar_uuid},
),
data={'path': self.path},
)
self.client.post(
reverse(
'samplesheets:ajax_irods_request_create',
kwargs={'project': self.project.sodar_uuid},
),
data={'path': path2},
)
self.assertEqual(IrodsDataRequest.objects.count(), 2)
self.assertEqual(self._get_create_alert_count(self.user), 1)
self.assertEqual(
self._get_create_alert_count(self.user_delegate), 1
)
self.client.post(
reverse(
'samplesheets:ajax_irods_request_delete',
kwargs={'project': self.project.sodar_uuid},
),
data={'path': self.path},
)
self.assertEqual(IrodsDataRequest.objects.count(), 1)
self.assertEqual(self._get_create_alert_count(self.user), 1)
self.assertEqual(
self._get_create_alert_count(self.user_delegate), 1
)
@skipIf(not IRODS_BACKEND_ENABLED, IRODS_BACKEND_SKIP_MSG)
class TestIrodsObjectListAjaxView(TestIrodsRequestViewsBase):
"""Tests for IrodsObjectListAjaxView"""
def test_get_coll_obj_with_delete_request(self):
"""Test listing collection with data object with delete request"""
# Create request
with self.login(self.user_contrib):
self.client.post(
reverse(
'samplesheets:ajax_irods_request_create',
kwargs={'project': self.project.sodar_uuid},
),
data={'path': self.path},
)
self.assertEqual(IrodsDataRequest.objects.count(), 1)
with self.login(self.user_contrib):
response = self.client.get(
reverse(
'samplesheets:ajax_irods_objects',
kwargs={'project': self.project.sodar_uuid},
),
data={'path': self.assay_path},
)
self.assertEqual(response.status_code, 200)
self.assertEqual(response.json()['irods_data'][0]['name'], 'test1')
self.assertEqual(response.json()['irods_data'][0]['path'], self.path)
self.assertEqual(
response.json()['irods_data'][0]['irods_request_status'],
'ACTIVE',
)
def test_get_empty_coll(self):
"""Test GET request for listing an empty collection in iRODS"""
self.irods_session.data_objects.get(self.path).unlink(force=True)
self.irods_session.data_objects.get(self.path_md5).unlink(force=True)
with self.login(self.user):
response = self.client.get(
reverse(
'samplesheets:ajax_irods_objects',
kwargs={'project': self.project.sodar_uuid},
),
data={'path': self.assay_path},
)
self.assertEqual(response.status_code, 200)
self.assertEqual(len(response.data['irods_data']), 0)
def test_get_coll_obj(self):
"""Test GET request for listing a collection with a data object"""
with self.login(self.user):
response = self.client.get(
reverse(
'samplesheets:ajax_irods_objects',
kwargs={'project': self.project.sodar_uuid},
),
data={'path': self.assay_path},
)
self.assertEqual(response.status_code, 200)
self.assertEqual(len(response.data['irods_data']), 1)
list_obj = response.data['irods_data'][0]
self.assertNotIn('md5_file', list_obj)
self.assertEqual(self.file_obj.name, list_obj['name'])
self.assertEqual(self.file_obj.path, list_obj['path'])
self.assertEqual(self.file_obj.size, 0)
def test_get_coll_not_found(self):
"""Test GET request for listing a collection which doesn't exist"""
fail_path = self.assay_path + '/' + IRODS_FAIL_COLL
self.assertEqual(
self.irods_session.collections.exists(fail_path), False
)
with self.login(self.user):
response = self.client.get(
reverse(
'samplesheets:ajax_irods_objects',
kwargs={'project': self.project.sodar_uuid},
),
data={'path': fail_path},
)
self.assertEqual(response.status_code, 404)
def test_get_coll_not_in_project(self):
"""Test GET request for listing a collection not belonging to project"""
self.assertEqual(
self.irods_session.collections.exists(IRODS_NON_PROJECT_PATH), True
)
with self.login(self.user):
response = self.client.get(
reverse(
'samplesheets:ajax_irods_objects',
kwargs={'project': self.project.sodar_uuid},
),
data={'path': IRODS_NON_PROJECT_PATH},
)
self.assertEqual(response.status_code, 400)
def test_get_no_access(self):
"""Test GET request for listing with no acces for the iRODS folder"""
new_user = self.make_user('new_user')
self._make_assignment(
self.project, new_user, self.role_contributor
) # No taskflow
with self.login(new_user):
response = self.client.get(
reverse(
'samplesheets:ajax_irods_objects',
kwargs={'project': self.project.sodar_uuid},
),
data={'path': self.assay_path},
)
self.assertEqual(response.status_code, 403)
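The tests above repeatedly pair each iRODS data object path with its `.md5` checksum sidecar (e.g. `TEST_FILE_NAME2` and `TEST_FILE_NAME2 + '.md5'` under the assay path). A minimal stdlib-only sketch of that naming convention; `md5_companion` and the example paths are hypothetical, not part of samplesheets:

```python
import posixpath  # iRODS paths are always slash-separated


def md5_companion(coll_path, file_name):
    """Return a data object path and its '.md5' checksum sidecar path."""
    path = posixpath.join(coll_path, file_name)
    return path, path + '.md5'


path2, path2_md5 = md5_companion('/sodarZone/projects/assay', 'test2.txt')
```

The same two-path pattern appears in `test_create_multiple` and `test_delete_one_of_multiple` above.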
# --- halotools/mock_observables/pair_counters/test_pair_counters/test_positional_marked_npairs_xy_z.py (nehapjoshi/halotools, BSD-3-Clause) ---
"""
"""
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import pytest
from ..positional_marked_npairs_xy_z import positional_marked_npairs_xy_z
from ..npairs_xy_z import npairs_xy_z
from ...tests.cf_helpers import (
generate_3d_regular_mesh,
generate_locus_of_3d_points,
)
from ....utils.vector_utilities import (
normalized_vectors,
angles_between_list_of_vectors,
)
slow = pytest.mark.slow
__all__ = (
"test_1",
"test_2",
"test_3",
"test_4",
"test_threading",
"test_unweighted_counts",
)
def generate_interlacing_grids(npts_per_dim, period=1.0):
"""
return two sets of interlaced points on a grid
"""
dmin, dmax = 0.0, period
dx = (dmax - dmin) / float(npts_per_dim)
mesh1_points = generate_3d_regular_mesh(npts_per_dim, dmin=dmin, dmax=dmax)
mesh2_points = mesh1_points + dx / 2.0
return mesh1_points, mesh2_points
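`generate_interlacing_grids` offsets a second mesh by half a cell width so the two point sets interleave without ever coinciding. A stdlib-only one-dimensional reduction of the same construction, purely illustrative (the exact origin used by `generate_3d_regular_mesh` may differ):

```python
def interlaced_1d(npts_per_dim, period=1.0):
    # mesh2 is mesh1 shifted by half a cell width dx, so no point of one
    # mesh ever lands on a point of the other.
    dx = period / float(npts_per_dim)
    mesh1 = [i * dx for i in range(npts_per_dim)]
    mesh2 = [x + dx / 2.0 for x in mesh1]
    return mesh1, mesh2


mesh1, mesh2 = interlaced_1d(4)
```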
def generate_aligned_vectors(npts, dim=2):
"""
return a set of aligned vectors, all pointing in a random direction
"""
vector = normalized_vectors(np.random.random(dim))
vectors = np.tile(vector, npts).reshape((npts, dim))
return vectors
def test_1():
"""
test weighting function 1
"""
# generate two loci of points
npts = 100
epsilon = 0.001
# cluster 1
coords1 = generate_locus_of_3d_points(npts, 0.1, 0.1, 0.1, epsilon=epsilon)
# cluster 2
coords2 = generate_locus_of_3d_points(npts, 0.9, 0.9, 0.9, epsilon=epsilon)
# generate orientation vectors for cluster 1
vectors1 = generate_aligned_vectors(len(coords1))
# calculate dot product between vectors1 and cluster 2
rp = np.sqrt((0.9 - 0.1) ** 2 + (0.9 - 0.1) ** 2)
pi = 0.9 - 0.1
# s, vector between coords1 and cluster2
sp = np.zeros((npts, 2))
sp[:, 0] = 0.9 - coords1[:, 0]
sp[:, 1] = 0.9 - coords1[:, 1]
# calculate dot product between orientation and direction between cluster 1 and 2
angles = angles_between_list_of_vectors(vectors1, sp)
costheta = np.cos(angles) # dot product between vectors
avg_costheta = np.mean(costheta)
# define radial bins
rp_bins = np.array([0.0, 0.1, rp + 2.0 * epsilon])
pi_bins = np.array([0.0, 0.1, pi + 2.0 * epsilon])
# define weights appropriate for the weighting function
weights1 = np.ones((npts, 3))
weights1[:, 1] = vectors1[:, 0]
weights1[:, 2] = vectors1[:, 1]
weights2 = np.ones(npts)
# calculate weighted counts
weighted_counts, counts = positional_marked_npairs_xy_z(
coords1,
coords2,
rp_bins,
pi_bins,
period=None,
weights1=weights1,
weights2=weights2,
weight_func_id=1,
num_threads=1,
)
msg = "weighted counts do not match expected result given the weighting function"
assert np.isclose(
weighted_counts[-1, -1], avg_costheta * counts[-1, -1], rtol=1.0 / npts
), msg
def test_2():
"""
test weighting function 2
"""
# generate two loci of points
npts = 100
epsilon = 0.001
# cluster 1
coords1 = generate_locus_of_3d_points(npts, 0.1, 0.1, 0.1, epsilon=epsilon)
# cluster 2
coords2 = generate_locus_of_3d_points(npts, 0.9, 0.9, 0.9, epsilon=epsilon)
# generate orientation vectors for cluster 1
vectors1 = generate_aligned_vectors(len(coords1))
# calculate dot product between vectors1 and cluster 2
rp = np.sqrt((0.9 - 0.1) ** 2 + (0.9 - 0.1) ** 2)
pi = 0.9 - 0.1
# s, vector between coords1 and cluster2
sp = np.zeros((npts, 2))
sp[:, 0] = 0.9 - coords1[:, 0]
sp[:, 1] = 0.9 - coords1[:, 1]
# calculate dot product between orientation and direction between cluster 1 and 2
angles = angles_between_list_of_vectors(vectors1, sp)
avg_two_costheta_1 = np.mean(np.cos(2.0 * angles))
avg_two_costheta_2 = np.mean(2.0 * np.cos(angles) * np.cos(angles) - 1.0)
assert np.isclose(
avg_two_costheta_1, avg_two_costheta_2
) # test trig identity used in weighting function
avg_two_costheta = avg_two_costheta_2
# define radial bins
rp_bins = np.array([0.0, 0.1, rp + 2.0 * epsilon])
pi_bins = np.array([0.0, 0.1, pi + 2.0 * epsilon])
# define weights appropriate for the weighting function
weights1 = np.ones((npts, 3))
weights1[:, 1] = vectors1[:, 0]
weights1[:, 2] = vectors1[:, 1]
weights2 = np.ones(npts)
# calculate weighted counts
weighted_counts, counts = positional_marked_npairs_xy_z(
coords1,
coords2,
rp_bins,
pi_bins,
period=None,
weights1=weights1,
weights2=weights2,
weight_func_id=2,
num_threads=1,
)
msg = "weighted counts do not match expected result given the weighting function"
assert np.isclose(
weighted_counts[-1, -1], avg_two_costheta * counts[-1, -1], rtol=1.0 / npts
), msg
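test_2 leans on the double-angle identity cos(2θ) = 2cos²θ − 1 to rewrite the expected mark. A quick stdlib spot-check of that identity at an arbitrary angle (the value 0.7 rad is an arbitrary choice for illustration):

```python
import math

theta = 0.7  # arbitrary angle in radians
lhs = math.cos(2.0 * theta)
rhs = 2.0 * math.cos(theta) ** 2 - 1.0
# lhs and rhs agree to floating-point precision for any theta
```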
def test_3():
"""
test weighting function 3
"""
# generate two loci of points
npts = 100
epsilon = 0.001
# cluster 1
coords1 = generate_locus_of_3d_points(npts, 0.1, 0.1, 0.1, epsilon=epsilon)
# cluster 2
coords2 = generate_locus_of_3d_points(npts, 0.9, 0.9, 0.9, epsilon=epsilon)
# generate orientation vectors for cluster 1
vectors1 = generate_aligned_vectors(len(coords1))
# calculate dot product between vectors1 and cluster 2
rp = np.sqrt((0.9 - 0.1) ** 2 + (0.9 - 0.1) ** 2)
pi = 0.9 - 0.1
# s, vector between coords1 and cluster2
sp = np.zeros((npts, 2))
sp[:, 0] = 0.9 - coords1[:, 0]
sp[:, 1] = 0.9 - coords1[:, 1]
# calculate dot product between orientation and direction between cluster 1 and 2
angles = angles_between_list_of_vectors(vectors1, sp)
avg_two_sintheta = np.mean(np.sin(2.0 * angles))
# define radial bins
rp_bins = np.array([0.0, 0.1, rp + 2.0 * epsilon])
pi_bins = np.array([0.0, 0.1, pi + 2.0 * epsilon])
# define weights appropriate for the weighting function
weights1 = np.ones((npts, 3))
weights1[:, 1] = vectors1[:, 0]
weights1[:, 2] = vectors1[:, 1]
weights2 = np.ones(npts)
# calculate weighted counts
weighted_counts, counts = positional_marked_npairs_xy_z(
coords1,
coords2,
rp_bins,
pi_bins,
period=None,
weights1=weights1,
weights2=weights2,
weight_func_id=3,
num_threads=1,
)
msg = "weighted counts do not match expected result given the weighting function"
assert np.isclose(
weighted_counts[-1, -1], avg_two_sintheta * counts[-1, -1], rtol=1.0 / npts
), msg
def test_4():
"""
test weighting function 4
"""
# generate two loci of points
npts = 100
epsilon = 0.001
# cluster 1
coords1 = generate_locus_of_3d_points(npts, 0.1, 0.1, 0.1, epsilon=epsilon)
# cluster 2
coords2 = generate_locus_of_3d_points(npts, 0.9, 0.9, 0.9, epsilon=epsilon)
# generate orientation vectors for cluster 1
vectors1 = generate_aligned_vectors(len(coords1))
# calculate dot product between vectors1 and cluster 2
rp = np.sqrt((0.9 - 0.1) ** 2 + (0.9 - 0.1) ** 2)
pi = 0.9 - 0.1
# s, vector between coords1 and cluster2
sp = np.zeros((npts, 2))
sp[:, 0] = 0.9 - coords1[:, 0]
sp[:, 1] = 0.9 - coords1[:, 1]
# calculate dot product between orientation and direction between cluster 1 and 2
angles = angles_between_list_of_vectors(vectors1, sp)
costheta_squared = np.cos(angles) * np.cos(angles) # dot product between vectors
avg_costheta_squared = np.mean(costheta_squared)
# define radial bins
rp_bins = np.array([0.0, 0.1, rp + 2.0 * epsilon])
pi_bins = np.array([0.0, 0.1, pi + 2.0 * epsilon])
# define weights appropriate for the weighting function
weights1 = np.ones((npts, 3))
weights1[:, 1] = vectors1[:, 0]
weights1[:, 2] = vectors1[:, 1]
weights2 = np.ones(npts)
# calculate weighted counts
weighted_counts, counts = positional_marked_npairs_xy_z(
coords1,
coords2,
rp_bins,
pi_bins,
period=None,
weights1=weights1,
weights2=weights2,
weight_func_id=4,
num_threads=1,
)
msg = "weighted counts do not match expected result given the weighting function"
assert np.isclose(
weighted_counts[-1, -1], avg_costheta_squared * counts[-1, -1], rtol=1.0 / npts
), msg
def test_threading():
"""
test to make sure the result is the same with and without threading for each weighting function
"""
npts = 100
random_coords = np.random.random((npts, 3))
random_vectors = np.random.random((npts, 3)) * 2.0 - 1.0
period = np.array([1.0, 1.0, 1.0])
rp_bins = np.linspace(0.0, 0.3, 5)
pi_bins = np.linspace(0.0, 0.3, 5)
weights1 = np.ones((npts, 3))
weights1[:, 1] = random_vectors[:, 0]
weights1[:, 2] = random_vectors[:, 1]
weights2 = np.ones(npts)
msg = "counts do not match for different ``num_threads``."
weighted_counts_1, counts_1 = positional_marked_npairs_xy_z(
random_coords,
random_coords,
rp_bins,
pi_bins,
period=period,
weights1=weights1,
weights2=weights2,
weight_func_id=1,
num_threads=1,
)
weighted_counts_2, counts_2 = positional_marked_npairs_xy_z(
random_coords,
random_coords,
rp_bins,
pi_bins,
period=period,
weights1=weights1,
weights2=weights2,
weight_func_id=1,
num_threads=3,
)
assert np.allclose(weighted_counts_1, weighted_counts_2), msg
assert np.allclose(counts_1, counts_2), msg
weighted_counts_1, counts_1 = positional_marked_npairs_xy_z(
random_coords,
random_coords,
rp_bins,
pi_bins,
period=period,
weights1=weights1,
weights2=weights2,
weight_func_id=2,
num_threads=1,
)
weighted_counts_2, counts_2 = positional_marked_npairs_xy_z(
random_coords,
random_coords,
rp_bins,
pi_bins,
period=period,
weights1=weights1,
weights2=weights2,
weight_func_id=2,
num_threads=3,
)
assert np.allclose(weighted_counts_1, weighted_counts_2), msg
assert np.allclose(counts_1, counts_2), msg
weighted_counts_1, counts_1 = positional_marked_npairs_xy_z(
random_coords,
random_coords,
rp_bins,
pi_bins,
period=period,
weights1=weights1,
weights2=weights2,
weight_func_id=3,
num_threads=1,
)
weighted_counts_2, counts_2 = positional_marked_npairs_xy_z(
random_coords,
random_coords,
rp_bins,
pi_bins,
period=period,
weights1=weights1,
weights2=weights2,
weight_func_id=3,
num_threads=3,
)
assert np.allclose(weighted_counts_1, weighted_counts_2), msg
assert np.allclose(counts_1, counts_2), msg
weighted_counts_1, counts_1 = positional_marked_npairs_xy_z(
random_coords,
random_coords,
rp_bins,
pi_bins,
period=period,
weights1=weights1,
weights2=weights2,
weight_func_id=4,
num_threads=1,
)
weighted_counts_2, counts_2 = positional_marked_npairs_xy_z(
random_coords,
random_coords,
rp_bins,
pi_bins,
period=period,
weights1=weights1,
weights2=weights2,
weight_func_id=4,
num_threads=3,
)
assert np.allclose(weighted_counts_1, weighted_counts_2), msg
assert np.allclose(counts_1, counts_2), msg
def test_unweighted_counts():
"""
test to make sure the unweighted counts result is the same as npairs_3d
"""
npts = 100
random_coords = np.random.random((npts, 3))
random_vectors = np.random.random((npts, 3)) * 2.0 - 1.0
period = np.array([1.0, 1.0, 1.0])
rp_bins = np.linspace(0.0, 0.3, 5)
pi_bins = np.linspace(0.0, 0.3, 5)
weights1 = np.ones((npts, 3))
weights1[:, 1] = random_vectors[:, 0]
weights1[:, 2] = random_vectors[:, 1]
weights2 = np.ones(npts)
weighted_counts_1, counts_1 = positional_marked_npairs_xy_z(
random_coords,
random_coords,
rp_bins,
pi_bins,
period=period,
weights1=weights1,
weights2=weights2,
weight_func_id=1,
num_threads=1,
)
counts_2 = npairs_xy_z(
random_coords, random_coords, rp_bins, pi_bins, period=period, num_threads=3
)
msg = "unweighted counts do no match npairs_3d result"
assert np.allclose(counts_1, counts_2), msg
# --- cvxpy/cvxcore/python/__init__.py (jasondark/cvxpy, ECL-2.0 / Apache-2.0) ---
# TODO(akshayka): This is a hack; the swig-auto-generated cvxcore.py
# tries to import cvxcore as `from . import _cvxcore`
import _cvxcore
# --- tests/test_parse_stimulus_elements.py (learningsimulator/learningsimulator, MIT) ---
from .testutil import LsTestCase
from keywords import STIMULUS_ELEMENTS
from parsing import Script
def parse(text):
script = Script(text)
script.parse()
return script.script_parser.parameters.val[STIMULUS_ELEMENTS]
class TestBasic(LsTestCase):
def setUp(self):
pass
def test_simple(self):
text = '''
stimulus_elements: B1, b1, B2, b2
'''
stimulus_elements = parse(text)
self.assertEqual(stimulus_elements, {'B1', 'b1', 'B2', 'b2'})
text = '''
stimulus_elements : B1,b1, B2, b2
'''
stimulus_elements = parse(text)
self.assertEqual(stimulus_elements, {'B1', 'b1', 'B2', 'b2'})
def test_multiline(self):
text = '''
stimulus_elements: b1, b2,
b3, b4
'''
stimulus_elements = parse(text)
self.assertEqual(stimulus_elements, {'b1', 'b2', 'b3', 'b4'})
text = '''
stimulus_elements : b1,
b2, b3,
b4,
b5
'''
stimulus_elements = parse(text)
self.assertEqual(stimulus_elements, {'b1', 'b2', 'b3', 'b4', 'b5'})
def test_redefinition(self):
text = '''
stimulus_elements: b1, b2, b3, b4
stimulus_elements: x1, x2
'''
stimulus_elements = parse(text)
self.assertEqual(stimulus_elements, {'x1', 'x2'})
text = '''
stimulus_elements: x1, x2
stimulus_elements: b1, b2, b3, b4
'''
stimulus_elements = parse(text)
self.assertEqual(stimulus_elements, {'b1', 'b2', 'b3', 'b4'})
text = '''
stimulus_elements: b1, b2,
b3, b4
stimulus_elements: x1,
x2
'''
stimulus_elements = parse(text)
self.assertEqual(stimulus_elements, {'x1', 'x2'})
text = '''
stimulus_elements: x1, x2
stimulus_elements: b1,
b2, b3,
b4
'''
stimulus_elements = parse(text)
self.assertEqual(stimulus_elements, {'b1', 'b2', 'b3', 'b4'})
class TestParsestimulus_elementsErrors(LsTestCase):
def setUp(self):
pass
def test_empty_name(self):
text = '''
stimulus_elements: b1, , b2, b3
'''
msg = "Error on line 2: Found empty stimulus element name."
with self.assertRaisesMsg(msg):
parse(text)
def test_duplicate(self):
text = '''
stimulus_elements: b1, b2, b3, b4, b2, b1
'''
msg = "Error on line 2: The stimulus element name 'b2' occurs more than once."
with self.assertRaisesMsg(msg):
parse(text)
text = '''
stimulus_elements: b1, b2, b3,
b4, b2, b1
'''
msg = "Error on line 3: The stimulus element name 'b2' occurs more than once."
with self.assertRaisesMsg(msg):
parse(text)
def test_stimulus_element_is_behavior(self):
text = '''
behaviors: e1, e2, e3
stimulus_elements: b1, b2, b3, b4, e2
'''
msg = "Error on line 3: The stimulus element name 'e2' is invalid, since it is a behavior name."
with self.assertRaisesMsg(msg):
parse(text)
text = '''
behaviors: e1, e2, e3
stimulus_elements: b1, b2, b3, b4,
e2, b1
'''
msg = "Error on line 4: The stimulus element name 'e2' is invalid, since it is a behavior name."
with self.assertRaisesMsg(msg):
parse(text)
def test_stimulus_element_is_variable(self):
text = '''
@variables v1:1.2, v2:2.3, v3:3.4
stimulus_elements: b1, b2, b3, b4, v2, v3, v1
'''
msg = "Error on line 3: The stimulus element name 'v2' is invalid, since it is a variable name."
with self.assertRaisesMsg(msg):
parse(text)
text = '''
@variables v1:1.2, v2:2.3, v3:3.4
stimulus_elements: b1, b2, b3, b4,
v2
'''
msg = "Error on line 4: The stimulus element name 'v2' is invalid, since it is a variable name."
with self.assertRaisesMsg(msg):
parse(text)
def test_invalid_identifier(self):
text = '''
@variables v1:1.2, v2:2.3, v3:3.4
stimulus_elements: b1, b2, b3, b4, v2. v3, v1
'''
msg = "Error on line 3: Stimulus element name 'v2. v3' is not a valid identifier."
with self.assertRaisesMsg(msg):
parse(text)
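The duplicate-name errors exercised above come down to scanning a comma-separated element list for a repeated identifier. A stdlib-only sketch of that check; `first_duplicate` is a hypothetical helper, not part of the learningsimulator parser:

```python
def first_duplicate(csv_text):
    """Return the first element name that occurs more than once, else None."""
    seen = set()
    for name in (part.strip() for part in csv_text.split(',')):
        if name in seen:
            return name
        seen.add(name)
    return None
```

Run against the list from `test_duplicate`, this flags 'b2', the name the expected error message reports.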
# --- tests/manage/z_cluster/conftest.py (annagitel/ocs-ci, MIT) ---
# -*- coding: utf8 -*-
import logging
import pytest
from ocs_ci.ocs.fiojob import workload_fio_storageutilization
logger = logging.getLogger(__name__)
@pytest.fixture(scope="function")
def workload_storageutilization_rbd(
request,
project,
fio_pvc_dict,
fio_job_dict,
fio_configmap_dict,
measurement_dir,
tmp_path,
supported_configuration,
):
"""
In order to use this fixture you need to pass 3 indirect parameters:
target_percentage (float): the percentage storage utilization(from 0.01 to 0.99).
keep_fio_data (bool): indicate if you want to keep the fio data after the test is finished.
minimal_time (int): Minimal number of seconds to monitor a system
(See more details in the function 'measure_operation').
For example: Let's say I want to use workload_storageutilization_rbd fixture with
'target_percentage'=0.25, 'keep_fio_data'=True, 'minimal_time'=120
then In my test I will specify these parameters:
@pytest.mark.parametrize("workload_storageutilization_rbd",
[(0.25, True, 120)], indirect=["workload_storageutilization_rbd"])
"""
target_percentage, keep_fio_data, minimal_time = request.param
percent_to_fill = int(target_percentage * 100)
fixture_name = f"workload_storageutilization_{percent_to_fill}p_rbd"
measured_op = workload_fio_storageutilization(
fixture_name,
project,
fio_pvc_dict,
fio_job_dict,
fio_configmap_dict,
measurement_dir,
tmp_path,
target_percentage=target_percentage,
keep_fio_data=keep_fio_data,
minimal_time=minimal_time,
)
return measured_op
@pytest.fixture(scope="function")
def workload_storageutilization_cephfs(
request,
project,
fio_pvc_dict,
fio_job_dict,
fio_configmap_dict,
measurement_dir,
tmp_path,
supported_configuration,
):
"""
In order to use this fixture you need to pass 3 indirect parameters:
target_percentage (float): the percentage storage utilization(from 0.01 to 0.99).
keep_fio_data (bool): indicate if you want to keep the fio data after the test is finished.
minimal_time (int): Minimal number of seconds to monitor a system
(See more details in the function 'measure_operation').
For example: Let's say I want to use workload_storageutilization_cephfs fixture with
'target_percentage'=0.25, 'keep_fio_data'=True, 'minimal_time'=120
then In my test I will specify these parameters:
@pytest.mark.parametrize("workload_storageutilization_cephfs",
[(0.25, True, 120)], indirect=["workload_storageutilization_cephfs"])
"""
target_percentage, keep_fio_data, minimal_time = request.param
percent_to_fill = int(target_percentage * 100)
fixture_name = f"workload_storageutilization_{percent_to_fill}p_cephfs"
measured_op = workload_fio_storageutilization(
fixture_name,
project,
fio_pvc_dict,
fio_job_dict,
fio_configmap_dict,
measurement_dir,
tmp_path,
target_percentage=target_percentage,
keep_fio_data=keep_fio_data,
minimal_time=minimal_time,
)
return measured_op
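Both fixtures derive a fixture name from the requested utilization before delegating to `workload_fio_storageutilization`. A standalone sketch of that mapping (`fixture_name_for` is a hypothetical helper that mirrors the `percent_to_fill` logic above):

```python
def fixture_name_for(target_percentage, storage_type):
    # Mirrors the naming logic used by both fixtures above:
    # a target of 0.25 becomes "workload_storageutilization_25p_<type>".
    percent_to_fill = int(target_percentage * 100)
    return f"workload_storageutilization_{percent_to_fill}p_{storage_type}"

fixture_name_for(0.25, "rbd")     # "workload_storageutilization_25p_rbd"
fixture_name_for(0.5, "cephfs")   # "workload_storageutilization_50p_cephfs"
```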
# File: ptrlib/pwn/__init__.py (repo: alissonbezerra/ptrlib, license: MIT)
# coding: utf-8
from ptrlib.pwn.fsb import *
from ptrlib.pwn.sock import *
from ptrlib.pwn.proc import *
from ptrlib.pwn.robot import *
from ptrlib.pwn.dl import *
# File: lib/dfext.py (repo: VisualComputingInstitute/reid-tracking, license: MIT)
import DeepFried2 as df
def resblock(chan_in, chan_out=None, chan_mid=None, stride=1,
mkbn=lambda chan: df.BatchNormalization(chan, 0.95),
mknl=lambda: df.ReLU()):
chan_out = chan_out or chan_in
chan_mid = chan_mid or chan_in
return df.Sequential(
df.RepeatInput(
df.Sequential(
mkbn(chan_in), mknl(),
df.SpatialConvolutionCUDNN(chan_in, chan_mid, (3,3), border='same', stride=stride, init=df.init.prelu(), bias=False),
mkbn(chan_mid), mknl(),
df.SpatialConvolutionCUDNN(chan_mid, chan_out, (3,3), border='same', init=df.init.prelu()),
),
df.Identity() if chan_in == chan_out else df.SpatialConvolutionCUDNN(chan_in, chan_out, (1,1), stride=stride)
),
df.zoo.resnet.Add()
)
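`resblock` is a pre-activation residual block: two BN/ReLU/conv stages whose output is added back to the (possibly projected) input. A minimal NumPy sketch of that add-back structure, with the convolutional branch replaced by a placeholder transform (an assumption for illustration only):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_apply(x, branch):
    # df.RepeatInput feeds the same input to the branch and the identity
    # path; df.zoo.resnet.Add() then sums the two results.
    return branch(x) + x

x = np.array([-1.0, 0.5, 2.0])
y = residual_apply(x, lambda v: 0.1 * relu(v))  # stand-in for the conv branch
# y == [-1.0, 0.55, 2.2]: the input always passes through unchanged
```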
def resblock2(chan_in, chan_out=None, chan_mid=None, stride=1,
mkbn=lambda chan: df.BatchNormalization(chan, 0.95),
mknl=lambda: df.ReLU()):
chan_out = chan_out or chan_in
chan_mid = chan_mid or chan_in
identity_or_projection = df.Identity()
if chan_in != chan_out:
identity_or_projection = df.Sequential(
mkbn(chan_in), mknl(),
df.SpatialConvolutionCUDNN(chan_in, chan_out, (1,1), stride=stride, init=df.init.prelu()),
)
return df.Sequential(
df.RepeatInput(
df.Sequential(
mkbn(chan_in), mknl(),
df.SpatialConvolutionCUDNN(chan_in, chan_mid, (3,3), border='same', stride=stride, init=df.init.prelu(), bias=False),
mkbn(chan_mid), mknl(),
df.SpatialConvolutionCUDNN(chan_mid, chan_out, (3,3), border='same', init=df.init.prelu()),
),
identity_or_projection,
),
df.zoo.resnet.Add()
)
def resblock_bottle(chan_in, chan_out=None, chan_mid=None, stride=1,
mkbn=lambda chan: df.BatchNormalization(chan, 0.95),
mknl=lambda: df.ReLU()):
chan_out = chan_out or chan_in
chan_mid = chan_mid or chan_out//4
return df.Sequential(
df.RepeatInput(
df.Sequential(
mkbn(chan_in), mknl(),
df.SpatialConvolutionCUDNN(chan_in, chan_mid, (1,1), stride=stride, init=df.init.prelu(), bias=False),
mkbn(chan_mid), mknl(),
df.SpatialConvolutionCUDNN(chan_mid, chan_mid, (3,3), border='same', init=df.init.prelu(), bias=False),
mkbn(chan_mid), mknl(),
df.SpatialConvolutionCUDNN(chan_mid, chan_out, (1,1), init=df.init.prelu()),
),
df.Identity() if chan_in == chan_out else df.SpatialConvolutionCUDNN(chan_in, chan_out, (1,1), stride=stride)
),
df.zoo.resnet.Add()
)
def resblock_bottle2(chan_in, chan_out=None, chan_mid=None, stride=1,
mkbn=lambda chan: df.BatchNormalization(chan, 0.95),
mknl=lambda: df.ReLU()):
chan_out = chan_out or chan_in
chan_mid = chan_mid or chan_out//4
identity_or_projection = df.Identity()
if chan_in != chan_out:
identity_or_projection = df.Sequential(
mkbn(chan_in), mknl(),
df.SpatialConvolutionCUDNN(chan_in, chan_out, (1,1), stride=stride, init=df.init.prelu()),
)
return df.Sequential(
df.RepeatInput(
df.Sequential(
mkbn(chan_in), mknl(),
df.SpatialConvolutionCUDNN(chan_in, chan_mid, (1,1), init=df.init.prelu(), bias=False),
mkbn(chan_mid), mknl(),
df.SpatialConvolutionCUDNN(chan_mid, chan_mid, (3,3), stride=stride, border='same', init=df.init.prelu(), bias=False),
mkbn(chan_mid), mknl(),
df.SpatialConvolutionCUDNN(chan_mid, chan_out, (1,1), init=df.init.prelu()),
),
identity_or_projection,
),
df.zoo.resnet.Add()
)
def repeat_apply_merge(modules, merger, *tail):
return df.Sequential(df.RepeatInput(*modules), merger, *tail)
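`repeat_apply_merge` wires several modules to the same input and merges their outputs before running optional tail modules. A plain-function analogue of the data flow (the helper name `branches_then_merge` is hypothetical):

```python
def branches_then_merge(branches, merger, *tail):
    # Analogue of df.Sequential(df.RepeatInput(*modules), merger, *tail):
    # every branch sees the same input, the merger combines the branch
    # outputs, and any tail callables run on the merged result.
    def apply(x):
        out = merger([branch(x) for branch in branches])
        for fn in tail:
            out = fn(out)
        return out
    return apply

block = branches_then_merge([lambda x: 2 * x, lambda x: x + 3], sum, lambda x: x - 1)
block(5)  # (10 + 8) - 1 = 17
```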
def nextblock_a(chan_in, cardin, chan_out=None, chan_mid=None, stride=1,
mkbn=lambda chan: df.BatchNormalization(chan, 0.95),
mknl=lambda: df.ReLU()):
chan_out = chan_out or chan_in
chan_mid = chan_mid or chan_out//cardin//2
identity_or_projection = df.Identity()
if chan_in != chan_out:
identity_or_projection = df.Sequential(
df.SpatialConvolutionCUDNN(chan_in, chan_out, (1,1), stride=stride, init=df.init.prelu()),
mkbn(chan_out),
)
return repeat_apply_merge([
repeat_apply_merge([
df.Sequential(
df.SpatialConvolutionCUDNN(chan_in, chan_mid, (1,1), init=df.init.prelu(), bias=False),
mkbn(chan_mid), mknl(),
df.SpatialConvolutionCUDNN(chan_mid, chan_mid, (3,3), init=df.init.prelu(), bias=False,
stride=stride, border='same'),
mkbn(chan_mid), mknl(),
df.SpatialConvolutionCUDNN(chan_mid, chan_out, (1,1), init=df.init.prelu(), bias=False),
) for _ in range(cardin)
], df.zoo.resnet.Add(), mkbn(chan_out)),
identity_or_projection
], df.zoo.resnet.Add(), mknl())
def nextblock_b(chan_in, cardin, chan_out=None, chan_mid=None, stride=1,
mkbn=lambda chan: df.BatchNormalization(chan, 0.95),
mknl=lambda: df.ReLU()):
chan_out = chan_out or chan_in
chan_mid = chan_mid or chan_out//cardin//2
identity_or_projection = df.Identity()
if chan_in != chan_out:
identity_or_projection = df.Sequential(
df.SpatialConvolutionCUDNN(chan_in, chan_out, (1,1), stride=stride, init=df.init.prelu()),
mkbn(chan_out),
)
return repeat_apply_merge([
repeat_apply_merge([
df.Sequential(
df.SpatialConvolutionCUDNN(chan_in, chan_mid, (1,1), init=df.init.prelu(), bias=False),
mkbn(chan_mid), mknl(),
df.SpatialConvolutionCUDNN(chan_mid, chan_mid, (3,3), init=df.init.prelu(), bias=False,
stride=stride, border='same'),
mkbn(chan_mid), mknl(),
) for _ in range(cardin)
],
df.Concat(),
df.SpatialConvolutionCUDNN(chan_mid*cardin, chan_out, (1,1), init=df.init.prelu(), bias=False),
mkbn(chan_out)
),
identity_or_projection
], df.zoo.resnet.Add(), mknl())
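In `nextblock_a` and `nextblock_b`, the default per-branch width follows the ResNeXt convention `chan_mid = chan_out // cardin // 2`, so e.g. 256 output channels with cardinality 32 give 4-channel branches:

```python
def default_branch_width(chan_out, cardin):
    # Mirrors the `chan_mid = chan_out // cardin // 2` default above.
    return chan_out // cardin // 2

default_branch_width(256, 32)  # 4
default_branch_width(512, 32)  # 8
```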
# File: radiomanager_sdk/api/item_api.py (repo: Pluxbox/radiomanager-python-client, license: MIT)
# coding: utf-8
"""
RadioManager
RadioManager # noqa: E501
OpenAPI spec version: 2.0
Contact: support@pluxbox.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from radiomanager_sdk.api_client import ApiClient
class ItemApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def create_item(self, **kwargs): # noqa: E501
"""Create an new item. # noqa: E501
Create item. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.create_item(async=True)
>>> result = thread.get()
:param async bool
:param ItemDataInput data: Data *(Optional)*
:return: PostSuccess
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.create_item_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.create_item_with_http_info(**kwargs) # noqa: E501
return data
def create_item_with_http_info(self, **kwargs): # noqa: E501
"""Create an new item. # noqa: E501
Create item. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.create_item_with_http_info(async=True)
>>> result = thread.get()
:param async bool
:param ItemDataInput data: Data *(Optional)*
:return: PostSuccess
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['data'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_item" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'data' in params:
body_params = params['data']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['API Key'] # noqa: E501
return self.api_client.call_api(
'/items', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='PostSuccess', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
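Every endpoint method follows the same dispatch: with `async=True` it returns a thread-like handle whose `.get()` blocks for the result, otherwise it returns the result directly. (Note that `async` became a reserved word in Python 3.7, which is why newer swagger-codegen output renames this flag to `async_req`.) A minimal sketch of the pattern using a stdlib thread pool, with `async_req` and the `compute` callable as illustrative stand-ins:

```python
from multiprocessing.pool import ThreadPool

_pool = ThreadPool(processes=1)

def dispatch(compute, async_req=False):
    # Mirrors the sync/async split in create_item above: async calls
    # return an AsyncResult handle, sync calls return the value itself.
    if async_req:
        return _pool.apply_async(compute)
    return compute()

handle = dispatch(lambda: 42, async_req=True)
result = handle.get()  # blocks until the worker thread finishes
```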
def current_item_post_structure(self, **kwargs): # noqa: E501
"""Post a current playing item, keep structure # noqa: E501
Post a current playing item, keep structure # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.current_item_post_structure(async=True)
>>> result = thread.get()
:param async bool
:param ImportItem data: Data *(Optional)*
:return: Success
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.current_item_post_structure_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.current_item_post_structure_with_http_info(**kwargs) # noqa: E501
return data
def current_item_post_structure_with_http_info(self, **kwargs): # noqa: E501
"""Post a current playing item, keep structure # noqa: E501
Post a current playing item, keep structure # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.current_item_post_structure_with_http_info(async=True)
>>> result = thread.get()
:param async bool
:param ImportItem data: Data *(Optional)*
:return: Success
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['data'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method current_item_post_structure" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'data' in params:
body_params = params['data']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['API Key'] # noqa: E501
return self.api_client.call_api(
'/items/current/structure', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Success', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def current_item_post_timing(self, **kwargs): # noqa: E501
"""Post a current playing item # noqa: E501
Post a current playing item # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.current_item_post_timing(async=True)
>>> result = thread.get()
:param async bool
:param ImportItem data: Data *(Optional)*
:return: Success
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.current_item_post_timing_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.current_item_post_timing_with_http_info(**kwargs) # noqa: E501
return data
def current_item_post_timing_with_http_info(self, **kwargs): # noqa: E501
"""Post a current playing item # noqa: E501
Post a current playing item # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.current_item_post_timing_with_http_info(async=True)
>>> result = thread.get()
:param async bool
:param ImportItem data: Data *(Optional)*
:return: Success
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['data'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method current_item_post_timing" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'data' in params:
body_params = params['data']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['API Key'] # noqa: E501
return self.api_client.call_api(
'/items/current/timing', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Success', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def delete_item_by_id(self, id, **kwargs): # noqa: E501
"""Delete item by ID. # noqa: E501
Delete item by id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.delete_item_by_id(id, async=True)
>>> result = thread.get()
:param async bool
:param int id: ID of Item **(Required)** (required)
:return: Success
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.delete_item_by_id_with_http_info(id, **kwargs) # noqa: E501
else:
(data) = self.delete_item_by_id_with_http_info(id, **kwargs) # noqa: E501
return data
def delete_item_by_id_with_http_info(self, id, **kwargs): # noqa: E501
"""Delete item by ID. # noqa: E501
Delete item by id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.delete_item_by_id_with_http_info(id, async=True)
>>> result = thread.get()
:param async bool
:param int id: ID of Item **(Required)** (required)
:return: Success
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['id'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_item_by_id" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'id' is set
if ('id' not in params or
params['id'] is None):
raise ValueError("Missing the required parameter `id` when calling `delete_item_by_id`") # noqa: E501
if 'id' in params and params['id'] < 0: # noqa: E501
raise ValueError("Invalid value for parameter `id` when calling `delete_item_by_id`, must be a value greater than or equal to `0`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in params:
path_params['id'] = params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['API Key'] # noqa: E501
return self.api_client.call_api(
'/items/{id}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Success', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_current_item(self, **kwargs): # noqa: E501
"""Get current Item # noqa: E501
Get current Item # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_current_item(async=True)
>>> result = thread.get()
:param async bool
        :param bool lastplayed: Show the last played item if there is no current item *(Optional)*
:return: ItemResult
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_current_item_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.get_current_item_with_http_info(**kwargs) # noqa: E501
return data
def get_current_item_with_http_info(self, **kwargs): # noqa: E501
"""Get current Item # noqa: E501
Get current Item # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_current_item_with_http_info(async=True)
>>> result = thread.get()
:param async bool
        :param bool lastplayed: Show the last played item if there is no current item *(Optional)*
:return: ItemResult
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['lastplayed'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_current_item" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'lastplayed' in params:
query_params.append(('lastplayed', params['lastplayed'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['API Key'] # noqa: E501
return self.api_client.call_api(
'/items/current', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ItemResult', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_item_by_id(self, id, **kwargs): # noqa: E501
"""Get extended item details by ID. # noqa: E501
Read item by id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_item_by_id(id, async=True)
>>> result = thread.get()
:param async bool
:param int id: ID of Item **(Required)** (required)
:param int external_station_id: Query on a different (content providing) station *(Optional)*
:return: ItemResult
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.get_item_by_id_with_http_info(id, **kwargs) # noqa: E501
else:
(data) = self.get_item_by_id_with_http_info(id, **kwargs) # noqa: E501
return data
def get_item_by_id_with_http_info(self, id, **kwargs): # noqa: E501
"""Get extended item details by ID. # noqa: E501
Read item by id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.get_item_by_id_with_http_info(id, async=True)
>>> result = thread.get()
:param async bool
:param int id: ID of Item **(Required)** (required)
:param int external_station_id: Query on a different (content providing) station *(Optional)*
:return: ItemResult
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['id', 'external_station_id'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_item_by_id" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'id' is set
if ('id' not in params or
params['id'] is None):
raise ValueError("Missing the required parameter `id` when calling `get_item_by_id`") # noqa: E501
if 'id' in params and params['id'] < 0: # noqa: E501
raise ValueError("Invalid value for parameter `id` when calling `get_item_by_id`, must be a value greater than or equal to `0`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in params:
path_params['id'] = params['id'] # noqa: E501
query_params = []
if 'external_station_id' in params:
query_params.append(('_external_station_id', params['external_station_id'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['API Key'] # noqa: E501
return self.api_client.call_api(
'/items/{id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ItemResult', # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
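Each `*_with_http_info` method validates its keyword arguments the same way: unknown names raise `TypeError` and out-of-range values raise `ValueError` (e.g. `limit` must be between 1 and 50 in `list_items`). A condensed sketch of that repeated validation (`validate_params` is a hypothetical helper, not part of the generated client):

```python
def validate_params(params, allowed, bounds=None):
    # Condenses the checks repeated in every *_with_http_info method above:
    # unexpected keywords -> TypeError, out-of-range values -> ValueError.
    for key in params:
        if key not in allowed:
            raise TypeError(f"Got an unexpected keyword argument '{key}'")
    for key, (low, high) in (bounds or {}).items():
        if key in params and not (low <= params[key] <= high):
            raise ValueError(f"Invalid value for parameter `{key}`")

# Passes silently, like a well-formed list_items call:
validate_params({"page": 2, "limit": 50}, {"page", "limit"},
                bounds={"page": (1, float("inf")), "limit": (1, 50)})
```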
def list_items(self, **kwargs): # noqa: E501
"""Get a list of all the items currently in your station. # noqa: E501
        Get a list of all the items currently in your station. This feature supports pagination and returns a maximum of 50 items per page.  # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.list_items(async=True)
>>> result = thread.get()
:param async bool
:param int page: Current page *(Optional)*
:param int block_id: Search on Block ID *(Optional)* `(Relation)`
:param int broadcast_id: Search on Broadcast ID *(Optional)* `(Relation)`
:param int model_type_id: Search on ModelType ID *(Optional)* `(Relation)`
:param int tag_id: Search on Tag ID *(Optional)* `(Relation)`
:param int campaign_id: Search on Campaign ID *(Optional)* `(Relation)`
:param int contact_id: Search on Contact ID *(Optional)* `(Relation)`
:param int program_draft_id: Search on Program Draft ID *(Optional)*
:param int user_draft_id: Search on User Draft ID *(Optional)*
:param int station_draft_id: Search on Station Draft ID *(Optional)*
:param int program_id: Search on Program ID *(Optional)* `(Relation)`
:param str external_id: Search on External ID *(Optional)*
:param datetime start_min: Minimum start date *(Optional)*
:param datetime start_max: Maximum start date *(Optional)*
:param int duration_min: Minimum duration (seconds) *(Optional)*
:param int duration_max: Maximum duration (seconds) *(Optional)*
:param str status: Play Status of item *(Optional)*
:param int limit: Results per page *(Optional)*
:param str order_by: Field to order the results *(Optional)*
:param str order_direction: Direction of ordering *(Optional)*
:param int external_station_id: Query on a different (content providing) station *(Optional)*
:return: ItemResults
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.list_items_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.list_items_with_http_info(**kwargs) # noqa: E501
return data
def list_items_with_http_info(self, **kwargs): # noqa: E501
"""Get a list of all the items currently in your station. # noqa: E501
        Get a list of all the items currently in your station. This feature supports pagination and returns a maximum of 50 items per page.  # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.list_items_with_http_info(async=True)
>>> result = thread.get()
:param async bool
:param int page: Current page *(Optional)*
:param int block_id: Search on Block ID *(Optional)* `(Relation)`
:param int broadcast_id: Search on Broadcast ID *(Optional)* `(Relation)`
:param int model_type_id: Search on ModelType ID *(Optional)* `(Relation)`
:param int tag_id: Search on Tag ID *(Optional)* `(Relation)`
:param int campaign_id: Search on Campaign ID *(Optional)* `(Relation)`
:param int contact_id: Search on Contact ID *(Optional)* `(Relation)`
:param int program_draft_id: Search on Program Draft ID *(Optional)*
:param int user_draft_id: Search on User Draft ID *(Optional)*
:param int station_draft_id: Search on Station Draft ID *(Optional)*
:param int program_id: Search on Program ID *(Optional)* `(Relation)`
:param str external_id: Search on External ID *(Optional)*
:param datetime start_min: Minimum start date *(Optional)*
:param datetime start_max: Maximum start date *(Optional)*
:param int duration_min: Minimum duration (seconds) *(Optional)*
:param int duration_max: Maximum duration (seconds) *(Optional)*
:param str status: Play Status of item *(Optional)*
:param int limit: Results per page *(Optional)*
:param str order_by: Field to order the results *(Optional)*
:param str order_direction: Direction of ordering *(Optional)*
:param int external_station_id: Query on a different (content providing) station *(Optional)*
:return: ItemResults
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['page', 'block_id', 'broadcast_id', 'model_type_id', 'tag_id', 'campaign_id', 'contact_id', 'program_draft_id', 'user_draft_id', 'station_draft_id', 'program_id', 'external_id', 'start_min', 'start_max', 'duration_min', 'duration_max', 'status', 'limit', 'order_by', 'order_direction', 'external_station_id'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_items" % key
)
params[key] = val
del params['kwargs']
if 'page' in params and params['page'] < 1: # noqa: E501
raise ValueError("Invalid value for parameter `page` when calling `list_items`, must be a value greater than or equal to `1`") # noqa: E501
if 'limit' in params and params['limit'] > 50: # noqa: E501
raise ValueError("Invalid value for parameter `limit` when calling `list_items`, must be a value less than or equal to `50`") # noqa: E501
if 'limit' in params and params['limit'] < 1: # noqa: E501
raise ValueError("Invalid value for parameter `limit` when calling `list_items`, must be a value greater than or equal to `1`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
if 'page' in params:
query_params.append(('page', params['page'])) # noqa: E501
if 'block_id' in params:
query_params.append(('block_id', params['block_id'])) # noqa: E501
if 'broadcast_id' in params:
query_params.append(('broadcast_id', params['broadcast_id'])) # noqa: E501
if 'model_type_id' in params:
query_params.append(('model_type_id', params['model_type_id'])) # noqa: E501
if 'tag_id' in params:
query_params.append(('tag_id', params['tag_id'])) # noqa: E501
if 'campaign_id' in params:
query_params.append(('campaign_id', params['campaign_id'])) # noqa: E501
if 'contact_id' in params:
query_params.append(('contact_id', params['contact_id'])) # noqa: E501
if 'program_draft_id' in params:
query_params.append(('program_draft_id', params['program_draft_id'])) # noqa: E501
if 'user_draft_id' in params:
query_params.append(('user_draft_id', params['user_draft_id'])) # noqa: E501
if 'station_draft_id' in params:
query_params.append(('station_draft_id', params['station_draft_id'])) # noqa: E501
if 'program_id' in params:
query_params.append(('program_id', params['program_id'])) # noqa: E501
if 'external_id' in params:
query_params.append(('external_id', params['external_id'])) # noqa: E501
if 'start_min' in params:
query_params.append(('start-min', params['start_min'])) # noqa: E501
if 'start_max' in params:
query_params.append(('start-max', params['start_max'])) # noqa: E501
if 'duration_min' in params:
query_params.append(('duration-min', params['duration_min'])) # noqa: E501
if 'duration_max' in params:
query_params.append(('duration-max', params['duration_max'])) # noqa: E501
if 'status' in params:
query_params.append(('status', params['status'])) # noqa: E501
if 'limit' in params:
query_params.append(('limit', params['limit'])) # noqa: E501
if 'order_by' in params:
query_params.append(('order-by', params['order_by'])) # noqa: E501
if 'order_direction' in params:
query_params.append(('order-direction', params['order_direction'])) # noqa: E501
if 'external_station_id' in params:
query_params.append(('_external_station_id', params['external_station_id'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['API Key'] # noqa: E501
return self.api_client.call_api(
'/items', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ItemResults', # noqa: E501
auth_settings=auth_settings,
**{'async': params.get('async')},  # 'async' is a reserved word in Python 3.7+
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
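Every `*_with_http_info` method in this generated client opens with the same guard: capture `locals()`, walk the caller's `kwargs`, and raise `TypeError` for any name not declared in `all_params`. A minimal standalone sketch of that guard (the `validate_kwargs` name and the parameter list are illustrative, not part of the client):

```python
ALL_PARAMS = ['page', 'limit', 'async', '_return_http_data_only']

def validate_kwargs(**kwargs):
    # Mirror of the generated guard; plain kwargs.items() replaces
    # six.iteritems() so the sketch needs no third-party dependency.
    params = {}
    for key, val in kwargs.items():
        if key not in ALL_PARAMS:
            raise TypeError(
                "Got an unexpected keyword argument '%s'" % key)
        params[key] = val
    return params
```

The real methods build `all_params` from the documented query parameters and then append the transport-level options (`async`, `_return_http_data_only`, `_preload_content`, `_request_timeout`).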
def playlist_post_merge(self, **kwargs): # noqa: E501
"""Post a playlist, do not remove previously imported items # noqa: E501
Post a playlist, do not remove previously imported items # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.playlist_post_merge(**{'async': True})
>>> result = thread.get()
:param bool async: execute request asynchronously
:param Data2 data: Data *(Optional)*
:return: InlineResponse202
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.playlist_post_merge_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.playlist_post_merge_with_http_info(**kwargs) # noqa: E501
return data
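The thin public wrappers such as `playlist_post_merge` all share one dispatch shape: force `_return_http_data_only`, then either return the worker thread (`async=True`) or unwrap and return the data synchronously. A hedged sketch of that pattern (`dispatch` is an illustrative name; the string key sidesteps `async` being reserved in Python 3.7+):

```python
def dispatch(call_with_http_info, **kwargs):
    # Mirrors the public wrapper: always ask the transport layer for the
    # data-only response, then branch on the async flag.
    kwargs['_return_http_data_only'] = True
    if kwargs.get('async'):
        # Asynchronous: hand back the request thread unchanged.
        return call_with_http_info(**kwargs)
    # Synchronous: unwrap the single data value.
    (data) = call_with_http_info(**kwargs)
    return data
```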
def playlist_post_merge_with_http_info(self, **kwargs): # noqa: E501
"""Post a playlist, do not remove previously imported items # noqa: E501
Post a playlist, do not remove previously imported items # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.playlist_post_merge_with_http_info(**{'async': True})
>>> result = thread.get()
:param bool async: execute request asynchronously
:param Data2 data: Data *(Optional)*
:return: InlineResponse202
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['data'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method playlist_post_merge" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'data' in params:
body_params = params['data']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['API Key'] # noqa: E501
return self.api_client.call_api(
'/items/playlist/merge', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='InlineResponse202', # noqa: E501
auth_settings=auth_settings,
**{'async': params.get('async')},  # 'async' is a reserved word in Python 3.7+
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def playlist_post_structure(self, **kwargs): # noqa: E501
"""Post a playlist, keep current structure # noqa: E501
Post a playlist, keep current structure # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.playlist_post_structure(**{'async': True})
>>> result = thread.get()
:param bool async: execute request asynchronously
:param Data1 data: Data *(Optional)*
:return: InlineResponse202
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.playlist_post_structure_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.playlist_post_structure_with_http_info(**kwargs) # noqa: E501
return data
def playlist_post_structure_with_http_info(self, **kwargs): # noqa: E501
"""Post a playlist, keep current structure # noqa: E501
Post a playlist, keep current structure # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.playlist_post_structure_with_http_info(**{'async': True})
>>> result = thread.get()
:param bool async: execute request asynchronously
:param Data1 data: Data *(Optional)*
:return: InlineResponse202
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['data'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method playlist_post_structure" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'data' in params:
body_params = params['data']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['API Key'] # noqa: E501
return self.api_client.call_api(
'/items/playlist/structure', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='InlineResponse202', # noqa: E501
auth_settings=auth_settings,
**{'async': params.get('async')},  # 'async' is a reserved word in Python 3.7+
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def playlist_post_timing(self, **kwargs): # noqa: E501
"""Post a playlist # noqa: E501
Post a playlist # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.playlist_post_timing(**{'async': True})
>>> result = thread.get()
:param bool async: execute request asynchronously
:param Data data: Data *(Optional)*
:return: InlineResponse202
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.playlist_post_timing_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.playlist_post_timing_with_http_info(**kwargs) # noqa: E501
return data
def playlist_post_timing_with_http_info(self, **kwargs): # noqa: E501
"""Post a playlist # noqa: E501
Post a playlist # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.playlist_post_timing_with_http_info(**{'async': True})
>>> result = thread.get()
:param bool async: execute request asynchronously
:param Data data: Data *(Optional)*
:return: InlineResponse202
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['data'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method playlist_post_timing" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'data' in params:
body_params = params['data']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['API Key'] # noqa: E501
return self.api_client.call_api(
'/items/playlist/timing', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='InlineResponse202', # noqa: E501
auth_settings=auth_settings,
**{'async': params.get('async')},  # 'async' is a reserved word in Python 3.7+
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def stop_current_item(self, **kwargs): # noqa: E501
"""Stop an Item # noqa: E501
Set a current playing or specific item on played # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.stop_current_item(**{'async': True})
>>> result = thread.get()
:param bool async: execute request asynchronously
:param Data3 data: Data *(Optional)*
:return: Success
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.stop_current_item_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.stop_current_item_with_http_info(**kwargs) # noqa: E501
return data
def stop_current_item_with_http_info(self, **kwargs): # noqa: E501
"""Stop an Item # noqa: E501
Set a current playing or specific item on played # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.stop_current_item_with_http_info(**{'async': True})
>>> result = thread.get()
:param bool async: execute request asynchronously
:param Data3 data: Data *(Optional)*
:return: Success
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['data'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method stop_current_item" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'data' in params:
body_params = params['data']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['API Key'] # noqa: E501
return self.api_client.call_api(
'/items/stopcurrent', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Success', # noqa: E501
auth_settings=auth_settings,
**{'async': params.get('async')},  # 'async' is a reserved word in Python 3.7+
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def update_item_by_id(self, id, **kwargs): # noqa: E501
"""Update extended item details by ID. # noqa: E501
Update item by id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.update_item_by_id(id, **{'async': True})
>>> result = thread.get()
:param bool async: execute request asynchronously
:param int id: ID of Item **(Required)** (required)
:param ItemDataInput data: Data *(Optional)*
:return: Success
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.update_item_by_id_with_http_info(id, **kwargs) # noqa: E501
else:
(data) = self.update_item_by_id_with_http_info(id, **kwargs) # noqa: E501
return data
def update_item_by_id_with_http_info(self, id, **kwargs): # noqa: E501
"""Update extended item details by ID. # noqa: E501
Update item by id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.update_item_by_id_with_http_info(id, **{'async': True})
>>> result = thread.get()
:param bool async: execute request asynchronously
:param int id: ID of Item **(Required)** (required)
:param ItemDataInput data: Data *(Optional)*
:return: Success
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['id', 'data'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method update_item_by_id" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'id' is set
if ('id' not in params or
params['id'] is None):
raise ValueError("Missing the required parameter `id` when calling `update_item_by_id`") # noqa: E501
if 'id' in params and params['id'] < 0: # noqa: E501
raise ValueError("Invalid value for parameter `id` when calling `update_item_by_id`, must be a value greater than or equal to `0`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in params:
path_params['id'] = params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'data' in params:
body_params = params['data']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['API Key'] # noqa: E501
return self.api_client.call_api(
'/items/{id}', 'PATCH',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Success', # noqa: E501
auth_settings=auth_settings,
**{'async': params.get('async')},  # 'async' is a reserved word in Python 3.7+
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
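`update_item_by_id_with_http_info` layers two extra guards on top of the kwargs check: the path parameter `id` must be present, and it must be non-negative. A compact sketch of those guards (the `check_id` helper is illustrative, not part of the client):

```python
def check_id(params):
    # Required-parameter guard, as in the generated method.
    if params.get('id') is None:
        raise ValueError(
            "Missing the required parameter `id` when calling `update_item_by_id`")
    # Range guard: the item ID is a non-negative path parameter.
    if params['id'] < 0:
        raise ValueError(
            "Invalid value for parameter `id`, must be a value greater than or equal to `0`")
    return params['id']
```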
# codepack/service/__init__.py (ihnokim/codepack)
from codepack.service.delivery_service import MemoryDeliveryService, FileDeliveryService, MongoDeliveryService, DeliveryServiceAlias
from codepack.service.storage_service import MemoryStorageService, FileStorageService, MongoStorageService, StorageServiceAlias
from codepack.service.snapshot_service import MemorySnapshotService, FileSnapshotService, MongoSnapshotService, SnapshotServiceAlias
from codepack.service.default_service import DefaultService
# tests/asg/test_asg_actions.py (mvollman/chaostoolkit-aws)
# -*- coding: utf-8 -*-
from unittest.mock import MagicMock, patch

import pytest
from chaoslib.exceptions import FailedActivity

from chaosaws.asg.actions import (
    suspend_processes, resume_processes, terminate_random_instances,
    detach_random_instances, change_subnets, detach_random_volume,
    attach_volume, stop_random_instances)
def test_suspend_process_no_name_or_tag():
with pytest.raises(FailedActivity) as x:
suspend_processes()
assert 'one of the following arguments are required: ' \
'asg_names or tags' in str(x.value)
def test_suspend_process_both_name_and_tag():
with pytest.raises(FailedActivity) as x:
suspend_processes(
asg_names=['AutoScalingGroup-A'],
tags=[{"Key": "TagKey", "Values": ["TagValues"]}])
assert 'only one of the following arguments are allowed: ' \
'asg_names/tags' in str(x.value)
def test_suspend_process_invalid_process():
with pytest.raises(FailedActivity) as x:
suspend_processes(
asg_names=['AutoScalingGroup-A'],
process_names=['Lunch'])
assert "invalid process(es): ['Lunch'] not in" in str(x.value)
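`suspend_processes` validates `process_names` against the fixed set of ASG scaling processes before calling AWS, which is what the `['Lunch']` assertion above exercises. A sketch of that check (assumption: this mirrors the validation in `chaosaws.asg.actions`, which is not shown here; newer processes such as `InstanceRefresh` may also be accepted):

```python
# The scaling processes AWS documents for suspend/resume.
VALID_PROCESSES = [
    'Launch', 'Terminate', 'HealthCheck', 'ReplaceUnhealthy',
    'AZRebalance', 'AlarmNotification', 'ScheduledActions',
    'AddToLoadBalancer']

def validate_processes(process_names):
    # Reject any process name AWS would not recognize.
    invalid = [p for p in process_names if p not in VALID_PROCESSES]
    if invalid:
        raise ValueError(
            'invalid process(es): %s not in %s' % (invalid, VALID_PROCESSES))
```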
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_suspend_process_asg_names(aws_client):
client = MagicMock()
aws_client.return_value = client
asg_names = ['AutoScalingGroup-A']
client.describe_auto_scaling_groups.return_value = {
"AutoScalingGroups": [{
"AutoScalingGroupName": "AutoScalingGroup-A",
"DesiredCapacity": 1,
"Instances": [{
"HealthStatus": "Healthy",
"LifecycleState": "InService"
}],
"SuspendedProcesses": []
}]
}
suspend_processes(asg_names=asg_names)
client.suspend_processes.assert_called_with(
AutoScalingGroupName=asg_names[0])
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_suspend_process_asg_tags(aws_client):
client = MagicMock()
aws_client.return_value = client
asg_names = ['AutoScalingGroup-A']
client.get_paginator.return_value.paginate.return_value = [{
'Tags': [{
'ResourceId': 'AutoScalingGroup-A',
'ResourceType': 'auto-scaling-group',
'Key': 'TargetKey',
'Value': 'TargetValue',
'PropagateAtLaunch': False}]
}]
client.describe_auto_scaling_groups.return_value = {
"AutoScalingGroups": [{
"AutoScalingGroupName": "AutoScalingGroup-A",
"DesiredCapacity": 1,
"Instances": [{
"HealthStatus": "Healthy",
"LifecycleState": "InService"
}],
"SuspendedProcesses": []
}]
}
suspend_processes(tags=[{'Key': 'TargetKey', 'Value': 'TargetValue'}])
client.suspend_processes.assert_called_with(
AutoScalingGroupName=asg_names[0])
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_suspend_process_asg_invalid_names(aws_client):
client = MagicMock()
aws_client.return_value = client
asg_names = ['AutoScalingGroup-A']
client.describe_auto_scaling_groups.return_value = {
"AutoScalingGroups": []}
with pytest.raises(FailedActivity) as x:
suspend_processes(asg_names=asg_names, process_names=["Launch"])
assert 'Unable to locate ASG(s): %s' % asg_names in str(x.value)
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_suspend_process_asg_invalid_name(aws_client):
client = MagicMock()
aws_client.return_value = client
asg_names = ['AutoScalingGroup-A', 'AutoScalingGroup-B']
client.describe_auto_scaling_groups.return_value = {
"AutoScalingGroups": [{
"AutoScalingGroupName": "AutoScalingGroup-A",
"DesiredCapacity": 1,
"Instances": [{
"HealthStatus": "Healthy",
"LifecycleState": "InService"
}],
"SuspendedProcesses": []
}]
}
with pytest.raises(FailedActivity) as x:
suspend_processes(asg_names=asg_names, process_names=["Launch"])
assert 'No ASG(s) found with name(s): %s' % ([asg_names[1]]) in str(x.value)
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_suspend_process_asg_invalid_tags(aws_client):
client = MagicMock()
aws_client.return_value = client
tags = [{'Key': 'TargetKey', 'Value': 'TargetValue'}]
client.get_paginator.return_value.paginate.return_value = [{'Tags': []}]
with pytest.raises(FailedActivity) as x:
suspend_processes(tags=tags)
assert 'No ASG(s) found with matching tag(s): %s.' % tags in str(x.value)
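The tag-based tests stub `get_paginator(...).paginate(...)` and expect the action to resolve ASG names from the returned `Tags` pages, failing when nothing matches. A rough sketch of that resolution step (the helper name is illustrative; the real logic lives in `chaosaws.asg.actions`):

```python
def match_asgs_by_tag(pages, key, value):
    # Walk the describe_tags pages and collect the ASG names whose
    # tag key/value pair matches the filter.
    names = set()
    for page in pages:
        for tag in page.get('Tags', []):
            if tag.get('Key') == key and tag.get('Value') == value:
                names.add(tag['ResourceId'])
    return sorted(names)
```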
def test_resume_process_no_name_or_tag():
with pytest.raises(FailedActivity) as x:
resume_processes()
assert 'one of the following arguments are required: ' \
'asg_names or tags' in str(x.value)
def test_resume_process_both_name_and_tag():
with pytest.raises(FailedActivity) as x:
resume_processes(
asg_names=['AutoScalingGroup-A'],
tags=[{"Key": "TagKey", "Values": ["TagValues"]}])
assert 'only one of the following arguments are allowed: ' \
'asg_names/tags' in str(x.value)
def test_resume_process_invalid_process():
with pytest.raises(FailedActivity) as x:
resume_processes(
asg_names=['AutoScalingGroup-A'],
process_names=['Lunch'])
assert "invalid process(es): ['Lunch'] not in" in str(x.value)
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_resume_process_asg_names(aws_client):
client = MagicMock()
aws_client.return_value = client
asg_names = ['AutoScalingGroup-A']
client.describe_auto_scaling_groups.return_value = {
"AutoScalingGroups": [{
"AutoScalingGroupName": "AutoScalingGroup-A",
"DesiredCapacity": 1,
"Instances": [{
"HealthStatus": "Healthy",
"LifecycleState": "InService"
}],
"SuspendedProcesses": [{"ProcessName": "Launch"}]
}]
}
resume_processes(asg_names=asg_names, process_names=["Launch"])
client.resume_processes.assert_called_with(
AutoScalingGroupName=asg_names[0], ScalingProcesses=["Launch"])
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_resume_process_asg_tags(aws_client):
client = MagicMock()
aws_client.return_value = client
asg_names = ['AutoScalingGroup-A']
tags = [{'Key': 'TargetKey', 'Value': 'TargetValue'}]
client.get_paginator.return_value.paginate.return_value = [{
'Tags': [{
'ResourceId': 'AutoScalingGroup-A',
'ResourceType': 'auto-scaling-group',
'Key': 'TargetKey',
'Value': 'TargetValue',
'PropagateAtLaunch': False}]
}]
client.describe_auto_scaling_groups.return_value = {
"AutoScalingGroups": [{
"AutoScalingGroupName": "AutoScalingGroup-A",
"DesiredCapacity": 1,
"Instances": [{
"HealthStatus": "Healthy",
"LifecycleState": "InService"
}],
"SuspendedProcesses": [{"ProcessName": "Launch"}]
}]
}
resume_processes(tags=tags, process_names=["Launch"])
client.resume_processes.assert_called_with(
AutoScalingGroupName=asg_names[0], ScalingProcesses=["Launch"])
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_resume_process_asg_invalid_tags(aws_client):
client = MagicMock()
aws_client.return_value = client
tags = [{'Key': 'TargetKey', 'Value': 'TargetValue'}]
client.get_paginator.return_value.paginate.return_value = [{'Tags': []}]
with pytest.raises(FailedActivity) as x:
resume_processes(tags=tags)
assert 'No ASG(s) found with matching tag(s): %s.' % tags in str(x.value)
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_resume_process_asg_invalid_names(aws_client):
client = MagicMock()
aws_client.return_value = client
asg_names = ['AutoScalingGroup-A']
client.describe_auto_scaling_groups.return_value = {
"AutoScalingGroups": []}
with pytest.raises(FailedActivity) as x:
resume_processes(asg_names=asg_names, process_names=["Launch"])
assert 'Unable to locate ASG(s): ' in str(x.value)
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_resume_process_asg_invalid_name(aws_client):
client = MagicMock()
aws_client.return_value = client
asg_names = ['AutoScalingGroup-A', 'AutoScalingGroup-B']
client.describe_auto_scaling_groups.return_value = {
"AutoScalingGroups": [{
"AutoScalingGroupName": "AutoScalingGroup-A",
"DesiredCapacity": 1,
"Instances": [{
"HealthStatus": "Healthy",
"LifecycleState": "InService"
}],
"SuspendedProcesses": [{"ProcessName": "Launch"}]
}]
}
with pytest.raises(FailedActivity) as x:
resume_processes(asg_names=asg_names, process_names=["Launch"])
assert 'No ASG(s) found with name(s): %s' % ([asg_names[1]]) in str(x.value)
def test_terminate_instances_no_asgs():
with pytest.raises(FailedActivity) as x:
terminate_random_instances(instance_count=10)
assert 'one of the following arguments are required: ' \
'asg_names or tags' in str(x.value)
def test_terminate_instances_no_numbers():
asg_names = ['AutoScalingGroup-A', 'AutoScalingGroup-B']
with pytest.raises(FailedActivity) as x:
terminate_random_instances(asg_names)
assert 'Must specify one of "instance_count", ' \
'"instance_percent", "az"' in str(x.value)
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_terminate_instances_count_pass(aws_client):
client = MagicMock()
aws_client.return_value = client
asg_names = ['AutoScalingGroup-A']
client.describe_auto_scaling_groups.return_value = {
"AutoScalingGroups": [
{
"AutoScalingGroupName": "AutoScalingGroup-A",
"Instances": [
{
"InstanceId": "i-00000000000000001",
"AvailabilityZone": "us-east-1a",
"LifecycleState": "InService"
},
{
"InstanceId": "i-00000000000000002",
"AvailabilityZone": "us-east-1b",
"LifecycleState": "InService"
},
{
"InstanceId": "i-00000000000000003",
"AvailabilityZone": "us-east-1c",
"LifecycleState": "InService"
},
]
}
]
}
terminate_random_instances(asg_names=asg_names, instance_count=2)
instance_calls = [
['i-00000000000000001', 'i-00000000000000002'],
['i-00000000000000001', 'i-00000000000000003'],
['i-00000000000000002', 'i-00000000000000003']
]
ex = None
for i in instance_calls:
try:
client.terminate_instances.assert_called_with(
InstanceIds=sorted(i))
return True
except AssertionError as e:
ex = e.args
raise AssertionError(ex)
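Because `terminate_random_instances` picks instances at random, the test cannot assert a single fixed call; it loops over every acceptable pair and passes if any matches. The same set of acceptable calls can be enumerated directly with `itertools.combinations` (a sketch, not part of the test suite):

```python
import itertools

def expected_termination_sets(instance_ids, count):
    # Every count-sized subset of the ASG's instances is a legal outcome
    # of a random termination, so enumerate them all for the assertion.
    return [sorted(c)
            for c in itertools.combinations(sorted(instance_ids), count)]
```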
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_terminate_instances_percent_pass(aws_client):
client = MagicMock()
aws_client.return_value = client
asg_names = ['AutoScalingGroup-A']
client.describe_auto_scaling_groups.return_value = {
"AutoScalingGroups": [
{
"AutoScalingGroupName": "AutoScalingGroup-A",
"Instances": [
{
"InstanceId": "i-00000000000000001",
"AvailabilityZone": "us-east-1a",
"LifecycleState": "InService"
},
{
"InstanceId": "i-00000000000000002",
"AvailabilityZone": "us-east-1b",
"LifecycleState": "InService"
},
{
"InstanceId": "i-00000000000000003",
"AvailabilityZone": "us-east-1c",
"LifecycleState": "InService"
},
]
}
]
}
terminate_random_instances(asg_names=asg_names, instance_percent=50)
instance_calls = [
'i-00000000000000001', 'i-00000000000000002', 'i-00000000000000003']
ex = None
for i in instance_calls:
try:
client.terminate_instances.assert_called_with(
InstanceIds=[i])
return True
except AssertionError as e:
ex = e.args
raise AssertionError(ex)
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_terminate_instances_valid_az(aws_client):
client = MagicMock()
aws_client.return_value = client
asg_names = ['AutoScalingGroup-A']
client.describe_auto_scaling_groups.return_value = {
"AutoScalingGroups": [
{
"AutoScalingGroupName": "AutoScalingGroup-A",
"Instances": [
{
"InstanceId": "i-00000000000000001",
"AvailabilityZone": "us-east-1a",
"LifecycleState": "InService"
},
{
"InstanceId": "i-00000000000000002",
"AvailabilityZone": "us-east-1b",
"LifecycleState": "InService"
},
{
"InstanceId": "i-00000000000000003",
"AvailabilityZone": "us-east-1c",
"LifecycleState": "InService"
},
]
}
]
}
terminate_random_instances(asg_names=asg_names, az='us-east-1a')
client.terminate_instances.assert_called_with(
InstanceIds=['i-00000000000000001'])
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_terminate_instances_invalid_az(aws_client):
client = MagicMock()
aws_client.return_value = client
asg_names = ['AutoScalingGroup-A']
client.describe_auto_scaling_groups.return_value = {
"AutoScalingGroups": [
{
"AutoScalingGroupName": "AutoScalingGroup-A",
"Instances": [
{
"InstanceId": "i-00000000000000001",
"AvailabilityZone": "us-east-1a",
"LifecycleState": "InService"
},
{
"InstanceId": "i-00000000000000002",
"AvailabilityZone": "us-east-1b",
"LifecycleState": "InService"
},
{
"InstanceId": "i-00000000000000003",
"AvailabilityZone": "us-east-1c",
"LifecycleState": "InService"
},
]
}
]
}
with pytest.raises(FailedActivity) as x:
terminate_random_instances(asg_names=asg_names, az='us-east-1d')
assert 'No instances found in Availability Zone: us-east-1d' in str(x.value)
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_terminate_instances_invalid_count(aws_client):
client = MagicMock()
aws_client.return_value = client
asg_names = ['AutoScalingGroup-A']
client.describe_auto_scaling_groups.return_value = {
"AutoScalingGroups": [
{
"AutoScalingGroupName": "AutoScalingGroup-A",
"Instances": [
{
"InstanceId": "i-00000000000000001",
"AvailabilityZone": "us-east-1a",
"LifecycleState": "InService"
}
]
}
]
}
with pytest.raises(FailedActivity) as x:
terminate_random_instances(asg_names=asg_names, instance_count=2)
assert 'Not enough healthy instances in {} to satisfy ' \
'termination count {} ({})'.format(asg_names[0], 2, 1) in str(x.value)
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_terminate_instances_tags(aws_client):
client = MagicMock()
aws_client.return_value = client
tags = [{'Key': 'TargetKey', 'Value': 'TargetValue'}]
client.get_paginator.return_value.paginate.return_value = [{
'Tags': [{
'ResourceId': 'AutoScalingGroup-A',
'ResourceType': 'auto-scaling-group',
'Key': 'TargetKey',
'Value': 'TargetValue',
'PropagateAtLaunch': False}]
}]
client.describe_auto_scaling_groups.return_value = {
"AutoScalingGroups": [{
"AutoScalingGroupName": "AutoScalingGroup-A",
"Instances": [
{
"InstanceId": "i-00000000000000001",
"AvailabilityZone": "us-east-1a",
"LifecycleState": "InService"
},
{
"InstanceId": "i-00000000000000002",
"AvailabilityZone": "us-east-1b",
"LifecycleState": "InService"
},
{
"InstanceId": "i-00000000000000003",
"AvailabilityZone": "us-east-1c",
"LifecycleState": "InService"
},
]
}]
}
terminate_random_instances(tags=tags, instance_count=2)
instance_calls = [
['i-00000000000000001', 'i-00000000000000002'],
['i-00000000000000001', 'i-00000000000000003'],
['i-00000000000000002', 'i-00000000000000003']
]
ex = None
for i in instance_calls:
try:
client.terminate_instances.assert_called_with(
InstanceIds=sorted(i))
            return  # match found; pytest test functions should not return a value
except AssertionError as e:
ex = e.args
raise AssertionError(ex)
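The try/except loop above checks that `terminate_instances` was called with any one of several acceptable ID sets. A small helper (hypothetical, not part of chaosaws) can express that repeated pattern once:

```python
from unittest.mock import MagicMock


def assert_called_with_any(mock_fn, candidate_kwargs):
    """Pass if mock_fn was last called with any one of the candidate kwargs."""
    failures = []
    for kwargs in candidate_kwargs:
        try:
            mock_fn.assert_called_with(**kwargs)
            return  # one candidate matched; nothing more to check
        except AssertionError as err:
            failures.append(err.args)
    raise AssertionError(failures)


# Demo: the mock was called with one of the expected instance pairs.
client = MagicMock()
client.terminate_instances(InstanceIds=['i-01', 'i-02'])
assert_called_with_any(
    client.terminate_instances,
    [{'InstanceIds': ['i-01', 'i-02']}, {'InstanceIds': ['i-02', 'i-03']}])
```

Each test above could then collapse its try/except loop into a single `assert_called_with_any(...)` call.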
def test_detach_instance_no_name_or_tag():
with pytest.raises(FailedActivity) as x:
detach_random_instances()
assert 'one of the following arguments are required: ' \
'asg_names or tags' in str(x.value)
def test_detach_instance_both_name_and_tag_one():
with pytest.raises(FailedActivity) as x:
detach_random_instances(
asg_names=['AutoScalingGroup-A'],
tags=[{"Key": "TagKey", "Values": ["TagValues"]}])
assert 'only one of the following arguments are allowed: ' \
'asg_names/tags' in str(x.value)
def test_detach_instance_no_count():
with pytest.raises(FailedActivity) as x:
detach_random_instances(
asg_names=['AutoScalingGroup-A'])
assert 'You must specify either "instance_count" or ' \
'"instance_percent"' in str(x.value)
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_detach_instances_invalid_count(aws_client):
client = MagicMock()
aws_client.return_value = client
asg_names = ['AutoScalingGroup-A']
client.describe_auto_scaling_groups.return_value = {
"AutoScalingGroups": [
{
"AutoScalingGroupName": "AutoScalingGroup-A",
"Instances": [
{
"InstanceId": "i-00000000000000001",
"AvailabilityZone": "us-east-1a",
"LifecycleState": "InService"
},
{
"InstanceId": "i-00000000000000002",
"AvailabilityZone": "us-east-1b",
"LifecycleState": "InService"
}
]
}
]
}
with pytest.raises(FailedActivity) as x:
detach_random_instances(asg_names, instance_count=3)
assert 'You are attempting to detach more instances than exist on ' \
'asg %s' % asg_names[0] in str(x.value)
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_detach_instances_count(aws_client):
client = MagicMock()
aws_client.return_value = client
asg_names = ['AutoScalingGroup-A']
client.describe_auto_scaling_groups.return_value = {
"AutoScalingGroups": [
{
"AutoScalingGroupName": "AutoScalingGroup-A",
"Instances": [
{
"InstanceId": "i-00000000000000001",
"AvailabilityZone": "us-east-1a",
"LifecycleState": "InService"
},
{
"InstanceId": "i-00000000000000002",
"AvailabilityZone": "us-east-1b",
"LifecycleState": "InService"
},
{
"InstanceId": "i-00000000000000003",
"AvailabilityZone": "us-east-1c",
"LifecycleState": "InService"
},
]
}
]
}
detach_random_instances(asg_names, instance_count=2)
instance_calls = [
['i-00000000000000001', 'i-00000000000000002'],
['i-00000000000000001', 'i-00000000000000003'],
['i-00000000000000002', 'i-00000000000000003']]
ex = None
for i in instance_calls:
try:
client.detach_instances.assert_called_with(
AutoScalingGroupName=asg_names[0],
InstanceIds=sorted(i),
ShouldDecrementDesiredCapacity=False)
            return  # match found; pytest test functions should not return a value
except AssertionError as e:
ex = str(e.args)
raise AssertionError(ex)
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_detach_instances_percent(aws_client):
client = MagicMock()
aws_client.return_value = client
asg_names = ['AutoScalingGroup-A']
client.describe_auto_scaling_groups.return_value = {
"AutoScalingGroups": [
{
"AutoScalingGroupName": "AutoScalingGroup-A",
"Instances": [
{
"InstanceId": "i-00000000000000001",
"AvailabilityZone": "us-east-1a",
"LifecycleState": "InService"
},
{
"InstanceId": "i-00000000000000002",
"AvailabilityZone": "us-east-1b",
"LifecycleState": "InService"
},
{
"InstanceId": "i-00000000000000003",
"AvailabilityZone": "us-east-1c",
"LifecycleState": "InService"
},
]
}
]
}
detach_random_instances(asg_names, instance_percent=67)
instance_calls = [
['i-00000000000000001', 'i-00000000000000002'],
['i-00000000000000001', 'i-00000000000000003'],
['i-00000000000000002', 'i-00000000000000003']]
ex = None
for i in instance_calls:
try:
client.detach_instances.assert_called_with(
AutoScalingGroupName=asg_names[0],
InstanceIds=sorted(i),
ShouldDecrementDesiredCapacity=False)
            return  # match found; pytest test functions should not return a value
except AssertionError as e:
ex = str(e.args)
raise AssertionError(ex)
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_detach_instances_tags(aws_client):
client = MagicMock()
aws_client.return_value = client
tags = [{'Key': 'TargetKey', 'Value': 'TargetValue'}]
client.get_paginator.return_value.paginate.return_value = [{
'Tags': [{
'ResourceId': 'AutoScalingGroup-A',
'ResourceType': 'auto-scaling-group',
'Key': 'TargetKey',
'Value': 'TargetValue',
'PropagateAtLaunch': False}]
}]
client.describe_auto_scaling_groups.return_value = {
"AutoScalingGroups": [{
"AutoScalingGroupName": "AutoScalingGroup-A",
"Instances": [
{
"InstanceId": "i-00000000000000001",
"AvailabilityZone": "us-east-1a",
"LifecycleState": "InService"
},
{
"InstanceId": "i-00000000000000002",
"AvailabilityZone": "us-east-1b",
"LifecycleState": "InService"
},
{
"InstanceId": "i-00000000000000003",
"AvailabilityZone": "us-east-1c",
"LifecycleState": "InService"
},
]
}]
}
detach_random_instances(tags=tags, instance_count=2)
instance_calls = [
['i-00000000000000001', 'i-00000000000000002'],
['i-00000000000000001', 'i-00000000000000003'],
['i-00000000000000002', 'i-00000000000000003']
]
ex = None
for i in instance_calls:
try:
client.detach_instances.assert_called_with(
AutoScalingGroupName='AutoScalingGroup-A',
InstanceIds=sorted(i),
ShouldDecrementDesiredCapacity=False)
            return  # match found; pytest test functions should not return a value
except AssertionError as e:
ex = e.args
raise AssertionError(ex)
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_change_subnets_valid_names(aws_client):
client = MagicMock()
aws_client.return_value = client
asg_names = ['AutoScalingGroup-A']
params = dict(
asg_names=asg_names,
subnets=['subnet-123456789', 'subnet-23456789a'])
client.describe_auto_scaling_groups.return_value = {
"AutoScalingGroups": [{
"AutoScalingGroupName": "AutoScalingGroup-A",
"VPCZoneIdentifier": "subnet-012345678,subnet-123456789"}]}
change_subnets(**params)
client.update_auto_scaling_group.assert_called_with(
AutoScalingGroupName=asg_names[0],
VPCZoneIdentifier="subnet-123456789,subnet-23456789a")
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_change_subnets_valid_tags(aws_client):
client = MagicMock()
aws_client.return_value = client
tags = [{'Key': 'TargetKey', 'Value': 'TargetValue'}]
params = dict(
tags=tags,
subnets=['subnet-123456789', 'subnet-23456789a'])
client.get_paginator.return_value.paginate.return_value = [{
'Tags': [{
'ResourceId': 'AutoScalingGroup-A',
'ResourceType': 'auto-scaling-group',
'Key': 'TargetKey',
'Value': 'TargetValue'}]}]
client.describe_auto_scaling_groups.return_value = {
"AutoScalingGroups": [{
"AutoScalingGroupName": "AutoScalingGroup-A",
"VPCZoneIdentifier": "subnet-012345678,subnet-123456789"}]}
change_subnets(**params)
client.update_auto_scaling_group.assert_called_with(
AutoScalingGroupName="AutoScalingGroup-A",
VPCZoneIdentifier="subnet-123456789,subnet-23456789a")
def test_change_subnets_no_subnet():
asg_names = ['AutoScalingGroup-A']
with pytest.raises(TypeError) as x:
change_subnets(asg_names=asg_names)
assert "missing 1 required positional argument: 'subnets'" in str(x.value)
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_detach_random_volume_asg_name(aws_client):
client = MagicMock()
aws_client.return_value = client
asg_names = ['AutoScalingGroup-A']
client.describe_auto_scaling_groups.return_value = {
"AutoScalingGroups": [{
"AutoScalingGroupName": "AutoScalingGroup-A",
"Instances": [
{"InstanceId": "i-00000000000000001"}]}]}
client.describe_instances.return_value = {
'Reservations': [{
'Instances': [{
'InstanceId': 'i-00000000000000001',
'BlockDeviceMappings': [
{
'DeviceName': '/dev/xvda',
'Ebs': {'VolumeId': 'vol-00000001'}
},
{
'DeviceName': '/dev/sdc',
'Ebs': {'VolumeId': 'vol-00000002'}
}]}]}]}
client.detach_volume.return_value = {
'Device': '/dev/sdc',
'InstanceId': 'i-00000000000000001',
'State': 'detaching',
'VolumeId': 'vol-00000002'}
results = detach_random_volume(asg_names=asg_names)
client.describe_auto_scaling_groups.assert_called_with(
AutoScalingGroupNames=asg_names)
client.describe_instances.assert_called_with(
InstanceIds=['i-00000000000000001'])
client.detach_volume.assert_called_with(
Device='/dev/sdc',
Force=True,
InstanceId='i-00000000000000001',
VolumeId='vol-00000002')
assert results[0]['Device'] == '/dev/sdc'
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_detach_random_volume_asg_tags(aws_client):
client = MagicMock()
aws_client.return_value = client
asg_names = ['AutoScalingGroup-A']
tags = [{'Key': 'TargetKey', 'Value': 'TargetValue'}]
client.get_paginator.return_value.paginate.return_value = [{
'Tags': [{
'ResourceId': 'AutoScalingGroup-A',
'ResourceType': 'auto-scaling-group',
'Key': 'TargetKey',
'Value': 'TargetValue',
'PropagateAtLaunch': False}]}]
client.describe_auto_scaling_groups.return_value = {
"AutoScalingGroups": [{
"AutoScalingGroupName": "AutoScalingGroup-A",
"Instances": [
{"InstanceId": "i-00000000000000001"}]}]}
client.describe_instances.return_value = {
'Reservations': [{
'Instances': [{
'InstanceId': 'i-00000000000000001',
'BlockDeviceMappings': [
{
'DeviceName': '/dev/xvda',
'Ebs': {'VolumeId': 'vol-00000001'}
},
{
'DeviceName': '/dev/sdb',
'Ebs': {'VolumeId': 'vol-00000002'}
}]}]}]}
client.detach_volume.return_value = {
'Device': '/dev/sdb',
'InstanceId': 'i-00000000000000001',
'State': 'detaching',
'VolumeId': 'vol-00000002'}
results = detach_random_volume(tags=tags)
client.describe_auto_scaling_groups.assert_called_with(
AutoScalingGroupNames=asg_names)
client.describe_instances.assert_called_with(
InstanceIds=['i-00000000000000001'])
client.detach_volume.assert_called_with(
Device='/dev/sdb',
Force=True,
InstanceId='i-00000000000000001',
VolumeId='vol-00000002')
assert results[0]['Device'] == '/dev/sdb'
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_detach_random_volume_asg_invalid_name(aws_client):
client = MagicMock()
aws_client.return_value = client
asg_names = ['AutoScalingGroup-A']
client.describe_auto_scaling_groups.return_value = {
"AutoScalingGroups": []}
with pytest.raises(FailedActivity) as x:
detach_random_volume(asg_names=asg_names)
assert "Unable to locate ASG(s): %s" % asg_names in str(x.value)
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_detach_random_volume_asg_invalid_tags(aws_client):
client = MagicMock()
aws_client.return_value = client
tags = [{'Key': 'TargetKey', 'Value': 'TargetValue'}]
client.describe_instances.return_value = {'Reservations': []}
with pytest.raises(FailedActivity) as x:
detach_random_volume(tags=tags)
assert "No ASG(s) found with matching tag(s): %s" % tags in str(x.value)
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_attach_volume_asg_name(aws_client):
client = MagicMock()
aws_client.return_value = client
asg_names = ['AutoScalingGroup-A']
client.describe_auto_scaling_groups.return_value = {
"AutoScalingGroups": [{
"AutoScalingGroupName": asg_names[0],
"Instances": [
{"InstanceId": "i-00000000000000001"}]}]}
client.describe_volumes.return_value = {
'Volumes': [
{
'VolumeId': 'vol-00000001',
'Tags': [{
'Key': 'ChaosToolkitDetached',
'Value': 'DeviceName=/dev/sdc;InstanceId=%s;ASG=%s' % (
'i-987654321fabcde', asg_names[0])}]
},
{
'VolumeId': 'vol-00000002',
'Tags': [{
'Key': 'ChaosToolkitDetached',
'Value': 'DeviceName=/dev/sdb;InstanceId='
'i-987654321fefghi'
}]}]}
client.attach_volume.return_value = {
'DeviceName': '/dev/sdc',
'InstanceId': 'i-987654321fabcde',
'State': 'attaching',
'VolumeId': 'vol-00000001'}
results = attach_volume(asg_names=asg_names)
client.describe_auto_scaling_groups.assert_called_with(
AutoScalingGroupNames=asg_names)
client.describe_volumes.assert_called_with(
Filters=[{'Name': 'tag-key', 'Values': ['ChaosToolkitDetached']}])
client.attach_volume.assert_called_with(
Device='/dev/sdc',
InstanceId='i-987654321fabcde',
VolumeId='vol-00000001')
assert results[0]['DeviceName'] == '/dev/sdc'
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_attach_volume_asg_tags(aws_client):
client = MagicMock()
aws_client.return_value = client
asg_names = ['AutoScalingGroup-A']
tags = [{'Key': 'TargetKey', 'Value': 'TargetValue'}]
client.get_paginator.return_value.paginate.return_value = [{
'Tags': [{
'ResourceId': asg_names[0],
'ResourceType': 'auto-scaling-group',
'Key': 'TargetKey',
'Value': 'TargetValue',
'PropagateAtLaunch': False}]}]
client.describe_auto_scaling_groups.return_value = {
"AutoScalingGroups": [{
"AutoScalingGroupName": asg_names[0],
"Instances": [
{"InstanceId": "i-00000000000000001"}]}]}
client.describe_volumes.return_value = {
'Volumes': [
{
'VolumeId': 'vol-00000001',
'Tags': [{
'Key': 'ChaosToolkitDetached',
'Value': 'DeviceName=/dev/sdb;InstanceId=%s;ASG=%s' % (
'i-00000000000000001', asg_names[0])}]
},
{
'VolumeId': 'vol-00000002',
'Tags': [{
'Key': 'ChaosToolkitDetached',
'Value': 'DeviceName=/dev/sdb;InstanceId='
'i-987654321fghij'
}]}]}
client.attach_volume.return_value = {
'DeviceName': '/dev/sdb',
'InstanceId': 'i-00000000000000001',
'State': 'attaching',
'VolumeId': 'vol-00000001'}
results = attach_volume(tags=tags)
client.describe_auto_scaling_groups.assert_called_with(
AutoScalingGroupNames=asg_names)
client.get_paginator.return_value.paginate.assert_called_with(
Filters=[
{'Name': 'key', 'Values': ['TargetKey']},
{'Name': 'value', 'Values': ['TargetValue']}])
client.describe_volumes.assert_called_with(
Filters=[{'Name': 'tag-key', 'Values': ['ChaosToolkitDetached']}])
client.attach_volume.assert_called_with(
Device='/dev/sdb',
InstanceId='i-00000000000000001',
VolumeId='vol-00000001')
assert results[0]['DeviceName'] == '/dev/sdb'
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_asg_stop_random_instance_name(aws_client):
client = MagicMock()
aws_client.return_value = client
asg_names = ['AutoScalingGroup-A']
client.describe_auto_scaling_groups.return_value = {
"AutoScalingGroups": [
{
"AutoScalingGroupName": "AutoScalingGroup-A",
"Instances": [
{
"InstanceId": "i-00000000000000001",
"AvailabilityZone": "us-east-1a",
"LifecycleState": "InService"
},
{
"InstanceId": "i-00000000000000002",
"AvailabilityZone": "us-east-1b",
"LifecycleState": "InService"
},
{
"InstanceId": "i-00000000000000003",
"AvailabilityZone": "us-east-1c",
"LifecycleState": "InService"
},
]
}
]
}
stop_random_instances(asg_names=asg_names, instance_percent=50)
instance_calls = [
'i-00000000000000001', 'i-00000000000000002', 'i-00000000000000003']
ex = None
for i in instance_calls:
try:
client.stop_instances.assert_called_with(
Force=False, InstanceIds=[i])
            return  # match found; pytest test functions should not return a value
except AssertionError as e:
ex = e.args
raise AssertionError(ex)
@patch('chaosaws.asg.actions.aws_client', autospec=True)
def test_asg_stop_random_instance_tags(aws_client):
client = MagicMock()
aws_client.return_value = client
tags = [{'Key': 'TargetKey', 'Value': 'TargetValue'}]
client.get_paginator.return_value.paginate.return_value = [{
'Tags': [{
'ResourceId': 'AutoScalingGroup-A',
'ResourceType': 'auto-scaling-group',
'Key': 'TargetKey',
'Value': 'TargetValue',
'PropagateAtLaunch': False}]}]
client.describe_auto_scaling_groups.return_value = {
"AutoScalingGroups": [{
"AutoScalingGroupName": "AutoScalingGroup-A",
"Instances": [
{
"InstanceId": "i-00000000000000001",
"AvailabilityZone": "us-east-1a",
"LifecycleState": "InService"
},
{
"InstanceId": "i-00000000000000002",
"AvailabilityZone": "us-east-1b",
"LifecycleState": "InService"
},
{
"InstanceId": "i-00000000000000003",
"AvailabilityZone": "us-east-1c",
"LifecycleState": "InService"
}]}]}
stop_random_instances(tags=tags, instance_count=2)
instance_calls = [
['i-00000000000000001', 'i-00000000000000002'],
['i-00000000000000001', 'i-00000000000000003'],
['i-00000000000000002', 'i-00000000000000003']]
ex = None
for i in instance_calls:
try:
client.stop_instances.assert_called_with(
Force=False, InstanceIds=sorted(i))
            return  # match found; pytest test functions should not return a value
except AssertionError as e:
ex = e.args
raise AssertionError(ex)
# File: tests/test_visitors/test_tokenize/test_comments/test_shebang.py
# Repo: cdhiraj40/wemake-python-styleguide (MIT)
import pytest
from wemake_python_styleguide.violations.best_practices import ShebangViolation
from wemake_python_styleguide.visitors.tokenize import comments
template_empty = ''
template_newlines = '\n\n'
template_regular = '{0}'
template_with_leading_comment = """{0}
# some other
"""
template_regular_comment = 'x = 1{0}'
@pytest.mark.parametrize('template', [
template_regular,
template_with_leading_comment,
])
@pytest.mark.parametrize(('code', 'executable'), [
('x = 1', False),
('#!/bin/python', True),
])
def test_correct_shebang_executable1(
make_file,
assert_errors,
parse_file_tokens,
default_options,
template,
code,
executable,
):
"""Testing cases when no errors should be reported."""
path_to_file = make_file('test_file.py', template.format(code), executable)
file_tokens = parse_file_tokens(path_to_file)
visitor = comments.ShebangVisitor(
default_options,
filename=path_to_file,
file_tokens=file_tokens,
)
visitor.run()
assert_errors(visitor, [])
@pytest.mark.parametrize('template', [
template_regular_comment,
template_empty,
])
@pytest.mark.parametrize(('code', 'executable'), [
('#!/bin/some', False),
('#!/bin/python', False),
('# any text', False),
(' # any text with padding', False),
])
def test_correct_shebang_executable2(
make_file,
assert_errors,
parse_file_tokens,
default_options,
template,
code,
executable,
):
"""Testing cases when no errors should be reported."""
path_to_file = make_file('test_file.py', template.format(code), executable)
file_tokens = parse_file_tokens(path_to_file)
visitor = comments.ShebangVisitor(
default_options,
filename=path_to_file,
file_tokens=file_tokens,
)
visitor.run()
assert_errors(visitor, [])
@pytest.mark.parametrize('template', [
template_regular,
template_with_leading_comment,
template_regular_comment,
template_empty,
])
@pytest.mark.parametrize(('code', 'executable'), [
('#!/bin/python', False),
('#!/bin/python', True),
('# any text', False),
('# any text', True),
])
def test_shebang_on_windows(
make_file,
monkeypatch,
assert_errors,
parse_file_tokens,
default_options,
template,
code,
executable,
):
"""Testing cases when no errors should be reported."""
monkeypatch.setattr(comments, 'is_windows', lambda: True)
path_to_file = make_file('test_file.py', template.format(code), executable)
file_tokens = parse_file_tokens(path_to_file)
visitor = comments.ShebangVisitor(
default_options,
filename=path_to_file,
file_tokens=file_tokens,
)
visitor.run()
assert_errors(visitor, [])
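The `monkeypatch.setattr` call above swaps the module-level `is_windows` predicate for the duration of one test. The same stubbing idea, sketched with the standard library's `unittest.mock` and a stand-in namespace (illustrative names, not the real visitor module):

```python
from types import SimpleNamespace
from unittest.mock import patch

# Stand-in for a module exposing a platform predicate (hypothetical).
comments = SimpleNamespace(is_windows=lambda: False)


def shebang_is_checked():
    # Mirror the behaviour exercised above: shebang checks are skipped on Windows.
    return not comments.is_windows()


with patch.object(comments, 'is_windows', lambda: True):
    assert shebang_is_checked() is False  # patched: "Windows", check skipped
assert shebang_is_checked() is True  # patch reverted after the with block
```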
@pytest.mark.parametrize('template', [
template_regular,
template_with_leading_comment,
template_regular_comment,
template_empty,
])
@pytest.mark.parametrize(('code', 'executable'), [
('#!/bin/python', False),
('#!/bin/python', True),
('# any text', False),
('# any text', True),
])
def test_shebang_with_stdin(
make_file,
monkeypatch,
assert_errors,
parse_file_tokens,
default_options,
template,
code,
executable,
):
"""Testing cases when no errors should be reported."""
path_to_file = make_file('test_file.py', template.format(code), executable)
file_tokens = parse_file_tokens(path_to_file)
visitor = comments.ShebangVisitor(
default_options,
filename='stdin',
file_tokens=file_tokens,
)
visitor.run()
assert_errors(visitor, [])
@pytest.mark.parametrize('template', [
template_regular,
template_with_leading_comment,
])
@pytest.mark.parametrize(('code', 'executable'), [
('#!/bin/python', False),
('# regular comment', True),
])
def test_wrong_shebang_executable(
make_file,
assert_errors,
parse_file_tokens,
default_options,
template,
code,
executable,
):
"""Testing cases when no errors should be reported."""
path_to_file = make_file('test_file.py', template.format(code), executable)
file_tokens = parse_file_tokens(path_to_file)
visitor = comments.ShebangVisitor(
default_options,
filename=path_to_file,
file_tokens=file_tokens,
)
visitor.run()
assert_errors(visitor, [ShebangViolation])
@pytest.mark.parametrize('template', [
template_with_leading_comment,
])
@pytest.mark.parametrize('code', [
'#!/bin/other', # does not include `python`
' #!/bin/python', # has extra whitespace
'\n\n#!python', # has extra newlines
])
def test_wrong_shebang_format(
make_file,
assert_errors,
parse_file_tokens,
default_options,
template,
code,
):
"""Testing cases when no errors should be reported."""
path_to_file = make_file(
'test_file.py', template.format(code), is_executable=True,
)
file_tokens = parse_file_tokens(path_to_file)
visitor = comments.ShebangVisitor(
default_options,
filename=path_to_file,
file_tokens=file_tokens,
)
visitor.run()
assert_errors(visitor, [ShebangViolation])
# File: comapv/tests/tests_viz.py
# Repo: co-map-v/co-map-v.github.io (BSD-2-Clause)
"""
Unit tests to ensure that each function in app.py generates a plotly figure. Pylint = 9.83
"""
import unittest
import os
import json
import urllib.request
import pathlib
import pandas as pd
from .. import app
class UnitTests(unittest.TestCase):
    """Smoke tests checking that each plotting function in app.py returns a plotly figure."""

    def test_smoke1(self):
"""Smoke Test: Death Counts Map
Should check to see if the function generates a plotly plot
"""
        # URLs left long, outside of PEP8 compliance to favour readability!
# Load data from Github Repo
with urllib.request.urlopen('https://raw.githubusercontent.com/co-map-v/co-map-v.github.io/main/comapv/data/ma_map.geojson') as response: # pylint: disable=line-too-long
counties_1 = json.load(response)
wd_of_script = pathlib.Path(__file__).parent.absolute()
filepath_read = os.path.join(wd_of_script, './', 'smoketest_data.csv')# pylint: disable=line-too-long
df_time_1 = pd.read_csv(filepath_read)
fig = app.death_counts_map(df_time_1,counties_1)
string = str(type(fig))
self.assertEqual(string, "<class 'plotly.graph_objs._figure.Figure'>")
    def test_smoke2(self):
"""Smoke Test: Case Counts Map
Should check to see if the function generates a plotly plot
"""
        # URLs left long, outside of PEP8 compliance to favour readability!
# Load data from Github Repo
with urllib.request.urlopen('https://raw.githubusercontent.com/co-map-v/co-map-v.github.io/main/comapv/data/ma_map.geojson') as response: # pylint: disable=line-too-long
counties_1 = json.load(response)
wd_of_script = pathlib.Path(__file__).parent.absolute()
filepath_read = os.path.join(wd_of_script, './', 'smoketest_data.csv')# pylint: disable=line-too-long
df_time_1 = pd.read_csv(filepath_read)
fig = app.case_count_map(df_time_1,counties_1)
string = str(type(fig))
self.assertEqual(string, "<class 'plotly.graph_objs._figure.Figure'>")
    def test_smoke3(self):
"""Smoke Test: Pop Counts Map
Should check to see if the function generates a plotly plot
"""
        # URLs left long, outside of PEP8 compliance to favour readability!
# Load data from Github Repo
with urllib.request.urlopen('https://raw.githubusercontent.com/co-map-v/co-map-v.github.io/main/comapv/data/ma_map.geojson') as response: # pylint: disable=line-too-long
counties_1 = json.load(response)
wd_of_script = pathlib.Path(__file__).parent.absolute()
filepath_read = os.path.join(wd_of_script, './', 'smoketest_data.csv')# pylint: disable=line-too-long
df_time_1 = pd.read_csv(filepath_read)
fig = app.population_map(df_time_1,counties_1)
string = str(type(fig))
self.assertEqual(string, "<class 'plotly.graph_objs._figure.Figure'>")
    def test_smoke4(self):
"""Smoke Test: Pop Counts Chart
Should check to see if the function generates a plotly plot
"""
wd_of_script = pathlib.Path(__file__).parent.absolute()
filepath_read = os.path.join(wd_of_script, './', 'smoketest_data.csv')# pylint: disable=line-too-long
df_time_1 = pd.read_csv(filepath_read)
fig = app.population_histogram(df_time_1)
string = str(type(fig))
self.assertEqual(string, "<class 'plotly.graph_objs._figure.Figure'>")
    def test_smoke5(self):
"""Smoke Test: Death Counts Chart
Should check to see if the function generates a plotly plot
"""
wd_of_script = pathlib.Path(__file__).parent.absolute()
filepath_read = os.path.join(wd_of_script, './', 'smoketest_data.csv')# pylint: disable=line-too-long
df_time_1 = pd.read_csv(filepath_read)
fig = app.deaths_histogram(df_time_1)
string = str(type(fig))
self.assertEqual(string, "<class 'plotly.graph_objs._figure.Figure'>")
    def test_smoke6(self):
"""Smoke Test: Case Counts Chart
Should check to see if the function generates a plotly plot
"""
wd_of_script = pathlib.Path(__file__).parent.absolute()
filepath_read = os.path.join(wd_of_script, './', 'smoketest_data.csv')# pylint: disable=line-too-long
df_time_1 = pd.read_csv(filepath_read)
fig = app.case_histogram(df_time_1)
string = str(type(fig))
self.assertEqual(string, "<class 'plotly.graph_objs._figure.Figure'>")
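Every smoke test above re-downloads the geojson, re-reads the CSV, and repeats the same figure-type assertion. A sketch of how the shared fixtures and the check could be centralised in a base class (stand-in fixture values; the real suite would load the geojson and CSV here):

```python
import unittest


class SmokeTestBase(unittest.TestCase):
    """Hypothetical base class: load shared fixtures once, not per test."""

    @classmethod
    def setUpClass(cls):
        # Stand-in values keep this sketch self-contained; the real suite
        # would fetch the geojson and read smoketest_data.csv here.
        cls.counties = {'type': 'FeatureCollection', 'features': []}
        cls.df_rows = [{'county': 'Suffolk', 'cases': 1}]

    def assert_plotly_figure(self, fig):
        """Shared form of the type check repeated in each test above."""
        self.assertEqual(
            str(type(fig)), "<class 'plotly.graph_objs._figure.Figure'>")


SmokeTestBase.setUpClass()
print(SmokeTestBase.counties['type'])  # → FeatureCollection
```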
suite = unittest.TestLoader().loadTestsFromTestCase(UnitTests)
_ = unittest.TextTestRunner().run(suite)
# File: Python Code/Algorithm Range 0 - 20000/bubble.py
# Repo: Roisin-Fallon/Sorting_Algorithms (Apache-2.0)
from random import * # Import python random module
def random_array(n): # Function takes as input a value n
array = [] # create an array variable
for i in range(0, n, 1): # i start at 0 stop at n an increment by 1 (e.g. if n=4 0,1,2,3)
        array.append(randint(0, 100)) # Append a random integer between 0 and 100 inclusive
return array
# assign the random array to alist
alist1= random_array(100)
alist2= random_array(250)
alist3= random_array(500)
alist4 = random_array(750)
alist5 = random_array(1000)
alist6 = random_array(1250)
alist7 = random_array(2500)
alist8 = random_array(3750)
alist9 = random_array(5000)
alist10 = random_array(6250)
alist11 = random_array(7500)
alist12 = random_array(8750)
alist13 = random_array(10000)
alist14 = random_array(15000)
alist15 = random_array(20000)
# Code adapted from: https://www.geeksforgeeks.org/bubble-sort/
def bubbleSort(alist):
n = len(alist)
for i in range(n): # Traverse through all elements in the array
for j in range(0, n-i-1): # Last i elements are already in place
if alist[j] > alist[j+1]: # Swap if the element is greater than the next element
alist[j], alist[j+1] = alist[j+1], alist[j]
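A quick sanity check of the algorithm above, using a renamed self-contained copy of the function for the demo:

```python
def bubble_sort(items):
    """In-place bubble sort, same logic as bubbleSort above."""
    n = len(items)
    for i in range(n):                 # traverse all elements
        for j in range(0, n - i - 1):  # last i elements are already in place
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]


sample = [64, 34, 25, 12, 22, 11, 90]
bubble_sort(sample)
print(sample)  # → [11, 12, 22, 25, 34, 64, 90]
```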
import time # import time module
num_runs = 10 # Number of times to test the function i.e. we want 10 runs
results = [] # array to store results for each test
bubble_avglist = []
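The benchmark function below repeats one near-identical measurement block per input list. As a sketch, the pattern can be factored into a helper; note it uses `time.perf_counter` (better timer resolution than `time.time`) and re-copies the data each run, so later runs do not time an already-sorted list:

```python
import time


def average_runtime(fn, data, runs=10):
    """Average wall-clock seconds of fn(fresh_copy_of_data) over `runs` runs."""
    timings = []
    for _ in range(runs):
        work = list(data)  # fresh copy: each run sorts the unsorted input
        start = time.perf_counter()
        fn(work)
        timings.append(time.perf_counter() - start)
    return sum(timings) / runs


# Usage sketch: one call per input size instead of one copy-pasted block, e.g.
# averages = [average_runtime(bubbleSort, a) for a in (alist1, alist2, alist3)]
avg = average_runtime(sorted, list(range(500, 0, -1)), runs=3)
print(avg >= 0.0)  # → True
```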
def benchmark_bubble():
for r in range(num_runs): # Benchmark the function
start_time = time.time() # Log the start time in seconds
bubbleSort(alist1) # Call the function insertion to benchmark
end_time = time.time() # Log the end time in seconds
time_elapsed= end_time - start_time # Calculate the elapsed time
results.append(time_elapsed)
    b = sum(results[-num_runs:]) # Sum only this run's 10 timings (results is never cleared between runs)
average = (b/num_runs) # Calculate the average of a run
bubble_avglist.append(average)
for r in range(num_runs): # Benchmark the function
start_time = time.time() # Log the start time in seconds
bubbleSort(alist2) # Call the function insertion to benchmark
end_time = time.time() # Log the end time in seconds
time_elapsed= end_time - start_time # Calculate the elapsed time
results.append(time_elapsed)
    b = sum(results[-num_runs:]) # Sum only this run's 10 timings (results is never cleared between runs)
average = (b/num_runs) # Calculate the average of a run
bubble_avglist.append(average)
for r in range(num_runs): # Benchmark the function
start_time = time.time() # Log the start time in seconds
bubbleSort(alist3) # Call the function insertion to benchmark
end_time = time.time() # Log the end time in seconds
time_elapsed= end_time - start_time # Calculate the elapsed time
results.append(time_elapsed)
    b = sum(results[-num_runs:]) # Sum only this run's 10 timings (results is never cleared between runs)
average = (b/num_runs) # Calculate the average of a run
bubble_avglist.append(average)
for r in range(num_runs): # Benchmark the function
start_time = time.time() # Log the start time in seconds
bubbleSort(alist4) # Call the function insertion to benchmark
end_time = time.time() # Log the end time in seconds
time_elapsed= end_time - start_time # Calculate the elapsed time
results.append(time_elapsed)
b = sum(results) # Sum the results of the 10 runs
average = (b/num_runs) # Calculate the average of a run
bubble_avglist.append(average)
for r in range(num_runs): # Benchmark the function
start_time = time.time() # Log the start time in seconds
bubbleSort(alist5) # Call the function insertion to benchmark
end_time = time.time() # Log the end time in seconds
time_elapsed= end_time - start_time # Calculate the elapsed time
results.append(time_elapsed)
b = sum(results) # Sum the results of the 10 runs
average = (b/num_runs) # Calculate the average of a run
bubble_avglist.append(average)
for r in range(num_runs): # Benchmark the function
start_time = time.time() # Log the start time in seconds
bubbleSort(alist6) # Call the function insertion to benchmark
end_time = time.time() # Log the end time in seconds
time_elapsed= end_time - start_time # Calculate the elapsed time
results.append(time_elapsed)
b = sum(results) # Sum the results of the 10 runs
average = (b/num_runs) # Calculate the average of a run
bubble_avglist.append(average)
for r in range(num_runs): # Benchmark the function
start_time = time.time() # Log the start time in seconds
bubbleSort(alist7) # Call the function insertion to benchmark
end_time = time.time() # Log the end time in seconds
time_elapsed= end_time - start_time # Calculate the elapsed time
results.append(time_elapsed)
b = sum(results) # Sum the results of the 10 runs
average = (b/num_runs) # Calculate the average of a run
bubble_avglist.append(average)
for r in range(num_runs): # Benchmark the function
start_time = time.time() # Log the start time in seconds
bubbleSort(alist8) # Call the function insertion to benchmark
end_time = time.time() # Log the end time in seconds
time_elapsed= end_time - start_time # Calculate the elapsed time
results.append(time_elapsed)
b = sum(results) # Sum the results of the 10 runs
average = (b/num_runs) # Calculate the average of a run
bubble_avglist.append(average)
for r in range(num_runs): # Benchmark the function
start_time = time.time() # Log the start time in seconds
bubbleSort(alist9) # Call the function insertion to benchmark
end_time = time.time() # Log the end time in seconds
time_elapsed= end_time - start_time # Calculate the elapsed time
results.append(time_elapsed)
b = sum(results) # Sum the results of the 10 runs
average = (b/num_runs) # Calculate the average of a run
bubble_avglist.append(average)
for r in range(num_runs): # Benchmark the function
start_time = time.time() # Log the start time in seconds
bubbleSort(alist10) # Call the function insertion to benchmark
end_time = time.time() # Log the end time in seconds
time_elapsed= end_time - start_time # Calculate the elapsed time
results.append(time_elapsed)
b = sum(results) # Sum the results of the 10 runs
average = (b/num_runs) # Calculate the average of a run
bubble_avglist.append(average)
for r in range(num_runs): # Benchmark the function
start_time = time.time() # Log the start time in seconds
bubbleSort(alist11) # Call the function insertion to benchmark
end_time = time.time() # Log the end time in seconds
time_elapsed= end_time - start_time # Calculate the elapsed time
results.append(time_elapsed)
b = sum(results) # Sum the results of the 10 runs
average = (b/num_runs) # Calculate the average of a run
bubble_avglist.append(average)
for r in range(num_runs): # Benchmark the function
start_time = time.time() # Log the start time in seconds
bubbleSort(alist12) # Call the function insertion to benchmark
end_time = time.time() # Log the end time in seconds
time_elapsed= end_time - start_time # Calculate the elapsed time
results.append(time_elapsed)
b = sum(results) # Sum the results of the 10 runs
average = (b/num_runs) # Calculate the average of a run
bubble_avglist.append(average)
for r in range(num_runs): # Benchmark the function
start_time = time.time() # Log the start time in seconds
bubbleSort(alist13) # Call the function insertion to benchmark
end_time = time.time() # Log the end time in seconds
time_elapsed= end_time - start_time # Calculate the elapsed time
results.append(time_elapsed)
b = sum(results) # Sum the results of the 10 runs
average = (b/num_runs) # Calculate the average of a run
bubble_avglist.append(average)
for r in range(num_runs): # Benchmark the function
start_time = time.time() # Log the start time in seconds
bubbleSort(alist14) # Call the function insertion to benchmark
end_time = time.time() # Log the end time in seconds
time_elapsed= end_time - start_time # Calculate the elapsed time
results.append(time_elapsed)
b = sum(results) # Sum the results of the 10 runs
average = (b/num_runs) # Calculate the average of a run
bubble_avglist.append(average)
for r in range(num_runs): # Benchmark the function
start_time = time.time() # Log the start time in seconds
bubbleSort(alist15) # Call the function insertion to benchmark
end_time = time.time() # Log the end time in seconds
time_elapsed= end_time - start_time # Calculate the elapsed time
results.append(time_elapsed)
b = sum(results) # Sum the results of the 10 runs
average = (b/num_runs) # Calculate the average of a run
bubble_avglist.append(average)
print(bubble_avglist)
benchmark_bubble()
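Because `bubbleSort` sorts in place, every run after the first times an already-sorted list. A more representative measurement hands the sorter a fresh copy on each run; the sketch below (separate from the script above, using the standard-library `timeit` module) shows one way to do that:

```python
import random
import timeit

def bubble_sort(a):
    # Same algorithm as bubbleSort above, kept local so the snippet is self-contained.
    n = len(a)
    for i in range(n):
        for j in range(n - i - 1):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]

data = [random.random() for _ in range(500)]
# list(data) hands the sorter a fresh unsorted copy on every run,
# so later runs are not timed against already-sorted input.
times = timeit.repeat(lambda: bubble_sort(list(data)), number=1, repeat=5)
best = min(times)  # min is usually less noisy than the mean for micro-benchmarks
```

Taking the minimum of several repeats filters out scheduler noise better than averaging, which is why `timeit`'s own documentation recommends it.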
# ---- File: web-scraping/converter.py (repo: stivenramireza/nutibara-web-scraping, MIT) ----
import crawl
import generator
from datetime import datetime
import json, re
date = datetime.now()
scraping_date = date.strftime("%d/%m/%Y")
scraping_hour = date.strftime("%X")
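The `strftime` stamps above can be exercised in isolation; a small sketch with a fixed (invented) datetime so the output is deterministic:

```python
from datetime import datetime

d = datetime(2023, 5, 1, 14, 30, 5)   # hypothetical scrape moment
date_str = d.strftime("%d/%m/%Y")     # day/month/year, zero-padded
hour_str = d.strftime("%X")           # locale-dependent time, e.g. "14:30:05"
```

Note that `%X` is locale-dependent, so the hour string may differ between machines; `%H:%M:%S` is the portable choice when a fixed format matters.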
def convert_string_to_json(url):
    soup = crawl.scrape_html(url)
    # Match the inline JS assignment "var sfAdvert = {...};" that carries the listing data
    pattern = re.compile(r"var sfAdvert = \{.*\:.*\:.*\};")
    json_property = ''
    for script in soup.find_all("script", type="text/javascript"):
        if pattern.findall(script.text):
            # Drop the "var sfAdvert =" tokens and keep the object body
            json_property = pattern.findall(script.text)[0].split()[3:]
    json_to_strip = json_property[-1][0:-1]  # strip the trailing ';'
    json_property = json_property[0:-1]
    json_property.append(json_to_strip)
    json_property = " ".join(json_property)
    json_property_agency = json.loads(json_property)
    return json_property_agency
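The extraction pattern above — pull a JavaScript object literal out of a `<script>` tag and parse it as JSON — can be shown on a self-contained string. The variable name mirrors the `sfAdvert` target, but the fragment and its field values are invented for illustration:

```python
import json
import re

# Hypothetical script-tag content; real pages embed a much larger object.
html_script = 'var sfAdvert = {"AdvertId": "42", "Status": "Active"};'

# Capture just the {...} body instead of splitting on whitespace,
# which is more robust when the object contains extra spaces.
match = re.search(r"var sfAdvert = (\{.*\});", html_script)
advert = json.loads(match.group(1)) if match else {}
```

Capturing the braces with a group avoids the token-splitting and re-joining the original function has to do.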
def convert_12_to_24(str1):
    # Convert an "HH:MM:SS AM/PM" string to 24-hour "HH:MM:SS"
    if str1[-2:] == "AM" and str1[:2] == "12":
        return "00" + str1[2:-2]
    elif str1[-2:] == "AM":
        return str1[:-2]
    elif str1[-2:] == "PM" and str1[:2] == "12":
        return str1[:-2]
    else:
        return str(int(str1[:2]) + 12) + str1[2:8]
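The same 12-hour-to-24-hour conversion can be delegated to `datetime.strptime`, whose `%I`/`%p` directives already handle the 12 AM and 12 PM edge cases. A sketch (an alternative, not the function used by the rest of this script):

```python
from datetime import datetime

def to_24h(timestr):
    # %I parses the 12-hour clock, %p the AM/PM marker; %H re-emits 24-hour time.
    return datetime.strptime(timestr, "%I:%M:%S %p").strftime("%H:%M:%S")
```

For example, `to_24h("12:05:00 AM")` yields `"00:05:00"` and `to_24h("07:15:00 PM")` yields `"19:15:00"`, with no manual slicing of the string.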
# The new/old converters were identical except for the 'use' value; the shared
# builder below preserves both public entry points with the same behavior.
def _build_property_dict(json_property_agency, property_location, owner_property, property_features, property_hidden_features, array_offers_type, url, use):
    # Reorder the agency's "mm/dd/yyyy" date to "dd/mm/yyyy"; the strptime
    # round trip also validates the reordered value.
    modify_date = json_property_agency["ModifyDate"].split()[0]
    modify_date = modify_date.split('/')
    modify_date = modify_date[1] + '/' + modify_date[0] + '/' + modify_date[2]
    modify_date_object = datetime.strptime(modify_date, '%d/%m/%Y')
    modify_date = datetime.strftime(modify_date_object, '%d/%m/%Y')
    hour = json_property_agency["ModifyDate"].split()[1]
    am_pm = json_property_agency["ModifyDate"].split()[2]
    modify_hour_object = datetime.strptime(hour, '%H:%M:%S')
    modify_hour_str = datetime.strftime(modify_hour_object, '%H:%M:%S')
    modify_hour = modify_hour_str + " " + am_pm
    modify_hour = convert_12_to_24(modify_hour)
    array_interior_features = ''
    array_exterior_features = ''
    array_sector_features = ''
    for key in property_hidden_features:
        if key == 'interiorFeatures':
            array_interior_features = property_hidden_features[key]
        elif key == 'exteriorFeatures':
            array_exterior_features = property_hidden_features[key]
        else:
            array_sector_features = property_hidden_features[key]
    return {
        'urlProperty': url,
        'scrapingDate': scraping_date,
        'scrapingHour': scraping_hour,
        'modifyDate': modify_date,
        'modifyHour': modify_hour,
        'code': int(json_property_agency["AdvertId"]),
        'status': json_property_agency["Status"],
        'type': json_property_agency["TransactionType"],
        'use': use,
        'nameProject': json_property_agency["Title"],
        'country': property_location['country'],
        'department': property_location['department'],
        'city': property_location['city'],
        'sector': property_location['sector'],
        'neighborhood': property_location['neighborhood'],
        'address': property_location['address'],
        'latitude': property_location['latitude'],
        'longitude': property_location['longitude'],
        'idOwnerProperty': owner_property['id'],
        'nameOwnerProperty': owner_property['name'],
        'contractType': owner_property['contractType'],
        'financing': owner_property['financing'],
        'schedule': owner_property['schedule'],
        'description': json_property_agency["Description"],
        'price': property_features['price'],
        'squareMeters': property_features['squareMeters'],
        'rooms': property_features['rooms'],
        'bathrooms': property_features['bathrooms'],
        'garages': property_features['garages'],
        'privateArea': property_features['privateArea'],
        'constructionArea': property_features['constructionArea'],
        'squareMetersPrice': property_features['squareMetersPrice'],
        'stratum': property_features['stratum'],
        'condition': property_features['condition'],
        'antiquity': property_features['antiquity'],
        'floor': property_features['floor'],
        'interiorFloors': property_features['interiorFloors'],
        'weather': property_features['weather'],
        'includesAdministration': property_features['includesAdministration'],
        'admonPrice': property_features['admonPrice'],
        'interiorFeatures': array_interior_features,
        'exteriorFeatures': array_exterior_features,
        'sectorFeatures': array_sector_features,
        'offersType': array_offers_type[1:]
    }

def convert_new_property_to_json(json_property_agency, property_location, owner_property, property_features, property_hidden_features, array_offers_type, url):
    new_property_dict = _build_property_dict(json_property_agency, property_location, owner_property, property_features, property_hidden_features, array_offers_type, url, use='Nuevo')
    generator.create_json(new_property_dict)

def convert_old_property_to_json(json_property_agency, property_location, owner_property, property_features, property_hidden_features, array_offers_type, url):
    old_property_dict = _build_property_dict(json_property_agency, property_location, owner_property, property_features, property_hidden_features, array_offers_type, url, use='Usado')
    generator.create_json(old_property_dict)
# ---- File: core/views/tasks.py (repo: ruizdiazever/inter-webapp, MIT) ----
import json
from datetime import datetime, timedelta, timezone
import pytz
from zoneinfo import ZoneInfo
from flask import request, jsonify, make_response
from core.settings import VERSION_API, TIME_ZONE, ISO_8601, KEY_DELETE_ALL_TASKS
from core.models import Unit, User, Task
from core.instance import app
from core.session import *

TZ = ZoneInfo(TIME_ZONE)

def _to_local(dt):
    # The Task columns hold naive UTC datetimes; rebuild to minute precision
    # (matching the original field-by-field conversion, which discarded
    # seconds) and shift into TIME_ZONE.
    utc = datetime(dt.year, dt.month, dt.day, dt.hour, dt.minute, tzinfo=timezone.utc)
    return utc.astimezone(TZ)

def _user_fields(entry, task, users):
    # Copy the contact fields of the user assigned to a task into an entry.
    for user in users:
        if user.id == int(task.user):
            entry['phone'] = user.phone
            entry['specialty'] = user.specialty
            entry['name'] = user.name
            entry['lastName'] = user.last_name
            entry['user'] = task.user
            entry['position'] = user.position

# GET ALL TASKS OF THE CALENDAR
@app.route(f'/api/{VERSION_API}/tasks', methods=['GET', 'POST'])
@token_required
def get_all_tasks(current_unit):
    tasks = Task.query.all()
    units = Unit.query.all()
    result = []
    for task in tasks:
        task_data = {}
        task_data['id'] = task.id
        task_data['user'] = task.user
        task_data['startTime'] = _to_local(task.start).strftime(ISO_8601)
        task_data['endTime'] = _to_local(task.end).strftime(ISO_8601)
        for unit in units:
            if unit.id == task.unit_id:
                task_data['unit_name'] = unit.name
                task_data['public_id'] = unit.public_id
        result.append(task_data)
    return jsonify({'tasks': result})

# GET ALL TASKS OF THE CALENDAR IN THE CURRENT UNIT
@app.route(f'/api/{VERSION_API}/tasks/current', methods=['GET', 'POST'])
@token_required
def get_all_current_tasks(current_unit):
    tasks = Task.query.filter_by(unit_id=current_unit.id)
    users = User.query.all()
    result = []
    for task in tasks:
        task_data = {}
        _user_fields(task_data, task, users)
        task_data['id'] = task.id
        task_data['user'] = task.user
        task_data['unitName'] = task.unit_name
        start_ba = _to_local(task.start)
        task_data['startTime'] = start_ba.strftime("%d-%m-%Y %H:%M")
        task_data['dateStart'] = start_ba.strftime("%Y-%m-%d")
        task_data['dateStartArg'] = start_ba.strftime("%d-%m-%Y")
        task_data['hourStart'] = start_ba.strftime("%H:%M")
        end_ba = _to_local(task.end)
        task_data['endTime'] = end_ba.strftime("%d-%m-%Y %H:%M")
        task_data['dateEndArg'] = end_ba.strftime("%d-%m-%Y")
        task_data['hourEnd'] = end_ba.strftime("%H:%M")
        task_data['rangeHour'] = f'{start_ba.strftime("%H:%M")} a {end_ba.strftime("%H:%M")}'
        result.append(task_data)
    return jsonify({'tasks': result})

def _today_payload(tasks, users):
    # Classify each task relative to "now" (UTC) and a 24-hour lookahead
    # window. Shared by the two "today" endpoints below, which were
    # line-for-line duplicates differing only in how tasks/users are filtered.
    result = {"before": [], "after": [], "current": [], "out": []}
    time_info = {}
    time_info['timeZone'] = TIME_ZONE
    now = datetime.utcnow()
    limit = (datetime.utcnow() + timedelta(hours=24)).replace(tzinfo=None)
    todayArg = datetime.now(pytz.timezone(TIME_ZONE))
    time_info['dateUtc'] = now.strftime("%d-%m-%Y %H:%M")
    time_info['dateArg'] = todayArg.strftime("%d-%m-%Y %H:%M")
    time_info['limitUtc'] = limit.strftime("%d-%m-%Y %H:%M")
    time_info['limitArg'] = (todayArg + timedelta(hours=24)).strftime("%d-%m-%Y a las %H:%M")
    for task in tasks:
        start_ba = _to_local(task.start)
        end_ba = _to_local(task.end)
        if task.start < limit:
            entry = {'startUtc': task.start, 'endUtc': task.end,
                     'id': task.id, 'unitName': task.unit_name}
            _user_fields(entry, task, users)
            entry['start'] = start_ba.strftime("%H:%M")
            entry['startFull'] = start_ba.strftime("%d-%m-%Y %H:%M")
            entry['end'] = end_ba.strftime("%H:%M")
            entry['endFull'] = end_ba.strftime("%d-%m-%Y %H:%M")
            if task.end < now:  # BEFORE: already finished
                entry['beforeFormat'] = end_ba.strftime("%H:%M del %d/%m")
                result['before'].append(entry)
            elif task.start <= now <= task.end:  # CURRENT: in progress
                result['current'].append(entry)
            else:  # AFTER: starts within the next 24 hours
                entry['afterFormat'] = start_ba.strftime("%H:%M del %d/%m")
                result['after'].append(entry)
        else:  # OUT: beyond the 24-hour window
            out = {'startUtc': task.start.strftime("%d-%m-%Y %H:%M"),
                   'endUtc': task.end.strftime("%d-%m-%Y %H:%M"),
                   'unitName': task.unit_name}
            for user in users:
                if user.id == int(task.user):
                    out['name'] = user.name
            out['startFull'] = start_ba.strftime("%d-%m-%Y %H:%M")
            out['endFull'] = end_ba.strftime("%d-%m-%Y %H:%M")
            result['out'].append(out)
    return result, time_info

# GET TASKS OF TODAY
@app.route(f'/api/{VERSION_API}/tasks/current/today/v2', methods=['GET'])
@token_required
def get_task_today_v2(current_unit):
    tasks = Task.query.filter_by(unit_id=current_unit.id)
    users = User.query.filter_by(unit_id=current_unit.id)
    result, time_info = _today_payload(tasks, users)
    if not tasks:
        return jsonify({'message': 'Task does not exist.'})
    return jsonify({'tasks': result, 'time': time_info})

# GET TASKS OF TODAY WITH PUBLIC ID
@app.route(f'/api/{VERSION_API}/tasks/current/<public_id>', methods=['GET'])
@token_required
def get_task_today_main(current_unit, public_id):
    tasks = Task.query.filter_by(public_id=public_id)
    users = User.query.filter_by(public_id=public_id)
    result, time_info = _today_payload(tasks, users)
    if not tasks:
        return jsonify({'message': 'Task does not exist.'})
    return jsonify({'tasks': result, 'time': time_info})

# GET ALL TASKS OF A UNIT BY PUBLIC ID
@app.route(f'/api/{VERSION_API}/tasks/<public_id>', methods=['GET'])
@token_required
def get_all_tasks_id(current_unit, public_id):
    tasks = Task.query.filter_by(public_id=public_id)
    users = User.query.all()
    result = []
    for task in tasks:
        task_data = {}
        _user_fields(task_data, task, users)
        task_data['id'] = task.id
        task_data['user'] = task.user
        task_data['unitName'] = task.unit_name
        start_ba = _to_local(task.start)
        task_data['startTime'] = start_ba.strftime("%H:%M")
        task_data['startTimeFull'] = start_ba.strftime("%d-%m-%Y %H:%M")
        task_data['dateStart'] = start_ba.strftime("%Y-%m-%d")
        end_ba = _to_local(task.end)
        task_data['endTime'] = end_ba.strftime("%H:%M")
        task_data['date'] = start_ba.strftime("%d-%m-%Y")
        task_data['endTimeFull'] = end_ba.strftime("%d-%m-%Y %H:%M")
        result.append(task_data)
    return jsonify({'tasks': result})

# CREATE TASK
@app.route(f'/api/{VERSION_API}/create/task', methods=['POST'])
@token_required
def create_task(current_unit):
    data = request.get_json()
    new_task = Task(start=datetime.strptime(data['start'], ISO_8601),
                    end=datetime.strptime(data['end'], ISO_8601),
                    user=data['user'],
                    public_id=current_unit.public_id,
                    unit_id=current_unit.id,
                    unit_name=current_unit.name)
    db.session.add(new_task)
    db.session.commit()
    return make_response(jsonify({'message': 'New task created.'}), 201)

# UPDATE TASK
@app.route(f'/api/{VERSION_API}/tasks/update/<task_id>', methods=['PUT'])
@token_required
def update_task(current_unit, task_id):
    data = request.get_json()
    task = Task.query.filter_by(id=task_id, unit_id=current_unit.id).first()
    if not task:
        return jsonify({'message': 'Task does not exist.'})
    task.start = datetime.strptime(data['start'], ISO_8601)
    task.end = datetime.strptime(data['end'], ISO_8601)
    task.user = data['user']
    db.session.merge(task)
    db.session.flush()
    db.session.commit()
    return jsonify({'message': 'Task updated.'})

# DELETE TASK
@app.route(f'/api/{VERSION_API}/tasks/delete/<task_id>', methods=['DELETE'])
@token_required
def delete_task(current_unit, task_id):
    task = Task.query.filter_by(id=task_id, unit_id=current_unit.id).first()
    if not task:
        return jsonify({'message': 'Task does not exist.'})
    db.session.delete(task)
    db.session.commit()
    return jsonify({'message': 'Task deleted.'})

# USED IN DELETE EXPIRED TASKS
def task_expired(task, hours=24):
    # A task counts as expired `hours` after its local end time.
    return _to_local(task.end) + timedelta(hours=hours)

# DELETE EXPIRED TASKS
@app.route(f'/api/{VERSION_API}/tasks/delete/expired', methods=['DELETE'])
@token_required
def delete_task_expired(current_unit):
    tasks = Task.query.all()
    nowArg = datetime.now(pytz.timezone(TIME_ZONE))
    deleted = 0
    for task in tasks:
        if task_expired(task) <= nowArg:
            deleted += 1
            db.session.delete(task)
            db.session.commit()
    if not tasks:
        return jsonify({'message': 'Task does not exist.'})
    return jsonify({'tasks': f'{deleted} expired tasks deleted.', 'argTimeNow': nowArg})

# DELETE ALL TASKS
@app.route(f'/api/{VERSION_API}/tasks/delete/{KEY_DELETE_ALL_TASKS}', methods=['DELETE'])
@token_required
def delete_all_tasks(current_unit):
    try:
        num_rows_deleted = db.session.query(Task).delete()
        db.session.commit()
    except Exception:
        db.session.rollback()
    return jsonify({'message': 'All tasks of the Unit deleted.'})
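The UTC-to-local conversion pattern used throughout this module can be isolated into a small helper. In the sketch below, a fixed UTC-3 offset stands in for `ZoneInfo(TIME_ZONE)` so the snippet carries no tz-database dependency; the helper name and the sample timestamp are invented for illustration:

```python
from datetime import datetime, timedelta, timezone

# Fixed UTC-3 offset standing in for ZoneInfo("America/Argentina/Buenos_Aires").
ART = timezone(timedelta(hours=-3))

def utc_to_local(naive_utc, tz=ART):
    # Attach UTC to the naive datetime explicitly, then convert, instead of
    # rebuilding the datetime field by field as the route handlers do.
    return naive_utc.replace(tzinfo=timezone.utc).astimezone(tz)

stamp = utc_to_local(datetime(2023, 5, 1, 15, 30))  # 15:30 UTC -> 12:30 local
```

Unlike the field-by-field rebuild, `replace(tzinfo=...)` also preserves seconds and microseconds, which matters if the stored timestamps ever carry sub-minute precision.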
0f76268d35b1aaaeb71f3d19d719654aaa3000c0 | 54 | py | Python | projects/faces/insight/insight/common/__init__.py | Bingwen-Hu/hackaway | 69727d76fd652390d9660e9ea4354ba5cc76dd5c | [
"BSD-2-Clause"
] | null | null | null | projects/faces/insight/insight/common/__init__.py | Bingwen-Hu/hackaway | 69727d76fd652390d9660e9ea4354ba5cc76dd5c | [
"BSD-2-Clause"
] | null | null | null | projects/faces/insight/insight/common/__init__.py | Bingwen-Hu/hackaway | 69727d76fd652390d9660e9ea4354ba5cc76dd5c | [
"BSD-2-Clause"
] | null | null | null | from . import face_image
from . import face_preprocess | 27 | 29 | 0.833333 | 8 | 54 | 5.375 | 0.625 | 0.465116 | 0.651163 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12963 | 54 | 2 | 29 | 27 | 0.914894 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
# nnpy/utils/math_utils.py (AlexBacho/nnpy, MIT)
import numpy as np


def get_random_array(*dims, offset=0):
    # Uniform random values in [offset, offset + 1), shaped (*dims,).
    return np.random.rand(*dims) + offset
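A quick usage sketch of the helper (assuming NumPy is installed; the function body is reproduced from above): the positional `dims` set the shape, and every sample lands in `[offset, offset + 1)`.

```python
import numpy as np


def get_random_array(*dims, offset=0):
    # Same helper as above: uniform samples shifted by `offset`.
    return np.random.rand(*dims) + offset


arr = get_random_array(2, 3, offset=5)
print(arr.shape)                             # (2, 3)
print(bool(((arr >= 5) & (arr < 6)).all()))  # True
```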
# symphony/bdk/gen/pod_api/app_entitlement_api.py (SymphonyOSF/symphony-api-client-python, Apache-2.0)
"""
    Pod API

    This document refers to Symphony API calls that do not need encryption or decryption of content. - sessionToken can be obtained by calling the authenticationAPI on the symphony back end and the key manager respectively. Refer to the methods described in authenticatorAPI.yaml. - Actions are defined to be atomic, ie will succeed in their entirety or fail and have changed nothing. - If it returns a 40X status then it will have made no change to the system even if some subset of the request would have succeeded. - If this contract cannot be met for any reason then this is an error and the response code will be 50X.  # noqa: E501

    The version of the OpenAPI document: 20.14.1
    Generated by: https://openapi-generator.tech
"""

import re  # noqa: F401
import sys  # noqa: F401

from symphony.bdk.gen.api_client import ApiClient, Endpoint as _Endpoint
from symphony.bdk.gen.model_utils import (  # noqa: F401
    check_allowed_values,
    check_validations,
    date,
    datetime,
    file_type,
    none_type,
    validate_and_convert_types
)
from symphony.bdk.gen.pod_model.error import Error
from symphony.bdk.gen.pod_model.pod_app_entitlement_list import PodAppEntitlementList
from symphony.bdk.gen.pod_model.user_app_entitlement_list import UserAppEntitlementList
from symphony.bdk.gen.pod_model.user_app_entitlements_patch_list import UserAppEntitlementsPatchList


class AppEntitlementApi(object):
    """NOTE: This class is auto generated by OpenAPI Generator
    Ref: https://openapi-generator.tech

    Do not edit the class manually.
    """

    def __init__(self, api_client=None):
        if api_client is None:
            api_client = ApiClient()
        self.api_client = api_client
        self.v1_admin_app_entitlement_list_get_endpoint = _Endpoint(
            settings={
                'response_type': (PodAppEntitlementList,),
                'auth': [],
                'endpoint_path': '/v1/admin/app/entitlement/list',
                'operation_id': 'v1_admin_app_entitlement_list_get',
                'http_method': 'GET',
                'servers': None,
            },
            params_map={
                'all': ['session_token'],
                'required': ['session_token'],
                'nullable': [],
                'enum': [],
                'validation': []
            },
            root_map={
                'validations': {},
                'allowed_values': {},
                'openapi_types': {
                    'session_token': (str,),
                },
                'attribute_map': {
                    'session_token': 'sessionToken',
                },
                'location_map': {
                    'session_token': 'header',
                },
                'collection_format_map': {}
            },
            headers_map={
                'accept': ['application/json'],
                'content_type': [],
            },
            api_client=api_client
        )
        self.v1_admin_app_entitlement_list_post_endpoint = _Endpoint(
            settings={
                'response_type': (PodAppEntitlementList,),
                'auth': [],
                'endpoint_path': '/v1/admin/app/entitlement/list',
                'operation_id': 'v1_admin_app_entitlement_list_post',
                'http_method': 'POST',
                'servers': None,
            },
            params_map={
                'all': ['session_token', 'payload'],
                'required': ['session_token', 'payload'],
                'nullable': [],
                'enum': [],
                'validation': []
            },
            root_map={
                'validations': {},
                'allowed_values': {},
                'openapi_types': {
                    'session_token': (str,),
                    'payload': (PodAppEntitlementList,),
                },
                'attribute_map': {
                    'session_token': 'sessionToken',
                },
                'location_map': {
                    'session_token': 'header',
                    'payload': 'body',
                },
                'collection_format_map': {}
            },
            headers_map={
                'accept': ['application/json'],
                'content_type': ['application/json']
            },
            api_client=api_client
        )
        self.v1_admin_user_uid_app_entitlement_list_get_endpoint = _Endpoint(
            settings={
                'response_type': (UserAppEntitlementList,),
                'auth': [],
                'endpoint_path': '/v1/admin/user/{uid}/app/entitlement/list',
                'operation_id': 'v1_admin_user_uid_app_entitlement_list_get',
                'http_method': 'GET',
                'servers': None,
            },
            params_map={
                'all': ['session_token', 'uid'],
                'required': ['session_token', 'uid'],
                'nullable': [],
                'enum': [],
                'validation': []
            },
            root_map={
                'validations': {},
                'allowed_values': {},
                'openapi_types': {
                    'session_token': (str,),
                    'uid': (int,),
                },
                'attribute_map': {
                    'session_token': 'sessionToken',
                    'uid': 'uid',
                },
                'location_map': {
                    'session_token': 'header',
                    'uid': 'path',
                },
                'collection_format_map': {}
            },
            headers_map={
                'accept': ['application/json'],
                'content_type': [],
            },
            api_client=api_client
        )
        self.v1_admin_user_uid_app_entitlement_list_patch_endpoint = _Endpoint(
            settings={
                'response_type': (UserAppEntitlementList,),
                'auth': [],
                'endpoint_path': '/v1/admin/user/{uid}/app/entitlement/list',
                'operation_id': 'v1_admin_user_uid_app_entitlement_list_patch',
                'http_method': 'PATCH',
                'servers': None,
            },
            params_map={
                'all': ['session_token', 'uid', 'payload'],
                'required': ['session_token', 'uid', 'payload'],
                'nullable': [],
                'enum': [],
                'validation': []
            },
            root_map={
                'validations': {},
                'allowed_values': {},
                'openapi_types': {
                    'session_token': (str,),
                    'uid': (int,),
                    'payload': (UserAppEntitlementsPatchList,),
                },
                'attribute_map': {
                    'session_token': 'sessionToken',
                    'uid': 'uid',
                },
                'location_map': {
                    'session_token': 'header',
                    'uid': 'path',
                    'payload': 'body',
                },
                'collection_format_map': {}
            },
            headers_map={
                'accept': ['application/json'],
                'content_type': ['application/json']
            },
            api_client=api_client
        )
        self.v1_admin_user_uid_app_entitlement_list_post_endpoint = _Endpoint(
            settings={
                'response_type': (UserAppEntitlementList,),
                'auth': [],
                'endpoint_path': '/v1/admin/user/{uid}/app/entitlement/list',
                'operation_id': 'v1_admin_user_uid_app_entitlement_list_post',
                'http_method': 'POST',
                'servers': None,
            },
            params_map={
                'all': ['session_token', 'uid', 'payload'],
                'required': ['session_token', 'uid', 'payload'],
                'nullable': [],
                'enum': [],
                'validation': []
            },
            root_map={
                'validations': {},
                'allowed_values': {},
                'openapi_types': {
                    'session_token': (str,),
                    'uid': (int,),
                    'payload': (UserAppEntitlementList,),
                },
                'attribute_map': {
                    'session_token': 'sessionToken',
                    'uid': 'uid',
                },
                'location_map': {
                    'session_token': 'header',
                    'uid': 'path',
                    'payload': 'body',
                },
                'collection_format_map': {}
            },
            headers_map={
                'accept': ['application/json'],
                'content_type': ['application/json']
            },
            api_client=api_client
        )

    def v1_admin_app_entitlement_list_get(
        self,
        session_token,
        **kwargs
    ):
        """Get the list of application entitlements for the company  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True

        >>> thread = pod_api.v1_admin_app_entitlement_list_get(session_token, async_req=True)
        >>> result = thread.get()

        Args:
            session_token (str): Session authentication token.

        Keyword Args:
            _return_http_data_only (bool): response data without head status
                code and headers. Default is True.
            _preload_content (bool): if False, the urllib3.HTTPResponse object
                will be returned without reading/decoding response data.
                Default is True.
            _request_timeout (int/float/tuple): timeout setting for this request. If
                one number provided, it will be total request timeout. It can also
                be a pair (tuple) of (connection, read) timeouts.
                Default is None.
            _check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
                Default is True.
            _spec_property_naming (bool): True if the variable names in the input data
                are serialized names, as specified in the OpenAPI document.
                False if the variable names in the input data
                are pythonic names, e.g. snake case (default)
            _content_type (str/None): force body content-type.
                Default is None and content-type will be predicted by allowed
                content-types and body.
            _host_index (int/None): specifies the index of the server
                that we want to use.
                Default is read from the configuration.
            async_req (bool): execute request asynchronously

        Returns:
            PodAppEntitlementList
                If the method is called asynchronously, returns the request
                thread.
        """
        kwargs['async_req'] = kwargs.get('async_req', False)
        kwargs['_return_http_data_only'] = kwargs.get('_return_http_data_only', True)
        kwargs['_preload_content'] = kwargs.get('_preload_content', True)
        kwargs['_request_timeout'] = kwargs.get('_request_timeout', None)
        kwargs['_check_input_type'] = kwargs.get('_check_input_type', True)
        kwargs['_check_return_type'] = kwargs.get('_check_return_type', True)
        kwargs['_spec_property_naming'] = kwargs.get('_spec_property_naming', False)
        kwargs['_content_type'] = kwargs.get('_content_type')
        kwargs['_host_index'] = kwargs.get('_host_index')
        kwargs['session_token'] = session_token
        return self.v1_admin_app_entitlement_list_get_endpoint.call_with_http_info(**kwargs)

    def v1_admin_app_entitlement_list_post(
        self,
        session_token,
        payload,
        **kwargs
    ):
        """Update the application entitlements for the company  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True

        >>> thread = pod_api.v1_admin_app_entitlement_list_post(session_token, payload, async_req=True)
        >>> result = thread.get()

        Args:
            session_token (str): Session authentication token.
            payload (PodAppEntitlementList):

        Keyword Args:
            _return_http_data_only (bool): response data without head status
                code and headers. Default is True.
            _preload_content (bool): if False, the urllib3.HTTPResponse object
                will be returned without reading/decoding response data.
                Default is True.
            _request_timeout (int/float/tuple): timeout setting for this request. If
                one number provided, it will be total request timeout. It can also
                be a pair (tuple) of (connection, read) timeouts.
                Default is None.
            _check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
                Default is True.
            _spec_property_naming (bool): True if the variable names in the input data
                are serialized names, as specified in the OpenAPI document.
                False if the variable names in the input data
                are pythonic names, e.g. snake case (default)
            _content_type (str/None): force body content-type.
                Default is None and content-type will be predicted by allowed
                content-types and body.
            _host_index (int/None): specifies the index of the server
                that we want to use.
                Default is read from the configuration.
            async_req (bool): execute request asynchronously

        Returns:
            PodAppEntitlementList
                If the method is called asynchronously, returns the request
                thread.
        """
        kwargs['async_req'] = kwargs.get('async_req', False)
        kwargs['_return_http_data_only'] = kwargs.get('_return_http_data_only', True)
        kwargs['_preload_content'] = kwargs.get('_preload_content', True)
        kwargs['_request_timeout'] = kwargs.get('_request_timeout', None)
        kwargs['_check_input_type'] = kwargs.get('_check_input_type', True)
        kwargs['_check_return_type'] = kwargs.get('_check_return_type', True)
        kwargs['_spec_property_naming'] = kwargs.get('_spec_property_naming', False)
        kwargs['_content_type'] = kwargs.get('_content_type')
        kwargs['_host_index'] = kwargs.get('_host_index')
        kwargs['session_token'] = session_token
        kwargs['payload'] = payload
        return self.v1_admin_app_entitlement_list_post_endpoint.call_with_http_info(**kwargs)

    def v1_admin_user_uid_app_entitlement_list_get(
        self,
        session_token,
        uid,
        **kwargs
    ):
        """Get the list of application entitlements for this user  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True

        >>> thread = pod_api.v1_admin_user_uid_app_entitlement_list_get(session_token, uid, async_req=True)
        >>> result = thread.get()

        Args:
            session_token (str): Session authentication token.
            uid (int): User ID as a decimal integer

        Keyword Args:
            _return_http_data_only (bool): response data without head status
                code and headers. Default is True.
            _preload_content (bool): if False, the urllib3.HTTPResponse object
                will be returned without reading/decoding response data.
                Default is True.
            _request_timeout (int/float/tuple): timeout setting for this request. If
                one number provided, it will be total request timeout. It can also
                be a pair (tuple) of (connection, read) timeouts.
                Default is None.
            _check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
                Default is True.
            _spec_property_naming (bool): True if the variable names in the input data
                are serialized names, as specified in the OpenAPI document.
                False if the variable names in the input data
                are pythonic names, e.g. snake case (default)
            _content_type (str/None): force body content-type.
                Default is None and content-type will be predicted by allowed
                content-types and body.
            _host_index (int/None): specifies the index of the server
                that we want to use.
                Default is read from the configuration.
            async_req (bool): execute request asynchronously

        Returns:
            UserAppEntitlementList
                If the method is called asynchronously, returns the request
                thread.
        """
        kwargs['async_req'] = kwargs.get('async_req', False)
        kwargs['_return_http_data_only'] = kwargs.get('_return_http_data_only', True)
        kwargs['_preload_content'] = kwargs.get('_preload_content', True)
        kwargs['_request_timeout'] = kwargs.get('_request_timeout', None)
        kwargs['_check_input_type'] = kwargs.get('_check_input_type', True)
        kwargs['_check_return_type'] = kwargs.get('_check_return_type', True)
        kwargs['_spec_property_naming'] = kwargs.get('_spec_property_naming', False)
        kwargs['_content_type'] = kwargs.get('_content_type')
        kwargs['_host_index'] = kwargs.get('_host_index')
        kwargs['session_token'] = session_token
        kwargs['uid'] = uid
        return self.v1_admin_user_uid_app_entitlement_list_get_endpoint.call_with_http_info(**kwargs)

    def v1_admin_user_uid_app_entitlement_list_patch(
        self,
        session_token,
        uid,
        payload,
        **kwargs
    ):
        """Update unique entitlement of an app for this user. Entitlement can be installation, visibility or product  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True

        >>> thread = pod_api.v1_admin_user_uid_app_entitlement_list_patch(session_token, uid, payload, async_req=True)
        >>> result = thread.get()

        Args:
            session_token (str): Session authentication token.
            uid (int): User ID as a decimal integer
            payload (UserAppEntitlementsPatchList):

        Keyword Args:
            _return_http_data_only (bool): response data without head status
                code and headers. Default is True.
            _preload_content (bool): if False, the urllib3.HTTPResponse object
                will be returned without reading/decoding response data.
                Default is True.
            _request_timeout (int/float/tuple): timeout setting for this request. If
                one number provided, it will be total request timeout. It can also
                be a pair (tuple) of (connection, read) timeouts.
                Default is None.
            _check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
                Default is True.
            _spec_property_naming (bool): True if the variable names in the input data
                are serialized names, as specified in the OpenAPI document.
                False if the variable names in the input data
                are pythonic names, e.g. snake case (default)
            _content_type (str/None): force body content-type.
                Default is None and content-type will be predicted by allowed
                content-types and body.
            _host_index (int/None): specifies the index of the server
                that we want to use.
                Default is read from the configuration.
            async_req (bool): execute request asynchronously

        Returns:
            UserAppEntitlementList
                If the method is called asynchronously, returns the request
                thread.
        """
        kwargs['async_req'] = kwargs.get('async_req', False)
        kwargs['_return_http_data_only'] = kwargs.get('_return_http_data_only', True)
        kwargs['_preload_content'] = kwargs.get('_preload_content', True)
        kwargs['_request_timeout'] = kwargs.get('_request_timeout', None)
        kwargs['_check_input_type'] = kwargs.get('_check_input_type', True)
        kwargs['_check_return_type'] = kwargs.get('_check_return_type', True)
        kwargs['_spec_property_naming'] = kwargs.get('_spec_property_naming', False)
        kwargs['_content_type'] = kwargs.get('_content_type')
        kwargs['_host_index'] = kwargs.get('_host_index')
        kwargs['session_token'] = session_token
        kwargs['uid'] = uid
        kwargs['payload'] = payload
        return self.v1_admin_user_uid_app_entitlement_list_patch_endpoint.call_with_http_info(**kwargs)

    def v1_admin_user_uid_app_entitlement_list_post(
        self,
        session_token,
        uid,
        payload,
        **kwargs
    ):
        """Update the application entitlements for this user  # noqa: E501

        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True

        >>> thread = pod_api.v1_admin_user_uid_app_entitlement_list_post(session_token, uid, payload, async_req=True)
        >>> result = thread.get()

        Args:
            session_token (str): Session authentication token.
            uid (int): User ID as a decimal integer
            payload (UserAppEntitlementList):

        Keyword Args:
            _return_http_data_only (bool): response data without head status
                code and headers. Default is True.
            _preload_content (bool): if False, the urllib3.HTTPResponse object
                will be returned without reading/decoding response data.
                Default is True.
            _request_timeout (int/float/tuple): timeout setting for this request. If
                one number provided, it will be total request timeout. It can also
                be a pair (tuple) of (connection, read) timeouts.
                Default is None.
            _check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
                Default is True.
            _spec_property_naming (bool): True if the variable names in the input data
                are serialized names, as specified in the OpenAPI document.
                False if the variable names in the input data
                are pythonic names, e.g. snake case (default)
            _content_type (str/None): force body content-type.
                Default is None and content-type will be predicted by allowed
                content-types and body.
            _host_index (int/None): specifies the index of the server
                that we want to use.
                Default is read from the configuration.
            async_req (bool): execute request asynchronously

        Returns:
            UserAppEntitlementList
                If the method is called asynchronously, returns the request
                thread.
        """
        kwargs['async_req'] = kwargs.get('async_req', False)
        kwargs['_return_http_data_only'] = kwargs.get('_return_http_data_only', True)
        kwargs['_preload_content'] = kwargs.get('_preload_content', True)
        kwargs['_request_timeout'] = kwargs.get('_request_timeout', None)
        kwargs['_check_input_type'] = kwargs.get('_check_input_type', True)
        kwargs['_check_return_type'] = kwargs.get('_check_return_type', True)
        kwargs['_spec_property_naming'] = kwargs.get('_spec_property_naming', False)
        kwargs['_content_type'] = kwargs.get('_content_type')
        kwargs['_host_index'] = kwargs.get('_host_index')
        kwargs['session_token'] = session_token
        kwargs['uid'] = uid
        kwargs['payload'] = payload
        return self.v1_admin_user_uid_app_entitlement_list_post_endpoint.call_with_http_info(**kwargs)
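Every generated wrapper method above applies the same option-defaulting idiom (`kwargs['k'] = kwargs.get('k', default)`) before delegating to `call_with_http_info`. The standalone sketch below illustrates that idiom; `apply_call_defaults` is a hypothetical helper written for this example, not part of the generated client:

```python
def apply_call_defaults(kwargs):
    # The per-call options and defaults each wrapper method fills in
    # (names and default values taken from the generated code above).
    defaults = {
        'async_req': False,
        '_return_http_data_only': True,
        '_preload_content': True,
        '_request_timeout': None,
        '_check_input_type': True,
        '_check_return_type': True,
        '_spec_property_naming': False,
        '_content_type': None,
        '_host_index': None,
    }
    for key, default in defaults.items():
        # Caller-supplied values win; missing keys fall back to the default.
        kwargs[key] = kwargs.get(key, default)
    return kwargs


opts = apply_call_defaults({'async_req': True})
print(opts['async_req'], opts['_return_http_data_only'])  # True True
```

Writing the options back into `kwargs` (rather than building a new dict) matters here because the endpoint's `call_with_http_info(**kwargs)` expects every option key to be present.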
# python3/lib/python3.6/site-packages/tensorflow/_api/v1/saved_model/__init__.py (TruongThuyLiem/keras2tensorflow, MIT)
# This file is MACHINE GENERATED! Do not edit.
# Generated by: tensorflow/python/tools/api/generator/create_python_api.py script.
"""Public API for tf.saved_model namespace.
"""
from __future__ import print_function as _print_function
from tensorflow._api.v1.saved_model import builder
from tensorflow._api.v1.saved_model import constants
from tensorflow._api.v1.saved_model import experimental
from tensorflow._api.v1.saved_model import loader
from tensorflow._api.v1.saved_model import main_op
from tensorflow._api.v1.saved_model import signature_constants
from tensorflow._api.v1.saved_model import signature_def_utils
from tensorflow._api.v1.saved_model import tag_constants
from tensorflow._api.v1.saved_model import utils
from tensorflow.lite.python.lite import _load as load_v2
from tensorflow.python.saved_model.builder import SavedModelBuilder as Builder
from tensorflow.python.saved_model.constants import ASSETS_DIRECTORY
from tensorflow.python.saved_model.constants import ASSETS_KEY
from tensorflow.python.saved_model.constants import LEGACY_INIT_OP_KEY
from tensorflow.python.saved_model.constants import MAIN_OP_KEY
from tensorflow.python.saved_model.constants import SAVED_MODEL_FILENAME_PB
from tensorflow.python.saved_model.constants import SAVED_MODEL_FILENAME_PBTXT
from tensorflow.python.saved_model.constants import SAVED_MODEL_SCHEMA_VERSION
from tensorflow.python.saved_model.constants import VARIABLES_DIRECTORY
from tensorflow.python.saved_model.constants import VARIABLES_FILENAME
from tensorflow.python.saved_model.loader import load
from tensorflow.python.saved_model.loader import maybe_saved_model_directory
from tensorflow.python.saved_model.loader import maybe_saved_model_directory as contains_saved_model
from tensorflow.python.saved_model.main_op import main_op_with_restore
from tensorflow.python.saved_model.save import save
from tensorflow.python.saved_model.saved_model import simple_save
from tensorflow.python.saved_model.signature_constants import CLASSIFY_INPUTS
from tensorflow.python.saved_model.signature_constants import CLASSIFY_METHOD_NAME
from tensorflow.python.saved_model.signature_constants import CLASSIFY_OUTPUT_CLASSES
from tensorflow.python.saved_model.signature_constants import CLASSIFY_OUTPUT_SCORES
from tensorflow.python.saved_model.signature_constants import DEFAULT_SERVING_SIGNATURE_DEF_KEY
from tensorflow.python.saved_model.signature_constants import PREDICT_INPUTS
from tensorflow.python.saved_model.signature_constants import PREDICT_METHOD_NAME
from tensorflow.python.saved_model.signature_constants import PREDICT_OUTPUTS
from tensorflow.python.saved_model.signature_constants import REGRESS_INPUTS
from tensorflow.python.saved_model.signature_constants import REGRESS_METHOD_NAME
from tensorflow.python.saved_model.signature_constants import REGRESS_OUTPUTS
from tensorflow.python.saved_model.signature_def_utils import build_signature_def
from tensorflow.python.saved_model.signature_def_utils import classification_signature_def
from tensorflow.python.saved_model.signature_def_utils import is_valid_signature
from tensorflow.python.saved_model.signature_def_utils import predict_signature_def
from tensorflow.python.saved_model.signature_def_utils import regression_signature_def
from tensorflow.python.saved_model.tag_constants import GPU
from tensorflow.python.saved_model.tag_constants import SERVING
from tensorflow.python.saved_model.tag_constants import TPU
from tensorflow.python.saved_model.tag_constants import TRAINING
from tensorflow.python.saved_model.utils import build_tensor_info
from tensorflow.python.saved_model.utils import get_tensor_from_tensor_info
del _print_function
import sys as _sys
from tensorflow.python.util import deprecation_wrapper as _deprecation_wrapper
if not isinstance(_sys.modules[__name__], _deprecation_wrapper.DeprecationWrapper):
    _sys.modules[__name__] = _deprecation_wrapper.DeprecationWrapper(
        _sys.modules[__name__], "saved_model")
# backend_rest/tracking/admin.py (ezrankayamba/twiga_distribution, MIT)
from django.contrib import admin
from . import models
admin.site.register(models.Record)
admin.site.register(models.Customer)
admin.site.register(models.Contact)
admin.site.register(models.Description)
| 25.375 | 39 | 0.827586 | 28 | 203 | 6 | 0.428571 | 0.214286 | 0.404762 | 0.547619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.064039 | 203 | 7 | 40 | 29 | 0.884211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
# function_20373822.py (Dludora/Study-19, MIT)
print('My student_id: 20373822')
# Doc/includes/test.py (Hadron/python, PSF-2.0)
"""Test module for the noddy examples
Noddy 1:
>>> import noddy
>>> n1 = noddy.Noddy()
>>> n2 = noddy.Noddy()
>>> del n1
>>> del n2
Noddy 2
>>> import noddy2
>>> n1 = noddy2.Noddy('jim', 'fulton', 42)
>>> n1.first
'jim'
>>> n1.last
'fulton'
>>> n1.number
42
>>> n1.name()
'jim fulton'
>>> n1.first = 'will'
>>> n1.name()
'will fulton'
>>> n1.last = 'tell'
>>> n1.name()
'will tell'
>>> del n1.first
>>> n1.name()
Traceback (most recent call last):
...
AttributeError: first
>>> n1.first
Traceback (most recent call last):
...
AttributeError: first
>>> n1.first = 'drew'
>>> n1.first
'drew'
>>> del n1.number
Traceback (most recent call last):
...
TypeError: can't delete numeric/char attribute
>>> n1.number=2
>>> n1.number
2
>>> n1.first = 42
>>> n1.name()
'42 tell'
>>> n2 = noddy2.Noddy()
>>> n2.name()
' '
>>> n2.first
''
>>> n2.last
''
>>> del n2.first
>>> n2.first
Traceback (most recent call last):
...
AttributeError: first
>>> n2.first
Traceback (most recent call last):
...
AttributeError: first
>>> n2.name()
Traceback (most recent call last):
File "<stdin>", line 1, in ?
AttributeError: first
>>> n2.number
0
>>> n3 = noddy2.Noddy('jim', 'fulton', 'waaa')
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: an integer is required
>>> del n1
>>> del n2
Noddy 3
>>> import noddy3
>>> n1 = noddy3.Noddy('jim', 'fulton', 42)
>>> n1 = noddy3.Noddy('jim', 'fulton', 42)
>>> n1.name()
'jim fulton'
>>> del n1.first
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: Cannot delete the first attribute
>>> n1.first = 42
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: The first attribute value must be a string
>>> n1.first = 'will'
>>> n1.name()
'will fulton'
>>> n2 = noddy3.Noddy()
>>> n2 = noddy3.Noddy()
>>> n2 = noddy3.Noddy()
>>> n3 = noddy3.Noddy('jim', 'fulton', 'waaa')
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: an integer is required
>>> del n1
>>> del n2
Noddy 4
>>> import noddy4
>>> n1 = noddy4.Noddy('jim', 'fulton', 42)
>>> n1.first
'jim'
>>> n1.last
'fulton'
>>> n1.number
42
>>> n1.name()
'jim fulton'
>>> n1.first = 'will'
>>> n1.name()
'will fulton'
>>> n1.last = 'tell'
>>> n1.name()
'will tell'
>>> del n1.first
>>> n1.name()
Traceback (most recent call last):
...
AttributeError: first
>>> n1.first
Traceback (most recent call last):
...
AttributeError: first
>>> n1.first = 'drew'
>>> n1.first
'drew'
>>> del n1.number
Traceback (most recent call last):
...
TypeError: can't delete numeric/char attribute
>>> n1.number = 2
>>> n1.number
2
>>> n1.first = 42
>>> n1.name()
'42 tell'
>>> n2 = noddy4.Noddy()
>>> n2 = noddy4.Noddy()
>>> n2 = noddy4.Noddy()
>>> n2 = noddy4.Noddy()
>>> n2.name()
' '
>>> n2.first
''
>>> n2.last
''
>>> del n2.first
>>> n2.first
Traceback (most recent call last):
...
AttributeError: first
>>> n2.first
Traceback (most recent call last):
...
AttributeError: first
>>> n2.name()
Traceback (most recent call last):
File "<stdin>", line 1, in ?
AttributeError: first
>>> n2.number
0
>>> n3 = noddy4.Noddy('jim', 'fulton', 'waaa')
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: an integer is required
Test cyclic gc(?)
>>> import gc
>>> gc.disable()
>>> x = []
>>> l = [x]
>>> n2.first = l
>>> n2.first
[[]]
>>> l.append(n2)
>>> del l
>>> del n1
>>> del n2
>>> sys.getrefcount(x)
3
>>> ignore = gc.collect()
>>> sys.getrefcount(x)
2
>>> gc.enable()
"""
import os
import sys
from distutils.util import get_platform
PLAT_SPEC = "%s-%s" % (get_platform(), sys.version[0:3])
src = os.path.join("build", "lib.%s" % PLAT_SPEC)
sys.path.append(src)
if __name__ == "__main__":
import doctest, __main__
doctest.testmod(__main__)
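The doctests above exercise a C extension type; as a rough pure-Python illustration (the class name `PyNoddy` is hypothetical, not part of the extension), the Noddy3-style behavior they check — string-only `first`, an undeletable attribute, and `name()` joining the parts — could be sketched with a property:

```python
# Hypothetical pure-Python analogue of the Noddy3 semantics exercised
# above: first must be a string and cannot be deleted, mirroring the
# checks the C-level getter/setter performs.
class PyNoddy:
    def __init__(self, first='', last='', number=0):
        self.first = first      # goes through the validating setter
        self.last = last
        self.number = number

    @property
    def first(self):
        return self._first

    @first.setter
    def first(self, value):
        if not isinstance(value, str):
            raise TypeError("The first attribute value must be a string")
        self._first = value

    @first.deleter
    def first(self):
        raise TypeError("Cannot delete the first attribute")

    def name(self):
        return '%s %s' % (self.first, self.last)

n = PyNoddy('jim', 'fulton', 42)
print(n.name())  # jim fulton
```

The real extension enforces the same rules in `Noddy_setfirst`/`Noddy_getfirst` C functions rather than Python descriptors.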
# tests/odeint_tests.py (morgatron/tfdiffeq, MIT license)
import unittest
import tensorflow as tf
import tfdiffeq
from tests import problems
if not tf.executing_eagerly():
tf.enable_v2_behavior()
error_tol = 1e-4
# torch.set_default_dtype(torch.float64)
TEST_DEVICE = "gpu:0" if tf.test.is_gpu_available() else "cpu"
def max_abs(tensor):
return tf.reduce_max(tf.abs(tensor))
def rel_error(true, estimate):
return max_abs((true - estimate) / true)
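These helpers reduce every solver check below to a single relative-error bound. The same criterion can be illustrated without TensorFlow on the scalar problem y' = y over [0, 1] (chosen here purely for illustration, it is not one of the `problems` fixtures):

```python
import math

# Forward Euler on y' = y with y(0) = 1; the exact value at t = 1 is e.
# Euler's global error is O(h), so a small enough step keeps the
# relative error under the 1e-4 tolerance used by the tests below.
def euler_exp(steps):
    h = 1.0 / steps
    y = 1.0
    for _ in range(steps):
        y += h * y
    return y

estimate = euler_exp(100_000)
true = math.e
rel = abs((true - estimate) / true)  # same quantity rel_error computes
print(rel < 1e-4)  # True
```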
class TestSolverError(unittest.TestCase):
def test_euler(self):
f, y0, t_points, sol = problems.construct_problem(TEST_DEVICE)
y = tfdiffeq.odeint(f, y0, t_points, method='euler')
self.assertLess(rel_error(sol, y), error_tol)
def test_midpoint(self):
f, y0, t_points, sol = problems.construct_problem(TEST_DEVICE)
y = tfdiffeq.odeint(f, y0, t_points, method='midpoint')
self.assertLess(rel_error(sol, y), error_tol)
def test_huen(self):
f, y0, t_points, sol = problems.construct_problem(TEST_DEVICE)
y = tfdiffeq.odeint(f, y0, t_points, method='huen')
self.assertLess(rel_error(sol, y), error_tol)
def test_bosh3(self):
for ode in problems.PROBLEMS.keys():
if ode == 'sine':
# Sine test never finishes.
continue
f, y0, t_points, sol = problems.construct_problem(TEST_DEVICE, ode=ode)
y = tfdiffeq.odeint(f, y0, t_points, method='bosh3')
with self.subTest(ode=ode):
self.assertLess(rel_error(sol, y), error_tol)
def test_adaptive_heun(self):
for ode in problems.PROBLEMS.keys():
if ode == 'sine':
# Sine test never finishes.
continue
f, y0, t_points, sol = problems.construct_problem(TEST_DEVICE, ode=ode)
y = tfdiffeq.odeint(f, y0, t_points, method='adaptive_heun')
with self.subTest(ode=ode):
self.assertLess(rel_error(sol, y), error_tol)
def test_dopri8(self):
for ode in problems.PROBLEMS.keys():
f, y0, t_points, sol = problems.construct_problem(TEST_DEVICE, ode=ode)
y = tfdiffeq.odeint(f, y0, t_points, method='dopri8', rtol=1e-12, atol=1e-14)
with self.subTest(ode=ode):
self.assertLess(rel_error(sol, y), error_tol)
def test_rk4(self):
f, y0, t_points, sol = problems.construct_problem(TEST_DEVICE)
y = tfdiffeq.odeint(f, y0, t_points, method='rk4')
self.assertLess(rel_error(sol, y), error_tol)
def test_explicit_adams(self):
f, y0, t_points, sol = problems.construct_problem(TEST_DEVICE)
y = tfdiffeq.odeint(f, y0, t_points, method='explicit_adams')
self.assertLess(rel_error(sol, y), error_tol)
def test_adams(self):
for ode in problems.PROBLEMS.keys():
f, y0, t_points, sol = problems.construct_problem(TEST_DEVICE, ode=ode)
y = tfdiffeq.odeint(f, y0, t_points, method='adams')
with self.subTest(ode=ode):
self.assertLess(rel_error(sol, y), error_tol)
def test_dopri5(self):
for ode in problems.PROBLEMS.keys():
f, y0, t_points, sol = problems.construct_problem(TEST_DEVICE, ode=ode)
y = tfdiffeq.odeint(f, y0, t_points, method='dopri5')
with self.subTest(ode=ode):
self.assertLess(rel_error(sol, y), error_tol)
def test_adjoint(self):
for ode in problems.PROBLEMS.keys():
f, y0, t_points, sol = problems.construct_problem(TEST_DEVICE, ode=ode, reverse=True)
y0 = tf.cast(y0, tf.float64)
t_points = tf.cast(t_points, tf.float64)
sol = tf.cast(sol, tf.float64)
y = tfdiffeq.odeint_adjoint(f, y0, t_points, method='dopri5')
with self.subTest(ode=ode):
self.assertLess(rel_error(sol, y), error_tol)
class TestSolverBackwardsInTimeError(unittest.TestCase):
def test_euler(self):
f, y0, t_points, sol = problems.construct_problem(TEST_DEVICE, reverse=True)
y = tfdiffeq.odeint(f, y0, t_points, method='euler')
self.assertLess(rel_error(sol, y), error_tol)
def test_midpoint(self):
f, y0, t_points, sol = problems.construct_problem(TEST_DEVICE, reverse=True)
y = tfdiffeq.odeint(f, y0, t_points, method='midpoint')
self.assertLess(rel_error(sol, y), error_tol)
def test_rk4(self):
f, y0, t_points, sol = problems.construct_problem(TEST_DEVICE, reverse=True)
y = tfdiffeq.odeint(f, y0, t_points, method='rk4')
self.assertLess(rel_error(sol, y), error_tol)
def test_explicit_adams(self):
f, y0, t_points, sol = problems.construct_problem(TEST_DEVICE, reverse=True)
y = tfdiffeq.odeint(f, y0, t_points, method='explicit_adams')
self.assertLess(rel_error(sol, y), error_tol)
def test_adams(self):
for ode in problems.PROBLEMS.keys():
f, y0, t_points, sol = problems.construct_problem(TEST_DEVICE, ode=ode, reverse=True)
y = tfdiffeq.odeint(f, y0, t_points, method='adams')
with self.subTest(ode=ode):
self.assertLess(rel_error(sol, y), error_tol)
def test_dopri5(self):
for ode in problems.PROBLEMS.keys():
f, y0, t_points, sol = problems.construct_problem(TEST_DEVICE, ode=ode, reverse=True)
y = tfdiffeq.odeint(f, y0, t_points, method='dopri5')
with self.subTest(ode=ode):
self.assertLess(rel_error(sol, y), error_tol)
def test_dopri8(self):
for ode in problems.PROBLEMS.keys():
f, y0, t_points, sol = problems.construct_problem(TEST_DEVICE, ode=ode, reverse=True)
y = tfdiffeq.odeint(f, y0, t_points, method='dopri8')
with self.subTest(ode=ode):
self.assertLess(rel_error(sol, y), error_tol)
def test_adjoint(self):
for ode in problems.PROBLEMS.keys():
f, y0, t_points, sol = problems.construct_problem(TEST_DEVICE, ode=ode, reverse=True)
y0 = tf.cast(y0, tf.float64)
t_points = tf.cast(t_points, tf.float64)
sol = tf.cast(sol, tf.float64)
y = tfdiffeq.odeint_adjoint(f, y0, t_points, method='dopri5')
with self.subTest(ode=ode):
self.assertLess(rel_error(sol, y), error_tol)
class TestNoIntegration(unittest.TestCase):
def test_midpoint(self):
f, y0, t_points, sol = problems.construct_problem(TEST_DEVICE, reverse=True)
y = tfdiffeq.odeint(f, y0, t_points[0:1], method='midpoint')
self.assertLess(max_abs(sol[0] - y), error_tol)
def test_rk4(self):
f, y0, t_points, sol = problems.construct_problem(TEST_DEVICE, reverse=True)
y = tfdiffeq.odeint(f, y0, t_points[0:1], method='rk4')
self.assertLess(max_abs(sol[0] - y), error_tol)
def test_explicit_adams(self):
f, y0, t_points, sol = problems.construct_problem(TEST_DEVICE, reverse=True)
y = tfdiffeq.odeint(f, y0, t_points[0:1], method='explicit_adams')
self.assertLess(max_abs(sol[0] - y), error_tol)
def test_adams(self):
f, y0, t_points, sol = problems.construct_problem(TEST_DEVICE, reverse=True)
y = tfdiffeq.odeint(f, y0, t_points[0:1], method='adams')
self.assertLess(max_abs(sol[0] - y), error_tol)
def test_dopri5(self):
f, y0, t_points, sol = problems.construct_problem(TEST_DEVICE, reverse=True)
y = tfdiffeq.odeint(f, y0, t_points[0:1], method='dopri5')
self.assertLess(max_abs(sol[0] - y), error_tol)
def test_dopri8(self):
f, y0, t_points, sol = problems.construct_problem(TEST_DEVICE, reverse=True)
y = tfdiffeq.odeint(f, y0, t_points[0:1], method='dopri8')
self.assertLess(max_abs(sol[0] - y), error_tol)
if __name__ == '__main__':
tf.enable_eager_execution()
unittest.main()
# backend/tests/baserow/contrib/database/api/views/test_view_views.py (ericderace/baserow, MIT license)
import pytest
from rest_framework.status import HTTP_200_OK, HTTP_400_BAD_REQUEST, HTTP_404_NOT_FOUND
from django.shortcuts import reverse
from baserow.contrib.database.views.models import ViewFilter, ViewSort, GridView
from baserow.contrib.database.views.registries import (
view_type_registry, view_filter_type_registry
)
@pytest.mark.django_db
def test_list_views(api_client, data_fixture):
user, token = data_fixture.create_user_and_token(
email='test@test.nl', password='password', first_name='Test1')
table_1 = data_fixture.create_database_table(user=user)
table_2 = data_fixture.create_database_table()
view_1 = data_fixture.create_grid_view(table=table_1, order=1)
view_2 = data_fixture.create_grid_view(table=table_1, order=3)
view_3 = data_fixture.create_grid_view(
table=table_1,
order=2,
filter_type='OR',
filters_disabled=True
)
data_fixture.create_grid_view(table=table_2, order=1)
response = api_client.get(
reverse('api:database:views:list', kwargs={'table_id': table_1.id}), **{
'HTTP_AUTHORIZATION': f'JWT {token}'
}
)
assert response.status_code == HTTP_200_OK
response_json = response.json()
assert len(response_json) == 3
assert response_json[0]['id'] == view_1.id
assert response_json[0]['type'] == 'grid'
assert response_json[0]['filter_type'] == 'AND'
assert response_json[0]['filters_disabled'] is False
assert response_json[1]['id'] == view_3.id
assert response_json[1]['type'] == 'grid'
assert response_json[1]['filter_type'] == 'OR'
assert response_json[1]['filters_disabled'] is True
assert response_json[2]['id'] == view_2.id
assert response_json[2]['type'] == 'grid'
assert response_json[2]['filter_type'] == 'AND'
assert response_json[2]['filters_disabled'] is False
response = api_client.get(
reverse('api:database:views:list', kwargs={'table_id': table_2.id}), **{
'HTTP_AUTHORIZATION': f'JWT {token}'
}
)
assert response.status_code == HTTP_400_BAD_REQUEST
assert response.json()['error'] == 'ERROR_USER_NOT_IN_GROUP'
response = api_client.get(
reverse('api:database:views:list', kwargs={'table_id': 999999}), **{
'HTTP_AUTHORIZATION': f'JWT {token}'
}
)
assert response.status_code == HTTP_404_NOT_FOUND
assert response.json()['error'] == 'ERROR_TABLE_DOES_NOT_EXIST'
@pytest.mark.django_db
def test_list_views_including_filters(api_client, data_fixture):
user, token = data_fixture.create_user_and_token()
table_1 = data_fixture.create_database_table(user=user)
table_2 = data_fixture.create_database_table()
field_1 = data_fixture.create_text_field(table=table_1)
field_2 = data_fixture.create_text_field(table=table_1)
field_3 = data_fixture.create_text_field(table=table_2)
view_1 = data_fixture.create_grid_view(table=table_1, order=1)
view_2 = data_fixture.create_grid_view(table=table_1, order=2)
view_3 = data_fixture.create_grid_view(table=table_2, order=1)
filter_1 = data_fixture.create_view_filter(view=view_1, field=field_1)
filter_2 = data_fixture.create_view_filter(view=view_1, field=field_2)
filter_3 = data_fixture.create_view_filter(view=view_2, field=field_1)
data_fixture.create_view_filter(view=view_3, field=field_3)
response = api_client.get(
'{}'.format(reverse(
'api:database:views:list',
kwargs={'table_id': table_1.id}
)),
HTTP_AUTHORIZATION=f'JWT {token}'
)
assert response.status_code == HTTP_200_OK
response_json = response.json()
assert len(response_json) == 2
assert 'filters' not in response_json[0]
assert 'filters' not in response_json[1]
response = api_client.get(
'{}?includes=filters'.format(reverse(
'api:database:views:list',
kwargs={'table_id': table_1.id}
)),
HTTP_AUTHORIZATION=f'JWT {token}'
)
assert response.status_code == HTTP_200_OK
response_json = response.json()
assert len(response_json[0]['filters']) == 2
assert response_json[0]['filters'][0]['id'] == filter_1.id
assert response_json[0]['filters'][0]['view'] == view_1.id
assert response_json[0]['filters'][0]['field'] == field_1.id
assert response_json[0]['filters'][0]['type'] == filter_1.type
assert response_json[0]['filters'][0]['value'] == filter_1.value
assert response_json[0]['filters'][1]['id'] == filter_2.id
assert len(response_json[1]['filters']) == 1
assert response_json[1]['filters'][0]['id'] == filter_3.id
@pytest.mark.django_db
def test_list_views_including_sortings(api_client, data_fixture):
user, token = data_fixture.create_user_and_token()
table_1 = data_fixture.create_database_table(user=user)
table_2 = data_fixture.create_database_table()
field_1 = data_fixture.create_text_field(table=table_1)
field_2 = data_fixture.create_text_field(table=table_1)
field_3 = data_fixture.create_text_field(table=table_2)
view_1 = data_fixture.create_grid_view(table=table_1, order=1)
view_2 = data_fixture.create_grid_view(table=table_1, order=2)
view_3 = data_fixture.create_grid_view(table=table_2, order=1)
sort_1 = data_fixture.create_view_sort(view=view_1, field=field_1)
sort_2 = data_fixture.create_view_sort(view=view_1, field=field_2)
sort_3 = data_fixture.create_view_sort(view=view_2, field=field_1)
data_fixture.create_view_sort(view=view_3, field=field_3)
response = api_client.get(
'{}'.format(reverse(
'api:database:views:list',
kwargs={'table_id': table_1.id}
)),
HTTP_AUTHORIZATION=f'JWT {token}'
)
assert response.status_code == HTTP_200_OK
response_json = response.json()
assert len(response_json) == 2
assert 'sortings' not in response_json[0]
assert 'sortings' not in response_json[1]
response = api_client.get(
'{}?includes=sortings'.format(reverse(
'api:database:views:list',
kwargs={'table_id': table_1.id}
)),
HTTP_AUTHORIZATION=f'JWT {token}'
)
assert response.status_code == HTTP_200_OK
response_json = response.json()
assert len(response_json[0]['sortings']) == 2
assert response_json[0]['sortings'][0]['id'] == sort_1.id
assert response_json[0]['sortings'][0]['view'] == view_1.id
assert response_json[0]['sortings'][0]['field'] == field_1.id
assert response_json[0]['sortings'][0]['order'] == sort_1.order
assert response_json[0]['sortings'][1]['id'] == sort_2.id
assert len(response_json[1]['sortings']) == 1
assert response_json[1]['sortings'][0]['id'] == sort_3.id
@pytest.mark.django_db
def test_create_view(api_client, data_fixture):
user, token = data_fixture.create_user_and_token()
table = data_fixture.create_database_table(user=user)
table_2 = data_fixture.create_database_table()
response = api_client.post(
reverse('api:database:views:list', kwargs={'table_id': table.id}),
{
'name': 'Test 1',
'type': 'NOT_EXISTING'
},
format='json',
HTTP_AUTHORIZATION=f'JWT {token}'
)
response_json = response.json()
assert response.status_code == HTTP_400_BAD_REQUEST
assert response_json['error'] == 'ERROR_REQUEST_BODY_VALIDATION'
assert response_json['detail']['type'][0]['code'] == 'invalid_choice'
response = api_client.post(
reverse('api:database:views:list', kwargs={'table_id': 99999}),
{'name': 'Test 1', 'type': 'grid'},
format='json',
HTTP_AUTHORIZATION=f'JWT {token}'
)
assert response.status_code == HTTP_404_NOT_FOUND
assert response.json()['error'] == 'ERROR_TABLE_DOES_NOT_EXIST'
response = api_client.post(
reverse('api:database:views:list', kwargs={'table_id': table_2.id}),
{'name': 'Test 1', 'type': 'grid'},
format='json',
HTTP_AUTHORIZATION=f'JWT {token}'
)
assert response.status_code == HTTP_400_BAD_REQUEST
assert response.json()['error'] == 'ERROR_USER_NOT_IN_GROUP'
response = api_client.post(
reverse('api:database:views:list', kwargs={'table_id': table.id}),
{
'name': 'Test 1',
'type': 'grid',
'filter_type': 'OR',
'filters_disabled': True
},
format='json',
HTTP_AUTHORIZATION=f'JWT {token}'
)
response_json = response.json()
assert response.status_code == HTTP_200_OK
assert response_json['type'] == 'grid'
assert response_json['filter_type'] == 'OR'
assert response_json['filters_disabled'] is True
grid = GridView.objects.filter()[0]
assert response_json['id'] == grid.id
assert response_json['name'] == grid.name
assert response_json['order'] == grid.order
assert response_json['filter_type'] == grid.filter_type
assert response_json['filters_disabled'] == grid.filters_disabled
assert 'filters' not in response_json
assert 'sortings' not in response_json
response = api_client.post(
'{}?includes=filters,sortings'.format(
reverse('api:database:views:list', kwargs={'table_id': table.id})
),
{
'name': 'Test 2',
'type': 'grid',
'filter_type': 'AND',
'filters_disabled': False
},
format='json',
HTTP_AUTHORIZATION=f'JWT {token}'
)
response_json = response.json()
assert response.status_code == HTTP_200_OK
assert response_json['name'] == 'Test 2'
assert response_json['type'] == 'grid'
assert response_json['filter_type'] == 'AND'
assert response_json['filters_disabled'] is False
assert response_json['filters'] == []
assert response_json['sortings'] == []
response = api_client.post(
'{}'.format(reverse('api:database:views:list', kwargs={'table_id': table.id})),
{
'name': 'Test 3',
'type': 'grid'
},
format='json',
HTTP_AUTHORIZATION=f'JWT {token}'
)
response_json = response.json()
assert response.status_code == HTTP_200_OK
assert response_json['name'] == 'Test 3'
assert response_json['type'] == 'grid'
assert response_json['filter_type'] == 'AND'
assert response_json['filters_disabled'] is False
assert 'filters' not in response_json
assert 'sortings' not in response_json
@pytest.mark.django_db
def test_get_view(api_client, data_fixture):
user, token = data_fixture.create_user_and_token()
user_2, token_2 = data_fixture.create_user_and_token()
table = data_fixture.create_database_table(user=user)
table_2 = data_fixture.create_database_table(user=user_2)
view = data_fixture.create_grid_view(table=table)
view_2 = data_fixture.create_grid_view(table=table_2)
filter = data_fixture.create_view_filter(view=view)
url = reverse('api:database:views:item', kwargs={'view_id': view_2.id})
response = api_client.get(
url,
format='json',
HTTP_AUTHORIZATION=f'JWT {token}'
)
assert response.status_code == HTTP_400_BAD_REQUEST
assert response.json()['error'] == 'ERROR_USER_NOT_IN_GROUP'
url = reverse('api:database:views:item', kwargs={'view_id': 99999})
response = api_client.get(
url,
format='json',
HTTP_AUTHORIZATION=f'JWT {token}'
)
assert response.status_code == HTTP_404_NOT_FOUND
url = reverse('api:database:views:item', kwargs={'view_id': view.id})
response = api_client.get(
url,
format='json',
HTTP_AUTHORIZATION=f'JWT {token}'
)
response_json = response.json()
assert response.status_code == HTTP_200_OK
assert response_json['id'] == view.id
assert response_json['table_id'] == view.table_id
assert response_json['type'] == 'grid'
assert response_json['table']['id'] == table.id
assert response_json['filter_type'] == 'AND'
assert not response_json['filters_disabled']
assert 'filters' not in response_json
assert 'sortings' not in response_json
url = reverse('api:database:views:item', kwargs={'view_id': view.id})
response = api_client.get(
'{}?includes=filters,sortings'.format(url),
format='json',
HTTP_AUTHORIZATION=f'JWT {token}'
)
response_json = response.json()
assert response.status_code == HTTP_200_OK
assert response_json['id'] == view.id
assert len(response_json['filters']) == 1
assert response_json['filters'][0]['id'] == filter.id
assert response_json['filters'][0]['view'] == filter.view_id
assert response_json['filters'][0]['field'] == filter.field_id
assert response_json['filters'][0]['type'] == filter.type
assert response_json['filters'][0]['value'] == filter.value
assert response_json['sortings'] == []
@pytest.mark.django_db
def test_update_view(api_client, data_fixture):
user, token = data_fixture.create_user_and_token()
user_2, token_2 = data_fixture.create_user_and_token()
table = data_fixture.create_database_table(user=user)
table_2 = data_fixture.create_database_table(user=user_2)
view = data_fixture.create_grid_view(table=table)
view_2 = data_fixture.create_grid_view(table=table_2)
url = reverse('api:database:views:item', kwargs={'view_id': view_2.id})
response = api_client.patch(
url,
{'name': 'Test 1'},
format='json',
HTTP_AUTHORIZATION=f'JWT {token}'
)
response_json = response.json()
assert response.status_code == HTTP_400_BAD_REQUEST
assert response_json['error'] == 'ERROR_USER_NOT_IN_GROUP'
url = reverse('api:database:views:item', kwargs={'view_id': 999999})
response = api_client.patch(
url,
{'name': 'Test 1'},
format='json',
HTTP_AUTHORIZATION=f'JWT {token}'
)
assert response.status_code == HTTP_404_NOT_FOUND
assert response.json()['error'] == 'ERROR_VIEW_DOES_NOT_EXIST'
url = reverse('api:database:views:item', kwargs={'view_id': view.id})
response = api_client.patch(
url,
{'UNKNOWN_FIELD': 'Test 1'},
format='json',
HTTP_AUTHORIZATION=f'JWT {token}'
)
assert response.status_code == HTTP_200_OK
url = reverse('api:database:views:item', kwargs={'view_id': view.id})
response = api_client.patch(
url,
{'name': 'Test 1'},
format='json',
HTTP_AUTHORIZATION=f'JWT {token}'
)
response_json = response.json()
assert response.status_code == HTTP_200_OK
assert response_json['id'] == view.id
assert response_json['name'] == 'Test 1'
assert response_json['filter_type'] == 'AND'
assert not response_json['filters_disabled']
view.refresh_from_db()
assert view.name == 'Test 1'
assert view.filter_type == 'AND'
assert not view.filters_disabled
url = reverse('api:database:views:item', kwargs={'view_id': view.id})
response = api_client.patch(
url,
{
'filter_type': 'OR',
'filters_disabled': True,
},
format='json',
HTTP_AUTHORIZATION=f'JWT {token}'
)
response_json = response.json()
assert response.status_code == HTTP_200_OK
assert response_json['id'] == view.id
assert response_json['filter_type'] == 'OR'
assert response_json['filters_disabled']
assert 'filters' not in response_json
assert 'sortings' not in response_json
view.refresh_from_db()
assert view.filter_type == 'OR'
assert view.filters_disabled
filter_1 = data_fixture.create_view_filter(view=view)
url = reverse('api:database:views:item', kwargs={'view_id': view.id})
response = api_client.patch(
'{}?includes=filters,sortings'.format(url),
{'filter_type': 'AND'},
format='json',
HTTP_AUTHORIZATION=f'JWT {token}'
)
response_json = response.json()
assert response.status_code == HTTP_200_OK
assert response_json['id'] == view.id
assert response_json['filter_type'] == 'AND'
assert response_json['filters_disabled'] is True
assert response_json['filters'][0]['id'] == filter_1.id
assert response_json['sortings'] == []
@pytest.mark.django_db
def test_delete_view(api_client, data_fixture):
user, token = data_fixture.create_user_and_token()
user_2, token_2 = data_fixture.create_user_and_token()
table = data_fixture.create_database_table(user=user)
table_2 = data_fixture.create_database_table(user=user_2)
view = data_fixture.create_grid_view(table=table)
view_2 = data_fixture.create_grid_view(table=table_2)
url = reverse('api:database:views:item', kwargs={'view_id': view_2.id})
response = api_client.delete(url, HTTP_AUTHORIZATION=f'JWT {token}')
response_json = response.json()
assert response.status_code == HTTP_400_BAD_REQUEST
assert response_json['error'] == 'ERROR_USER_NOT_IN_GROUP'
url = reverse('api:database:views:item', kwargs={'view_id': 99999})
response = api_client.delete(url, HTTP_AUTHORIZATION=f'JWT {token}')
assert response.status_code == HTTP_404_NOT_FOUND
assert response.json()['error'] == 'ERROR_VIEW_DOES_NOT_EXIST'
url = reverse('api:database:views:item', kwargs={'view_id': view.id})
response = api_client.delete(url, HTTP_AUTHORIZATION=f'JWT {token}')
assert response.status_code == 204
assert GridView.objects.all().count() == 1
@pytest.mark.django_db
def test_list_view_filters(api_client, data_fixture):
user, token = data_fixture.create_user_and_token()
table_1 = data_fixture.create_database_table(user=user)
table_2 = data_fixture.create_database_table()
field_1 = data_fixture.create_text_field(table=table_1)
field_2 = data_fixture.create_text_field(table=table_1)
field_3 = data_fixture.create_text_field(table=table_2)
view_1 = data_fixture.create_grid_view(table=table_1, order=1)
view_2 = data_fixture.create_grid_view(table=table_1, order=2)
view_3 = data_fixture.create_grid_view(table=table_2, order=1)
filter_1 = data_fixture.create_view_filter(view=view_1, field=field_1)
filter_2 = data_fixture.create_view_filter(view=view_1, field=field_2)
data_fixture.create_view_filter(view=view_2, field=field_1)
data_fixture.create_view_filter(view=view_3, field=field_3)
response = api_client.get(
reverse(
'api:database:views:list_filters',
kwargs={'view_id': view_3.id}
),
HTTP_AUTHORIZATION=f'JWT {token}'
)
assert response.status_code == HTTP_400_BAD_REQUEST
assert response.json()['error'] == 'ERROR_USER_NOT_IN_GROUP'
response = api_client.get(
reverse(
'api:database:views:list_filters',
kwargs={'view_id': 999999}
),
HTTP_AUTHORIZATION=f'JWT {token}'
)
assert response.status_code == HTTP_404_NOT_FOUND
assert response.json()['error'] == 'ERROR_VIEW_DOES_NOT_EXIST'
response = api_client.get(
reverse(
'api:database:views:list_filters',
kwargs={'view_id': view_1.id}
),
HTTP_AUTHORIZATION=f'JWT {token}'
)
response_json = response.json()
assert len(response_json) == 2
assert response_json[0]['id'] == filter_1.id
assert response_json[0]['view'] == view_1.id
assert response_json[0]['field'] == field_1.id
assert response_json[0]['type'] == filter_1.type
assert response_json[0]['value'] == filter_1.value
assert response_json[1]['id'] == filter_2.id
@pytest.mark.django_db
def test_create_view_filter(api_client, data_fixture):
user, token = data_fixture.create_user_and_token()
table_1 = data_fixture.create_database_table(user=user)
table_2 = data_fixture.create_database_table()
field_1 = data_fixture.create_text_field(table=table_1)
field_2 = data_fixture.create_text_field(table=table_2)
view_1 = data_fixture.create_grid_view(table=table_1)
view_2 = data_fixture.create_grid_view(table=table_2)
response = api_client.post(
reverse('api:database:views:list_filters', kwargs={'view_id': view_2.id}),
{
'field': field_2.id,
'type': 'equal',
'value': 'test'
},
format='json',
HTTP_AUTHORIZATION=f'JWT {token}'
)
assert response.status_code == HTTP_400_BAD_REQUEST
assert response.json()['error'] == 'ERROR_USER_NOT_IN_GROUP'
response = api_client.post(
reverse('api:database:views:list_filters', kwargs={'view_id': 99999}),
{
'field': field_1.id,
'type': 'equal',
'value': 'test'
},
format='json',
HTTP_AUTHORIZATION=f'JWT {token}'
)
assert response.status_code == HTTP_404_NOT_FOUND
assert response.json()['error'] == 'ERROR_VIEW_DOES_NOT_EXIST'
response = api_client.post(
reverse('api:database:views:list_filters', kwargs={'view_id': view_1.id}),
{
'field': 9999999,
'type': 'NOT_EXISTING',
'not_value': 'test'
},
format='json',
HTTP_AUTHORIZATION=f'JWT {token}'
)
response_json = response.json()
assert response.status_code == HTTP_400_BAD_REQUEST
assert response_json['error'] == 'ERROR_REQUEST_BODY_VALIDATION'
assert response_json['detail']['field'][0]['code'] == 'does_not_exist'
assert response_json['detail']['type'][0]['code'] == 'invalid_choice'
response = api_client.post(
reverse('api:database:views:list_filters', kwargs={'view_id': view_1.id}),
{
'field': field_2.id,
'type': 'equal',
'value': 'test'
},
format='json',
HTTP_AUTHORIZATION=f'JWT {token}'
)
response_json = response.json()
assert response.status_code == HTTP_400_BAD_REQUEST
assert response_json['error'] == 'ERROR_FIELD_NOT_IN_TABLE'
grid_view_type = view_type_registry.get('grid')
grid_view_type.can_filter = False
response = api_client.post(
reverse('api:database:views:list_filters', kwargs={'view_id': view_1.id}),
{
'field': field_1.id,
'type': 'equal',
'value': 'test'
},
format='json',
HTTP_AUTHORIZATION=f'JWT {token}'
)
response_json = response.json()
assert response.status_code == HTTP_400_BAD_REQUEST
assert response_json['error'] == 'ERROR_VIEW_FILTER_NOT_SUPPORTED'
grid_view_type.can_filter = True
equal_filter_type = view_filter_type_registry.get('equal')
allowed = equal_filter_type.compatible_field_types
equal_filter_type.compatible_field_types = []
response = api_client.post(
reverse('api:database:views:list_filters', kwargs={'view_id': view_1.id}),
{
'field': field_1.id,
'type': 'equal',
'value': 'test'
},
format='json',
HTTP_AUTHORIZATION=f'JWT {token}'
)
response_json = response.json()
assert response.status_code == HTTP_400_BAD_REQUEST
assert response_json['error'] == 'ERROR_VIEW_FILTER_TYPE_NOT_ALLOWED_FOR_FIELD'
equal_filter_type.compatible_field_types = allowed
response = api_client.post(
reverse('api:database:views:list_filters', kwargs={'view_id': view_1.id}),
{
'field': field_1.id,
'type': 'equal',
'value': 'test'
},
format='json',
HTTP_AUTHORIZATION=f'JWT {token}'
)
response_json = response.json()
assert response.status_code == HTTP_200_OK
assert ViewFilter.objects.all().count() == 1
first = ViewFilter.objects.all().first()
assert response_json['id'] == first.id
assert response_json['view'] == view_1.id
assert response_json['field'] == field_1.id
assert response_json['type'] == 'equal'
assert response_json['value'] == 'test'
response = api_client.post(
reverse('api:database:views:list_filters', kwargs={'view_id': view_1.id}),
{
'field': field_1.id,
'type': 'equal',
'value': ''
},
format='json',
HTTP_AUTHORIZATION=f'JWT {token}'
)
response_json = response.json()
assert response.status_code == HTTP_200_OK
assert response_json['value'] == ''
response = api_client.post(
reverse('api:database:views:list_filters', kwargs={'view_id': view_1.id}),
{
'field': field_1.id,
'type': 'equal'
},
format='json',
HTTP_AUTHORIZATION=f'JWT {token}'
)
response_json = response.json()
assert response.status_code == HTTP_200_OK
assert response_json['value'] == ''
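The endpoints above let a view toggle `filter_type` between `'AND'` and `'OR'` and disable filters entirely. Conceptually (this helper is a hypothetical sketch; Baserow's real implementation builds the condition into a database query), combining the per-filter results when matching a row looks like:

```python
# Hypothetical sketch of how a view's filter_type could combine
# per-filter results when matching a row. Each filter is modeled as a
# predicate over a row dict; filters_disabled means every row matches.
def row_matches(row, filters, filter_type='AND', filters_disabled=False):
    if filters_disabled or not filters:
        return True
    results = [f(row) for f in filters]
    return all(results) if filter_type == 'AND' else any(results)

filters = [
    lambda row: row['first'] == 'jim',    # like an 'equal' filter on a text field
    lambda row: row['number'] == 42,
]
row = {'first': 'jim', 'number': 7}
print(row_matches(row, filters, 'AND'))  # False
print(row_matches(row, filters, 'OR'))   # True
```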

@pytest.mark.django_db
def test_get_view_filter(api_client, data_fixture):
    user, token = data_fixture.create_user_and_token()
    filter_1 = data_fixture.create_view_filter(user=user, value='test')
    filter_2 = data_fixture.create_view_filter()

    # A filter that belongs to another user's group may not be fetched.
    response = api_client.get(
        reverse('api:database:views:filter_item', kwargs={'view_filter_id': filter_2.id}),
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    assert response.status_code == HTTP_400_BAD_REQUEST
    assert response.json()['error'] == 'ERROR_USER_NOT_IN_GROUP'

    response = api_client.get(
        reverse('api:database:views:filter_item', kwargs={'view_filter_id': 99999}),
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    assert response.status_code == HTTP_404_NOT_FOUND
    assert response.json()['error'] == 'ERROR_VIEW_FILTER_DOES_NOT_EXIST'

    response = api_client.get(
        reverse('api:database:views:filter_item', kwargs={'view_filter_id': filter_1.id}),
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    response_json = response.json()
    assert response.status_code == HTTP_200_OK
    assert ViewFilter.objects.all().count() == 2
    first = ViewFilter.objects.get(pk=filter_1.id)
    assert response_json['id'] == first.id
    assert response_json['view'] == first.view_id
    assert response_json['field'] == first.field_id
    assert response_json['type'] == 'equal'
    assert response_json['value'] == 'test'

@pytest.mark.django_db
def test_update_view_filter(api_client, data_fixture):
    user, token = data_fixture.create_user_and_token()
    filter_1 = data_fixture.create_view_filter(user=user, value='test')
    filter_2 = data_fixture.create_view_filter()
    field_1 = data_fixture.create_text_field(table=filter_1.view.table)
    field_2 = data_fixture.create_text_field()

    response = api_client.patch(
        reverse('api:database:views:filter_item', kwargs={'view_filter_id': filter_2.id}),
        {'value': 'test'},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    assert response.status_code == HTTP_400_BAD_REQUEST
    assert response.json()['error'] == 'ERROR_USER_NOT_IN_GROUP'

    response = api_client.patch(
        reverse('api:database:views:filter_item', kwargs={'view_filter_id': 9999}),
        {'value': 'test'},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    assert response.status_code == HTTP_404_NOT_FOUND
    assert response.json()['error'] == 'ERROR_VIEW_FILTER_DOES_NOT_EXIST'

    response = api_client.patch(
        reverse('api:database:views:filter_item', kwargs={'view_filter_id': filter_1.id}),
        {'field': 9999999, 'type': 'NOT_EXISTING'},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    response_json = response.json()
    assert response.status_code == HTTP_400_BAD_REQUEST
    assert response_json['error'] == 'ERROR_REQUEST_BODY_VALIDATION'
    assert response_json['detail']['field'][0]['code'] == 'does_not_exist'
    assert response_json['detail']['type'][0]['code'] == 'invalid_choice'

    response = api_client.patch(
        reverse('api:database:views:filter_item', kwargs={'view_filter_id': filter_1.id}),
        {'field': field_2.id},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    response_json = response.json()
    assert response.status_code == HTTP_400_BAD_REQUEST
    assert response_json['error'] == 'ERROR_FIELD_NOT_IN_TABLE'

    not_equal_filter_type = view_filter_type_registry.get('not_equal')
    allowed = not_equal_filter_type.compatible_field_types
    not_equal_filter_type.compatible_field_types = []
    grid_view_type = view_type_registry.get('grid')
    grid_view_type.can_filter = False
    response = api_client.patch(
        reverse('api:database:views:filter_item', kwargs={'view_filter_id': filter_1.id}),
        {'type': 'not_equal'},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    response_json = response.json()
    assert response.status_code == HTTP_400_BAD_REQUEST
    assert response_json['error'] == 'ERROR_VIEW_FILTER_TYPE_NOT_ALLOWED_FOR_FIELD'
    not_equal_filter_type.compatible_field_types = allowed

    response = api_client.patch(
        reverse('api:database:views:filter_item', kwargs={'view_filter_id': filter_1.id}),
        {'field': field_1.id, 'type': 'not_equal', 'value': 'test 2'},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    response_json = response.json()
    assert response.status_code == HTTP_200_OK
    assert ViewFilter.objects.all().count() == 2
    first = ViewFilter.objects.get(pk=filter_1.id)
    assert first.field_id == field_1.id
    assert first.type == 'not_equal'
    assert first.value == 'test 2'
    assert response_json['id'] == first.id
    assert response_json['view'] == first.view_id
    assert response_json['field'] == field_1.id
    assert response_json['type'] == 'not_equal'
    assert response_json['value'] == 'test 2'

    response = api_client.patch(
        reverse('api:database:views:filter_item', kwargs={'view_filter_id': filter_1.id}),
        {'type': 'equal'},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    response_json = response.json()
    assert response.status_code == HTTP_200_OK
    first = ViewFilter.objects.get(pk=filter_1.id)
    assert first.field_id == field_1.id
    assert first.type == 'equal'
    assert first.value == 'test 2'
    assert response_json['id'] == first.id
    assert response_json['field'] == field_1.id
    assert response_json['type'] == 'equal'
    assert response_json['value'] == 'test 2'

    response = api_client.patch(
        reverse('api:database:views:filter_item', kwargs={'view_filter_id': filter_1.id}),
        {'value': 'test 3'},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    response_json = response.json()
    assert response.status_code == HTTP_200_OK
    first = ViewFilter.objects.get(pk=filter_1.id)
    assert first.field_id == field_1.id
    assert first.type == 'equal'
    assert first.value == 'test 3'
    assert response_json['id'] == first.id
    assert response_json['view'] == first.view_id
    assert response_json['field'] == field_1.id
    assert response_json['type'] == 'equal'
    assert response_json['value'] == 'test 3'

    response = api_client.patch(
        reverse('api:database:views:filter_item', kwargs={'view_filter_id': filter_1.id}),
        {'value': ''},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    response_json = response.json()
    assert response.status_code == HTTP_200_OK
    first = ViewFilter.objects.get(pk=filter_1.id)
    assert first.value == ''
    assert response_json['value'] == ''

@pytest.mark.django_db
def test_delete_view_filter(api_client, data_fixture):
    user, token = data_fixture.create_user_and_token()
    filter_1 = data_fixture.create_view_filter(user=user, value='test')
    filter_2 = data_fixture.create_view_filter()

    response = api_client.delete(
        reverse('api:database:views:filter_item', kwargs={'view_filter_id': filter_2.id}),
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    assert response.status_code == HTTP_400_BAD_REQUEST
    assert response.json()['error'] == 'ERROR_USER_NOT_IN_GROUP'

    response = api_client.delete(
        reverse('api:database:views:filter_item', kwargs={'view_filter_id': 9999}),
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    assert response.status_code == HTTP_404_NOT_FOUND
    assert response.json()['error'] == 'ERROR_VIEW_FILTER_DOES_NOT_EXIST'

    response = api_client.delete(
        reverse('api:database:views:filter_item', kwargs={'view_filter_id': filter_1.id}),
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    assert response.status_code == 204
    assert ViewFilter.objects.all().count() == 1

@pytest.mark.django_db
def test_list_view_sortings(api_client, data_fixture):
    user, token = data_fixture.create_user_and_token()
    table_1 = data_fixture.create_database_table(user=user)
    table_2 = data_fixture.create_database_table()
    field_1 = data_fixture.create_text_field(table=table_1)
    field_2 = data_fixture.create_text_field(table=table_1)
    field_3 = data_fixture.create_text_field(table=table_2)
    view_1 = data_fixture.create_grid_view(table=table_1, order=1)
    data_fixture.create_grid_view(table=table_1, order=2)
    view_3 = data_fixture.create_grid_view(table=table_2, order=1)
    sort_1 = data_fixture.create_view_sort(view=view_1, field=field_1)
    sort_2 = data_fixture.create_view_sort(view=view_1, field=field_2)
    data_fixture.create_view_sort(view=view_3, field=field_3)

    response = api_client.get(
        reverse('api:database:views:list_sortings', kwargs={'view_id': view_3.id}),
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    assert response.status_code == HTTP_400_BAD_REQUEST
    assert response.json()['error'] == 'ERROR_USER_NOT_IN_GROUP'

    response = api_client.get(
        reverse('api:database:views:list_sortings', kwargs={'view_id': 999999}),
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    assert response.status_code == HTTP_404_NOT_FOUND
    assert response.json()['error'] == 'ERROR_VIEW_DOES_NOT_EXIST'

    response = api_client.get(
        reverse('api:database:views:list_sortings', kwargs={'view_id': view_1.id}),
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    response_json = response.json()
    assert len(response_json) == 2
    assert response_json[0]['id'] == sort_1.id
    assert response_json[0]['view'] == view_1.id
    assert response_json[0]['field'] == field_1.id
    assert response_json[0]['order'] == sort_1.order
    assert response_json[1]['id'] == sort_2.id

@pytest.mark.django_db
def test_create_view_sort(api_client, data_fixture):
    user, token = data_fixture.create_user_and_token()
    table_1 = data_fixture.create_database_table(user=user)
    table_2 = data_fixture.create_database_table()
    field_1 = data_fixture.create_text_field(table=table_1)
    field_2 = data_fixture.create_text_field(table=table_2)
    field_3 = data_fixture.create_text_field(table=table_1)
    field_4 = data_fixture.create_text_field(table=table_1)
    link_row_field = data_fixture.create_link_row_field(table=table_1)
    view_1 = data_fixture.create_grid_view(table=table_1)
    view_2 = data_fixture.create_grid_view(table=table_2)

    response = api_client.post(
        reverse('api:database:views:list_sortings', kwargs={'view_id': view_2.id}),
        {'field': field_2.id, 'order': 'ASC'},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    assert response.status_code == HTTP_400_BAD_REQUEST
    assert response.json()['error'] == 'ERROR_USER_NOT_IN_GROUP'

    response = api_client.post(
        reverse('api:database:views:list_sortings', kwargs={'view_id': 99999}),
        {'field': field_1.id, 'order': 'ASC', 'value': 'test'},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    assert response.status_code == HTTP_404_NOT_FOUND
    assert response.json()['error'] == 'ERROR_VIEW_DOES_NOT_EXIST'

    response = api_client.post(
        reverse('api:database:views:list_sortings', kwargs={'view_id': view_1.id}),
        {'field': 9999999, 'order': 'NOT_EXISTING'},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    response_json = response.json()
    assert response.status_code == HTTP_400_BAD_REQUEST
    assert response_json['error'] == 'ERROR_REQUEST_BODY_VALIDATION'
    assert response_json['detail']['field'][0]['code'] == 'does_not_exist'
    assert response_json['detail']['order'][0]['code'] == 'invalid_choice'

    response = api_client.post(
        reverse('api:database:views:list_sortings', kwargs={'view_id': view_1.id}),
        {'field': field_2.id, 'order': 'ASC'},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    response_json = response.json()
    assert response.status_code == HTTP_400_BAD_REQUEST
    assert response_json['error'] == 'ERROR_FIELD_NOT_IN_TABLE'

    # Temporarily disable sorting on the grid view type and restore it after.
    grid_view_type = view_type_registry.get('grid')
    grid_view_type.can_sort = False
    response = api_client.post(
        reverse('api:database:views:list_sortings', kwargs={'view_id': view_1.id}),
        {'field': field_1.id, 'order': 'ASC'},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    response_json = response.json()
    assert response.status_code == HTTP_400_BAD_REQUEST
    assert response_json['error'] == 'ERROR_VIEW_SORT_NOT_SUPPORTED'
    grid_view_type.can_sort = True

    response = api_client.post(
        reverse('api:database:views:list_sortings', kwargs={'view_id': view_1.id}),
        {'field': link_row_field.id, 'order': 'ASC'},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    response_json = response.json()
    assert response.status_code == HTTP_400_BAD_REQUEST
    assert response_json['error'] == 'ERROR_VIEW_SORT_FIELD_NOT_SUPPORTED'

    response = api_client.post(
        reverse('api:database:views:list_sortings', kwargs={'view_id': view_1.id}),
        {'field': field_1.id, 'order': 'ASC'},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    response_json = response.json()
    assert response.status_code == HTTP_200_OK
    assert ViewSort.objects.all().count() == 1
    first = ViewSort.objects.all().first()
    assert response_json['id'] == first.id
    assert response_json['view'] == view_1.id
    assert response_json['field'] == field_1.id
    assert response_json['order'] == 'ASC'

    response = api_client.post(
        reverse('api:database:views:list_sortings', kwargs={'view_id': view_1.id}),
        {'field': field_1.id, 'order': 'ASC'},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    response_json = response.json()
    assert response.status_code == HTTP_400_BAD_REQUEST
    assert response_json['error'] == 'ERROR_VIEW_SORT_FIELD_ALREADY_EXISTS'

    response = api_client.post(
        reverse('api:database:views:list_sortings', kwargs={'view_id': view_1.id}),
        {'field': field_3.id, 'order': 'DESC'},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    response_json = response.json()
    assert response.status_code == HTTP_200_OK
    assert response_json['order'] == 'DESC'

    # Omitting 'order' should default to 'ASC'.
    response = api_client.post(
        reverse('api:database:views:list_sortings', kwargs={'view_id': view_1.id}),
        {'field': field_4.id},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    response_json = response.json()
    assert response.status_code == HTTP_200_OK
    assert response_json['order'] == 'ASC'
    assert ViewSort.objects.all().count() == 3

@pytest.mark.django_db
def test_get_view_sort(api_client, data_fixture):
    user, token = data_fixture.create_user_and_token()
    sort_1 = data_fixture.create_view_sort(user=user, order='DESC')
    sort_2 = data_fixture.create_view_sort()

    response = api_client.get(
        reverse('api:database:views:sort_item', kwargs={'view_sort_id': sort_2.id}),
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    assert response.status_code == HTTP_400_BAD_REQUEST
    assert response.json()['error'] == 'ERROR_USER_NOT_IN_GROUP'

    response = api_client.get(
        reverse('api:database:views:sort_item', kwargs={'view_sort_id': 99999}),
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    assert response.status_code == HTTP_404_NOT_FOUND
    assert response.json()['error'] == 'ERROR_VIEW_SORT_DOES_NOT_EXIST'

    response = api_client.get(
        reverse('api:database:views:sort_item', kwargs={'view_sort_id': sort_1.id}),
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    response_json = response.json()
    assert response.status_code == HTTP_200_OK
    assert ViewSort.objects.all().count() == 2
    first = ViewSort.objects.get(pk=sort_1.id)
    assert response_json['id'] == first.id
    assert response_json['view'] == first.view_id
    assert response_json['field'] == first.field_id
    assert response_json['order'] == 'DESC'

@pytest.mark.django_db
def test_update_view_sort(api_client, data_fixture):
    user, token = data_fixture.create_user_and_token()
    sort_1 = data_fixture.create_view_sort(user=user, order='DESC')
    sort_2 = data_fixture.create_view_sort()
    sort_3 = data_fixture.create_view_sort(view=sort_1.view, order='ASC')
    field_1 = data_fixture.create_text_field(table=sort_1.view.table)
    link_row_field = data_fixture.create_link_row_field(table=sort_1.view.table)
    field_2 = data_fixture.create_text_field()

    response = api_client.patch(
        reverse('api:database:views:sort_item', kwargs={'view_sort_id': sort_2.id}),
        {'order': 'ASC'},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    assert response.status_code == HTTP_400_BAD_REQUEST
    assert response.json()['error'] == 'ERROR_USER_NOT_IN_GROUP'

    response = api_client.patch(
        reverse('api:database:views:sort_item', kwargs={'view_sort_id': 9999}),
        {'order': 'ASC'},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    assert response.status_code == HTTP_404_NOT_FOUND
    assert response.json()['error'] == 'ERROR_VIEW_SORT_DOES_NOT_EXIST'

    response = api_client.patch(
        reverse('api:database:views:sort_item', kwargs={'view_sort_id': sort_1.id}),
        {'field': 9999999, 'order': 'EXISTING'},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    response_json = response.json()
    assert response.status_code == HTTP_400_BAD_REQUEST
    assert response_json['error'] == 'ERROR_REQUEST_BODY_VALIDATION'
    assert response_json['detail']['field'][0]['code'] == 'does_not_exist'
    assert response_json['detail']['order'][0]['code'] == 'invalid_choice'

    response = api_client.patch(
        reverse('api:database:views:sort_item', kwargs={'view_sort_id': sort_1.id}),
        {'field': field_2.id},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    response_json = response.json()
    assert response.status_code == HTTP_400_BAD_REQUEST
    assert response_json['error'] == 'ERROR_FIELD_NOT_IN_TABLE'

    response = api_client.patch(
        reverse('api:database:views:sort_item', kwargs={'view_sort_id': sort_1.id}),
        {'field': link_row_field.id},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    response_json = response.json()
    assert response.status_code == HTTP_400_BAD_REQUEST
    assert response_json['error'] == 'ERROR_VIEW_SORT_FIELD_NOT_SUPPORTED'

    response = api_client.patch(
        reverse('api:database:views:sort_item', kwargs={'view_sort_id': sort_3.id}),
        {'field': sort_1.field_id},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    response_json = response.json()
    assert response.status_code == HTTP_400_BAD_REQUEST
    assert response_json['error'] == 'ERROR_VIEW_SORT_FIELD_ALREADY_EXISTS'

    response = api_client.patch(
        reverse('api:database:views:sort_item', kwargs={'view_sort_id': sort_1.id}),
        {'field': field_1.id, 'order': 'ASC'},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    response_json = response.json()
    assert response.status_code == HTTP_200_OK
    assert ViewSort.objects.all().count() == 3
    first = ViewSort.objects.get(pk=sort_1.id)
    assert first.field_id == field_1.id
    assert first.order == 'ASC'
    assert response_json['id'] == first.id
    assert response_json['view'] == first.view_id
    assert response_json['field'] == field_1.id
    assert response_json['order'] == 'ASC'

    response = api_client.patch(
        reverse('api:database:views:sort_item', kwargs={'view_sort_id': sort_1.id}),
        {'order': 'DESC'},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    response_json = response.json()
    assert response.status_code == HTTP_200_OK
    first = ViewSort.objects.get(pk=sort_1.id)
    assert first.field_id == field_1.id
    assert first.order == 'DESC'
    assert response_json['id'] == first.id
    assert response_json['view'] == first.view_id
    assert response_json['field'] == field_1.id
    assert response_json['order'] == 'DESC'

    # An empty body must leave the sort unchanged.
    response = api_client.patch(
        reverse('api:database:views:sort_item', kwargs={'view_sort_id': sort_1.id}),
        {},
        format='json',
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    response_json = response.json()
    assert response.status_code == HTTP_200_OK
    first = ViewSort.objects.get(pk=sort_1.id)
    assert first.field_id == field_1.id
    assert first.order == 'DESC'
    assert response_json['id'] == first.id
    assert response_json['view'] == first.view_id
    assert response_json['field'] == field_1.id
    assert response_json['order'] == 'DESC'

@pytest.mark.django_db
def test_delete_view_sort(api_client, data_fixture):
    user, token = data_fixture.create_user_and_token()
    sort_1 = data_fixture.create_view_sort(user=user, order='DESC')
    sort_2 = data_fixture.create_view_sort()

    response = api_client.delete(
        reverse('api:database:views:sort_item', kwargs={'view_sort_id': sort_2.id}),
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    assert response.status_code == HTTP_400_BAD_REQUEST
    assert response.json()['error'] == 'ERROR_USER_NOT_IN_GROUP'

    response = api_client.delete(
        reverse('api:database:views:sort_item', kwargs={'view_sort_id': 9999}),
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    assert response.status_code == HTTP_404_NOT_FOUND
    assert response.json()['error'] == 'ERROR_VIEW_SORT_DOES_NOT_EXIST'

    response = api_client.delete(
        reverse('api:database:views:sort_item', kwargs={'view_sort_id': sort_1.id}),
        HTTP_AUTHORIZATION=f'JWT {token}',
    )
    assert response.status_code == 204
    assert ViewSort.objects.all().count() == 1
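Every test above repeats the same two-line "status code plus machine-readable error" assertion. A helper could factor that pattern out; the sketch below is hypothetical (not part of the test suite) and uses a `FakeResponse` stub so it runs standalone — any object exposing `status_code` and `json()` would work, including Django test client responses.

```python
class FakeResponse:
    """Minimal stand-in for a test-client response: status code + JSON body."""

    def __init__(self, status_code, payload):
        self.status_code = status_code
        self._payload = payload

    def json(self):
        return self._payload


def assert_error_response(response, expected_status, expected_error):
    """Assert both the HTTP status and the body's 'error' code in one call."""
    assert response.status_code == expected_status, response.json()
    assert response.json()["error"] == expected_error


# Mirrors the "not in group" checks repeated throughout the tests above.
response = FakeResponse(400, {"error": "ERROR_USER_NOT_IN_GROUP"})
assert_error_response(response, 400, "ERROR_USER_NOT_IN_GROUP")
```

Keeping both assertions in one helper also means a failure message includes the full JSON body, which is usually the fastest clue when an endpoint returns an unexpected error.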

# ---------------------------------------------------------------------------
# File: great_expectations/expectations/util.py
# Repo: Lee-W/great_expectations @ bd9cb27d1caa752364d298f5057e85b6b604b622
# License: Apache-2.0
# ---------------------------------------------------------------------------

import numpy as np
from great_expectations.validator.validation_graph import MetricConfiguration

# Positional parameter order of each legacy expectation method, keyed by name.
legacy_method_parameters = {
    "expect_column_bootstrapped_ks_test_p_value_to_be_greater_than": (
        "column", "partition_object", "p", "bootstrap_samples",
        "bootstrap_sample_size", "result_format", "include_config",
        "catch_exceptions", "meta",
    ),
    "expect_column_chisquare_test_p_value_to_be_greater_than": (
        "column", "partition_object", "p", "tail_weight_holdout",
        "result_format", "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_distinct_values_to_be_in_set": (
        "column", "value_set", "parse_strings_as_datetimes",
        "result_format", "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_distinct_values_to_contain_set": (
        "column", "value_set", "parse_strings_as_datetimes",
        "result_format", "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_distinct_values_to_equal_set": (
        "column", "value_set", "parse_strings_as_datetimes",
        "result_format", "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_kl_divergence_to_be_less_than": (
        "column", "partition_object", "threshold", "tail_weight_holdout",
        "internal_weight_holdout", "bucketize_data", "result_format",
        "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_max_to_be_between": (
        "column", "min_value", "max_value", "strict_min", "strict_max",
        "parse_strings_as_datetimes", "output_strftime_format",
        "result_format", "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_mean_to_be_between": (
        "column", "min_value", "max_value", "strict_min", "strict_max",
        "result_format", "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_median_to_be_between": (
        "column", "min_value", "max_value", "strict_min", "strict_max",
        "result_format", "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_min_to_be_between": (
        "column", "min_value", "max_value", "strict_min", "strict_max",
        "parse_strings_as_datetimes", "output_strftime_format",
        "result_format", "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_most_common_value_to_be_in_set": (
        "column", "value_set", "ties_okay", "result_format",
        "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_pair_cramers_phi_value_to_be_less_than": (
        "column_A", "column_B", "bins_A", "bins_B", "n_bins_A", "n_bins_B",
        "threshold", "result_format", "include_config", "catch_exceptions",
        "meta",
    ),
    "expect_column_pair_values_A_to_be_greater_than_B": (
        "column_A", "column_B", "or_equal", "parse_strings_as_datetimes",
        "allow_cross_type_comparisons", "ignore_row_if", "result_format",
        "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_pair_values_to_be_equal": (
        "column_A", "column_B", "ignore_row_if", "result_format",
        "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_pair_values_to_be_in_set": (
        "column_A", "column_B", "value_pairs_set", "ignore_row_if",
        "result_format", "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_parameterized_distribution_ks_test_p_value_to_be_greater_than": (
        "column", "distribution", "p_value", "params", "result_format",
        "row_condition", "condition_parser", "include_config",
        "catch_exceptions", "meta",
    ),
    "expect_column_proportion_of_unique_values_to_be_between": (
        "column", "min_value", "max_value", "strict_min", "strict_max",
        "result_format", "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_quantile_values_to_be_between": (
        "column", "quantile_ranges", "allow_relative_error", "result_format",
        "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_stdev_to_be_between": (
        "column", "min_value", "max_value", "strict_min", "strict_max",
        "result_format", "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_sum_to_be_between": (
        "column", "min_value", "max_value", "strict_min", "strict_max",
        "result_format", "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_to_exist": (
        "column", "column_index", "result_format", "include_config",
        "catch_exceptions", "meta",
    ),
    "expect_column_unique_value_count_to_be_between": (
        "column", "min_value", "max_value", "result_format",
        "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_value_lengths_to_be_between": (
        "column", "min_value", "max_value", "mostly", "row_condition",
        "condition_parser", "result_format", "include_config",
        "catch_exceptions", "meta",
    ),
    "expect_column_value_lengths_to_equal": (
        "column", "value", "mostly", "result_format", "row_condition",
        "condition_parser", "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_values_to_be_between": (
        "column", "min_value", "max_value", "strict_min", "strict_max",
        "allow_cross_type_comparisons", "parse_strings_as_datetimes",
        "output_strftime_format", "mostly", "row_condition",
        "condition_parser", "result_format", "include_config",
        "catch_exceptions", "meta",
    ),
    "expect_column_values_to_be_dateutil_parseable": (
        "column", "mostly", "result_format", "row_condition",
        "condition_parser", "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_values_to_be_decreasing": (
        "column", "strictly", "parse_strings_as_datetimes", "mostly",
        "row_condition", "condition_parser", "result_format",
        "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_values_to_be_in_set": (
        "column", "value_set", "mostly", "parse_strings_as_datetimes",
        "result_format", "row_condition", "condition_parser",
        "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_values_to_be_in_type_list": (
        "column", "type_list", "mostly", "result_format", "row_condition",
        "condition_parser", "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_values_to_be_increasing": (
        "column", "strictly", "parse_strings_as_datetimes", "mostly",
        "row_condition", "condition_parser", "result_format",
        "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_values_to_be_json_parseable": (
        "column", "mostly", "result_format", "row_condition",
        "condition_parser", "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_values_to_be_null": (
        "column", "mostly", "result_format", "row_condition",
        "condition_parser", "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_values_to_be_of_type": (
        "column", "type_", "mostly", "result_format", "row_condition",
        "condition_parser", "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_values_to_be_unique": (
        "column", "mostly", "result_format", "row_condition",
        "condition_parser", "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_values_to_match_json_schema": (
        "column", "json_schema", "mostly", "result_format", "row_condition",
        "condition_parser", "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_values_to_match_regex": (
        "column", "regex", "mostly", "result_format", "row_condition",
        "condition_parser", "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_values_to_match_regex_list": (
        "column", "regex_list", "match_on", "mostly", "result_format",
        "row_condition", "condition_parser", "include_config",
        "catch_exceptions", "meta",
    ),
    "expect_column_values_to_match_strftime_format": (
        "column", "strftime_format", "mostly", "result_format",
        "row_condition", "condition_parser", "include_config",
        "catch_exceptions", "meta",
    ),
    "expect_column_values_to_not_be_in_set": (
        "column", "value_set", "mostly", "parse_strings_as_datetimes",
        "result_format", "row_condition", "condition_parser",
        "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_values_to_not_be_null": (
        "column", "mostly", "result_format", "row_condition",
        "condition_parser", "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_values_to_not_match_regex": (
        "column", "regex", "mostly", "result_format", "row_condition",
        "condition_parser", "include_config", "catch_exceptions", "meta",
    ),
    "expect_column_values_to_not_match_regex_list": (
        "column", "regex_list", "mostly", "result_format", "row_condition",
        "condition_parser", "include_config", "catch_exceptions", "meta",
    ),
    "expect_compound_columns_to_be_unique": (
        "column_list", "ignore_row_if", "result_format", "row_condition",
        "condition_parser", "include_config", "catch_exceptions", "meta",
    ),
    "expect_multicolumn_sum_to_equal": (
        "column_list", "sum_total", "result_format", "include_config",
        "catch_exceptions", "meta",
    ),
    "expect_multicolumn_values_to_be_unique": (
        "column_list", "ignore_row_if", "result_format", "row_condition",
        "condition_parser", "include_config", "catch_exceptions", "meta",
    ),
    "expect_select_column_values_to_be_unique_within_record": (
        "column_list", "ignore_row_if", "result_format", "row_condition",
        "condition_parser", "include_config", "catch_exceptions", "meta",
    ),
    "expect_table_column_count_to_be_between": (
        "min_value", "max_value", "result_format", "include_config",
        "catch_exceptions", "meta",
    ),
    "expect_table_column_count_to_equal": (
        "value", "result_format", "include_config", "catch_exceptions",
        "meta",
    ),
    "expect_table_columns_to_match_ordered_list": (
        "column_list", "result_format", "include_config",
        "catch_exceptions", "meta",
    ),
    "expect_table_columns_to_match_set": (
        "column_set", "exact_match", "result_format", "include_config",
        "catch_exceptions", "meta",
    ),
    "expect_table_row_count_to_be_between": (
        "min_value", "max_value", "result_format", "include_config",
        "catch_exceptions", "meta",
    ),
    "expect_table_row_count_to_equal": (
        "value", "result_format", "include_config", "catch_exceptions",
        "meta",
    ),
}
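The tuples in `legacy_method_parameters` record the positional parameter order of each legacy expectation method, which is exactly what is needed to rebind positional arguments to keyword arguments. The helper below is a hypothetical sketch (`legacy_args_to_kwargs` is not part of this module) showing how such a mapping can be consumed.

```python
def legacy_args_to_kwargs(method_name, args, mapping):
    """Rebind positional args to keyword args using a name-order mapping."""
    names = mapping.get(method_name)
    if names is None:
        raise KeyError(f"Unknown legacy expectation: {method_name}")
    if len(args) > len(names):
        raise TypeError(
            f"{method_name} accepts at most {len(names)} positional arguments"
        )
    # zip() truncates to the shorter sequence, so missing trailing
    # arguments simply produce no entry in the result.
    return dict(zip(names, args))


# Example against a single entry copied from the mapping above.
example_mapping = {
    "expect_table_row_count_to_equal": (
        "value", "result_format", "include_config", "catch_exceptions", "meta",
    ),
}
kwargs = legacy_args_to_kwargs(
    "expect_table_row_count_to_equal", [100], example_mapping
)
# kwargs == {"value": 100}
```

This is the general shape of a positional-to-keyword bridge; the real library may apply additional validation or defaulting on top of it.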

# ---------------------------------------------------------------------------
# File: utils/QtCore.py
# Repo: JaviCDiaz/Crypto-Info @ 72b9343946efe6df135abf48e5a43b2a440b8d59
# License: MIT
# ---------------------------------------------------------------------------

from PySide6.QtCore import *
from PySide6.QtGui import *
from PySide6.QtWidgets import *
from PySide6.QtCharts import *

# ---------------------------------------------------------------------------
# File: tests/test_options.py
# Repo: tcmetzger/sphinx-favicon @ e8613e04bbe3b6f6816da4efd297302d20aa8cae
# License: MIT
# ---------------------------------------------------------------------------

from itertools import chain
from pathlib import Path
import pytest
import conftest
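The tests below assert on a `favicon_tags` fixture, i.e. a list of attribute mappings for the `<link>` favicon tags found in the built HTML. The fixture itself lives in `conftest.py`; the sketch below is a hypothetical illustration of what such an extraction can look like, using only the stdlib parser so it runs standalone.

```python
from html.parser import HTMLParser


class FaviconLinkCollector(HTMLParser):
    """Collect attribute dicts of <link> tags whose rel mentions 'icon'."""

    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and "icon" in attrs.get("rel", ""):
            self.tags.append(attrs)


html = (
    '<head>'
    '<link rel="icon" href="favicon-16x16.png" type="image/png" sizes="16x16">'
    '<link rel="stylesheet" href="style.css">'
    '</head>'
)
collector = FaviconLinkCollector()
collector.feed(html)
# collector.tags now holds one entry, for the rel="icon" link only.
```

The substring check on `rel` also matches `apple-touch-icon`, which is why the tests below can assert on both favicon variants with the same fixture shape.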

@pytest.mark.sphinx("html", testroot="list_of_three_dicts")
def test_list_of_three_dicts(favicon_tags):
    # this test should have 3 favicons
    assert len(favicon_tags) == 3

    # all favicons should have rel, href, type, and sizes attributes
    for favicon_tag in favicon_tags:
        assert favicon_tag["rel"]
        assert favicon_tag["href"]
        assert favicon_tag["type"]
        assert favicon_tag["sizes"]

    # check first favicon in more detail
    assert favicon_tags[0]["rel"] == ["icon"]
    assert (
        favicon_tags[0]["href"]
        == "https://secure.example.com/favicon/favicon-16x16.png"
    )
    assert favicon_tags[0]["type"] == "image/png"
    assert favicon_tags[0]["sizes"] == "16x16"
@pytest.mark.sphinx("html", testroot="list_of_three_dicts_automated_values")
def test_list_of_three_dicts_automated_values(favicon_tags):
# this test should have 3 favicons
assert len(favicon_tags) == 3
# all favicons should have rel, href, type, and sizes attributes
for favicon_tag in favicon_tags:
assert favicon_tag["rel"]
assert favicon_tag["href"]
assert favicon_tag["type"]
assert favicon_tag["sizes"]
# check first favicon in more detail
assert favicon_tags[0]["rel"] == ["icon"]
assert (
favicon_tags[0]["href"]
== "https://secure.example.com/favicon/favicon-16x16.png"
)
assert favicon_tags[0]["type"] == "image/png"
assert favicon_tags[0]["sizes"] == "16x16"
@pytest.mark.sphinx("html", testroot="single_dict")
def test_single_dict(favicon_tags):
# this test should have 1 favicon
assert len(favicon_tags) == 1
# check favicon
assert favicon_tags[0]["rel"] == ["apple-touch-icon"]
assert (
favicon_tags[0]["href"]
== "https://secure.example.com/favicon/apple-touch-icon-180x180.png"
)
assert favicon_tags[0]["type"] == "image/png"
assert favicon_tags[0]["sizes"] == "180x180"
@pytest.mark.sphinx("html", testroot="list_of_urls")
def test_list_of_urls(favicon_tags):
# this test should have 3 favicons
assert len(favicon_tags) == 3
# all favicons should have rel, href, and type attributes
for favicon_tag in favicon_tags:
assert favicon_tag["rel"]
assert favicon_tag["href"]
assert favicon_tag["type"]
# check first favicon in more detail
assert favicon_tags[0]["rel"] == ["icon"]
assert (
favicon_tags[0]["href"]
== "https://secure.example.com/favicon/favicon-16x16.gif"
)
assert favicon_tags[0]["type"] == "image/gif"
@pytest.mark.sphinx("html", testroot="static_files")
def test_static_files(app, favicon_tags, favicon_tags_for_nested):
# this test should have 2 favicons
assert len(favicon_tags) == 2
# all favicons should have rel, href, type, and sizes attributes
for favicon_tag in chain(favicon_tags, favicon_tags_for_nested):
assert favicon_tag["rel"] == ["icon"]
assert "_static" in favicon_tag["href"]
assert favicon_tag["type"] == "image/svg+xml"
assert favicon_tag["sizes"]
assert "static-file" not in favicon_tag
for favicon_tag in favicon_tags:
assert favicon_tag["href"].startswith("_static")
for favicon_tag in favicon_tags_for_nested:
assert favicon_tag["href"].startswith("../_static")
static = Path(app.outdir, "_static")
assert (static / "square.svg").exists()
assert (static / "nested/triangle.svg").exists()
@pytest.mark.sphinx("html", testroot="href_and_static")
def test_href_and_static(app, favicon_tags, favicon_tags_for_nested):
# this test should have 2 favicons
assert len(favicon_tags) == 2
# all favicons should have rel, href, type, and sizes attributes
for favicon_tag in chain(favicon_tags, favicon_tags_for_nested):
assert favicon_tag["rel"] == ["icon"]
assert "_static" in favicon_tag["href"]
assert favicon_tag["type"] == "image/svg+xml"
assert favicon_tag["sizes"]
assert "static-file" not in favicon_tag
for favicon_tag in favicon_tags:
assert favicon_tag["href"].startswith("_static")
for favicon_tag in favicon_tags_for_nested:
assert favicon_tag["href"].startswith("../_static")
# favicons should use relative paths, ignoring paths provided with `href`
static = Path(app.outdir, "_static")
assert (static / "square.svg").exists()
assert (static / "nested/triangle.svg").exists()
| 32.671429 | 77 | 0.674464 | 613 | 4,574 | 4.823817 | 0.130506 | 0.148799 | 0.113629 | 0.091309 | 0.887386 | 0.861346 | 0.816706 | 0.805208 | 0.805208 | 0.784917 | 0 | 0.016125 | 0.200044 | 4,574 | 139 | 78 | 32.906475 | 0.79202 | 0.151946 | 0 | 0.707865 | 0 | 0 | 0.195495 | 0.009322 | 0 | 0 | 0 | 0 | 0.561798 | 1 | 0.067416 | false | 0 | 0.044944 | 0 | 0.11236 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
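The tests above rely on a `favicon_tags` fixture defined in `conftest`; the extraction it performs can be sketched with the stdlib `html.parser` (`FaviconLinkParser` and `favicon_tags_from_html` are illustrative names, not part of the actual conftest):

```python
from html.parser import HTMLParser


class FaviconLinkParser(HTMLParser):
    # Collect attribute dicts for <link> tags whose rel mentions "icon",
    # mirroring what a favicon_tags-style fixture would return.
    def __init__(self):
        super().__init__()
        self.favicons = []

    def handle_starttag(self, tag, attrs):
        if tag == "link":
            d = dict(attrs)
            if "icon" in (d.get("rel") or ""):
                self.favicons.append(d)


def favicon_tags_from_html(html: str):
    parser = FaviconLinkParser()
    parser.feed(html)
    return parser.favicons
```

This also matches `rel="apple-touch-icon"`, which the `single_dict` test checks for.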
70d810762bddeff22a1b0a3448efa398a2582901 | 2,188 | py | Python | tests/test_importer.py | vecmezoni/import_monster | 8ada4394d44e0d7413e5776506b483da567eb410 | [
"MIT"
] | null | null | null | tests/test_importer.py | vecmezoni/import_monster | 8ada4394d44e0d7413e5776506b483da567eb410 | [
"MIT"
] | null | null | null | tests/test_importer.py | vecmezoni/import_monster | 8ada4394d44e0d7413e5776506b483da567eb410 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import cmath
import itertools
import math
import pytest
from import_monster import methods_importer
class TestMethodsImporter:
def test_raises_error_on_incorrect_module_input(self):
with pytest.raises(TypeError):
methods_importer('cool_method', [123])
def test_raises_error_on_non_existing_module_name(self):
with pytest.raises(ModuleNotFoundError):
methods_importer('cool_method', ['non_existing_module'])
def test_returns_nothing_for_non_existing_method_for_module_name(self):
assert methods_importer('non_existing_method', ['math']) == []
def test_returns_nothing_for_non_existing_method_for_module(self):
assert methods_importer('non_existing_method', [math]) == []
def test_returns_nothing_for_non_callable_property_for_module_name(self):
assert methods_importer('pi', ['math']) == []
def test_returns_nothing_for_non_callable_property_for_module(self):
assert methods_importer('pi', [math]) == []
def test_returns_method_for_single_module_name(self):
assert methods_importer('exp', ['math']) == [math.exp]
def test_returns_only_existing_method_for_several_module_names(self):
assert methods_importer('exp', ['math', 'itertools']) == [math.exp]
def test_returns_several_existing_method_for_several_module_names(self):
assert methods_importer('exp', ['math', 'cmath']) == [
math.exp, cmath.exp]
def test_keeps_the_order_of_modules_for_module_names(self):
assert methods_importer('exp', ['cmath', 'math']) == [
cmath.exp, math.exp]
def test_returns_method_for_single_module(self):
assert methods_importer('exp', [math]) == [math.exp]
def test_returns_only_existing_method_for_several_modules(self):
assert methods_importer('exp', [math, itertools]) == [math.exp]
def test_returns_several_existing_method_for_several_modules(self):
assert methods_importer('exp', [math, cmath]) == [math.exp, cmath.exp]
def test_keeps_the_order_of_modules_for_modules(self):
assert methods_importer('exp', [cmath, math]) == [cmath.exp, math.exp]
| 38.385965 | 78 | 0.722121 | 278 | 2,188 | 5.226619 | 0.176259 | 0.154852 | 0.140399 | 0.206469 | 0.794219 | 0.76669 | 0.759119 | 0.707502 | 0.707502 | 0.657949 | 0 | 0.002188 | 0.164534 | 2,188 | 56 | 79 | 39.071429 | 0.79267 | 0.009598 | 0 | 0 | 0 | 0 | 0.069284 | 0 | 0 | 0 | 0 | 0 | 0.315789 | 1 | 0.368421 | false | 0 | 0.526316 | 0 | 0.921053 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 9 |
cb916feb84a117f7d641121b0ba4427c4ecdba77 | 21 | py | Python | sphinx-docs/__init__.py | czbiohub/reconstruct-order | e729ae3871aea0a5ec2d42744a9448c7f0a93037 | [
"Unlicense"
] | 6 | 2019-10-30T23:00:01.000Z | 2021-03-02T19:09:07.000Z | sphinx-docs/__init__.py | czbiohub/ReconstructOrder | e729ae3871aea0a5ec2d42744a9448c7f0a93037 | [
"Unlicense"
] | 14 | 2019-07-08T22:51:29.000Z | 2019-07-13T15:44:01.000Z | sphinx-docs/__init__.py | mehta-lab/reconstruct-order | e729ae3871aea0a5ec2d42744a9448c7f0a93037 | [
"Unlicense"
] | 2 | 2020-05-02T23:28:36.000Z | 2020-07-16T23:46:46.000Z | # bchhun, {4/17/19}
| 7 | 19 | 0.52381 | 4 | 21 | 2.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.294118 | 0.190476 | 21 | 2 | 20 | 10.5 | 0.352941 | 0.809524 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
cbb54b0d0840b7d04738992656c79b3ddd60c44f | 14,891 | py | Python | examples/mixer-calibration/auto_mixer_tools_visa.py | qua-platform/qua-libs | 805a3b1a69980b939b370b3ba09434bc26dc45ec | [
"BSD-3-Clause"
] | 21 | 2021-05-21T08:23:34.000Z | 2022-03-25T11:30:55.000Z | examples/mixer-calibration/auto_mixer_tools_visa.py | qua-platform/qua-libs | 805a3b1a69980b939b370b3ba09434bc26dc45ec | [
"BSD-3-Clause"
] | 9 | 2021-05-13T19:56:00.000Z | 2021-12-21T05:11:04.000Z | examples/mixer-calibration/auto_mixer_tools_visa.py | qua-platform/qua-libs | 805a3b1a69980b939b370b3ba09434bc26dc45ec | [
"BSD-3-Clause"
] | 2 | 2021-06-21T10:56:40.000Z | 2021-12-19T14:21:33.000Z | # This file contains classes of spectrum analyzers using the VISA interface to communicate with the computers.
# They should have almost uniform commands, making adaptions to new models/brands quite easy
from qm.qua import *
from abc import ABC, abstractmethod
import numpy as np
import pyvisa as visa
class VisaSA(ABC):
def __init__(self, address, qm):
# Gets an existing qm, assumes there is an element called "qubit" with an operation named "test_pulse" which
# plays a constant pulse
super().__init__()
rm = visa.ResourceManager()
self.sa = rm.open_resource(address)
self.sa.timeout = 100000
with program() as mixer_cal:
with infinite_loop_():
play("test_pulse", "qubit")
self.qm = qm
self.job = qm.execute(mixer_cal)
self.method = None
def IQ_imbalance_correction(self, g, phi):
c = np.cos(phi)
s = np.sin(phi)
N = 1 / ((1 - g ** 2) * (2 * c ** 2 - 1))
return [
float(N * x) for x in [(1 - g) * c, (1 + g) * s, (1 - g) * s, (1 + g) * c]
]
def get_leakage(self, i0, q0):
self.qm.set_dc_offset_by_qe("qubit", "I", i0)
self.qm.set_dc_offset_by_qe("qubit", "Q", q0)
amp_ = self.get_amp()
return amp_
def get_image(self, g, p):
self.job.set_element_correction("qubit", self.IQ_imbalance_correction(g, p))
amp_ = self.get_amp()
return amp_
def __del__(self):
self.sa.clear()
self.sa.close()
@abstractmethod
def get_amp(self):
pass
@abstractmethod
def set_automatic_video_bandwidth(self, state: int):
# State should be 1 or 0
pass
@abstractmethod
def set_automatic_bandwidth(self, state: int):
# State should be 1 or 0
pass
@abstractmethod
def set_bandwidth(self, bw: int):
# Sets the bandwidth
pass
@abstractmethod
def set_sweep_points(self, n_points: int):
# Sets the number of points for a sweep
pass
@abstractmethod
def set_center_freq(self, freq: int):
# Sets the central frequency
pass
@abstractmethod
def set_span(self, span: int):
# Sets the span
pass
@abstractmethod
def set_cont_off(self):
# Sets continuous mode off
pass
@abstractmethod
def set_cont_on(self):
# Sets continuous mode on
pass
@abstractmethod
def get_single_trigger(self):
# Performs a single sweep
pass
@abstractmethod
def active_marker(self, marker: int):
# Activate the given marker
pass
@abstractmethod
def set_marker_freq(self, marker: int, freq: int):
# Sets the marker's frequency
pass
@abstractmethod
def query_marker(self, marker: int):
# Query the marker
pass
@abstractmethod
def get_full_trace(self):
# Returns the full trace
pass
@abstractmethod
def enable_measurement(self):
# Sets the measurement to channel power
pass
@abstractmethod
def disables_measurement(self):
# Sets the measurement to none
pass
@abstractmethod
def sets_measurement_integration_bw(self, ibw: int):
# Sets the measurement integration bandwidth
pass
@abstractmethod
def disables_measurement_averaging(self):
# Disables averaging in the measurement
pass
@abstractmethod
def get_measurement_data(self):
# Returns the result of the measurement
pass
class RohdeSchwarzFPC1000(VisaSA):
def get_amp(self):
self.get_single_trigger()
if self.method == 1: # Channel power
sig = self.get_measurement_data()
elif self.method == 2: # Marker
sig = self.query_marker(1)
else:
sig = float("NaN")
return sig
def set_automatic_video_bandwidth(self, state: int):
# State should be 1 or 0
self.sa.write(f"SENS:BAND:VID:AUTO {int(state)}")
def set_automatic_bandwidth(self, state: int):
# State should be 1 or 0. Resolution (or measurement) bandwidth
self.sa.write(f"SENS:BAND:AUTO {int(state)}")
def set_bandwidth(self, bw: int):
# Sets the resolution (or measurement) bandwidth, 1 Hz to 3 MHz, default unit is Hz
# Example SENS:BAND 100000
self.sa.write(f"SENS:BAND {int(bw)}")
def set_sweep_points(self, n_points: int):
# Sets the number of points for a sweep, allowed range 101 to 2501, default is 201
self.sa.write(f"SENS:SWE:POIN {int(n_points)}")
def set_center_freq(self, freq: int):
# Sets the central frequency, default unit is Hz
self.sa.write(f"SENS:FREQ:CENT {int(freq)}")
def set_span(self, span: int):
# Sets the span, default unit is Hz
self.sa.write(f"SENS:FREQ:SPAN {int(span)}")
def set_cont_off(self):
# This command selects the sweep mode (but does not start the measurement!)
# OFF or 0 is a single sweep mode
# *OPC? is to make sure there is no overlapping execution
return self.sa.query("INIT:CONT OFF;*OPC?")
def set_cont_on(self):
# This command selects the sweep mode (but does not start the measurement!)
# ON or 1 is a continuous sweep mode
# *OPC? is to make sure there is no overlapping execution
return self.sa.query("INIT:CONT ON;*OPC?")
def get_single_trigger(self):
# Initiates a new measurement sequence (starts the sweep)
return self.sa.query("INIT:IMM;*OPC?")
def active_marker(self, marker: int):
# Activate the given marker
self.sa.write(f"CALC:MARK{int(marker)} ON")
def set_marker_freq(self, marker: int, freq: int):
# Sets the marker's frequency. Default unit is Hz
self.get_single_trigger()
self.sa.write(f"CALC:MARK{int(marker)}:X {int(freq)}")
def query_marker(self, marker: int):
# Query the amplitude (default unit is dBm) of the marker
return float(self.sa.query(f"CALC:MARK{int(marker)}:Y?"))
def get_full_trace(self):
# Returns the full trace. Implicit assumption that this is trace1 (there could be 1-4)
self.sa.write("FORM ASC") # data format needs to be in ASCII
ff_SA_Trace_Data = self.sa.query("TRAC:DATA? TRACE1")
# Data from the FPC comes out as a string of 1183 values separated by ',':
# '-1.97854112E+01,-3.97854112E+01,-2.97454112E+01,-4.92543112E+01,-5.17254112E+01,-1.91254112E+01...\n'
# The code below turns it into a Python list of floats
# Use split to turn long string to an array of values
ff_SA_Trace_Data_Array = ff_SA_Trace_Data.split(",")
amp = [float(i) for i in ff_SA_Trace_Data_Array]
return amp
def enable_measurement(self):
# Sets the measurement to channel power
self.sa.write(
"CALC:MARK:FUNC:POW:SEL CPOW; CALC:MARK:FUNC:LEV:ONCE; CALC:MARK:FUNC:CPOW:UNIT DBM; CALC:MARK:FUNC:POW:RES:PHZ ON"
)
def disables_measurement(self):
# Sets the channel power measurement to none
self.sa.write("CALC:MARK:FUNC:POW OFF")
def sets_measurement_integration_bw(self, ibw: int):
# Sets the measurement integration bandwidth for channel power measurements
self.sa.write(f"CALC:MARK:FUNC:CPOW:BAND {int(ibw)}")
def disables_measurement_averaging(self):
# disables averaging in the measurement
pass
def get_measurement_data(self):
# Returns the result of the measurement
return self.sa.query(f"CALC:MARK:FUNC:POW:RES? CPOW")
class KeysightFieldFox(VisaSA):
def get_amp(self):
self.get_single_trigger()
if self.method == 1: # Channel power
sig = self.get_measurement_data()
elif self.method == 2: # Marker
sig = self.query_marker(1)
else:
sig = float("NaN")
return sig
def set_automatic_video_bandwidth(self, state: int):
# State should be 1 or 0
self.sa.write(f"SENS:BAND:VID:AUTO {int(state)}")
def set_automatic_bandwidth(self, state: int):
# State should be 1 or 0
self.sa.write(f"SENS:BAND:AUTO {int(state)}")
def set_bandwidth(self, bw: int):
# Sets the bandwidth
self.sa.write(f"SENS:BAND {int(bw)}")
def set_sweep_points(self, n_points: int):
# Sets the number of points for a sweep
self.sa.write(f"SENS:SWE:POIN {int(n_points)}")
def set_center_freq(self, freq: int):
# Sets the central frequency
self.sa.write(f"SENS:FREQ:CENT {int(freq)}")
def set_span(self, span: int):
# Sets the span
self.sa.write(f"SENS:FREQ:SPAN {int(span)}")
def set_cont_off(self):
return self.sa.query("INIT:CONT OFF;*OPC?")
def set_cont_on(self):
# Sets continuous mode on
return self.sa.query("INIT:CONT ON;*OPC?")
def get_single_trigger(self):
# Performs a single sweep
return self.sa.query("INIT:IMM;*OPC?")
def active_marker(self, marker: int):
# Activate the given marker
self.sa.write(f"CALC:MARK{int(marker)}:ACT")
def set_marker_freq(self, marker: int, freq: int):
# Sets the marker's frequency
self.get_single_trigger()
self.sa.write(f"CALC:MARK{int(marker)}:X {int(freq)}")
def query_marker(self, marker: int):
# Query the marker
return float(self.sa.query(f"CALC:MARK{int(marker)}:Y?"))
def get_full_trace(self):
# Returns the full trace
ff_SA_Trace_Data = self.sa.query("TRACE:DATA?")
# Data from the Fieldfox comes out as a string separated by ',':
# '-1.97854112E+01,-3.97854112E+01,-2.97454112E+01,-4.92543112E+01,-5.17254112E+01,-1.91254112E+01...\n'
# The code below turns it into a Python list of floats
# Use split to turn long string to an array of values
ff_SA_Trace_Data_Array = ff_SA_Trace_Data.split(",")
amp = [float(i) for i in ff_SA_Trace_Data_Array]
return amp
def enable_measurement(self):
# Sets the measurement to channel power
self.sa.write("SENS:MEAS:CHAN CHP")
def disables_measurement(self):
# Sets the measurement to none
self.sa.write("SENS:MEAS:CHAN NONE")
def sets_measurement_integration_bw(self, ibw: int):
# Sets the measurement integration bandwidth
self.sa.write(f"SENS:CME:IBW {int(ibw)}")
def disables_measurement_averaging(self):
# disables averaging in the measurement
self.sa.write("SENS:CME:AVER:ENAB 0")
def get_measurement_data(self):
# Returns the result of the measurement
return float(self.sa.query("CALC:MEAS:DATA?").split(",")[0])
# Data from the Fieldfox comes out as a string separated by ',':
# '-1.97854112E+01,-3.97854112E+01\n'
# The code above takes the first value and converts to float.
class KeysightXSeries(VisaSA):
def get_amp(self):
self.get_single_trigger()
if self.method == 1: # Channel power
sig = self.get_measurement_data()
elif self.method == 2: # Marker
sig = self.query_marker(1)
else:
sig = float("NaN")
return sig
def set_automatic_video_bandwidth(self, state: int):
# State should be 1 or 0
self.sa.write(f"SENS:BAND:VID:AUTO {int(state)}")
def set_automatic_bandwidth(self, state: int):
# State should be 1 or 0
self.sa.write(f"SENS:BAND:AUTO {int(state)}")
def set_bandwidth(self, bw: int):
# Sets the bandwidth
self.sa.write(f"SENS:BAND {int(bw)}")
def set_sweep_points(self, n_points: int):
# Sets the number of points for a sweep
self.sa.write(f"SENS:SWE:POIN {int(n_points)}")
def set_center_freq(self, freq: int):
# Sets the central frequency
self.sa.write(f"SENS:FREQ:CENT {int(freq)}")
def set_span(self, span: int):
# Sets the span
self.sa.write(f"SENS:FREQ:SPAN {int(span)}")
def set_cont_off(self):
return self.sa.query("INIT:CONT OFF;*OPC?")
def set_cont_on(self):
# Sets continuous mode on
return self.sa.query("INIT:CONT ON;*OPC?")
def get_single_trigger(self):
# Performs a single sweep
return self.sa.query("INIT:IMM;*OPC?")
def active_marker(self, marker: int):
# Activate the given marker
self.sa.write(f"CALC:MARK{int(marker)}:MODE POS")
def set_marker_freq(self, marker: int, freq: int):
# Sets the marker's frequency
self.get_single_trigger()
self.sa.write(f"CALC:MARK{int(marker)}:X {int(freq)}")
def query_marker(self, marker: int):
# Query the marker
return float(self.sa.query(f"CALC:MARK{int(marker)}:Y?"))
def get_full_trace(self):
# Returns the full trace
ff_SA_Trace_Data = self.sa.query("TRACE:DATA? TRACE1")
# Data from the Keysight comes out as a string separated by ',':
# '-1.97854112E+01,-3.97854112E+01,-2.97454112E+01,-4.92543112E+01,-5.17254112E+01,-1.91254112E+01...\n'
# The code below turns it into a Python list of floats
# Use split to turn long string to an array of values
ff_SA_Trace_Data_Array = ff_SA_Trace_Data.split(",")
amp = [float(i) for i in ff_SA_Trace_Data_Array]
return amp
def enable_measurement(self):
# Sets the measurement to channel power
self.sa.write(":CONF:CHP")
def disables_measurement(self):
# Sets the measurement to none
self.sa.write(":CONF:CHP NONE")
def sets_measurement_integration_bw(self, ibw: int):
# Sets the measurement integration bandwidth
self.sa.write(f"SENS:CHP:BAND:INT {int(ibw)}")
def disables_measurement_averaging(self):
# disables averaging in the measurement
self.sa.write("SENS:CHP:AVER 0")
def get_measurement_data(self):
# Returns the result of the measurement
return float(self.sa.query("READ:CHP?").split(",")[0])
# Data from the Keysight comes out as a string separated by ',':
# '-1.97854112E+01,-3.97854112E+01\n'
# The code above takes the first value and converts to float.
| 34.3903 | 128 | 0.609966 | 2,074 | 14,891 | 4.262295 | 0.132594 | 0.039367 | 0.044796 | 0.036652 | 0.817647 | 0.793891 | 0.779299 | 0.765271 | 0.759389 | 0.728394 | 0 | 0.030491 | 0.286415 | 14,891 | 432 | 129 | 34.469907 | 0.80143 | 0.281579 | 0 | 0.767347 | 0 | 0.004082 | 0.133845 | 0.035922 | 0 | 0 | 0 | 0 | 0 | 1 | 0.330612 | false | 0.081633 | 0.016327 | 0.061224 | 0.461224 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 9 |
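All three `get_full_trace` implementations above repeat the same split-and-cast step; as a standalone sketch (`parse_trace` is an illustrative name, not part of the original classes):

```python
def parse_trace(raw: str) -> list[float]:
    """Convert an instrument's comma-separated ASCII trace into floats.

    Analyzers such as the FieldFox answer queries like "TRACE:DATA?" with
    strings of the form '-1.97854112E+01,-3.97854112E+01,...' plus a
    trailing newline; splitting on ',' and casting each field recovers
    the amplitude values in dBm.
    """
    return [float(value) for value in raw.strip().split(",")]
```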
cbbd10b4d257bc705753d53e71d9522e4d20bea3 | 4,774 | py | Python | tests/dictlrn/test_cbpdndlmd.py | manvhah/sporco | 9237d7fc37e75089a2a65ebfe02b7491410da7d4 | [
"BSD-3-Clause"
] | 1 | 2019-07-23T11:27:41.000Z | 2019-07-23T11:27:41.000Z | tests/dictlrn/test_cbpdndlmd.py | wxwoods/sporco | 7b0eefea8b6c720ab9a4998a7c55237445765738 | [
"BSD-3-Clause"
] | null | null | null | tests/dictlrn/test_cbpdndlmd.py | wxwoods/sporco | 7b0eefea8b6c720ab9a4998a7c55237445765738 | [
"BSD-3-Clause"
] | null | null | null | from __future__ import division
from builtins import object
import numpy as np
from sporco.dictlrn import cbpdndlmd
class TestSet01(object):
def setup_method(self, method):
N = 16
Nd = 5
M = 4
K = 3
np.random.seed(12345)
self.D0 = np.random.randn(Nd, Nd, M)
self.S = np.random.randn(N, N, K)
def test_01(self):
lmbda = 1e-1
W = np.ones(self.S.shape[0:2] + (1, self.S.shape[2], 1))
opt = cbpdndlmd.ConvBPDNMaskDictLearn.Options({'MaxMainIter': 10})
try:
b = cbpdndlmd.ConvBPDNMaskDictLearn(self.D0, self.S, lmbda,
W, opt=opt)
b.solve()
except Exception as e:
print(e)
assert 0
def test_02(self):
lmbda = 1e-1
W = np.ones(self.S.shape[0:2] + (1, self.S.shape[2], 1))
opt = cbpdndlmd.ConvBPDNMaskDictLearn.Options(
{'MaxMainIter': 5, 'CCMOD': {'CG': {'MaxIter': 1}}},
dmethod='cg')
try:
b = cbpdndlmd.ConvBPDNMaskDictLearn(self.D0, self.S, lmbda,
W, opt=opt, dmethod='cg')
b.solve()
except Exception as e:
print(e)
assert 0
def test_03(self):
lmbda = 1e-1
W = np.ones(self.S.shape[0:2] + (1, self.S.shape[2], 1))
opt = cbpdndlmd.ConvBPDNMaskDictLearn.Options({'MaxMainIter': 10},
dmethod='cns')
try:
b = cbpdndlmd.ConvBPDNMaskDictLearn(self.D0, self.S, lmbda, W,
opt=opt, dmethod='cns')
b.solve()
except Exception as e:
print(e)
assert 0
def test_04(self):
lmbda = 1e-1
W = np.ones(self.S.shape[0:2] + (1, self.S.shape[2], 1))
opt = cbpdndlmd.ConvBPDNMaskDictLearn.Options(
{'AccurateDFid': True, 'MaxMainIter': 10})
try:
b = cbpdndlmd.ConvBPDNMaskDictLearn(self.D0, self.S, lmbda, W,
opt=opt)
b.solve()
except Exception as e:
print(e)
assert 0
def test_05(self):
N = 16
Nc = 3
Nd = 5
M = 4
K = 3
D0 = np.random.randn(Nd, Nd, Nc, M)
S = np.random.randn(N, N, Nc, K)
lmbda = 1e-1
W = np.ones((N, N, 1, K, 1))
opt = cbpdndlmd.ConvBPDNMaskDictLearn.Options({'MaxMainIter': 10})
try:
b = cbpdndlmd.ConvBPDNMaskDictLearn(D0, S, lmbda, W, opt=opt)
b.solve()
except Exception as e:
print(e)
assert 0
def test_06(self):
N = 16
Nc = 3
Nd = 5
M = 4
K = 3
D0 = np.random.randn(Nd, Nd, 1, M)
S = np.random.randn(N, N, Nc, K)
lmbda = 1e-1
W = np.ones((N, N, Nc, K, 1))
opt = cbpdndlmd.ConvBPDNMaskDictLearn.Options({'MaxMainIter': 10})
try:
b = cbpdndlmd.ConvBPDNMaskDictLearn(D0, S, lmbda, W, opt=opt)
b.solve()
except Exception as e:
print(e)
assert 0
def test_07(self):
lmbda = 1e-1
W = np.ones(self.S.shape[0:2] + (1, self.S.shape[2], 1))
opt = cbpdndlmd.ConvBPDNMaskDictLearn.Options(
{'AccurateDFid': True, 'MaxMainIter': 10}, dmethod='fista')
try:
b = cbpdndlmd.ConvBPDNMaskDictLearn(self.D0, self.S, lmbda, W,
opt=opt, dmethod='fista')
b.solve()
except Exception as e:
print(e)
assert 0
def test_08(self):
lmbda = 1e-1
W = np.ones(self.S.shape[0:2] + (1, self.S.shape[2], 1))
opt = cbpdndlmd.ConvBPDNMaskDictLearn.Options(
{'AccurateDFid': True, 'MaxMainIter': 10}, xmethod='fista')
try:
b = cbpdndlmd.ConvBPDNMaskDictLearn(self.D0, self.S, lmbda, W,
opt=opt, xmethod='fista')
b.solve()
except Exception as e:
print(e)
assert 0
def test_09(self):
lmbda = 1e-1
W = np.ones(self.S.shape[0:2] + (1, self.S.shape[2], 1))
opt = cbpdndlmd.ConvBPDNMaskDictLearn.Options(
{'AccurateDFid': True, 'MaxMainIter': 10},
xmethod='fista', dmethod='cns')
try:
b = cbpdndlmd.ConvBPDNMaskDictLearn(
self.D0, self.S, lmbda, W, opt=opt, xmethod='fista',
dmethod='cns')
b.solve()
except Exception as e:
print(e)
assert 0
| 30.21519 | 74 | 0.478634 | 566 | 4,774 | 4.012367 | 0.128975 | 0.048437 | 0.061647 | 0.035667 | 0.896962 | 0.896962 | 0.878468 | 0.878468 | 0.878468 | 0.878468 | 0 | 0.047983 | 0.39757 | 4,774 | 157 | 75 | 30.407643 | 0.741655 | 0 | 0 | 0.714286 | 0 | 0 | 0.04336 | 0 | 0 | 0 | 0 | 0 | 0.067669 | 1 | 0.075188 | false | 0 | 0.030075 | 0 | 0.112782 | 0.067669 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
381021e874fe9cb88508ca834c01ad594be4bd74 | 117 | py | Python | ramda/min_test.py | jakobkolb/ramda.py | 982b2172f4bb95b9a5b09eff8077362d6f2f0920 | [
"MIT"
] | 56 | 2018-08-06T08:44:58.000Z | 2022-03-17T09:49:03.000Z | ramda/min_test.py | jakobkolb/ramda.py | 982b2172f4bb95b9a5b09eff8077362d6f2f0920 | [
"MIT"
] | 28 | 2019-06-17T11:09:52.000Z | 2022-02-18T16:59:21.000Z | ramda/min_test.py | jakobkolb/ramda.py | 982b2172f4bb95b9a5b09eff8077362d6f2f0920 | [
"MIT"
] | 5 | 2019-09-18T09:24:38.000Z | 2021-07-21T08:40:23.000Z | from .min import min
from ramda.private.asserts import assert_equal
def min_test():
assert_equal(min(3, 1), 1)
| 16.714286 | 46 | 0.735043 | 20 | 117 | 4.15 | 0.6 | 0.26506 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030612 | 0.162393 | 117 | 6 | 47 | 19.5 | 0.816327 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.25 | true | 0 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
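The survey-cleaning code below splits packed numeric answers with the pattern `r'\b\D+\b'`; the effect of that regex can be shown with the stdlib `re` module (toy input, illustrative only):

```python
import re


def split_packed_answers(packed: str) -> list[str]:
    # pandas' Series.str.split(r'\b\D+\b', expand=True) splits on runs of
    # non-digits bounded by word boundaries, so a packed survey answer like
    # 'Grado 3 Area 5' yields the numeric codes in order; the field before
    # the first number comes back empty and is dropped downstream
    # (df2.drop([0], axis=1) in the cleaning code).
    return re.split(r'\b\D+\b', packed)
```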
381a9ca944d257ff19d0f2450a4db9fd8a13e9e5 | 34,713 | py | Python | src/data_cleaners/cleaning_Encuesta_C2_Inicial.py | Grupo-Informatica-Educativa/CFK | 8fa09fa4c5259b358326ab364bd79d3564123ca7 | [
"MIT"
] | null | null | null | src/data_cleaners/cleaning_Encuesta_C2_Inicial.py | Grupo-Informatica-Educativa/CFK | 8fa09fa4c5259b358326ab364bd79d3564123ca7 | [
"MIT"
] | null | null | null | src/data_cleaners/cleaning_Encuesta_C2_Inicial.py | Grupo-Informatica-Educativa/CFK | 8fa09fa4c5259b358326ab364bd79d3564123ca7 | [
"MIT"
] | null | null | null | import pandas as pd
save_new = True
pd.set_option('display.max_rows', 50)
pd.set_option('display.max_columns', 50)
pd.set_option('display.width', 1000)
def add_columns(df1, df2):
for col in df2.columns:
df1[col] = df2[col]
def add_equal_columns(pivot_inicial):
# Pregunta 9
col = "9. ¿Cuáles de las siguientes áreas enseña y en qué grado?"
pivot_inicial[col] = pivot_inicial[col].str.replace('-1', '0')
df2 = pivot_inicial[col].str.split(r'\b\D+\b', expand=True)
df2.rename({
1: '9.1 Ciencias naturales y educación ambiental',
2: '9.2 Ciencias sociales, historia, geografía, constitución política y democracia',
3: '9.3 Educación artística',
4: '9.4 Educación ética y en valores humanos',
5: '9.5 Educación física, recreación y deportes',
6: '9.6 Educación religiosa',
7: '9.7 Humanidades, lengua castellana e idiomas extranjeros',
8: '9.8 Matemáticas',
9: '9.9 Tecnología e informática',
10: '9.10 Otro',
}, axis=1, inplace=True)
df2 = df2.drop([0], axis=1)
add_columns(pivot_inicial, df2)
# Pregunta 12
col = "12. ¿Cuáles de las siguientes estrategias usted ha usado en sus clases?"
# NOTE: the option strings below must match the survey export verbatim
# (including its typos), since they are matched with str.contains.
opciones_preg10 = [
"Realizar clubes y actividades extracurriculares para niñas y jóvenes como refuerzo de lo visto en las clases de áreas STEM.",
"Destacar y reconocer los logros de las niñas y jóvenes, por ejemplo, promover concursos diferenciados por género, como, premio a la niña científica y el niño científico.",
"Dar referencias o modelos de mujeres destacadas en las áreas STEM, por ejemplo, mostrar la película de Marie Curie.",
"Motivar que las niñas participen y sean escuchadas, por ejemplo, alternándolas con los niños.",
"Estimular el liderazgo femenino, por ejemplo, que las niñas y adolescentes sean representantes de grupo.",
"Generar espacios de confianza para las niñas, por ejemplo, realizando reflexiones sobre el género al comienza de la clase",
"Prohibir y corregir los comentarios, actitudes y acciones sexistas.",
'Utilizar lenguaje inclusivo y no realizar estereotipos de género, por ejemplo, decir "Todas las personas" en vez de "todos los niños" o evitar decir que las niñas son delicadas.',
"Tratos y estímulos igualitarios a toda y todo estudiante independientemente de su género.",
"Observar el comportamiento de los niños hacia las niñas porque a ellas no se les puede tocar ni con pétalo de ua rosa."
]
for count, subpregunta in enumerate(opciones_preg10):
pivot_inicial[f'12.{count+1} {subpregunta}'] = pivot_inicial[col].str.contains(subpregunta).replace({
True: "Si",
False: "No"
})
# Pregunta 13
col = "13. Por favor evalúe los siguientes enunciados de acuerdo con su experiencia"
df2 = pivot_inicial[col].str.split(r'\b\D+\b', expand=True)
df2.replace({
"1": "Totalmente en desacuerdo",
"2": "En desacuerdo",
"3": "Neutro",
"4": "De acuerdo",
"5": "Totalmente de acuerdo",
},inplace=True)
df2.rename({
1: '13.1 Es preferible que las mujeres enseñen ciencias sociales y los hombres ciencias exactas',
2: '13.2 Es normal que la mayoría de los ingenieros mecánicos sean varones porque los hombres son mejores para los números',
3: '13.3 Por su esencia una mujer tiene mejor desempeño en un proyecto de alto impacto social que en un proyecto de robótica industrial.',
4: '13.4 Los hombres son mejores para la tecnología que las mujeres.',
5: '13.5 Las mujeres tienen mayores habilidades para proyectos sociales que tecnológicos.',
6: '13.6 Los grandes aportes en la computación han sido hechos por hombres.',
7: '13.7 Que la mayoría de mujeres no opte por áreas exactas es simplemente cuestión de preferencias.',
8: '13.8 Que la mayoría de personas en artes y humanidades sean mujeres es muestra de su sensibilidad.',
9: '13.9 Es natural que los hombres sea buenos para los números y las mujeres para las letras',
10: '13.10 Los hombres son muy ágiles tomando decisiones importantes.',
11: '13.11 Las niñas son más ordenadas que los niños.',
12: '13.12 Muchas mujeres se caracterizan por una pureza que pocos hombres poseen',
13: '13.13 Las mujeres deben ser queridas y protegidas por los hombres',
14: '13.14 Todo hombre debe tener una mujer a quien amar',
15: '13.15 El hombre está incompleto sin la mujer',
16: '13.16 Las mujeres en comparación con los hombres tienden a tener un sentido más refinado de la cultura y el buen gusto',
}, axis=1, inplace=True)
df2 = df2.drop([0], axis=1)
add_columns(pivot_inicial, df2)
# Pregunta 15
col = "15. Por favor evalúe los siguientes enunciados de acuerdo con su experiencia:"
df2 = pivot_inicial[col].str.split(r'\b\D+\b', expand=True)
df2.replace({
"1": "Totalmente en desacuerdo",
"2": "En desacuerdo",
"3": "Neutro",
"4": "De acuerdo",
"5": "Totalmente de acuerdo",
},inplace=True)
df2.rename({
1: '15.1 Sé cómo resolver los problemas técnicos cuando fallan las TIC',
2: '15.2 Puedo aprender sobre nuevas tecnologías fácilmente',
3: '15.3 Sé cómo usar las TIC con los estudiantes en clase',
4: '15.4 Me apoyo en mis colegas para resolver problemas sobre cómo trabajar algún tema',
5: '15.5 Puedo hablar con otros docentes sobre el diseño de cursos',
6: '15.6 Siento que tengo apoyo de otros docentes para el diseño de mis cursos',
7: '15.7 No tengo con quién conversar sobre el diseño de mis cursos',
}, axis=1, inplace=True)
df2 = df2.drop([0], axis=1)
add_columns(pivot_inicial, df2)
# Pregunta 17
col = "17. Por favor evalúe las siguientes afirmaciones según qué tan de acuerdo está usted con enseñar las siguientes prácticas como objetivos de aprendizaje relacionados con el pensamiento computacional"
df2 = pivot_inicial[col].str.split(r'\b\D+\b', expand=True)
df2.replace({
"1": "Totalmente en desacuerdo",
"2": "En desacuerdo",
"3": "Neutro",
"4": "De acuerdo",
"5": "Totalmente de acuerdo",
},inplace=True)
df2.rename({
1: '17.1 Usar el correo electrónico',
2: '17.2 Crear y usar de modelos y simulaciones',
3: '17.3 Automatizar tareas',
4: '17.4 Usar Word',
5: '17.5 Procesar Datos',
6: '17.6 Resolver problemas a través de herramientas computacionales (como simulaciones)',
7: '17.7 Resolver problemas a través de herramientas computacionales (como lenguajes de programación)',
}, axis=1, inplace=True)
df2 = df2.drop([0], axis=1)
add_columns(pivot_inicial, df2)
# Pregunta 18
col = "18. Por favor evalúe los siguientes enunciados de acuerdo con qué tan preparado(a) se siente para integrar el pensamiento computacional en sus cursos"
df2 = pivot_inicial[col].str.split(r'\b\D+\b', expand=True)
df2.replace({
"1": "Totalmente en desacuerdo",
"2": "En desacuerdo",
"3": "Neutro",
"4": "De acuerdo",
"5": "Totalmente de acuerdo",
},inplace=True)
df2.rename({
1: '18.1 Puedo aplicar las prácticas y habilidades del pensamiento computacional a mi trabajo',
2: '18.2 Puedo definir el pensamiento computacional',
3: '18.3 Puedo describir las prácticas y habilidades que componen el pensamiento computacional a mis estudiantes',
4: '18.4 Puedo aplicar las prácticas y habilidades del pensamiento computacional a mi vida diaria',
5: '18.5 Creo que tengo las habilidades para desarrollar el pensamiento computacional en mis estudiantes',
6: '18.6 Puedo enseñar fácilmente sobre nuevas prácticas computacionales',
7: '18.7 Puedo diseñar una clase que desarrolle el pensamiento computacional en los estudiantes',
8: '18.8 Puedo seleccionar tecnologías para usar en mi salón de clases, que me permitan mejorar qué enseño y cómo enseño pensamiento computacional',
9: '18.9 Puedo aplicar mis habilidades en pensamiento computacional para ayudar a los estudiantes a perseguir sus intereses individuales',
10: '18.10 Puedo implementar y evaluar la idoneidad de una estrategia pedagógica que le permita a los estudiantes desarrollar pensamiento computacional'
}, axis=1, inplace=True)
df2 = df2.drop([0], axis=1)
add_columns(pivot_inicial, df2)
# Pregunta 20
col = "20. En una escala de 1 a 10 (donde 10 es muy a menudo), con qué frecuencia utilizarías las siguientes prácticas pedagógicas para enseñar pensamiento computacional"
pivot_inicial[col] = pivot_inicial[col].str.replace('-1', '0')
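# Illustrative sketch (not part of the pipeline): NS/NC answers are coded as -1.
# Without this substitution, the regex split used on this column would consume the
# minus sign as non-digit text and read -1 as 1; mapping it to 0 first avoids that.
_demo_ns = pd.Series(["Práctica uno -1 Práctica dos 5"]).str.replace('-1', '0', regex=False)
assert _demo_ns.str.split(r'\b\D+\b', expand=True).iloc[0].tolist() == ["", "0", "5"]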
df2 = pivot_inicial[col].str.split(r'\b\D+\b', expand=True)
df2.rename({
1: '20.1 Actividades desconectadas',
2: '20.2 Usa-Modifica-Crea',
3: '20.3 Clase magistral',
4: '20.4 Enseñanza explícita y sin ambigüedades',
5: '20.5 Marcha Silenciosa',
6: '20.6 Aprendizaje basado en proyectos',
}, axis=1, inplace=True)
df2 = df2.drop([0], axis=1)
add_columns(pivot_inicial, df2)
# Pregunta 22
col = "22. Cuando un estudiante se enfrenta a una dificultad creando un programa y no sabe si está correcto, qué tan a menudo, en una escala de 1-10 (donde 10 es siempre), usted:"
pivot_inicial[col] = pivot_inicial[col].str.replace('-1', '0')
df2 = pivot_inicial[col].str.split(r'\b\D+\b', expand=True)
df2.rename({
1: '22.1 Le explicaría la respuesta correcta',
2: '22.2 Le sugeriría ir paso a paso por el programa simulando su ejecución',
3: '22.3 Le diría que revise sus notas',
4: '22.4 Le sugeriría que revise las memorias colectivas',
5: '22.5 Le sugeriría volver a leer el problema',
6: '22.6 Le sugeriría intentar con varios valores para evaluar el programa',
7: '22.7 Le explicaría el problema nuevamente',
}, axis=1, inplace=True)
df2 = df2.drop([0], axis=1)
add_columns(pivot_inicial, df2)
otras = [
"9. ¿Cuáles de las siguientes áreas enseña y en qué grado?",
'9.1 Ciencias naturales y educación ambiental',
'9.2 Ciencias sociales, historia, geografía, constitución política y democracia',
'9.3 Educación artística',
'9.4 Educación ética y en valores humanos',
'9.5 Educación física, recreación y deportes',
'9.6 Educación religiosa',
'9.7 Humanidades, lengua castellana e idiomas extranjeros',
'9.8 Matemáticas',
'9.9 Tecnología e informática',
'9.10 Otro',
"12. ¿Cuáles de las siguientes estrategias usted ha usado en sus clases?",
"12.1 Realizar clubes y actividades extracurriculares para niñas y jóvenes como refuerzo de lo visto en las clases de áreas STEM.",
"12.2 Destacar y reconocer los logros de las niñas y jóvenes, por ejemplo, promover concursos diferenciados por género, como, premio a la niña científica y el niño científico.",
"12.3 Dar referencias o modelos de mujeres destacadas en las áreas STEM, por ejemplo, mostrar la película de Marie Curie.",
"12.4 Motivar que las niñas participen y sean escuchadas, por ejemplo, alternándolas con los niños.",
"12.5 Estimular el liderazgo femenino, por ejemplo, que las niñas y adolescentes sean representantes de grupo.",
"12.6 Generar espacios de confianza para las niñas, por ejemplo, realizando reflexiones sobre el género al comienza de la clase",
"12.7 Prohibir y corregir los comentarios, actitudes y acciones sexistas.",
'12.8 Utilizar lenguaje inclusivo y no realizar estereotipos de género, por ejemplo, decir "Todas las personas" en vez de "todos los niños" o evitar decir que las niñas son delicadas.',
"12.9 Tratos y estímulos igualitarios a toda y todo estudiante independientemente de su género.",
"12.10 Observar el comportamiento de los niños hacia las niñas porque a ellas no se les puede tocar ni con pétalo de ua rosa.",
"13. Por favor evalúe los siguientes enunciados de acuerdo con su experiencia",
'13.1 Es preferible que las mujeres enseñen ciencias sociales y los hombres ciencias exactas',
'13.2 Es normal que la mayoría de los ingenieros mecánicos sean varones porque los hombres son mejores para los números',
'13.3 Por su esencia una mujer tiene mejor desempeño en un proyecto de alto impacto social que en un proyecto de robótica industrial.',
'13.4 Los hombres son mejores para la tecnología que las mujeres.',
'13.5 Las mujeres tienen mayores habilidades para proyectos sociales que tecnológicos.',
'13.6 Los grandes aportes en la computación han sido hechos por hombres.',
'13.7 Que la mayoría de mujeres no opte por áreas exactas es simplemente cuestión de preferencias.',
'13.8 Que la mayoría de personas en artes y humanidades sean mujeres es muestra de su sensibilidad.',
'13.9 Es natural que los hombres sea buenos para los números y las mujeres para las letras',
'13.10 Los hombres son muy ágiles tomando decisiones importantes.',
'13.11 Las niñas son más ordenadas que los niños.',
'13.12 Muchas mujeres se caracterizan por una pureza que pocos hombres poseen',
'13.13 Las mujeres deben ser queridas y protegidas por los hombres',
'13.14 Todo hombre debe tener una mujer a quien amar',
'13.15 El hombre está incompleto sin la mujer',
'13.16 Las mujeres en comparación con los hombres tienden a tener un sentido más refinado de la cultura y el buen gusto',
'15.1 Sé cómo resolver los problemas técnicos cuando fallan las TIC',
'15.2 Puedo aprender sobre nuevas tecnologías fácilmente',
'15.3 Sé cómo usar las TIC con los estudiantes en clase',
'15.4 Me apoyo en mis colegas para resolver problemas sobre cómo trabajar algún tema',
'15.5 Puedo hablar con otros docentes sobre el diseño de cursos',
'15.6 Siento que tengo apoyo de otros docentes para el diseño de mis cursos',
'15.7 No tengo con quién conversar sobre el diseño de mis cursos',
"17. Por favor evalúe las siguientes afirmaciones según qué tan de acuerdo está usted con enseñar las siguientes prácticas como objetivos de aprendizaje relacionados con el pensamiento computacional",
'17.1 Usar el correo electrónico',
'17.2 Crear y usar de modelos y simulaciones',
'17.3 Automatizar tareas',
'17.4 Usar Word',
'17.5 Procesar Datos',
'17.6 Resolver problemas a través de herramientas computacionales (como simulaciones)',
'17.7 Resolver problemas a través de herramientas computacionales (como lenguajes de programación)',
"18. Por favor evalúe los siguientes enunciados de acuerdo con qué tan preparado(a) se siente para integrar el pensamiento computacional en sus cursos",
'18.1 Puedo aplicar las prácticas y habilidades del pensamiento computacional a mi trabajo',
'18.2 Puedo definir el pensamiento computacional',
'18.3 Puedo describir las prácticas y habilidades que componen el pensamiento computacional a mis estudiantes',
'18.4 Puedo aplicar las prácticas y habilidades del pensamiento computacional a mi vida diaria',
'18.5 Creo que tengo las habilidades para desarrollar el pensamiento computacional en mis estudiantes',
'18.6 Puedo enseñar fácilmente sobre nuevas prácticas computacionales',
'18.7 Puedo diseñar una clase que desarrolle el pensamiento computacional en los estudiantes',
'18.8 Puedo seleccionar tecnologías para usar en mi salón de clases, que me permitan mejorar qué enseño y cómo enseño pensamiento computacional',
'18.9 Puedo aplicar mis habilidades en pensamiento computacional para ayudar a los estudiantes a perseguir sus intereses individuales',
'18.10 Puedo implementar y evaluar la idoneidad de una estrategia pedagógica que le permita a los estudiantes desarrollar pensamiento computacional',
"20. En una escala de 1 a 10 (donde 10 es muy a menudo), con qué frecuencia utilizarías las siguientes prácticas pedagógicas para enseñar pensamiento computacional",
'20.1 Actividades desconectadas',
'20.2 Usa-Modifica-Crea',
'20.3 Clase magistral',
'20.4 Enseñanza explícita y sin ambigüedades',
'20.5 Marcha Silenciosa',
'20.6 Aprendizaje basado en proyectos',
"22. Cuando un estudiante se enfrenta a una dificultad creando un programa y no sabe si está correcto, qué tan a menudo, en una escala de 1-10 (donde 10 es siempre), usted:",
'22.1 Le explicaría la respuesta correcta',
'22.2 Le sugeriría ir paso a paso por el programa simulando su ejecución',
'22.3 Le diría que revise sus notas',
'22.4 Le sugeriría que revise las memorias colectivas',
'22.5 Le sugeriría volver a leer el problema',
'22.6 Le sugeriría intentar con varios valores para evaluar el programa',
'22.7 Le explicaría el problema nuevamente'
]
#########################
df_inicial = pd.read_csv('data/crudos/Inicial.csv',
                         error_bad_lines=False,  # deprecated in pandas 1.3 and removed in 2.0; use on_bad_lines='skip' there
                         warn_bad_lines=False,
                         low_memory=False)
df_inicial["Pregunta"] = df_inicial["Pregunta"].str.replace("\n", " ").str.replace("\b", " ")  # both replacements must be element-wise str.replace; a plain Series.replace only matches whole cell values
_items = df_inicial[df_inicial["Pregunta"] == "Por favor evalúe los siguientes enunciados de acuerdo con su experiencia: "]["Respuesta"]
_items = _items.str.contains("TIC")
df_inicial["temp"] = _items.copy()
df_inicial["temp"] = df_inicial["temp"].fillna(False)
df_inicial.loc[df_inicial["temp"],"Pregunta"] = "15. Por favor evalúe los siguientes enunciados de acuerdo con su experiencia:"
df_inicial = df_inicial.drop(["temp"],axis=1)
pivot_inicial = df_inicial.pivot_table(
    index=['Nombre', 'Apellido', 'Correo Electrónico', 'Curso', 'ID Asignado Por Moodle', 'Nombre De Usuario'],
    columns='Pregunta',
    values='Respuesta',
    aggfunc='first'
).reset_index()
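# Illustrative sketch (not part of the pipeline; toy names are made up): the pivot
# reshapes the long export (one row per answered question) into one column per
# question, and aggfunc='first' keeps the first answer if a respondent appears twice.
_demo_long = pd.DataFrame({
    "Nombre": ["Ana", "Ana", "Luis"],
    "Pregunta": ["Q1", "Q2", "Q1"],
    "Respuesta": ["Si", "No", "Si"],
})
_demo_wide = _demo_long.pivot_table(index="Nombre", columns="Pregunta", values="Respuesta", aggfunc="first").reset_index()
assert _demo_wide.loc[_demo_wide["Nombre"] == "Ana", "Q2"].iloc[0] == "No"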
pivot_inicial.columns = [col.replace("\n", " ").strip() for col in pivot_inicial.columns]
pivot_inicial.columns = [col.replace("\r", " ").strip() for col in pivot_inicial.columns]
pivot_inicial.columns = [col.replace("\b", " ").strip() for col in pivot_inicial.columns]
df_inicial.columns = [col.replace("\n", " ").strip() for col in df_inicial.columns]
#########################
encuesta_caraterizacion = {
'¿Cómo prefieres que te llamen?':
'2. ¿Cómo prefieres que te llamen?',
'Número de Cédula':
'3. Número de Cédula',
'Rango de edad':
'4. Rango de edad',
'Mi primera lengua es español:':
'5. Mi primera lengua es español:',
'Departamento de residencia':
'6. Departamento de residencia',
'Municipio de residencia:':
'7. Municipio de residencia:',
'Institución Educativa en la que laboro':
'8. Institución Educativa en la que laboro',
'¿A qué estatuto docente pertenece?':
'9. ¿A qué estatuto docente pertenece?',
'Por favor evalúa tus conocimientos de herramienta digitales del 1 al 10, según tu grado de familiarización en el manejo de los mismos (10 es muy hábil)':
'10. Por favor evalúa tus conocimientos de herramienta digitales del 1 al 10, según tu grado de familiarización en el manejo de los mismos (10 es muy hábil)',
'Por favor evalúa, en la escala del 1 al 10, tus conocimientos previos sobre los contenidos pedagógicos que se estudiarán en el curso, según tu nivel de experiencia (10 es experto)':
'11. Por favor evalúa, en la escala del 1 al 10, tus conocimientos previos sobre los contenidos pedagógicos que se estudiarán en el curso, según tu nivel de experiencia (10 es experto)',
'Por favor evalúa tus habilidades previas en programación, según la siguiente escala':
'12. Por favor evalúa tus habilidades previas en programación, según la siguiente escala',
'Agrega cualquier comentario adicional que quieras hacer, con relación a tus conocimientos previos y/o cómo esperas beneficiarte de los contenidos que estudiarás.':
"13. Agrega cualquier comentario adicional que quieras hacer, con relación a tus conocimientos previos y/o cómo espera beneficiarse de los contenidos que estudiarás.",
'Considero que tengo la autorregulación, disciplina y responsabilidad que se requieren para ser exitoso(a) en este programa de formación virtual':
'14. Considero que tengo la autorregulación, disciplina y responsabilidad que se requieren para ser exitoso(a) en este programa de formación virtual',
'Considero que los conocimientos y materiales que adquiriré durante el programa serán relevantes para mi trabajo como docente.':
"15. Considero que los conocimientos y materiales que adquiriré durante el programa serán relevantes para mi trabajo como docente.",
'Considero que lo que aprenderé en el curso lo podre aplicar fácilmente en mi contexto de enseñanza/aprendizaje.':
"16. Considero que lo que aprenderé en el curso lo podre aplicar fácilmente en mi contexto de enseñanza/aprendizaje.",
'Considero que los recursos de internet y equipos con los que cuento serán suficientes para participar en las actividades del curso.':
"17. Considero que los recursos de internet y equipos con los que cuento serán suficientes para participar en las actividades del curso.",
'He hecho arreglos para disponer, cabalmente, del tiempo semanal requerido para desarrollar las actividades propuestas de forma adecuada.':
"18. He hecho arreglos para disponer, cabalmente, del tiempo semanal requerido para desarrollar las actividades propuestas de forma adecuada.",
'El o los horarios que me resultan más adecuados para asistir a los encuentros sincrónicos es/son: (Marca todas las opciones que te resulten adecuadas)':
"19. El o los horarios que me resultan más adecuados para asistir a los encuentros sincrónicos es/son: (Marca todas las opciones que te resulten adecuadas)"
}
for a in pivot_inicial.columns:
    if "Por favor evalúa tus habilidades previas en programación" in a:
        aux = a
pivot_inicial2 = pivot_inicial.rename(columns={aux:'Por favor evalúa tus habilidades previas en programación, según la siguiente escala'})
to_drop = list(encuesta_caraterizacion.keys())
#########################
preguntas_info_inicial = {
"ID Asignado Por Moodle": "ID Moodle",
"Nombre": "Nombre",
"Apellido": "Apellido",
"Correo Electrónico": "Correo Electrónico",
"Curso": "Curso",
"Nombre De Usuario":
"1. Cédula",
"Edad (Años)":
"2. Edad",
"Su institución está en un contexto:":
"3. Contexto IE",
"Género:":
"4. Género",
'¿Es usted cabeza de hogar?':
'5. ¿Es usted cabeza de hogar? ',
'¿Cuál es su estado civil?':
'6. ¿Cuál es su estado civil?',
'Número de horas de clases semanales que orienta (Solo números)':
'7. Número de horas de clases semanales que orienta',
'¿Es usted líder comunitario?':
'8. ¿Es usted líder comunitario?',
"¿Cuáles de las siguientes áreas enseña y en qué grado? (Marque 'NS/NC' si no enseña el área)":
"9. ¿Cuáles de las siguientes áreas enseña y en qué grado?",
'¿De acuerdo con lo anterior, usted es docente de áreas STEM (ciencias naturales, matemática, tecnología e informática) o No STEM (ciencias sociales, educación artística, educación física, educación religiosa, humanidades e idiomas extranjeros)?':
'10. ¿De acuerdo con lo anterior, usted es docente de áreas STEM o No STEM?',
'Su formación es en áreas' :
'11. Su formación es en áreas',
"¿Cuáles de las siguientes estrategias usted ha usado en sus clases? (Opción múltiple)":
"12. ¿Cuáles de las siguientes estrategias usted ha usado en sus clases?",
"Por favor evalúe los siguientes enunciados de acuerdo con su experiencia:":
"13. Por favor evalúe los siguientes enunciados de acuerdo con su experiencia",
"Agregue cualquier comentario o aclaración sobre las preguntas anteriores.":
"14. Comentario o clarificación sobre las preguntas anteriores",
"15. Por favor evalúe los siguientes enunciados de acuerdo con su experiencia:":
"15. Por favor evalúe los siguientes enunciados de acuerdo con su experiencia:",
#"Agrega cualquier comentario o clarificación sobre las preguntas anteriores.":
#"16. Comentario o clarificación sobre las preguntas anteriores",
"Por favor evalúe las siguientes afirmaciones según qué tan de acuerdo está usted con enseñar las siguientes prácticas como objetivos de aprendizaje relacionados con el pensamiento computacional:":
"17. Por favor evalúe las siguientes afirmaciones según qué tan de acuerdo está usted con enseñar las siguientes prácticas como objetivos de aprendizaje relacionados con el pensamiento computacional",
"Por favor evalúe los siguientes enunciados de acuerdo con qué tan preparado(a) se siente para integrar el pensamiento computacional en sus cursos:":
"18. Por favor evalúe los siguientes enunciados de acuerdo con qué tan preparado(a) se siente para integrar el pensamiento computacional en sus cursos",
#"Agrega cualquier comentario o clarificación sobre las preguntas anteriores.":
#"19 .Comentario o clarificación sobre las preguntas anteriores",
"En una escala de 1 a 10 (donde 10 es muy a menudo), con qué frecuencia utilizarías las siguientes prácticas pedagógicas para enseñar pensamiento computacional. Si no conoce alguna práctica pedagógica, por favor elija la opción NS/NC.":
"20. En una escala de 1 a 10 (donde 10 es muy a menudo), con qué frecuencia utilizarías las siguientes prácticas pedagógicas para enseñar pensamiento computacional",
#"Agrega cualquier comentario o clarificación adicional sobre las estrategias de enseñanza de la pregunta anterior.":
#"21. Comentario o clarificación adicional sobre las estrategias de enseñanza de la pregunta anterior",
"Cuando un estudiante se enfrenta a una dificultad creando un programa y no sabe si está correcto, qué tan a menudo, en una escala de 1-10 (donde 10 es siempre), usted:":
"22. Cuando un estudiante se enfrenta a una dificultad creando un programa y no sabe si está correcto, qué tan a menudo, en una escala de 1-10 (donde 10 es siempre), usted:",
}
preguntas_propias_rename = {
"La docente Margarita decidió hacer que sus estudiantes de segundo de primaria utilicen los computadores del colegio para predecir el clima de una semana (temperatura, precipitaciones, y viento). Cada estudiante debe dibujar cómo se verá el clima en la ciudad en dicha semana. Margarita, creó un archivo compartido donde los estudiantes ingresarán la información. Luego tomaron las predicciones de modelos de Internet y los ingresaron en el mismo documento compartido. Durante una semana tomaron los datos reales, y luego, proyectaron en el tablero los datos predichos por los estudiantes, los del modelo de Internet, y los datos reales. Al finalizar, Margarita les mostró a los estudiantes cómo hacer un gráfico para comparar los diferentes datos. ¿Está Margarita desarrollando el pensamiento computacional de sus estudiantes? Seleccione todas las respuestas que considere correctas.":
"24. La docente Margarita decidió hacer que sus estudiantes de segundo de primaria utilicen los computadores del colegio para predecir el clima de una semana (temperatura, precipitaciones, y viento). Cada estudiante debe dibujar cómo se verá el clima en la ciudad en dicha semana. Margarita, creó un archivo compartido donde los estudiantes ingresarán la información. Luego tomaron las predicciones de modelos de Internet y los ingresaron en el mismo documento compartido. Durante una semana tomaron los datos reales, y luego, proyectaron en el tablero los datos predichos por los estudiantes, los del modelo de Internet, y los datos reales. Al finalizar, Margarita les mostró a los estudiantes cómo hacer un gráfico para comparar los diferentes datos. ¿Está Margarita desarrollando el pensamiento computacional de sus estudiantes? Seleccione todas las respuestas que considere correctas.",
"La cafetería del colegio empacó almuerzos iguales para todos los estudiantes, menos los de Juan Arias y María Vásquez que no pueden comer huevo. Los almuerzos están marcados con el apellido de los estudiantes y organizados alfabéticamente. Para verificar que su almuerzo cumple con la restricción alimenticia María con ayuda de su profesor buscan en las cajas. María sabe que su almuerzo debe estar al final, así que busca hasta que encuentre una caja que comience por una letra cerca de la V. Cuando encuentra una que comienza con Trujillo, mira el último almuerzo de esa caja y se da cuenta que termina en Zapata. Así, María se da cuenta que su almuerzo debe estar allí. ¿Está María usando el pensamiento computacional para encontrar su almuerzo? Seleccione todas las respuestas que considere correctas.":
"25. La cafetería del colegio empacó almuerzos iguales para todos los estudiantes, menos los de Juan Arias y María Vásquez que no pueden comer huevo. Los almuerzos están marcados con el apellido de los estudiantes y organizados alfabéticamente. Para verificar que su almuerzo cumple con la restricción alimenticia María con ayuda de su profesor buscan en las cajas. María sabe que su almuerzo debe estar al final, así que busca hasta que encuentre una caja que comience por una letra cerca de la V. Cuando encuentra una que comienza con Trujillo, mira el último almuerzo de esa caja y se da cuenta que termina en Zapata. Así, María se da cuenta que su almuerzo debe estar allí. ¿Está María usando el pensamiento computacional para encontrar su almuerzo? Seleccione todas las respuestas que considere correctas.",
"Un ratón robot ha sido programado para seguir las siguientes instrucciones: (1) Sigue hacia abajo hasta que haya un cruce a uno de los lados (2) Cuando encuentres un cruce, atraviésalo (3) Vuelve al paso (1). Considera el siguiente laberinto para nuestro ratón robot. ¿En cuál de los tubos debería comenzar el robot para llegar al queso?":
"26. Un ratón robot ha sido programado para seguir instrucciones. ¿En cuál de los tubos debería comenzar el robot para llegar al queso?",
"Andrea hizo un diagrama de flujo para diseñar el algoritmo que le permitirá encender automáticamente el ventilador cuando esté muy caliente su habitación. Sin embargo, no está segura de que funcione. ¿Qué le podrías recomendar?":
"27. Andrea hizo un diagrama de flujo para diseñar el algoritmo que le permitirá encender automáticamente el ventilador cuando esté muy caliente su habitación. Sin embargo, no está segura de que funcione. ¿Qué le podrías recomendar?",
"Considera el siguiente segmento de código¿Después de que el anterior código se ejecuta, cual es el valor de la variable secuela?":
"28. Considera el siguiente segmento de código ¿Después de que el anterior código se ejecuta, cual es el valor de la variable secuela?",
"Considera el siguiente código: Si a=3, b=8 y c=10, ¿Qué imprimirá el programa?":
"29. Considera el siguiente código: Si a=3, b=8 y c=10, ¿Qué imprimirá el programa?",
"Considera el siguiente código: Después de que se ejecute el código anterior, ¿Cuáles de los siguientes enunciados sonverdaderos?":
"30. Considera el siguiente código: Después de que se ejecute el código anterior, ¿Cuáles de los siguientes enunciados son verdaderos?",
"Suponiendo que “a” y “b” son variables booleanas. Considera la siguiente expresión lógica:¿Cuál de las siguientes afirmaciones describe de manera más precisa la evaluación de las expresiones?":
"31. Suponiendo que “a” y “b” son variables booleanas. Considera la siguiente expresión lógica:¿Cuál de las siguientes afirmaciones describe de manera más precisa la evaluación de las expresiones?",
"La alcaldía acaba de contratar a Valeria para hacer un programa en la Micro:bit que controle el alumbrado público de su ciudad. Utilizando el sensor de luz de la tarjeta Micro:bit, ella se dio cuenta que cuando mide niveles de luz con un valor por debajo de 100, ya está suficientemente oscuro como para prender el alumbrado público. El programa que hizo funciona bien para prender el alumbrado de la ciudad, pero luego cuando amanece, las luces siguen encendidas durante todo el día. Valeria no está segura cómo solucionarlo, pero tiene algunas ideas que cree que podrían funcionar. ¿Cuál de las siguientes opciones crees que debería usar Valeria? Imagen 1 Imagen 2 Imagen 3 Imagen 4":
"32. La alcaldía acaba de contratar a Valeria para hacer un programa en la Micro:bit que controle el alumbrado público de su ciudad. Utilizando el sensor de luz de la tarjeta Micro:bit, ella se dio cuenta que cuando mide niveles de luz con un valor por debajo de 100, ya está suficientemente oscuro como para prender el alumbrado público. El programa que hizo funciona bien para prender el alumbrado de la ciudad, pero luego cuando amanece, las luces siguen encendidas durante todo el día. Valeria no está segura cómo solucionarlo, pero tiene algunas ideas que cree que podrían funcionar. ¿Cuál de las siguientes opciones crees que debería usar Valeria?",
"¿Qué botella debe cambiarse de color para que el resultado final sea una botella de color blanco? Tenga en cuenta lo que hace cada máquina recicladora que se usa en este sistema.":
"33. ¿Qué botella debe cambiarse de color para que el resultado final sea una botella de color blanco? Tenga en cuenta lo que hace cada máquina recicladora que se usa en este sistema.",
"Teniendo en cuenta el siguiente fragmento de código, Alejandra responde a la pregunta ¿Cuál será el valor final de “Y”? afirmando que el valor final será 44. El código retorna 120¿Qué opinas de la respuesta de Alejandra?":
"34. Teniendo en cuenta el siguiente fragmento de código, Alejandra responde a la pregunta ¿Cuál será el valor final de “Y”? afirmando que el valor final será 44. El código retorna 120 ¿Qué opinas de la respuesta de Alejandra?",
}
#########################
col_inicial = [i for i in pivot_inicial2.columns if i.startswith(
'Por favor evalúa tus habilidades previas en programación')][0]
pivot_inicial3 = pivot_inicial2.reset_index()
pivot_inicial3 = pivot_inicial3.drop(to_drop, axis=1)  # drop from the reset copy, not from pivot_inicial2 again
merged = preguntas_info_inicial | preguntas_propias_rename
cols = []
for e in merged:
    cols.append(merged[e])
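# Illustrative sketch (not part of the pipeline): merging the rename maps with `|`
# requires Python 3.9+, and keys from the right-hand operand win on collision.
assert {"a": 1, "b": 2} | {"b": 9, "c": 3} == {"a": 1, "b": 9, "c": 3}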
pivot_inicial.rename(merged, axis=1, inplace=True)
add_equal_columns(pivot_inicial)
merge = []
merge.extend(otras)
merge.extend(cols)
pivot_inicial[merge].to_excel("PretestInicial.xlsx", encoding='utf-8-sig')  # note: the encoding argument of to_excel was deprecated and later removed (pandas 2.0); xlsx output is already UTF-8
# --- chesschallenge/chess/tests/test_coordinate_validation.py (repo: Mika-IO/-backend-technical-assessment, license: MIT) ---
import pytest
from chesschallenge.chess.chess import validate_coordinate


def test_a2_is_valid_coordinate():
    assert validate_coordinate("a2") is True

def test_c5_is_valid_coordinate():
    assert validate_coordinate("c5") is True

def test_34_is_invalid_coordinate():
    assert validate_coordinate("34") is False

def test_345_is_invalid_coordinate():
    assert validate_coordinate("345") is False

def test_ans_is_invalid_coordinate():
    assert validate_coordinate("ans") is False

def test_bb_is_invalid_coordinate():
    assert validate_coordinate("bb") is False

def test_z9_is_invalid_coordinate():
    assert validate_coordinate("z9") is False
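
# The tests above pin down the expected behavior. A minimal sketch of a function
# satisfying them (hypothetical; the real chesschallenge implementation may differ):
def _validate_coordinate_sketch(coord):
    import re
    # A valid square is a file a-h followed by a rank 1-8, e.g. "a2".
    return re.fullmatch(r"[a-h][1-8]", coord) is not None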
# --- Project/new/proj_spec/submission.py (repo: suryaavala/17s1-cs9318, license: MIT) ---
## import modules here
################# training #################
def train(data, classifier_file):  # do not change the heading of the function
    pass  # **replace** this line with your code
################# testing #################
def test(data, classifier_file):  # do not change the heading of the function
    pass  # **replace** this line with your code
# --- tests.py (repo: Atterratio/flake8-prevent-fails, license: MIT) ---
] | null | null | null | import ast
import unittest
from flake8_prevent_fails import FailsChecker, MESSAGES
class TestIndexes(unittest.TestCase):
def test_dirty_list(self):
data = ast.parse('test_var = test_list[0]')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 1)
for result in results:
self.assertIn(MESSAGES.get('PF101'), result)
data = ast.parse('try:\n'
' test_var = test_list[0]\n'
'except AttributeError:\n'
' pass')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 1)
for result in results:
self.assertIn(MESSAGES.get('PF101'), result)
data = ast.parse('try:\n'
' test_var = test_list[0]\n'
'except (AttributeError, Error):\n'
' pass')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 1)
for result in results:
self.assertIn(MESSAGES.get('PF101'), result)
data = ast.parse('try:\n'
' test_var = test_list[var]\n'
'except AttributeError:\n'
' pass')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 1)
for result in results:
self.assertIn(MESSAGES.get('PF103'), result)
data = ast.parse('try:\n'
' test_var = test_list[var]\n'
'except (AttributeError, Error):\n'
' pass')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 1)
for result in results:
self.assertIn(MESSAGES.get('PF103'), result)
def test_cleaned_except_list_with_num(self):
data = ast.parse('try:\n'
' test_var = test_list[0]\n'
'except:\n'
' pass')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 0)
data = ast.parse('try:\n'
' test_var = test_list[0]\n'
'except IndexError:\n'
' pass')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 0)
data = ast.parse('try:\n'
' test_var = test_list[0]\n'
'except (AttributeError, IndexError):\n'
' pass')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 0)
def test_cleaned_except_list_with_name(self):
data = ast.parse('try:\n'
' test_var = test_list[var]\n'
'except:\n'
' pass')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 0)
data = ast.parse('try:\n'
' test_var = test_list[var]\n'
'except IndexError:\n'
' pass')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 0)
data = ast.parse('try:\n'
' test_var = test_list[var]\n'
'except (AttributeError, IndexError):\n'
' pass')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 0)
def test_cleaned_if_lt_list_with_num(self):
data = ast.parse('if 0 < len(test_list):\n'
' test_var = test_list[0]')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 0)
data = ast.parse('if 0 > len(over_list):\n'
' test_var = test_list[0]')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 1)
for result in results:
self.assertIn(MESSAGES.get('PF101'), result)
data = ast.parse('if 0 < len(test_list):\n'
' test_var = test_list[1]')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 1)
for result in results:
self.assertIn(MESSAGES.get('PF101'), result)
data = ast.parse('if 0 < len(over_list):\n'
' test_var = test_list[0]')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 1)
for result in results:
self.assertIn(MESSAGES.get('PF101'), result)
def test_cleaned_if_gt_list_with_num(self):
data = ast.parse('if len(test_list) > 0:\n'
' test_var = test_list[0]')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 0)
data = ast.parse('if len(test_list) < 0:\n'
' test_var = test_list[0]')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 1)
for result in results:
self.assertIn(MESSAGES.get('PF101'), result)
data = ast.parse('if len(test_list) > 0:\n'
' test_var = test_list[1]')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 1)
for result in results:
self.assertIn(MESSAGES.get('PF101'), result)
data = ast.parse('if len(over_list) > 0:\n'
' test_var = test_list[0]')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 1)
for result in results:
self.assertIn(MESSAGES.get('PF101'), result)
def test_dirty_dict(self):
data = ast.parse('test_var = test_list["test"]')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 1)
for result in results:
self.assertIn(MESSAGES.get('PF102'), result)
data = ast.parse('try:\n'
' test_var = test_list["test"]\n'
'except AttributeError:\n'
' pass')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 1)
for result in results:
self.assertIn(MESSAGES.get('PF102'), result)
data = ast.parse('try:\n'
' test_var = test_list["test"]\n'
'except (AttributeError, Error):\n'
' pass')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 1)
for result in results:
self.assertIn(MESSAGES.get('PF102'), result)
def test_cleaned_except_dict_with_str(self):
data = ast.parse('try:\n'
' test_var = test_list["test"]\n'
'except:\n'
' pass')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 0)
data = ast.parse('try:\n'
' test_var = test_list["test"]\n'
'except KeyError:\n'
' pass')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 0)
data = ast.parse('try:\n'
' test_var = test_list["test"]\n'
'except (AttributeError, KeyError):\n'
' pass')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 0)
def test_cleaned_except_dict_with_name(self):
data = ast.parse('try:\n'
' test_var = test_list[var]\n'
'except KeyError:\n'
' pass')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 0)
data = ast.parse('try:\n'
' test_var = test_list[var]\n'
'except (AttributeError, KeyError):\n'
' pass')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 0)
def test_cleaned_if_dict_with_str(self):
data = ast.parse('if test_list.get("test"):\n'
' test_var = test_list["test"]')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 0)
data = ast.parse('if test_list.get("tests"):\n'
' test_var = test_list["test"]')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 1)
for result in results:
self.assertIn(MESSAGES.get('PF102'), result)
data = ast.parse('if tests_list.get("test"):\n'
' test_var = test_list["test"]')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 1)
for result in results:
self.assertIn(MESSAGES.get('PF102'), result)
data = ast.parse('if test_list.let("test"):\n'
' test_var = test_list["test"]')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 1)
for result in results:
self.assertIn(MESSAGES.get('PF102'), result)
def test_cleaned_if_dict_with_name(self):
data = ast.parse('if test_list.get(var):\n'
' test_var = test_list[var]')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 0)
data = ast.parse('if test_list.get(vars):\n'
' test_var = test_list[var]')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 1)
for result in results:
self.assertIn(MESSAGES.get('PF103'), result)
data = ast.parse('if test_list.get("tests"):\n'
' test_var = test_list[var]')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 1)
for result in results:
self.assertIn(MESSAGES.get('PF103'), result)
data = ast.parse('if tests_list.get(var):\n'
' test_var = test_list[var]')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 1)
for result in results:
self.assertIn(MESSAGES.get('PF103'), result)
data = ast.parse('if test_list.let(var):\n'
' test_var = test_list[var]')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 1)
for result in results:
self.assertIn(MESSAGES.get('PF103'), result)
def test_cleaned_for_dict(self):
data = ast.parse('for var in test_list:\n'
' test_var = test_list[var]')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 0)
data = ast.parse('for vars in test_list:\n'
' test_var = test_list[var]')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 1)
for result in results:
self.assertIn(MESSAGES.get('PF103'), result)
data = ast.parse('for var in tests_list:\n'
' test_var = test_list[var]')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 1)
for result in results:
self.assertIn(MESSAGES.get('PF103'), result)
data = ast.parse('for var in tests_list:\n'
' test_var = test_list["test"]')
checker = FailsChecker(data, None, None)
results = list(e for e in checker.run())
self.assertEqual(len(results), 1)
for result in results:
self.assertIn(MESSAGES.get('PF102'), result)
if __name__ == '__main__':
unittest.main()
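The assertions above exercise FailsChecker end-to-end through `checker.run()`. As a rough illustration of the AST traversal such a flake8 plugin builds on — this is not the plugin's actual implementation, since FailsChecker additionally inspects enclosing try/except handlers and `len()`/`.get()` guards — a minimal subscript locator (name assumed) might look like:

```python
import ast


def find_subscript_locations(source):
    """Return (line, column) for every subscript expression in *source*.

    Toy sketch only: it reports all subscripts, with none of the
    guard analysis the real checker performs before flagging one.
    """
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        # ast.Subscript covers both list indexing and dict key lookups.
        if isinstance(node, ast.Subscript):
            hits.append((node.lineno, node.col_offset))
    return hits
```

A checker built on this would then walk up from each hit to decide whether a surrounding `try/except IndexError` or `if len(...) > 0:` guard makes the access safe, which is exactly what the PF101/PF102/PF103 cases above distinguish.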
| 40.929619 | 65 | 0.523393 | 1,631 | 13,957 | 4.380748 | 0.036174 | 0.060462 | 0.06718 | 0.083975 | 0.976487 | 0.973548 | 0.968509 | 0.961652 | 0.943457 | 0.943457 | 0 | 0.01499 | 0.354732 | 13,957 | 340 | 66 | 41.05 | 0.77837 | 0 | 0 | 0.892256 | 0 | 0 | 0.179265 | 0.014975 | 0 | 0 | 0 | 0 | 0.215488 | 1 | 0.037037 | false | 0.057239 | 0.010101 | 0 | 0.050505 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
69db47a83f332f756d460a7aca32bb1e0ce6bc4d | 2,228 | py | Python | tests/test_sim_league_browser.py | League-Advisor/league-advisor | b77895833075ff13b075875eff421ec9fef9770e | [
"MIT"
] | null | null | null | tests/test_sim_league_browser.py | League-Advisor/league-advisor | b77895833075ff13b075875eff421ec9fef9770e | [
"MIT"
] | 10 | 2021-11-04T16:45:45.000Z | 2021-11-12T11:11:15.000Z | tests/test_sim_league_browser.py | League-Advisor/league-advisor | b77895833075ff13b075875eff421ec9fef9770e | [
"MIT"
] | null | null | null | """This module will tests LeagueBrowser class methodes"""
from league_advisor.league_browser import LeagueBrowser
from tests.flo import diff
def test_import_class():
assert LeagueBrowser()
def test_leaguebrowser_receive_user_input_method_quit():
leaguebrowser = LeagueBrowser()
diffs = diff(leaguebrowser.receive_user_input,path="tests/simulations/leaguebrowser_receive_user_input_method_quit.sim.txt")
assert not diffs, diffs
def test_leaguebrowser_receive_user_input_method_item_class():
leaguebrowser = LeagueBrowser()
diffs = diff(leaguebrowser.receive_user_input, path="tests/simulations/leaguebrowser_receive_user_input_method_item.sim.txt")
assert not diffs, diffs
def test_leaguebrowser_receive_item_method_classes():
leaguebrowser = LeagueBrowser()
diffs = diff(leaguebrowser.receive_user_input, path="tests/simulations/leaguebrowser_receive_item_method_classes.sim.txt")
assert not diffs, diffs
# def test_leaguebrowser_receive_item_method_names():
# leaguebrowser = LeagueBrowser()
# diffs = diff(leaguebrowser.receive_user_input, path="tests/simulations/leaguebrowser_receive_item_method_names.sim.txt")
# assert not diffs, diffs
def test_leaguebrowser_receive_item_method_names_backmenu():
leaguebrowser = LeagueBrowser()
diffs = diff(leaguebrowser.receive_user_input,
path="tests/simulations/leaguebrowser_receive_item_method_nanes_backmenu.sim.txt")
assert not diffs, diffs
def test_leaguebrowser_receive_item_method_classes_backmenu():
leaguebrowser = LeagueBrowser()
diffs = diff(leaguebrowser.receive_user_input,
path="tests/simulations/leaguebrowser_receive_item_method_classes_backmenu.sim.txt")
assert not diffs, diffs
def test_leaguebrowser_receive_champions_start():
leaguebrowser = LeagueBrowser()
diffs = diff(leaguebrowser.receive_champions,path="tests/simulations/browser_recieve_champions_start.sim.txt")
assert not diffs, diffs
def test_leaguebrowser_receive_champions_info():
leaguebrowser = LeagueBrowser()
diffs = diff(leaguebrowser.receive_champions,path="tests/simulations/browser_recieve_champions_info.sim.txt")
assert not diffs, diffs | 40.509091 | 129 | 0.798474 | 260 | 2,228 | 6.476923 | 0.142308 | 0.261283 | 0.142518 | 0.172209 | 0.892518 | 0.889549 | 0.865202 | 0.831354 | 0.831354 | 0.831354 | 0 | 0 | 0.125673 | 2,228 | 55 | 130 | 40.509091 | 0.864476 | 0.131508 | 0 | 0.470588 | 0 | 0 | 0.243902 | 0.243902 | 0 | 0 | 0 | 0 | 0.235294 | 1 | 0.235294 | false | 0 | 0.088235 | 0 | 0.323529 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
384eba14f4998ea63dd57ca896eef627c273e04d | 724 | py | Python | problems/dfs/Solution695.py | akalu/cs-problems-python | 9b1bd8e3932be62135a38a77f955ded9a766b654 | [
"MIT"
] | null | null | null | problems/dfs/Solution695.py | akalu/cs-problems-python | 9b1bd8e3932be62135a38a77f955ded9a766b654 | [
"MIT"
] | null | null | null | problems/dfs/Solution695.py | akalu/cs-problems-python | 9b1bd8e3932be62135a38a77f955ded9a766b654 | [
"MIT"
] | null | null | null | """ Given a non-empty 2D array grid of 0's and 1's, an island is a group of 1's
(representing land) connected 4-directionally (horizontal or vertical.) You
may assume all four edges of the grid are surrounded by water. Find the
maximum area of an island in the given 2D array. (If there is no island, the
maximum area is 0.)
Example 1:
[
[0,0,1,0,0,0,0,1,0,0,0,0,0],
[0,0,0,0,0,0,0,1,1,1,0,0,0],
[0,1,1,0,1,0,0,0,0,0,0,0,0],
[0,1,0,0,1,1,0,0,1,0,1,0,0],
[0,1,0,0,1,1,0,0,1,1,1,0,0],
[0,0,0,0,0,0,0,0,0,0,1,0,0],
[0,0,0,0,0,0,0,1,1,1,0,0,0],
[0,0,0,0,0,0,0,1,1,0,0,0,0]
]
output: 6
IDEA:
use dfs to traverse all cells
"""
class Solution695:
pass
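The class body above is an empty stub. A minimal iterative-DFS sketch of the approach the docstring outlines — the function name `max_area_of_island` and the in-place visited marking are assumptions, not the original author's code — could be:

```python
def max_area_of_island(grid):
    """Return the largest island area in *grid* via iterative DFS.

    Sketch implementation of the idea in the docstring above;
    mutates *grid* in place to mark visited cells.
    """
    if not grid:
        return 0
    rows, cols = len(grid), len(grid[0])
    best = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 1:
                continue
            # Flood-fill this island with an explicit stack.
            area = 0
            stack = [(r, c)]
            grid[r][c] = 0  # mark visited
            while stack:
                i, j = stack.pop()
                area += 1
                for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                    if 0 <= ni < rows and 0 <= nj < cols and grid[ni][nj] == 1:
                        grid[ni][nj] = 0
                        stack.append((ni, nj))
            best = max(best, area)
    return best
```

On the example grid from the docstring this returns 6, matching the stated output.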
| 26.814815 | 81 | 0.569061 | 182 | 724 | 2.263736 | 0.296703 | 0.305825 | 0.356796 | 0.38835 | 0.254854 | 0.252427 | 0.240291 | 0.225728 | 0.225728 | 0.225728 | 0 | 0.207143 | 0.226519 | 724 | 26 | 82 | 27.846154 | 0.528571 | 0.856354 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
385c3048d9a4773ed0d837c4eebfacdbb3aad296 | 5,897 | py | Python | lib/django/tests/regressiontests/httpwrappers/tests.py | vin/gerbilcount | fdffe648c3e9ad2667a6edfe0e19d4446c522395 | [
"Apache-2.0"
] | 2 | 2016-05-08T08:57:01.000Z | 2020-02-08T07:39:48.000Z | lib/django/tests/regressiontests/httpwrappers/tests.py | Arachnid/google_appengine | 2e950619f5027f414131fafc3cc253af4875a0fe | [
"Apache-2.0"
] | null | null | null | lib/django/tests/regressiontests/httpwrappers/tests.py | Arachnid/google_appengine | 2e950619f5027f414131fafc3cc253af4875a0fe | [
"Apache-2.0"
] | null | null | null | """
###################
# Empty QueryDict #
###################
>>> q = QueryDict('')
>>> q['foo']
Traceback (most recent call last):
...
MultiValueDictKeyError: "Key 'foo' not found in <MultiValueDict: {}>"
>>> q['something'] = 'bar'
Traceback (most recent call last):
...
AttributeError: This QueryDict instance is immutable
>>> q.get('foo', 'default')
'default'
>>> q.getlist('foo')
[]
>>> q.setlist('foo', ['bar', 'baz'])
Traceback (most recent call last):
...
AttributeError: This QueryDict instance is immutable
>>> q.appendlist('foo', ['bar'])
Traceback (most recent call last):
...
AttributeError: This QueryDict instance is immutable
>>> q.has_key('foo')
False
>>> q.items()
[]
>>> q.lists()
[]
>>> q.keys()
[]
>>> q.values()
[]
>>> len(q)
0
>>> q.update({'foo': 'bar'})
Traceback (most recent call last):
...
AttributeError: This QueryDict instance is immutable
>>> q.pop('foo')
Traceback (most recent call last):
...
AttributeError: This QueryDict instance is immutable
>>> q.popitem()
Traceback (most recent call last):
...
AttributeError: This QueryDict instance is immutable
>>> q.clear()
Traceback (most recent call last):
...
AttributeError: This QueryDict instance is immutable
>>> q.setdefault('foo', 'bar')
Traceback (most recent call last):
...
AttributeError: This QueryDict instance is immutable
>>> q.urlencode()
''
###################################
# Mutable copy of empty QueryDict #
###################################
>>> q = q.copy()
>>> q['foo']
Traceback (most recent call last):
...
MultiValueDictKeyError: "Key 'foo' not found in <MultiValueDict: {}>"
>>> q['name'] = 'john'
>>> q['name']
'john'
>>> q.get('foo', 'default')
'default'
>>> q.get('name', 'default')
'john'
>>> q.getlist('name')
['john']
>>> q.getlist('foo')
[]
>>> q.setlist('foo', ['bar', 'baz'])
>>> q.get('foo', 'default')
'baz'
>>> q.getlist('foo')
['bar', 'baz']
>>> q.appendlist('foo', 'another')
>>> q.getlist('foo')
['bar', 'baz', 'another']
>>> q['foo']
'another'
>>> q.has_key('foo')
True
>>> q.items()
[('foo', 'another'), ('name', 'john')]
>>> q.lists()
[('foo', ['bar', 'baz', 'another']), ('name', ['john'])]
>>> q.keys()
['foo', 'name']
>>> q.values()
['another', 'john']
>>> len(q)
2
>>> q.update({'foo': 'hello'})
# Displays last value
>>> q['foo']
'hello'
>>> q.get('foo', 'not available')
'hello'
>>> q.getlist('foo')
['bar', 'baz', 'another', 'hello']
>>> q.pop('foo')
['bar', 'baz', 'another', 'hello']
>>> q.get('foo', 'not there')
'not there'
>>> q.setdefault('foo', 'bar')
'bar'
>>> q['foo']
'bar'
>>> q.getlist('foo')
['bar']
>>> q.urlencode()
'foo=bar&name=john'
>>> q.clear()
>>> len(q)
0
#####################################
# QueryDict with one key/value pair #
#####################################
>>> q = QueryDict('foo=bar')
>>> q['foo']
'bar'
>>> q['bar']
Traceback (most recent call last):
...
MultiValueDictKeyError: "Key 'bar' not found in <MultiValueDict: {'foo': ['bar']}>"
>>> q['something'] = 'bar'
Traceback (most recent call last):
...
AttributeError: This QueryDict instance is immutable
>>> q.get('foo', 'default')
'bar'
>>> q.get('bar', 'default')
'default'
>>> q.getlist('foo')
['bar']
>>> q.getlist('bar')
[]
>>> q.setlist('foo', ['bar', 'baz'])
Traceback (most recent call last):
...
AttributeError: This QueryDict instance is immutable
>>> q.appendlist('foo', ['bar'])
Traceback (most recent call last):
...
AttributeError: This QueryDict instance is immutable
>>> q.has_key('foo')
True
>>> q.has_key('bar')
False
>>> q.items()
[('foo', 'bar')]
>>> q.lists()
[('foo', ['bar'])]
>>> q.keys()
['foo']
>>> q.values()
['bar']
>>> len(q)
1
>>> q.update({'foo': 'bar'})
Traceback (most recent call last):
...
AttributeError: This QueryDict instance is immutable
>>> q.pop('foo')
Traceback (most recent call last):
...
AttributeError: This QueryDict instance is immutable
>>> q.popitem()
Traceback (most recent call last):
...
AttributeError: This QueryDict instance is immutable
>>> q.clear()
Traceback (most recent call last):
...
AttributeError: This QueryDict instance is immutable
>>> q.setdefault('foo', 'bar')
Traceback (most recent call last):
...
AttributeError: This QueryDict instance is immutable
>>> q.urlencode()
'foo=bar'
#####################################################
# QueryDict with two key/value pairs with same keys #
#####################################################
>>> q = QueryDict('vote=yes&vote=no')
>>> q['vote']
'no'
>>> q['something'] = 'bar'
Traceback (most recent call last):
...
AttributeError: This QueryDict instance is immutable
>>> q.get('vote', 'default')
'no'
>>> q.get('foo', 'default')
'default'
>>> q.getlist('vote')
['yes', 'no']
>>> q.getlist('foo')
[]
>>> q.setlist('foo', ['bar', 'baz'])
Traceback (most recent call last):
...
AttributeError: This QueryDict instance is immutable
>>> q.appendlist('foo', ['bar'])
Traceback (most recent call last):
...
AttributeError: This QueryDict instance is immutable
>>> q.has_key('vote')
True
>>> q.has_key('foo')
False
>>> q.items()
[('vote', 'no')]
>>> q.lists()
[('vote', ['yes', 'no'])]
>>> q.keys()
['vote']
>>> q.values()
['no']
>>> len(q)
1
>>> q.update({'foo': 'bar'})
Traceback (most recent call last):
...
AttributeError: This QueryDict instance is immutable
>>> q.pop('foo')
Traceback (most recent call last):
...
AttributeError: This QueryDict instance is immutable
>>> q.popitem()
Traceback (most recent call last):
...
AttributeError: This QueryDict instance is immutable
>>> q.clear()
Traceback (most recent call last):
...
AttributeError: This QueryDict instance is immutable
>>> q.setdefault('foo', 'bar')
Traceback (most recent call last):
...
AttributeError: This QueryDict instance is immutable
>>> q.urlencode()
'vote=yes&vote=no'
"""
from django.http import QueryDict
if __name__ == "__main__":
import doctest
doctest.testmod()
| 16.426184 | 83 | 0.590639 | 711 | 5,897 | 4.879044 | 0.102672 | 0.050159 | 0.147881 | 0.179014 | 0.780917 | 0.75036 | 0.71548 | 0.680888 | 0.672816 | 0.672816 | 0 | 0.000985 | 0.139054 | 5,897 | 358 | 84 | 16.472067 | 0.682293 | 0.980668 | 0 | 0 | 0 | 0 | 0.07619 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
386e50c4d69a8d0a05ce983dd399235c595fc130 | 67,060 | py | Python | BenchmarkEvalPaper/RunTests.py | bmd2007/benchmark_eval | aa42bb3369e79db4cb63e1963afcc8af6d8f5696 | [
"MIT"
] | 1 | 2022-01-11T08:03:32.000Z | 2022-01-11T08:03:32.000Z | BenchmarkEvalPaper/RunTests.py | bmd2007/benchmark_eval | aa42bb3369e79db4cb63e1963afcc8af6d8f5696 | [
"MIT"
] | null | null | null | BenchmarkEvalPaper/RunTests.py | bmd2007/benchmark_eval | aa42bb3369e79db4cb63e1963afcc8af6d8f5696 | [
"MIT"
] | null | null | null | import os
import sys
currentdir = os.path.dirname(os.path.realpath(__file__))
parentdir = os.path.dirname(currentdir)
sys.path.append(parentdir)
currentDir = currentdir + '/'
import PPIPUtils
from Methods.Tian2019SVM.Tian2019SVM import Tian2019SVM
from Methods.Guo2008.GuoSVM import GuoSVM
from Methods.Li2020DeepEnsemble.LiDeepNetwork import LiDeepNetworkModule
from Methods.Sun2017AutoEncoderNetwork.SunStackAutoEncoder import SunStackAutoEncoderAC
from Methods.Sun2017AutoEncoderNetwork.SunStackAutoEncoder import SunStackAutoEncoderCT
from Methods.Chen2019RNNNetwork.ChenNetwork import ChenNetworkModule
from Methods.RichouxDeepNetwork.RichouxDeepNetwork import RichouxNetworkModuleLSTM
from Methods.RichouxDeepNetwork.RichouxDeepNetwork import RichouxNetworkModuleFULL
from Methods.Li2018DeepNetwork.Li2018DeepNetwork import Li2018DeepNetworkModule
from Methods.Czibula2021AutoPPI.Czibula2021AutoPPI import Czibula2021AutoPPIModule
from Methods.Czibula2021AutoPPI.Czibula2021AutoPPI import Czibula2021AutoPPIModuleSS
from Methods.Czibula2021AutoPPI.Czibula2021AutoPPI import Czibula2021AutoPPIModuleJJ
from Methods.Czibula2021AutoPPI.Czibula2021AutoPPI import Czibula2021AutoPPIModuleSJ
from Methods.Zhang2019DeepEnsemble.Zhang2019DeepEnsemble import ZhangDeepModule
from Methods.Yao2019DeepNetwork.Yao2019DeepNetwork import Yao2019NetworkModule
from Methods.Zhou2011SVM.ZhouSVM import ZhouSVM
from Methods.GonzalezLopez2019DeepNetwork.GonzalezLopez2019DeepNetwork import GonzalezLopez2019Module
from Methods.Zhao2012SVM.Zhao2012SVM import Zhao2012SVM
from Methods.Hashemifar2018DeepNetwork.Hashemifar2018DeepNetwork import Hashemifar2018DeepNetworkModule
from Methods.Goktepe2018SVM.Goktepe2018SVM import Goktepe2018SVM
from Methods.Pan2010.Pan2010 import Pan2010ModuleLDACTRANDFOREST, Pan2010ModuleLDACTROTFOREST, Pan2010ModuleLDACTSVM, Pan2010ModuleACRANDFOREST, Pan2010ModuleACROTFOREST, Pan2010ModuleACSVM,Pan2010ModulePSAACRANDFOREST,Pan2010ModulePSAACROTFOREST,Pan2010ModulePSAACSVM
from Methods.Du2017DeepNetwork.Du2017DeepNetwork import Du2017DeepNetworkModuleComb, Du2017DeepNetworkModuleSep
from Methods.Jia2015.Jia2015RF import Jia2015RFModule
from Methods.You2015RF.You2015RF import You2015RFModule
from Methods.Ding2016RF.Ding2016RF import Ding2016RFModule
from Methods.Wang2017RotF.Wang2017RotF import Wang2017RotFModule
from Methods.Chen2019LGBM.Chen2019LGBM import Chen2019LGBMModule
from Methods.Jia2019RF.Jia2019RF import Jia2019RFModule
from Methods.RandomNetwork.RandomNetwork import RandomNetworkModule
from Methods.RandomRF.RandomRF import RandomRFModule
from Methods.BiasModules.BasicBiasModuleGOSimSeqSim import BasicBiasModuleGOSimSeqSim
from Methods.BiasModules.BasicBiasModuleSeqSim import BasicBiasModuleSeqSim
from Methods.BiasModules.BasicBiasModule import BasicBiasModule
from Methods.MaetschkeVar2011.MaetschkeVar2011 import MaetschkeVar2011Module
from Methods.Chen2005RF.Chen2005RF import Chen2005RFModule
from Methods.GouVar2006GOLR.GouVar2006GOLR import GouVar2006GOLRModule
from Methods.ZhangDomainVar2016.ZhangDomainVar2016 import ZhangDomainVar2016AllModule, ZhangDomainVar2016NonTestModule, ZhangDomainVar2016HeldOutModule
from Methods.Zhang2016GO.Zhang2016GO import Zhang2016GOModule
from Methods.SimpleEnsemble.SimpleEnsemble import SimpleEnsembleAllModule, SimpleEnsembleNonTestModule, SimpleEnsembleHeldOutModule
import time
import numpy as np
from ProjectDataLoader import *
from PreProcessDatasets import createFeatures
from RunTrainTest import *
# algorithm toggles
guo2008Test = True
li2020Test = True
sun2017Test = True
tian2019Test = True
Chen2019RNN = True
richouxANN = True
li2018Deep = True
Czibula2021AutoPPI = True
ZhangDeep2019 = True
YaoDeep2019 = True
zhou2011SVM = True
GonzalezLopez2019 = True
Zhao2012SVMTest = True
Hashemifar2018Test = True
Goktepe2018SVMTest = True
pan2010TestForests = True
pan2010TestSVMs = True
du2017DeepNetworkTest = True
jia2015RandomForestTest = True
you2015RandomForestTest = True
ding2016RandomForestTest = True
wang2017RotFTest = True
chen2019LGBMTest = True
jia2019RandomForestTest = True
randomNetworkTest = True
randomRFTest = True
biasTests = True
MaetschkeVarTest = True
Chen2005RF = True
GouVar2006GOLRTest = True
ZhangDomainVar2016Test = True
Zhang2016GOTest = True
SimpleEnsembleTest = True
# data set toggles
orgData = True
HumanRandom50 = True
HumanRandom20 = True
HumanHeldOut50 = True
HumanHeldOut20 = True
baseResultsFolderName = 'results/'
# Runs every test enabled by the global toggles above;
# the toggles can be changed before calling this function.
def RunAll():
if guo2008Test:
#create results folders if they do not exist
PPIPUtils.makeDir(baseResultsFolderName)
resultsFolderName = baseResultsFolderName+'Guo2008Results/'
PPIPUtils.makeDir(resultsFolderName)
hyp = {'Model':'THUNDERSVM'}
if orgData:
trainSets, testSets, saves, pfs, folderName = loadGuoYeastDataTian(resultsFolderName)
runTest(GuoSVM, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanRandom50:
trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName)
runTest(GuoSVM, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanRandom20:
trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName)
runTest(GuoSVM, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanHeldOut50:
trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName)
runTest(GuoSVM, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanHeldOut20:
trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName)
runTest(GuoSVM, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if li2020Test:
#create results folders if they do not exist
PPIPUtils.makeDir(baseResultsFolderName)
resultsFolderName= baseResultsFolderName+'Li2020Results/'
PPIPUtils.makeDir(resultsFolderName)
hyp = {'fullGPU':True,'deviceType':'cuda'}
if orgData:
trainSets, testSets, saves, pfs, folderName = loadLiADData(resultsFolderName)
runTest(LiDeepNetworkModule, None,trainSets,testSets,folderName,hyp,saveModels=convertToFolder(saves),predictionsFLst = pfs)
if HumanRandom50:
trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName)
runTest(LiDeepNetworkModule, None,trainSets,testSets,folderName,hyp,saveModels=convertToFolder(saves),predictionsFLst = pfs)
if HumanRandom20:
trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName)
runTest(LiDeepNetworkModule, None,trainSets,testSets,folderName,hyp,saveModels=convertToFolder(saves),predictionsFLst = pfs)
if HumanHeldOut50:
trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName)
runTest(LiDeepNetworkModule, None,trainSets,testSets,folderName,hyp,saveModels=convertToFolder(saves),predictionsFLst = pfs)
if HumanHeldOut20:
trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName)
runTest(LiDeepNetworkModule, None,trainSets,testSets,folderName,hyp,saveModels=convertToFolder(saves),predictionsFLst = pfs)
if tian2019Test:
#create results folders if they do not exist
PPIPUtils.makeDir(baseResultsFolderName)
resultsFolderName = baseResultsFolderName+'tian2019Results/'
PPIPUtils.makeDir(resultsFolderName)
hyp = {'Model':'THUNDERSVM'}
if orgData:
trainSets, testSets, saves, pfs, folderName = loadGuoYeastDataTian(resultsFolderName)
runTest(Tian2019SVM, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadMartinHPylori(resultsFolderName)
runTest(Tian2019SVM, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanRandom50:
trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName)
runTest(Tian2019SVM, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanRandom20:
trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName)
runTest(Tian2019SVM, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanHeldOut50:
trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName)
runTest(Tian2019SVM, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanHeldOut20:
trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName)
runTest(Tian2019SVM, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if sun2017Test:
    PPIPUtils.makeDir(baseResultsFolderName)
    PPIPUtils.makeDir(baseResultsFolderName + 'Sun2017Results/')
    for pair in [(SunStackAutoEncoderAC, 'SunResults2017AC'), (SunStackAutoEncoderCT, 'SunResults2017CT')]:
        resultsFolderName = baseResultsFolderName + 'Sun2017Results/' + pair[1] + '/'
        PPIPUtils.makeDir(resultsFolderName)
        hyp = {'fullGPU': True, 'deviceType': 'cuda'}
        if orgData:
            trainSets, testSets, saves, pfs, folderName = loadPanHumanLarge(resultsFolderName)
            runTest(pair[0], None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        if HumanRandom50:
            trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName)
            runTest(pair[0], None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        if HumanRandom20:
            trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName)
            runTestLst(pair[0], None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        if HumanHeldOut50:
            trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName)
            runTest(pair[0], None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        if HumanHeldOut20:
            trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName)
            runTestLst(pair[0], None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs, startIdx=13)
if Chen2019RNN:
    PPIPUtils.makeDir(baseResultsFolderName)
    resultsFolderName = baseResultsFolderName + 'Chen2019Results/'
    PPIPUtils.makeDir(resultsFolderName)
    hyp = {'fullGPU': True}
    if orgData:
        trainSets, testSets, saves, pfs, folderName = loadGuoYeastDataChen(resultsFolderName)
        runTest(ChenNetworkModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    hyp = {'fullGPU': True, 'schedPatience': 1}
    if HumanRandom50:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName)
        runTest(ChenNetworkModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom20:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName)
        runTest(ChenNetworkModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut50:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName)
        runTest(ChenNetworkModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut20:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName)
        runTest(ChenNetworkModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
if richouxANN:
    PPIPUtils.makeDir(baseResultsFolderName)
    resultsFolderName = baseResultsFolderName + 'Richoux2019Results/'
    PPIPUtils.makeDir(resultsFolderName)
    resultsFolderName1 = resultsFolderName + 'LSTM/'
    resultsFolderName2 = resultsFolderName + 'FULL/'
    PPIPUtils.makeDir(resultsFolderName1)
    PPIPUtils.makeDir(resultsFolderName2)
    hyp = {'fullGPU': True}
    if orgData:
        trainSets, testSets, saves, pfs, folderName = loadRichouxHumanDataStrict(resultsFolderName1)
        runTest(RichouxNetworkModuleLSTM, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        trainSets, testSets, saves, pfs, folderName = loadRichouxHumanDataStrict(resultsFolderName2)
        runTest(RichouxNetworkModuleFULL, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom50:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName1, augment=True)
        runTest(RichouxNetworkModuleLSTM, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName2, augment=True)
        runTest(RichouxNetworkModuleFULL, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom20:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName1, augment=True)
        runTestLst(RichouxNetworkModuleLSTM, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName2, augment=True)
        runTestLst(RichouxNetworkModuleFULL, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut50:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName1, augment=True)
        runTest(RichouxNetworkModuleLSTM, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName2, augment=True)
        runTest(RichouxNetworkModuleFULL, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut20:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName1, augment=True)
        runTestLst(RichouxNetworkModuleLSTM, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName2, augment=True)
        runTestLst(RichouxNetworkModuleFULL, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
if li2018Deep:
    PPIPUtils.makeDir(baseResultsFolderName)
    resultsFolderName = baseResultsFolderName + 'Li2018DeepResults/'
    PPIPUtils.makeDir(resultsFolderName)
    hyp = {'fullGPU': True}
    if orgData:
        trainSets, testSets, saves, pfs, folderName = loadPanHumanLarge(resultsFolderName)
        runTest(Li2018DeepNetworkModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom50:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName)
        runTest(Li2018DeepNetworkModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom20:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName)
        runTestLst(Li2018DeepNetworkModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut50:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName)
        runTest(Li2018DeepNetworkModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut20:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName)
        runTestLst(Li2018DeepNetworkModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
if Czibula2021AutoPPI:
    PPIPUtils.makeDir(baseResultsFolderName)
    resultsFolderName = baseResultsFolderName + 'Czibula2021AutoPPI/'
    PPIPUtils.makeDir(resultsFolderName)
    resultsFolderNames = [resultsFolderName + 'Czibula2021AutoPPISS/', resultsFolderName + 'Czibula2021AutoPPISJ/', resultsFolderName + 'Czibula2021AutoPPIJJ/']
    modelTypes = [Czibula2021AutoPPIModuleSS, Czibula2021AutoPPIModuleSJ, Czibula2021AutoPPIModuleJJ]
    for i in range(3):
        PPIPUtils.makeDir(resultsFolderNames[i])
    hyp = {'fullGPU': True}
    if orgData:
        for i in range(3):
            trainSets, testSets, saves, pfs, folderName = loadPanHumanLarge(resultsFolderNames[i])
            runTest(modelTypes[i], None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        for i in range(3):
            trainSets, testSets, saves, pfs, folderName = loadGuoMultiSpeciesChen(resultsFolderNames[i])
            runTest(modelTypes[i], None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom50:
        for i in range(3):
            trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderNames[i])
            runTest(modelTypes[i], None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom20:
        for i in range(3):
            trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderNames[i])
            runTestLst(modelTypes[i], None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut50:
        for i in range(3):
            trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderNames[i])
            runTest(modelTypes[i], None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut20:
        for i in range(3):
            trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderNames[i])
            runTestLst(modelTypes[i], None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
if ZhangDeep2019:
    PPIPUtils.makeDir(baseResultsFolderName)
    resultsFolderName = baseResultsFolderName + 'ZhangDeep2019/'
    PPIPUtils.makeDir(resultsFolderName)
    hyp = {'fullGPU': True}
    if orgData:
        trainSets, testSets, saves, pfs, folderName = loadDuYeast(resultsFolderName)
        runTest(ZhangDeepModule, None, trainSets, testSets, folderName, hyp, saveModels=convertToFolder(saves), predictionsFLst=pfs)
    if HumanRandom50:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName)
        runTest(ZhangDeepModule, None, trainSets, testSets, folderName, hyp, saveModels=convertToFolder(saves), predictionsFLst=pfs)
    if HumanRandom20:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName)
        runTestLst(ZhangDeepModule, None, trainSets, testSets, folderName, hyp, saveModels=convertToFolder(saves), predictionsFLst=pfs)
    if HumanHeldOut50:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName)
        runTest(ZhangDeepModule, None, trainSets, testSets, folderName, hyp, saveModels=convertToFolder(saves), predictionsFLst=pfs)
    if HumanHeldOut20:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName)
        runTestLst(ZhangDeepModule, None, trainSets, testSets, folderName, hyp, saveModels=convertToFolder(saves), predictionsFLst=pfs)
if YaoDeep2019:
    PPIPUtils.makeDir(baseResultsFolderName)
    resultsFolderName = baseResultsFolderName + 'YaoDeep2019/'
    PPIPUtils.makeDir(resultsFolderName)
    hyp = {'fullGPU': True}
    if orgData:
        trainSets, testSets, saves, pfs, folderName = loadPanHumanSmall(resultsFolderName)
        runTest(Yao2019NetworkModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        trainSets, testSets, saves, pfs, folderName = loadGuoYeastDataChen(resultsFolderName)
        runTest(Yao2019NetworkModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        trainSets, testSets, saves, pfs, folderName = loadGuoYeastDataTian(resultsFolderName)
        runTest(Yao2019NetworkModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom50:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName)
        runTest(Yao2019NetworkModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom20:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName)
        runTestLst(Yao2019NetworkModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut50:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName)
        runTest(Yao2019NetworkModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut20:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName)
        runTestLst(Yao2019NetworkModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
if zhou2011SVM:
    PPIPUtils.makeDir(baseResultsFolderName)
    resultsFolderName = baseResultsFolderName + 'zhou2011SVMResults/'
    PPIPUtils.makeDir(resultsFolderName)
    hyp = {'Model': 'THUNDERSVM'}
    if orgData:
        trainSets, testSets, saves, pfs, folderName = loadGuoYeastDataTian(resultsFolderName)
        runTest(ZhouSVM, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom50:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName)
        runTest(ZhouSVM, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom20:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName)
        runTestLst(ZhouSVM, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut50:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName)
        runTest(ZhouSVM, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut20:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName)
        runTestLst(ZhouSVM, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
if GonzalezLopez2019:
    PPIPUtils.makeDir(baseResultsFolderName)
    resultsFolderName = baseResultsFolderName + 'GonzalezLopez2019/'
    PPIPUtils.makeDir(resultsFolderName)
    hyp = {'fullGPU': True}
    if orgData:
        trainSets, testSets, saves, pfs, folderName = loadDuYeast(resultsFolderName)
        runTest(GonzalezLopez2019Module, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        trainSets, testSets, saves, pfs, folderName = loadGuoYeastDataTian(resultsFolderName)
        runTest(GonzalezLopez2019Module, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        trainSets, testSets, saves, pfs, folderName = loadGuoYeastDataChen(resultsFolderName)
        runTest(GonzalezLopez2019Module, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        trainSets, testSets, saves, pfs, folderName = loadMartinHPylori(resultsFolderName)
        runTest(GonzalezLopez2019Module, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        trainSets, testSets, saves, pfs, folderName = loadPanHumanSmall(resultsFolderName)
        runTest(GonzalezLopez2019Module, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom50:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName)
        runTest(GonzalezLopez2019Module, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom20:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName)
        runTestLst(GonzalezLopez2019Module, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut50:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName)
        runTest(GonzalezLopez2019Module, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut20:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName)
        runTestLst(GonzalezLopez2019Module, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
if Zhao2012SVMTest:
    PPIPUtils.makeDir(baseResultsFolderName)
    resultsFolderName = baseResultsFolderName + 'zhao2012SVMResults/'
    PPIPUtils.makeDir(resultsFolderName)
    hyp = {'Model': 'THUNDERSVM'}
    if orgData:
        trainSets, testSets, saves, pfs, folderName = loadMartinHPylori(resultsFolderName)
        runTest(Zhao2012SVM, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        trainSets, testSets, saves, pfs, folderName = loadLiuFruitFly(resultsFolderName)
        runTest(Zhao2012SVM, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom50:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName)
        runTest(Zhao2012SVM, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom20:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName)
        runTestLst(Zhao2012SVM, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut50:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName)
        runTest(Zhao2012SVM, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut20:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName)
        runTestLst(Zhao2012SVM, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
if Hashemifar2018Test:
    PPIPUtils.makeDir(baseResultsFolderName)
    resultsFolderName = baseResultsFolderName + 'Hashemifar2018DeepResults/'
    PPIPUtils.makeDir(resultsFolderName)
    hyp = {'fullGPU': True}
    if orgData:
        trainSets, testSets, saves, pfs, folderName = loadGuoYeastDataTian(resultsFolderName)
        runTest(Hashemifar2018DeepNetworkModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom50:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName)
        runTest(Hashemifar2018DeepNetworkModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom20:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName)
        runTestLst(Hashemifar2018DeepNetworkModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut50:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName)
        runTest(Hashemifar2018DeepNetworkModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut20:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName)
        runTestLst(Hashemifar2018DeepNetworkModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
if Goktepe2018SVMTest:
    PPIPUtils.makeDir(baseResultsFolderName)
    resultsFolderName = baseResultsFolderName + 'Goktepe2018SVMResults/'
    PPIPUtils.makeDir(resultsFolderName)
    hyp = {'Model': 'THUNDERSVM'}
    if orgData:
        trainSets, testSets, saves, pfs, folderName = loadPanHumanSmall(resultsFolderName)
        runTest(Goktepe2018SVM, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        trainSets, testSets, saves, pfs, folderName = loadMartinHPylori(resultsFolderName, kfolds=5)
        runTest(Goktepe2018SVM, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        trainSets, testSets, saves, pfs, folderName = loadPanMartinHuman(resultsFolderName, kfolds=5)
        runTest(Goktepe2018SVM, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom50:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName)
        runTest(Goktepe2018SVM, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom20:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName)
        runTestLst(Goktepe2018SVM, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut50:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName)
        runTest(Goktepe2018SVM, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut20:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName)
        runTestLst(Goktepe2018SVM, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
if pan2010TestForests:
    PPIPUtils.makeDir(baseResultsFolderName)
    resultsFolderName = baseResultsFolderName + 'Pan2010/'
    PPIPUtils.makeDir(resultsFolderName)
    pan2010TestForestModules = [Pan2010ModuleLDACTRANDFOREST, Pan2010ModuleLDACTROTFOREST, Pan2010ModuleACRANDFOREST, Pan2010ModuleACROTFOREST, Pan2010ModulePSAACRANDFOREST, Pan2010ModulePSAACROTFOREST]
    pan2010ResultsFolderNames = [resultsFolderName + 'LDARand/', resultsFolderName + 'LDARot/', resultsFolderName + 'ACRand/', resultsFolderName + 'ACRot/', resultsFolderName + 'PSAACRand/', resultsFolderName + 'PSAACRot/']
    for i in range(len(pan2010TestForestModules)):
        modName = pan2010TestForestModules[i]
        resultsFolderName = pan2010ResultsFolderNames[i]
        PPIPUtils.makeDir(resultsFolderName)
        hyp = {}
        if orgData:
            trainSets, testSets, saves, pfs, folderName = loadPanHumanLarge(resultsFolderName)
            saves = None
            runTest(modName, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
            trainSets, testSets, saves, pfs, folderName = loadPanHumanSmall(resultsFolderName, kfolds=5)
            saves = None
            runTest(modName, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
            trainSets, testSets, saves, pfs, folderName = loadPanMartinHuman(resultsFolderName, kfolds=5)
            saves = None
            runTest(modName, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        hyp = {}
        if HumanRandom50:
            trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName)
            saves = None
            runTest(modName, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        if HumanRandom20:
            trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName)
            saves = None
            runTestLst(modName, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        if HumanHeldOut50:
            trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName)
            saves = None
            runTest(modName, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        if HumanHeldOut20:
            trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName)
            saves = None
            runTestLst(modName, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
if pan2010TestSVMs:
    PPIPUtils.makeDir(baseResultsFolderName)
    resultsFolderName = baseResultsFolderName + 'Pan2010/'
    PPIPUtils.makeDir(resultsFolderName)
    pan2010TestSVMModules = [Pan2010ModuleLDACTSVM, Pan2010ModuleACSVM, Pan2010ModulePSAACSVM]
    pan2010ResultsFolderNames = [resultsFolderName + 'LDASVM/', resultsFolderName + 'ACSVM/', resultsFolderName + 'PSAACSVM/']
    for i in range(len(pan2010TestSVMModules)):
        modName = pan2010TestSVMModules[i]
        resultsFolderName = pan2010ResultsFolderNames[i]
        PPIPUtils.makeDir(resultsFolderName)
        hyp = {'Model': 'THUNDERSVM'}
        if orgData:
            trainSets, testSets, saves, pfs, folderName = loadPanHumanLarge(resultsFolderName)
            runTest(modName, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
            trainSets, testSets, saves, pfs, folderName = loadPanHumanSmall(resultsFolderName, kfolds=5)
            runTest(modName, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
            trainSets, testSets, saves, pfs, folderName = loadPanMartinHuman(resultsFolderName, kfolds=5)
            runTest(modName, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        if HumanRandom50:
            trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName)
            runTest(modName, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        if HumanRandom20:
            trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName)
            runTestLst(modName, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        if HumanHeldOut50:
            trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName)
            runTest(modName, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        if HumanHeldOut20:
            trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName)
            runTestLst(modName, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
if du2017DeepNetworkTest:
    PPIPUtils.makeDir(baseResultsFolderName)
    resultsFolderName = baseResultsFolderName + 'Du2017/'
    PPIPUtils.makeDir(resultsFolderName)
    modLst = [Du2017DeepNetworkModuleSep, Du2017DeepNetworkModuleComb]
    resultFolders = [resultsFolderName + 'Sep/', resultsFolderName + 'Comb/']
    for i in range(len(modLst)):
        modName = modLst[i]
        resultsFolderName = resultFolders[i]
        PPIPUtils.makeDir(resultsFolderName)
        hyp = {'fullGPU': True}
        if orgData:
            trainSets, testSets, saves, pfs, folderName = loadPanHumanSmall(resultsFolderName)
            runTest(modName, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
            trainSets, testSets, saves, pfs, folderName = loadDuYeast(resultsFolderName)
            runTest(modName, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
            trainSets, testSets, saves, pfs, folderName = loadMartinHPylori(resultsFolderName)
            runTest(modName, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        if HumanRandom50:
            trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName)
            runTest(modName, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        if HumanRandom20:
            trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName)
            runTestLst(modName, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        if HumanHeldOut50:
            trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName)
            runTest(modName, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        if HumanHeldOut20:
            trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName)
            runTestLst(modName, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs, startIdx=2)
if jia2015RandomForestTest:
    PPIPUtils.makeDir(baseResultsFolderName)
    resultsFolderName = baseResultsFolderName + 'Jia2015RFResults/'
    PPIPUtils.makeDir(resultsFolderName)
    hyp = {}
    if orgData:
        trainSets, testSets, saves, pfs, folderName = loadJiaYeast(resultsFolderName)
        runTest(Jia2015RFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        trainSets, testSets, saves, pfs, folderName = loadMartinHPylori(resultsFolderName, kfolds=10)
        runTest(Jia2015RFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom50:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName)
        runTest(Jia2015RFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom20:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName)
        runTestLst(Jia2015RFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut50:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName)
        runTest(Jia2015RFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut20:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName)
        runTestLst(Jia2015RFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
if you2015RandomForestTest:
    PPIPUtils.makeDir(baseResultsFolderName)
    resultsFolderName = baseResultsFolderName + 'You2015RFResults/'
    PPIPUtils.makeDir(resultsFolderName)
    hyp = {}
    if orgData:
        trainSets, testSets, saves, pfs, folderName = loadGuoYeastDataTian(resultsFolderName)
        runTest(You2015RFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        trainSets, testSets, saves, pfs, folderName = loadMartinHPylori(resultsFolderName, kfolds=10)
        runTest(You2015RFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom50:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName)
        runTest(You2015RFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom20:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName)
        runTestLst(You2015RFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut50:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName)
        runTest(You2015RFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut20:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName)
        runTestLst(You2015RFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
if ding2016RandomForestTest:
    PPIPUtils.makeDir(baseResultsFolderName)
    resultsFolderName = baseResultsFolderName + 'Ding2016RFResults/'
    PPIPUtils.makeDir(resultsFolderName)
    hyp = {}
    if orgData:
        trainSets, testSets, saves, pfs, folderName = loadGuoYeastDataTian(resultsFolderName)
        runTest(Ding2016RFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        trainSets, testSets, saves, pfs, folderName = loadMartinHPylori(resultsFolderName, kfolds=5)
        runTest(Ding2016RFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        trainSets, testSets, saves, pfs, folderName = loadPanHumanSmall(resultsFolderName)
        runTest(Ding2016RFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom50:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName)
        runTest(Ding2016RFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom20:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName)
        runTestLst(Ding2016RFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut50:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName)
        runTest(Ding2016RFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut20:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName)
        runTestLst(Ding2016RFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
if wang2017RotFTest:
    PPIPUtils.makeDir(baseResultsFolderName)
    resultsFolderName = baseResultsFolderName + 'Wang2017RotFResults/'
    PPIPUtils.makeDir(resultsFolderName)
    hyp = {}
    if orgData:
        trainSets, testSets, saves, pfs, folderName = loadGuoYeastDataTian(resultsFolderName)
        runTest(Wang2017RotFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        trainSets, testSets, saves, pfs, folderName = loadMartinHPylori(resultsFolderName, kfolds=5)
        runTest(Wang2017RotFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom50:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName)
        runTest(Wang2017RotFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom20:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName)
        runTestLst(Wang2017RotFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut50:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName)
        runTest(Wang2017RotFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut20:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName)
        runTestLst(Wang2017RotFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
if chen2019LGBMTest:
    PPIPUtils.makeDir(baseResultsFolderName)
    resultsFolderName = baseResultsFolderName + 'Chen2019LGBMTest/'
    PPIPUtils.makeDir(resultsFolderName)
    hyp = {}
    if orgData:
        trainSets, testSets, saves, pfs, folderName = loadGuoYeastDataTian(resultsFolderName)
        runTest(Chen2019LGBMModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        trainSets, testSets, saves, pfs, folderName = loadMartinHPylori(resultsFolderName, kfolds=5)
        runTest(Chen2019LGBMModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom50:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName)
        runTest(Chen2019LGBMModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom20:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName)
        runTestLst(Chen2019LGBMModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut50:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName)
        runTest(Chen2019LGBMModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut20:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName)
        runTestLst(Chen2019LGBMModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
if jia2019RandomForestTest:
    PPIPUtils.makeDir(baseResultsFolderName)
    resultsFolderName = baseResultsFolderName + 'Jia2019RFResults/'
    PPIPUtils.makeDir(resultsFolderName)
    hyp = {}
    if orgData:
        trainSets, testSets, saves, pfs, folderName = loadGuoYeastDataTian(resultsFolderName)
        runTest(Jia2019RFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        trainSets, testSets, saves, pfs, folderName = loadJiaYeast(resultsFolderName, trainDataPerClass='Max', full=False)
        runTest(Jia2019RFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
        trainSets, testSets, saves, pfs, folderName = loadMartinHPylori(resultsFolderName, kfolds=10)
        runTest(Jia2019RFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom50:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName)
        runTest(Jia2019RFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanRandom20:
        trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName)
        runTestLst(Jia2019RFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut50:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName)
        runTest(Jia2019RFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
    if HumanHeldOut20:
        trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName)
        runTestLst(Jia2019RFModule, None, trainSets, testSets, folderName, hyp, saveModels=saves, predictionsFLst=pfs)
if randomNetworkTest:
PPIPUtils.makeDir(baseResultsFolderName)
resultsFolderName = baseResultsFolderName+'RandomNetworkResults/'
PPIPUtils.makeDir(resultsFolderName)
hyp = {'fullGPU':True}
if orgData:
trainSets, testSets, saves, pfs, folderName = loadMartinHPylori(resultsFolderName)
runTest(RandomNetworkModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadGuoYeastDataTian(resultsFolderName)
runTest(RandomNetworkModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadPanHumanLarge(resultsFolderName)
runTest(RandomNetworkModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadPanHumanSmall(resultsFolderName)
runTest(RandomNetworkModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadPanMartinHuman(resultsFolderName)
runTest(RandomNetworkModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadGuoYeastDataChen(resultsFolderName)
runTest(RandomNetworkModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadLiADData(resultsFolderName)
runTest(RandomNetworkModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadRichouxHumanDataStrict(resultsFolderName)
runTest(RandomNetworkModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadGuoMultiSpeciesChen(resultsFolderName)
runTest(RandomNetworkModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadLiuFruitFly(resultsFolderName)
runTest(RandomNetworkModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadDuYeast(resultsFolderName)
runTest(RandomNetworkModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadJiaYeast(resultsFolderName)
runTest(RandomNetworkModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanRandom50:
trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName)
runTest(RandomNetworkModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanRandom20:
trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName)
runTestLst(RandomNetworkModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanHeldOut50:
trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName)
runTest(RandomNetworkModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanHeldOut20:
trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName)
runTestLst(RandomNetworkModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if randomRFTest:
PPIPUtils.makeDir(baseResultsFolderName)
resultsFolderName = baseResultsFolderName+'RandomRFResults/'
PPIPUtils.makeDir(resultsFolderName)
hyp = {'fullGPU':True}
if orgData:
trainSets, testSets, saves, pfs, folderName = loadMartinHPylori(resultsFolderName)
runTest(RandomRFModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadGuoYeastDataTian(resultsFolderName)
runTest(RandomRFModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadPanHumanLarge(resultsFolderName)
runTest(RandomRFModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadPanHumanSmall(resultsFolderName)
runTest(RandomRFModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadPanMartinHuman(resultsFolderName)
runTest(RandomRFModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadGuoYeastDataChen(resultsFolderName)
runTest(RandomRFModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadLiADData(resultsFolderName)
runTest(RandomRFModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadRichouxHumanDataStrict(resultsFolderName)
runTest(RandomRFModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadGuoMultiSpeciesChen(resultsFolderName)
runTest(RandomRFModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadLiuFruitFly(resultsFolderName)
runTest(RandomRFModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadDuYeast(resultsFolderName)
runTest(RandomRFModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadJiaYeast(resultsFolderName)
runTest(RandomRFModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanRandom50:
trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName)
runTest(RandomRFModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanRandom20:
trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName)
runTestLst(RandomRFModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanHeldOut50:
trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName)
runTest(RandomRFModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanHeldOut20:
trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName)
runTestLst(RandomRFModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if biasTests:
for mod in [(BasicBiasModule,'BasicBiasModule'),(BasicBiasModuleSeqSim,'BasicBiasModuleSeqSim'),(BasicBiasModuleGOSimSeqSim,'BasicBiasModuleGOSimSeqSim')]:
PPIPUtils.makeDir(baseResultsFolderName)
resultsFolderName = baseResultsFolderName+mod[1]+'/'
PPIPUtils.makeDir(resultsFolderName)
hyp = {}
if orgData:
trainSets, testSets, saves, pfs, folderName = loadMartinHPylori(resultsFolderName)
runTest(mod[0], None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadGuoYeastDataTian(resultsFolderName)
runTest(mod[0], None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadPanHumanLarge(resultsFolderName)
runTest(mod[0], None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadPanHumanSmall(resultsFolderName)
runTest(mod[0], None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadPanMartinHuman(resultsFolderName)
runTest(mod[0], None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadGuoYeastDataChen(resultsFolderName)
runTest(mod[0], None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadLiADData(resultsFolderName)
runTest(mod[0], None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadRichouxHumanDataStrict(resultsFolderName)
runTest(mod[0], None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadGuoMultiSpeciesChen(resultsFolderName)
runTest(mod[0], None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadLiuFruitFly(resultsFolderName)
runTest(mod[0], None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadDuYeast(resultsFolderName)
runTest(mod[0], None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
trainSets, testSets, saves, pfs, folderName = loadJiaYeast(resultsFolderName)
runTest(mod[0], None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanRandom50:
trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName)
runTest(mod[0], None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanRandom20:
trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName)
runTestLst(mod[0], None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanHeldOut50:
trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName)
runTest(mod[0], None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanHeldOut20:
trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName)
testSets = [testSets[0]]
pfs = [pfs[0]]
runTestLst(mod[0], None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if MaetschkeVarTest:
PPIPUtils.makeDir(baseResultsFolderName)
resultsFolderName = baseResultsFolderName+'MaetschkeVarResults/'
PPIPUtils.makeDir(resultsFolderName)
hyp = {}
if HumanRandom50:
trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName)
runTest(MaetschkeVar2011Module, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanRandom20:
trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName)
runTestLst(MaetschkeVar2011Module, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanHeldOut50:
trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName)
runTest(MaetschkeVar2011Module, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanHeldOut20:
trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName)
runTestLst(MaetschkeVar2011Module, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if Chen2005RF:
PPIPUtils.makeDir(baseResultsFolderName)
resultsFolderName = baseResultsFolderName+'Chen2005RFResults/'
PPIPUtils.makeDir(resultsFolderName)
hyp = {}
if HumanRandom50:
trainSets, testSets, saves, pfs, folderName = loadHumanRandom50(resultsFolderName)
runTest(Chen2005RFModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanRandom20:
trainSets, testSets, saves, pfs, folderName = loadHumanRandom20(resultsFolderName)
runTestLst(Chen2005RFModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanHeldOut50:
trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut50(resultsFolderName)
runTest(Chen2005RFModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanHeldOut20:
trainSets, testSets, saves, pfs, folderName = loadHumanHeldOut20(resultsFolderName)
runTestLst(Chen2005RFModule, None,trainSets,testSets,folderName,hyp,saveModels=saves,predictionsFLst = pfs)
if GouVar2006GOLRTest:
PPIPUtils.makeDir(baseResultsFolderName)
resultsFolderName = baseResultsFolderName+'Guo2007SimResults/'
PPIPUtils.makeDir(resultsFolderName)
hyp = {}
if HumanRandom50:
trainSets, testSets, saves, pfs, folderNames = loadHumanRandom50(resultsFolderName,dirLst=True)
runTestPairwiseFoldersLst(GouVar2006GOLRModule, None,None,None,folderNames,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanRandom20:
trainSets, testSets, saves, pfs, folderNames = loadHumanRandom20(resultsFolderName,dirLst=True)
loads = [None]*len(saves)  # semantic similarities do not depend on the test set, so the second round of training can be skipped
loads[len(saves)//2:] = saves[:len(saves)//2]
runTestPairwiseFoldersLst(GouVar2006GOLRModule, None,None,None,folderNames,hyp,saveModels=saves,predictionsFLst = pfs,loads=loads)
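The `loads` construction above reuses the models saved by the first half of the runs as the pre-trained inputs for the second half. A minimal sketch of the same slice-assignment trick, with hypothetical file names standing in for the real save paths:

```python
# Reuse trick: the second half of the runs loads the models saved by the
# first half instead of retraining (file names here are hypothetical).
saves = ['fold0.sav', 'fold1.sav', 'fold2.sav', 'fold3.sav']
loads = [None] * len(saves)
loads[len(saves)//2:] = saves[:len(saves)//2]
print(loads)  # -> [None, None, 'fold0.sav', 'fold1.sav']
```

The first half trains from scratch (`None`), while each run in the second half loads the model its counterpart in the first half saved.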
if HumanHeldOut50:
trainSets, testSets, saves, pfs, folderNames = loadHumanHeldOut50(resultsFolderName,dirLst=True)
runTestPairwiseFoldersLst(GouVar2006GOLRModule, None,None,None,folderNames,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanHeldOut20:
trainSets, testSets, saves, pfs, folderNames = loadHumanHeldOut20(resultsFolderName,dirLst=True)
loads = [None]*len(saves)  # semantic similarities do not depend on the test set, so the second round of training can be skipped
loads[len(saves)//2:] = saves[:len(saves)//2]
runTestPairwiseFoldersLst(GouVar2006GOLRModule, None,None,None,folderNames,hyp,saveModels=saves,predictionsFLst = pfs,loads=loads)
if Zhang2016GOTest:
PPIPUtils.makeDir(baseResultsFolderName)
resultsFolderName = baseResultsFolderName+'Zhang2016GO/'
PPIPUtils.makeDir(resultsFolderName)
hyp = {'Model':'THUNDERSVM'}
if HumanRandom50:
trainSets, testSets, saves, pfs, folderNames = loadHumanRandom50(resultsFolderName,dirLst=True)
runTestPairwiseFoldersLst(Zhang2016GOModule, None,None,None,folderNames,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanRandom20:
trainSets, testSets, saves, pfs, folderNames = loadHumanRandom20(resultsFolderName,dirLst=True)
loads = [None]*len(saves)  # semantic similarities do not depend on the test set, so the second round of training can be skipped
loads[len(saves)//2:] = saves[:len(saves)//2]
runTestPairwiseFoldersLst(Zhang2016GOModule, None,None,None,folderNames,hyp,saveModels=saves,predictionsFLst = pfs,loads=loads,startIdx=8)
if HumanHeldOut50:
trainSets, testSets, saves, pfs, folderNames = loadHumanHeldOut50(resultsFolderName,dirLst=True)
runTestPairwiseFoldersLst(Zhang2016GOModule, None,None,None,folderNames,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanHeldOut20:
trainSets, testSets, saves, pfs, folderNames = loadHumanHeldOut20(resultsFolderName,dirLst=True)
loads = [None]*len(saves)  # semantic similarities do not depend on the test set, so the second round of training can be skipped
loads[len(saves)//2:] = saves[:len(saves)//2]
runTestPairwiseFoldersLst(Zhang2016GOModule, None,None,None,folderNames,hyp,saveModels=saves,predictionsFLst = pfs,loads=loads)
if ZhangDomainVar2016Test:
PPIPUtils.makeDir(baseResultsFolderName)
midFolder = baseResultsFolderName + 'ZhangDomainVar2016Results/'
PPIPUtils.makeDir(midFolder)
idx = 0
for pair in [(ZhangDomainVar2016AllModule,midFolder+'All/'), (ZhangDomainVar2016NonTestModule,midFolder+'NonTest/'), (ZhangDomainVar2016HeldOutModule,midFolder+'HeldOut/')]:
resultsFolderName = pair[1]
PPIPUtils.makeDir(resultsFolderName)
hyp = {}
if HumanRandom50 and idx !=2: #idx=2 is held out data, which only works on the held out protein datasets
trainSets, testSets, saves, pfs, folderNames = loadHumanRandom50(resultsFolderName,dirLst=True)
runTestPairwiseFoldersLst(pair[0], None,None,None,folderNames,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanRandom20 and idx !=2:
trainSets, testSets, saves, pfs, folderNames = loadHumanRandom20(resultsFolderName,dirLst=True)
runTestPairwiseFoldersLst(pair[0], None,None,None,folderNames,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanHeldOut50:
trainSets, testSets, saves, pfs, folderNames = loadHumanHeldOut50(resultsFolderName,dirLst=True)
runTestPairwiseFoldersLst(pair[0], None,None,None,folderNames,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanHeldOut20:
trainSets, testSets, saves, pfs, folderNames = loadHumanHeldOut20(resultsFolderName,dirLst=True)
runTestPairwiseFoldersLst(pair[0], None,None,None,folderNames,hyp,saveModels=saves,predictionsFLst = pfs)
idx += 1
if SimpleEnsembleTest:
PPIPUtils.makeDir(baseResultsFolderName)
midFolder = baseResultsFolderName + 'SimpleEnsembleResults/'
PPIPUtils.makeDir(midFolder)
idx = 0
for pair in [(SimpleEnsembleAllModule,midFolder+'All/'), (SimpleEnsembleNonTestModule,midFolder+'NonTest/'), (SimpleEnsembleHeldOutModule,midFolder+'HeldOut/')]:
resultsFolderName = pair[1]
PPIPUtils.makeDir(resultsFolderName)
hyp = {}
if HumanRandom50 and idx !=2: #idx=2 is held out data, which only works on the held out protein datasets
trainSets, testSets, saves, pfs, folderNames = loadHumanRandom50(resultsFolderName,dirLst=True)
runTestPairwiseFoldersLst(pair[0], None,None,None,folderNames,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanRandom20 and idx !=2:
trainSets, testSets, saves, pfs, folderNames = loadHumanRandom20(resultsFolderName,dirLst=True)
runTestPairwiseFoldersLst(pair[0], None,None,None,folderNames,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanHeldOut50:
trainSets, testSets, saves, pfs, folderNames = loadHumanHeldOut50(resultsFolderName,dirLst=True)
runTestPairwiseFoldersLst(pair[0], None,None,None,folderNames,hyp,saveModels=saves,predictionsFLst = pfs)
if HumanHeldOut20:
trainSets, testSets, saves, pfs, folderNames = loadHumanHeldOut20(resultsFolderName,dirLst=True)
runTestPairwiseFoldersLst(pair[0], None,None,None,folderNames,hyp,saveModels=saves,predictionsFLst = pfs)
idx += 1
def genSequenceFeatures():
createFeatures(currentDir+'PPI_Datasets/Guo_Data_Yeast_Tian/',set(['EGBW11','AC30','MMI','LD10_CTD','PSAAC15','Moran','Geary','AC11','PSAAC9','PSSMAAC','PSSMDPC','SkipGramAA25H20','NumericEncoding20Skip3','MCD4CTD','PSSMLST','JIA_DWT','MLD4CTD','NMBROTO_6_30','AAC20','PSSMDCT','NMBROTO_9','MORAN_9','GEARY_9','PSEAAC_3','conjointTriad','CHAOS','Random500','AllvsAllSim']))
createFeatures(currentDir+'PPI_Datasets/Guo_Data_Yeast_Chen/',set(['SkipGramAA7','OneHotEncoding7','SkipGramAA25H20','NumericEncoding20Skip3','Random500','AllvsAllSim']))
createFeatures(currentDir+'PPI_Datasets/Guo_MultiSpecies_Chen/',set(['AC14_30','conjointTriad','Random500','AllvsAllSim']))
createFeatures(currentDir+'PPI_Datasets/Du_Yeast/',set(['MCD5CTD','LD10_CTD','AC30','AAC20','AAC400','DUMULTIGROUPCTD','Grantham_Sequence_Order_30','Schneider_Sequence_Order_30','Grantham_Quasi_30','Schneider_Quasi_30','APSAAC30_2','PSEAAC_3','Random500','AllvsAllSim']))
createFeatures(currentDir+'PPI_Datasets/Jia_Data_Yeast/',set(['JIA_DWT','CHAOS','Random500','AllvsAllSim']))
createFeatures(currentDir+'PPI_Datasets/Martin_H_pylori/',set(['EGBW11','AC11','PSAAC9','NumericEncoding20Skip3','MCD4CTD','Grantham_Sequence_Order_30','Schneider_Sequence_Order_30','Grantham_Quasi_30','Schneider_Quasi_30','Geary_Zhao_30','NMBroto_Zhao_30','Moran_Zhao_30','PSEAAC_Zhao_30','PSSMDPC','SkipWeightedConjointTriad','PSAAC20','AAC20','AAC400','DUMULTIGROUPCTD','APSAAC30_2','JIA_DWT','MLD4CTD','NMBROTO_6_30','MMI','PSSMDCT','NMBROTO_9','MORAN_9','GEARY_9','PSEAAC_3','LD10_CTD','conjointTriad','CHAOS','Random500','AllvsAllSim']))
createFeatures(currentDir+'PPI_Datasets/Liu_Fruit_Fly/',set(['Grantham_Sequence_Order_30','Schneider_Sequence_Order_30','Grantham_Quasi_30','Schneider_Quasi_30','Geary_Zhao_30','NMBroto_Zhao_30','Moran_Zhao_30','PSEAAC_Zhao_30','Random500','AllvsAllSim']))
createFeatures(currentDir+'PPI_Datasets/Li_AD/',set(['AC30','LD10_CTD','PSAAC15','conjointTriad','PSEAAC_3','Random500','AllvsAllSim']))
createFeatures(currentDir+'PPI_Datasets/Pan_Human_Data/Pan_Large/',set(['AC30','NumericEncoding22','AC14_30','conjointTriad','PSAAC20','Random500','AllvsAllSim']))
createFeatures(currentDir+'PPI_Datasets/Pan_Human_Data/Pan_Small/',set(['SkipGramAA25H20','NumericEncoding20Skip3','PSSMLST','PSSMDPC','SkipWeightedConjointTriad','PSAAC20','conjointTriad','AC30','AAC20','AAC400','DUMULTIGROUPCTD','Grantham_Sequence_Order_30','Schneider_Sequence_Order_30','Grantham_Quasi_30','Schneider_Quasi_30','APSAAC30_2','NMBROTO_6_30','MMI','Random500','AllvsAllSim']))
createFeatures(currentDir+'PPI_Datasets/Pan_Human_Data/Martin_Human/',set(['PSSMDPC','SkipWeightedConjointTriad','PSAAC20','conjointTriad','AC30','Random500','AllvsAllSim']))
createFeatures(currentDir+'PPI_Datasets/Richoux_Human_Data/',set(['OneHotEncoding24','Random500','AllvsAllSim']))
createFeatures(currentDir+'PPI_Datasets/Human2021/',set(['EGBW11','AC30','LD10_CTD','PSAAC15','conjointTriad','MMI','Moran','Geary','PSSMAAC','PSSMDPC','AC11','PSAAC9','SkipGramAA7','OneHotEncoding7','OneHotEncoding24','NumericEncoding22','AC14_30','MCD5CTD','SkipGramAA25H20','NumericEncoding20Skip3','Geary_Zhao_30','NMBroto_Zhao_30','Moran_Zhao_30','PSEAAC_Zhao_30','Grantham_Quasi_30','Schneider_Quasi_30','MCD4CTD','Grantham_Sequence_Order_30','Schneider_Sequence_Order_30','PSSMLST','SkipWeightedConjointTriad','PSAAC20','AAC20','AAC400','DUMULTIGROUPCTD','APSAAC30_2','JIA_DWT','MLD4CTD','NMBROTO_6_30','PSSMDCT','NMBROTO_9','MORAN_9','GEARY_9','PSEAAC_3','CHAOS','Random500','AllvsAllSim']))
if __name__ == '__main__':
genSequenceFeatures()
RunAll()
# models.py (KNU-BrainAI/AD repository, MIT license)
# author: jinhee
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
torch.set_default_tensor_type(torch.DoubleTensor)
"""
Within-subject models: Deep_ConvNet, EEGNet, EEG_TCNet, CCRNN.
Cross-subject variants (prefixed 'sub_') use the same backbones with deeper fully connected classifier heads.
"""
class ConstrainedConv2d(nn.Conv2d):
def forward(self, input):
return F.conv2d(input, self.weight.clamp(min=-1.0, max=1.0), self.bias, self.stride, self.padding, self.dilation, self.groups)
class ConstrainedLinear(nn.Linear):
def forward(self, input):
return F.linear(input, self.weight.clamp(min=-0.25, max=0.25), self.bias)
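Both constrained layers apply a hard, element-wise weight clip at forward time (rather than projecting weights after each optimizer step): `ConstrainedLinear` clips to [-0.25, 0.25] and `ConstrainedConv2d` to [-1.0, 1.0]. Per element, the clamp is simply:

```python
def clamp(w, lo=-0.25, hi=0.25):
    # Element-wise hard clip, as used by ConstrainedLinear above;
    # ConstrainedConv2d uses the same operation with bounds -1.0/1.0.
    return max(lo, min(hi, w))

assert clamp(0.5) == 0.25   # clipped from above
assert clamp(-3.0) == -0.25  # clipped from below
assert clamp(0.1) == 0.1    # in-range weights pass through unchanged
```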
class Deep_ConvNet(nn.Module):
def __init__(self, bias=False, num_class=2):
super(Deep_ConvNet, self).__init__()
self.conv_split = nn.Sequential(
nn.Conv2d(1, 25, (1,10), 1),
nn.Conv2d(25, 25, (32,1), 1, bias=False),
)
self.post_conv = nn.Sequential(
nn.BatchNorm2d(25),
nn.ELU(),
nn.MaxPool2d((1,3), 3),
nn.Dropout(0.3)
)
self.conv_pool1 = nn.Sequential(
nn.Conv2d(25, 50, (1,10), 1, bias=False),
nn.BatchNorm2d(50),
nn.MaxPool2d((1,3), 3),
nn.Dropout(0.3)
)
self.conv_pool2 = nn.Sequential(
nn.Conv2d(50, 100, (1,10), 1, bias=False),
nn.BatchNorm2d(100),
nn.MaxPool2d((1,3), 3),
nn.Dropout(0.3)
)
self.conv_pool3 = nn.Sequential(
nn.Conv2d(100, 200, (1,10), 1, bias=False),
nn.BatchNorm2d(200),
nn.MaxPool2d((1,3), 3),
nn.Dropout(0.3)
)
self.conv_fc = nn.Sequential(
ConstrainedLinear(200*1*1, num_class)
)
def forward(self, x):
out = self.conv_split(x)
out = self.post_conv(out)
out = self.conv_pool1(out)
out = self.conv_pool2(out)
out = self.conv_pool3(out)
out = out.view(-1, np.prod(out.shape[1:]))
out = self.conv_fc(out)
return out
class EEGNet(nn.Module):
def __init__(self, num_class=2, bias=False, drop_ratio=.5, F1=8, D=2):
super(EEGNet, self).__init__()
F2 = F1*D
self.conv_temporal = nn.Sequential(
nn.ZeroPad2d((((250-1)//2)+1, ((250-1)//2), 0, 0)),
nn.Conv2d(1, F1, (1,250), 1, bias=bias),
nn.BatchNorm2d(F1),
)
self.conv_spatial = nn.Sequential(
ConstrainedConv2d(F1, F1*D, (32,1), 1, bias=bias, groups=F1),
nn.BatchNorm2d(F1*D),
nn.ELU(),
nn.AvgPool2d((1,4)),
nn.Dropout(drop_ratio)
)
self.conv_separable = nn.Sequential(
nn.ZeroPad2d((((125-1)//2)+1, ((125-1)//2), 0, 0)),
nn.Conv2d(F1*D, F2, (1,125), 1, bias=bias, groups=F1*D), #depthwise
nn.Conv2d(F2, F2, 1, 1), #pointwise = 1dconv
nn.BatchNorm2d(F2),
nn.ELU(),
nn.AvgPool2d((1,8)), #(12)
nn.Dropout(drop_ratio)
)
self.conv_fc = nn.Sequential(
ConstrainedLinear(F2*1*15, num_class)
#nn.Linear(F2*1*15, num_class) #(16*1*10)
)
def forward(self, x):
out = self.conv_temporal(x)
out = self.conv_spatial(out)
out = self.conv_separable(out)
out = out.view(-1, np.prod(out.shape[1:]))
out = self.conv_fc(out)
return out
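The hard-coded `F2*1*15` input size of `conv_fc` follows from the pooling arithmetic: both temporal convolutions are same-padded, so only the two average pools shrink the time axis, by 4 and then 8. A plain-Python sketch of that arithmetic, assuming 480-sample trials (an assumption; the actual trial length is set by the data loader):

```python
def eegnet_time_out(T):
    # Same-padded (1,250) and (1,125) convolutions leave T unchanged;
    # AvgPool2d((1,4)) then AvgPool2d((1,8)) divide it by 4 and by 8.
    T = T // 4   # after conv_spatial's AvgPool2d((1,4))
    T = T // 8   # after conv_separable's AvgPool2d((1,8))
    return T

print(eegnet_time_out(480))  # -> 15, matching ConstrainedLinear(F2*1*15, ...)
```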
class EEG_TCNet(nn.Module):
def __init__(self, bias=False, num_class=2, drop_ratio=.5, F1=8, D=2):
super(EEG_TCNet, self).__init__()
F2 = F1*D
self.conv_temporal = nn.Sequential(
nn.ZeroPad2d((((250-1)//2)+1, ((250-1)//2), 0, 0)),
nn.Conv2d(1, F1, (1,250), 1, bias=bias),
nn.BatchNorm2d(F1),
)
self.conv_spatial = nn.Sequential(
ConstrainedConv2d(F1, F1*D, (32,1), 1, bias=bias, groups=F1),
nn.BatchNorm2d(F1*D),
nn.ELU(),
nn.AvgPool2d((1,4)),
nn.Dropout(drop_ratio)
)
self.conv_separable = nn.Sequential(
nn.ZeroPad2d((((125-1)//2)+1, ((125-1)//2), 0, 0)),
nn.Conv2d(F1*D, F2, (1,125), 1, bias=bias, groups=F1*D), #depthwise
nn.Conv2d(F2, F2, 1, 1), #pointwise = 1dconv
nn.BatchNorm2d(F2),
nn.ELU(),
nn.AvgPool2d((1,8)), #(12)
nn.Dropout(drop_ratio)
)
self.conv_fc = nn.Sequential(
ConstrainedLinear(F2*1*15, num_class)
#nn.Linear(F2*1*15, num_class) #(16*1*10)
)
# TCN-block
self.tcn_block1 = nn.Sequential(
nn.ZeroPad2d((2,1,0,0)),
nn.Conv1d(F2, F2, 4, 1),
nn.BatchNorm1d(F2),
nn.ELU(),
nn.Dropout(0.3),
nn.ZeroPad2d((2,1,0,0)),
nn.Conv1d(F2, F2, 4, 1),
nn.BatchNorm1d(F2),
nn.ELU(),
nn.Dropout(0.3),
)
self.tcn_block2 = nn.Sequential(
nn.ZeroPad2d((3,3,0,0)),
nn.Conv1d(F2, F2, 4, 1, dilation=2),
nn.BatchNorm1d(F2),
nn.ELU(),
nn.Dropout(0.3),
nn.ZeroPad2d((3,3,0,0)),
nn.Conv1d(F2, F2, 4, 1, dilation=2),
nn.BatchNorm1d(F2),
nn.ELU(),
nn.Dropout(0.3),
)
def forward(self, x):
out = self.conv_temporal(x)
out = self.conv_spatial(out)
out = self.conv_separable(out)
out = torch.squeeze(out, dim=2)
tcn = self.tcn_block1(out)
out = out + tcn
out = nn.ELU()(out)
tcn = self.tcn_block2(out)
out = out + tcn
out = nn.ELU()(out)
out = out.view(-1, np.prod(out.shape[1:]))
out = self.conv_fc(out)
return out
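Both TCN blocks are length-preserving, which is what lets the residual additions `out + tcn` line up: with kernel size 4 the first block pads (2,1) so the length stays L, and with dilation 2 the second pads (3,3), again giving L. A quick stride-1 output-length check:

```python
def conv1d_out_len(L, kernel, pad_left, pad_right, dilation=1):
    # Standard 1-D convolution output length for stride 1.
    return L + pad_left + pad_right - dilation * (kernel - 1)

L = 15  # time steps after the EEGNet-style pooling stages
assert conv1d_out_len(L, 4, 2, 1) == L              # tcn_block1 convs
assert conv1d_out_len(L, 4, 3, 3, dilation=2) == L  # tcn_block2 convs
```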
class CCRNN(nn.Module):
def __init__(self, num_classes=2, drop_ratio=0.5, nSeg=30):
super(CCRNN, self).__init__()
self.nSeg = nSeg
self.conv_module = nn.Sequential(
nn.Conv2d(1, 32, 3, 1, padding=(3-1)//2),
nn.ELU(),
nn.Conv2d(32, 64, 3, 1, padding=(3-1)//2),
nn.ELU(),
nn.Conv2d(64, 128, 3, 1, padding=(3-1)//2),
nn.ELU()
)
self.conv_fc = nn.Sequential(
nn.Linear(128*7*5, 1024),
nn.ELU(),
nn.Dropout(drop_ratio)
)
self.rnn_module = nn.Sequential(
nn.LSTM(1024, 64, 2, batch_first=True, dropout=drop_ratio)
)
self.rnn_fc = nn.Sequential(
nn.Linear(64, 1024),
nn.ELU(),
nn.Dropout(drop_ratio)
)
self.readout = nn.Sequential(
nn.Linear(1024, num_classes)
)
def forward(self, x):
out = self.conv_module(x)
out = out.reshape(out.shape[0], np.prod(out.shape[1:]))
out = self.conv_fc(out)
out = out.reshape(-1, self.nSeg, out.shape[-1])
out, (hn, cn) = self.rnn_module(out)
out = out[:, -1]
out = self.rnn_fc(out)
out = self.readout(out)
return out
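The `128*7*5` in `conv_fc` implies each input frame is a 7x5 feature map (presumably an electrode mesh; the exact layout is an assumption here). The 3x3 convolutions use padding (3-1)//2 = 1 and stride 1, so they preserve the spatial size and only the channel count grows:

```python
def ccrnn_fc_in(h, w, out_channels=128):
    # 3x3 convs with padding 1 and stride 1 preserve (h, w), so the
    # flattened feature size is just out_channels * h * w.
    return out_channels * h * w

print(ccrnn_fc_in(7, 5))  # -> 4480, the in_features of conv_fc's Linear
```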
class sub_Deep_ConvNet(nn.Module):
def __init__(self, bias=False, drop_ratio=0.5, num_class=2):
super(sub_Deep_ConvNet, self).__init__()
self.conv_split = nn.Sequential(
nn.Conv2d(1, 25, (1,10), 1),
nn.Conv2d(25, 25, (32,1), 1, bias=False),
)
self.post_conv = nn.Sequential(
nn.BatchNorm2d(25),
nn.ELU(),
nn.MaxPool2d((1,3), 3),
nn.Dropout(0.3)
)
self.conv_pool1 = nn.Sequential(
nn.Conv2d(25, 50, (1,10), 1, bias=False),
nn.BatchNorm2d(50),
nn.MaxPool2d((1,3), 3),
nn.Dropout(0.3)
)
self.conv_pool2 = nn.Sequential(
nn.Conv2d(50, 100, (1,10), 1, bias=False),
nn.BatchNorm2d(100),
nn.MaxPool2d((1,3), 3),
nn.Dropout(0.3)
)
self.conv_pool3 = nn.Sequential(
nn.Conv2d(100, 200, (1,10), 1, bias=False),
nn.BatchNorm2d(200),
nn.MaxPool2d((1,3), 3),
nn.Dropout(0.3)
)
self.conv_fc = nn.Sequential(
ConstrainedLinear(200*1*1, 1024),
nn.Dropout(drop_ratio),
ConstrainedLinear(1024, 512),
nn.Dropout(drop_ratio),
ConstrainedLinear(512, num_class)
)
def forward(self, x):
out = self.conv_split(x)
out = self.post_conv(out)
out = self.conv_pool1(out)
out = self.conv_pool2(out)
out = self.conv_pool3(out)
out = out.view(-1, np.prod(out.shape[1:]))
out = self.conv_fc(out)
return out
class sub_EEGNet(nn.Module):
def __init__(self, drop_ratio=.5, bias=False, num_class=2, F1=8, D=2):
super(sub_EEGNet, self).__init__()
F2 = F1*D
self.conv_temporal = nn.Sequential(
nn.ZeroPad2d((((250-1)//2)+1, ((250-1)//2), 0, 0)),
nn.Conv2d(1, F1, (1,250), 1, bias=bias),
nn.BatchNorm2d(F1),
)
self.conv_spatial = nn.Sequential(
ConstrainedConv2d(F1, F1*D, (32,1), 1, bias=bias, groups=F1),
nn.BatchNorm2d(F1*D),
nn.ELU(),
nn.AvgPool2d((1,4)),
nn.Dropout(drop_ratio)
)
self.conv_separable = nn.Sequential(
nn.ZeroPad2d((((125-1)//2)+1, ((125-1)//2), 0, 0)),
nn.Conv2d(F1*D, F2, (1,125), 1, bias=bias, groups=F1*D), #depthwise
nn.Conv2d(F2, F2, 1, 1), #pointwise = 1dconv
nn.BatchNorm2d(F2),
nn.ELU(),
nn.AvgPool2d((1,8)), #(12)
nn.Dropout(drop_ratio)
)
self.conv_fc = nn.Sequential(
ConstrainedLinear(F2*1*15, 1024),
#nn.Linear(F2*1*15, 1024),
nn.Dropout(drop_ratio),
ConstrainedLinear(1024, 512),
#nn.Linear(1024, 512),
nn.Dropout(drop_ratio),
ConstrainedLinear(512, num_class)
#nn.Linear(512, num_class)
)
def forward(self, x):
out = self.conv_temporal(x)
out = self.conv_spatial(out)
out = self.conv_separable(out)
out = out.view(-1, np.prod(out.shape[1:]))
out = self.conv_fc(out)
return out
class sub_EEG_TCNet(nn.Module):
def __init__(self, bias=False, drop_ratio=.5, num_class=2, F1=8, D=2):
super(sub_EEG_TCNet, self).__init__()
F2 = F1*D
self.conv_temporal = nn.Sequential(
nn.ZeroPad2d((((250-1)//2)+1, ((250-1)//2), 0, 0)),
nn.Conv2d(1, F1, (1,250), 1, bias=bias),
nn.BatchNorm2d(F1),
)
self.conv_spatial = nn.Sequential(
ConstrainedConv2d(F1, F1*D, (32,1), 1, bias=bias, groups=F1),
nn.BatchNorm2d(F1*D),
nn.ELU(),
nn.AvgPool2d((1,4)),
nn.Dropout(drop_ratio)
)
self.conv_separable = nn.Sequential(
nn.ZeroPad2d((((125-1)//2)+1, ((125-1)//2), 0, 0)),
nn.Conv2d(F1*D, F2, (1,125), 1, bias=bias, groups=F1*D), #depthwise
nn.Conv2d(F2, F2, 1, 1), #pointwise = 1dconv
nn.BatchNorm2d(F2),
nn.ELU(),
nn.AvgPool2d((1,8)), #(12)
nn.Dropout(drop_ratio)
)
self.conv_fc = nn.Sequential(
ConstrainedLinear(F2*1*15, 1024),
nn.Dropout(drop_ratio),
ConstrainedLinear(1024, 512),
nn.Dropout(drop_ratio),
ConstrainedLinear(512, num_class)
)
# TCN-block
self.tcn_block1 = nn.Sequential(
nn.ZeroPad2d((2,1,0,0)),
nn.Conv1d(F2, F2, 4, 1),
nn.BatchNorm1d(F2),
nn.ELU(),
nn.Dropout(0.3),
nn.ZeroPad2d((2,1,0,0)),
nn.Conv1d(F2, F2, 4, 1),
nn.BatchNorm1d(F2),
nn.ELU(),
nn.Dropout(0.3),
)
self.tcn_block2 = nn.Sequential(
nn.ZeroPad2d((3,3,0,0)),
nn.Conv1d(F2, F2, 4, 1, dilation=2),
nn.BatchNorm1d(F2),
nn.ELU(),
nn.Dropout(0.3),
nn.ZeroPad2d((3,3,0,0)),
nn.Conv1d(F2, F2, 4, 1, dilation=2),
nn.BatchNorm1d(F2),
nn.ELU(),
nn.Dropout(0.3),
)
def forward(self, x):
out = self.conv_temporal(x)
out = self.conv_spatial(out)
out = self.conv_separable(out)
        out = torch.squeeze(out, dim=2)  # (N, F2, 1, T) -> (N, F2, T); dim is the documented kwarg
tcn = self.tcn_block1(out)
out = out + tcn
out = nn.ELU()(out)
tcn = self.tcn_block2(out)
out = out + tcn
out = nn.ELU()(out)
out = out.view(-1, np.prod(out.shape[1:]))
out = self.conv_fc(out)
return out
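The TCN blocks must keep the sequence length fixed so the residual additions in `forward` line up; `nn.ZeroPad2d` applied to a 3-D `(batch, channels, time)` tensor pads only the last (time) axis. A minimal sketch of that padding arithmetic, with an arbitrary channel count and length:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Dilated Conv1d with kernel 4 and dilation 2 shrinks the length by
# dilation * (k - 1) = 6, so padding (3, 3) on the time axis restores it,
# and the residual sum x + f(x) is well defined.
channels = 16
block = nn.Sequential(
    nn.ZeroPad2d((3, 3, 0, 0)),  # on 3-D input, pads only the last axis
    nn.Conv1d(channels, channels, kernel_size=4, dilation=2),
)

x = torch.randn(8, channels, 15)
y = F.elu(x + block(x))          # residual connection + activation
assert y.shape == x.shape
```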
class sub_CCRNN(nn.Module):
def __init__(self, drop_ratio=0.5, nSeg=30, num_classes=2):
super(sub_CCRNN, self).__init__()
self.nSeg = nSeg
self.conv_module = nn.Sequential(
nn.Conv2d(1, 32, 3, 1, padding=(3-1)//2),
nn.ELU(),
nn.Conv2d(32, 64, 3, 1, padding=(3-1)//2),
nn.ELU(),
nn.Conv2d(64, 128, 3, 1, padding=(3-1)//2),
nn.ELU()
)
self.conv_fc = nn.Sequential(
nn.Linear(128*7*5, 1024),
nn.ELU(),
nn.Dropout(drop_ratio)
)
self.rnn_module = nn.Sequential(
nn.LSTM(1024, 64, 2, batch_first=True, dropout=drop_ratio)
)
self.rnn_fc = nn.Sequential(
nn.Linear(64, 1024),
nn.ELU(),
nn.Dropout(drop_ratio)
)
self.readout = nn.Sequential(
nn.Linear(1024, 128),
nn.Dropout(drop_ratio),
nn.Linear(128, 128),
nn.Dropout(drop_ratio),
nn.Linear(128, num_classes)
)
def forward(self, x):
out = self.conv_module(x)
out = out.reshape(out.shape[0], np.prod(out.shape[1:]))
out = self.conv_fc(out)
out = out.reshape(-1, self.nSeg, out.shape[-1])
out, (hn, cn) = self.rnn_module(out)
out = out[:, -1]
out = self.rnn_fc(out)
out = self.readout(out)
return out
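`sub_CCRNN.forward` regroups per-segment CNN features of shape `(batch * nSeg, feat)` into sequences of length `nSeg` before the LSTM, then keeps only the last time step. A minimal sketch of that reshape, with sizes mirroring the defaults above (dropout omitted for brevity):

```python
import torch
import torch.nn as nn

batch, nSeg, feat = 4, 30, 1024
lstm = nn.LSTM(feat, 64, num_layers=2, batch_first=True)

flat = torch.randn(batch * nSeg, feat)  # stand-in for the conv_fc output
seq = flat.reshape(-1, nSeg, feat)      # (batch, nSeg, feat)
out, (hn, cn) = lstm(seq)
last = out[:, -1]                       # (batch, hidden) = (4, 64)
assert last.shape == (batch, 64)
```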
| 32.400881 | 134 | 0.498368 | 1,957 | 14,710 | 3.635667 | 0.065406 | 0.068587 | 0.064933 | 0.050597 | 0.938721 | 0.927618 | 0.907519 | 0.899649 | 0.885032 | 0.863106 | 0 | 0.096296 | 0.344867 | 14,710 | 453 | 135 | 32.472406 | 0.642005 | 0.022366 | 0 | 0.793451 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04534 | false | 0 | 0.010076 | 0.005038 | 0.105793 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
38e722462ed84e6769a401e82b6799009c7fd399 | 205 | py | Python | Codewars/6kyu/micro-world/Python/test.py | RevansChen/online-judge | ad1b07fee7bd3c49418becccda904e17505f3018 | [
"MIT"
] | 7 | 2017-09-20T16:40:39.000Z | 2021-08-31T18:15:08.000Z | Codewars/6kyu/micro-world/Python/test.py | RevansChen/online-judge | ad1b07fee7bd3c49418becccda904e17505f3018 | [
"MIT"
] | null | null | null | Codewars/6kyu/micro-world/Python/test.py | RevansChen/online-judge | ad1b07fee7bd3c49418becccda904e17505f3018 | [
"MIT"
] | null | null | null | # Python - 3.6.0
Test.assert_equals(micro_world([101, 53, 42, 102, 101, 55, 54], 1), 3)
Test.assert_equals(micro_world([20, 15, 10, 15, 20, 25], 5), 1)
Test.assert_equals(micro_world([5, 3, 1, 5], 1), 4)
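These expected values are consistent with a reference implementation of the kata's rule as assumed here: bacterium `x` is swallowed when some `y` exists with `x < y <= x + k`, and the survivors are counted:

```python
def micro_world(bacteria, k):
    # A bacterium survives if no strictly larger bacterium is within k of it.
    return sum(
        1 for x in bacteria
        if not any(x < y <= x + k for y in bacteria)
    )

assert micro_world([101, 53, 42, 102, 101, 55, 54], 1) == 3
assert micro_world([20, 15, 10, 15, 20, 25], 5) == 1
assert micro_world([5, 3, 1, 5], 1) == 4
```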
| 34.166667 | 70 | 0.643902 | 42 | 205 | 3 | 0.52381 | 0.238095 | 0.380952 | 0.5 | 0.619048 | 0 | 0 | 0 | 0 | 0 | 0 | 0.237288 | 0.136585 | 205 | 5 | 71 | 41 | 0.474576 | 0.068293 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
2a32aa2ae6dd29c7fb71209109c9605f27fd33b2 | 70 | py | Python | celery_config.py | hoslo/ocr | 4f78ae7013beb2cab8fb9391ba25ba5e6e78967c | [
"Apache-2.0"
] | 4 | 2019-05-27T10:23:55.000Z | 2020-01-19T10:03:14.000Z | celery_config.py | dun933/ocr | 4f78ae7013beb2cab8fb9391ba25ba5e6e78967c | [
"Apache-2.0"
] | null | null | null | celery_config.py | dun933/ocr | 4f78ae7013beb2cab8fb9391ba25ba5e6e78967c | [
"Apache-2.0"
] | 3 | 2019-08-16T18:24:02.000Z | 2020-05-15T06:35:45.000Z | broker = 'redis://127.0.0.1:6379/0'
backend = 'redis://127.0.0.1:6379/1' | 35 | 35 | 0.642857 | 16 | 70 | 2.8125 | 0.4375 | 0.355556 | 0.4 | 0.444444 | 0.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0.328358 | 0.042857 | 70 | 2 | 36 | 35 | 0.343284 | 0 | 0 | 0 | 0 | 0 | 0.676056 | 0.676056 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9
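A sketch of how such a module is typically consumed; the Celery lines are commented out (celery is a third-party dependency) and the app name `'ocr'` is only an assumption, while the stdlib check below actually runs:

```python
# from celery import Celery
# app = Celery('ocr')
# app.config_from_object('celery_config')  # picks up `broker` and `backend`

# The two URLs point at the same Redis server but different logical databases:
from urllib.parse import urlparse

broker = 'redis://127.0.0.1:6379/0'
backend = 'redis://127.0.0.1:6379/1'
assert urlparse(broker).path == '/0' and urlparse(backend).path == '/1'
```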
2a4dddfd454aed1911e285f795bfb926288de8d2 | 34 | py | Python | passlocker/__init__.py | chrislee35/passlocker | 337b225db3b9281ea58c54c9334658b8c7b27f72 | [
"MIT"
] | 2 | 2020-11-23T17:49:38.000Z | 2020-12-27T12:47:08.000Z | passlocker/__init__.py | chrislee35/passlocker | 337b225db3b9281ea58c54c9334658b8c7b27f72 | [
"MIT"
] | null | null | null | passlocker/__init__.py | chrislee35/passlocker | 337b225db3b9281ea58c54c9334658b8c7b27f72 | [
"MIT"
] | null | null | null | from .passlocker import PassLocker | 34 | 34 | 0.882353 | 4 | 34 | 7.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 34 | 1 | 34 | 34 | 0.967742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 1 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |
2a6495d0d4b0a64a39ffbda616ca7132327d7f5b | 6,362 | py | Python | caffe/mnist-gpu/model/test_predict.py | melatonin355/models | fc2500aab024f71fa8cf33e13d748703338612a8 | [
"Apache-2.0"
] | null | null | null | caffe/mnist-gpu/model/test_predict.py | melatonin355/models | fc2500aab024f71fa8cf33e13d748703338612a8 | [
"Apache-2.0"
] | null | null | null | caffe/mnist-gpu/model/test_predict.py | melatonin355/models | fc2500aab024f71fa8cf33e13d748703338612a8 | [
"Apache-2.0"
] | 1 | 2019-06-10T22:57:15.000Z | 2019-06-10T22:57:15.000Z | import pipeline_predict
json_bytes = b'{"image": [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.05098039656877518, 0.529411792755127, 0.3960784673690796, 0.572549045085907, 0.572549045085907, 0.847058892250061, 0.8156863451004028, 0.9960784912109375, 1.0, 1.0, 0.9960784912109375, 0.5960784554481506, 0.027450982481241226, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.32156863808631897, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.7882353663444519, 0.11764706671237946, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.32156863808631897, 0.9921569228172302, 0.988235354423523, 0.7921569347381592, 0.9450981020927429, 0.545098066329956, 0.21568629145622253, 0.3450980484485626, 0.45098042488098145, 0.125490203499794, 0.125490203499794, 0.03921568766236305, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.32156863808631897, 0.9921569228172302, 0.803921639919281, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.6352941393852234, 0.9921569228172302, 0.803921639919281, 0.24705883860588074, 0.3490196168422699, 0.6509804129600525, 0.32156863808631897, 0.32156863808631897, 0.1098039299249649, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.007843137718737125, 0.7529412508010864, 0.9921569228172302, 0.9725490808486938, 0.9686275124549866, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.8274510502815247, 0.29019609093666077, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2549019753932953, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.847058892250061, 0.027450982481241226, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.5921568870544434, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.7333333492279053, 0.44705885648727417, 0.23137256503105164, 0.23137256503105164, 0.4784314036369324, 0.9921569228172302, 0.9921569228172302, 0.03921568766236305, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.5568627715110779, 0.9568628072738647, 0.7098039388656616, 0.08235294371843338, 0.019607843831181526, 0.0, 0.0, 0.0, 0.08627451211214066, 0.9921569228172302, 0.9921569228172302, 0.43137258291244507, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.15294118225574493, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.08627451211214066, 0.9921569228172302, 0.9921569228172302, 0.46666669845581055, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.08627451211214066, 0.9921569228172302, 0.9921569228172302, 0.46666669845581055, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 
0.0, 0.0, 0.0, 0.0, 0.0, 0.08627451211214066, 0.9921569228172302, 0.9921569228172302, 0.46666669845581055, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1882353127002716, 0.9921569228172302, 0.9921569228172302, 0.46666669845581055, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.6705882549285889, 0.9921569228172302, 0.9921569228172302, 0.12156863510608673, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2392157018184662, 0.9647059440612793, 0.9921569228172302, 0.6274510025978088, 0.003921568859368563, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.08235294371843338, 0.44705885648727417, 0.16470588743686676, 0.0, 0.0, 0.2549019753932953, 0.9294118285179138, 0.9921569228172302, 0.9333333969116211, 0.27450981736183167, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4941176772117615, 0.9529412388801575, 0.0, 0.0, 0.5803921818733215, 0.9333333969116211, 0.9921569228172302, 0.9921569228172302, 0.4078431725502014, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7411764860153198, 0.9764706492424011, 0.5529412031173706, 0.8784314393997192, 0.9921569228172302, 0.9921569228172302, 0.9490196704864502, 0.43529415130615234, 0.007843137718737125, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.6235294342041016, 0.9921569228172302, 0.9921569228172302, 0.9921569228172302, 0.9764706492424011, 0.6274510025978088, 0.1882353127002716, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.18431372940540314, 0.5882353186607361, 0.729411780834198, 0.5686274766921997, 0.3529411852359772, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]}'
print(pipeline_predict.predict(json_bytes))
| 1,060.333333 | 6,291 | 0.626847 | 1,581 | 6,362 | 2.519924 | 0.054396 | 0.630522 | 0.926205 | 1.209839 | 0.620231 | 0.571787 | 0.571787 | 0.536145 | 0.536145 | 0.518825 | 0 | 0.702334 | 0.124489 | 6,362 | 5 | 6,292 | 1,272.4 | 0.012926 | 0 | 0 | 0 | 0 | 0.333333 | 0.986325 | 0.019805 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.333333 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 11 |
aa7842242752b6f74dd5a35b2eb0c5fb4bedcbb7 | 58 | py | Python | core-python/Core_Python/extraknowledge/TypeTest.py | theumang100/tutorials-1 | 497f54c2adb022c316530319a168fca1c007d4b1 | [
"MIT"
] | 9 | 2020-04-23T05:24:19.000Z | 2022-02-17T16:37:51.000Z | core-python/Core_Python/extraknowledge/TypeTest.py | theumang100/tutorials-1 | 497f54c2adb022c316530319a168fca1c007d4b1 | [
"MIT"
] | 5 | 2020-10-01T05:08:37.000Z | 2020-10-12T03:18:10.000Z | core-python/Core_Python/extraknowledge/TypeTest.py | theumang100/tutorials-1 | 497f54c2adb022c316530319a168fca1c007d4b1 | [
"MIT"
] | 9 | 2020-04-28T14:06:41.000Z | 2021-10-19T18:32:28.000Z | print(type(type(int)))
print(type(int))
print(type(float)) | 19.333333 | 22 | 0.724138 | 10 | 58 | 4.2 | 0.4 | 0.642857 | 0.571429 | 0.761905 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034483 | 58 | 3 | 23 | 19.333333 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
2aec9a17238d8649b07f43484a27516e162a0699 | 9,199 | py | Python | advanced/part12-15_hockey_statistics/test/test_hockey_statistics2.py | Hannah-Abi/python-pro-21 | 2ce32c4bf118054329d19afdf83c50561be1ada8 | [
"MIT"
] | null | null | null | advanced/part12-15_hockey_statistics/test/test_hockey_statistics2.py | Hannah-Abi/python-pro-21 | 2ce32c4bf118054329d19afdf83c50561be1ada8 | [
"MIT"
] | null | null | null | advanced/part12-15_hockey_statistics/test/test_hockey_statistics2.py | Hannah-Abi/python-pro-21 | 2ce32c4bf118054329d19afdf83c50561be1ada8 | [
"MIT"
] | null | null | null | import unittest
from unittest.mock import patch
from tmc import points, reflect
from tmc.utils import load, load_module, reload_module, get_stdout, check_source
from functools import reduce
import os
import os.path
import textwrap
from random import choice, randint
from datetime import date, datetime, timedelta
exercise = 'src.hockey_statistics'
def s(l: list):
return "\n".join(l)
@points('12.hockey_statistics2')
class HockeyStatistics2Test(unittest.TestCase):
@classmethod
def setUpClass(cls):
with patch('builtins.input', side_effect=["partial.json", "0"]):
cls.module = load_module(exercise, 'fi')
def test_01_team_players_1(self):
input_values = ["partial.json", "4" , "WSH", "0"]
with patch('builtins.input', side_effect=input_values):
try:
reload_module(self.module)
except:
self.fail(f"Check that your program works with input\n{s(input_values)}")
output = get_stdout()
self.assertFalse(len(output)==0,'Your code does not output anything. Check that it is not inside if __name__ == "__main__" block.')
exp = """Jakub Vrana WSH 25 + 27 = 52
Jonas Siegenthaler WSH 2 + 7 = 9"""
for line in exp.split("\n"):
if not line in output:
self.fail(f"Your program should output line\n{line}\nwhen the program is executed with input\n{s(input_values)}\nNow the output was\n{output}")
output_lines = output.split('\n')
exp_lines = exp.split("\n")
n = output_lines.index(exp_lines[0])
for i in range(len(exp_lines)):
try:
oo = output_lines[n+i]
except:
self.fail(f"when the program is executed with input\n{s(input_values)}\nOutput \n{output}\nis not in correct order, it should be\n{exp}")
ee = exp_lines[i]
self.assertEqual(oo, ee, f"when the program is executed with input\n{s(input_values)}\nOutput \n{output}\nis not in correct order, it should be\n{exp}")
def test_02_team_players_2(self):
input_values = ["partial.json", "4" , "DAL", "0"]
with patch('builtins.input', side_effect=input_values):
try:
reload_module(self.module)
except:
self.fail(f"Check that your program works with input\n{s(input_values)}")
output = get_stdout()
exp = """John Klingberg DAL 6 + 26 = 32
Taylor Fedun DAL 2 + 7 = 9"""
for line in exp.split("\n"):
if not line in output:
self.fail(f"Your program should output line\n{line}\nwhen the program is executed with input\n{s(input_values)}\nNow the output was\n{output}")
output_lines = output.split('\n')
exp_lines = exp.split("\n")
n = output_lines.index(exp_lines[0])
for i in range(len(exp_lines)):
try:
oo = output_lines[n+i]
except:
self.fail(f"when the program is executed with input\n{s(input_values)}\nOutput \n{output}\nis not in correct order, it should be\n{exp}")
ee = exp_lines[i]
self.assertEqual(oo, ee, f"when the program is executed with input\n{s(input_values)}\nOutput \n{output}\nis not in correct order, it should be\n{exp}")
def test_03_country_players_1(self):
input_values = ["partial.json", "5" , "CAN", "0"]
with patch('builtins.input', side_effect=input_values):
try:
reload_module(self.module)
except:
self.fail(f"Check that your program works with input\n{s(input_values)}")
output = get_stdout()
exp = """Jared McCann PIT 14 + 21 = 35
Travis Zajac NJD 9 + 16 = 25
Taylor Fedun DAL 2 + 7 = 9
Mark Jankowski CGY 5 + 2 = 7
Logan Shaw WPG 3 + 2 = 5"""
for line in exp.split("\n"):
if not line in output:
self.fail(f"Your program should output line\n{line}\nwhen the program is executed with input\n{s(input_values)}\nNow the output was\n{output}")
output_lines = output.split('\n')
exp_lines = exp.split("\n")
n = output_lines.index(exp_lines[0])
for i in range(len(exp_lines)):
try:
oo = output_lines[n+i]
except:
self.fail(f"when the program is executed with input\n{s(input_values)}\nOutput \n{output}\nis not in correct order, it should be\n{exp}")
ee = exp_lines[i]
self.assertEqual(oo, ee, f"when the program is executed with input\n{s(input_values)}\nOutput \n{output}\nis not in correct order, it should be\n{exp}")
def test_04_country_players_2(self):
input_values = ["partial.json", "5" , "SWE", "0"]
with patch('builtins.input', side_effect=input_values):
try:
reload_module(self.module)
except:
self.fail(f"Check that your program works with input\n{s(input_values)}")
output = get_stdout()
exp = """John Klingberg DAL 6 + 26 = 32
Jonathan Davidsson OTT 0 + 1 = 1"""
for line in exp.split("\n"):
if not line in output:
self.fail(f"Your program should output line\n{line}\nwhen the program is executed with input\n{s(input_values)}\nNow the output was\n{output}")
output_lines = output.split('\n')
exp_lines = exp.split("\n")
n = output_lines.index(exp_lines[0])
for i in range(len(exp_lines)):
try:
oo = output_lines[n+i]
except:
self.fail(f"when the program is executed with input\n{s(input_values)}\nOutput \n{output}\nis not in correct order, it should be\n{exp}")
ee = exp_lines[i]
self.assertEqual(oo, ee, f"when the program is executed with input\n{s(input_values)}\nOutput \n{output}\nis not in correct order, it should be\n{exp}")
def test_05_country_players_big_file_1(self):
input_values = ["all.json", "5" , "AUS", "0"]
with patch('builtins.input', side_effect=input_values):
try:
reload_module(self.module)
except:
self.fail(f"Check that your program works with input\n{s(input_values)}")
output = get_stdout()
exp = """Nathan Walker STL 1 + 1 = 2"""
for line in exp.split("\n"):
if not line in output:
self.fail(f"Your program should output line\n{line}\nwhen the program is executed with input\n{s(input_values)}\nNow the output was\n{output}")
output_lines = output.split('\n')
exp_lines = exp.split("\n")
n = output_lines.index(exp_lines[0])
for i in range(len(exp_lines)):
try:
oo = output_lines[n+i]
except:
self.fail(f"when the program is executed with input\n{s(input_values)}\nOutput \n{output}\nis not in correct order, it should be\n{exp}")
ee = exp_lines[i]
self.assertEqual(oo, ee, f"when the program is executed with input\n{s(input_values)}\nOutput \n{output}\nis not in correct order, it should be\n{exp}")
def test_06_country_players_big_file_2(self):
input_values = ["all.json", "5" , "AUT", "0"]
with patch('builtins.input', side_effect=input_values):
try:
reload_module(self.module)
except:
self.fail(f"Check that your program works with input\n{s(input_values)}")
output = get_stdout()
exp = """Michael Raffl PHI 8 + 12 = 20
Michael Grabner ARI 8 + 3 = 11"""
for line in exp.split("\n"):
if not line in output:
self.fail(f"Your program should output line\n{line}\nwhen the program is executed with input\n{s(input_values)}\nNow the output was\n{output}")
output_lines = output.split('\n')
exp_lines = exp.split("\n")
n = output_lines.index(exp_lines[0])
for i in range(len(exp_lines)):
try:
oo = output_lines[n+i]
except:
self.fail(f"when the program is executed with input\n{s(input_values)}\nOutput \n{output}\nis not in correct order, it should be\n{exp}")
ee = exp_lines[i]
self.assertEqual(oo, ee, f"when the program is executed with input\n{s(input_values)}\nOutput \n{output}\nis not in correct order, it should be\n{exp}")
if __name__ == '__main__':
unittest.main() | 45.315271 | 170 | 0.558865 | 1,250 | 9,199 | 3.9944 | 0.1384 | 0.079311 | 0.048067 | 0.052874 | 0.828961 | 0.828961 | 0.806329 | 0.779091 | 0.779091 | 0.779091 | 0 | 0.016069 | 0.330253 | 9,199 | 203 | 171 | 45.315271 | 0.794352 | 0 | 0 | 0.716049 | 0 | 0.117284 | 0.385761 | 0.08087 | 0 | 0 | 0 | 0 | 0.04321 | 1 | 0.049383 | false | 0 | 0.061728 | 0.006173 | 0.123457 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
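A self-contained sketch of the `patch(side_effect=...)` pattern these tests use to script `input()` prompts: each call to `input()` consumes the next value from the list.

```python
from unittest.mock import patch

def read_two():
    # Stand-in for code under test that prompts twice.
    return input(), input()

with patch('builtins.input', side_effect=['partial.json', '0']):
    result = read_two()

assert result == ('partial.json', '0')
```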
2afaa17e5456401e6879f89535d519d6ec5c0972 | 2,391 | py | Python | project/iotd/workish/all_views/apis/tasks.py | balangheorghe/CloudComputing | b4ca2209e1a2292abffcb559dc942430a2862296 | [
"Apache-2.0"
] | null | null | null | project/iotd/workish/all_views/apis/tasks.py | balangheorghe/CloudComputing | b4ca2209e1a2292abffcb559dc942430a2862296 | [
"Apache-2.0"
] | null | null | null | project/iotd/workish/all_views/apis/tasks.py | balangheorghe/CloudComputing | b4ca2209e1a2292abffcb559dc942430a2862296 | [
"Apache-2.0"
] | null | null | null | from django.shortcuts import render
from ...utils import get_user_role, check_admin
from django.contrib.auth.decorators import login_required
@login_required(login_url='login')
def task_view(request):
isAdmin = check_admin(get_user_role(request))
api_name = "view_not_admin"
if isAdmin:
api_name = "view"
if request.method == 'POST':
return render(request, 'workish/views/apis/tasks/{}.html'.format(api_name), {'error': 'Working on it!'})
else:
return render(request, 'workish/views/apis/tasks/{}.html'.format(api_name), {'error': 'Working on it!'})
@login_required(login_url='login')
def task_create(request):
isAdmin = check_admin(get_user_role(request))
working_on_it = "working_on_it"
if isAdmin:
working_on_it = "working_on_it_admin"
if not isAdmin:
return render(request, 'workish/views/auth/{}.html'.format(working_on_it), {'error': 'Access Denied!'})
api_name = 'create'
if request.method == 'POST':
return render(request, 'workish/views/apis/tasks/{}.html'.format(api_name), {'error': 'Working on it!'})
else:
return render(request, 'workish/views/apis/tasks/{}.html'.format(api_name), {'error': 'Working on it!'})
@login_required(login_url='login')
def task_request(request):
isAdmin = check_admin(get_user_role(request))
working_on_it = "working_on_it"
if isAdmin:
working_on_it = "working_on_it_admin"
api_name = 'request'
if request.method == 'POST':
return render(request, 'workish/views/apis/tasks/{}.html'.format(api_name), {'error': 'Working on it!'})
else:
return render(request, 'workish/views/apis/tasks/{}.html'.format(api_name), {'error': 'Working on it!'})
@login_required(login_url='login')
def task_to_approve(request):
isAdmin = check_admin(get_user_role(request))
working_on_it = "working_on_it"
if isAdmin:
working_on_it = "working_on_it_admin"
if not isAdmin:
return render(request, 'workish/views/auth/{}.html'.format(working_on_it), {'error': 'Access Denied!'})
api_name = 'to_approve'
if request.method == 'POST':
return render(request, 'workish/views/apis/tasks/{}.html'.format(api_name), {'error': 'Working on it!'})
else:
return render(request, 'workish/views/apis/tasks/{}.html'.format(api_name), {'error': 'Working on it!'})
| 35.161765 | 112 | 0.677959 | 325 | 2,391 | 4.753846 | 0.138462 | 0.128155 | 0.156634 | 0.168285 | 0.86343 | 0.86343 | 0.86343 | 0.842071 | 0.814887 | 0.814887 | 0 | 0 | 0.167712 | 2,391 | 67 | 113 | 35.686567 | 0.776382 | 0 | 0 | 0.76 | 0 | 0 | 0.280636 | 0.128816 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0 | 0.06 | 0 | 0.34 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
2dd06cfd36fd6dff170b0c31dbf01e51606153fc | 200 | py | Python | reVX/hybrid_stats/__init__.py | NREL/reVX | 4d62eb2c003c3b53b959f7a58bdc342d18098884 | [
"BSD-3-Clause"
] | 7 | 2020-04-06T00:29:55.000Z | 2022-01-23T20:00:14.000Z | reVX/hybrid_stats/__init__.py | NREL/reVX | 4d62eb2c003c3b53b959f7a58bdc342d18098884 | [
"BSD-3-Clause"
] | 67 | 2020-02-28T20:15:35.000Z | 2022-03-31T21:34:52.000Z | reVX/hybrid_stats/__init__.py | NREL/reVX | 4d62eb2c003c3b53b959f7a58bdc342d18098884 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Sub-package to compute hybrid solar-wind generation stats
"""
from reVX.hybrid_stats.hybrid_stats import HybridStats
from reVX.hybrid_stats.temporal_agg import TemporalAgg
| 28.571429 | 57 | 0.785 | 28 | 200 | 5.464286 | 0.678571 | 0.215686 | 0.183007 | 0.248366 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005618 | 0.11 | 200 | 6 | 58 | 33.333333 | 0.853933 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
9356181b70c3d16eeed53436f20bd9f3be44436d | 24,211 | py | Python | django_flex_user/tests/views/test_endpoint_users_user.py | ebenh/django-flex-user | efffb21e4ce33d2ea8665756334e2a391f4b5a72 | [
"MIT"
] | 1 | 2021-09-13T20:26:02.000Z | 2021-09-13T20:26:02.000Z | django_flex_user/tests/views/test_endpoint_users_user.py | ebenh/django-flex-user | efffb21e4ce33d2ea8665756334e2a391f4b5a72 | [
"MIT"
] | null | null | null | django_flex_user/tests/views/test_endpoint_users_user.py | ebenh/django-flex-user | efffb21e4ce33d2ea8665756334e2a391f4b5a72 | [
"MIT"
] | null | null | null | from rest_framework.test import APITestCase
from rest_framework import status
class TestFlexUserRetrieveUpdate(APITestCase):
"""
This class is designed to test django_flex_user.views.FlexUser
"""
_REST_ENDPOINT_PATH = '/api/accounts/users/user/'
def test_method_get(self):
response = self.client.get(self._REST_ENDPOINT_PATH)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
def test_method_post(self):
response = self.client.post(self._REST_ENDPOINT_PATH)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
def test_method_put(self):
response = self.client.put(self._REST_ENDPOINT_PATH)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
def test_method_patch(self):
response = self.client.patch(self._REST_ENDPOINT_PATH)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
def test_method_delete(self):
response = self.client.delete(self._REST_ENDPOINT_PATH)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
def test_method_options(self):
response = self.client.options(self._REST_ENDPOINT_PATH)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
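The six near-identical methods above could also be expressed as one sub-tested loop. A self-contained sketch against a stub client; the stub and its bare `403` return value are illustrative, not the real DRF test client:

```python
import unittest

class StubClient:
    # One handler shared by all HTTP-method attributes.
    def _forbidden(self, path):
        return 403
    get = post = put = patch = delete = options = _forbidden

class TestAllMethodsForbidden(unittest.TestCase):
    PATH = '/api/accounts/users/user/'

    def test_forbidden(self):
        client = StubClient()
        for method in ('get', 'post', 'put', 'patch', 'delete', 'options'):
            with self.subTest(method=method):
                self.assertEqual(getattr(client, method)(self.PATH), 403)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAllMethodsForbidden)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```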
class TestFlexUserRetrieveUpdateAuthenticated(APITestCase):
"""
This class is designed to test django_flex_user.views.FlexUser
"""
_REST_ENDPOINT_PATH = '/api/accounts/users/user/'
class _ContentType:
class ApplicationJSON:
username_values = [{},
{'username': None},
{'username': ''},
{'username': 'validUsername'},
{'username': 'invalidUsername+'}]
email_values = [{},
{'email': None},
{'email': ''},
{'email': 'validEmail@example.com'},
{'email': 'invalidEmail'}]
phone_values = [{},
{'phone': None},
{'phone': ''},
{'phone': '+12025551234'},
{'phone': 'invalidPhoneNumber'}]
password_values = [{},
{'password': None},
{'password': ''},
{'password': 'validPassword'},
{'password': 'invalid'}]
class MultipartFormData:
username_values = [{},
{'username': ''},
{'username': 'validUsername'},
{'username': 'invalidUsername+'}]
email_values = [{},
{'email': ''},
{'email': 'validEmail@example.com'},
{'email': 'invalidEmail'}]
phone_values = [{},
{'phone': ''},
{'phone': '+12025551234'},
{'phone': 'invalidPhoneNumber'}]
password_values = [{},
{'password': ''},
{'password': 'validPassword'},
{'password': 'invalid'}]
def setUp(self):
from django_flex_user.models.user import FlexUser
self.user = FlexUser.objects.create_user(username='validUsername', password='validPassword')
def test_method_get(self):
is_authenticated = self.client.login(username='validUsername', password='validPassword')
self.assertIs(is_authenticated, True)
response = self.client.get(self._REST_ENDPOINT_PATH)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(
response.data,
{
'username': 'validUsername',
'email': None,
'email_verified': None,
'phone': None,
'phone_verified': None
}
)
self.client.logout()
def test_method_post(self):
is_authenticated = self.client.login(username='validUsername', password='validPassword')
self.assertIs(is_authenticated, True)
response = self.client.post(self._REST_ENDPOINT_PATH)
self.assertEqual(response.status_code, status.HTTP_405_METHOD_NOT_ALLOWED)
self.client.logout()
    def test_method_patch_format_application_json(self):
        from django.db import transaction

        for i in self._ContentType.ApplicationJSON.username_values:
            for j in self._ContentType.ApplicationJSON.email_values:
                for k in self._ContentType.ApplicationJSON.phone_values:
                    for l in self._ContentType.ApplicationJSON.password_values:
                        data = {}
                        data.update(i)
                        data.update(j)
                        data.update(k)
                        data.update(l)

                        with self.subTest(**data), transaction.atomic():
                            """
                            Special considerations for password changes:

                            By default, updating a user's password invalidates all sessions for the user. To make it so
                            that the user is *not* signed out by a password change, in
                            django_flex_user.serializers.FlexUserSerializer.update we call
                            django.contrib.auth.update_session_auth_hash which (1) generates a new session key for the
                            user's current session (2) updates the current session's _auth_user_hash with a value based
                            on the user's new password (because the value of _auth_user_hash for all other sessions are
                            not based on the user's latest password, those sessions are implicitly invalidated). The
                            session key for the newly created session is returned to the client in a 'set-cookie'
                            response header.

                            Because this call is wrapped in a transaction that rolls back all database changes at the
                            end of each iteration, the changes to the user's session will not be persisted to the
                            database. This means that on iterations following a password change, the session key
                            that was returned to the client will not match any session in the django_session table.
                            Therefore it is insufficient to log in the test client once before the execution of this
                            loop. Instead we have to call django.test.client.Client.force_login on each iteration to
                            ensure the client always has a valid session. We could instead call
                            django.test.client.Client.login, but it significantly impacts execution time.

                            For good measure/symmetry we also call django.test.client.Client.logout at the end of each
                            iteration.
                            """
                            self.client.force_login(self.user, 'django_flex_user.backends.FlexUserModelBackend')
                            response = self.client.patch(self._REST_ENDPOINT_PATH, data=data, format='json')

                            if 'password' in data and not data['password']:
                                """
                                If the supplied password is defined and either None or the empty string,
                                django_flex_user.views.FlexUser.put should return HTTP status code HTTP_400_BAD_REQUEST.
                                """
                                self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
                                self.client.logout()
                            elif ('username' in data and data['username'] is None) and \
                                    ('email' not in data or data['email'] is None) and \
                                    ('phone' not in data or data['phone'] is None):
                                """
                                If the supplied username is None, and the supplied email and phone are
                                simultaneously undefined or None, django_flex_user.views.FlexUser.put should return HTTP status
                                code HTTP_400_BAD_REQUEST.
                                """
                                self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
                                self.client.logout()
                            elif data.get('username') == '' or \
                                    data.get('email') == '' or \
                                    data.get('phone') == '':
                                """
                                If any of the supplied username, email or phone are the empty string
                                django_flex_user.views.FlexUser.put should return HTTP status code HTTP_400_BAD_REQUEST.
                                """
                                self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
                                self.client.logout()
                            elif (data.get('username') and 'invalid' in data['username']) or \
                                    (data.get('email') and 'invalid' in data['email']) or \
                                    (data.get('phone') and 'invalid' in data['phone']) or \
                                    (data.get('password') and 'invalid' in data['password']):
                                """
                                If any of the supplied username, email, phone or password are defined and
                                invalid, django_flex_user.views.FlexUser.put should return HTTP status code
                                HTTP_400_BAD_REQUEST.
                                """
                                self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
                                self.client.logout()
                            else:
                                """
                                This case encompasses all possible permutations of supplied username, email,
                                phone and password for which django_flex_user.views.FlexUser.put should return HTTP status
                                code HTTP_200_OK.
                                """
                                self.assertEqual(response.status_code, status.HTTP_200_OK)
                                self.assertEqual(
                                    response.data,
                                    {
                                        'username': data.get('username', 'validUsername'),
                                        'email': data.get('email'),
                                        'email_verified': False if data.get('email') else None,
                                        'phone': data.get('phone'),
                                        'phone_verified': False if data.get('phone') else None
                                    }
                                )
                                self.client.logout()

                            transaction.set_rollback(True)
    def test_method_patch_format_multipart_form_data(self):
        from django.db import transaction

        for i in self._ContentType.MultipartFormData.username_values:
            for j in self._ContentType.MultipartFormData.email_values:
                for k in self._ContentType.MultipartFormData.phone_values:
                    for l in self._ContentType.MultipartFormData.password_values:
                        data = {}
                        data.update(i)
                        data.update(j)
                        data.update(k)
                        data.update(l)

                        with self.subTest(**data), transaction.atomic():
                            """
                            Special considerations for password changes:

                            By default, updating a user's password invalidates all sessions for the user. To make it so
                            that the user is *not* signed out by a password change, in
                            django_flex_user.serializers.FlexUserSerializer.update we call
                            django.contrib.auth.update_session_auth_hash which (1) generates a new session key for the
                            user's current session (2) updates the current session's _auth_user_hash with a value based
                            on the user's new password (because the value of _auth_user_hash for all other sessions are
                            not based on the user's latest password, those sessions are implicitly invalidated). The
                            session key for the newly created session is returned to the client in a 'set-cookie'
                            response header.

                            Because this call is wrapped in a transaction that rolls back all database changes at the
                            end of each iteration, the changes to the user's session will not be persisted to the
                            database. This means that on iterations following a password change, the session key
                            that was returned to the client will not match any session in the django_session table.
                            Therefore it is insufficient to log in the test client once before the execution of this
                            loop. Instead we have to call django.test.client.Client.force_login on each iteration to
                            ensure the client always has a valid session. We could instead call
                            django.test.client.Client.login, but it significantly impacts execution time.

                            For good measure/symmetry we also call django.test.client.Client.logout at the end of each
                            iteration.
                            """
                            self.client.force_login(self.user, 'django_flex_user.backends.FlexUserModelBackend')
                            response = self.client.patch(self._REST_ENDPOINT_PATH, data=data, format='multipart')

                            if 'password' in data and data['password'] == '':
                                """
                                If the supplied password is defined and blank, django_flex_user.views.FlexUser.put should return
                                HTTP status code HTTP_400_BAD_REQUEST.
                                """
                                self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
                                self.client.logout()
                            elif ('username' in data and data['username'] == '') and \
                                    ('email' not in data or data['email'] == '') and \
                                    ('phone' not in data or data['phone'] == ''):
                                """
                                If the supplied username is blank, and the supplied email and phone are
                                simultaneously undefined or blank, django_flex_user.views.FlexUser.put should return HTTP status
                                code HTTP_400_BAD_REQUEST.
                                """
                                self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
                                self.client.logout()
                            elif (data.get('username') and 'invalid' in data['username']) or \
                                    (data.get('email') and 'invalid' in data['email']) or \
                                    (data.get('phone') and 'invalid' in data['phone']) or \
                                    (data.get('password') and 'invalid' in data['password']):
                                """
                                If any of the supplied username, email, phone or password are defined and
                                invalid, django_flex_user.views.FlexUser.put should return HTTP status code HTTP_400_BAD_REQUEST.
                                """
                                self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
                                self.client.logout()
                            else:
                                """
                                This case encompasses all possible permutations of supplied username, email,
                                phone and password for which django_flex_user.views.FlexUser.put should return HTTP status
                                code HTTP_200_OK.
                                """
                                self.assertEqual(response.status_code, status.HTTP_200_OK)
                                self.assertEqual(
                                    response.data,
                                    {
                                        'username': data.get('username', 'validUsername') or None,
                                        'email': data.get('email') or None,
                                        'email_verified': False if data.get('email') else None,
                                        'phone': data.get('phone') or None,
                                        'phone_verified': False if data.get('phone') else None
                                    }
                                )
                                self.client.logout()

                            transaction.set_rollback(True)
    def test_method_patch_username_case_insensitivity(self):
        from django_flex_user.models.user import FlexUser

        FlexUser.objects.create_user(username='validUsername2', password='validPassword')

        self.client.force_login(self.user, 'django_flex_user.backends.FlexUserModelBackend')
        data = {'username': 'VALIDUSERNAME2'}
        response = self.client.patch(self._REST_ENDPOINT_PATH, data=data)
        self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
        self.client.logout()

    def test_method_patch_duplicate_username(self):
        from django_flex_user.models.user import FlexUser

        FlexUser.objects.create_user(username='validUsername2', password='validPassword')

        self.client.force_login(self.user, 'django_flex_user.backends.FlexUserModelBackend')
        data = {'username': 'validUsername2'}
        response = self.client.patch(self._REST_ENDPOINT_PATH, data=data)
        self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
        self.client.logout()

    def test_method_patch_duplicate_email(self):
        from django_flex_user.models.user import FlexUser

        FlexUser.objects.create_user(email='validEmail@example.com', password='validPassword')

        self.client.force_login(self.user, 'django_flex_user.backends.FlexUserModelBackend')
        data = {'email': 'validEmail@example.com'}
        response = self.client.patch(self._REST_ENDPOINT_PATH, data=data)
        self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
        self.client.logout()

    def test_method_patch_duplicate_phone(self):
        from django_flex_user.models.user import FlexUser

        FlexUser.objects.create_user(phone='+12025551234', password='validPassword')

        self.client.force_login(self.user, 'django_flex_user.backends.FlexUserModelBackend')
        data = {'phone': '+12025551234'}
        response = self.client.patch(self._REST_ENDPOINT_PATH, data=data)
        self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
        self.client.logout()

    def test_method_patch_ambiguous_username(self):
        """
        Verify that an email address or phone number cannot form a valid username.

        :return:
        """
        self.client.force_login(self.user, 'django_flex_user.backends.FlexUserModelBackend')

        data = {'username': 'validEmail@example.com'}
        response = self.client.patch(self._REST_ENDPOINT_PATH, data=data)
        self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)

        data = {'username': '+12025551234'}
        response = self.client.patch(self._REST_ENDPOINT_PATH, data=data)
        self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)

        self.client.logout()

    def test_method_patch_ambiguous_email(self):
        """
        Verify that a username or phone number cannot form a valid email.

        :return:
        """
        self.client.force_login(self.user, 'django_flex_user.backends.FlexUserModelBackend')

        data = {'email': 'validUsername'}
        response = self.client.patch(self._REST_ENDPOINT_PATH, data=data)
        self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)

        data = {'email': '+12025551234'}
        response = self.client.patch(self._REST_ENDPOINT_PATH, data=data)
        self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)

        self.client.logout()

    def test_method_patch_ambiguous_phone(self):
        """
        Verify that a username or email address cannot form a valid phone.

        :return:
        """
        self.client.force_login(self.user, 'django_flex_user.backends.FlexUserModelBackend')

        data = {'phone': 'validUsername'}
        response = self.client.patch(self._REST_ENDPOINT_PATH, data=data)
        self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)

        data = {'phone': 'validEmail@example.com'}
        response = self.client.patch(self._REST_ENDPOINT_PATH, data=data)
        self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)

        self.client.logout()

    def test_method_patch_normalize_username(self):
        self.client.force_login(self.user, 'django_flex_user.backends.FlexUserModelBackend')

        nfd = 'validUsérname'  # é = U+0065 U+0301
        nfkc = 'validUsérname'  # é = U+00e9

        data = {'username': nfd}
        response = self.client.patch(self._REST_ENDPOINT_PATH, data=data)
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(response.data['username'], nfkc)
        self.client.logout()
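# The NFD-to-NFKC folding that the test above relies on can be reproduced with
# the standard library alone. A minimal sketch (independent of the test suite):

```python
import unicodedata

# NFD spells the accented character as 'e' + U+0301 (combining acute accent);
# NFKC composes the pair into the single code point U+00E9.
nfd = 'validUse\u0301rname'
nfkc = unicodedata.normalize('NFKC', nfd)

assert nfkc == 'validUs\u00e9rname'
# The combining mark is folded away, so the NFKC form is one code point shorter.
assert len(nfd) == len(nfkc) + 1
```
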
    def test_method_patch_normalize_email(self):
        self.client.force_login(self.user, 'django_flex_user.backends.FlexUserModelBackend')

        data = {'email': 'validEmail@bücher.example'}
        response = self.client.patch(self._REST_ENDPOINT_PATH, data=data)
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(response.data['email'], 'validEmail@xn--bcher-kva.example')
        self.client.logout()

    def test_method_put(self):
        is_authenticated = self.client.login(username='validUsername', password='validPassword')
        self.assertIs(is_authenticated, True)

        response = self.client.put(self._REST_ENDPOINT_PATH)
        self.assertEqual(response.status_code, status.HTTP_405_METHOD_NOT_ALLOWED)
        self.client.logout()

    def test_method_delete(self):
        is_authenticated = self.client.login(username='validUsername', password='validPassword')
        self.assertIs(is_authenticated, True)

        response = self.client.delete(self._REST_ENDPOINT_PATH)
        self.assertEqual(response.status_code, status.HTTP_405_METHOD_NOT_ALLOWED)
        self.client.logout()

    def test_method_options(self):
        is_authenticated = self.client.login(username='validUsername', password='validPassword')
        self.assertIs(is_authenticated, True)

        response = self.client.options(self._REST_ENDPOINT_PATH)
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.client.logout()
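# The session-invalidation behaviour described in the docstrings above (only the
# session whose _auth_user_hash is refreshed survives a password change) can be
# modelled outside Django. This is a toy sketch; the Session class and auth_hash
# helper are illustrative stand-ins, not Django's actual implementation:

```python
import hashlib


def auth_hash(password: str) -> str:
    # stand-in for Django's _auth_user_hash, which is derived from the password
    return hashlib.sha256(password.encode("utf-8")).hexdigest()


class Session:
    def __init__(self, password: str):
        self.auth_user_hash = auth_hash(password)

    def is_valid(self, current_password: str) -> bool:
        # a session is valid only if its stored hash matches the current password
        return self.auth_user_hash == auth_hash(current_password)


def update_session_auth_hash(session: Session, new_password: str) -> None:
    # mirrors django.contrib.auth.update_session_auth_hash: refresh only the
    # current session so it survives the password change
    session.auth_user_hash = auth_hash(new_password)


current = Session('old-pw')
other = Session('old-pw')
update_session_auth_hash(current, 'new-pw')

assert current.is_valid('new-pw')      # the refreshed session survives
assert not other.is_valid('new-pw')    # all other sessions are implicitly invalidated
```
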
# ---- c.py (usha324/python, license: bzip2-1.0.6) ----
print (7 > 10)
print (4 < 16)
print (4 == 4)
print (4 <= 4)
print (4 != 4)
# ---- eugene/tests/test_initial_conditions.py (jantzen/eugene, license: MIT) ----
import eugene as eu
import numpy as np
import warnings
import pdb


def test_choose_untrans_trans():
    # test 1-D
    t = np.linspace(0., 10., 100)
    x = []
    y = []
    z = []
    ics = np.random.normal(size=10000)
    for ic in ics:
        x.append(ic + np.exp(0.2 * t) - 1.)
    ics = np.random.normal(size=10000)
    for ic in ics:
        y.append(ic + np.exp(0.3 * t) - 1.)
    ics = np.random.normal(size=10000)
    for ic in ics:
        z.append(ic + np.exp(0.2 * t) + 0.2 * t - 1.)
    data = [x, y, z]

    untrans, trans = eu.initial_conditions.choose_untrans_trans(data, 100)

    assert len(untrans[0]) == 100

    ## verify that warnings are properly triggered and reported
    with warnings.catch_warnings(record=True) as w:
        x = []
        y = []
        ics = np.random.normal(size=10000)
        for ic in ics:
            x.append(ic + np.exp(0.2 * t) - 1.)
        ics = np.random.normal(size=10000) + 20.
        for ic in ics:
            y.append(ic + np.exp(0.3 * t) - 1.)
        data = [x, y]

        untrans, transm, error_flag = eu.initial_conditions.choose_untrans_trans(data, 100,
                report=True)

        print("Number of warnings captured = " + str(len(w)))
        for warn in w:
            print(warn.message)
        assert len(w) == 3
        assert error_flag[0,1] == 3

    # test 3-D
    t = np.concatenate([np.linspace(0., 10., 100).reshape(1,-1), np.linspace(0.,
            10., 100).reshape(1,-1), np.linspace(0., 10., 100).reshape(1,-1)],
            axis=0)
    x = []
    y = []
    z = []
    ics = np.random.normal(size=(3,10000))
    for ic in ics.T:
        ic = ic.reshape(-1,1)
        x.append(ic + np.exp(0.2 * t) - 1.)
    ics = np.random.normal(size=(3,10000))
    for ic in ics.T:
        ic = ic.reshape(-1,1)
        y.append(ic + np.exp(0.3 * t) - 1.)
    ics = np.random.normal(size=(3,10000))
    for ic in ics.T:
        ic = ic.reshape(-1,1)
        z.append(ic + np.exp(0.2 * t) + 0.2 * t - 1.)
    data = [x, y, z]

    untrans, trans = eu.initial_conditions.choose_untrans_trans(data, 100)

    assert len(untrans[0]) == 100

    ## verify that warnings are properly triggered and reported
    with warnings.catch_warnings(record=True) as w:
        x = []
        y = []
        ics = np.random.normal(size=(3,10000))
        for ic in ics.T:
            ic = ic.reshape(-1,1)
            x.append(ic + np.exp(0.2 * t) - 1.)
        ics = np.random.normal(size=(3,10000)) + 20.
        for ic in ics.T:
            ic = ic.reshape(-1,1)
            y.append(ic + np.exp(0.3 * t) - 1.)
        data = [x, y]

        untrans, trans, error_flag = eu.initial_conditions.choose_untrans_trans(data, 100,
                report=True)

        print("Number of warnings captured = " + str(len(w)))
        for warn in w:
            print(warn.message)
        assert len(w) == 3
        assert error_flag[0,1] == 3
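# The warning-capture pattern used above can be exercised on its own. A minimal
# sketch, independent of eugene (note that simplefilter("always") is needed if
# you want repeated warnings from the same call site to all be recorded):

```python
import warnings


def noisy():
    warnings.warn("initial condition out of range", UserWarning)


with warnings.catch_warnings(record=True) as w:
    warnings.simplefilter("always")  # do not deduplicate repeat warnings
    noisy()
    noisy()

assert len(w) == 2
assert issubclass(w[0].category, UserWarning)
assert "out of range" in str(w[0].message)
```
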
# ---- test_password_security.py (Schots/password_security, license: MIT) ----
import hashlib
from password_security import password_checker, get_pwnd_count


def test_password_checker():
    assert password_checker("123") == 1078184
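# `password_checker` presumably implements the Have I Been Pwned k-anonymity
# scheme: only the first five hex characters of the password's SHA-1 digest are
# sent to the range API, and the breach count is looked up locally among the
# returned suffixes. A self-contained sketch of that flow with the network call
# stubbed out (`fake_range_response` and `count_in_range` are invented for
# illustration, not the library's API):

```python
import hashlib


def sha1_upper(password: str) -> str:
    return hashlib.sha1(password.encode("utf-8")).hexdigest().upper()


def count_in_range(suffix: str, range_body: str) -> int:
    # a HIBP-style range response has lines of the form "<35-char suffix>:<count>"
    for line in range_body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0


digest = sha1_upper("123")
prefix, suffix = digest[:5], digest[5:]
assert prefix == "40BD0"  # SHA-1("123") = 40BD001563085FC35165329EA1FF5C5ECBDBBEEF

# stand-in for the body returned by GET /range/<prefix>
fake_range_response = "\n".join([
    "0018A45C4D1DEF81644B54AB7F969B88D65:10",
    suffix + ":1078184",
])
assert count_in_range(suffix, fake_range_response) == 1078184
```

Only the five-character prefix ever leaves the machine; the full digest is compared locally, which is the point of the k-anonymity design.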
# ---- tf_api/PtGrey.py (abhineet123/animal_detection_, license: CC-BY-4.0) ----
def runPySpinCam(cam_id, _mode=0):
    global height, width, image_converted, cap_fps

    system = PtGrey.System.GetInstance()
    cam_list = system.GetCameras()
    num_cameras = cam_list.GetSize()
    print("Number of cameras detected: {:d}".format(num_cameras))
    if num_cameras == 0:
        cam_list.Clear()
        system.ReleaseInstance()
        raise IOError("Not enough cameras!")
    cam = cam_list.GetByIndex(cam_id)
    try:
        nodemap_tldevice = cam.GetTLDeviceNodeMap()
        try:
            node_device_information = PtGrey.CCategoryPtr(nodemap_tldevice.GetNode("DeviceInformation"))
            if PtGrey.IsAvailable(node_device_information) and PtGrey.IsReadable(node_device_information):
                features = node_device_information.GetFeatures()
                for feature in features:
                    node_feature = PtGrey.CValuePtr(feature)
                    print("%s: %s" % (node_feature.GetName(),
                                      node_feature.ToString() if PtGrey.IsReadable(node_feature) else
                                      "Node not readable"))
            else:
                print("Device control information not available.")
        except PtGrey.SpinnakerException as ex:
            raise IOError("Error in getting device info: %s" % ex)

        cam.Init()
        nodemap = cam.GetNodeMap()

        if rgb_mode == 1:
            pix_format_txt = "RGB8Packed"
        elif rgb_mode == 2:
            pix_format_txt = "BayerRG8"
        else:
            pix_format_txt = "Mono8"
        pixel_format_mode = PtGrey.CEnumerationPtr(nodemap.GetNode("PixelFormat"))
        if not PtGrey.IsAvailable(pixel_format_mode) or not PtGrey.IsWritable(pixel_format_mode):
            raise IOError("Unable to set pixel format mode to RGB (enum retrieval). Aborting...")
        node_pixel_format_mode_rgb8 = pixel_format_mode.GetEntryByName(pix_format_txt)
        if not PtGrey.IsAvailable(node_pixel_format_mode_rgb8) or not PtGrey.IsReadable(
                node_pixel_format_mode_rgb8):
            raise IOError("Unable to set pixel format mode to RGB (entry retrieval). Aborting...")
        pixel_format_mode.SetIntValue(node_pixel_format_mode_rgb8.GetValue())
        print("pixel format mode set to {:s}...".format(pix_format_txt))

        video_mode_txt = 'Mode{:d}'.format(video_mode)
        video_mode_node = PtGrey.CEnumerationPtr(nodemap.GetNode("VideoMode"))
        if not PtGrey.IsAvailable(video_mode_node) or not PtGrey.IsWritable(video_mode_node):
            raise IOError("Unable to set video mode to {} (enum retrieval). Aborting...".format(video_mode_txt))
        node_video_mode_node = video_mode_node.GetEntryByName(video_mode_txt)
        if not PtGrey.IsAvailable(node_video_mode_node) or not PtGrey.IsReadable(node_video_mode_node):
            raise IOError("Unable to set video mode to {} (entry retrieval). Aborting...".format(video_mode_txt))
        video_mode_node.SetIntValue(node_video_mode_node.GetValue())
        print("video mode set to {:s}...".format(video_mode_txt))

        node_acquisition_mode = PtGrey.CEnumerationPtr(nodemap.GetNode("AcquisitionMode"))
        if not PtGrey.IsAvailable(node_acquisition_mode) or not PtGrey.IsWritable(node_acquisition_mode):
            raise IOError(
                "Unable to set acquisition mode to continuous (enum retrieval). Aborting...")
        node_acquisition_mode_continuous = node_acquisition_mode.GetEntryByName("Continuous")
        if not PtGrey.IsAvailable(node_acquisition_mode_continuous) or not PtGrey.IsReadable(
                node_acquisition_mode_continuous):
            raise IOError("Unable to set acquisition mode to continuous (entry retrieval). Aborting...")
        acquisition_mode_continuous = node_acquisition_mode_continuous.GetValue()
        node_acquisition_mode.SetIntValue(acquisition_mode_continuous)
        print("acquisition mode set to continuous...")

        cam.BeginAcquisition()

        # get first image
        while True:
            try:
                # print('Getting the first image')
                image_result = cam.GetNextImage()
                if image_result.IsIncomplete():
                    print("Image incomplete with image status %d ..." % image_result.GetImageStatus())
                    continue
                width = image_result.GetWidth()
                height = image_result.GetHeight()
                image_converted = image_result
                # if rgb_mode == 2:
                #     image_converted = image_result.Convert(PtGrey.PixelFormat_RGB8Packed, PtGrey.HQ_LINEAR)
                # else:
                #     image_converted = image_result
                # image_result.Release()
                break
            except PtGrey.SpinnakerException as ex:
                raise IOError("Error in acquiring image: %s" % ex)

        while True:
            if stop_pt_grey_cam:
                break
            try:
                cap_start_t = time.time()
                image_result = cam.GetNextImage()
                if image_result.IsIncomplete():
                    print("Image incomplete with image status %d ..." % image_result.GetImageStatus())
                    continue
                width = image_result.GetWidth()
                height = image_result.GetHeight()
                cap_end_t = time.time()
                cap_fps = 1.0 / float(cap_end_t - cap_start_t)
                with ptgrey_mutex:
                    # if rgb_mode == 2:
                    #     image_converted = image_result.Convert(PtGrey.PixelFormat_RGB8Packed, PtGrey.HQ_LINEAR)
                    # else:
                    #     image_converted = image_result
                    image_converted = image_result
                # cap_end_t2 = time.time()
                # cap_fps2 = 1.0 / float(cap_end_t2 - cap_start_t)
                if _mode == 1:
                    image_np_gray = np.array(image_converted.GetData(), dtype=np.uint8).reshape(
                        (height, width)).copy()
                    image_np = cv2.cvtColor(image_np_gray, cv2.COLOR_GRAY2RGB)
                    cv2.imshow(win_title, image_np)
                    k = cv2.waitKey(1)
                    if k == ord('q') or k == 27:
                        break
                # image_result.Release()
            except PtGrey.SpinnakerException as ex:
                raise IOError("Error in acquiring image: %s" % ex)
    except PtGrey.SpinnakerException as ex:
        raise IOError("Error: %s" % ex)

    cam.EndAcquisition()
    cam.DeInit()
    del cam
    cam_list.Clear()
    system.ReleaseInstance()
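# The `with ptgrey_mutex:` block above is a latest-frame handoff: the capture
# loop swaps in the newest frame under a lock, and any consumer takes a
# consistent snapshot under the same lock. The same pattern in a minimal,
# camera-free sketch (all names here are illustrative):

```python
import threading

frame_mutex = threading.Lock()
latest_frame = None


def producer(frames):
    global latest_frame
    for f in frames:
        with frame_mutex:  # writer holds the lock only for the swap
            latest_frame = f


def snapshot():
    with frame_mutex:  # readers see either the old frame or the new one, never a torn state
        return latest_frame


t = threading.Thread(target=producer, args=([1, 2, 3],))
t.start()
t.join()
assert snapshot() == 3
```

Keeping the critical section down to a single reference swap is what lets the capture loop run at full rate even when a consumer polls frequently.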
# ---- src/radical/ensemblemd/tests/kernels/simple_tests.py (chemlove/radical.ensemblemd, license: MIT) ----
""" Tests cases
"""
import os
import sys
import glob
import unittest
import radical.ensemblemd
#-----------------------------------------------------------------------------
#
class SimpleKernelTests(unittest.TestCase):
def setUp(self):
# clean up fragments from previous tests
pass
def tearDown(self):
# clean up after ourselves
pass
#-------------------------------------------------------------------------
#
def test__amber_kernel(self):
"""Basic test of the AMBER kernel.
"""
k = radical.ensemblemd.Kernel(name="md.amber")
k.arguments = ["--mininfile=abc", "--mdinfile=def", "--topfile=ghi","--cycle=1"]
_kernel = k._bind_to_resource("*")
assert type(_kernel) == radical.ensemblemd.kernel_plugins.md.amber.Kernel, _kernel
# Test kernel specifics here:
k = radical.ensemblemd.Kernel(name="md.amber")
k.arguments = ["--mininfile=abc", "--mdinfile=def", "--topfile=ghi","--cycle=1"]
k._bind_to_resource("*")
assert k._cu_def_executable == "/bin/bash", k._cu_def_executable
assert k.arguments == ['-l','-c','pmemd -O -i abc -o min1.out -inf min1.inf -r md1.crd -p ghi -c min1.crd -ref min1.crd && pmemd -O -i def -o md1.out -inf md1.inf -x md1.ncdf -r md1.rst -p ghi -c md1.crd'], k.arguments
assert k._cu_def_pre_exec == [], k._cu_def_pre_exec
assert k._cu_def_post_exec == None, k._cu_def_post_exec
k._bind_to_resource("stampede.tacc.utexas.edu")
assert k._cu_def_executable == "/bin/bash", k._cu_def_executable
assert k.arguments == ['-l','-c','pmemd -O -i abc -o min1.out -inf min1.inf -r md1.crd -p ghi -c min1.crd -ref min1.crd && pmemd -O -i def -o md1.out -inf md1.inf -x md1.ncdf -r md1.rst -p ghi -c md1.crd'], k.arguments
assert k._cu_def_pre_exec == ["module load TACC", "module load amber"], k._cu_def_pre_exec
assert k._cu_def_post_exec == None, k._cu_def_post_exec
#-------------------------------------------------------------------------
#
def test__coco_kernel(self):
"""Basic test of the CoCo kernel.
"""
        k = radical.ensemblemd.Kernel(name="md.coco")
        k.arguments = ["--grid=3", "--dims=3", "--frontpoints=8", "--topfile=abc",
                       "--mdfile=def", "--output=xyz", "--cycle=1"]
        _kernel = k._bind_to_resource("*")
        assert type(_kernel) is radical.ensemblemd.kernel_plugins.md.coco.Kernel, _kernel

        # Test kernel specifics here:
        k = radical.ensemblemd.Kernel(name="md.coco")
        k.arguments = ["--grid=3", "--dims=3", "--frontpoints=8", "--topfile=abc",
                       "--mdfile=def", "--output=xyz", "--cycle=1"]
        k._bind_to_resource("*")
        assert k._cu_def_executable == "/bin/bash", k._cu_def_executable
        assert k.arguments == ['-l', '-c', 'pyCoCo --grid 3 --dims 3 --frontpoints 8 --topfile abc --mdfile def --output xyz && python postexec.py 8 1'], k.arguments
        assert k._cu_def_pre_exec == [], k._cu_def_pre_exec
        assert k._cu_def_post_exec is None, k._cu_def_post_exec

        k._bind_to_resource("stampede.tacc.utexas.edu")
        assert k._cu_def_executable == "/bin/bash", k._cu_def_executable
        assert k.arguments == ['-l', '-c', 'pyCoCo --grid 3 --dims 3 --frontpoints 8 --topfile abc --mdfile def --output xyz && python postexec.py 8 1'], k.arguments
        assert k._cu_def_pre_exec == ["module load intel/13.0.2.146", "module load python",
                                      "module load netcdf/4.3.2", "module load hdf5/1.8.13",
                                      "module load amber",
                                      "export PYTHONPATH=/work/02998/ardi/coco_installation/lib/python2.7/site-packages:$PYTHONPATH",
                                      "export PATH=/work/02998/ardi/coco_installation/bin:$PATH"], k._cu_def_pre_exec
        assert k._cu_def_post_exec is None, k._cu_def_post_exec
#-------------------------------------------------------------------------
#
def test__gromacs_kernel(self):
"""Basic test of the GROMACS kernel.
"""
        k = radical.ensemblemd.Kernel(name="md.gromacs")
        k.arguments = ["--grompp=grompp.mdp", "--topol=topol.top"]
        _kernel = k._bind_to_resource("*")
        assert type(_kernel) is radical.ensemblemd.kernel_plugins.md.gromacs.Kernel, _kernel

        # Test kernel specifics here:
        k = radical.ensemblemd.Kernel(name="md.gromacs")
        k.arguments = ["--grompp=grompp.mdp", "--topol=topol.top"]
        k._bind_to_resource("*")
        assert k._cu_def_executable == "python", k._cu_def_executable
        assert k.arguments == ['run.py', '--mdp', 'grompp.mdp', '--gro', 'start.gro', '--top', 'topol.top', '--out', 'out.gro'], k.arguments
        assert k._cu_def_pre_exec == [], k._cu_def_pre_exec
        assert k._cu_def_post_exec is None, k._cu_def_post_exec

        k._bind_to_resource("stampede.tacc.utexas.edu")
        assert k._cu_def_executable == ["python"], k._cu_def_executable
        assert k.arguments == ['run.py', '--mdp', 'grompp.mdp', '--gro', 'start.gro', '--top', 'topol.top', '--out', 'out.gro'], k.arguments
        assert k._cu_def_pre_exec == ["module load gromacs python mpi4py"], k._cu_def_pre_exec
        assert k._cu_def_post_exec is None, k._cu_def_post_exec
#-------------------------------------------------------------------------
#
def test__lsdmap_kernel(self):
"""Basic test of the LSDMAP kernel.
"""
        k = radical.ensemblemd.Kernel(name="md.lsdmap")
        k.arguments = ["--config=config.ini"]
        _kernel = k._bind_to_resource("*")
        assert type(_kernel) is radical.ensemblemd.kernel_plugins.md.lsdmap.Kernel, _kernel

        # Test kernel specifics here:
        k = radical.ensemblemd.Kernel(name="md.lsdmap")
        k.arguments = ["--config=config.ini"]
        k._bind_to_resource("*")
        assert k._cu_def_executable == "lsdmap", k._cu_def_executable
        assert k.arguments == ['lsdm.py', '-f', 'config.ini', '-c', 'tmpha.gro', '-n', 'out.nn', '-w', 'weight.w'], k.arguments
        assert k._cu_def_pre_exec == [], k._cu_def_pre_exec
        assert k._cu_def_post_exec is None, k._cu_def_post_exec

        k._bind_to_resource("stampede.tacc.utexas.edu")
        assert k.arguments == ['lsdm.py', '-f', 'config.ini', '-c', 'tmpha.gro', '-n', 'out.nn', '-w', 'weight.w'], k.arguments
        assert k._cu_def_post_exec is None, k._cu_def_post_exec
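# The four tests above all exercise the same bind-to-resource pattern: a kernel
# keeps one launch configuration per resource key, with "*" as the generic
# fallback. A rough, self-contained sketch of that pattern (the class and config
# names below are made up for illustration; this is not the radical.ensemblemd API):

```python
# Toy sketch of per-resource kernel binding with a "*" fallback entry.
class ToyKernel:
    cfgs = {
        "*": {"executable": "/bin/bash", "pre_exec": []},
        "stampede.tacc.utexas.edu": {"executable": "/bin/bash",
                                     "pre_exec": ["module load TACC", "module load amber"]},
    }

    def bind_to_resource(self, resource):
        # Fall back to the generic "*" config for unknown resources.
        cfg = self.cfgs.get(resource, self.cfgs["*"])
        self.executable = cfg["executable"]
        self.pre_exec = cfg["pre_exec"]
        return self

k = ToyKernel().bind_to_resource("stampede.tacc.utexas.edu")
```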
# -----------------------------------------------------------------------------
# File: tests/experiments/datasets.py
# Repo: laudv/veritas @ e201bac45ab5a564032360c104a84888e9619a26 (Apache-2.0)
# -----------------------------------------------------------------------------
import os
import json
import util
import pickle
import numpy as np
import pandas as pd
import sklearn.metrics as metrics
from sklearn import preprocessing
from veritas import addtree_from_xgb_model, addtrees_from_multiclass_xgb_model
class Dataset:
models_dir = "tests/experiments/models"
    def __init__(self, special_name_tag=""):
        # Tag appended to the model name to mark special parameter settings.
        self.special_tag = special_name_tag
        self.X = None
        self.y = None
    def load_dataset(self):
        """Populate self.X and self.y."""
        raise NotImplementedError()

    def load_model(self, num_trees, tree_depth):
        """Populate self.model, self.at, and self.feat2id."""
        raise NotImplementedError()
def get_model_name(self, num_trees, tree_depth):
return f"{type(self).__name__}{self.special_tag}-{num_trees}-{tree_depth}"
def minmax_normalize(self):
X = self.X.values
min_max_scaler = preprocessing.MinMaxScaler()
X_scaled = min_max_scaler.fit_transform(X)
df = pd.DataFrame(X_scaled, columns=self.X.columns)
self.X = df
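# minmax_normalize above delegates to sklearn's MinMaxScaler. The same per-column
# rescaling can be sketched in plain NumPy (the data below is made up for
# illustration):

```python
import numpy as np

def minmax_scale(X):
    # Rescale each column of X to [0, 1]; constant columns map to 0.
    lo = X.min(axis=0)
    span = X.max(axis=0) - lo
    span[span == 0] = 1.0  # avoid division by zero on constant columns
    return (X - lo) / span

X = np.array([[1.0, 10.0], [2.0, 30.0], [3.0, 20.0]])
X_scaled = minmax_scale(X)
```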
class Calhouse(Dataset):
def __init__(self):
super().__init__()
self.params = {
"objective": "reg:squarederror",
"tree_method": "hist",
"seed": 14,
"nthread": 1,
}
def load_dataset(self):
if self.X is None or self.y is None:
self.X, self.y = util.load_openml("calhouse", data_id=537)
self.y = np.log(self.y)
def load_model(self, num_trees, tree_depth):
model_name = self.get_model_name(num_trees, tree_depth)
if not os.path.isfile(os.path.join(self.models_dir, f"{model_name}.xgb")):
self.load_dataset()
print(f"training model depth={tree_depth}, num_trees={num_trees}")
def metric(y, raw_yhat): #maximized
return -metrics.mean_squared_error(y, raw_yhat)
self.params["max_depth"] = tree_depth
self.model, lr, metric_value = util.optimize_learning_rate(self.X,
self.y, self.params, num_trees, metric)
self.meta = {"lr": lr, "metric": metric_value, "columns": list(self.X.columns)}
with open(os.path.join(self.models_dir, f"{model_name}.xgb"), "wb") as f:
pickle.dump(self.model, f)
with open(os.path.join(self.models_dir, f"{model_name}.meta"), "w") as f:
json.dump(self.meta, f)
else:
print(f"loading model from file: {model_name}")
with open(os.path.join(self.models_dir, f"{model_name}.xgb"), "rb") as f:
self.model = pickle.load(f)
with open(os.path.join(self.models_dir, f"{model_name}.meta"), "r") as f:
self.meta = json.load(f)
feat2id_dict = {v: i for i, v in enumerate(self.meta["columns"])}
self.feat2id = lambda x: feat2id_dict[x]
self.at = addtree_from_xgb_model(self.model, feat2id_map=self.feat2id)
self.at.base_score = 0
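# The load_model methods in this file all repeat the same train-or-load caching
# scheme: pickle the trained booster, dump its metadata as JSON, keyed by the
# model name. A minimal, self-contained sketch of that pattern (the function
# name and the dict stand-in for the XGBoost model are hypothetical):

```python
import json, os, pickle, tempfile

def load_or_train(models_dir, name, train_fn):
    """Train-or-load cache: pickle the model, JSON the metadata."""
    model_path = os.path.join(models_dir, f"{name}.xgb")
    meta_path = os.path.join(models_dir, f"{name}.meta")
    if not os.path.isfile(model_path):
        model, meta = train_fn()
        with open(model_path, "wb") as f:
            pickle.dump(model, f)
        with open(meta_path, "w") as f:
            json.dump(meta, f)
    else:
        with open(model_path, "rb") as f:
            model = pickle.load(f)
        with open(meta_path) as f:
            meta = json.load(f)
    return model, meta

with tempfile.TemporaryDirectory() as d:
    train = lambda: ({"weights": [1, 2]}, {"lr": 0.1})
    m1, _ = load_or_train(d, "demo-10-3", train)   # trains and caches
    m2, _ = load_or_train(d, "demo-10-3", train)   # loads from the cache
```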
class Allstate(Dataset):
def __init__(self):
super().__init__()
self.params = {
"objective": "reg:squarederror",
"tree_method": "hist",
"seed": 14,
"nthread": 1,
}
def load_dataset(self):
if self.X is None or self.y is None:
allstate_data_path = os.path.join(os.environ["VERITAS_DATA_DIR"], "allstate.h5")
data = pd.read_hdf(allstate_data_path)
self.X = data.drop(columns=["loss"])
self.y = data.loss
def load_model(self, num_trees, tree_depth):
model_name = self.get_model_name(num_trees, tree_depth)
if not os.path.isfile(os.path.join(self.models_dir, f"{model_name}.xgb")):
self.load_dataset()
print(f"training model depth={tree_depth}, num_trees={num_trees}")
def metric(y, raw_yhat): #maximized
return -metrics.mean_squared_error(y, raw_yhat)
self.params["max_depth"] = tree_depth
self.model, lr, metric_value = util.optimize_learning_rate(self.X,
self.y, self.params, num_trees, metric)
self.meta = {"lr": lr, "metric": metric_value, "columns": list(self.X.columns)}
with open(os.path.join(self.models_dir, f"{model_name}.xgb"), "wb") as f:
pickle.dump(self.model, f)
with open(os.path.join(self.models_dir, f"{model_name}.meta"), "w") as f:
json.dump(self.meta, f)
else:
print(f"loading model from file: {model_name}")
with open(os.path.join(self.models_dir, f"{model_name}.xgb"), "rb") as f:
self.model = pickle.load(f)
with open(os.path.join(self.models_dir, f"{model_name}.meta"), "r") as f:
self.meta = json.load(f)
feat2id_dict = {v: i for i, v in enumerate(self.meta["columns"])}
self.feat2id = lambda x: feat2id_dict[x]
self.at = addtree_from_xgb_model(self.model, feat2id_map=self.feat2id)
self.at.base_score = 0
class Covtype(Dataset):
def __init__(self):
super().__init__()
self.params = {
"objective": "binary:logistic",
"eval_metric": "error",
"tree_method": "hist",
"seed": 235,
"nthread": 1,
}
def load_dataset(self):
if self.X is None or self.y is None:
self.X, self.y = util.load_openml("covtype", data_id=1596)
self.y = (self.y==2)
def load_model(self, num_trees, tree_depth):
model_name = self.get_model_name(num_trees, tree_depth)
if not os.path.isfile(os.path.join(self.models_dir, f"{model_name}.xgb")):
self.load_dataset()
print(f"training model depth={tree_depth}, num_trees={num_trees}")
def metric(y, raw_yhat):
return metrics.accuracy_score(y, raw_yhat > 0)
self.params["max_depth"] = tree_depth
self.model, lr, metric_value = util.optimize_learning_rate(self.X,
self.y, self.params, num_trees, metric)
self.meta = {"lr": lr, "metric": metric_value, "columns": list(self.X.columns)}
with open(os.path.join(self.models_dir, f"{model_name}.xgb"), "wb") as f:
pickle.dump(self.model, f)
with open(os.path.join(self.models_dir, f"{model_name}.meta"), "w") as f:
json.dump(self.meta, f)
else:
print(f"loading model from file: {model_name}")
with open(os.path.join(self.models_dir, f"{model_name}.xgb"), "rb") as f:
self.model = pickle.load(f)
with open(os.path.join(self.models_dir, f"{model_name}.meta"), "r") as f:
self.meta = json.load(f)
feat2id_dict = {v: i for i, v in enumerate(self.meta["columns"])}
self.feat2id = lambda x: feat2id_dict[x]
self.at = addtree_from_xgb_model(self.model, feat2id_map=self.feat2id)
self.at.base_score = 0
class CovtypeNormalized(Covtype):
def __init__(self):
super().__init__()
def load_dataset(self):
if self.X is None or self.y is None:
super().load_dataset()
self.minmax_normalize()
class Higgs(Dataset):
def __init__(self):
super().__init__()
self.params = {
"objective": "binary:logistic",
"eval_metric": "error",
"tree_method": "hist",
"seed": 220,
"nthread": 1,
}
def load_dataset(self):
if self.X is None or self.y is None:
higgs_data_path = os.path.join(os.environ["VERITAS_DATA_DIR"], "higgs.h5")
self.X = pd.read_hdf(higgs_data_path, "X")
self.y = pd.read_hdf(higgs_data_path, "y")
def load_model(self, num_trees, tree_depth):
model_name = self.get_model_name(num_trees, tree_depth)
if not os.path.isfile(os.path.join(self.models_dir, f"{model_name}.xgb")):
self.load_dataset()
print(f"training model depth={tree_depth}, num_trees={num_trees}")
def metric(y, raw_yhat):
return metrics.accuracy_score(y, raw_yhat > 0)
self.params["max_depth"] = tree_depth
self.model, lr, metric_value = util.optimize_learning_rate(self.X,
self.y, self.params, num_trees, metric)
self.meta = {"lr": lr, "metric": metric_value, "columns": list(self.X.columns)}
with open(os.path.join(self.models_dir, f"{model_name}.xgb"), "wb") as f:
pickle.dump(self.model, f)
with open(os.path.join(self.models_dir, f"{model_name}.meta"), "w") as f:
json.dump(self.meta, f)
else:
print(f"loading model from file: {model_name}")
with open(os.path.join(self.models_dir, f"{model_name}.xgb"), "rb") as f:
self.model = pickle.load(f)
with open(os.path.join(self.models_dir, f"{model_name}.meta"), "r") as f:
self.meta = json.load(f)
feat2id_dict = {v: i for i, v in enumerate(self.meta["columns"])}
self.feat2id = lambda x: feat2id_dict[x]
self.at = addtree_from_xgb_model(self.model, feat2id_map=self.feat2id)
self.at.base_score = 0
class LargeHiggs(Dataset):
def __init__(self):
super().__init__()
self.params = {
"objective": "binary:logistic",
"eval_metric": "error",
"tree_method": "hist",
"seed": 220,
"nthread": 1,
}
def load_dataset(self):
if self.X is None or self.y is None:
higgs_data_path = os.path.join(os.environ["VERITAS_DATA_DIR"], "higgs_large.h5")
data = pd.read_hdf(higgs_data_path)
self.y = data[0]
self.X = data.drop(columns=[0])
columns = [f"a{i}" for i in range(self.X.shape[1])]
self.X.columns = columns
self.minmax_normalize()
def load_model(self, num_trees, tree_depth):
model_name = self.get_model_name(num_trees, tree_depth)
if not os.path.isfile(os.path.join(self.models_dir, f"{model_name}.xgb")):
self.load_dataset()
print(f"training model depth={tree_depth}, num_trees={num_trees}")
def metric(y, raw_yhat):
return metrics.accuracy_score(y, raw_yhat > 0)
self.params["max_depth"] = tree_depth
self.model, lr, metric_value = util.optimize_learning_rate(self.X,
self.y, self.params, num_trees, metric)
self.meta = {"lr": lr, "metric": metric_value, "columns": list(self.X.columns)}
with open(os.path.join(self.models_dir, f"{model_name}.xgb"), "wb") as f:
pickle.dump(self.model, f)
with open(os.path.join(self.models_dir, f"{model_name}.meta"), "w") as f:
json.dump(self.meta, f)
else:
print(f"loading model from file: {model_name}")
with open(os.path.join(self.models_dir, f"{model_name}.xgb"), "rb") as f:
self.model = pickle.load(f)
with open(os.path.join(self.models_dir, f"{model_name}.meta"), "r") as f:
self.meta = json.load(f)
feat2id_dict = {v: i for i, v in enumerate(self.meta["columns"])}
self.feat2id = lambda x: feat2id_dict[x]
self.at = addtree_from_xgb_model(self.model, feat2id_map=self.feat2id)
self.at.base_score = 0
class Mnist(Dataset):
def __init__(self):
super().__init__()
self.params = {
"num_class": 10,
"objective": "multi:softmax",
"tree_method": "hist",
"eval_metric": "merror",
"seed": 53589,
"nthread": 4,
}
def load_dataset(self):
if self.X is None or self.y is None:
self.X, self.y = util.load_openml("mnist", data_id=554)
def load_model(self, num_trees, tree_depth):
model_name = self.get_model_name(num_trees, tree_depth)
if not os.path.isfile(os.path.join(self.models_dir, f"{model_name}.xgb")):
self.load_dataset()
print(f"training model depth={tree_depth}, num_trees={num_trees}")
def metric(y, yhat): #maximized
return metrics.accuracy_score(y, yhat)
self.params["max_depth"] = tree_depth
self.model, lr, metric_value = util.optimize_learning_rate(self.X,
self.y, self.params, num_trees, metric)
self.meta = {"lr": lr, "metric": metric_value, "columns": list(self.X.columns)}
with open(os.path.join(self.models_dir, f"{model_name}.xgb"), "wb") as f:
pickle.dump(self.model, f)
with open(os.path.join(self.models_dir, f"{model_name}.meta"), "w") as f:
json.dump(self.meta, f)
else:
print(f"loading model from file: {model_name}")
with open(os.path.join(self.models_dir, f"{model_name}.xgb"), "rb") as f:
self.model = pickle.load(f)
with open(os.path.join(self.models_dir, f"{model_name}.meta"), "r") as f:
self.meta = json.load(f)
feat2id_dict = {v: i for i, v in enumerate(self.meta["columns"])}
self.feat2id = lambda x: feat2id_dict[x]
self.at = addtrees_from_multiclass_xgb_model(self.model, 10, feat2id_map=self.feat2id)
for at in self.at:
at.base_score = 0
class MnistNormalized(Mnist):
def __init__(self):
super().__init__()
def load_dataset(self):
if self.X is None or self.y is None:
super().load_dataset()
self.minmax_normalize()
class Mnist2v6(Mnist):
def __init__(self):
super().__init__()
self.params = {
"objective": "binary:logistic",
"eval_metric": "error",
"tree_method": "hist",
"seed": 235,
"nthread": 4,
"subsample": 0.5,
"colsample_bytree": 0.8,
}
def load_dataset(self):
if self.X is None or self.y is None:
super().load_dataset()
self.X = self.X.loc[(self.y==2) | (self.y==6), :]
self.y = self.y[(self.y==2) | (self.y==6)]
self.y = (self.y == 2.0).astype(float)
self.X.reset_index(inplace=True, drop=True)
self.y.reset_index(inplace=True, drop=True)
def load_model(self, num_trees, tree_depth):
model_name = self.get_model_name(num_trees, tree_depth)
if not os.path.isfile(os.path.join(self.models_dir, f"{model_name}.xgb")):
self.load_dataset()
print(f"training model depth={tree_depth}, num_trees={num_trees}")
def metric(y, raw_yhat):
return metrics.accuracy_score(y, raw_yhat > 0)
self.params["max_depth"] = tree_depth
self.model, lr, metric_value = util.optimize_learning_rate(self.X,
self.y, self.params, num_trees, metric)
self.meta = {"lr": lr, "metric": metric_value, "columns": list(self.X.columns)}
with open(os.path.join(self.models_dir, f"{model_name}.xgb"), "wb") as f:
pickle.dump(self.model, f)
with open(os.path.join(self.models_dir, f"{model_name}.meta"), "w") as f:
json.dump(self.meta, f)
else:
print(f"loading model from file: {model_name}")
with open(os.path.join(self.models_dir, f"{model_name}.xgb"), "rb") as f:
self.model = pickle.load(f)
with open(os.path.join(self.models_dir, f"{model_name}.meta"), "r") as f:
self.meta = json.load(f)
feat2id_dict = {v: i for i, v in enumerate(self.meta["columns"])}
self.feat2id = lambda x: feat2id_dict[x]
self.at = addtree_from_xgb_model(self.model, feat2id_map=self.feat2id)
self.at.base_score = 0
class FashionMnist(Dataset):
def __init__(self):
super().__init__()
self.params = {
"num_class": 10,
"objective": "multi:softmax",
"tree_method": "hist",
"eval_metric": "merror",
"seed": 132955,
"nthread": 1,
}
def load_dataset(self):
if self.X is None or self.y is None:
self.X, self.y = util.load_openml("fashion_mnist", data_id=40996)
#self.minmax_normalize()
def load_model(self, num_trees, tree_depth):
model_name = self.get_model_name(num_trees, tree_depth)
if not os.path.isfile(os.path.join(self.models_dir, f"{model_name}.xgb")):
self.load_dataset()
print(f"training model depth={tree_depth}, num_trees={num_trees}")
def metric(y, yhat): #maximized
return metrics.accuracy_score(y, yhat)
self.params["max_depth"] = tree_depth
self.model, lr, metric_value = util.optimize_learning_rate(self.X,
self.y, self.params, num_trees, metric)
self.meta = {"lr": lr, "metric": metric_value, "columns": list(self.X.columns)}
with open(os.path.join(self.models_dir, f"{model_name}.xgb"), "wb") as f:
pickle.dump(self.model, f)
with open(os.path.join(self.models_dir, f"{model_name}.meta"), "w") as f:
json.dump(self.meta, f)
else:
print(f"loading model from file: {model_name}")
with open(os.path.join(self.models_dir, f"{model_name}.xgb"), "rb") as f:
self.model = pickle.load(f)
with open(os.path.join(self.models_dir, f"{model_name}.meta"), "r") as f:
self.meta = json.load(f)
feat2id_dict = {v: i for i, v in enumerate(self.meta["columns"])}
self.feat2id = lambda x: feat2id_dict[x]
self.at = addtrees_from_multiclass_xgb_model(self.model, 10, feat2id_map=self.feat2id)
for at in self.at:
at.base_score = 0
class FashionMnist2v6(FashionMnist):
def __init__(self):
super().__init__()
self.params = {
"objective": "binary:logistic",
"eval_metric": "error",
"tree_method": "hist",
"seed": 235,
"nthread": 4,
"subsample": 0.5,
"colsample_bytree": 0.8,
}
def load_dataset(self):
if self.X is None or self.y is None:
super().load_dataset()
self.X = self.X.loc[(self.y==2) | (self.y==6), :]
self.y = self.y[(self.y==2) | (self.y==6)]
self.y = (self.y == 2.0).astype(float)
self.X.reset_index(inplace=True, drop=True)
self.y.reset_index(inplace=True, drop=True)
def load_model(self, num_trees, tree_depth):
model_name = self.get_model_name(num_trees, tree_depth)
if not os.path.isfile(os.path.join(self.models_dir, f"{model_name}.xgb")):
self.load_dataset()
print(f"training model depth={tree_depth}, num_trees={num_trees}")
def metric(y, raw_yhat):
return metrics.accuracy_score(y, raw_yhat > 0)
self.params["max_depth"] = tree_depth
self.model, lr, metric_value = util.optimize_learning_rate(self.X,
self.y, self.params, num_trees, metric)
self.meta = {"lr": lr, "metric": metric_value, "columns": list(self.X.columns)}
with open(os.path.join(self.models_dir, f"{model_name}.xgb"), "wb") as f:
pickle.dump(self.model, f)
with open(os.path.join(self.models_dir, f"{model_name}.meta"), "w") as f:
json.dump(self.meta, f)
else:
print(f"loading model from file: {model_name}")
with open(os.path.join(self.models_dir, f"{model_name}.xgb"), "rb") as f:
self.model = pickle.load(f)
with open(os.path.join(self.models_dir, f"{model_name}.meta"), "r") as f:
self.meta = json.load(f)
feat2id_dict = {v: i for i, v in enumerate(self.meta["columns"])}
self.feat2id = lambda x: feat2id_dict[x]
self.at = addtree_from_xgb_model(self.model, feat2id_map=self.feat2id)
self.at.base_score = 0
class Ijcnn1(Dataset):
def __init__(self):
super().__init__()
self.params = {
"objective": "binary:logistic",
"eval_metric": "error",
"tree_method": "hist",
"seed": 235,
"nthread": 1,
}
def load_dataset(self):
if self.X is None or self.y is None:
ijcnn1_data_path = os.path.join(os.environ["VERITAS_DATA_DIR"], "ijcnn1.h5")
self.X = pd.read_hdf(ijcnn1_data_path, "Xtrain")
self.Xtest = pd.read_hdf(ijcnn1_data_path, "Xtest")
columns = [f"a{i}" for i in range(self.X.shape[1])]
self.X.columns = columns
self.Xtest.columns = columns
self.y = pd.read_hdf(ijcnn1_data_path, "ytrain")
self.ytest = pd.read_hdf(ijcnn1_data_path, "ytest")
self.minmax_normalize()
def load_model(self, num_trees, tree_depth):
model_name = self.get_model_name(num_trees, tree_depth)
if not os.path.isfile(os.path.join(self.models_dir, f"{model_name}.xgb")):
self.load_dataset()
print(f"training model depth={tree_depth}, num_trees={num_trees}")
def metric(y, raw_yhat):
return metrics.accuracy_score(y, raw_yhat > 0)
self.params["max_depth"] = tree_depth
self.model, lr, metric_value = util.optimize_learning_rate(self.X,
self.y, self.params, num_trees, metric)
self.meta = {"lr": lr, "metric": metric_value, "columns": list(self.X.columns)}
with open(os.path.join(self.models_dir, f"{model_name}.xgb"), "wb") as f:
pickle.dump(self.model, f)
with open(os.path.join(self.models_dir, f"{model_name}.meta"), "w") as f:
json.dump(self.meta, f)
else:
print(f"loading model from file: {model_name}")
with open(os.path.join(self.models_dir, f"{model_name}.xgb"), "rb") as f:
self.model = pickle.load(f)
with open(os.path.join(self.models_dir, f"{model_name}.meta"), "r") as f:
self.meta = json.load(f)
feat2id_dict = {v: i for i, v in enumerate(self.meta["columns"])}
self.feat2id = lambda x: feat2id_dict[x]
self.at = addtree_from_xgb_model(self.model, feat2id_map=self.feat2id)
self.at.base_score = 0
class Webspam(Dataset):
def __init__(self):
super().__init__()
self.params = {
"objective": "binary:logistic",
"eval_metric": "error",
"tree_method": "hist",
"seed": 732,
"nthread": 1,
}
def load_dataset(self):
if self.X is None or self.y is None:
data_path = os.path.join(os.environ["VERITAS_DATA_DIR"], "webspam_wc_normalized_unigram.h5")
self.X = pd.read_hdf(data_path, "X")
self.X.columns = [f"a{i}" for i in range(self.X.shape[1])]
self.y = pd.read_hdf(data_path, "y")
self.minmax_normalize()
def load_model(self, num_trees, tree_depth):
model_name = self.get_model_name(num_trees, tree_depth)
if not os.path.isfile(os.path.join(self.models_dir, f"{model_name}.xgb")):
self.load_dataset()
print(f"training model depth={tree_depth}, num_trees={num_trees}")
def metric(y, raw_yhat):
return metrics.accuracy_score(y, raw_yhat > 0)
self.params["max_depth"] = tree_depth
self.model, lr, metric_value = util.optimize_learning_rate(self.X,
self.y, self.params, num_trees, metric)
self.meta = {"lr": lr, "metric": metric_value, "columns": list(self.X.columns)}
with open(os.path.join(self.models_dir, f"{model_name}.xgb"), "wb") as f:
pickle.dump(self.model, f)
with open(os.path.join(self.models_dir, f"{model_name}.meta"), "w") as f:
json.dump(self.meta, f)
else:
print(f"loading model from file: {model_name}")
with open(os.path.join(self.models_dir, f"{model_name}.xgb"), "rb") as f:
self.model = pickle.load(f)
with open(os.path.join(self.models_dir, f"{model_name}.meta"), "r") as f:
self.meta = json.load(f)
feat2id_dict = {v: i for i, v in enumerate(self.meta["columns"])}
self.feat2id = lambda x: feat2id_dict[x]
self.at = addtree_from_xgb_model(self.model, feat2id_map=self.feat2id)
self.at.base_score = 0
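# Every load_model above ends by building the same column-name-to-index map
# that is passed to addtree_from_xgb_model as feat2id_map. In isolation the
# pattern is just (column names below are illustrative):

```python
columns = ["age", "income", "zip"]                 # illustrative column names
feat2id_dict = {v: i for i, v in enumerate(columns)}
feat2id = feat2id_dict.__getitem__                 # equivalent to: lambda x: feat2id_dict[x]
ids = [feat2id(c) for c in ["zip", "age"]]
```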
# -----------------------------------------------------------------------------
# File: robust_rmab/baselines/nature_baselines_sis.py
# Repo: sjohnsonyu/cluster-level-robust-rmab @ 35d4f93c85bbe4dfb2f35081822fb95a5247a189 (MIT, BSD-3-Clause)
# -----------------------------------------------------------------------------
import numpy as np
import itertools
# Don't use this policy if you need determinism (e.g., in the main loop of double oracle).
class RandomNaturePolicy:
def __init__(self, nature_params, ind):
        self.nature_params = nature_params
        self.ind = ind
        self.name = "Random_Nature"
def __repr__(self):
return "%s_%i"%(self.name, self.ind)
def get_nature_action(self, o):
actions = np.zeros((self.nature_params.shape[0],self.nature_params.shape[1]))
for arm_i in range(actions.shape[0]):
for param_i in range(actions.shape[1]):
param_range = self.nature_params[arm_i, param_i, 1] - self.nature_params[arm_i, param_i, 0]
param_lower = self.nature_params[arm_i, param_i, 0]
actions[arm_i, param_i] = np.random.rand()*param_range + param_lower
return actions
def bound_nature_actions(self, actions, state=None, reshape=True):
return actions
# this is never going to work for SIS, too big
# def get_policy_array(self, state_dim=0, N=0):
# N = self.nature_params.shape[0]
# S = self.nature_params.shape[1]
# param_i = self.nature_params.shape[2]
# all_states = list(itertools.product(np.arange(S), repeat=N))
# policy_array = np.zeros((len(all_states),N*A),dtype=float)
# tup_to_ind = dict(zip(all_states,np.arange(len(all_states))))
# all_states = np.array(all_states)
# for i, state in enumerate(all_states):
# policy_array[i] = self.get_nature_action(state).reshape(-1)
# return policy_array, tup_to_ind
class PessimisticNaturePolicy:
def __init__(self, nature_params, ind):
        self.nature_params = nature_params
        self.ind = ind
        self.name = "Pessimist_Nature"
        self.param_setting = self.set_params()
def __repr__(self):
return "%s_%i"%(self.name, self.ind)
def get_nature_action(self, o):
return self.param_setting
def set_params(self):
param_setting = np.zeros((self.nature_params.shape[0],self.nature_params.shape[1]))
for arm_i in range(param_setting.shape[0]):
# infectivity -- pess is higher
param_setting[arm_i, 0] = self.nature_params[arm_i, 0, 1]
# num_contacts -- pess is higher
param_setting[arm_i, 1] = self.nature_params[arm_i, 1, 1]
# action effect -- pess is lower
param_setting[arm_i, 2] = self.nature_params[arm_i, 2, 0]
# action effect -- pess is lower
param_setting[arm_i, 3] = self.nature_params[arm_i, 3, 0]
return param_setting
def bound_nature_actions(self, actions, state=None, reshape=True):
return actions
class OptimisticNaturePolicy:
def __init__(self, nature_params, ind):
        self.nature_params = nature_params
        self.ind = ind
        self.name = "Optimist_Nature"
        self.param_setting = self.set_params()
def __repr__(self):
return "%s_%i"%(self.name, self.ind)
def get_nature_action(self, o):
return self.param_setting
def set_params(self):
param_setting = np.zeros((self.nature_params.shape[0],self.nature_params.shape[1]))
for arm_i in range(param_setting.shape[0]):
# infectivity -- optim is lower
param_setting[arm_i, 0] = self.nature_params[arm_i, 0, 0]
# num_contacts -- optim is lower
param_setting[arm_i, 1] = self.nature_params[arm_i, 1, 0]
# action effect -- optim is higher
param_setting[arm_i, 2] = self.nature_params[arm_i, 2, 1]
# action effect -- optim is higher
param_setting[arm_i, 3] = self.nature_params[arm_i, 3, 1]
return param_setting
def bound_nature_actions(self, actions, state=None, reshape=True):
return actions
class MiddleNaturePolicy:
def __init__(self, nature_params, ind, perturbations=None, perturbation_size=0.1):
        self.nature_params = nature_params
        self.ind = ind
        self.perturbations = perturbations
        self.perturbation_size = perturbation_size
        self.name = "Middle_Nature"
        self.param_setting = self.set_params()
def __repr__(self):
return "%s_%i"%(self.name, self.ind)
def get_nature_action(self, o):
return self.param_setting
    def set_params(self):
        param_setting = np.zeros((self.nature_params.shape[0], self.nature_params.shape[1]))
        for arm_i in range(param_setting.shape[0]):
            for param_i in range(param_setting.shape[1]):
                param_mean = self.nature_params[arm_i, param_i].mean()
                if self.perturbations is not None:
                    # Shift the midpoint by up to +/- perturbation_size of the range.
                    param_range = np.ptp(self.nature_params[arm_i, param_i])
                    perturb_width = param_range * self.perturbation_size
                    perturbation = self.perturbations[arm_i, param_i]
                    perturbation = perturbation * perturb_width * 2 - perturb_width
                    param_mean = param_mean + perturbation
                param_setting[arm_i, param_i] = param_mean
        return param_setting
def bound_nature_actions(self, actions, state=None, reshape=True):
return actions
class SampledRandomNaturePolicy:
def __init__(self, nature_params, ind):
        self.nature_params = nature_params
        self.param_setting = None
        self.ind = ind
        self.name = "Sampled_Random_Nature_sis"
    # Call this exactly once per policy instance.
    def sample_param_setting(self, seed):
        assert self.param_setting is None
        rand_state = np.random.RandomState(seed)
        shape = self.nature_params.shape[:-1]
        sample = rand_state.rand(*shape)
        range_upper = self.nature_params[:, :, 1]
        range_lower = self.nature_params[:, :, 0]
        sample = sample * (range_upper - range_lower) + range_lower
        self.param_setting = sample
def __repr__(self):
return "%s_%i"%(self.name, self.ind)
def get_nature_action(self, o):
return self.param_setting
def bound_nature_actions(self, actions, state=None, reshape=True):
return actions
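# The policies in this file all reduce parameter ranges of shape
# (n_arms, n_params, 2), holding [lower, upper] per parameter, to a single
# setting. A self-contained sketch of the midpoint (Middle) and uniform-draw
# (SampledRandom) reductions, with made-up ranges:

```python
import numpy as np

rng = np.random.RandomState(0)
# (n_arms, n_params, 2) ranges: every parameter lies in [0.2, 0.8].
ranges = np.stack([np.full((3, 4), 0.2), np.full((3, 4), 0.8)], axis=-1)

middle = ranges.mean(axis=-1)                                     # midpoint of each range
draw = rng.rand(3, 4) * np.ptp(ranges, axis=-1) + ranges[..., 0]  # uniform draw within each range
```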
    # we'll just settle for getting one sample from each...
    # def get_policy_array(self, state_dim=0, N=0):
    #     N = self.nature_params.shape[0]
    #     S = self.nature_params.shape[1]
    #     A = self.nature_params.shape[2]
    #     all_states = list(itertools.product(np.arange(S), repeat=N))
    #     policy_array = np.zeros((len(all_states), N*A), dtype=float)
    #     tup_to_ind = dict(zip(all_states, np.arange(len(all_states))))
    #     all_states = np.array(all_states)
    #     for i, state in enumerate(all_states):
    #         policy_array[i] = self.get_nature_action(state).reshape(-1)
    #     return policy_array, tup_to_ind
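`sample_param_setting` draws each parameter uniformly from its [lower, upper] interval. The standalone sketch below (illustrative names, not part of the class) reproduces that rescaling for an array of per-parameter ranges:

```python
import numpy as np

def sample_within_ranges(ranges, seed):
    """Draw one uniform sample per parameter, where ranges[..., 0] is each
    parameter's lower bound and ranges[..., 1] its upper bound."""
    rng = np.random.RandomState(seed)
    lower = ranges[..., 0]
    upper = ranges[..., 1]
    u = rng.rand(*ranges.shape[:-1])
    return u * (upper - lower) + lower

# one arm with two parameters: first in [0, 1], second in [2, 4]
ranges = np.array([[[0.0, 1.0], [2.0, 4.0]]])
sample = sample_within_ranges(ranges, seed=0)
```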
# --- basic/output.py | repo: sdyz5210/python | license: BSD-3-Clause ---
print(300)
print(100 + 200)
print('100 + 200 =', 100 + 200)
print('my', 'name', 'is', 'summer')
# --- fireworks/fireworks_compare_bounds.py | repo: ermongroup/adaptive_hashing | license: MIT ---
from __future__ import division
# from sat import SAT, get_variable_subset
import time
import os
import math
import resource
import decimal
from fireworks import Firework, Workflow, FWorker, LaunchPad
from fireworks.utilities.fw_utilities import explicit_serialize
from fireworks.core.firework import FWAction, FireTaskBase
import sys
sys.path.insert(0, "/atlas/u/jkuck/F2/src/python/")
# NOTE: dsharp_call_from_python is called below; it is assumed to live in f2
# alongside the other helpers (it was missing from the original import list)
from f2 import sharp_sat_call_from_python, find_lower_bound_call_from_python, dsharp_call_from_python, execute_cmd
#True: run locally
#False: run remotely on cluster
TEST_LOCAL = False
DSHARP_EXECUTABLE = '/atlas/u/jkuck/dsharp/dsharp'
if TEST_LOCAL:
    from fireworks.core.rocket_launcher import rapidfire
else:
    from fireworks.queue.queue_launcher import rapidfire
    from fireworks.user_objects.queue_adapters.common_adapter import CommonAdapter
    from fw_tutorials.dynamic_wf.fibadd_task import FibonacciAdderTask
    from cluster_config import HOME_DIRECTORY, MONGODB_USERNAME, MONGODB_PASSWORD
    from experiment_config import MONGODB_HOST, MONGODB_PORT, MONGODB_NAME
import numpy as np
# Add the following lines to the file ~/.bashrc.user on Atlas:
# export PYTHONPATH="/atlas/u/jkuck/F2:$PYTHONPATH"
# export PYTHONPATH="/atlas/u/jkuck/F2/fireworks:$PYTHONPATH"
# $ source ~/.bashrc.user
# $ cd /atlas/u/jkuck/F2/fireworks/venv_f2
# $ source bin/activate
# $ cd ../
# $ python fireworks_compare_bounds.py
NJOBS_QUEUE = 200
m_ranges = {#'c432.isc': range(25, 42), #log_2(Z) = 36.1
'c432.isc': range(25, 46), #log_2(Z) = 36.1
'c499.isc': range(30, 51), #log_2(Z) = 41.0
'c880.isc': range(50, 71), #log_2(Z) = 60.0
'c1355.isc': range(30, 51), #log_2(Z) = 41.0
'c1908.isc': range(20, 44), #log_2(Z) = 33.0
'c2670.isc': range(220, 265), #log_2(Z) = 233
'sat-grid-pbl-0010.cnf': range(65, 95), #log_2(Z) = 78.9
'sat-grid-pbl-0015.cnf': range(170, 210), #log_2(Z) = 180.9
'sat-grid-pbl-0020.cnf': range(310, 350), #log_2(Z) = 318
'ra.cnf': range(920, 1000), #log_2(Z) = 951.0
'tire-1.cnf': range(20, 40), #log_2(Z) = 29.4 #range(27, 32), #range(20, 40),
'tire-2.cnf': range(30, 55), #log_2(Z) = 39.4 #range(27, 32), #range(20, 40),
'tire-3.cnf': range(25, 55), #log_2(Z) = 37.7 #range(27, 32), #range(20, 40),
'tire-4.cnf': range(35, 60), #log_2(Z) = 46.6 #range(27, 32), #range(20, 40),
'log-1.cnf': range(60, 85), #log_2(Z) = 69.0
'log-2.cnf': range(30, 45), #log_2(Z) = 34.9
'lang12.cnf': range(10, 26), #log_2(Z) =
'hypercube.cnf': range(80, 100), #log_2(Z) = 90
'hypercube1.cnf': range(40, 60), #log_2(Z) = 50
'hypercube2.cnf': range(1, 20), #log_2(Z) = 10
'hypercube3.cnf': range(1, 30), #log_2(Z) = 10
'hypercube4.cnf': range(10, 40), #log_2(Z) = 20
'hypercube5.cnf': range(40, 70), #log_2(Z) = 50
'hypercube6.cnf': range(90, 120), #log_2(Z) = 100
'hypercube7.cnf': range(490, 530), #log_2(Z) = 500
}
# NOTE: this reassignment overrides the m_ranges dict defined above
m_ranges = {#'c432.isc': range(25, 42), #log_2(Z) = 36.1
'c432.isc': range(18, 46), #log_2(Z) = 36.1
'c499.isc': range(20, 49), #log_2(Z) = 41.0
'c880.isc': range(40, 62), #log_2(Z) = 60.0
'c1355.isc': range(31, 38), #log_2(Z) = 41.0
'c1908.isc': range(20, 44), #log_2(Z) = 33.0
'c2670.isc': range(180, 240), #log_2(Z) = 233
'sat-grid-pbl-0010.cnf': range(55, 85), #log_2(Z) = 78.9
'sat-grid-pbl-0015.cnf': range(150, 190), #log_2(Z) = 180.9
'sat-grid-pbl-0020.cnf': range(270, 325), #log_2(Z) = 318
'ra.cnf': range(870, 1000), #log_2(Z) = 951.0
'tire-1.cnf': range(15, 38), #log_2(Z) = 29.4 #range(27, 32), #range(20, 40),
'tire-2.cnf': range(23, 47), #log_2(Z) = 39.4 #range(27, 32), #range(20, 40),
'tire-3.cnf': range(24, 46), #log_2(Z) = 37.7 #range(27, 32), #range(20, 40),
'tire-4.cnf': range(28, 55), #log_2(Z) = 46.6 #range(27, 32), #range(20, 40),
'log-1.cnf': range(55, 75), #log_2(Z) = 69.0
'log-2.cnf': range(22, 43), #log_2(Z) = 34.9
'lang12.cnf': range(6, 26), #log_2(Z) =
'hypercube.cnf': range(70, 100), #log_2(Z) = 90
'hypercube1.cnf': range(33, 60), #log_2(Z) = 50
'hypercube2.cnf': range(1, 20), #log_2(Z) = 10
'hypercube3.cnf': range(1, 30), #log_2(Z) = 10
'hypercube4.cnf': range(10, 40), #log_2(Z) = 20
'hypercube5.cnf': range(40, 70), #log_2(Z) = 50
'hypercube6.cnf': range(90, 120), #log_2(Z) = 100
'hypercube7.cnf': range(490, 530), #log_2(Z) = 500
}
if TEST_LOCAL:
    f_ranges = {'c432.isc': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
#'c432.isc': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 7)],
#'c432.isc': [.0001, .001],
'c499.isc': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'lang12.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'c880.isc': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'c1355.isc': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'c1908.isc': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'c2670.isc': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'sat-grid-pbl-0010.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
# 'sat-grid-pbl-0015.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'sat-grid-pbl-0015.cnf': [i/2000.0 for i in range(20,40)],
'sat-grid-pbl-0020.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'ra.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'tire-1.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'tire-2.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'tire-3.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'tire-4.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
# 'log-1.cnf': [i/100.0 for i in range(20, 50)],
'log-1.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'log-2.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'hypercube.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'hypercube1.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'hypercube2.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'hypercube3.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'hypercube4.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'hypercube5.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'hypercube6.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'hypercube7.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
}
else:
    f_ranges = {'c432.isc': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
#'c432.isc': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 7)],
#'c432.isc': [.0001, .001],
'c499.isc': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'lang12.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'c880.isc': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'c1355.isc': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'c1908.isc': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'c2670.isc': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'sat-grid-pbl-0010.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
# 'sat-grid-pbl-0015.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'sat-grid-pbl-0015.cnf': [i/2000.0 for i in range(20,60)],
'sat-grid-pbl-0020.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'ra.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'tire-1.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'tire-2.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'tire-3.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'tire-4.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'log-1.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'log-2.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'hypercube.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'hypercube1.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'hypercube2.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'hypercube3.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'hypercube4.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'hypercube5.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'hypercube6.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
'hypercube7.cnf': [i/1000.0 for i in range(1,10)] + [i/100.0 for i in range(1, 50)],
}
#logger = open('heatmap_result_moreModels2/speed=%d.txt' % (m), "w")
@explicit_serialize
class RunSpecificExperimentBatch(FireTaskBase):
    def run_task(self, fw_spec):
        # RESULTS_DIRECTORY = '/atlas/u/jkuck/XORModelCount/SATModelCount/fireworks/postNIPS/extended_MF_valsTEST/%s' % fw_spec['problem_name'].split('.')[0]
        if TEST_LOCAL:
            RESULTS_DIRECTORY = '/atlas/u/jkuck/F2/fireworks/local_results'
        else:
            # RESULTS_DIRECTORY = '/atlas/u/jkuck/F2/fireworks/cluster_results_fixSharpSat'
            # RESULTS_DIRECTORY = '/atlas/u/jkuck/F2/fireworks/cluster_results_orderVarsByMarginals_chunksRandom'
            # RESULTS_DIRECTORY = '/atlas/u/jkuck/F2/fireworks/cluster_results_orderVarsByDOUBLEMarginals_chunksAssignmentProblem'
            # RESULTS_DIRECTORY = '/atlas/u/jkuck/F2/fireworks/cluster_results_orderVarsByMarginals_randomInChunks_postUAI1'
            # RESULTS_DIRECTORY = '/atlas/u/jkuck/F2/fireworks/cluster_results_UAIcameraReadyDsharp1'
            RESULTS_DIRECTORY = '/atlas/u/jkuck/F2/fireworks/cluster_results_UAIcameraReadyDsharp_LBconfidenceFixed'
        if not os.path.exists(RESULTS_DIRECTORY):
            os.makedirs(RESULTS_DIRECTORY)
        filename = '%s/%s.txt' % \
            (RESULTS_DIRECTORY, fw_spec['problem_name'].split('.cnf')[0])
        logger = open(filename, 'w')
        logger.write('repeats_of_randomized_hashing_methods: %s\n' % (fw_spec['repeats']))
        logger.close()

        ####### CHECK IF DSHARP CAN SOLVE THE PROBLEM QUICKLY, IF SO RETURN EARLY #######
        time_out, solution_count, dsharp_time = dsharp_call_from_python(problem_name=fw_spec['problem_name'], time_limit=2)
        if not time_out:
            logger = open(filename, 'a')
            logger.write("dsharp time_out: %s solution_count: %s dsharp_time: %s\n" % (time_out, solution_count, dsharp_time))
            logger.close()
            return 0
# ####### CHECK IF SHARPSAT CAN SOLVE THE PROBLEM QUICKLY, IF SO RETURN EARLY #######
# time_out, solution_count, sharp_sat_time = sharp_sat_call_from_python(problem_name=fw_spec['problem_name'], time_limit=2)
# if not time_out:
# logger = open(filename, 'a')
# logger.write("sharpSAT time_out: %s solution_count: %s sharp_sat_time: %s\n" % (time_out, solution_count, sharp_sat_time))
# logger.close()
# return 0
# ####### RUN EXPERIMENT: F2 with 1 ones per column, order variables by marginals, T=1 solutions #######
# for random_seed in range(fw_spec['repeats']):
# extra_configs = {
# 'sum_of_T_solutions':1,
# }
# lb, sat_solver_time, time_out, parallel_runtime = find_lower_bound_call_from_python(problem_name=fw_spec['problem_name'],\
# random_seed=random_seed, var_degree=1, method='bi_regular_order_vars_by_marginals_randomChunks', extra_configs=extra_configs, time_limit=5000)
# logger = open(filename, 'a')
# logger.write("biregular_order_vars_by_marginals_assignmentProblem_variable_degree_1_Tsol_1 time_out: %s lower_bound: %s sat_solver_time: %s random_seed: %s parallel_runtime: %s\n" % (time_out, lb, sat_solver_time, random_seed, parallel_runtime))
# logger.close()
# if sat_solver_time > 500:
# break
        ####### RUN EXPERIMENT: F2 with 1 ones per column, T=1 solutions #######
        for random_seed in range(fw_spec['repeats']):
            extra_configs = {
                'sum_of_T_solutions': 1,
            }
            lb, sat_solver_time, time_out, parallel_runtime = find_lower_bound_call_from_python(problem_name=fw_spec['problem_name'],
                random_seed=random_seed, var_degree=1, method='original', extra_configs=extra_configs, time_limit=5000)
            logger = open(filename, 'a')
            logger.write("biregular_variable_degree_1_Tsol_1 time_out: %s lower_bound: %s sat_solver_time: %s random_seed: %s parallel_runtime: %s\n" % (time_out, lb, sat_solver_time, random_seed, parallel_runtime))
            logger.close()
            if sat_solver_time > 5000:
                break

        ####### RUN EXPERIMENT: F2 with 1 ones per column, T=10 solutions #######
        for random_seed in range(fw_spec['repeats']):
            extra_configs = {
                'sum_of_T_solutions': 10,
            }
            lb, sat_solver_time, time_out, parallel_runtime = find_lower_bound_call_from_python(problem_name=fw_spec['problem_name'],
                random_seed=random_seed, var_degree=1, method='original', extra_configs=extra_configs, time_limit=5000)
            logger = open(filename, 'a')
            logger.write("biregular_variable_degree_1_Tsol_10 time_out: %s lower_bound: %s sat_solver_time: %s random_seed: %s parallel_runtime: %s\n" % (time_out, lb, sat_solver_time, random_seed, parallel_runtime))
            logger.close()
            if sat_solver_time > 5000:
                break

        ####### RUN EXPERIMENT: F2 with 1 ones per column, order variables by marginals, T=10 solutions #######
        for random_seed in range(fw_spec['repeats']):
            extra_configs = {
                'sum_of_T_solutions': 10,
            }
            lb, sat_solver_time, time_out, parallel_runtime = find_lower_bound_call_from_python(problem_name=fw_spec['problem_name'],
                random_seed=random_seed, var_degree=1, method='bi_regular_order_vars_by_marginals_randomChunks', extra_configs=extra_configs, time_limit=5000)
            logger = open(filename, 'a')
            logger.write("biregular_order_vars_by_marginals_assignmentProblem_variable_degree_1_Tsol_10 time_out: %s lower_bound: %s sat_solver_time: %s random_seed: %s parallel_runtime: %s\n" % (time_out, lb, sat_solver_time, random_seed, parallel_runtime))
            logger.close()
            if sat_solver_time > 5000:
                break
        TEST_FEWER_REPEATS = False
        if TEST_FEWER_REPEATS:
            ####### RUN EXPERIMENT: F2 with 1 ones per column, order variables by marginals, T=3 solutions #######
            for random_seed in range(fw_spec['repeats']):
                extra_configs = {
                    'sum_of_T_solutions': 3,
                }
                lb, sat_solver_time, time_out, parallel_runtime = find_lower_bound_call_from_python(problem_name=fw_spec['problem_name'],
                    random_seed=random_seed, var_degree=1, method='bi_regular_order_vars_by_marginals_randomChunks', extra_configs=extra_configs, time_limit=5000)
                logger = open(filename, 'a')
                logger.write("biregular_order_vars_by_marginals_assignmentProblem_variable_degree_1_Tsol_3 time_out: %s lower_bound: %s sat_solver_time: %s random_seed: %s parallel_runtime: %s\n" % (time_out, lb, sat_solver_time, random_seed, parallel_runtime))
                logger.close()
                if sat_solver_time > 500:
                    break

            ####### RUN EXPERIMENT: F2 with 1 ones per column, order variables by marginals, T=1 solutions #######
            for random_seed in range(fw_spec['repeats']):
                extra_configs = {
                    'sum_of_T_solutions': 1,
                }
                lb, sat_solver_time, time_out, parallel_runtime = find_lower_bound_call_from_python(problem_name=fw_spec['problem_name'],
                    random_seed=random_seed, var_degree=1, method='bi_regular_order_vars_by_marginals_randomChunks', extra_configs=extra_configs, time_limit=5000)
                logger = open(filename, 'a')
                logger.write("biregular_order_vars_by_marginals_assignmentProblem_variable_degree_1_Tsol_1 time_out: %s lower_bound: %s sat_solver_time: %s random_seed: %s parallel_runtime: %s\n" % (time_out, lb, sat_solver_time, random_seed, parallel_runtime))
                logger.close()
                if sat_solver_time > 500:
                    break
# ####### RUN EXPERIMENT: F2 with 1 ones per column, order variables by 'double' marginals #######
# for random_seed in range(fw_spec['repeats']):
# lb, sat_solver_time, time_out, parallel_runtime = find_lower_bound_call_from_python(problem_name=fw_spec['problem_name'],\
# random_seed=random_seed, var_degree=1, method='bi_regular_order_vars_by_double_marginals', extra_configs=extra_configs, time_limit=5000)
# logger = open(filename, 'a')
# logger.write("biregular_order_vars_by_doubleMarginals_assignmentProblem_variable_degree_1 time_out: %s lower_bound: %s sat_solver_time: %s random_seed: %s parallel_runtime: %s\n" % (time_out, lb, sat_solver_time, random_seed, parallel_runtime))
# logger.close()
# if sat_solver_time > 500:
# break
        TEST_HIGHER_DENSITY = False
        if TEST_HIGHER_DENSITY:
            # try longer constraints, 2 ones per column
            ####### RUN EXPERIMENT: F2 with 2 ones per column, T=1 solutions #######
            for random_seed in range(fw_spec['repeats']):
                extra_configs = {
                    'sum_of_T_solutions': 1,
                }
                lb, sat_solver_time, time_out, parallel_runtime = find_lower_bound_call_from_python(problem_name=fw_spec['problem_name'],
                    random_seed=random_seed, var_degree=2, method='original', extra_configs=extra_configs, time_limit=5000)
                logger = open(filename, 'a')
                logger.write("biregular_variable_degree_2_Tsol_1 time_out: %s lower_bound: %s sat_solver_time: %s random_seed: %s parallel_runtime: %s\n" % (time_out, lb, sat_solver_time, random_seed, parallel_runtime))
                logger.close()
                if sat_solver_time > 500:
                    break

            ####### RUN EXPERIMENT: F2 with 2 ones per column, T=10 solutions #######
            for random_seed in range(fw_spec['repeats']):
                extra_configs = {
                    'sum_of_T_solutions': 10,
                }
                lb, sat_solver_time, time_out, parallel_runtime = find_lower_bound_call_from_python(problem_name=fw_spec['problem_name'],
                    random_seed=random_seed, var_degree=2, method='original', extra_configs=extra_configs, time_limit=5000)
                logger = open(filename, 'a')
                logger.write("biregular_variable_degree_2_Tsol_10 time_out: %s lower_bound: %s sat_solver_time: %s random_seed: %s parallel_runtime: %s\n" % (time_out, lb, sat_solver_time, random_seed, parallel_runtime))
                logger.close()
                if sat_solver_time > 500:
                    break

            ####### RUN EXPERIMENT: F2 with 2 ones per column, order variables by marginals, T=10 solutions #######
            for random_seed in range(fw_spec['repeats']):
                extra_configs = {
                    'sum_of_T_solutions': 10,
                }
                lb, sat_solver_time, time_out, parallel_runtime = find_lower_bound_call_from_python(problem_name=fw_spec['problem_name'],
                    random_seed=random_seed, var_degree=2, method='bi_regular_order_vars_by_marginals_randomChunks', extra_configs=extra_configs, time_limit=5000)
                logger = open(filename, 'a')
                logger.write("biregular_order_vars_by_marginals_assignmentProblem_variable_degree_2_Tsol_10 time_out: %s lower_bound: %s sat_solver_time: %s random_seed: %s parallel_runtime: %s\n" % (time_out, lb, sat_solver_time, random_seed, parallel_runtime))
                logger.close()
                if sat_solver_time > 500:
                    break

            # try longer constraints, 3 ones per column
            ####### RUN EXPERIMENT: F2 with 3 ones per column, T=1 solutions #######
            for random_seed in range(fw_spec['repeats']):
                extra_configs = {
                    'sum_of_T_solutions': 1,
                }
                lb, sat_solver_time, time_out, parallel_runtime = find_lower_bound_call_from_python(problem_name=fw_spec['problem_name'],
                    random_seed=random_seed, var_degree=3, method='original', extra_configs=extra_configs, time_limit=5000)
                logger = open(filename, 'a')
                logger.write("biregular_variable_degree_3_Tsol_1 time_out: %s lower_bound: %s sat_solver_time: %s random_seed: %s parallel_runtime: %s\n" % (time_out, lb, sat_solver_time, random_seed, parallel_runtime))
                logger.close()
                if sat_solver_time > 500:
                    break

            ####### RUN EXPERIMENT: F2 with 3 ones per column, T=10 solutions #######
            for random_seed in range(fw_spec['repeats']):
                extra_configs = {
                    'sum_of_T_solutions': 10,
                }
                lb, sat_solver_time, time_out, parallel_runtime = find_lower_bound_call_from_python(problem_name=fw_spec['problem_name'],
                    random_seed=random_seed, var_degree=3, method='original', extra_configs=extra_configs, time_limit=5000)
                logger = open(filename, 'a')
                logger.write("biregular_variable_degree_3_Tsol_10 time_out: %s lower_bound: %s sat_solver_time: %s random_seed: %s parallel_runtime: %s\n" % (time_out, lb, sat_solver_time, random_seed, parallel_runtime))
                logger.close()
                if sat_solver_time > 500:
                    break

            ####### RUN EXPERIMENT: F2 with 3 ones per column, order variables by marginals, T=10 solutions #######
            for random_seed in range(fw_spec['repeats']):
                extra_configs = {
                    'sum_of_T_solutions': 10,
                }
                lb, sat_solver_time, time_out, parallel_runtime = find_lower_bound_call_from_python(problem_name=fw_spec['problem_name'],
                    random_seed=random_seed, var_degree=3, method='bi_regular_order_vars_by_marginals_randomChunks', extra_configs=extra_configs, time_limit=5000)
                logger = open(filename, 'a')
                logger.write("biregular_order_vars_by_marginals_assignmentProblem_variable_degree_3_Tsol_10 time_out: %s lower_bound: %s sat_solver_time: %s random_seed: %s parallel_runtime: %s\n" % (time_out, lb, sat_solver_time, random_seed, parallel_runtime))
                logger.close()
                if sat_solver_time > 500:
                    break
# ####### RUN EXPERIMENT: F2 with 1.5 ones per column, T=1 solutions #######
# for random_seed in range(fw_spec['repeats']):
# extra_configs = {
# 'sum_of_T_solutions':1,
# }
# lb, sat_solver_time, time_out, parallel_runtime = find_lower_bound_call_from_python(problem_name=fw_spec['problem_name'],\
# random_seed=random_seed, var_degree=1.5, method='original', extra_configs=extra_configs, time_limit=5000)
# logger = open(filename, 'a')
# logger.write("biregular_variable_degree_1.5_Tsol_1 time_out: %s lower_bound: %s sat_solver_time: %s random_seed: %s parallel_runtime: %s\n" % (time_out, lb, sat_solver_time, random_seed, parallel_runtime))
# logger.close()
# if sat_solver_time > 500:
# break
# ####### RUN EXPERIMENT: F2 with 1.5 ones per column, order variables by marginals #######
# for random_seed in range(fw_spec['repeats']):
# lb, sat_solver_time, time_out, parallel_runtime = find_lower_bound_call_from_python(problem_name=fw_spec['problem_name'],\
# random_seed=random_seed, var_degree=1.5, method='bi_regular_order_vars_by_marginals_randomChunks', extra_configs=None, time_limit=5000)
# logger = open(filename, 'a')
# logger.write("biregular_order_vars_by_marginals_assignmentProblem_variable_degree_1.5 time_out: %s lower_bound: %s sat_solver_time: %s random_seed: %s parallel_runtime: %s\n" % (time_out, lb, sat_solver_time, random_seed, parallel_runtime))
# logger.close()
# if sat_solver_time > 500:
# break
# exit(0)
# ####### RUN EXPERIMENT: F2 with 3 ones per column #######
# for random_seed in range(fw_spec['repeats']):
# lb, sat_solver_time, time_out, parallel_runtime = find_lower_bound_call_from_python(problem_name=fw_spec['problem_name'],\
# random_seed=random_seed, var_degree=3, method='original', extra_configs=None, time_limit=5000)
# logger = open(filename, 'a')
# logger.write("biregular_variable_degree_3 time_out: %s lower_bound: %s sat_solver_time: %s random_seed: %s\n" % (time_out, lb, sat_solver_time, random_seed))
# logger.close()
# if sat_solver_time > 500:
# break
# ####### RUN EXPERIMENT: F2 with long (iid .5) constraints #######
# for random_seed in range(fw_spec['repeats']):
# extra_configs = {
# #density of ones in constraint matrix for method = 'iid'
# 'f': .5,
# }
# lb, sat_solver_time, time_out, parallel_runtime = find_lower_bound_call_from_python(problem_name=fw_spec['problem_name'],\
# random_seed=random_seed, var_degree=3, method='iid', extra_configs=extra_configs, time_limit=5000)
# logger = open(filename, 'a')
# logger.write("long_iid_.5 time_out: %s lower_bound: %s sat_solver_time: %s random_seed: %s\n" % (time_out, lb, sat_solver_time, random_seed))
# logger.close()
# if sat_solver_time > 500:
# break
# ####### RUN EXPERIMENT: F2 biregular constraints, sample entire matrix and look at marginals #######
# for random_seed in range(fw_spec['repeats']):
# extra_configs = {
# #when sampling biregular matrices such that each constraint has a good marginal,
# #how do we deal with the 'problem constraints' at the end?
# # - 'iid': sample them iid, give up biregular
# # - 'keep_biregular': leave them as biregular, give up good marginals
# 'biregular_marginal_problem_constraint': 'iid',
# }
# lb, sat_solver_time, time_out, parallel_runtime = find_lower_bound_call_from_python(problem_name=fw_spec['problem_name'],\
# random_seed=random_seed, var_degree=1.5, method='bi_regular_marginals_joint_constraint', extra_configs=extra_configs, time_limit=5000)
# logger = open(filename, 'a')
# logger.write("bi_regular_marginals_joint_constraint_variable_degree_1.5 time_out: %s lower_bound: %s sat_solver_time: %s random_seed: %s\n" % (time_out, lb, sat_solver_time, random_seed))
# logger.close()
# if sat_solver_time > 500:
# break
# ####### RUN EXPERIMENT: F2 biregular constraints, sample entire matrix and look at marginals of each constraint #######
# for random_seed in range(fw_spec['repeats']):
# extra_configs = {
# #when sampling biregular matrices such that each constraint has a good marginal,
# #how do we deal with the 'problem constraints' at the end?
# # - 'iid': sample them iid, give up biregular
# # - 'keep_biregular': leave them as biregular, give up good marginals
# 'biregular_marginal_problem_constraint': 'iid',
# }
# lb, sat_solver_time, time_out, parallel_runtime = find_lower_bound_call_from_python(problem_name=fw_spec['problem_name'],\
# random_seed=random_seed, var_degree=1.5, method='bi_regular_marginals_per_constraint', extra_configs=extra_configs, time_limit=5000)
# logger = open(filename, 'a')
# logger.write("bi_regular_marginals_per_constraint_iid_variable_degree_1.5 time_out: %s lower_bound: %s sat_solver_time: %s random_seed: %s\n" % (time_out, lb, sat_solver_time, random_seed))
# logger.close()
# if sat_solver_time > 500:
# break
# ####### RUN EXPERIMENT: F2 biregular constraints, sample entire matrix and look at marginals of each constraint #######
# for random_seed in range(fw_spec['repeats']):
# extra_configs = {
# #when sampling biregular matrices such that each constraint has a good marginal,
# #how do we deal with the 'problem constraints' at the end?
# # - 'iid': sample them iid, give up biregular
# # - 'keep_biregular': leave them as biregular, give up good marginals
# 'biregular_marginal_problem_constraint': 'keep_biregular',
# }
# lb, sat_solver_time, time_out, parallel_runtime = find_lower_bound_call_from_python(problem_name=fw_spec['problem_name'],\
# random_seed=random_seed, var_degree=1.5, method='bi_regular_marginals_per_constraint', extra_configs=extra_configs, time_limit=5000)
# logger = open(filename, 'a')
# logger.write("bi_regular_marginals_per_constraint_keep_biregular_variable_degree_1.5 time_out: %s lower_bound: %s sat_solver_time: %s random_seed: %s\n" % (time_out, lb, sat_solver_time, random_seed))
# logger.close()
# if sat_solver_time > 500:
# break
        ####### RUN EXPERIMENT IN ONE FILE: SHARPSAT #######
        time_out, solution_count, sharp_sat_time = sharp_sat_call_from_python(problem_name=fw_spec['problem_name'], time_limit=5000)
        logger = open(filename, 'a')
        logger.write("sharpSAT time_out: %s solution_count: %s sharp_sat_time: %s\n" % (time_out, solution_count, sharp_sat_time))
        logger.close()

        ####### RUN EXPERIMENT IN ONE FILE: DSHARP #######
        time_out, solution_count, dsharp_time = dsharp_call_from_python(problem_name=fw_spec['problem_name'], time_limit=5000)
        logger = open(filename, 'a')
        logger.write("dsharp time_out: %s solution_count: %s dsharp_time: %s\n" % (time_out, solution_count, dsharp_time))
        logger.close()
def create_launchpad():
    with open('./my_launchpad.yaml', 'w') as f:
        f.write('host: %s\n' % MONGODB_HOST)
        f.write('port: %d\n' % MONGODB_PORT)
        f.write('name: %s\n' % MONGODB_NAME)
        f.write('username: %s\n' % MONGODB_USERNAME)
        f.write('password: %s\n' % MONGODB_PASSWORD)
        f.write('logdir: null\n')
        f.write('strm_lvl: INFO\n')
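The function above emits the YAML line by line; building the whole file contents as one string first makes the output easy to inspect and test. A minimal sketch (`launchpad_yaml` is an illustrative name, not part of the original script):

```python
def launchpad_yaml(host, port, name, username, password):
    """Build my_launchpad.yaml contents as a single string, mirroring what
    create_launchpad() writes field by field."""
    fields = [
        ('host', host),
        ('port', port),
        ('name', name),
        ('username', username),
        ('password', password),
        ('logdir', 'null'),
        ('strm_lvl', 'INFO'),
    ]
    return ''.join('%s: %s\n' % (key, value) for key, value in fields)
```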
def run_experiment():
    # write new launchpad file
    create_launchpad()

    # set up the LaunchPad and reset it
    launchpad = LaunchPad(host=MONGODB_HOST, port=MONGODB_PORT, name=MONGODB_NAME, username=MONGODB_USERNAME, password=MONGODB_PASSWORD,
                          logdir=None, strm_lvl='INFO', user_indices=None, wf_user_indices=None)
    # logdir=None, strm_lvl='INFO', user_indices=None, wf_user_indices=None, ssl_ca_file=None)
    launchpad.reset('', require_password=False)

    all_fireworks = []
    if TEST_LOCAL:
        PROBLEM_NAMES = ['01A-1.cnf.gz.no_w.cnf']
        # PROBLEM_NAMES = ['54.sk_12_97.cnf.gz.no_w.cnf']
# PROBLEM_NAMES = ['01A-1.cnf.gz.no_w.cnf', '01B-1.cnf.gz.no_w.cnf', '01B-2.cnf.gz.no_w.cnf', '01B-3.cnf.gz.no_w.cnf', '01B-4.cnf.gz.no_w.cnf', '01B-5.cnf.gz.no_w.cnf', '02A-1.cnf.gz.no_w.cnf', '02A-2.cnf.gz.no_w.cnf', '02A-3.cnf.gz.no_w.cnf', '02B-1.cnf.gz.no_w.cnf', '02B-2.cnf.gz.no_w.cnf', '02B-3.cnf.gz.no_w.cnf', '02B-4.cnf.gz.no_w.cnf', '02B-5.cnf.gz.no_w.cnf', '03A-1.cnf.gz.no_w.cnf', '03A-2.cnf.gz.no_w.cnf', '03B-1.cnf.gz.no_w.cnf', '03B-2.cnf.gz.no_w.cnf', '03B-3.cnf.gz.no_w.cnf', '03B-4.cnf.gz.no_w.cnf', '04A-1.cnf.gz.no_w.cnf', '04A-2.cnf.gz.no_w.cnf', '04A-3.cnf.gz.no_w.cnf', '04B-1.cnf.gz.no_w.cnf', '04B-2.cnf.gz.no_w.cnf', '04B-3.cnf.gz.no_w.cnf', '04B-3.cnf.gz.no_w.cnf.gz.log', '04B-4.cnf.gz.no_w.cnf', '05A-1.cnf.gz.no_w.cnf', '05A-2.cnf.gz.no_w.cnf', '05B-1.cnf.gz.no_w.cnf', '05B-2.cnf.gz.no_w.cnf', '05B-3.cnf.gz.no_w.cnf', '06A-1.cnf.gz.no_w.cnf', '06A-2.cnf.gz.no_w.cnf', '06A-3.cnf.gz.no_w.cnf', '06A-4.cnf.gz.no_w.cnf', '06B-1.cnf.gz.no_w.cnf', '06B-2.cnf.gz.no_w.cnf', '06B-3.cnf.gz.no_w.cnf', '06B-4.cnf.gz.no_w.cnf', '07A-1.cnf.gz.no_w.cnf', '07A-2.cnf.gz.no_w.cnf', '07A-3.cnf.gz.no_w.cnf', '07A-4.cnf.gz.no_w.cnf', '07A-5.cnf.gz.no_w.cnf', '07B-1.cnf.gz.no_w.cnf', '07B-2.cnf.gz.no_w.cnf', '07B-3.cnf.gz.no_w.cnf', '07B-4.cnf.gz.no_w.cnf', '07B-5.cnf.gz.no_w.cnf', '07B-6.cnf.gz.no_w.cnf', '08A-1.cnf.gz.no_w.cnf', '08A-2.cnf.gz.no_w.cnf', '08A-3.cnf.gz.no_w.cnf', '08A-4.cnf.gz.no_w.cnf', '08B-1.cnf.gz.no_w.cnf', '08B-2.cnf.gz.no_w.cnf', '08B-3.cnf.gz.no_w.cnf', '08B-4.cnf.gz.no_w.cnf', '09A-1.cnf.gz.no_w.cnf', '09A-2.cnf.gz.no_w.cnf', '09A-3.cnf.gz.no_w.cnf', '09B-1.cnf.gz.no_w.cnf', '09B-2.cnf.gz.no_w.cnf', '09B-3.cnf.gz.no_w.cnf', '09B-4.cnf.gz.no_w.cnf', '09B-5.cnf.gz.no_w.cnf', '09B-6.cnf.gz.no_w.cnf', '107.sk_3_90.cnf.gz.no_w.cnf', '109.sk_4_36.cnf.gz.no_w.cnf', '10A-1.cnf.gz.no_w.cnf', '10A-2.cnf.gz.no_w.cnf', '10A-3.cnf.gz.no_w.cnf', '10A-4.cnf.gz.no_w.cnf', '10B-10.cnf.gz.no_w.cnf', '10B-11.cnf.gz.no_w.cnf', '10B-1.cnf.gz.no_w.cnf', 
'10B-2.cnf.gz.no_w.cnf', '10B-3.cnf.gz.no_w.cnf', '10B-4.cnf.gz.no_w.cnf', '10B-5.cnf.gz.no_w.cnf', '10B-6.cnf.gz.no_w.cnf', '10B-7.cnf.gz.no_w.cnf', '10B-8.cnf.gz.no_w.cnf', '10B-9.cnf.gz.no_w.cnf', '10.sk_1_46.cnf.gz.no_w.cnf', '110.sk_3_88.cnf.gz.no_w.cnf', '111.sk_2_36.cnf.gz.no_w.cnf', '11A-1.cnf.gz.no_w.cnf', '11A-2.cnf.gz.no_w.cnf', '11A-3.cnf.gz.no_w.cnf', '11A-4.cnf.gz.no_w.cnf', '11B-1.cnf.gz.no_w.cnf', '11B-2.cnf.gz.no_w.cnf', '11B-3.cnf.gz.no_w.cnf', '11B-4.cnf.gz.no_w.cnf', '11B-5.cnf.gz.no_w.cnf', '12A-1.cnf.gz.no_w.cnf', '12A-2.cnf.gz.no_w.cnf', '12A-3.cnf.gz.no_w.cnf', '12A-4.cnf.gz.no_w.cnf', '12B-1.cnf.gz.no_w.cnf', '12B-2.cnf.gz.no_w.cnf', '12B-3.cnf.gz.no_w.cnf', '12B-4.cnf.gz.no_w.cnf', '12B-5.cnf.gz.no_w.cnf', '12B-6.cnf.gz.no_w.cnf', '13A-1.cnf.gz.no_w.cnf', '13A-2.cnf.gz.no_w.cnf', '13A-3.cnf.gz.no_w.cnf', '13A-4.cnf.gz.no_w.cnf', '13B-1.cnf.gz.no_w.cnf', '13B-2.cnf.gz.no_w.cnf', '13B-3.cnf.gz.no_w.cnf', '13B-4.cnf.gz.no_w.cnf', '13B-5.cnf.gz.no_w.cnf', '14A-1.cnf.gz.no_w.cnf', '14A-2.cnf.gz.no_w.cnf', '14A-3.cnf.gz.no_w.cnf', '15A-1.cnf.gz.no_w.cnf', '15A-2.cnf.gz.no_w.cnf', '15A-3.cnf.gz.no_w.cnf', '15A-4.cnf.gz.no_w.cnf', '15B-1.cnf.gz.no_w.cnf', '15B-2.cnf.gz.no_w.cnf', '15B-3.cnf.gz.no_w.cnf', '15B-4.cnf.gz.no_w.cnf', '15B-5.cnf.gz.no_w.cnf', '17A-1.cnf.gz.no_w.cnf', '17A-2.cnf.gz.no_w.cnf', '17A-3.cnf.gz.no_w.cnf', '17A-4.cnf.gz.no_w.cnf', '17A-5.cnf.gz.no_w.cnf', '17A-6.cnf.gz.no_w.cnf', '17B-1.cnf.gz.no_w.cnf', '17B-2.cnf.gz.no_w.cnf', '17B-3.cnf.gz.no_w.cnf', '17B-4.cnf.gz.no_w.cnf', '17B-5.cnf.gz.no_w.cnf', '17.sk_3_45.cnf.gz.no_w.cnf', '18A-1.cnf.gz.no_w.cnf', '18A-2.cnf.gz.no_w.cnf', '18A-3.cnf.gz.no_w.cnf', '18A-4.cnf.gz.no_w.cnf', '19.sk_3_48.cnf.gz.no_w.cnf', '20.sk_1_51.cnf.gz.no_w.cnf', '27.sk_3_32.cnf.gz.no_w.cnf', '29.sk_3_45.cnf.gz.no_w.cnf', '30.sk_5_76.cnf.gz.no_w.cnf', '32.sk_4_38.cnf.gz.no_w.cnf', '35.sk_3_52.cnf.gz.no_w.cnf', '36.sk_3_77.cnf.gz.no_w.cnf', '4step.cnf.gz.no_w.cnf', '50-10-10-q.cnf.gz.no_w.cnf', 
'50-10-1-q.cnf.gz.no_w.cnf', '50-10-2-q.cnf.gz.no_w.cnf', '50-10-3-q.cnf.gz.no_w.cnf', '50-10-4-q.cnf.gz.no_w.cnf', '50-10-5-q.cnf.gz.no_w.cnf', '50-10-6-q.cnf.gz.no_w.cnf', '50-10-7-q.cnf.gz.no_w.cnf', '50-10-8-q.cnf.gz.no_w.cnf', '50-10-9-q.cnf.gz.no_w.cnf', '50-12-10-q.cnf.gz.no_w.cnf', '50-12-1-q.cnf.gz.no_w.cnf', '50-12-2-q.cnf.gz.no_w.cnf', '50-12-3-q.cnf.gz.no_w.cnf', '50-12-4-q.cnf.gz.no_w.cnf', '50-12-5-q.cnf.gz.no_w.cnf', '50-12-6-q.cnf.gz.no_w.cnf', '50-12-7-q.cnf.gz.no_w.cnf', '50-12-8-q.cnf.gz.no_w.cnf', '50-12-9-q.cnf.gz.no_w.cnf', '50-14-10-q.cnf.gz.no_w.cnf', '50-14-1-q.cnf.gz.no_w.cnf', '50-14-2-q.cnf.gz.no_w.cnf', '50-14-3-q.cnf.gz.no_w.cnf', '50-14-4-q.cnf.gz.no_w.cnf', '50-14-5-q.cnf.gz.no_w.cnf', '50-14-6-q.cnf.gz.no_w.cnf', '50-14-7-q.cnf.gz.no_w.cnf', '50-14-8-q.cnf.gz.no_w.cnf', '50-14-9-q.cnf.gz.no_w.cnf', '50-16-10-q.cnf.gz.no_w.cnf', '50-16-1-q.cnf.gz.no_w.cnf', '50-16-2-q.cnf.gz.no_w.cnf', '50-16-3-q.cnf.gz.no_w.cnf', '50-16-4-q.cnf.gz.no_w.cnf', '50-16-5-q.cnf.gz.no_w.cnf', '50-16-6-q.cnf.gz.no_w.cnf', '50-16-7-q.cnf.gz.no_w.cnf', '50-16-8-q.cnf.gz.no_w.cnf', '50-16-9-q.cnf.gz.no_w.cnf', '50-18-10-q.cnf.gz.no_w.cnf', '50-18-1-q.cnf.gz.no_w.cnf', '50-18-2-q.cnf.gz.no_w.cnf', '50-18-3-q.cnf.gz.no_w.cnf', '50-18-4-q.cnf.gz.no_w.cnf', '50-18-5-q.cnf.gz.no_w.cnf', '50-18-6-q.cnf.gz.no_w.cnf', '50-18-7-q.cnf.gz.no_w.cnf', '50-18-8-q.cnf.gz.no_w.cnf', '50-18-9-q.cnf.gz.no_w.cnf', '50-20-10-q.cnf.gz.no_w.cnf', '50-20-1-q.cnf.gz.no_w.cnf', '50-20-2-q.cnf.gz.no_w.cnf', '50-20-3-q.cnf.gz.no_w.cnf', '50-20-4-q.cnf.gz.no_w.cnf', '50-20-5-q.cnf.gz.no_w.cnf', '50-20-6-q.cnf.gz.no_w.cnf', '50-20-7-q.cnf.gz.no_w.cnf', '50-20-8-q.cnf.gz.no_w.cnf', '50-20-9-q.cnf.gz.no_w.cnf', '51.sk_4_38.cnf.gz.no_w.cnf', '53.sk_4_32.cnf.gz.no_w.cnf', '54.sk_12_97.cnf.gz.no_w.cnf', '54.sk_12_97.cnf.gz.no_w.no_independent_set.cnf', '55.sk_3_46.cnf.gz.no_w.cnf', '56.sk_6_38.cnf.gz.no_w.cnf', '57.sk_4_64.cnf.gz.no_w.cnf', '5step.cnf.gz.no_w.cnf', 
'63.sk_3_64.cnf.gz.no_w.cnf', '70.sk_3_40.cnf.gz.no_w.cnf', '71.sk_3_65.cnf.gz.no_w.cnf', '75-10-10-q.cnf.gz.no_w.cnf', '75-10-1-q.cnf.gz.no_w.cnf', '75-10-2-q.cnf.gz.no_w.cnf', '75-10-3-q.cnf.gz.no_w.cnf', '75-10-4-q.cnf.gz.no_w.cnf', '75-10-5-q.cnf.gz.no_w.cnf', '75-10-6-q.cnf.gz.no_w.cnf', '75-10-7-q.cnf.gz.no_w.cnf', '75-10-8-q.cnf.gz.no_w.cnf', '75-10-9-q.cnf.gz.no_w.cnf', '75-12-10-q.cnf.gz.no_w.cnf', '75-12-1-q.cnf.gz.no_w.cnf', '75-12-2-q.cnf.gz.no_w.cnf', '75-12-3-q.cnf.gz.no_w.cnf', '75-12-4-q.cnf.gz.no_w.cnf', '75-12-5-q.cnf.gz.no_w.cnf', '75-12-6-q.cnf.gz.no_w.cnf', '75-12-7-q.cnf.gz.no_w.cnf', '75-12-8-q.cnf.gz.no_w.cnf', '75-12-9-q.cnf.gz.no_w.cnf', '75-14-10-q.cnf.gz.no_w.cnf', '75-14-1-q.cnf.gz.no_w.cnf', '75-14-2-q.cnf.gz.no_w.cnf', '75-14-3-q.cnf.gz.no_w.cnf', '75-14-4-q.cnf.gz.no_w.cnf', '75-14-5-q.cnf.gz.no_w.cnf', '75-14-6-q.cnf.gz.no_w.cnf', '75-14-7-q.cnf.gz.no_w.cnf', '75-14-8-q.cnf.gz.no_w.cnf', '75-14-9-q.cnf.gz.no_w.cnf', '75-15-10-q.cnf.gz.no_w.cnf', '75-15-1-q.cnf.gz.no_w.cnf', '75-15-2-q.cnf.gz.no_w.cnf', '75-15-3-q.cnf.gz.no_w.cnf', '75-15-4-q.cnf.gz.no_w.cnf', '75-15-5-q.cnf.gz.no_w.cnf', '75-15-6-q.cnf.gz.no_w.cnf', '75-15-7-q.cnf.gz.no_w.cnf', '75-15-8-q.cnf.gz.no_w.cnf', '75-15-9-q.cnf.gz.no_w.cnf', '75-16-10-q.cnf.gz.no_w.cnf', '75-16-1-q.cnf.gz.no_w.cnf', '75-16-2-q.cnf.gz.no_w.cnf', '75-16-3-q.cnf.gz.no_w.cnf', '75-16-4-q.cnf.gz.no_w.cnf', '75-16-5-q.cnf.gz.no_w.cnf', '75-16-6-q.cnf.gz.no_w.cnf', '75-16-7-q.cnf.gz.no_w.cnf', '75-16-8-q.cnf.gz.no_w.cnf', '75-16-9-q.cnf.gz.no_w.cnf', '75-17-10-q.cnf.gz.no_w.cnf', '75-17-1-q.cnf.gz.no_w.cnf', '75-17-2-q.cnf.gz.no_w.cnf', '75-17-3-q.cnf.gz.no_w.cnf', '75-17-4-q.cnf.gz.no_w.cnf', '75-17-5-q.cnf.gz.no_w.cnf', '75-17-6-q.cnf.gz.no_w.cnf', '75-17-7-q.cnf.gz.no_w.cnf', '75-17-8-q.cnf.gz.no_w.cnf', '75-17-9-q.cnf.gz.no_w.cnf', '75-18-10-q.cnf.gz.no_w.cnf', '75-18-1-q.cnf.gz.no_w.cnf', '75-18-2-q.cnf.gz.no_w.cnf', '75-18-3-q.cnf.gz.no_w.cnf', '75-18-4-q.cnf.gz.no_w.cnf', 
'75-18-5-q.cnf.gz.no_w.cnf', '75-18-6-q.cnf.gz.no_w.cnf', '75-18-7-q.cnf.gz.no_w.cnf', '75-18-8-q.cnf.gz.no_w.cnf', '75-18-9-q.cnf.gz.no_w.cnf', '75-19-10-q.cnf.gz.no_w.cnf', '75-19-1-q.cnf.gz.no_w.cnf', '75-19-2-q.cnf.gz.no_w.cnf', '75-19-3-q.cnf.gz.no_w.cnf', '75-19-4-q.cnf.gz.no_w.cnf', '75-19-5-q.cnf.gz.no_w.cnf', '75-19-6-q.cnf.gz.no_w.cnf', '75-19-7-q.cnf.gz.no_w.cnf', '75-19-8-q.cnf.gz.no_w.cnf', '75-19-9-q.cnf.gz.no_w.cnf', '75-20-10-q.cnf.gz.no_w.cnf', '75-20-1-q.cnf.gz.no_w.cnf', '75-20-2-q.cnf.gz.no_w.cnf', '75-20-3-q.cnf.gz.no_w.cnf', '75-20-4-q.cnf.gz.no_w.cnf', '75-20-5-q.cnf.gz.no_w.cnf', '75-20-6-q.cnf.gz.no_w.cnf', '75-20-7-q.cnf.gz.no_w.cnf', '75-20-8-q.cnf.gz.no_w.cnf', '75-20-9-q.cnf.gz.no_w.cnf', '75-21-10-q.cnf.gz.no_w.cnf', '75-21-1-q.cnf.gz.no_w.cnf', '75-21-2-q.cnf.gz.no_w.cnf', '75-21-3-q.cnf.gz.no_w.cnf', '75-21-4-q.cnf.gz.no_w.cnf', '75-21-5-q.cnf.gz.no_w.cnf', '75-21-6-q.cnf.gz.no_w.cnf', '75-21-7-q.cnf.gz.no_w.cnf', '75-21-8-q.cnf.gz.no_w.cnf', '75-21-9-q.cnf.gz.no_w.cnf', '75-22-10-q.cnf.gz.no_w.cnf', '75-22-1-q.cnf.gz.no_w.cnf', '75-22-2-q.cnf.gz.no_w.cnf', '75-22-3-q.cnf.gz.no_w.cnf', '75-22-4-q.cnf.gz.no_w.cnf', '75-22-5-q.cnf.gz.no_w.cnf', '75-22-6-q.cnf.gz.no_w.cnf', '75-22-7-q.cnf.gz.no_w.cnf', '75-22-8-q.cnf.gz.no_w.cnf', '75-22-9-q.cnf.gz.no_w.cnf', '75-23-10-q.cnf.gz.no_w.cnf', '75-23-1-q.cnf.gz.no_w.cnf', '75-23-2-q.cnf.gz.no_w.cnf', '75-23-3-q.cnf.gz.no_w.cnf', '75-23-4-q.cnf.gz.no_w.cnf', '75-23-5-q.cnf.gz.no_w.cnf', '75-23-6-q.cnf.gz.no_w.cnf', '75-23-7-q.cnf.gz.no_w.cnf', '75-23-8-q.cnf.gz.no_w.cnf', '75-23-9-q.cnf.gz.no_w.cnf', '75-24-10-q.cnf.gz.no_w.cnf', '75-24-1-q.cnf.gz.no_w.cnf', '75-24-2-q.cnf.gz.no_w.cnf', '75-24-3-q.cnf.gz.no_w.cnf', '75-24-4-q.cnf.gz.no_w.cnf', '75-24-5-q.cnf.gz.no_w.cnf', '75-24-6-q.cnf.gz.no_w.cnf', '75-24-7-q.cnf.gz.no_w.cnf', '75-24-8-q.cnf.gz.no_w.cnf', '75-24-9-q.cnf.gz.no_w.cnf', '75-25-10-q.cnf.gz.no_w.cnf', '75-25-1-q.cnf.gz.no_w.cnf', '75-25-2-q.cnf.gz.no_w.cnf', 
'75-25-3-q.cnf.gz.no_w.cnf', '75-25-4-q.cnf.gz.no_w.cnf', '75-25-5-q.cnf.gz.no_w.cnf', '75-25-6-q.cnf.gz.no_w.cnf', '75-25-7-q.cnf.gz.no_w.cnf', '75-25-8-q.cnf.gz.no_w.cnf', '75-25-9-q.cnf.gz.no_w.cnf', '75-26-10-q.cnf.gz.no_w.cnf', '75-26-1-q.cnf.gz.no_w.cnf', '75-26-2-q.cnf.gz.no_w.cnf', '75-26-3-q.cnf.gz.no_w.cnf', '75-26-4-q.cnf.gz.no_w.cnf', '75-26-5-q.cnf.gz.no_w.cnf', '75-26-6-q.cnf.gz.no_w.cnf', '75-26-7-q.cnf.gz.no_w.cnf', '75-26-8-q.cnf.gz.no_w.cnf', '75-26-9-q.cnf.gz.no_w.cnf', '77.sk_3_44.cnf.gz.no_w.cnf', '79.sk_4_40.cnf.gz.no_w.cnf', '7.sk_4_50.cnf.gz.no_w.cnf', '80.sk_2_48.cnf.gz.no_w.cnf', '81.sk_5_51.cnf.gz.no_w.cnf', '84.sk_4_77.cnf.gz.no_w.cnf', '90-10-10-q.cnf.gz.no_w.cnf', '90-10-1-q.cnf.gz.no_w.cnf', '90-10-2-q.cnf.gz.no_w.cnf', '90-10-3-q.cnf.gz.no_w.cnf', '90-10-4-q.cnf.gz.no_w.cnf', '90-10-5-q.cnf.gz.no_w.cnf', '90-10-6-q.cnf.gz.no_w.cnf', '90-10-7-q.cnf.gz.no_w.cnf', '90-10-8-q.cnf.gz.no_w.cnf', '90-10-9-q.cnf.gz.no_w.cnf', '90-12-10-q.cnf.gz.no_w.cnf', '90-12-1-q.cnf.gz.no_w.cnf', '90-12-2-q.cnf.gz.no_w.cnf', '90-12-3-q.cnf.gz.no_w.cnf', '90-12-4-q.cnf.gz.no_w.cnf', '90-12-5-q.cnf.gz.no_w.cnf', '90-12-6-q.cnf.gz.no_w.cnf', '90-12-7-q.cnf.gz.no_w.cnf', '90-12-8-q.cnf.gz.no_w.cnf', '90-12-9-q.cnf.gz.no_w.cnf', '90-14-10-q.cnf.gz.no_w.cnf', '90-14-1-q.cnf.gz.no_w.cnf', '90-14-2-q.cnf.gz.no_w.cnf', '90-14-3-q.cnf.gz.no_w.cnf', '90-14-4-q.cnf.gz.no_w.cnf', '90-14-5-q.cnf.gz.no_w.cnf', '90-14-6-q.cnf.gz.no_w.cnf', '90-14-7-q.cnf.gz.no_w.cnf', '90-14-8-q.cnf.gz.no_w.cnf', '90-14-9-q.cnf.gz.no_w.cnf', '90-15-10-q.cnf.gz.no_w.cnf', '90-15-1-q.cnf.gz.no_w.cnf', '90-15-2-q.cnf.gz.no_w.cnf', '90-15-3-q.cnf.gz.no_w.cnf', '90-15-4-q.cnf.gz.no_w.cnf', '90-15-5-q.cnf.gz.no_w.cnf', '90-15-6-q.cnf.gz.no_w.cnf', '90-15-7-q.cnf.gz.no_w.cnf', '90-15-8-q.cnf.gz.no_w.cnf', '90-15-9-q.cnf.gz.no_w.cnf', '90-16-10-q.cnf.gz.no_w.cnf', '90-16-1-q.cnf.gz.no_w.cnf', '90-16-2-q.cnf.gz.no_w.cnf', '90-16-3-q.cnf.gz.no_w.cnf', '90-16-4-q.cnf.gz.no_w.cnf', 
'90-16-5-q.cnf.gz.no_w.cnf', '90-16-6-q.cnf.gz.no_w.cnf', '90-16-7-q.cnf.gz.no_w.cnf', '90-16-8-q.cnf.gz.no_w.cnf', '90-16-9-q.cnf.gz.no_w.cnf', '90-17-10-q.cnf.gz.no_w.cnf', '90-17-1-q.cnf.gz.no_w.cnf', '90-17-2-q.cnf.gz.no_w.cnf', '90-17-3-q.cnf.gz.no_w.cnf', '90-17-4-q.cnf.gz.no_w.cnf', '90-17-5-q.cnf.gz.no_w.cnf', '90-17-6-q.cnf.gz.no_w.cnf', '90-17-7-q.cnf.gz.no_w.cnf', '90-17-8-q.cnf.gz.no_w.cnf', '90-17-9-q.cnf.gz.no_w.cnf', '90-18-10-q.cnf.gz.no_w.cnf', '90-18-1-q.cnf.gz.no_w.cnf', '90-18-2-q.cnf.gz.no_w.cnf', '90-18-3-q.cnf.gz.no_w.cnf', '90-18-4-q.cnf.gz.no_w.cnf', '90-18-5-q.cnf.gz.no_w.cnf', '90-18-6-q.cnf.gz.no_w.cnf', '90-18-7-q.cnf.gz.no_w.cnf', '90-18-8-q.cnf.gz.no_w.cnf', '90-18-9-q.cnf.gz.no_w.cnf', '90-19-10-q.cnf.gz.no_w.cnf', '90-19-1-q.cnf.gz.no_w.cnf', '90-19-2-q.cnf.gz.no_w.cnf', '90-19-3-q.cnf.gz.no_w.cnf', '90-19-4-q.cnf.gz.no_w.cnf', '90-19-5-q.cnf.gz.no_w.cnf', '90-19-6-q.cnf.gz.no_w.cnf', '90-19-7-q.cnf.gz.no_w.cnf', '90-19-8-q.cnf.gz.no_w.cnf', '90-19-9-q.cnf.gz.no_w.cnf', '90-20-10-q.cnf.gz.no_w.cnf', '90-20-1-q.cnf.gz.no_w.cnf', '90-20-2-q.cnf.gz.no_w.cnf', '90-20-3-q.cnf.gz.no_w.cnf', '90-20-4-q.cnf.gz.no_w.cnf', '90-20-5-q.cnf.gz.no_w.cnf', '90-20-6-q.cnf.gz.no_w.cnf', '90-20-7-q.cnf.gz.no_w.cnf', '90-20-8-q.cnf.gz.no_w.cnf', '90-20-9-q.cnf.gz.no_w.cnf', '90-21-10-q.cnf.gz.no_w.cnf', '90-21-1-q.cnf.gz.no_w.cnf', '90-21-2-q.cnf.gz.no_w.cnf', '90-21-3-q.cnf.gz.no_w.cnf', '90-21-4-q.cnf.gz.no_w.cnf', '90-21-5-q.cnf.gz.no_w.cnf', '90-21-6-q.cnf.gz.no_w.cnf', '90-21-7-q.cnf.gz.no_w.cnf', '90-21-8-q.cnf.gz.no_w.cnf', '90-21-9-q.cnf.gz.no_w.cnf', '90-22-10-q.cnf.gz.no_w.cnf', '90-22-1-q.cnf.gz.no_w.cnf', '90-22-2-q.cnf.gz.no_w.cnf', '90-22-3-q.cnf.gz.no_w.cnf', '90-22-4-q.cnf.gz.no_w.cnf', '90-22-5-q.cnf.gz.no_w.cnf', '90-22-6-q.cnf.gz.no_w.cnf', '90-22-7-q.cnf.gz.no_w.cnf', '90-22-8-q.cnf.gz.no_w.cnf', '90-22-9-q.cnf.gz.no_w.cnf', '90-23-10-q.cnf.gz.no_w.cnf', '90-23-1-q.cnf.gz.no_w.cnf', '90-23-2-q.cnf.gz.no_w.cnf', 
'90-23-3-q.cnf.gz.no_w.cnf', '90-23-4-q.cnf.gz.no_w.cnf', '90-23-5-q.cnf.gz.no_w.cnf', '90-23-6-q.cnf.gz.no_w.cnf', '90-23-7-q.cnf.gz.no_w.cnf', '90-23-8-q.cnf.gz.no_w.cnf', '90-23-9-q.cnf.gz.no_w.cnf', '90-24-10-q.cnf.gz.no_w.cnf', '90-24-1-q.cnf.gz.no_w.cnf', '90-24-2-q.cnf.gz.no_w.cnf', '90-24-3-q.cnf.gz.no_w.cnf', '90-24-4-q.cnf.gz.no_w.cnf', '90-24-5-q.cnf.gz.no_w.cnf', '90-24-6-q.cnf.gz.no_w.cnf', '90-24-7-q.cnf.gz.no_w.cnf', '90-24-8-q.cnf.gz.no_w.cnf', '90-24-9-q.cnf.gz.no_w.cnf', '90-25-10-q.cnf.gz.no_w.cnf', '90-25-1-q.cnf.gz.no_w.cnf', '90-25-2-q.cnf.gz.no_w.cnf', '90-25-3-q.cnf.gz.no_w.cnf', '90-25-4-q.cnf.gz.no_w.cnf', '90-25-5-q.cnf.gz.no_w.cnf', '90-25-6-q.cnf.gz.no_w.cnf', '90-25-7-q.cnf.gz.no_w.cnf', '90-25-8-q.cnf.gz.no_w.cnf', '90-25-9-q.cnf.gz.no_w.cnf', '90-26-10-q.cnf.gz.no_w.cnf', '90-26-1-q.cnf.gz.no_w.cnf', '90-26-2-q.cnf.gz.no_w.cnf', '90-26-3-q.cnf.gz.no_w.cnf', '90-26-4-q.cnf.gz.no_w.cnf', '90-26-5-q.cnf.gz.no_w.cnf', '90-26-6-q.cnf.gz.no_w.cnf', '90-26-7-q.cnf.gz.no_w.cnf', '90-26-8-q.cnf.gz.no_w.cnf', '90-26-9-q.cnf.gz.no_w.cnf', '90-30-10-q.cnf.gz.no_w.cnf', '90-30-1-q.cnf.gz.no_w.cnf', '90-30-2-q.cnf.gz.no_w.cnf', '90-30-3-q.cnf.gz.no_w.cnf', '90-30-4-q.cnf.gz.no_w.cnf', '90-30-5-q.cnf.gz.no_w.cnf', '90-30-6-q.cnf.gz.no_w.cnf', '90-30-7-q.cnf.gz.no_w.cnf', '90-30-8-q.cnf.gz.no_w.cnf', '90-30-9-q.cnf.gz.no_w.cnf', '90-34-10-q.cnf.gz.no_w.cnf', '90-34-1-q.cnf.gz.no_w.cnf', '90-34-2-q.cnf.gz.no_w.cnf', '90-34-3-q.cnf.gz.no_w.cnf', '90-34-4-q.cnf.gz.no_w.cnf', '90-34-5-q.cnf.gz.no_w.cnf', '90-34-6-q.cnf.gz.no_w.cnf', '90-34-7-q.cnf.gz.no_w.cnf', '90-34-8-q.cnf.gz.no_w.cnf', '90-34-9-q.cnf.gz.no_w.cnf', '90-38-10-q.cnf.gz.no_w.cnf', '90-38-1-q.cnf.gz.no_w.cnf', '90-38-2-q.cnf.gz.no_w.cnf', '90-38-3-q.cnf.gz.no_w.cnf', '90-38-4-q.cnf.gz.no_w.cnf', '90-38-5-q.cnf.gz.no_w.cnf', '90-38-6-q.cnf.gz.no_w.cnf', '90-38-7-q.cnf.gz.no_w.cnf', '90-38-8-q.cnf.gz.no_w.cnf', '90-38-9-q.cnf.gz.no_w.cnf', '90-42-10-q.cnf.gz.no_w.cnf', 
'90-42-1-q.cnf.gz.no_w.cnf', '90-42-2-q.cnf.gz.no_w.cnf', '90-42-3-q.cnf.gz.no_w.cnf', '90-42-4-q.cnf.gz.no_w.cnf', '90-42-5-q.cnf.gz.no_w.cnf', '90-42-6-q.cnf.gz.no_w.cnf', '90-42-7-q.cnf.gz.no_w.cnf', '90-42-8-q.cnf.gz.no_w.cnf', '90-42-9-q.cnf.gz.no_w.cnf', '90-46-10-q.cnf.gz.no_w.cnf', '90-46-1-q.cnf.gz.no_w.cnf', '90-46-2-q.cnf.gz.no_w.cnf', '90-46-3-q.cnf.gz.no_w.cnf', '90-46-4-q.cnf.gz.no_w.cnf', '90-46-5-q.cnf.gz.no_w.cnf', '90-46-6-q.cnf.gz.no_w.cnf', '90-46-7-q.cnf.gz.no_w.cnf', '90-46-8-q.cnf.gz.no_w.cnf', '90-46-9-q.cnf.gz.no_w.cnf', '90-50-10-q.cnf.gz.no_w.cnf', '90-50-1-q.cnf.gz.no_w.cnf', '90-50-2-q.cnf.gz.no_w.cnf', '90-50-3-q.cnf.gz.no_w.cnf', '90-50-4-q.cnf.gz.no_w.cnf', '90-50-5-q.cnf.gz.no_w.cnf', '90-50-6-q.cnf.gz.no_w.cnf', '90-50-7-q.cnf.gz.no_w.cnf', '90-50-8-q.cnf.gz.no_w.cnf', '90-50-9-q.cnf.gz.no_w.cnf', 'ActivityService2.sk_10_27.cnf.gz.no_w.cnf', 'ActivityService.sk_11_27.cnf.gz.no_w.cnf', 'blasted_case_0_b11_1.cnf.gz.no_w.cnf', 'blasted_case_0_b12_1.cnf.gz.no_w.cnf', 'blasted_case_0_b12_2.cnf.gz.no_w.cnf', 'blasted_case_0_b12_even1.cnf.gz.no_w.cnf', 'blasted_case_0_b12_even2.cnf.gz.no_w.cnf', 'blasted_case_0_b12_even3.cnf.gz.no_w.cnf', 'blasted_case_0_b14_1.cnf.gz.no_w.cnf', 'blasted_case_0_ptb_1.cnf.gz.no_w.cnf', 'blasted_case_0_ptb_2.cnf.gz.no_w.cnf', 'blasted_case100.cnf.gz.no_w.cnf', 'blasted_case101.cnf.gz.no_w.cnf', 'blasted_case102.cnf.gz.no_w.cnf', 'blasted_case103.cnf.gz.no_w.cnf', 'blasted_case104.cnf.gz.no_w.cnf', 'blasted_case105.cnf.gz.no_w.cnf', 'blasted_case106.cnf.gz.no_w.cnf', 'blasted_case107.cnf.gz.no_w.cnf', 'blasted_case108.cnf.gz.no_w.cnf', 'blasted_case109.cnf.gz.no_w.cnf', 'blasted_case10.cnf.gz.no_w.cnf', 'blasted_case110.cnf.gz.no_w.cnf', 'blasted_case111.cnf.gz.no_w.cnf', 'blasted_case112.cnf.gz.no_w.cnf', 'blasted_case113.cnf.gz.no_w.cnf', 'blasted_case114.cnf.gz.no_w.cnf', 'blasted_case115.cnf.gz.no_w.cnf', 'blasted_case116.cnf.gz.no_w.cnf', 'blasted_case117.cnf.gz.no_w.cnf', 
'blasted_case118.cnf.gz.no_w.cnf', 'blasted_case119.cnf.gz.no_w.cnf', 'blasted_case11.cnf.gz.no_w.cnf', 'blasted_case120.cnf.gz.no_w.cnf', 'blasted_case121.cnf.gz.no_w.cnf', 'blasted_case122.cnf.gz.no_w.cnf', 'blasted_case123.cnf.gz.no_w.cnf', 'blasted_case124.cnf.gz.no_w.cnf', 'blasted_case125.cnf.gz.no_w.cnf', 'blasted_case126.cnf.gz.no_w.cnf', 'blasted_case127.cnf.gz.no_w.cnf', 'blasted_case128.cnf.gz.no_w.cnf', 'blasted_case12.cnf.gz.no_w.cnf', 'blasted_case130.cnf.gz.no_w.cnf', 'blasted_case131.cnf.gz.no_w.cnf', 'blasted_case132.cnf.gz.no_w.cnf', 'blasted_case133.cnf.gz.no_w.cnf', 'blasted_case134.cnf.gz.no_w.cnf', 'blasted_case135.cnf.gz.no_w.cnf', 'blasted_case136.cnf.gz.no_w.cnf', 'blasted_case137.cnf.gz.no_w.cnf', 'blasted_case138.cnf.gz.no_w.cnf', 'blasted_case139.cnf.gz.no_w.cnf', 'blasted_case140.cnf.gz.no_w.cnf', 'blasted_case141.cnf.gz.no_w.cnf', 'blasted_case142.cnf.gz.no_w.cnf', 'blasted_case143.cnf.gz.no_w.cnf', 'blasted_case144.cnf.gz.no_w.cnf', 'blasted_case145.cnf.gz.no_w.cnf', 'blasted_case146.cnf.gz.no_w.cnf', 'blasted_case_1_4_b14_even.cnf.gz.no_w.cnf', 'blasted_case14.cnf.gz.no_w.cnf', 'blasted_case15.cnf.gz.no_w.cnf', 'blasted_case17.cnf.gz.no_w.cnf', 'blasted_case18.cnf.gz.no_w.cnf', 'blasted_case19.cnf.gz.no_w.cnf', 'blasted_case_1_b11_1.cnf.gz.no_w.cnf', 'blasted_case_1_b12_1.cnf.gz.no_w.cnf', 'blasted_case_1_b12_2.cnf.gz.no_w.cnf', 'blasted_case_1_b12_even1.cnf.gz.no_w.cnf', 'blasted_case_1_b12_even2.cnf.gz.no_w.cnf', 'blasted_case_1_b12_even3.cnf.gz.no_w.cnf', 'blasted_case_1_b14_1.cnf.gz.no_w.cnf', 'blasted_case_1_b14_2.cnf.gz.no_w.cnf', 'blasted_case_1_b14_3.cnf.gz.no_w.cnf', 'blasted_case1_b14_even3.cnf.gz.no_w.cnf', 'blasted_case_1_b14_even.cnf.gz.no_w.cnf', 'blasted_case1.cnf.gz.no_w.cnf', 'blasted_case_1_ptb_1.cnf.gz.no_w.cnf', 'blasted_case_1_ptb_2.cnf.gz.no_w.cnf', 'blasted_case200.cnf.gz.no_w.cnf', 'blasted_case201.cnf.gz.no_w.cnf', 'blasted_case202.cnf.gz.no_w.cnf', 'blasted_case203.cnf.gz.no_w.cnf', 
'blasted_case204.cnf.gz.no_w.cnf', 'blasted_case205.cnf.gz.no_w.cnf', 'blasted_case206.cnf.gz.no_w.cnf', 'blasted_case207.cnf.gz.no_w.cnf', 'blasted_case208.cnf.gz.no_w.cnf', 'blasted_case209.cnf.gz.no_w.cnf', 'blasted_case20.cnf.gz.no_w.cnf', 'blasted_case210.cnf.gz.no_w.cnf', 'blasted_case211.cnf.gz.no_w.cnf', 'blasted_case212.cnf.gz.no_w.cnf', 'blasted_case213.cnf.gz.no_w.cnf', 'blasted_case214.cnf.gz.no_w.cnf', 'blasted_case21.cnf.gz.no_w.cnf', 'blasted_case22.cnf.gz.no_w.cnf', 'blasted_case23.cnf.gz.no_w.cnf', 'blasted_case24.cnf.gz.no_w.cnf', 'blasted_case25.cnf.gz.no_w.cnf', 'blasted_case26.cnf.gz.no_w.cnf', 'blasted_case27.cnf.gz.no_w.cnf', 'blasted_case28.cnf.gz.no_w.cnf', 'blasted_case29.cnf.gz.no_w.cnf', 'blasted_case_2_b12_1.cnf.gz.no_w.cnf', 'blasted_case_2_b12_2.cnf.gz.no_w.cnf', 'blasted_case_2_b12_even1.cnf.gz.no_w.cnf', 'blasted_case_2_b12_even2.cnf.gz.no_w.cnf', 'blasted_case_2_b12_even3.cnf.gz.no_w.cnf', 'blasted_case_2_b14_1.cnf.gz.no_w.cnf', 'blasted_case_2_b14_2.cnf.gz.no_w.cnf', 'blasted_case_2_b14_3.cnf.gz.no_w.cnf', 'blasted_case_2_b14_even.cnf.gz.no_w.cnf', 'blasted_case2.cnf.gz.no_w.cnf', 'blasted_case_2_ptb_1.cnf.gz.no_w.cnf', 'blasted_case_2_ptb_2.cnf.gz.no_w.cnf', 'blasted_case30.cnf.gz.no_w.cnf', 'blasted_case31.cnf.gz.no_w.cnf', 'blasted_case32.cnf.gz.no_w.cnf', 'blasted_case33.cnf.gz.no_w.cnf', 'blasted_case_3_4_b14_even.cnf.gz.no_w.cnf', 'blasted_case34.cnf.gz.no_w.cnf', 'blasted_case35.cnf.gz.no_w.cnf', 'blasted_case36.cnf.gz.no_w.cnf', 'blasted_case37.cnf.gz.no_w.cnf', 'blasted_case38.cnf.gz.no_w.cnf', 'blasted_case39.cnf.gz.no_w.cnf', 'blasted_case_3_b14_1.cnf.gz.no_w.cnf', 'blasted_case_3_b14_2.cnf.gz.no_w.cnf', 'blasted_case_3_b14_3.cnf.gz.no_w.cnf', 'blasted_case3_b14_even3.cnf.gz.no_w.cnf', 'blasted_case3.cnf.gz.no_w.cnf', 'blasted_case40.cnf.gz.no_w.cnf', 'blasted_case41.cnf.gz.no_w.cnf', 'blasted_case42.cnf.gz.no_w.cnf', 'blasted_case43.cnf.gz.no_w.cnf', 'blasted_case44.cnf.gz.no_w.cnf', 'blasted_case45.cnf.gz.no_w.cnf', 
'blasted_case46.cnf.gz.no_w.cnf', 'blasted_case47.cnf.gz.no_w.cnf', 'blasted_case49.cnf.gz.no_w.cnf', 'blasted_case4.cnf.gz.no_w.cnf', 'blasted_case50.cnf.gz.no_w.cnf', 'blasted_case51.cnf.gz.no_w.cnf', 'blasted_case52.cnf.gz.no_w.cnf', 'blasted_case53.cnf.gz.no_w.cnf', 'blasted_case54.cnf.gz.no_w.cnf', 'blasted_case55.cnf.gz.no_w.cnf', 'blasted_case56.cnf.gz.no_w.cnf', 'blasted_case57.cnf.gz.no_w.cnf', 'blasted_case58.cnf.gz.no_w.cnf', 'blasted_case59_1.cnf.gz.no_w.cnf', 'blasted_case59.cnf.gz.no_w.cnf', 'blasted_case5.cnf.gz.no_w.cnf', 'blasted_case60.cnf.gz.no_w.cnf', 'blasted_case61.cnf.gz.no_w.cnf', 'blasted_case62.cnf.gz.no_w.cnf', 'blasted_case63.cnf.gz.no_w.cnf', 'blasted_case64.cnf.gz.no_w.cnf', 'blasted_case68.cnf.gz.no_w.cnf', 'blasted_case6.cnf.gz.no_w.cnf', 'blasted_case7.cnf.gz.no_w.cnf', 'blasted_case8.cnf.gz.no_w.cnf', 'blasted_case9.cnf.gz.no_w.cnf', 'blasted_squaring10.cnf.gz.no_w.cnf', 'blasted_squaring11.cnf.gz.no_w.cnf', 'blasted_squaring12.cnf.gz.no_w.cnf', 'blasted_squaring14.cnf.gz.no_w.cnf', 'blasted_squaring16.cnf.gz.no_w.cnf', 'blasted_squaring1.cnf.gz.no_w.cnf', 'blasted_squaring20.cnf.gz.no_w.cnf', 'blasted_squaring21.cnf.gz.no_w.cnf', 'blasted_squaring22.cnf.gz.no_w.cnf', 'blasted_squaring23.cnf.gz.no_w.cnf', 'blasted_squaring24.cnf.gz.no_w.cnf', 'blasted_squaring25.cnf.gz.no_w.cnf', 'blasted_squaring26.cnf.gz.no_w.cnf', 'blasted_squaring27.cnf.gz.no_w.cnf', 'blasted_squaring28.cnf.gz.no_w.cnf', 'blasted_squaring29.cnf.gz.no_w.cnf', 'blasted_squaring2.cnf.gz.no_w.cnf', 'blasted_squaring30.cnf.gz.no_w.cnf', 'blasted_squaring3.cnf.gz.no_w.cnf', 'blasted_squaring40.cnf.gz.no_w.cnf', 'blasted_squaring41.cnf.gz.no_w.cnf', 'blasted_squaring42.cnf.gz.no_w.cnf', 'blasted_squaring4.cnf.gz.no_w.cnf', 'blasted_squaring50.cnf.gz.no_w.cnf', 'blasted_squaring51.cnf.gz.no_w.cnf', 'blasted_squaring5.cnf.gz.no_w.cnf', 'blasted_squaring60.cnf.gz.no_w.cnf', 'blasted_squaring6.cnf.gz.no_w.cnf', 'blasted_squaring70.cnf.gz.no_w.cnf', 
'blasted_squaring7.cnf.gz.no_w.cnf', 'blasted_squaring8.cnf.gz.no_w.cnf', 'blasted_squaring9.cnf.gz.no_w.cnf', 'blasted_TR_b12_1_linear.cnf.gz.no_w.cnf', 'blasted_TR_b12_2_linear.cnf.gz.no_w.cnf', 'blasted_TR_b12_even2_linear.cnf.gz.no_w.cnf', 'blasted_TR_b12_even3_linear.cnf.gz.no_w.cnf', 'blasted_TR_b12_even7_linear.cnf.gz.no_w.cnf', 'blasted_TR_b14_1_linear.cnf.gz.no_w.cnf', 'blasted_TR_b14_2_linear.cnf.gz.no_w.cnf', 'blasted_TR_b14_3_linear.cnf.gz.no_w.cnf', 'blasted_TR_b14_even2_linear.cnf.gz.no_w.cnf', 'blasted_TR_b14_even3_linear.cnf.gz.no_w.cnf', 'blasted_TR_b14_even_linear.cnf.gz.no_w.cnf', 'blasted_TR_device_1_even_linear.cnf.gz.no_w.cnf', 'blasted_TR_device_1_linear.cnf.gz.no_w.cnf', 'blasted_TR_ptb_1_linear.cnf.gz.no_w.cnf', 'blasted_TR_ptb_2_linear.cnf.gz.no_w.cnf', 'brp.pm_14steps_10int_8fract_p1_N=200_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_10int_8fract_p1_N=200_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_12int_8fract_p1_N=1000_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_12int_8fract_p1_N=1000_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_12int_8fract_p1_N=400_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_12int_8fract_p1_N=400_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_12int_8fract_p1_N=600_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_12int_8fract_p1_N=600_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_12int_8fract_p1_N=800_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_12int_8fract_p1_N=800_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_13int_8fract_p1_N=2000_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_13int_8fract_p1_N=2000_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_14int_8fract_p1_N=3000_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_14int_8fract_p1_N=3000_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_14int_8fract_p1_N=4000_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_14int_8fract_p1_N=4000_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_14int_8fract_p1_N=5000_MAX=4over.dimacs.gz.no_w.cnf', 
'brp.pm_14steps_14int_8fract_p1_N=5000_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_22int_8fract_p1_N=1000000_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_22int_8fract_p1_N=1000000_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_22int_8fract_p1_N=100000_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_22int_8fract_p1_N=100000_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_22int_8fract_p1_N=10000_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_22int_8fract_p1_N=10000_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_26int_8fract_p1_N=10000000_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_26int_8fract_p1_N=10000000_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_30int_8fract_p1_N=100000000_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_30int_8fract_p1_N=100000000_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_8int_8fract_p1_N=16_MAX=2over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_8int_8fract_p1_N=16_MAX=2under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_8int_8fract_p1_N=32_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_8int_8fract_p1_N=32_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_8int_8fract_p1_N=64_MAX=5over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_8int_8fract_p1_N=64_MAX=5under.dimacs.gz.no_w.cnf', 'compress.sk_17_291.cnf.gz.no_w.cnf', 'ConcreteActivityService.sk_13_28.cnf.gz.no_w.cnf', 'ConcreteRoleAffectationService.sk_119_273.cnf.gz.no_w.cnf', 'crowds_big.pm_15steps_8int_7fract_PCTL_TotalRuns=10_CrowdSize=40over.dimacs.gz.no_w.cnf', 'crowds_big.pm_15steps_8int_7fract_PCTL_TotalRuns=10_CrowdSize=40under.dimacs.gz.no_w.cnf', 'crowds_big.pm_15steps_8int_7fract_PCTL_TotalRuns=20_CrowdSize=40over.dimacs.gz.no_w.cnf', 'crowds_big.pm_15steps_8int_7fract_PCTL_TotalRuns=20_CrowdSize=40under.dimacs.gz.no_w.cnf', 'crowds_big.pm_15steps_9int_7fract_PCTL_TotalRuns=40_CrowdSize=128over.dimacs.gz.no_w.cnf', 'crowds_big.pm_15steps_9int_7fract_PCTL_TotalRuns=40_CrowdSize=128under.dimacs.gz.no_w.cnf', 
'crowds_big.pm_15steps_9int_7fract_PCTL_TotalRuns=60_CrowdSize=128over.dimacs.gz.no_w.cnf', 'crowds_big.pm_15steps_9int_7fract_PCTL_TotalRuns=60_CrowdSize=128under.dimacs.gz.no_w.cnf', 'crowds.pm_15steps_8int_7fract_PCTL_TotalRuns=10_CrowdSize=20over.dimacs.gz.no_w.cnf', 'crowds.pm_15steps_8int_7fract_PCTL_TotalRuns=10_CrowdSize=20under.dimacs.gz.no_w.cnf', 'crowds.pm_15steps_8int_7fract_PCTL_TotalRuns=3_CrowdSize=5over.dimacs.gz.no_w.cnf', 'crowds.pm_15steps_8int_7fract_PCTL_TotalRuns=3_CrowdSize=5under.dimacs.gz.no_w.cnf', 'crowds.pm_15steps_8int_7fract_PCTL_TotalRuns=6_CrowdSize=10over.dimacs.gz.no_w.cnf', 'crowds.pm_15steps_8int_7fract_PCTL_TotalRuns=6_CrowdSize=10under.dimacs.gz.no_w.cnf', 'diagStencilClean.sk_41_36.cnf.gz.no_w.cnf', 'diagStencil.sk_35_36.cnf.gz.no_w.cnf', 'doublyLinkedList.sk_8_37.cnf.gz.no_w.cnf', 'egl.pm_100steps_6int_1fract_unfairA_N=10_L=2over.dimacs.gz.no_w.cnf', 'egl.pm_100steps_6int_1fract_unfairA_N=10_L=2under.dimacs.gz.no_w.cnf', 'egl.pm_100steps_6int_1fract_unfairA_N=10_L=4over.dimacs.gz.no_w.cnf', 'egl.pm_100steps_6int_1fract_unfairA_N=10_L=4under.dimacs.gz.no_w.cnf', 'egl.pm_100steps_6int_1fract_unfairA_N=7_L=2over.dimacs.gz.no_w.cnf', 'egl.pm_100steps_6int_1fract_unfairA_N=7_L=2under.dimacs.gz.no_w.cnf', 'egl.pm_100steps_6int_1fract_unfairA_N=7_L=4over.dimacs.gz.no_w.cnf', 'egl.pm_100steps_6int_1fract_unfairA_N=7_L=4under.dimacs.gz.no_w.cnf', 'egl.pm_31steps_6int_1fract_unfairA_N=5_L=2over.dimacs.gz.no_w.cnf', 'egl.pm_31steps_6int_1fract_unfairA_N=5_L=2under.dimacs.gz.no_w.cnf', 'egl.pm_60steps_6int_1fract_unfairA_N=5_L=4over.dimacs.gz.no_w.cnf', 'egl.pm_60steps_6int_1fract_unfairA_N=5_L=4under.dimacs.gz.no_w.cnf', 'enqueueSeqSK.sk_10_42.cnf.gz.no_w.cnf', 'GuidanceService2.sk_2_27.cnf.gz.no_w.cnf', 'GuidanceService.sk_4_27.cnf.gz.no_w.cnf', 'hash-10-1.cnf.gz.no_w.cnf', 'hash-10-2.cnf.gz.no_w.cnf', 'hash-10-3.cnf.gz.no_w.cnf', 'hash-10-4.cnf.gz.no_w.cnf', 'hash-10-5.cnf.gz.no_w.cnf', 'hash-10-6.cnf.gz.no_w.cnf', 
'hash-10-7.cnf.gz.no_w.cnf', 'hash-10-8.cnf.gz.no_w.cnf', 'hash-10.cnf.gz.no_w.cnf', 'hash-11-1.cnf.gz.no_w.cnf', 'hash-11-2.cnf.gz.no_w.cnf', 'hash-11-3.cnf.gz.no_w.cnf', 'hash-11-4.cnf.gz.no_w.cnf', 'hash-11-5.cnf.gz.no_w.cnf', 'hash-11-6.cnf.gz.no_w.cnf', 'hash-11-7.cnf.gz.no_w.cnf', 'hash-11-8.cnf.gz.no_w.cnf', 'hash-11.cnf.gz.no_w.cnf', 'hash-12-1.cnf.gz.no_w.cnf', 'hash-12-2.cnf.gz.no_w.cnf', 'hash-12-3.cnf.gz.no_w.cnf', 'hash-12-4.cnf.gz.no_w.cnf', 'hash-12-5.cnf.gz.no_w.cnf', 'hash-12-6.cnf.gz.no_w.cnf', 'hash-12-7.cnf.gz.no_w.cnf', 'hash-12-8.cnf.gz.no_w.cnf', 'hash-12.cnf.gz.no_w.cnf', 'hash-13-1.cnf.gz.no_w.cnf', 'hash-13-2.cnf.gz.no_w.cnf', 'hash-13-3.cnf.gz.no_w.cnf', 'hash-13-4.cnf.gz.no_w.cnf', 'hash-13-5.cnf.gz.no_w.cnf', 'hash-13-6.cnf.gz.no_w.cnf', 'hash-13-7.cnf.gz.no_w.cnf', 'hash-13-8.cnf.gz.no_w.cnf', 'hash-14.cnf.gz.no_w.cnf', 'hash16-12.cnf.gz.no_w.cnf', 'hash16-4.cnf.gz.no_w.cnf', 'hash16-8.cnf.gz.no_w.cnf', 'hash-16.cnf.gz.no_w.cnf', 'hash-2.cnf.gz.no_w.cnf', 'hash-4.cnf.gz.no_w.cnf', 'hash-6.cnf.gz.no_w.cnf', 'hash-8-1.cnf.gz.no_w.cnf', 'hash-8-2.cnf.gz.no_w.cnf', 'hash-8-3.cnf.gz.no_w.cnf', 'hash-8-4.cnf.gz.no_w.cnf', 'hash-8-5.cnf.gz.no_w.cnf', 'hash-8-6.cnf.gz.no_w.cnf', 'hash-8-7.cnf.gz.no_w.cnf', 'hash-8-8.cnf.gz.no_w.cnf', 'hash-8.cnf.gz.no_w.cnf', 'herman15.pm_20steps_6int_1fract_stable_over.dimacs.gz.no_w.cnf', 'herman15.pm_20steps_6int_1fract_stable_under.dimacs.gz.no_w.cnf', 'herman21.pm_20steps_6int_1fract_stable_over.dimacs.gz.no_w.cnf', 'herman21.pm_20steps_6int_1fract_stable_under.dimacs.gz.no_w.cnf', 'herman31.pm_20steps_6int_1fract_stable_over.dimacs.gz.no_w.cnf', 'herman31.pm_20steps_6int_1fract_stable_under.dimacs.gz.no_w.cnf', 'herman3.pm_20steps_6int_1fract_stable_over.dimacs.gz.no_w.cnf', 'herman3.pm_20steps_6int_1fract_stable_under.dimacs.gz.no_w.cnf', 'herman41.pm_20steps_6int_1fract_stable_over.dimacs.gz.no_w.cnf', 'herman41.pm_20steps_6int_1fract_stable_under.dimacs.gz.no_w.cnf', 
'herman9.pm_20steps_6int_1fract_stable_over.dimacs.gz.no_w.cnf', 'herman9.pm_20steps_6int_1fract_stable_under.dimacs.gz.no_w.cnf', 'isolateRightmost.sk_7_481.cnf.gz.no_w.cnf', 'IssueServiceImpl.sk_8_30.cnf.gz.no_w.cnf', 'IterationService.sk_12_27.cnf.gz.no_w.cnf', 'jburnim_morton.sk_13_530.cnf.gz.no_w.cnf', 'karatsuba.sk_7_41.cnf.gz.no_w.cnf', 'leader_sync3_2.pm_4steps_7int_1fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync3_2.pm_4steps_7int_1fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync3_32.pm_4steps_7int_5fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync3_32.pm_4steps_7int_5fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync3_64.pm_4steps_7int_6fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync3_64.pm_4steps_7int_6fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync3_8.pm_4steps_7int_3fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync3_8.pm_4steps_7int_3fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_10fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_10fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_10fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_10fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_11fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_11fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_11fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_11fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_12fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_12fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_12fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_12fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_13fract_elected_neg_over.dimacs.gz.no_w.cnf', 
'leader_sync4_11.pm_5steps_7int_13fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_13fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_13fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_14fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_14fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_14fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_14fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_15fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_15fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_15fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_15fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_16fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_16fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_16fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_16fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_17fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_17fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_17fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_17fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_18fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_18fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_18fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_18fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_19fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_19fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_19fract_elected_over.dimacs.gz.no_w.cnf', 
'leader_sync4_11.pm_5steps_7int_19fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_20fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_20fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_20fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_20fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_4fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_4fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_4fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_4fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_5fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_5fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_5fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_5fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_6fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_6fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_6fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_6fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_7fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_7fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_7fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_7fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_8fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_8fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_8fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_8fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_9fract_elected_neg_over.dimacs.gz.no_w.cnf', 
'leader_sync4_11.pm_5steps_7int_9fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_9fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_9fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_2.pm_5steps_7int_1fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_2.pm_5steps_7int_1fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_32.pm_5steps_7int_5fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_32.pm_5steps_7int_5fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_64.pm_5steps_7int_6fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_64.pm_5steps_7int_6fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_8.pm_5steps_7int_3fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_8.pm_5steps_7int_3fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync6_2.pm_7steps_7int_1fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync6_2.pm_7steps_7int_1fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync6_32.pm_7steps_7int_5fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync6_32.pm_7steps_7int_5fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync6_64.pm_7steps_7int_6fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync6_64.pm_7steps_7int_6fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync6_8.pm_7steps_7int_3fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync6_8.pm_7steps_7int_3fract_elected_under.dimacs.gz.no_w.cnf', 'listReverse.sk_11_43.cnf.gz.no_w.cnf', 'log-1.cnf.gz.no_w.cnf', 'log-2.cnf.gz.no_w.cnf', 'log2.sk_72_391.cnf.gz.no_w.cnf', 'log-3.cnf.gz.no_w.cnf', 'log-4.cnf.gz.no_w.cnf', 'log-5.cnf.gz.no_w.cnf', 'logcount.sk_16_86.cnf.gz.no_w.cnf', 'LoginService2.sk_23_36.cnf.gz.no_w.cnf', 'LoginService.sk_20_34.cnf.gz.no_w.cnf', 'lss.sk_6_7.cnf.gz.no_w.cnf', 'min-12.cnf.gz.no_w.cnf', 'min-12s.cnf.gz.no_w.cnf', 'min-16.cnf.gz.no_w.cnf', 'min-16s.cnf.gz.no_w.cnf', 'min-1s.cnf.gz.no_w.cnf', 'min-20.cnf.gz.no_w.cnf', 'min-20s.cnf.gz.no_w.cnf', 'min-24.cnf.gz.no_w.cnf', 'min-24s.cnf.gz.no_w.cnf', 
'min-28.cnf.gz.no_w.cnf', 'min-28s.cnf.gz.no_w.cnf', 'min-2s.cnf.gz.no_w.cnf', 'min-32.cnf.gz.no_w.cnf', 'min-32s.cnf.gz.no_w.cnf', 'min-3s.cnf.gz.no_w.cnf', 'min-4.cnf.gz.no_w.cnf', 'min-4s.cnf.gz.no_w.cnf', 'min-6s.cnf.gz.no_w.cnf', 'min-8.cnf.gz.no_w.cnf', 'min-8s.cnf.gz.no_w.cnf', 'modexp16-2.cnf.gz.no_w.cnf', 'modexp16-4.cnf.gz.no_w.cnf', 'modexp8-4-1.cnf.gz.no_w.cnf', 'modexp8-4-2.cnf.gz.no_w.cnf', 'modexp8-4-3.cnf.gz.no_w.cnf', 'modexp8-4-4.cnf.gz.no_w.cnf', 'modexp8-4-5.cnf.gz.no_w.cnf', 'modexp8-4-6.cnf.gz.no_w.cnf', 'modexp8-4-7.cnf.gz.no_w.cnf', 'modexp8-4-8.cnf.gz.no_w.cnf', 'modexp8-5-1.cnf.gz.no_w.cnf', 'modexp8-5-2.cnf.gz.no_w.cnf', 'modexp8-5-3.cnf.gz.no_w.cnf', 'modexp8-5-4.cnf.gz.no_w.cnf', 'modexp8-5-5.cnf.gz.no_w.cnf', 'modexp8-5-6.cnf.gz.no_w.cnf', 'modexp8-5-7.cnf.gz.no_w.cnf', 'modexp8-5-8.cnf.gz.no_w.cnf', 'modexp8-6-1.cnf.gz.no_w.cnf', 'modexp8-6-2.cnf.gz.no_w.cnf', 'modexp8-6-3.cnf.gz.no_w.cnf', 'modexp8-6-4.cnf.gz.no_w.cnf', 'modexp8-6-5.cnf.gz.no_w.cnf', 'modexp8-6-6.cnf.gz.no_w.cnf', 'modexp8-6-7.cnf.gz.no_w.cnf', 'modexp8-6-8.cnf.gz.no_w.cnf', 'modexp8-7-1.cnf.gz.no_w.cnf', 'modexp8-7-2.cnf.gz.no_w.cnf', 'modexp8-7-3.cnf.gz.no_w.cnf', 'modexp8-7-4.cnf.gz.no_w.cnf', 'modexp8-7-5.cnf.gz.no_w.cnf', 'modexp8-7-6.cnf.gz.no_w.cnf', 'modexp8-7-7.cnf.gz.no_w.cnf', 'modexp8-7-8.cnf.gz.no_w.cnf', 'modexp8-8-1.cnf.gz.no_w.cnf', 'modexp8-8-2.cnf.gz.no_w.cnf', 'modexp8-8-3.cnf.gz.no_w.cnf', 'modexp8-8-4.cnf.gz.no_w.cnf', 'modexp8-8-5.cnf.gz.no_w.cnf', 'modexp8-8-6.cnf.gz.no_w.cnf', 'modexp8-8-7.cnf.gz.no_w.cnf', 'modexp8-8-8.cnf.gz.no_w.cnf', 'nand.pm_100steps_15int_8fract_lessThan10PercentAreErroneous_N=20_K=4over.dimacs.gz.no_w.cnf', 'nand.pm_100steps_15int_8fract_lessThan10PercentAreErroneous_N=20_K=4under.dimacs.gz.no_w.cnf', 'nand.pm_80steps_15int_8fract_lessThan10PercentAreErroneous_N=20_K=2over.dimacs.gz.no_w.cnf', 'nand.pm_80steps_15int_8fract_lessThan10PercentAreErroneous_N=20_K=2under.dimacs.gz.no_w.cnf', 
'nand.pm_80steps_15int_8fract_lessThan10PercentAreErroneous_N=20_K=3over.dimacs.gz.no_w.cnf', 'nand.pm_80steps_15int_8fract_lessThan10PercentAreErroneous_N=20_K=3under.dimacs.gz.no_w.cnf', 'NotificationServiceImpl2.sk_10_36.cnf.gz.no_w.cnf', 'or-100-10-10.cnf.gz.no_w.cnf', 'or-100-10-10-UC-10.cnf.gz.no_w.cnf', 'or-100-10-10-UC-20.cnf.gz.no_w.cnf', 'or-100-10-10-UC-30.cnf.gz.no_w.cnf', 'or-100-10-10-UC-40.cnf.gz.no_w.cnf', 'or-100-10-10-UC-50.cnf.gz.no_w.cnf', 'or-100-10-10-UC-60.cnf.gz.no_w.cnf', 'or-100-10-1.cnf.gz.no_w.cnf', 'or-100-10-1-UC-10.cnf.gz.no_w.cnf', 'or-100-10-1-UC-20.cnf.gz.no_w.cnf', 'or-100-10-1-UC-30.cnf.gz.no_w.cnf', 'or-100-10-1-UC-40.cnf.gz.no_w.cnf', 'or-100-10-1-UC-50.cnf.gz.no_w.cnf', 'or-100-10-1-UC-60.cnf.gz.no_w.cnf', 'or-100-10-2.cnf.gz.no_w.cnf', 'or-100-10-2-UC-10.cnf.gz.no_w.cnf', 'or-100-10-2-UC-20.cnf.gz.no_w.cnf', 'or-100-10-2-UC-30.cnf.gz.no_w.cnf', 'or-100-10-2-UC-40.cnf.gz.no_w.cnf', 'or-100-10-2-UC-50.cnf.gz.no_w.cnf', 'or-100-10-2-UC-60.cnf.gz.no_w.cnf', 'or-100-10-3.cnf.gz.no_w.cnf', 'or-100-10-3-UC-10.cnf.gz.no_w.cnf', 'or-100-10-3-UC-20.cnf.gz.no_w.cnf', 'or-100-10-3-UC-30.cnf.gz.no_w.cnf', 'or-100-10-3-UC-40.cnf.gz.no_w.cnf', 'or-100-10-3-UC-50.cnf.gz.no_w.cnf', 'or-100-10-3-UC-60.cnf.gz.no_w.cnf', 'or-100-10-4.cnf.gz.no_w.cnf', 'or-100-10-4-UC-10.cnf.gz.no_w.cnf', 'or-100-10-4-UC-20.cnf.gz.no_w.cnf', 'or-100-10-4-UC-30.cnf.gz.no_w.cnf', 'or-100-10-4-UC-40.cnf.gz.no_w.cnf', 'or-100-10-4-UC-50.cnf.gz.no_w.cnf', 'or-100-10-4-UC-60.cnf.gz.no_w.cnf', 'or-100-10-5.cnf.gz.no_w.cnf', 'or-100-10-5-UC-10.cnf.gz.no_w.cnf', 'or-100-10-5-UC-20.cnf.gz.no_w.cnf', 'or-100-10-5-UC-30.cnf.gz.no_w.cnf', 'or-100-10-5-UC-40.cnf.gz.no_w.cnf', 'or-100-10-5-UC-50.cnf.gz.no_w.cnf', 'or-100-10-5-UC-60.cnf.gz.no_w.cnf', 'or-100-10-6.cnf.gz.no_w.cnf', 'or-100-10-6-UC-10.cnf.gz.no_w.cnf', 'or-100-10-6-UC-20.cnf.gz.no_w.cnf', 'or-100-10-6-UC-30.cnf.gz.no_w.cnf', 'or-100-10-6-UC-40.cnf.gz.no_w.cnf', 'or-100-10-6-UC-50.cnf.gz.no_w.cnf', 
'or-100-10-6-UC-60.cnf.gz.no_w.cnf', 'or-100-10-7.cnf.gz.no_w.cnf', 'or-100-10-7-UC-10.cnf.gz.no_w.cnf', 'or-100-10-7-UC-20.cnf.gz.no_w.cnf', 'or-100-10-7-UC-30.cnf.gz.no_w.cnf', 'or-100-10-7-UC-40.cnf.gz.no_w.cnf', 'or-100-10-7-UC-50.cnf.gz.no_w.cnf', 'or-100-10-7-UC-60.cnf.gz.no_w.cnf', 'or-100-10-8.cnf.gz.no_w.cnf', 'or-100-10-8-UC-10.cnf.gz.no_w.cnf', 'or-100-10-8-UC-20.cnf.gz.no_w.cnf', 'or-100-10-8-UC-30.cnf.gz.no_w.cnf', 'or-100-10-8-UC-40.cnf.gz.no_w.cnf', 'or-100-10-8-UC-50.cnf.gz.no_w.cnf', 'or-100-10-8-UC-60.cnf.gz.no_w.cnf', 'or-100-10-9.cnf.gz.no_w.cnf', 'or-100-10-9-UC-10.cnf.gz.no_w.cnf', 'or-100-10-9-UC-20.cnf.gz.no_w.cnf', 'or-100-10-9-UC-30.cnf.gz.no_w.cnf', 'or-100-10-9-UC-40.cnf.gz.no_w.cnf', 'or-100-10-9-UC-50.cnf.gz.no_w.cnf', 'or-100-10-9-UC-60.cnf.gz.no_w.cnf', 'or-100-20-10.cnf.gz.no_w.cnf', 'or-100-20-10-UC-10.cnf.gz.no_w.cnf', 'or-100-20-10-UC-20.cnf.gz.no_w.cnf', 'or-100-20-10-UC-30.cnf.gz.no_w.cnf', 'or-100-20-10-UC-40.cnf.gz.no_w.cnf', 'or-100-20-10-UC-50.cnf.gz.no_w.cnf', 'or-100-20-10-UC-60.cnf.gz.no_w.cnf', 'or-100-20-1.cnf.gz.no_w.cnf', 'or-100-20-1-UC-10.cnf.gz.no_w.cnf', 'or-100-20-1-UC-20.cnf.gz.no_w.cnf', 'or-100-20-1-UC-30.cnf.gz.no_w.cnf', 'or-100-20-1-UC-40.cnf.gz.no_w.cnf', 'or-100-20-1-UC-50.cnf.gz.no_w.cnf', 'or-100-20-1-UC-60.cnf.gz.no_w.cnf', 'or-100-20-2.cnf.gz.no_w.cnf', 'or-100-20-2-UC-10.cnf.gz.no_w.cnf', 'or-100-20-2-UC-20.cnf.gz.no_w.cnf', 'or-100-20-2-UC-30.cnf.gz.no_w.cnf', 'or-100-20-2-UC-40.cnf.gz.no_w.cnf', 'or-100-20-2-UC-50.cnf.gz.no_w.cnf', 'or-100-20-2-UC-60.cnf.gz.no_w.cnf', 'or-100-20-3.cnf.gz.no_w.cnf', 'or-100-20-3-UC-10.cnf.gz.no_w.cnf', 'or-100-20-3-UC-20.cnf.gz.no_w.cnf', 'or-100-20-3-UC-30.cnf.gz.no_w.cnf', 'or-100-20-3-UC-40.cnf.gz.no_w.cnf', 'or-100-20-3-UC-50.cnf.gz.no_w.cnf', 'or-100-20-3-UC-60.cnf.gz.no_w.cnf', 'or-100-20-4.cnf.gz.no_w.cnf', 'or-100-20-4-UC-10.cnf.gz.no_w.cnf', 'or-100-20-4-UC-20.cnf.gz.no_w.cnf', 'or-100-20-4-UC-30.cnf.gz.no_w.cnf', 'or-100-20-4-UC-40.cnf.gz.no_w.cnf', 
'or-100-20-4-UC-50.cnf.gz.no_w.cnf', 'or-100-20-4-UC-60.cnf.gz.no_w.cnf', 'or-100-20-5.cnf.gz.no_w.cnf', 'or-100-20-5-UC-10.cnf.gz.no_w.cnf', 'or-100-20-5-UC-20.cnf.gz.no_w.cnf', 'or-100-20-5-UC-30.cnf.gz.no_w.cnf', 'or-100-20-5-UC-40.cnf.gz.no_w.cnf', 'or-100-20-5-UC-50.cnf.gz.no_w.cnf', 'or-100-20-5-UC-60.cnf.gz.no_w.cnf', 'or-100-20-6.cnf.gz.no_w.cnf', 'or-100-20-6-UC-10.cnf.gz.no_w.cnf', 'or-100-20-6-UC-20.cnf.gz.no_w.cnf', 'or-100-20-6-UC-30.cnf.gz.no_w.cnf', 'or-100-20-6-UC-40.cnf.gz.no_w.cnf', 'or-100-20-6-UC-50.cnf.gz.no_w.cnf', 'or-100-20-6-UC-60.cnf.gz.no_w.cnf', 'or-100-20-7.cnf.gz.no_w.cnf', 'or-100-20-7-UC-10.cnf.gz.no_w.cnf', 'or-100-20-7-UC-20.cnf.gz.no_w.cnf', 'or-100-20-7-UC-30.cnf.gz.no_w.cnf', 'or-100-20-7-UC-40.cnf.gz.no_w.cnf', 'or-100-20-7-UC-50.cnf.gz.no_w.cnf', 'or-100-20-7-UC-60.cnf.gz.no_w.cnf', 'or-100-20-8.cnf.gz.no_w.cnf', 'or-100-20-8-UC-10.cnf.gz.no_w.cnf', 'or-100-20-8-UC-20.cnf.gz.no_w.cnf', 'or-100-20-8-UC-30.cnf.gz.no_w.cnf', 'or-100-20-8-UC-40.cnf.gz.no_w.cnf', 'or-100-20-8-UC-50.cnf.gz.no_w.cnf', 'or-100-20-8-UC-60.cnf.gz.no_w.cnf', 'or-100-20-9.cnf.gz.no_w.cnf', 'or-100-20-9-UC-10.cnf.gz.no_w.cnf', 'or-100-20-9-UC-20.cnf.gz.no_w.cnf', 'or-100-20-9-UC-30.cnf.gz.no_w.cnf', 'or-100-20-9-UC-40.cnf.gz.no_w.cnf', 'or-100-20-9-UC-50.cnf.gz.no_w.cnf', 'or-100-20-9-UC-60.cnf.gz.no_w.cnf', 'or-100-5-10.cnf.gz.no_w.cnf', 'or-100-5-10-UC-10.cnf.gz.no_w.cnf', 'or-100-5-10-UC-20.cnf.gz.no_w.cnf', 'or-100-5-10-UC-30.cnf.gz.no_w.cnf', 'or-100-5-10-UC-40.cnf.gz.no_w.cnf', 'or-100-5-10-UC-50.cnf.gz.no_w.cnf', 'or-100-5-10-UC-60.cnf.gz.no_w.cnf', 'or-100-5-1.cnf.gz.no_w.cnf', 'or-100-5-1-UC-10.cnf.gz.no_w.cnf', 'or-100-5-1-UC-20.cnf.gz.no_w.cnf', 'or-100-5-1-UC-30.cnf.gz.no_w.cnf', 'or-100-5-1-UC-40.cnf.gz.no_w.cnf', 'or-100-5-1-UC-50.cnf.gz.no_w.cnf', 'or-100-5-1-UC-60.cnf.gz.no_w.cnf', 'or-100-5-2.cnf.gz.no_w.cnf', 'or-100-5-2-UC-10.cnf.gz.no_w.cnf', 'or-100-5-2-UC-20.cnf.gz.no_w.cnf', 'or-100-5-2-UC-30.cnf.gz.no_w.cnf', 
'or-100-5-2-UC-40.cnf.gz.no_w.cnf', 'or-100-5-2-UC-50.cnf.gz.no_w.cnf', 'or-100-5-2-UC-60.cnf.gz.no_w.cnf', 'or-100-5-3.cnf.gz.no_w.cnf', 'or-100-5-3-UC-10.cnf.gz.no_w.cnf', 'or-100-5-3-UC-20.cnf.gz.no_w.cnf', 'or-100-5-3-UC-30.cnf.gz.no_w.cnf', 'or-100-5-3-UC-40.cnf.gz.no_w.cnf', 'or-100-5-3-UC-50.cnf.gz.no_w.cnf', 'or-100-5-3-UC-60.cnf.gz.no_w.cnf', 'or-100-5-4.cnf.gz.no_w.cnf', 'or-100-5-4-UC-10.cnf.gz.no_w.cnf', 'or-100-5-4-UC-20.cnf.gz.no_w.cnf', 'or-100-5-4-UC-30.cnf.gz.no_w.cnf', 'or-100-5-4-UC-40.cnf.gz.no_w.cnf', 'or-100-5-4-UC-50.cnf.gz.no_w.cnf', 'or-100-5-4-UC-60.cnf.gz.no_w.cnf', 'or-100-5-5.cnf.gz.no_w.cnf', 'or-100-5-5-UC-10.cnf.gz.no_w.cnf', 'or-100-5-5-UC-20.cnf.gz.no_w.cnf', 'or-100-5-5-UC-30.cnf.gz.no_w.cnf', 'or-100-5-5-UC-40.cnf.gz.no_w.cnf', 'or-100-5-5-UC-50.cnf.gz.no_w.cnf', 'or-100-5-5-UC-60.cnf.gz.no_w.cnf', 'or-100-5-6.cnf.gz.no_w.cnf', 'or-100-5-6-UC-10.cnf.gz.no_w.cnf', 'or-100-5-6-UC-20.cnf.gz.no_w.cnf', 'or-100-5-6-UC-30.cnf.gz.no_w.cnf', 'or-100-5-6-UC-40.cnf.gz.no_w.cnf', 'or-100-5-6-UC-50.cnf.gz.no_w.cnf', 'or-100-5-6-UC-60.cnf.gz.no_w.cnf', 'or-100-5-7.cnf.gz.no_w.cnf', 'or-100-5-7-UC-10.cnf.gz.no_w.cnf', 'or-100-5-7-UC-20.cnf.gz.no_w.cnf', 'or-100-5-7-UC-30.cnf.gz.no_w.cnf', 'or-100-5-7-UC-40.cnf.gz.no_w.cnf', 'or-100-5-7-UC-50.cnf.gz.no_w.cnf', 'or-100-5-7-UC-60.cnf.gz.no_w.cnf', 'or-100-5-8.cnf.gz.no_w.cnf', 'or-100-5-8-UC-10.cnf.gz.no_w.cnf', 'or-100-5-8-UC-20.cnf.gz.no_w.cnf', 'or-100-5-8-UC-30.cnf.gz.no_w.cnf', 'or-100-5-8-UC-40.cnf.gz.no_w.cnf', 'or-100-5-8-UC-50.cnf.gz.no_w.cnf', 'or-100-5-8-UC-60.cnf.gz.no_w.cnf', 'or-100-5-9.cnf.gz.no_w.cnf', 'or-100-5-9-UC-10.cnf.gz.no_w.cnf', 'or-100-5-9-UC-20.cnf.gz.no_w.cnf', 'or-100-5-9-UC-30.cnf.gz.no_w.cnf', 'or-100-5-9-UC-40.cnf.gz.no_w.cnf', 'or-100-5-9-UC-50.cnf.gz.no_w.cnf', 'or-100-5-9-UC-60.cnf.gz.no_w.cnf', 'or-50-10-10.cnf.gz.no_w.cnf', 'or-50-10-10-UC-10.cnf.gz.no_w.cnf', 'or-50-10-10-UC-20.cnf.gz.no_w.cnf', 'or-50-10-10-UC-30.cnf.gz.no_w.cnf', 
'or-50-10-10-UC-40.cnf.gz.no_w.cnf', 'or-50-10-1.cnf.gz.no_w.cnf', 'or-50-10-1-UC-10.cnf.gz.no_w.cnf', 'or-50-10-1-UC-20.cnf.gz.no_w.cnf', 'or-50-10-1-UC-30.cnf.gz.no_w.cnf', 'or-50-10-1-UC-40.cnf.gz.no_w.cnf', 'or-50-10-2.cnf.gz.no_w.cnf', 'or-50-10-2-UC-10.cnf.gz.no_w.cnf', 'or-50-10-2-UC-20.cnf.gz.no_w.cnf', 'or-50-10-2-UC-30.cnf.gz.no_w.cnf', 'or-50-10-2-UC-40.cnf.gz.no_w.cnf', 'or-50-10-3.cnf.gz.no_w.cnf', 'or-50-10-3-UC-10.cnf.gz.no_w.cnf', 'or-50-10-3-UC-20.cnf.gz.no_w.cnf', 'or-50-10-3-UC-30.cnf.gz.no_w.cnf', 'or-50-10-3-UC-40.cnf.gz.no_w.cnf', 'or-50-10-4.cnf.gz.no_w.cnf', 'or-50-10-4-UC-10.cnf.gz.no_w.cnf', 'or-50-10-4-UC-20.cnf.gz.no_w.cnf', 'or-50-10-4-UC-30.cnf.gz.no_w.cnf', 'or-50-10-4-UC-40.cnf.gz.no_w.cnf', 'or-50-10-5.cnf.gz.no_w.cnf', 'or-50-10-5-UC-10.cnf.gz.no_w.cnf', 'or-50-10-5-UC-20.cnf.gz.no_w.cnf', 'or-50-10-5-UC-30.cnf.gz.no_w.cnf', 'or-50-10-5-UC-40.cnf.gz.no_w.cnf', 'or-50-10-6.cnf.gz.no_w.cnf', 'or-50-10-6-UC-10.cnf.gz.no_w.cnf', 'or-50-10-6-UC-20.cnf.gz.no_w.cnf', 'or-50-10-6-UC-30.cnf.gz.no_w.cnf', 'or-50-10-6-UC-40.cnf.gz.no_w.cnf', 'or-50-10-7.cnf.gz.no_w.cnf', 'or-50-10-7-UC-10.cnf.gz.no_w.cnf', 'or-50-10-7-UC-20.cnf.gz.no_w.cnf', 'or-50-10-7-UC-30.cnf.gz.no_w.cnf', 'or-50-10-7-UC-40.cnf.gz.no_w.cnf', 'or-50-10-8.cnf.gz.no_w.cnf', 'or-50-10-8-UC-10.cnf.gz.no_w.cnf', 'or-50-10-8-UC-20.cnf.gz.no_w.cnf', 'or-50-10-8-UC-30.cnf.gz.no_w.cnf', 'or-50-10-8-UC-40.cnf.gz.no_w.cnf', 'or-50-10-9.cnf.gz.no_w.cnf', 'or-50-10-9-UC-10.cnf.gz.no_w.cnf', 'or-50-10-9-UC-20.cnf.gz.no_w.cnf', 'or-50-10-9-UC-30.cnf.gz.no_w.cnf', 'or-50-10-9-UC-40.cnf.gz.no_w.cnf', 'or-50-20-10.cnf.gz.no_w.cnf', 'or-50-20-10-UC-10.cnf.gz.no_w.cnf', 'or-50-20-10-UC-20.cnf.gz.no_w.cnf', 'or-50-20-10-UC-30.cnf.gz.no_w.cnf', 'or-50-20-10-UC-40.cnf.gz.no_w.cnf', 'or-50-20-1.cnf.gz.no_w.cnf', 'or-50-20-1-UC-10.cnf.gz.no_w.cnf', 'or-50-20-1-UC-20.cnf.gz.no_w.cnf', 'or-50-20-1-UC-30.cnf.gz.no_w.cnf', 'or-50-20-1-UC-40.cnf.gz.no_w.cnf', 'or-50-20-2.cnf.gz.no_w.cnf', 
'or-50-20-2-UC-10.cnf.gz.no_w.cnf', 'or-50-20-2-UC-20.cnf.gz.no_w.cnf', 'or-50-20-2-UC-30.cnf.gz.no_w.cnf', 'or-50-20-2-UC-40.cnf.gz.no_w.cnf', 'or-50-20-3.cnf.gz.no_w.cnf', 'or-50-20-3-UC-10.cnf.gz.no_w.cnf', 'or-50-20-3-UC-20.cnf.gz.no_w.cnf', 'or-50-20-3-UC-30.cnf.gz.no_w.cnf', 'or-50-20-3-UC-40.cnf.gz.no_w.cnf', 'or-50-20-4.cnf.gz.no_w.cnf', 'or-50-20-4-UC-10.cnf.gz.no_w.cnf', 'or-50-20-4-UC-20.cnf.gz.no_w.cnf', 'or-50-20-4-UC-30.cnf.gz.no_w.cnf', 'or-50-20-4-UC-40.cnf.gz.no_w.cnf', 'or-50-20-5.cnf.gz.no_w.cnf', 'or-50-20-5-UC-10.cnf.gz.no_w.cnf', 'or-50-20-5-UC-20.cnf.gz.no_w.cnf', 'or-50-20-5-UC-30.cnf.gz.no_w.cnf', 'or-50-20-5-UC-40.cnf.gz.no_w.cnf', 'or-50-20-6.cnf.gz.no_w.cnf', 'or-50-20-6-UC-10.cnf.gz.no_w.cnf', 'or-50-20-6-UC-20.cnf.gz.no_w.cnf', 'or-50-20-6-UC-30.cnf.gz.no_w.cnf', 'or-50-20-6-UC-40.cnf.gz.no_w.cnf', 'or-50-20-7.cnf.gz.no_w.cnf', 'or-50-20-7-UC-10.cnf.gz.no_w.cnf', 'or-50-20-7-UC-20.cnf.gz.no_w.cnf', 'or-50-20-7-UC-30.cnf.gz.no_w.cnf', 'or-50-20-7-UC-40.cnf.gz.no_w.cnf', 'or-50-20-8.cnf.gz.no_w.cnf', 'or-50-20-8-UC-10.cnf.gz.no_w.cnf', 'or-50-20-8-UC-20.cnf.gz.no_w.cnf', 'or-50-20-8-UC-30.cnf.gz.no_w.cnf', 'or-50-20-8-UC-40.cnf.gz.no_w.cnf', 'or-50-20-9.cnf.gz.no_w.cnf', 'or-50-20-9-UC-10.cnf.gz.no_w.cnf', 'or-50-20-9-UC-20.cnf.gz.no_w.cnf', 'or-50-20-9-UC-30.cnf.gz.no_w.cnf', 'or-50-20-9-UC-40.cnf.gz.no_w.cnf', 'or-50-5-10.cnf.gz.no_w.cnf', 'or-50-5-10-UC-10.cnf.gz.no_w.cnf', 'or-50-5-10-UC-20.cnf.gz.no_w.cnf', 'or-50-5-10-UC-30.cnf.gz.no_w.cnf', 'or-50-5-10-UC-40.cnf.gz.no_w.cnf', 'or-50-5-1.cnf.gz.no_w.cnf', 'or-50-5-1-UC-10.cnf.gz.no_w.cnf', 'or-50-5-1-UC-20.cnf.gz.no_w.cnf', 'or-50-5-1-UC-30.cnf.gz.no_w.cnf', 'or-50-5-1-UC-40.cnf.gz.no_w.cnf', 'or-50-5-2.cnf.gz.no_w.cnf', 'or-50-5-2-UC-10.cnf.gz.no_w.cnf', 'or-50-5-2-UC-20.cnf.gz.no_w.cnf', 'or-50-5-2-UC-30.cnf.gz.no_w.cnf', 'or-50-5-2-UC-40.cnf.gz.no_w.cnf', 'or-50-5-3.cnf.gz.no_w.cnf', 'or-50-5-3-UC-10.cnf.gz.no_w.cnf', 'or-50-5-3-UC-20.cnf.gz.no_w.cnf', 
'or-50-5-3-UC-30.cnf.gz.no_w.cnf', 'or-50-5-3-UC-40.cnf.gz.no_w.cnf', 'or-50-5-4.cnf.gz.no_w.cnf', 'or-50-5-4-UC-10.cnf.gz.no_w.cnf', 'or-50-5-4-UC-20.cnf.gz.no_w.cnf', 'or-50-5-4-UC-30.cnf.gz.no_w.cnf', 'or-50-5-4-UC-40.cnf.gz.no_w.cnf', 'or-50-5-5.cnf.gz.no_w.cnf', 'or-50-5-5-UC-10.cnf.gz.no_w.cnf', 'or-50-5-5-UC-20.cnf.gz.no_w.cnf', 'or-50-5-5-UC-30.cnf.gz.no_w.cnf', 'or-50-5-5-UC-40.cnf.gz.no_w.cnf', 'or-50-5-6.cnf.gz.no_w.cnf', 'or-50-5-6-UC-10.cnf.gz.no_w.cnf', 'or-50-5-6-UC-20.cnf.gz.no_w.cnf', 'or-50-5-6-UC-30.cnf.gz.no_w.cnf', 'or-50-5-6-UC-40.cnf.gz.no_w.cnf', 'or-50-5-7.cnf.gz.no_w.cnf', 'or-50-5-7-UC-10.cnf.gz.no_w.cnf', 'or-50-5-7-UC-20.cnf.gz.no_w.cnf', 'or-50-5-7-UC-30.cnf.gz.no_w.cnf', 'or-50-5-7-UC-40.cnf.gz.no_w.cnf', 'or-50-5-8.cnf.gz.no_w.cnf', 'or-50-5-8-UC-10.cnf.gz.no_w.cnf', 'or-50-5-8-UC-20.cnf.gz.no_w.cnf', 'or-50-5-8-UC-30.cnf.gz.no_w.cnf', 'or-50-5-8-UC-40.cnf.gz.no_w.cnf', 'or-50-5-9.cnf.gz.no_w.cnf', 'or-50-5-9-UC-10.cnf.gz.no_w.cnf', 'or-50-5-9-UC-20.cnf.gz.no_w.cnf', 'or-50-5-9-UC-30.cnf.gz.no_w.cnf', 'or-50-5-9-UC-40.cnf.gz.no_w.cnf', 'or-60-10-10.cnf.gz.no_w.cnf', 'or-60-10-10-UC-10.cnf.gz.no_w.cnf', 'or-60-10-10-UC-20.cnf.gz.no_w.cnf', 'or-60-10-10-UC-30.cnf.gz.no_w.cnf', 'or-60-10-10-UC-40.cnf.gz.no_w.cnf', 'or-60-10-1.cnf.gz.no_w.cnf', 'or-60-10-1-UC-10.cnf.gz.no_w.cnf', 'or-60-10-1-UC-20.cnf.gz.no_w.cnf', 'or-60-10-1-UC-30.cnf.gz.no_w.cnf', 'or-60-10-1-UC-40.cnf.gz.no_w.cnf', 'or-60-10-2.cnf.gz.no_w.cnf', 'or-60-10-2-UC-10.cnf.gz.no_w.cnf', 'or-60-10-2-UC-20.cnf.gz.no_w.cnf', 'or-60-10-2-UC-30.cnf.gz.no_w.cnf', 'or-60-10-2-UC-40.cnf.gz.no_w.cnf', 'or-60-10-3.cnf.gz.no_w.cnf', 'or-60-10-3-UC-10.cnf.gz.no_w.cnf', 'or-60-10-3-UC-20.cnf.gz.no_w.cnf', 'or-60-10-3-UC-30.cnf.gz.no_w.cnf', 'or-60-10-3-UC-40.cnf.gz.no_w.cnf', 'or-60-10-4.cnf.gz.no_w.cnf', 'or-60-10-4-UC-10.cnf.gz.no_w.cnf', 'or-60-10-4-UC-20.cnf.gz.no_w.cnf', 'or-60-10-4-UC-30.cnf.gz.no_w.cnf', 'or-60-10-4-UC-40.cnf.gz.no_w.cnf', 'or-60-10-5.cnf.gz.no_w.cnf', 
'or-60-10-5-UC-10.cnf.gz.no_w.cnf', 'or-60-10-5-UC-20.cnf.gz.no_w.cnf', 'or-60-10-5-UC-30.cnf.gz.no_w.cnf', 'or-60-10-5-UC-40.cnf.gz.no_w.cnf', 'or-60-10-6.cnf.gz.no_w.cnf', 'or-60-10-6-UC-10.cnf.gz.no_w.cnf', 'or-60-10-6-UC-20.cnf.gz.no_w.cnf', 'or-60-10-6-UC-30.cnf.gz.no_w.cnf', 'or-60-10-6-UC-40.cnf.gz.no_w.cnf', 'or-60-10-7.cnf.gz.no_w.cnf', 'or-60-10-7-UC-10.cnf.gz.no_w.cnf', 'or-60-10-7-UC-20.cnf.gz.no_w.cnf', 'or-60-10-7-UC-30.cnf.gz.no_w.cnf', 'or-60-10-7-UC-40.cnf.gz.no_w.cnf', 'or-60-10-8.cnf.gz.no_w.cnf', 'or-60-10-8-UC-10.cnf.gz.no_w.cnf', 'or-60-10-8-UC-20.cnf.gz.no_w.cnf', 'or-60-10-8-UC-30.cnf.gz.no_w.cnf', 'or-60-10-8-UC-40.cnf.gz.no_w.cnf', 'or-60-10-9.cnf.gz.no_w.cnf', 'or-60-10-9-UC-10.cnf.gz.no_w.cnf', 'or-60-10-9-UC-20.cnf.gz.no_w.cnf', 'or-60-10-9-UC-30.cnf.gz.no_w.cnf', 'or-60-10-9-UC-40.cnf.gz.no_w.cnf', 'or-60-20-10.cnf.gz.no_w.cnf', 'or-60-20-10-UC-10.cnf.gz.no_w.cnf', 'or-60-20-10-UC-20.cnf.gz.no_w.cnf', 'or-60-20-10-UC-30.cnf.gz.no_w.cnf', 'or-60-20-10-UC-40.cnf.gz.no_w.cnf', 'or-60-20-1.cnf.gz.no_w.cnf', 'or-60-20-1-UC-10.cnf.gz.no_w.cnf', 'or-60-20-1-UC-20.cnf.gz.no_w.cnf', 'or-60-20-1-UC-30.cnf.gz.no_w.cnf', 'or-60-20-1-UC-40.cnf.gz.no_w.cnf', 'or-60-20-2.cnf.gz.no_w.cnf', 'or-60-20-2-UC-10.cnf.gz.no_w.cnf', 'or-60-20-2-UC-20.cnf.gz.no_w.cnf', 'or-60-20-2-UC-30.cnf.gz.no_w.cnf', 'or-60-20-2-UC-40.cnf.gz.no_w.cnf', 'or-60-20-3.cnf.gz.no_w.cnf', 'or-60-20-3-UC-10.cnf.gz.no_w.cnf', 'or-60-20-3-UC-20.cnf.gz.no_w.cnf', 'or-60-20-3-UC-30.cnf.gz.no_w.cnf', 'or-60-20-3-UC-40.cnf.gz.no_w.cnf', 'or-60-20-4.cnf.gz.no_w.cnf', 'or-60-20-4-UC-10.cnf.gz.no_w.cnf', 'or-60-20-4-UC-20.cnf.gz.no_w.cnf', 'or-60-20-4-UC-30.cnf.gz.no_w.cnf', 'or-60-20-4-UC-40.cnf.gz.no_w.cnf', 'or-60-20-5.cnf.gz.no_w.cnf', 'or-60-20-5-UC-10.cnf.gz.no_w.cnf', 'or-60-20-5-UC-20.cnf.gz.no_w.cnf', 'or-60-20-5-UC-30.cnf.gz.no_w.cnf', 'or-60-20-5-UC-40.cnf.gz.no_w.cnf', 'or-60-20-6.cnf.gz.no_w.cnf', 'or-60-20-6-UC-10.cnf.gz.no_w.cnf', 'or-60-20-6-UC-20.cnf.gz.no_w.cnf', 
'or-60-20-6-UC-30.cnf.gz.no_w.cnf', 'or-60-20-6-UC-40.cnf.gz.no_w.cnf', 'or-60-20-7.cnf.gz.no_w.cnf', 'or-60-20-7-UC-10.cnf.gz.no_w.cnf', 'or-60-20-7-UC-20.cnf.gz.no_w.cnf', 'or-60-20-7-UC-30.cnf.gz.no_w.cnf', 'or-60-20-7-UC-40.cnf.gz.no_w.cnf', 'or-60-20-8.cnf.gz.no_w.cnf', 'or-60-20-8-UC-10.cnf.gz.no_w.cnf', 'or-60-20-8-UC-20.cnf.gz.no_w.cnf', 'or-60-20-8-UC-30.cnf.gz.no_w.cnf', 'or-60-20-8-UC-40.cnf.gz.no_w.cnf', 'or-60-20-9.cnf.gz.no_w.cnf', 'or-60-20-9-UC-10.cnf.gz.no_w.cnf', 'or-60-20-9-UC-20.cnf.gz.no_w.cnf', 'or-60-20-9-UC-30.cnf.gz.no_w.cnf', 'or-60-20-9-UC-40.cnf.gz.no_w.cnf', 'or-60-5-10.cnf.gz.no_w.cnf', 'or-60-5-10-UC-10.cnf.gz.no_w.cnf', 'or-60-5-10-UC-20.cnf.gz.no_w.cnf', 'or-60-5-10-UC-30.cnf.gz.no_w.cnf', 'or-60-5-10-UC-40.cnf.gz.no_w.cnf', 'or-60-5-1.cnf.gz.no_w.cnf', 'or-60-5-1-UC-10.cnf.gz.no_w.cnf', 'or-60-5-1-UC-20.cnf.gz.no_w.cnf', 'or-60-5-1-UC-30.cnf.gz.no_w.cnf', 'or-60-5-1-UC-40.cnf.gz.no_w.cnf', 'or-60-5-2.cnf.gz.no_w.cnf', 'or-60-5-2-UC-10.cnf.gz.no_w.cnf', 'or-60-5-2-UC-20.cnf.gz.no_w.cnf', 'or-60-5-2-UC-30.cnf.gz.no_w.cnf', 'or-60-5-2-UC-40.cnf.gz.no_w.cnf', 'or-60-5-3.cnf.gz.no_w.cnf', 'or-60-5-3-UC-10.cnf.gz.no_w.cnf', 'or-60-5-3-UC-20.cnf.gz.no_w.cnf', 'or-60-5-3-UC-30.cnf.gz.no_w.cnf', 'or-60-5-3-UC-40.cnf.gz.no_w.cnf', 'or-60-5-4.cnf.gz.no_w.cnf', 'or-60-5-4-UC-10.cnf.gz.no_w.cnf', 'or-60-5-4-UC-20.cnf.gz.no_w.cnf', 'or-60-5-4-UC-30.cnf.gz.no_w.cnf', 'or-60-5-4-UC-40.cnf.gz.no_w.cnf', 'or-60-5-5.cnf.gz.no_w.cnf', 'or-60-5-5-UC-10.cnf.gz.no_w.cnf', 'or-60-5-5-UC-20.cnf.gz.no_w.cnf', 'or-60-5-5-UC-30.cnf.gz.no_w.cnf', 'or-60-5-5-UC-40.cnf.gz.no_w.cnf', 'or-60-5-6.cnf.gz.no_w.cnf', 'or-60-5-6-UC-10.cnf.gz.no_w.cnf', 'or-60-5-6-UC-20.cnf.gz.no_w.cnf', 'or-60-5-6-UC-30.cnf.gz.no_w.cnf', 'or-60-5-6-UC-40.cnf.gz.no_w.cnf', 'or-60-5-7.cnf.gz.no_w.cnf', 'or-60-5-7-UC-10.cnf.gz.no_w.cnf', 'or-60-5-7-UC-20.cnf.gz.no_w.cnf', 'or-60-5-7-UC-30.cnf.gz.no_w.cnf', 'or-60-5-7-UC-40.cnf.gz.no_w.cnf', 'or-60-5-8.cnf.gz.no_w.cnf', 
'or-60-5-8-UC-10.cnf.gz.no_w.cnf', 'or-60-5-8-UC-20.cnf.gz.no_w.cnf', 'or-60-5-8-UC-30.cnf.gz.no_w.cnf', 'or-60-5-8-UC-40.cnf.gz.no_w.cnf', 'or-60-5-9.cnf.gz.no_w.cnf', 'or-60-5-9-UC-10.cnf.gz.no_w.cnf', 'or-60-5-9-UC-20.cnf.gz.no_w.cnf', 'or-60-5-9-UC-30.cnf.gz.no_w.cnf', 'or-60-5-9-UC-40.cnf.gz.no_w.cnf', 'or-70-10-10.cnf.gz.no_w.cnf', 'or-70-10-10-UC-10.cnf.gz.no_w.cnf', 'or-70-10-10-UC-20.cnf.gz.no_w.cnf', 'or-70-10-10-UC-30.cnf.gz.no_w.cnf', 'or-70-10-10-UC-40.cnf.gz.no_w.cnf', 'or-70-10-1.cnf.gz.no_w.cnf', 'or-70-10-1-UC-10.cnf.gz.no_w.cnf', 'or-70-10-1-UC-20.cnf.gz.no_w.cnf', 'or-70-10-1-UC-30.cnf.gz.no_w.cnf', 'or-70-10-1-UC-40.cnf.gz.no_w.cnf', 'or-70-10-2.cnf.gz.no_w.cnf', 'or-70-10-2-UC-10.cnf.gz.no_w.cnf', 'or-70-10-2-UC-20.cnf.gz.no_w.cnf', 'or-70-10-2-UC-30.cnf.gz.no_w.cnf', 'or-70-10-2-UC-40.cnf.gz.no_w.cnf', 'or-70-10-3.cnf.gz.no_w.cnf', 'or-70-10-3-UC-10.cnf.gz.no_w.cnf', 'or-70-10-3-UC-20.cnf.gz.no_w.cnf', 'or-70-10-3-UC-30.cnf.gz.no_w.cnf', 'or-70-10-3-UC-40.cnf.gz.no_w.cnf', 'or-70-10-4.cnf.gz.no_w.cnf', 'or-70-10-4-UC-10.cnf.gz.no_w.cnf', 'or-70-10-4-UC-20.cnf.gz.no_w.cnf', 'or-70-10-4-UC-30.cnf.gz.no_w.cnf', 'or-70-10-4-UC-40.cnf.gz.no_w.cnf', 'or-70-10-5.cnf.gz.no_w.cnf', 'or-70-10-5-UC-10.cnf.gz.no_w.cnf', 'or-70-10-5-UC-20.cnf.gz.no_w.cnf', 'or-70-10-5-UC-30.cnf.gz.no_w.cnf', 'or-70-10-5-UC-40.cnf.gz.no_w.cnf', 'or-70-10-6.cnf.gz.no_w.cnf', 'or-70-10-6-UC-10.cnf.gz.no_w.cnf', 'or-70-10-6-UC-20.cnf.gz.no_w.cnf', 'or-70-10-6-UC-30.cnf.gz.no_w.cnf', 'or-70-10-6-UC-40.cnf.gz.no_w.cnf', 'or-70-10-7.cnf.gz.no_w.cnf', 'or-70-10-7-UC-10.cnf.gz.no_w.cnf', 'or-70-10-7-UC-20.cnf.gz.no_w.cnf', 'or-70-10-7-UC-30.cnf.gz.no_w.cnf', 'or-70-10-7-UC-40.cnf.gz.no_w.cnf', 'or-70-10-8.cnf.gz.no_w.cnf', 'or-70-10-8-UC-10.cnf.gz.no_w.cnf', 'or-70-10-8-UC-20.cnf.gz.no_w.cnf', 'or-70-10-8-UC-30.cnf.gz.no_w.cnf', 'or-70-10-8-UC-40.cnf.gz.no_w.cnf', 'or-70-10-9.cnf.gz.no_w.cnf', 'or-70-10-9-UC-10.cnf.gz.no_w.cnf', 'or-70-10-9-UC-20.cnf.gz.no_w.cnf', 
'or-70-10-9-UC-30.cnf.gz.no_w.cnf', 'or-70-10-9-UC-40.cnf.gz.no_w.cnf', 'or-70-20-10.cnf.gz.no_w.cnf', 'or-70-20-10-UC-10.cnf.gz.no_w.cnf', 'or-70-20-10-UC-20.cnf.gz.no_w.cnf', 'or-70-20-10-UC-30.cnf.gz.no_w.cnf', 'or-70-20-10-UC-40.cnf.gz.no_w.cnf', 'or-70-20-1.cnf.gz.no_w.cnf', 'or-70-20-1-UC-10.cnf.gz.no_w.cnf', 'or-70-20-1-UC-20.cnf.gz.no_w.cnf', 'or-70-20-1-UC-30.cnf.gz.no_w.cnf', 'or-70-20-1-UC-40.cnf.gz.no_w.cnf', 'or-70-20-2.cnf.gz.no_w.cnf', 'or-70-20-2-UC-10.cnf.gz.no_w.cnf', 'or-70-20-2-UC-20.cnf.gz.no_w.cnf', 'or-70-20-2-UC-30.cnf.gz.no_w.cnf', 'or-70-20-2-UC-40.cnf.gz.no_w.cnf', 'or-70-20-3.cnf.gz.no_w.cnf', 'or-70-20-3-UC-10.cnf.gz.no_w.cnf', 'or-70-20-3-UC-20.cnf.gz.no_w.cnf', 'or-70-20-3-UC-30.cnf.gz.no_w.cnf', 'or-70-20-3-UC-40.cnf.gz.no_w.cnf', 'or-70-20-4.cnf.gz.no_w.cnf', 'or-70-20-4-UC-10.cnf.gz.no_w.cnf', 'or-70-20-4-UC-20.cnf.gz.no_w.cnf', 'or-70-20-4-UC-30.cnf.gz.no_w.cnf', 'or-70-20-4-UC-40.cnf.gz.no_w.cnf', 'or-70-20-5.cnf.gz.no_w.cnf', 'or-70-20-5-UC-10.cnf.gz.no_w.cnf', 'or-70-20-5-UC-20.cnf.gz.no_w.cnf', 'or-70-20-5-UC-30.cnf.gz.no_w.cnf', 'or-70-20-5-UC-40.cnf.gz.no_w.cnf', 'or-70-20-6.cnf.gz.no_w.cnf', 'or-70-20-6-UC-10.cnf.gz.no_w.cnf', 'or-70-20-6-UC-20.cnf.gz.no_w.cnf', 'or-70-20-6-UC-30.cnf.gz.no_w.cnf', 'or-70-20-6-UC-40.cnf.gz.no_w.cnf', 'or-70-20-7.cnf.gz.no_w.cnf', 'or-70-20-7-UC-10.cnf.gz.no_w.cnf', 'or-70-20-7-UC-20.cnf.gz.no_w.cnf', 'or-70-20-7-UC-30.cnf.gz.no_w.cnf', 'or-70-20-7-UC-40.cnf.gz.no_w.cnf', 'or-70-20-8.cnf.gz.no_w.cnf', 'or-70-20-8-UC-10.cnf.gz.no_w.cnf', 'or-70-20-8-UC-20.cnf.gz.no_w.cnf', 'or-70-20-8-UC-30.cnf.gz.no_w.cnf', 'or-70-20-8-UC-40.cnf.gz.no_w.cnf', 'or-70-20-9.cnf.gz.no_w.cnf', 'or-70-20-9-UC-10.cnf.gz.no_w.cnf', 'or-70-20-9-UC-20.cnf.gz.no_w.cnf', 'or-70-20-9-UC-30.cnf.gz.no_w.cnf', 'or-70-20-9-UC-40.cnf.gz.no_w.cnf', 'or-70-5-10.cnf.gz.no_w.cnf', 'or-70-5-10-UC-10.cnf.gz.no_w.cnf', 'or-70-5-10-UC-20.cnf.gz.no_w.cnf', 'or-70-5-10-UC-30.cnf.gz.no_w.cnf', 'or-70-5-10-UC-40.cnf.gz.no_w.cnf', 
'or-70-5-1.cnf.gz.no_w.cnf', 'or-70-5-1-UC-10.cnf.gz.no_w.cnf', 'or-70-5-1-UC-20.cnf.gz.no_w.cnf', 'or-70-5-1-UC-30.cnf.gz.no_w.cnf', 'or-70-5-1-UC-40.cnf.gz.no_w.cnf', 'or-70-5-2.cnf.gz.no_w.cnf', 'or-70-5-2-UC-10.cnf.gz.no_w.cnf', 'or-70-5-2-UC-20.cnf.gz.no_w.cnf', 'or-70-5-2-UC-30.cnf.gz.no_w.cnf', 'or-70-5-2-UC-40.cnf.gz.no_w.cnf', 'or-70-5-3.cnf.gz.no_w.cnf', 'or-70-5-3-UC-10.cnf.gz.no_w.cnf', 'or-70-5-3-UC-20.cnf.gz.no_w.cnf', 'or-70-5-3-UC-30.cnf.gz.no_w.cnf', 'or-70-5-3-UC-40.cnf.gz.no_w.cnf', 'or-70-5-4.cnf.gz.no_w.cnf', 'or-70-5-4-UC-10.cnf.gz.no_w.cnf', 'or-70-5-4-UC-20.cnf.gz.no_w.cnf', 'or-70-5-4-UC-30.cnf.gz.no_w.cnf', 'or-70-5-4-UC-40.cnf.gz.no_w.cnf', 'or-70-5-5.cnf.gz.no_w.cnf', 'or-70-5-5-UC-10.cnf.gz.no_w.cnf', 'or-70-5-5-UC-20.cnf.gz.no_w.cnf', 'or-70-5-5-UC-30.cnf.gz.no_w.cnf', 'or-70-5-5-UC-40.cnf.gz.no_w.cnf', 'or-70-5-6.cnf.gz.no_w.cnf', 'or-70-5-6-UC-10.cnf.gz.no_w.cnf', 'or-70-5-6-UC-20.cnf.gz.no_w.cnf', 'or-70-5-6-UC-30.cnf.gz.no_w.cnf', 'or-70-5-6-UC-40.cnf.gz.no_w.cnf', 'or-70-5-7.cnf.gz.no_w.cnf', 'or-70-5-7-UC-10.cnf.gz.no_w.cnf', 'or-70-5-7-UC-20.cnf.gz.no_w.cnf', 'or-70-5-7-UC-30.cnf.gz.no_w.cnf', 'or-70-5-7-UC-40.cnf.gz.no_w.cnf', 'or-70-5-8.cnf.gz.no_w.cnf', 'or-70-5-8-UC-10.cnf.gz.no_w.cnf', 'or-70-5-8-UC-20.cnf.gz.no_w.cnf', 'or-70-5-8-UC-30.cnf.gz.no_w.cnf', 'or-70-5-8-UC-40.cnf.gz.no_w.cnf', 'or-70-5-9.cnf.gz.no_w.cnf', 'or-70-5-9-UC-10.cnf.gz.no_w.cnf', 'or-70-5-9-UC-20.cnf.gz.no_w.cnf', 'or-70-5-9-UC-30.cnf.gz.no_w.cnf', 'or-70-5-9-UC-40.cnf.gz.no_w.cnf', 'parity.sk_11_11.cnf.gz.no_w.cnf', 'partition.sk_22_155.cnf.gz.no_w.cnf', 'PhaseService.sk_14_27.cnf.gz.no_w.cnf', 'Pollard.sk_1_10.cnf.gz.no_w.cnf', 'polynomial.sk_7_25.cnf.gz.no_w.cnf', 'ProcessBean.sk_8_64.cnf.gz.no_w.cnf', 'prod-16.cnf.gz.no_w.cnf', 'prod-1s.cnf.gz.no_w.cnf', 'prod-20.cnf.gz.no_w.cnf', 'prod-24.cnf.gz.no_w.cnf', 'prod-28.cnf.gz.no_w.cnf', 'prod-2.cnf.gz.no_w.cnf', 'prod-2s.cnf.gz.no_w.cnf', 'prod-32.cnf.gz.no_w.cnf', 'prod-3s.cnf.gz.no_w.cnf', 
'prod-4.cnf.gz.no_w.cnf', 'prod-4s.cnf.gz.no_w.cnf', 'prod-8.cnf.gz.no_w.cnf', 'prod-8s.cnf.gz.no_w.cnf', 'ProjectService3.sk_12_55.cnf.gz.no_w.cnf', 'registerlesSwap.sk_3_10.cnf.gz.no_w.cnf', 'reverse.sk_11_258.cnf.gz.no_w.cnf', 's1196a_15_7.cnf.gz.no_w.cnf', 's1196a_3_2.cnf.gz.no_w.cnf', 's1196a_7_4.cnf.gz.no_w.cnf', 's1238a_15_7.cnf.gz.no_w.cnf', 's1238a_3_2.cnf.gz.no_w.cnf', 's1238a_7_4.cnf.gz.no_w.cnf', 's13207a_15_7.cnf.gz.no_w.cnf', 's13207a_3_2.cnf.gz.no_w.cnf', 's13207a_7_4.cnf.gz.no_w.cnf', 's1423a_15_7.cnf.gz.no_w.cnf', 's1423a_3_2.cnf.gz.no_w.cnf', 's1423a_7_4.cnf.gz.no_w.cnf', 's1488_15_7.cnf.gz.no_w.cnf', 's1488_3_2.cnf.gz.no_w.cnf', 's1488_7_4.cnf.gz.no_w.cnf', 's15850a_15_7.cnf.gz.no_w.cnf', 's15850a_3_2.cnf.gz.no_w.cnf', 's15850a_7_4.cnf.gz.no_w.cnf', 's27_15_7.cnf.gz.no_w.cnf', 's27_3_2.cnf.gz.no_w.cnf', 's27_7_4.cnf.gz.no_w.cnf', 's27_new_15_7.cnf.gz.no_w.cnf', 's27_new_3_2.cnf.gz.no_w.cnf', 's27_new_7_4.cnf.gz.no_w.cnf', 's298_15_7.cnf.gz.no_w.cnf', 's298_3_2.cnf.gz.no_w.cnf', 's298_7_4.cnf.gz.no_w.cnf', 's344_15_7.cnf.gz.no_w.cnf', 's344_3_2.cnf.gz.no_w.cnf', 's344_7_4.cnf.gz.no_w.cnf', 's349_15_7.cnf.gz.no_w.cnf', 's349_3_2.cnf.gz.no_w.cnf', 's349_7_4.cnf.gz.no_w.cnf', 's35932_15_7.cnf.gz.no_w.cnf', 's35932_3_2.cnf.gz.no_w.cnf', 's35932_7_4.cnf.gz.no_w.cnf', 's382_15_7.cnf.gz.no_w.cnf', 's382_3_2.cnf.gz.no_w.cnf', 's382_7_4.cnf.gz.no_w.cnf', 's38417_15_7.cnf.gz.no_w.cnf', 's38417_3_2.cnf.gz.no_w.cnf', 's38417_7_4.cnf.gz.no_w.cnf', 's38584_15_7.cnf.gz.no_w.cnf', 's38584_3_2.cnf.gz.no_w.cnf', 's38584_7_4.cnf.gz.no_w.cnf', 's420_15_7.cnf.gz.no_w.cnf', 's420_3_2.cnf.gz.no_w.cnf', 's420_7_4.cnf.gz.no_w.cnf', 's420_new1_15_7.cnf.gz.no_w.cnf', 's420_new1_3_2.cnf.gz.no_w.cnf', 's420_new_15_7.cnf.gz.no_w.cnf', 's420_new1_7_4.cnf.gz.no_w.cnf', 's420_new_3_2.cnf.gz.no_w.cnf', 's420_new_7_4.cnf.gz.no_w.cnf', 's444_15_7.cnf.gz.no_w.cnf', 's444_3_2.cnf.gz.no_w.cnf', 's444_7_4.cnf.gz.no_w.cnf', 's510_15_7.cnf.gz.no_w.cnf', 's510_3_2.cnf.gz.no_w.cnf', 
's510_7_4.cnf.gz.no_w.cnf', 's526_15_7.cnf.gz.no_w.cnf', 's526_3_2.cnf.gz.no_w.cnf', 's526_7_4.cnf.gz.no_w.cnf', 's526a_15_7.cnf.gz.no_w.cnf', 's526a_3_2.cnf.gz.no_w.cnf', 's526a_7_4.cnf.gz.no_w.cnf', 's5378a_15_7.cnf.gz.no_w.cnf', 's5378a_3_2.cnf.gz.no_w.cnf', 's5378a_7_4.cnf.gz.no_w.cnf', 's641_15_7.cnf.gz.no_w.cnf', 's641_3_2.cnf.gz.no_w.cnf', 's641_7_4.cnf.gz.no_w.cnf', 's713_15_7.cnf.gz.no_w.cnf', 's713_3_2.cnf.gz.no_w.cnf', 's713_7_4.cnf.gz.no_w.cnf', 's820a_15_7.cnf.gz.no_w.cnf', 's820a_3_2.cnf.gz.no_w.cnf', 's820a_7_4.cnf.gz.no_w.cnf', 's832a_15_7.cnf.gz.no_w.cnf', 's832a_3_2.cnf.gz.no_w.cnf', 's832a_7_4.cnf.gz.no_w.cnf', 's838_15_7.cnf.gz.no_w.cnf', 's838_3_2.cnf.gz.no_w.cnf', 's838_7_4.cnf.gz.no_w.cnf', 's9234a_15_7.cnf.gz.no_w.cnf', 's9234a_3_2.cnf.gz.no_w.cnf', 's9234a_7_4.cnf.gz.no_w.cnf', 's953a_15_7.cnf.gz.no_w.cnf', 's953a_3_2.cnf.gz.no_w.cnf', 's953a_7_4.cnf.gz.no_w.cnf', 'SetTest.sk_9_21.cnf.gz.no_w.cnf', 'signedAvg.sk_8_1020.cnf.gz.no_w.cnf', 'sort.sk_8_52.cnf.gz.no_w.cnf', 'tableBasedAddition.sk_240_1024.cnf.gz.no_w.cnf', 'tire-1.cnf.gz.no_w.cnf', 'tire-2.cnf.gz.no_w.cnf', 'tire-3.cnf.gz.no_w.cnf', 'tire-4.cnf.gz.no_w.cnf', 'tutorial1.sk_1_1.cnf.gz.no_w.cnf', 'tutorial2.sk_3_4.cnf.gz.no_w.cnf', 'tutorial3.sk_4_31.cnf.gz.no_w.cnf', 'UserServiceImpl.sk_8_32.cnf.gz.no_w.cnf', 'xpose.sk_6_134.cnf.gz.no_w.cnf']
else:
# PROBLEM_NAMES = ['01A-1.cnf.gz.no_w.cnf', '01B-1.cnf.gz.no_w.cnf']
# PROBLEM_NAMES = ['75-18-7-q.cnf.gz.no_w.cnf']
# all problems
PROBLEM_NAMES = ['01A-1.cnf.gz.no_w.cnf', '01B-1.cnf.gz.no_w.cnf', '01B-2.cnf.gz.no_w.cnf', '01B-3.cnf.gz.no_w.cnf', '01B-4.cnf.gz.no_w.cnf', '01B-5.cnf.gz.no_w.cnf', '02A-1.cnf.gz.no_w.cnf', '02A-2.cnf.gz.no_w.cnf', '02A-3.cnf.gz.no_w.cnf', '02B-1.cnf.gz.no_w.cnf', '02B-2.cnf.gz.no_w.cnf', '02B-3.cnf.gz.no_w.cnf', '02B-4.cnf.gz.no_w.cnf', '02B-5.cnf.gz.no_w.cnf', '03A-1.cnf.gz.no_w.cnf', '03A-2.cnf.gz.no_w.cnf', '03B-1.cnf.gz.no_w.cnf', '03B-2.cnf.gz.no_w.cnf', '03B-3.cnf.gz.no_w.cnf', '03B-4.cnf.gz.no_w.cnf', '04A-1.cnf.gz.no_w.cnf', '04A-2.cnf.gz.no_w.cnf', '04A-3.cnf.gz.no_w.cnf', '04B-1.cnf.gz.no_w.cnf', '04B-2.cnf.gz.no_w.cnf', '04B-3.cnf.gz.no_w.cnf', '04B-4.cnf.gz.no_w.cnf', '05A-1.cnf.gz.no_w.cnf', '05A-2.cnf.gz.no_w.cnf', '05B-1.cnf.gz.no_w.cnf', '05B-2.cnf.gz.no_w.cnf', '05B-3.cnf.gz.no_w.cnf', '06A-1.cnf.gz.no_w.cnf', '06A-2.cnf.gz.no_w.cnf', '06A-3.cnf.gz.no_w.cnf', '06A-4.cnf.gz.no_w.cnf', '06B-1.cnf.gz.no_w.cnf', '06B-2.cnf.gz.no_w.cnf', '06B-3.cnf.gz.no_w.cnf', '06B-4.cnf.gz.no_w.cnf', '07A-1.cnf.gz.no_w.cnf', '07A-2.cnf.gz.no_w.cnf', '07A-3.cnf.gz.no_w.cnf', '07A-4.cnf.gz.no_w.cnf', '07A-5.cnf.gz.no_w.cnf', '07B-1.cnf.gz.no_w.cnf', '07B-2.cnf.gz.no_w.cnf', '07B-3.cnf.gz.no_w.cnf', '07B-4.cnf.gz.no_w.cnf', '07B-5.cnf.gz.no_w.cnf', '07B-6.cnf.gz.no_w.cnf', '08A-1.cnf.gz.no_w.cnf', '08A-2.cnf.gz.no_w.cnf', '08A-3.cnf.gz.no_w.cnf', '08A-4.cnf.gz.no_w.cnf', '08B-1.cnf.gz.no_w.cnf', '08B-2.cnf.gz.no_w.cnf', '08B-3.cnf.gz.no_w.cnf', '08B-4.cnf.gz.no_w.cnf', '09A-1.cnf.gz.no_w.cnf', '09A-2.cnf.gz.no_w.cnf', '09A-3.cnf.gz.no_w.cnf', '09B-1.cnf.gz.no_w.cnf', '09B-2.cnf.gz.no_w.cnf', '09B-3.cnf.gz.no_w.cnf', '09B-4.cnf.gz.no_w.cnf', '09B-5.cnf.gz.no_w.cnf', '09B-6.cnf.gz.no_w.cnf', '107.sk_3_90.cnf.gz.no_w.cnf', '109.sk_4_36.cnf.gz.no_w.cnf', '10A-1.cnf.gz.no_w.cnf', '10A-2.cnf.gz.no_w.cnf', '10A-3.cnf.gz.no_w.cnf', '10A-4.cnf.gz.no_w.cnf', '10B-10.cnf.gz.no_w.cnf', '10B-11.cnf.gz.no_w.cnf', '10B-1.cnf.gz.no_w.cnf', 
'10B-2.cnf.gz.no_w.cnf', '10B-3.cnf.gz.no_w.cnf', '10B-4.cnf.gz.no_w.cnf', '10B-5.cnf.gz.no_w.cnf', '10B-6.cnf.gz.no_w.cnf', '10B-7.cnf.gz.no_w.cnf', '10B-8.cnf.gz.no_w.cnf', '10B-9.cnf.gz.no_w.cnf', '10.sk_1_46.cnf.gz.no_w.cnf', '110.sk_3_88.cnf.gz.no_w.cnf', '111.sk_2_36.cnf.gz.no_w.cnf', '11A-1.cnf.gz.no_w.cnf', '11A-2.cnf.gz.no_w.cnf', '11A-3.cnf.gz.no_w.cnf', '11A-4.cnf.gz.no_w.cnf', '11B-1.cnf.gz.no_w.cnf', '11B-2.cnf.gz.no_w.cnf', '11B-3.cnf.gz.no_w.cnf', '11B-4.cnf.gz.no_w.cnf', '11B-5.cnf.gz.no_w.cnf', '12A-1.cnf.gz.no_w.cnf', '12A-2.cnf.gz.no_w.cnf', '12A-3.cnf.gz.no_w.cnf', '12A-4.cnf.gz.no_w.cnf', '12B-1.cnf.gz.no_w.cnf', '12B-2.cnf.gz.no_w.cnf', '12B-3.cnf.gz.no_w.cnf', '12B-4.cnf.gz.no_w.cnf', '12B-5.cnf.gz.no_w.cnf', '12B-6.cnf.gz.no_w.cnf', '13A-1.cnf.gz.no_w.cnf', '13A-2.cnf.gz.no_w.cnf', '13A-3.cnf.gz.no_w.cnf', '13A-4.cnf.gz.no_w.cnf', '13B-1.cnf.gz.no_w.cnf', '13B-2.cnf.gz.no_w.cnf', '13B-3.cnf.gz.no_w.cnf', '13B-4.cnf.gz.no_w.cnf', '13B-5.cnf.gz.no_w.cnf', '14A-1.cnf.gz.no_w.cnf', '14A-2.cnf.gz.no_w.cnf', '14A-3.cnf.gz.no_w.cnf', '15A-1.cnf.gz.no_w.cnf', '15A-2.cnf.gz.no_w.cnf', '15A-3.cnf.gz.no_w.cnf', '15A-4.cnf.gz.no_w.cnf', '15B-1.cnf.gz.no_w.cnf', '15B-2.cnf.gz.no_w.cnf', '15B-3.cnf.gz.no_w.cnf', '15B-4.cnf.gz.no_w.cnf', '15B-5.cnf.gz.no_w.cnf', '17A-1.cnf.gz.no_w.cnf', '17A-2.cnf.gz.no_w.cnf', '17A-3.cnf.gz.no_w.cnf', '17A-4.cnf.gz.no_w.cnf', '17A-5.cnf.gz.no_w.cnf', '17A-6.cnf.gz.no_w.cnf', '17B-1.cnf.gz.no_w.cnf', '17B-2.cnf.gz.no_w.cnf', '17B-3.cnf.gz.no_w.cnf', '17B-4.cnf.gz.no_w.cnf', '17B-5.cnf.gz.no_w.cnf', '17.sk_3_45.cnf.gz.no_w.cnf', '18A-1.cnf.gz.no_w.cnf', '18A-2.cnf.gz.no_w.cnf', '18A-3.cnf.gz.no_w.cnf', '18A-4.cnf.gz.no_w.cnf', '19.sk_3_48.cnf.gz.no_w.cnf', '20.sk_1_51.cnf.gz.no_w.cnf', '27.sk_3_32.cnf.gz.no_w.cnf', '29.sk_3_45.cnf.gz.no_w.cnf', '30.sk_5_76.cnf.gz.no_w.cnf', '32.sk_4_38.cnf.gz.no_w.cnf', '35.sk_3_52.cnf.gz.no_w.cnf', '36.sk_3_77.cnf.gz.no_w.cnf', '4step.cnf.gz.no_w.cnf', '50-10-10-q.cnf.gz.no_w.cnf', 
'50-10-1-q.cnf.gz.no_w.cnf', '50-10-2-q.cnf.gz.no_w.cnf', '50-10-3-q.cnf.gz.no_w.cnf', '50-10-4-q.cnf.gz.no_w.cnf', '50-10-5-q.cnf.gz.no_w.cnf', '50-10-6-q.cnf.gz.no_w.cnf', '50-10-7-q.cnf.gz.no_w.cnf', '50-10-8-q.cnf.gz.no_w.cnf', '50-10-9-q.cnf.gz.no_w.cnf', '50-12-10-q.cnf.gz.no_w.cnf', '50-12-1-q.cnf.gz.no_w.cnf', '50-12-2-q.cnf.gz.no_w.cnf', '50-12-3-q.cnf.gz.no_w.cnf', '50-12-4-q.cnf.gz.no_w.cnf', '50-12-5-q.cnf.gz.no_w.cnf', '50-12-6-q.cnf.gz.no_w.cnf', '50-12-7-q.cnf.gz.no_w.cnf', '50-12-8-q.cnf.gz.no_w.cnf', '50-12-9-q.cnf.gz.no_w.cnf', '50-14-10-q.cnf.gz.no_w.cnf', '50-14-1-q.cnf.gz.no_w.cnf', '50-14-2-q.cnf.gz.no_w.cnf', '50-14-3-q.cnf.gz.no_w.cnf', '50-14-4-q.cnf.gz.no_w.cnf', '50-14-5-q.cnf.gz.no_w.cnf', '50-14-6-q.cnf.gz.no_w.cnf', '50-14-7-q.cnf.gz.no_w.cnf', '50-14-8-q.cnf.gz.no_w.cnf', '50-14-9-q.cnf.gz.no_w.cnf', '50-16-10-q.cnf.gz.no_w.cnf', '50-16-1-q.cnf.gz.no_w.cnf', '50-16-2-q.cnf.gz.no_w.cnf', '50-16-3-q.cnf.gz.no_w.cnf', '50-16-4-q.cnf.gz.no_w.cnf', '50-16-5-q.cnf.gz.no_w.cnf', '50-16-6-q.cnf.gz.no_w.cnf', '50-16-7-q.cnf.gz.no_w.cnf', '50-16-8-q.cnf.gz.no_w.cnf', '50-16-9-q.cnf.gz.no_w.cnf', '50-18-10-q.cnf.gz.no_w.cnf', '50-18-1-q.cnf.gz.no_w.cnf', '50-18-2-q.cnf.gz.no_w.cnf', '50-18-3-q.cnf.gz.no_w.cnf', '50-18-4-q.cnf.gz.no_w.cnf', '50-18-5-q.cnf.gz.no_w.cnf', '50-18-6-q.cnf.gz.no_w.cnf', '50-18-7-q.cnf.gz.no_w.cnf', '50-18-8-q.cnf.gz.no_w.cnf', '50-18-9-q.cnf.gz.no_w.cnf', '50-20-10-q.cnf.gz.no_w.cnf', '50-20-1-q.cnf.gz.no_w.cnf', '50-20-2-q.cnf.gz.no_w.cnf', '50-20-3-q.cnf.gz.no_w.cnf', '50-20-4-q.cnf.gz.no_w.cnf', '50-20-5-q.cnf.gz.no_w.cnf', '50-20-6-q.cnf.gz.no_w.cnf', '50-20-7-q.cnf.gz.no_w.cnf', '50-20-8-q.cnf.gz.no_w.cnf', '50-20-9-q.cnf.gz.no_w.cnf', '51.sk_4_38.cnf.gz.no_w.cnf', '53.sk_4_32.cnf.gz.no_w.cnf', '54.sk_12_97.cnf.gz.no_w.cnf', '54.sk_12_97.cnf.gz.no_w.no_independent_set.cnf', '55.sk_3_46.cnf.gz.no_w.cnf', '56.sk_6_38.cnf.gz.no_w.cnf', '57.sk_4_64.cnf.gz.no_w.cnf', '5step.cnf.gz.no_w.cnf', 
'63.sk_3_64.cnf.gz.no_w.cnf', '70.sk_3_40.cnf.gz.no_w.cnf', '71.sk_3_65.cnf.gz.no_w.cnf', '75-10-10-q.cnf.gz.no_w.cnf', '75-10-1-q.cnf.gz.no_w.cnf', '75-10-2-q.cnf.gz.no_w.cnf', '75-10-3-q.cnf.gz.no_w.cnf', '75-10-4-q.cnf.gz.no_w.cnf', '75-10-5-q.cnf.gz.no_w.cnf', '75-10-6-q.cnf.gz.no_w.cnf', '75-10-7-q.cnf.gz.no_w.cnf', '75-10-8-q.cnf.gz.no_w.cnf', '75-10-9-q.cnf.gz.no_w.cnf', '75-12-10-q.cnf.gz.no_w.cnf', '75-12-1-q.cnf.gz.no_w.cnf', '75-12-2-q.cnf.gz.no_w.cnf', '75-12-3-q.cnf.gz.no_w.cnf', '75-12-4-q.cnf.gz.no_w.cnf', '75-12-5-q.cnf.gz.no_w.cnf', '75-12-6-q.cnf.gz.no_w.cnf', '75-12-7-q.cnf.gz.no_w.cnf', '75-12-8-q.cnf.gz.no_w.cnf', '75-12-9-q.cnf.gz.no_w.cnf', '75-14-10-q.cnf.gz.no_w.cnf', '75-14-1-q.cnf.gz.no_w.cnf', '75-14-2-q.cnf.gz.no_w.cnf', '75-14-3-q.cnf.gz.no_w.cnf', '75-14-4-q.cnf.gz.no_w.cnf', '75-14-5-q.cnf.gz.no_w.cnf', '75-14-6-q.cnf.gz.no_w.cnf', '75-14-7-q.cnf.gz.no_w.cnf', '75-14-8-q.cnf.gz.no_w.cnf', '75-14-9-q.cnf.gz.no_w.cnf', '75-15-10-q.cnf.gz.no_w.cnf', '75-15-1-q.cnf.gz.no_w.cnf', '75-15-2-q.cnf.gz.no_w.cnf', '75-15-3-q.cnf.gz.no_w.cnf', '75-15-4-q.cnf.gz.no_w.cnf', '75-15-5-q.cnf.gz.no_w.cnf', '75-15-6-q.cnf.gz.no_w.cnf', '75-15-7-q.cnf.gz.no_w.cnf', '75-15-8-q.cnf.gz.no_w.cnf', '75-15-9-q.cnf.gz.no_w.cnf', '75-16-10-q.cnf.gz.no_w.cnf', '75-16-1-q.cnf.gz.no_w.cnf', '75-16-2-q.cnf.gz.no_w.cnf', '75-16-3-q.cnf.gz.no_w.cnf', '75-16-4-q.cnf.gz.no_w.cnf', '75-16-5-q.cnf.gz.no_w.cnf', '75-16-6-q.cnf.gz.no_w.cnf', '75-16-7-q.cnf.gz.no_w.cnf', '75-16-8-q.cnf.gz.no_w.cnf', '75-16-9-q.cnf.gz.no_w.cnf', '75-17-10-q.cnf.gz.no_w.cnf', '75-17-1-q.cnf.gz.no_w.cnf', '75-17-2-q.cnf.gz.no_w.cnf', '75-17-3-q.cnf.gz.no_w.cnf', '75-17-4-q.cnf.gz.no_w.cnf', '75-17-5-q.cnf.gz.no_w.cnf', '75-17-6-q.cnf.gz.no_w.cnf', '75-17-7-q.cnf.gz.no_w.cnf', '75-17-8-q.cnf.gz.no_w.cnf', '75-17-9-q.cnf.gz.no_w.cnf', '75-18-10-q.cnf.gz.no_w.cnf', '75-18-1-q.cnf.gz.no_w.cnf', '75-18-2-q.cnf.gz.no_w.cnf', '75-18-3-q.cnf.gz.no_w.cnf', '75-18-4-q.cnf.gz.no_w.cnf', 
'75-18-5-q.cnf.gz.no_w.cnf', '75-18-6-q.cnf.gz.no_w.cnf', '75-18-7-q.cnf.gz.no_w.cnf', '75-18-8-q.cnf.gz.no_w.cnf', '75-18-9-q.cnf.gz.no_w.cnf', '75-19-10-q.cnf.gz.no_w.cnf', '75-19-1-q.cnf.gz.no_w.cnf', '75-19-2-q.cnf.gz.no_w.cnf', '75-19-3-q.cnf.gz.no_w.cnf', '75-19-4-q.cnf.gz.no_w.cnf', '75-19-5-q.cnf.gz.no_w.cnf', '75-19-6-q.cnf.gz.no_w.cnf', '75-19-7-q.cnf.gz.no_w.cnf', '75-19-8-q.cnf.gz.no_w.cnf', '75-19-9-q.cnf.gz.no_w.cnf', '75-20-10-q.cnf.gz.no_w.cnf', '75-20-1-q.cnf.gz.no_w.cnf', '75-20-2-q.cnf.gz.no_w.cnf', '75-20-3-q.cnf.gz.no_w.cnf', '75-20-4-q.cnf.gz.no_w.cnf', '75-20-5-q.cnf.gz.no_w.cnf', '75-20-6-q.cnf.gz.no_w.cnf', '75-20-7-q.cnf.gz.no_w.cnf', '75-20-8-q.cnf.gz.no_w.cnf', '75-20-9-q.cnf.gz.no_w.cnf', '75-21-10-q.cnf.gz.no_w.cnf', '75-21-1-q.cnf.gz.no_w.cnf', '75-21-2-q.cnf.gz.no_w.cnf', '75-21-3-q.cnf.gz.no_w.cnf', '75-21-4-q.cnf.gz.no_w.cnf', '75-21-5-q.cnf.gz.no_w.cnf', '75-21-6-q.cnf.gz.no_w.cnf', '75-21-7-q.cnf.gz.no_w.cnf', '75-21-8-q.cnf.gz.no_w.cnf', '75-21-9-q.cnf.gz.no_w.cnf', '75-22-10-q.cnf.gz.no_w.cnf', '75-22-1-q.cnf.gz.no_w.cnf', '75-22-2-q.cnf.gz.no_w.cnf', '75-22-3-q.cnf.gz.no_w.cnf', '75-22-4-q.cnf.gz.no_w.cnf', '75-22-5-q.cnf.gz.no_w.cnf', '75-22-6-q.cnf.gz.no_w.cnf', '75-22-7-q.cnf.gz.no_w.cnf', '75-22-8-q.cnf.gz.no_w.cnf', '75-22-9-q.cnf.gz.no_w.cnf', '75-23-10-q.cnf.gz.no_w.cnf', '75-23-1-q.cnf.gz.no_w.cnf', '75-23-2-q.cnf.gz.no_w.cnf', '75-23-3-q.cnf.gz.no_w.cnf', '75-23-4-q.cnf.gz.no_w.cnf', '75-23-5-q.cnf.gz.no_w.cnf', '75-23-6-q.cnf.gz.no_w.cnf', '75-23-7-q.cnf.gz.no_w.cnf', '75-23-8-q.cnf.gz.no_w.cnf', '75-23-9-q.cnf.gz.no_w.cnf', '75-24-10-q.cnf.gz.no_w.cnf', '75-24-1-q.cnf.gz.no_w.cnf', '75-24-2-q.cnf.gz.no_w.cnf', '75-24-3-q.cnf.gz.no_w.cnf', '75-24-4-q.cnf.gz.no_w.cnf', '75-24-5-q.cnf.gz.no_w.cnf', '75-24-6-q.cnf.gz.no_w.cnf', '75-24-7-q.cnf.gz.no_w.cnf', '75-24-8-q.cnf.gz.no_w.cnf', '75-24-9-q.cnf.gz.no_w.cnf', '75-25-10-q.cnf.gz.no_w.cnf', '75-25-1-q.cnf.gz.no_w.cnf', '75-25-2-q.cnf.gz.no_w.cnf', 
'75-25-3-q.cnf.gz.no_w.cnf', '75-25-4-q.cnf.gz.no_w.cnf', '75-25-5-q.cnf.gz.no_w.cnf', '75-25-6-q.cnf.gz.no_w.cnf', '75-25-7-q.cnf.gz.no_w.cnf', '75-25-8-q.cnf.gz.no_w.cnf', '75-25-9-q.cnf.gz.no_w.cnf', '75-26-10-q.cnf.gz.no_w.cnf', '75-26-1-q.cnf.gz.no_w.cnf', '75-26-2-q.cnf.gz.no_w.cnf', '75-26-3-q.cnf.gz.no_w.cnf', '75-26-4-q.cnf.gz.no_w.cnf', '75-26-5-q.cnf.gz.no_w.cnf', '75-26-6-q.cnf.gz.no_w.cnf', '75-26-7-q.cnf.gz.no_w.cnf', '75-26-8-q.cnf.gz.no_w.cnf', '75-26-9-q.cnf.gz.no_w.cnf', '77.sk_3_44.cnf.gz.no_w.cnf', '79.sk_4_40.cnf.gz.no_w.cnf', '7.sk_4_50.cnf.gz.no_w.cnf', '80.sk_2_48.cnf.gz.no_w.cnf', '81.sk_5_51.cnf.gz.no_w.cnf', '84.sk_4_77.cnf.gz.no_w.cnf', '90-10-10-q.cnf.gz.no_w.cnf', '90-10-1-q.cnf.gz.no_w.cnf', '90-10-2-q.cnf.gz.no_w.cnf', '90-10-3-q.cnf.gz.no_w.cnf', '90-10-4-q.cnf.gz.no_w.cnf', '90-10-5-q.cnf.gz.no_w.cnf', '90-10-6-q.cnf.gz.no_w.cnf', '90-10-7-q.cnf.gz.no_w.cnf', '90-10-8-q.cnf.gz.no_w.cnf', '90-10-9-q.cnf.gz.no_w.cnf', '90-12-10-q.cnf.gz.no_w.cnf', '90-12-1-q.cnf.gz.no_w.cnf', '90-12-2-q.cnf.gz.no_w.cnf', '90-12-3-q.cnf.gz.no_w.cnf', '90-12-4-q.cnf.gz.no_w.cnf', '90-12-5-q.cnf.gz.no_w.cnf', '90-12-6-q.cnf.gz.no_w.cnf', '90-12-7-q.cnf.gz.no_w.cnf', '90-12-8-q.cnf.gz.no_w.cnf', '90-12-9-q.cnf.gz.no_w.cnf', '90-14-10-q.cnf.gz.no_w.cnf', '90-14-1-q.cnf.gz.no_w.cnf', '90-14-2-q.cnf.gz.no_w.cnf', '90-14-3-q.cnf.gz.no_w.cnf', '90-14-4-q.cnf.gz.no_w.cnf', '90-14-5-q.cnf.gz.no_w.cnf', '90-14-6-q.cnf.gz.no_w.cnf', '90-14-7-q.cnf.gz.no_w.cnf', '90-14-8-q.cnf.gz.no_w.cnf', '90-14-9-q.cnf.gz.no_w.cnf', '90-15-10-q.cnf.gz.no_w.cnf', '90-15-1-q.cnf.gz.no_w.cnf', '90-15-2-q.cnf.gz.no_w.cnf', '90-15-3-q.cnf.gz.no_w.cnf', '90-15-4-q.cnf.gz.no_w.cnf', '90-15-5-q.cnf.gz.no_w.cnf', '90-15-6-q.cnf.gz.no_w.cnf', '90-15-7-q.cnf.gz.no_w.cnf', '90-15-8-q.cnf.gz.no_w.cnf', '90-15-9-q.cnf.gz.no_w.cnf', '90-16-10-q.cnf.gz.no_w.cnf', '90-16-1-q.cnf.gz.no_w.cnf', '90-16-2-q.cnf.gz.no_w.cnf', '90-16-3-q.cnf.gz.no_w.cnf', '90-16-4-q.cnf.gz.no_w.cnf', 
'90-16-5-q.cnf.gz.no_w.cnf', '90-16-6-q.cnf.gz.no_w.cnf', '90-16-7-q.cnf.gz.no_w.cnf', '90-16-8-q.cnf.gz.no_w.cnf', '90-16-9-q.cnf.gz.no_w.cnf', '90-17-10-q.cnf.gz.no_w.cnf', '90-17-1-q.cnf.gz.no_w.cnf', '90-17-2-q.cnf.gz.no_w.cnf', '90-17-3-q.cnf.gz.no_w.cnf', '90-17-4-q.cnf.gz.no_w.cnf', '90-17-5-q.cnf.gz.no_w.cnf', '90-17-6-q.cnf.gz.no_w.cnf', '90-17-7-q.cnf.gz.no_w.cnf', '90-17-8-q.cnf.gz.no_w.cnf', '90-17-9-q.cnf.gz.no_w.cnf', '90-18-10-q.cnf.gz.no_w.cnf', '90-18-1-q.cnf.gz.no_w.cnf', '90-18-2-q.cnf.gz.no_w.cnf', '90-18-3-q.cnf.gz.no_w.cnf', '90-18-4-q.cnf.gz.no_w.cnf', '90-18-5-q.cnf.gz.no_w.cnf', '90-18-6-q.cnf.gz.no_w.cnf', '90-18-7-q.cnf.gz.no_w.cnf', '90-18-8-q.cnf.gz.no_w.cnf', '90-18-9-q.cnf.gz.no_w.cnf', '90-19-10-q.cnf.gz.no_w.cnf', '90-19-1-q.cnf.gz.no_w.cnf', '90-19-2-q.cnf.gz.no_w.cnf', '90-19-3-q.cnf.gz.no_w.cnf', '90-19-4-q.cnf.gz.no_w.cnf', '90-19-5-q.cnf.gz.no_w.cnf', '90-19-6-q.cnf.gz.no_w.cnf', '90-19-7-q.cnf.gz.no_w.cnf', '90-19-8-q.cnf.gz.no_w.cnf', '90-19-9-q.cnf.gz.no_w.cnf', '90-20-10-q.cnf.gz.no_w.cnf', '90-20-1-q.cnf.gz.no_w.cnf', '90-20-2-q.cnf.gz.no_w.cnf', '90-20-3-q.cnf.gz.no_w.cnf', '90-20-4-q.cnf.gz.no_w.cnf', '90-20-5-q.cnf.gz.no_w.cnf', '90-20-6-q.cnf.gz.no_w.cnf', '90-20-7-q.cnf.gz.no_w.cnf', '90-20-8-q.cnf.gz.no_w.cnf', '90-20-9-q.cnf.gz.no_w.cnf', '90-21-10-q.cnf.gz.no_w.cnf', '90-21-1-q.cnf.gz.no_w.cnf', '90-21-2-q.cnf.gz.no_w.cnf', '90-21-3-q.cnf.gz.no_w.cnf', '90-21-4-q.cnf.gz.no_w.cnf', '90-21-5-q.cnf.gz.no_w.cnf', '90-21-6-q.cnf.gz.no_w.cnf', '90-21-7-q.cnf.gz.no_w.cnf', '90-21-8-q.cnf.gz.no_w.cnf', '90-21-9-q.cnf.gz.no_w.cnf', '90-22-10-q.cnf.gz.no_w.cnf', '90-22-1-q.cnf.gz.no_w.cnf', '90-22-2-q.cnf.gz.no_w.cnf', '90-22-3-q.cnf.gz.no_w.cnf', '90-22-4-q.cnf.gz.no_w.cnf', '90-22-5-q.cnf.gz.no_w.cnf', '90-22-6-q.cnf.gz.no_w.cnf', '90-22-7-q.cnf.gz.no_w.cnf', '90-22-8-q.cnf.gz.no_w.cnf', '90-22-9-q.cnf.gz.no_w.cnf', '90-23-10-q.cnf.gz.no_w.cnf', '90-23-1-q.cnf.gz.no_w.cnf', '90-23-2-q.cnf.gz.no_w.cnf', 
'90-23-3-q.cnf.gz.no_w.cnf', '90-23-4-q.cnf.gz.no_w.cnf', '90-23-5-q.cnf.gz.no_w.cnf', '90-23-6-q.cnf.gz.no_w.cnf', '90-23-7-q.cnf.gz.no_w.cnf', '90-23-8-q.cnf.gz.no_w.cnf', '90-23-9-q.cnf.gz.no_w.cnf', '90-24-10-q.cnf.gz.no_w.cnf', '90-24-1-q.cnf.gz.no_w.cnf', '90-24-2-q.cnf.gz.no_w.cnf', '90-24-3-q.cnf.gz.no_w.cnf', '90-24-4-q.cnf.gz.no_w.cnf', '90-24-5-q.cnf.gz.no_w.cnf', '90-24-6-q.cnf.gz.no_w.cnf', '90-24-7-q.cnf.gz.no_w.cnf', '90-24-8-q.cnf.gz.no_w.cnf', '90-24-9-q.cnf.gz.no_w.cnf', '90-25-10-q.cnf.gz.no_w.cnf', '90-25-1-q.cnf.gz.no_w.cnf', '90-25-2-q.cnf.gz.no_w.cnf', '90-25-3-q.cnf.gz.no_w.cnf', '90-25-4-q.cnf.gz.no_w.cnf', '90-25-5-q.cnf.gz.no_w.cnf', '90-25-6-q.cnf.gz.no_w.cnf', '90-25-7-q.cnf.gz.no_w.cnf', '90-25-8-q.cnf.gz.no_w.cnf', '90-25-9-q.cnf.gz.no_w.cnf', '90-26-10-q.cnf.gz.no_w.cnf', '90-26-1-q.cnf.gz.no_w.cnf', '90-26-2-q.cnf.gz.no_w.cnf', '90-26-3-q.cnf.gz.no_w.cnf', '90-26-4-q.cnf.gz.no_w.cnf', '90-26-5-q.cnf.gz.no_w.cnf', '90-26-6-q.cnf.gz.no_w.cnf', '90-26-7-q.cnf.gz.no_w.cnf', '90-26-8-q.cnf.gz.no_w.cnf', '90-26-9-q.cnf.gz.no_w.cnf', '90-30-10-q.cnf.gz.no_w.cnf', '90-30-1-q.cnf.gz.no_w.cnf', '90-30-2-q.cnf.gz.no_w.cnf', '90-30-3-q.cnf.gz.no_w.cnf', '90-30-4-q.cnf.gz.no_w.cnf', '90-30-5-q.cnf.gz.no_w.cnf', '90-30-6-q.cnf.gz.no_w.cnf', '90-30-7-q.cnf.gz.no_w.cnf', '90-30-8-q.cnf.gz.no_w.cnf', '90-30-9-q.cnf.gz.no_w.cnf', '90-34-10-q.cnf.gz.no_w.cnf', '90-34-1-q.cnf.gz.no_w.cnf', '90-34-2-q.cnf.gz.no_w.cnf', '90-34-3-q.cnf.gz.no_w.cnf', '90-34-4-q.cnf.gz.no_w.cnf', '90-34-5-q.cnf.gz.no_w.cnf', '90-34-6-q.cnf.gz.no_w.cnf', '90-34-7-q.cnf.gz.no_w.cnf', '90-34-8-q.cnf.gz.no_w.cnf', '90-34-9-q.cnf.gz.no_w.cnf', '90-38-10-q.cnf.gz.no_w.cnf', '90-38-1-q.cnf.gz.no_w.cnf', '90-38-2-q.cnf.gz.no_w.cnf', '90-38-3-q.cnf.gz.no_w.cnf', '90-38-4-q.cnf.gz.no_w.cnf', '90-38-5-q.cnf.gz.no_w.cnf', '90-38-6-q.cnf.gz.no_w.cnf', '90-38-7-q.cnf.gz.no_w.cnf', '90-38-8-q.cnf.gz.no_w.cnf', '90-38-9-q.cnf.gz.no_w.cnf', '90-42-10-q.cnf.gz.no_w.cnf', 
'90-42-1-q.cnf.gz.no_w.cnf', '90-42-2-q.cnf.gz.no_w.cnf', '90-42-3-q.cnf.gz.no_w.cnf', '90-42-4-q.cnf.gz.no_w.cnf', '90-42-5-q.cnf.gz.no_w.cnf', '90-42-6-q.cnf.gz.no_w.cnf', '90-42-7-q.cnf.gz.no_w.cnf', '90-42-8-q.cnf.gz.no_w.cnf', '90-42-9-q.cnf.gz.no_w.cnf', '90-46-10-q.cnf.gz.no_w.cnf', '90-46-1-q.cnf.gz.no_w.cnf', '90-46-2-q.cnf.gz.no_w.cnf', '90-46-3-q.cnf.gz.no_w.cnf', '90-46-4-q.cnf.gz.no_w.cnf', '90-46-5-q.cnf.gz.no_w.cnf', '90-46-6-q.cnf.gz.no_w.cnf', '90-46-7-q.cnf.gz.no_w.cnf', '90-46-8-q.cnf.gz.no_w.cnf', '90-46-9-q.cnf.gz.no_w.cnf', '90-50-10-q.cnf.gz.no_w.cnf', '90-50-1-q.cnf.gz.no_w.cnf', '90-50-2-q.cnf.gz.no_w.cnf', '90-50-3-q.cnf.gz.no_w.cnf', '90-50-4-q.cnf.gz.no_w.cnf', '90-50-5-q.cnf.gz.no_w.cnf', '90-50-6-q.cnf.gz.no_w.cnf', '90-50-7-q.cnf.gz.no_w.cnf', '90-50-8-q.cnf.gz.no_w.cnf', '90-50-9-q.cnf.gz.no_w.cnf', 'ActivityService2.sk_10_27.cnf.gz.no_w.cnf', 'ActivityService.sk_11_27.cnf.gz.no_w.cnf', 'blasted_case_0_b11_1.cnf.gz.no_w.cnf', 'blasted_case_0_b12_1.cnf.gz.no_w.cnf', 'blasted_case_0_b12_2.cnf.gz.no_w.cnf', 'blasted_case_0_b12_even1.cnf.gz.no_w.cnf', 'blasted_case_0_b12_even2.cnf.gz.no_w.cnf', 'blasted_case_0_b12_even3.cnf.gz.no_w.cnf', 'blasted_case_0_b14_1.cnf.gz.no_w.cnf', 'blasted_case_0_ptb_1.cnf.gz.no_w.cnf', 'blasted_case_0_ptb_2.cnf.gz.no_w.cnf', 'blasted_case100.cnf.gz.no_w.cnf', 'blasted_case101.cnf.gz.no_w.cnf', 'blasted_case102.cnf.gz.no_w.cnf', 'blasted_case103.cnf.gz.no_w.cnf', 'blasted_case104.cnf.gz.no_w.cnf', 'blasted_case105.cnf.gz.no_w.cnf', 'blasted_case106.cnf.gz.no_w.cnf', 'blasted_case107.cnf.gz.no_w.cnf', 'blasted_case108.cnf.gz.no_w.cnf', 'blasted_case109.cnf.gz.no_w.cnf', 'blasted_case10.cnf.gz.no_w.cnf', 'blasted_case110.cnf.gz.no_w.cnf', 'blasted_case111.cnf.gz.no_w.cnf', 'blasted_case112.cnf.gz.no_w.cnf', 'blasted_case113.cnf.gz.no_w.cnf', 'blasted_case114.cnf.gz.no_w.cnf', 'blasted_case115.cnf.gz.no_w.cnf', 'blasted_case116.cnf.gz.no_w.cnf', 'blasted_case117.cnf.gz.no_w.cnf', 
'blasted_case118.cnf.gz.no_w.cnf', 'blasted_case119.cnf.gz.no_w.cnf', 'blasted_case11.cnf.gz.no_w.cnf', 'blasted_case120.cnf.gz.no_w.cnf', 'blasted_case121.cnf.gz.no_w.cnf', 'blasted_case122.cnf.gz.no_w.cnf', 'blasted_case123.cnf.gz.no_w.cnf', 'blasted_case124.cnf.gz.no_w.cnf', 'blasted_case125.cnf.gz.no_w.cnf', 'blasted_case126.cnf.gz.no_w.cnf', 'blasted_case127.cnf.gz.no_w.cnf', 'blasted_case128.cnf.gz.no_w.cnf', 'blasted_case12.cnf.gz.no_w.cnf', 'blasted_case130.cnf.gz.no_w.cnf', 'blasted_case131.cnf.gz.no_w.cnf', 'blasted_case132.cnf.gz.no_w.cnf', 'blasted_case133.cnf.gz.no_w.cnf', 'blasted_case134.cnf.gz.no_w.cnf', 'blasted_case135.cnf.gz.no_w.cnf', 'blasted_case136.cnf.gz.no_w.cnf', 'blasted_case137.cnf.gz.no_w.cnf', 'blasted_case138.cnf.gz.no_w.cnf', 'blasted_case139.cnf.gz.no_w.cnf', 'blasted_case140.cnf.gz.no_w.cnf', 'blasted_case141.cnf.gz.no_w.cnf', 'blasted_case142.cnf.gz.no_w.cnf', 'blasted_case143.cnf.gz.no_w.cnf', 'blasted_case144.cnf.gz.no_w.cnf', 'blasted_case145.cnf.gz.no_w.cnf', 'blasted_case146.cnf.gz.no_w.cnf', 'blasted_case_1_4_b14_even.cnf.gz.no_w.cnf', 'blasted_case14.cnf.gz.no_w.cnf', 'blasted_case15.cnf.gz.no_w.cnf', 'blasted_case17.cnf.gz.no_w.cnf', 'blasted_case18.cnf.gz.no_w.cnf', 'blasted_case19.cnf.gz.no_w.cnf', 'blasted_case_1_b11_1.cnf.gz.no_w.cnf', 'blasted_case_1_b12_1.cnf.gz.no_w.cnf', 'blasted_case_1_b12_2.cnf.gz.no_w.cnf', 'blasted_case_1_b12_even1.cnf.gz.no_w.cnf', 'blasted_case_1_b12_even2.cnf.gz.no_w.cnf', 'blasted_case_1_b12_even3.cnf.gz.no_w.cnf', 'blasted_case_1_b14_1.cnf.gz.no_w.cnf', 'blasted_case_1_b14_2.cnf.gz.no_w.cnf', 'blasted_case_1_b14_3.cnf.gz.no_w.cnf', 'blasted_case1_b14_even3.cnf.gz.no_w.cnf', 'blasted_case_1_b14_even.cnf.gz.no_w.cnf', 'blasted_case1.cnf.gz.no_w.cnf', 'blasted_case_1_ptb_1.cnf.gz.no_w.cnf', 'blasted_case_1_ptb_2.cnf.gz.no_w.cnf', 'blasted_case200.cnf.gz.no_w.cnf', 'blasted_case201.cnf.gz.no_w.cnf', 'blasted_case202.cnf.gz.no_w.cnf', 'blasted_case203.cnf.gz.no_w.cnf', 
'blasted_case204.cnf.gz.no_w.cnf', 'blasted_case205.cnf.gz.no_w.cnf', 'blasted_case206.cnf.gz.no_w.cnf', 'blasted_case207.cnf.gz.no_w.cnf', 'blasted_case208.cnf.gz.no_w.cnf', 'blasted_case209.cnf.gz.no_w.cnf', 'blasted_case20.cnf.gz.no_w.cnf', 'blasted_case210.cnf.gz.no_w.cnf', 'blasted_case211.cnf.gz.no_w.cnf', 'blasted_case212.cnf.gz.no_w.cnf', 'blasted_case213.cnf.gz.no_w.cnf', 'blasted_case214.cnf.gz.no_w.cnf', 'blasted_case21.cnf.gz.no_w.cnf', 'blasted_case22.cnf.gz.no_w.cnf', 'blasted_case23.cnf.gz.no_w.cnf', 'blasted_case24.cnf.gz.no_w.cnf', 'blasted_case25.cnf.gz.no_w.cnf', 'blasted_case26.cnf.gz.no_w.cnf', 'blasted_case27.cnf.gz.no_w.cnf', 'blasted_case28.cnf.gz.no_w.cnf', 'blasted_case29.cnf.gz.no_w.cnf', 'blasted_case_2_b12_1.cnf.gz.no_w.cnf', 'blasted_case_2_b12_2.cnf.gz.no_w.cnf', 'blasted_case_2_b12_even1.cnf.gz.no_w.cnf', 'blasted_case_2_b12_even2.cnf.gz.no_w.cnf', 'blasted_case_2_b12_even3.cnf.gz.no_w.cnf', 'blasted_case_2_b14_1.cnf.gz.no_w.cnf', 'blasted_case_2_b14_2.cnf.gz.no_w.cnf', 'blasted_case_2_b14_3.cnf.gz.no_w.cnf', 'blasted_case_2_b14_even.cnf.gz.no_w.cnf', 'blasted_case2.cnf.gz.no_w.cnf', 'blasted_case_2_ptb_1.cnf.gz.no_w.cnf', 'blasted_case_2_ptb_2.cnf.gz.no_w.cnf', 'blasted_case30.cnf.gz.no_w.cnf', 'blasted_case31.cnf.gz.no_w.cnf', 'blasted_case32.cnf.gz.no_w.cnf', 'blasted_case33.cnf.gz.no_w.cnf', 'blasted_case_3_4_b14_even.cnf.gz.no_w.cnf', 'blasted_case34.cnf.gz.no_w.cnf', 'blasted_case35.cnf.gz.no_w.cnf', 'blasted_case36.cnf.gz.no_w.cnf', 'blasted_case37.cnf.gz.no_w.cnf', 'blasted_case38.cnf.gz.no_w.cnf', 'blasted_case39.cnf.gz.no_w.cnf', 'blasted_case_3_b14_1.cnf.gz.no_w.cnf', 'blasted_case_3_b14_2.cnf.gz.no_w.cnf', 'blasted_case_3_b14_3.cnf.gz.no_w.cnf', 'blasted_case3_b14_even3.cnf.gz.no_w.cnf', 'blasted_case3.cnf.gz.no_w.cnf', 'blasted_case40.cnf.gz.no_w.cnf', 'blasted_case41.cnf.gz.no_w.cnf', 'blasted_case42.cnf.gz.no_w.cnf', 'blasted_case43.cnf.gz.no_w.cnf', 'blasted_case44.cnf.gz.no_w.cnf', 'blasted_case45.cnf.gz.no_w.cnf', 
'blasted_case46.cnf.gz.no_w.cnf', 'blasted_case47.cnf.gz.no_w.cnf', 'blasted_case49.cnf.gz.no_w.cnf', 'blasted_case4.cnf.gz.no_w.cnf', 'blasted_case50.cnf.gz.no_w.cnf', 'blasted_case51.cnf.gz.no_w.cnf', 'blasted_case52.cnf.gz.no_w.cnf', 'blasted_case53.cnf.gz.no_w.cnf', 'blasted_case54.cnf.gz.no_w.cnf', 'blasted_case55.cnf.gz.no_w.cnf', 'blasted_case56.cnf.gz.no_w.cnf', 'blasted_case57.cnf.gz.no_w.cnf', 'blasted_case58.cnf.gz.no_w.cnf', 'blasted_case59_1.cnf.gz.no_w.cnf', 'blasted_case59.cnf.gz.no_w.cnf', 'blasted_case5.cnf.gz.no_w.cnf', 'blasted_case60.cnf.gz.no_w.cnf', 'blasted_case61.cnf.gz.no_w.cnf', 'blasted_case62.cnf.gz.no_w.cnf', 'blasted_case63.cnf.gz.no_w.cnf', 'blasted_case64.cnf.gz.no_w.cnf', 'blasted_case68.cnf.gz.no_w.cnf', 'blasted_case6.cnf.gz.no_w.cnf', 'blasted_case7.cnf.gz.no_w.cnf', 'blasted_case8.cnf.gz.no_w.cnf', 'blasted_case9.cnf.gz.no_w.cnf', 'blasted_squaring10.cnf.gz.no_w.cnf', 'blasted_squaring11.cnf.gz.no_w.cnf', 'blasted_squaring12.cnf.gz.no_w.cnf', 'blasted_squaring14.cnf.gz.no_w.cnf', 'blasted_squaring16.cnf.gz.no_w.cnf', 'blasted_squaring1.cnf.gz.no_w.cnf', 'blasted_squaring20.cnf.gz.no_w.cnf', 'blasted_squaring21.cnf.gz.no_w.cnf', 'blasted_squaring22.cnf.gz.no_w.cnf', 'blasted_squaring23.cnf.gz.no_w.cnf', 'blasted_squaring24.cnf.gz.no_w.cnf', 'blasted_squaring25.cnf.gz.no_w.cnf', 'blasted_squaring26.cnf.gz.no_w.cnf', 'blasted_squaring27.cnf.gz.no_w.cnf', 'blasted_squaring28.cnf.gz.no_w.cnf', 'blasted_squaring29.cnf.gz.no_w.cnf', 'blasted_squaring2.cnf.gz.no_w.cnf', 'blasted_squaring30.cnf.gz.no_w.cnf', 'blasted_squaring3.cnf.gz.no_w.cnf', 'blasted_squaring40.cnf.gz.no_w.cnf', 'blasted_squaring41.cnf.gz.no_w.cnf', 'blasted_squaring42.cnf.gz.no_w.cnf', 'blasted_squaring4.cnf.gz.no_w.cnf', 'blasted_squaring50.cnf.gz.no_w.cnf', 'blasted_squaring51.cnf.gz.no_w.cnf', 'blasted_squaring5.cnf.gz.no_w.cnf', 'blasted_squaring60.cnf.gz.no_w.cnf', 'blasted_squaring6.cnf.gz.no_w.cnf', 'blasted_squaring70.cnf.gz.no_w.cnf', 
'blasted_squaring7.cnf.gz.no_w.cnf', 'blasted_squaring8.cnf.gz.no_w.cnf', 'blasted_squaring9.cnf.gz.no_w.cnf', 'blasted_TR_b12_1_linear.cnf.gz.no_w.cnf', 'blasted_TR_b12_2_linear.cnf.gz.no_w.cnf', 'blasted_TR_b12_even2_linear.cnf.gz.no_w.cnf', 'blasted_TR_b12_even3_linear.cnf.gz.no_w.cnf', 'blasted_TR_b12_even7_linear.cnf.gz.no_w.cnf', 'blasted_TR_b14_1_linear.cnf.gz.no_w.cnf', 'blasted_TR_b14_2_linear.cnf.gz.no_w.cnf', 'blasted_TR_b14_3_linear.cnf.gz.no_w.cnf', 'blasted_TR_b14_even2_linear.cnf.gz.no_w.cnf', 'blasted_TR_b14_even3_linear.cnf.gz.no_w.cnf', 'blasted_TR_b14_even_linear.cnf.gz.no_w.cnf', 'blasted_TR_device_1_even_linear.cnf.gz.no_w.cnf', 'blasted_TR_device_1_linear.cnf.gz.no_w.cnf', 'blasted_TR_ptb_1_linear.cnf.gz.no_w.cnf', 'blasted_TR_ptb_2_linear.cnf.gz.no_w.cnf', 'brp.pm_14steps_10int_8fract_p1_N=200_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_10int_8fract_p1_N=200_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_12int_8fract_p1_N=1000_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_12int_8fract_p1_N=1000_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_12int_8fract_p1_N=400_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_12int_8fract_p1_N=400_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_12int_8fract_p1_N=600_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_12int_8fract_p1_N=600_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_12int_8fract_p1_N=800_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_12int_8fract_p1_N=800_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_13int_8fract_p1_N=2000_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_13int_8fract_p1_N=2000_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_14int_8fract_p1_N=3000_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_14int_8fract_p1_N=3000_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_14int_8fract_p1_N=4000_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_14int_8fract_p1_N=4000_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_14int_8fract_p1_N=5000_MAX=4over.dimacs.gz.no_w.cnf', 
'brp.pm_14steps_14int_8fract_p1_N=5000_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_22int_8fract_p1_N=1000000_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_22int_8fract_p1_N=1000000_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_22int_8fract_p1_N=100000_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_22int_8fract_p1_N=100000_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_22int_8fract_p1_N=10000_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_22int_8fract_p1_N=10000_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_26int_8fract_p1_N=10000000_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_26int_8fract_p1_N=10000000_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_30int_8fract_p1_N=100000000_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_30int_8fract_p1_N=100000000_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_8int_8fract_p1_N=16_MAX=2over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_8int_8fract_p1_N=16_MAX=2under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_8int_8fract_p1_N=32_MAX=4over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_8int_8fract_p1_N=32_MAX=4under.dimacs.gz.no_w.cnf', 'brp.pm_14steps_8int_8fract_p1_N=64_MAX=5over.dimacs.gz.no_w.cnf', 'brp.pm_14steps_8int_8fract_p1_N=64_MAX=5under.dimacs.gz.no_w.cnf', 'compress.sk_17_291.cnf.gz.no_w.cnf', 'ConcreteActivityService.sk_13_28.cnf.gz.no_w.cnf', 'ConcreteRoleAffectationService.sk_119_273.cnf.gz.no_w.cnf', 'crowds_big.pm_15steps_8int_7fract_PCTL_TotalRuns=10_CrowdSize=40over.dimacs.gz.no_w.cnf', 'crowds_big.pm_15steps_8int_7fract_PCTL_TotalRuns=10_CrowdSize=40under.dimacs.gz.no_w.cnf', 'crowds_big.pm_15steps_8int_7fract_PCTL_TotalRuns=20_CrowdSize=40over.dimacs.gz.no_w.cnf', 'crowds_big.pm_15steps_8int_7fract_PCTL_TotalRuns=20_CrowdSize=40under.dimacs.gz.no_w.cnf', 'crowds_big.pm_15steps_9int_7fract_PCTL_TotalRuns=40_CrowdSize=128over.dimacs.gz.no_w.cnf', 'crowds_big.pm_15steps_9int_7fract_PCTL_TotalRuns=40_CrowdSize=128under.dimacs.gz.no_w.cnf', 
'crowds_big.pm_15steps_9int_7fract_PCTL_TotalRuns=60_CrowdSize=128over.dimacs.gz.no_w.cnf', 'crowds_big.pm_15steps_9int_7fract_PCTL_TotalRuns=60_CrowdSize=128under.dimacs.gz.no_w.cnf', 'crowds.pm_15steps_8int_7fract_PCTL_TotalRuns=10_CrowdSize=20over.dimacs.gz.no_w.cnf', 'crowds.pm_15steps_8int_7fract_PCTL_TotalRuns=10_CrowdSize=20under.dimacs.gz.no_w.cnf', 'crowds.pm_15steps_8int_7fract_PCTL_TotalRuns=3_CrowdSize=5over.dimacs.gz.no_w.cnf', 'crowds.pm_15steps_8int_7fract_PCTL_TotalRuns=3_CrowdSize=5under.dimacs.gz.no_w.cnf', 'crowds.pm_15steps_8int_7fract_PCTL_TotalRuns=6_CrowdSize=10over.dimacs.gz.no_w.cnf', 'crowds.pm_15steps_8int_7fract_PCTL_TotalRuns=6_CrowdSize=10under.dimacs.gz.no_w.cnf', 'diagStencilClean.sk_41_36.cnf.gz.no_w.cnf', 'diagStencil.sk_35_36.cnf.gz.no_w.cnf', 'doublyLinkedList.sk_8_37.cnf.gz.no_w.cnf', 'egl.pm_100steps_6int_1fract_unfairA_N=10_L=2over.dimacs.gz.no_w.cnf', 'egl.pm_100steps_6int_1fract_unfairA_N=10_L=2under.dimacs.gz.no_w.cnf', 'egl.pm_100steps_6int_1fract_unfairA_N=10_L=4over.dimacs.gz.no_w.cnf', 'egl.pm_100steps_6int_1fract_unfairA_N=10_L=4under.dimacs.gz.no_w.cnf', 'egl.pm_100steps_6int_1fract_unfairA_N=7_L=2over.dimacs.gz.no_w.cnf', 'egl.pm_100steps_6int_1fract_unfairA_N=7_L=2under.dimacs.gz.no_w.cnf', 'egl.pm_100steps_6int_1fract_unfairA_N=7_L=4over.dimacs.gz.no_w.cnf', 'egl.pm_100steps_6int_1fract_unfairA_N=7_L=4under.dimacs.gz.no_w.cnf', 'egl.pm_31steps_6int_1fract_unfairA_N=5_L=2over.dimacs.gz.no_w.cnf', 'egl.pm_31steps_6int_1fract_unfairA_N=5_L=2under.dimacs.gz.no_w.cnf', 'egl.pm_60steps_6int_1fract_unfairA_N=5_L=4over.dimacs.gz.no_w.cnf', 'egl.pm_60steps_6int_1fract_unfairA_N=5_L=4under.dimacs.gz.no_w.cnf', 'enqueueSeqSK.sk_10_42.cnf.gz.no_w.cnf', 'GuidanceService2.sk_2_27.cnf.gz.no_w.cnf', 'GuidanceService.sk_4_27.cnf.gz.no_w.cnf', 'hash-10-1.cnf.gz.no_w.cnf', 'hash-10-2.cnf.gz.no_w.cnf', 'hash-10-3.cnf.gz.no_w.cnf', 'hash-10-4.cnf.gz.no_w.cnf', 'hash-10-5.cnf.gz.no_w.cnf', 'hash-10-6.cnf.gz.no_w.cnf', 
'hash-10-7.cnf.gz.no_w.cnf', 'hash-10-8.cnf.gz.no_w.cnf', 'hash-10.cnf.gz.no_w.cnf', 'hash-11-1.cnf.gz.no_w.cnf', 'hash-11-2.cnf.gz.no_w.cnf', 'hash-11-3.cnf.gz.no_w.cnf', 'hash-11-4.cnf.gz.no_w.cnf', 'hash-11-5.cnf.gz.no_w.cnf', 'hash-11-6.cnf.gz.no_w.cnf', 'hash-11-7.cnf.gz.no_w.cnf', 'hash-11-8.cnf.gz.no_w.cnf', 'hash-11.cnf.gz.no_w.cnf', 'hash-12-1.cnf.gz.no_w.cnf', 'hash-12-2.cnf.gz.no_w.cnf', 'hash-12-3.cnf.gz.no_w.cnf', 'hash-12-4.cnf.gz.no_w.cnf', 'hash-12-5.cnf.gz.no_w.cnf', 'hash-12-6.cnf.gz.no_w.cnf', 'hash-12-7.cnf.gz.no_w.cnf', 'hash-12-8.cnf.gz.no_w.cnf', 'hash-12.cnf.gz.no_w.cnf', 'hash-13-1.cnf.gz.no_w.cnf', 'hash-13-2.cnf.gz.no_w.cnf', 'hash-13-3.cnf.gz.no_w.cnf', 'hash-13-4.cnf.gz.no_w.cnf', 'hash-13-5.cnf.gz.no_w.cnf', 'hash-13-6.cnf.gz.no_w.cnf', 'hash-13-7.cnf.gz.no_w.cnf', 'hash-13-8.cnf.gz.no_w.cnf', 'hash-14.cnf.gz.no_w.cnf', 'hash16-12.cnf.gz.no_w.cnf', 'hash16-4.cnf.gz.no_w.cnf', 'hash16-8.cnf.gz.no_w.cnf', 'hash-16.cnf.gz.no_w.cnf', 'hash-2.cnf.gz.no_w.cnf', 'hash-4.cnf.gz.no_w.cnf', 'hash-6.cnf.gz.no_w.cnf', 'hash-8-1.cnf.gz.no_w.cnf', 'hash-8-2.cnf.gz.no_w.cnf', 'hash-8-3.cnf.gz.no_w.cnf', 'hash-8-4.cnf.gz.no_w.cnf', 'hash-8-5.cnf.gz.no_w.cnf', 'hash-8-6.cnf.gz.no_w.cnf', 'hash-8-7.cnf.gz.no_w.cnf', 'hash-8-8.cnf.gz.no_w.cnf', 'hash-8.cnf.gz.no_w.cnf', 'herman15.pm_20steps_6int_1fract_stable_over.dimacs.gz.no_w.cnf', 'herman15.pm_20steps_6int_1fract_stable_under.dimacs.gz.no_w.cnf', 'herman21.pm_20steps_6int_1fract_stable_over.dimacs.gz.no_w.cnf', 'herman21.pm_20steps_6int_1fract_stable_under.dimacs.gz.no_w.cnf', 'herman31.pm_20steps_6int_1fract_stable_over.dimacs.gz.no_w.cnf', 'herman31.pm_20steps_6int_1fract_stable_under.dimacs.gz.no_w.cnf', 'herman3.pm_20steps_6int_1fract_stable_over.dimacs.gz.no_w.cnf', 'herman3.pm_20steps_6int_1fract_stable_under.dimacs.gz.no_w.cnf', 'herman41.pm_20steps_6int_1fract_stable_over.dimacs.gz.no_w.cnf', 'herman41.pm_20steps_6int_1fract_stable_under.dimacs.gz.no_w.cnf', 
'herman9.pm_20steps_6int_1fract_stable_over.dimacs.gz.no_w.cnf', 'herman9.pm_20steps_6int_1fract_stable_under.dimacs.gz.no_w.cnf', 'isolateRightmost.sk_7_481.cnf.gz.no_w.cnf', 'IssueServiceImpl.sk_8_30.cnf.gz.no_w.cnf', 'IterationService.sk_12_27.cnf.gz.no_w.cnf', 'jburnim_morton.sk_13_530.cnf.gz.no_w.cnf', 'karatsuba.sk_7_41.cnf.gz.no_w.cnf', 'leader_sync3_2.pm_4steps_7int_1fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync3_2.pm_4steps_7int_1fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync3_32.pm_4steps_7int_5fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync3_32.pm_4steps_7int_5fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync3_64.pm_4steps_7int_6fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync3_64.pm_4steps_7int_6fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync3_8.pm_4steps_7int_3fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync3_8.pm_4steps_7int_3fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_10fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_10fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_10fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_10fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_11fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_11fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_11fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_11fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_12fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_12fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_12fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_12fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_13fract_elected_neg_over.dimacs.gz.no_w.cnf', 
'leader_sync4_11.pm_5steps_7int_13fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_13fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_13fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_14fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_14fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_14fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_14fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_15fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_15fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_15fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_15fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_16fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_16fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_16fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_16fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_17fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_17fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_17fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_17fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_18fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_18fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_18fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_18fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_19fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_19fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_19fract_elected_over.dimacs.gz.no_w.cnf', 
'leader_sync4_11.pm_5steps_7int_19fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_20fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_20fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_20fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_20fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_4fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_4fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_4fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_4fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_5fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_5fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_5fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_5fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_6fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_6fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_6fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_6fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_7fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_7fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_7fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_7fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_8fract_elected_neg_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_8fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_8fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_8fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_9fract_elected_neg_over.dimacs.gz.no_w.cnf', 
'leader_sync4_11.pm_5steps_7int_9fract_elected_neg_under.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_9fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_11.pm_5steps_7int_9fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_2.pm_5steps_7int_1fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_2.pm_5steps_7int_1fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_32.pm_5steps_7int_5fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_32.pm_5steps_7int_5fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_64.pm_5steps_7int_6fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_64.pm_5steps_7int_6fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync4_8.pm_5steps_7int_3fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync4_8.pm_5steps_7int_3fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync6_2.pm_7steps_7int_1fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync6_2.pm_7steps_7int_1fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync6_32.pm_7steps_7int_5fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync6_32.pm_7steps_7int_5fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync6_64.pm_7steps_7int_6fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync6_64.pm_7steps_7int_6fract_elected_under.dimacs.gz.no_w.cnf', 'leader_sync6_8.pm_7steps_7int_3fract_elected_over.dimacs.gz.no_w.cnf', 'leader_sync6_8.pm_7steps_7int_3fract_elected_under.dimacs.gz.no_w.cnf', 'listReverse.sk_11_43.cnf.gz.no_w.cnf', 'log-1.cnf.gz.no_w.cnf', 'log-2.cnf.gz.no_w.cnf', 'log2.sk_72_391.cnf.gz.no_w.cnf', 'log-3.cnf.gz.no_w.cnf', 'log-4.cnf.gz.no_w.cnf', 'log-5.cnf.gz.no_w.cnf', 'logcount.sk_16_86.cnf.gz.no_w.cnf', 'LoginService2.sk_23_36.cnf.gz.no_w.cnf', 'LoginService.sk_20_34.cnf.gz.no_w.cnf', 'lss.sk_6_7.cnf.gz.no_w.cnf', 'min-12.cnf.gz.no_w.cnf', 'min-12s.cnf.gz.no_w.cnf', 'min-16.cnf.gz.no_w.cnf', 'min-16s.cnf.gz.no_w.cnf', 'min-1s.cnf.gz.no_w.cnf', 'min-20.cnf.gz.no_w.cnf', 'min-20s.cnf.gz.no_w.cnf', 'min-24.cnf.gz.no_w.cnf', 'min-24s.cnf.gz.no_w.cnf', 
'min-28.cnf.gz.no_w.cnf', 'min-28s.cnf.gz.no_w.cnf', 'min-2s.cnf.gz.no_w.cnf', 'min-32.cnf.gz.no_w.cnf', 'min-32s.cnf.gz.no_w.cnf', 'min-3s.cnf.gz.no_w.cnf', 'min-4.cnf.gz.no_w.cnf', 'min-4s.cnf.gz.no_w.cnf', 'min-6s.cnf.gz.no_w.cnf', 'min-8.cnf.gz.no_w.cnf', 'min-8s.cnf.gz.no_w.cnf', 'modexp16-2.cnf.gz.no_w.cnf', 'modexp16-4.cnf.gz.no_w.cnf', 'modexp8-4-1.cnf.gz.no_w.cnf', 'modexp8-4-2.cnf.gz.no_w.cnf', 'modexp8-4-3.cnf.gz.no_w.cnf', 'modexp8-4-4.cnf.gz.no_w.cnf', 'modexp8-4-5.cnf.gz.no_w.cnf', 'modexp8-4-6.cnf.gz.no_w.cnf', 'modexp8-4-7.cnf.gz.no_w.cnf', 'modexp8-4-8.cnf.gz.no_w.cnf', 'modexp8-5-1.cnf.gz.no_w.cnf', 'modexp8-5-2.cnf.gz.no_w.cnf', 'modexp8-5-3.cnf.gz.no_w.cnf', 'modexp8-5-4.cnf.gz.no_w.cnf', 'modexp8-5-5.cnf.gz.no_w.cnf', 'modexp8-5-6.cnf.gz.no_w.cnf', 'modexp8-5-7.cnf.gz.no_w.cnf', 'modexp8-5-8.cnf.gz.no_w.cnf', 'modexp8-6-1.cnf.gz.no_w.cnf', 'modexp8-6-2.cnf.gz.no_w.cnf', 'modexp8-6-3.cnf.gz.no_w.cnf', 'modexp8-6-4.cnf.gz.no_w.cnf', 'modexp8-6-5.cnf.gz.no_w.cnf', 'modexp8-6-6.cnf.gz.no_w.cnf', 'modexp8-6-7.cnf.gz.no_w.cnf', 'modexp8-6-8.cnf.gz.no_w.cnf', 'modexp8-7-1.cnf.gz.no_w.cnf', 'modexp8-7-2.cnf.gz.no_w.cnf', 'modexp8-7-3.cnf.gz.no_w.cnf', 'modexp8-7-4.cnf.gz.no_w.cnf', 'modexp8-7-5.cnf.gz.no_w.cnf', 'modexp8-7-6.cnf.gz.no_w.cnf', 'modexp8-7-7.cnf.gz.no_w.cnf', 'modexp8-7-8.cnf.gz.no_w.cnf', 'modexp8-8-1.cnf.gz.no_w.cnf', 'modexp8-8-2.cnf.gz.no_w.cnf', 'modexp8-8-3.cnf.gz.no_w.cnf', 'modexp8-8-4.cnf.gz.no_w.cnf', 'modexp8-8-5.cnf.gz.no_w.cnf', 'modexp8-8-6.cnf.gz.no_w.cnf', 'modexp8-8-7.cnf.gz.no_w.cnf', 'modexp8-8-8.cnf.gz.no_w.cnf', 'nand.pm_100steps_15int_8fract_lessThan10PercentAreErroneous_N=20_K=4over.dimacs.gz.no_w.cnf', 'nand.pm_100steps_15int_8fract_lessThan10PercentAreErroneous_N=20_K=4under.dimacs.gz.no_w.cnf', 'nand.pm_80steps_15int_8fract_lessThan10PercentAreErroneous_N=20_K=2over.dimacs.gz.no_w.cnf', 'nand.pm_80steps_15int_8fract_lessThan10PercentAreErroneous_N=20_K=2under.dimacs.gz.no_w.cnf', 
'nand.pm_80steps_15int_8fract_lessThan10PercentAreErroneous_N=20_K=3over.dimacs.gz.no_w.cnf', 'nand.pm_80steps_15int_8fract_lessThan10PercentAreErroneous_N=20_K=3under.dimacs.gz.no_w.cnf', 'NotificationServiceImpl2.sk_10_36.cnf.gz.no_w.cnf', 'or-100-10-10.cnf.gz.no_w.cnf', 'or-100-10-10-UC-10.cnf.gz.no_w.cnf', 'or-100-10-10-UC-20.cnf.gz.no_w.cnf', 'or-100-10-10-UC-30.cnf.gz.no_w.cnf', 'or-100-10-10-UC-40.cnf.gz.no_w.cnf', 'or-100-10-10-UC-50.cnf.gz.no_w.cnf', 'or-100-10-10-UC-60.cnf.gz.no_w.cnf', 'or-100-10-1.cnf.gz.no_w.cnf', 'or-100-10-1-UC-10.cnf.gz.no_w.cnf', 'or-100-10-1-UC-20.cnf.gz.no_w.cnf', 'or-100-10-1-UC-30.cnf.gz.no_w.cnf', 'or-100-10-1-UC-40.cnf.gz.no_w.cnf', 'or-100-10-1-UC-50.cnf.gz.no_w.cnf', 'or-100-10-1-UC-60.cnf.gz.no_w.cnf', 'or-100-10-2.cnf.gz.no_w.cnf', 'or-100-10-2-UC-10.cnf.gz.no_w.cnf', 'or-100-10-2-UC-20.cnf.gz.no_w.cnf', 'or-100-10-2-UC-30.cnf.gz.no_w.cnf', 'or-100-10-2-UC-40.cnf.gz.no_w.cnf', 'or-100-10-2-UC-50.cnf.gz.no_w.cnf', 'or-100-10-2-UC-60.cnf.gz.no_w.cnf', 'or-100-10-3.cnf.gz.no_w.cnf', 'or-100-10-3-UC-10.cnf.gz.no_w.cnf', 'or-100-10-3-UC-20.cnf.gz.no_w.cnf', 'or-100-10-3-UC-30.cnf.gz.no_w.cnf', 'or-100-10-3-UC-40.cnf.gz.no_w.cnf', 'or-100-10-3-UC-50.cnf.gz.no_w.cnf', 'or-100-10-3-UC-60.cnf.gz.no_w.cnf', 'or-100-10-4.cnf.gz.no_w.cnf', 'or-100-10-4-UC-10.cnf.gz.no_w.cnf', 'or-100-10-4-UC-20.cnf.gz.no_w.cnf', 'or-100-10-4-UC-30.cnf.gz.no_w.cnf', 'or-100-10-4-UC-40.cnf.gz.no_w.cnf', 'or-100-10-4-UC-50.cnf.gz.no_w.cnf', 'or-100-10-4-UC-60.cnf.gz.no_w.cnf', 'or-100-10-5.cnf.gz.no_w.cnf', 'or-100-10-5-UC-10.cnf.gz.no_w.cnf', 'or-100-10-5-UC-20.cnf.gz.no_w.cnf', 'or-100-10-5-UC-30.cnf.gz.no_w.cnf', 'or-100-10-5-UC-40.cnf.gz.no_w.cnf', 'or-100-10-5-UC-50.cnf.gz.no_w.cnf', 'or-100-10-5-UC-60.cnf.gz.no_w.cnf', 'or-100-10-6.cnf.gz.no_w.cnf', 'or-100-10-6-UC-10.cnf.gz.no_w.cnf', 'or-100-10-6-UC-20.cnf.gz.no_w.cnf', 'or-100-10-6-UC-30.cnf.gz.no_w.cnf', 'or-100-10-6-UC-40.cnf.gz.no_w.cnf', 'or-100-10-6-UC-50.cnf.gz.no_w.cnf', 
'or-100-10-6-UC-60.cnf.gz.no_w.cnf', 'or-100-10-7.cnf.gz.no_w.cnf', 'or-100-10-7-UC-10.cnf.gz.no_w.cnf', 'or-100-10-7-UC-20.cnf.gz.no_w.cnf', 'or-100-10-7-UC-30.cnf.gz.no_w.cnf', 'or-100-10-7-UC-40.cnf.gz.no_w.cnf', 'or-100-10-7-UC-50.cnf.gz.no_w.cnf', 'or-100-10-7-UC-60.cnf.gz.no_w.cnf', 'or-100-10-8.cnf.gz.no_w.cnf', 'or-100-10-8-UC-10.cnf.gz.no_w.cnf', 'or-100-10-8-UC-20.cnf.gz.no_w.cnf', 'or-100-10-8-UC-30.cnf.gz.no_w.cnf', 'or-100-10-8-UC-40.cnf.gz.no_w.cnf', 'or-100-10-8-UC-50.cnf.gz.no_w.cnf', 'or-100-10-8-UC-60.cnf.gz.no_w.cnf', 'or-100-10-9.cnf.gz.no_w.cnf', 'or-100-10-9-UC-10.cnf.gz.no_w.cnf', 'or-100-10-9-UC-20.cnf.gz.no_w.cnf', 'or-100-10-9-UC-30.cnf.gz.no_w.cnf', 'or-100-10-9-UC-40.cnf.gz.no_w.cnf', 'or-100-10-9-UC-50.cnf.gz.no_w.cnf', 'or-100-10-9-UC-60.cnf.gz.no_w.cnf', 'or-100-20-10.cnf.gz.no_w.cnf', 'or-100-20-10-UC-10.cnf.gz.no_w.cnf', 'or-100-20-10-UC-20.cnf.gz.no_w.cnf', 'or-100-20-10-UC-30.cnf.gz.no_w.cnf', 'or-100-20-10-UC-40.cnf.gz.no_w.cnf', 'or-100-20-10-UC-50.cnf.gz.no_w.cnf', 'or-100-20-10-UC-60.cnf.gz.no_w.cnf', 'or-100-20-1.cnf.gz.no_w.cnf', 'or-100-20-1-UC-10.cnf.gz.no_w.cnf', 'or-100-20-1-UC-20.cnf.gz.no_w.cnf', 'or-100-20-1-UC-30.cnf.gz.no_w.cnf', 'or-100-20-1-UC-40.cnf.gz.no_w.cnf', 'or-100-20-1-UC-50.cnf.gz.no_w.cnf', 'or-100-20-1-UC-60.cnf.gz.no_w.cnf', 'or-100-20-2.cnf.gz.no_w.cnf', 'or-100-20-2-UC-10.cnf.gz.no_w.cnf', 'or-100-20-2-UC-20.cnf.gz.no_w.cnf', 'or-100-20-2-UC-30.cnf.gz.no_w.cnf', 'or-100-20-2-UC-40.cnf.gz.no_w.cnf', 'or-100-20-2-UC-50.cnf.gz.no_w.cnf', 'or-100-20-2-UC-60.cnf.gz.no_w.cnf', 'or-100-20-3.cnf.gz.no_w.cnf', 'or-100-20-3-UC-10.cnf.gz.no_w.cnf', 'or-100-20-3-UC-20.cnf.gz.no_w.cnf', 'or-100-20-3-UC-30.cnf.gz.no_w.cnf', 'or-100-20-3-UC-40.cnf.gz.no_w.cnf', 'or-100-20-3-UC-50.cnf.gz.no_w.cnf', 'or-100-20-3-UC-60.cnf.gz.no_w.cnf', 'or-100-20-4.cnf.gz.no_w.cnf', 'or-100-20-4-UC-10.cnf.gz.no_w.cnf', 'or-100-20-4-UC-20.cnf.gz.no_w.cnf', 'or-100-20-4-UC-30.cnf.gz.no_w.cnf', 'or-100-20-4-UC-40.cnf.gz.no_w.cnf', 
'or-100-20-4-UC-50.cnf.gz.no_w.cnf', 'or-100-20-4-UC-60.cnf.gz.no_w.cnf', 'or-100-20-5.cnf.gz.no_w.cnf', 'or-100-20-5-UC-10.cnf.gz.no_w.cnf', 'or-100-20-5-UC-20.cnf.gz.no_w.cnf', 'or-100-20-5-UC-30.cnf.gz.no_w.cnf', 'or-100-20-5-UC-40.cnf.gz.no_w.cnf', 'or-100-20-5-UC-50.cnf.gz.no_w.cnf', 'or-100-20-5-UC-60.cnf.gz.no_w.cnf', 'or-100-20-6.cnf.gz.no_w.cnf', 'or-100-20-6-UC-10.cnf.gz.no_w.cnf', 'or-100-20-6-UC-20.cnf.gz.no_w.cnf', 'or-100-20-6-UC-30.cnf.gz.no_w.cnf', 'or-100-20-6-UC-40.cnf.gz.no_w.cnf', 'or-100-20-6-UC-50.cnf.gz.no_w.cnf', 'or-100-20-6-UC-60.cnf.gz.no_w.cnf', 'or-100-20-7.cnf.gz.no_w.cnf', 'or-100-20-7-UC-10.cnf.gz.no_w.cnf', 'or-100-20-7-UC-20.cnf.gz.no_w.cnf', 'or-100-20-7-UC-30.cnf.gz.no_w.cnf', 'or-100-20-7-UC-40.cnf.gz.no_w.cnf', 'or-100-20-7-UC-50.cnf.gz.no_w.cnf', 'or-100-20-7-UC-60.cnf.gz.no_w.cnf', 'or-100-20-8.cnf.gz.no_w.cnf', 'or-100-20-8-UC-10.cnf.gz.no_w.cnf', 'or-100-20-8-UC-20.cnf.gz.no_w.cnf', 'or-100-20-8-UC-30.cnf.gz.no_w.cnf', 'or-100-20-8-UC-40.cnf.gz.no_w.cnf', 'or-100-20-8-UC-50.cnf.gz.no_w.cnf', 'or-100-20-8-UC-60.cnf.gz.no_w.cnf', 'or-100-20-9.cnf.gz.no_w.cnf', 'or-100-20-9-UC-10.cnf.gz.no_w.cnf', 'or-100-20-9-UC-20.cnf.gz.no_w.cnf', 'or-100-20-9-UC-30.cnf.gz.no_w.cnf', 'or-100-20-9-UC-40.cnf.gz.no_w.cnf', 'or-100-20-9-UC-50.cnf.gz.no_w.cnf', 'or-100-20-9-UC-60.cnf.gz.no_w.cnf', 'or-100-5-10.cnf.gz.no_w.cnf', 'or-100-5-10-UC-10.cnf.gz.no_w.cnf', 'or-100-5-10-UC-20.cnf.gz.no_w.cnf', 'or-100-5-10-UC-30.cnf.gz.no_w.cnf', 'or-100-5-10-UC-40.cnf.gz.no_w.cnf', 'or-100-5-10-UC-50.cnf.gz.no_w.cnf', 'or-100-5-10-UC-60.cnf.gz.no_w.cnf', 'or-100-5-1.cnf.gz.no_w.cnf', 'or-100-5-1-UC-10.cnf.gz.no_w.cnf', 'or-100-5-1-UC-20.cnf.gz.no_w.cnf', 'or-100-5-1-UC-30.cnf.gz.no_w.cnf', 'or-100-5-1-UC-40.cnf.gz.no_w.cnf', 'or-100-5-1-UC-50.cnf.gz.no_w.cnf', 'or-100-5-1-UC-60.cnf.gz.no_w.cnf', 'or-100-5-2.cnf.gz.no_w.cnf', 'or-100-5-2-UC-10.cnf.gz.no_w.cnf', 'or-100-5-2-UC-20.cnf.gz.no_w.cnf', 'or-100-5-2-UC-30.cnf.gz.no_w.cnf', 
'or-100-5-2-UC-40.cnf.gz.no_w.cnf', 'or-100-5-2-UC-50.cnf.gz.no_w.cnf', 'or-100-5-2-UC-60.cnf.gz.no_w.cnf', 'or-100-5-3.cnf.gz.no_w.cnf', 'or-100-5-3-UC-10.cnf.gz.no_w.cnf', 'or-100-5-3-UC-20.cnf.gz.no_w.cnf', 'or-100-5-3-UC-30.cnf.gz.no_w.cnf', 'or-100-5-3-UC-40.cnf.gz.no_w.cnf', 'or-100-5-3-UC-50.cnf.gz.no_w.cnf', 'or-100-5-3-UC-60.cnf.gz.no_w.cnf', 'or-100-5-4.cnf.gz.no_w.cnf', 'or-100-5-4-UC-10.cnf.gz.no_w.cnf', 'or-100-5-4-UC-20.cnf.gz.no_w.cnf', 'or-100-5-4-UC-30.cnf.gz.no_w.cnf', 'or-100-5-4-UC-40.cnf.gz.no_w.cnf', 'or-100-5-4-UC-50.cnf.gz.no_w.cnf', 'or-100-5-4-UC-60.cnf.gz.no_w.cnf', 'or-100-5-5.cnf.gz.no_w.cnf', 'or-100-5-5-UC-10.cnf.gz.no_w.cnf', 'or-100-5-5-UC-20.cnf.gz.no_w.cnf', 'or-100-5-5-UC-30.cnf.gz.no_w.cnf', 'or-100-5-5-UC-40.cnf.gz.no_w.cnf', 'or-100-5-5-UC-50.cnf.gz.no_w.cnf', 'or-100-5-5-UC-60.cnf.gz.no_w.cnf', 'or-100-5-6.cnf.gz.no_w.cnf', 'or-100-5-6-UC-10.cnf.gz.no_w.cnf', 'or-100-5-6-UC-20.cnf.gz.no_w.cnf', 'or-100-5-6-UC-30.cnf.gz.no_w.cnf', 'or-100-5-6-UC-40.cnf.gz.no_w.cnf', 'or-100-5-6-UC-50.cnf.gz.no_w.cnf', 'or-100-5-6-UC-60.cnf.gz.no_w.cnf', 'or-100-5-7.cnf.gz.no_w.cnf', 'or-100-5-7-UC-10.cnf.gz.no_w.cnf', 'or-100-5-7-UC-20.cnf.gz.no_w.cnf', 'or-100-5-7-UC-30.cnf.gz.no_w.cnf', 'or-100-5-7-UC-40.cnf.gz.no_w.cnf', 'or-100-5-7-UC-50.cnf.gz.no_w.cnf', 'or-100-5-7-UC-60.cnf.gz.no_w.cnf', 'or-100-5-8.cnf.gz.no_w.cnf', 'or-100-5-8-UC-10.cnf.gz.no_w.cnf', 'or-100-5-8-UC-20.cnf.gz.no_w.cnf', 'or-100-5-8-UC-30.cnf.gz.no_w.cnf', 'or-100-5-8-UC-40.cnf.gz.no_w.cnf', 'or-100-5-8-UC-50.cnf.gz.no_w.cnf', 'or-100-5-8-UC-60.cnf.gz.no_w.cnf', 'or-100-5-9.cnf.gz.no_w.cnf', 'or-100-5-9-UC-10.cnf.gz.no_w.cnf', 'or-100-5-9-UC-20.cnf.gz.no_w.cnf', 'or-100-5-9-UC-30.cnf.gz.no_w.cnf', 'or-100-5-9-UC-40.cnf.gz.no_w.cnf', 'or-100-5-9-UC-50.cnf.gz.no_w.cnf', 'or-100-5-9-UC-60.cnf.gz.no_w.cnf', 'or-50-10-10.cnf.gz.no_w.cnf', 'or-50-10-10-UC-10.cnf.gz.no_w.cnf', 'or-50-10-10-UC-20.cnf.gz.no_w.cnf', 'or-50-10-10-UC-30.cnf.gz.no_w.cnf', 
'or-50-10-10-UC-40.cnf.gz.no_w.cnf', 'or-50-10-1.cnf.gz.no_w.cnf', 'or-50-10-1-UC-10.cnf.gz.no_w.cnf', 'or-50-10-1-UC-20.cnf.gz.no_w.cnf', 'or-50-10-1-UC-30.cnf.gz.no_w.cnf', 'or-50-10-1-UC-40.cnf.gz.no_w.cnf', 'or-50-10-2.cnf.gz.no_w.cnf', 'or-50-10-2-UC-10.cnf.gz.no_w.cnf', 'or-50-10-2-UC-20.cnf.gz.no_w.cnf', 'or-50-10-2-UC-30.cnf.gz.no_w.cnf', 'or-50-10-2-UC-40.cnf.gz.no_w.cnf', 'or-50-10-3.cnf.gz.no_w.cnf', 'or-50-10-3-UC-10.cnf.gz.no_w.cnf', 'or-50-10-3-UC-20.cnf.gz.no_w.cnf', 'or-50-10-3-UC-30.cnf.gz.no_w.cnf', 'or-50-10-3-UC-40.cnf.gz.no_w.cnf', 'or-50-10-4.cnf.gz.no_w.cnf', 'or-50-10-4-UC-10.cnf.gz.no_w.cnf', 'or-50-10-4-UC-20.cnf.gz.no_w.cnf', 'or-50-10-4-UC-30.cnf.gz.no_w.cnf', 'or-50-10-4-UC-40.cnf.gz.no_w.cnf', 'or-50-10-5.cnf.gz.no_w.cnf', 'or-50-10-5-UC-10.cnf.gz.no_w.cnf', 'or-50-10-5-UC-20.cnf.gz.no_w.cnf', 'or-50-10-5-UC-30.cnf.gz.no_w.cnf', 'or-50-10-5-UC-40.cnf.gz.no_w.cnf', 'or-50-10-6.cnf.gz.no_w.cnf', 'or-50-10-6-UC-10.cnf.gz.no_w.cnf', 'or-50-10-6-UC-20.cnf.gz.no_w.cnf', 'or-50-10-6-UC-30.cnf.gz.no_w.cnf', 'or-50-10-6-UC-40.cnf.gz.no_w.cnf', 'or-50-10-7.cnf.gz.no_w.cnf', 'or-50-10-7-UC-10.cnf.gz.no_w.cnf', 'or-50-10-7-UC-20.cnf.gz.no_w.cnf', 'or-50-10-7-UC-30.cnf.gz.no_w.cnf', 'or-50-10-7-UC-40.cnf.gz.no_w.cnf', 'or-50-10-8.cnf.gz.no_w.cnf', 'or-50-10-8-UC-10.cnf.gz.no_w.cnf', 'or-50-10-8-UC-20.cnf.gz.no_w.cnf', 'or-50-10-8-UC-30.cnf.gz.no_w.cnf', 'or-50-10-8-UC-40.cnf.gz.no_w.cnf', 'or-50-10-9.cnf.gz.no_w.cnf', 'or-50-10-9-UC-10.cnf.gz.no_w.cnf', 'or-50-10-9-UC-20.cnf.gz.no_w.cnf', 'or-50-10-9-UC-30.cnf.gz.no_w.cnf', 'or-50-10-9-UC-40.cnf.gz.no_w.cnf', 'or-50-20-10.cnf.gz.no_w.cnf', 'or-50-20-10-UC-10.cnf.gz.no_w.cnf', 'or-50-20-10-UC-20.cnf.gz.no_w.cnf', 'or-50-20-10-UC-30.cnf.gz.no_w.cnf', 'or-50-20-10-UC-40.cnf.gz.no_w.cnf', 'or-50-20-1.cnf.gz.no_w.cnf', 'or-50-20-1-UC-10.cnf.gz.no_w.cnf', 'or-50-20-1-UC-20.cnf.gz.no_w.cnf', 'or-50-20-1-UC-30.cnf.gz.no_w.cnf', 'or-50-20-1-UC-40.cnf.gz.no_w.cnf', 'or-50-20-2.cnf.gz.no_w.cnf', 
'or-50-20-2-UC-10.cnf.gz.no_w.cnf', 'or-50-20-2-UC-20.cnf.gz.no_w.cnf', 'or-50-20-2-UC-30.cnf.gz.no_w.cnf', 'or-50-20-2-UC-40.cnf.gz.no_w.cnf', 'or-50-20-3.cnf.gz.no_w.cnf', 'or-50-20-3-UC-10.cnf.gz.no_w.cnf', 'or-50-20-3-UC-20.cnf.gz.no_w.cnf', 'or-50-20-3-UC-30.cnf.gz.no_w.cnf', 'or-50-20-3-UC-40.cnf.gz.no_w.cnf', 'or-50-20-4.cnf.gz.no_w.cnf', 'or-50-20-4-UC-10.cnf.gz.no_w.cnf', 'or-50-20-4-UC-20.cnf.gz.no_w.cnf', 'or-50-20-4-UC-30.cnf.gz.no_w.cnf', 'or-50-20-4-UC-40.cnf.gz.no_w.cnf', 'or-50-20-5.cnf.gz.no_w.cnf', 'or-50-20-5-UC-10.cnf.gz.no_w.cnf', 'or-50-20-5-UC-20.cnf.gz.no_w.cnf', 'or-50-20-5-UC-30.cnf.gz.no_w.cnf', 'or-50-20-5-UC-40.cnf.gz.no_w.cnf', 'or-50-20-6.cnf.gz.no_w.cnf', 'or-50-20-6-UC-10.cnf.gz.no_w.cnf', 'or-50-20-6-UC-20.cnf.gz.no_w.cnf', 'or-50-20-6-UC-30.cnf.gz.no_w.cnf', 'or-50-20-6-UC-40.cnf.gz.no_w.cnf', 'or-50-20-7.cnf.gz.no_w.cnf', 'or-50-20-7-UC-10.cnf.gz.no_w.cnf', 'or-50-20-7-UC-20.cnf.gz.no_w.cnf', 'or-50-20-7-UC-30.cnf.gz.no_w.cnf', 'or-50-20-7-UC-40.cnf.gz.no_w.cnf', 'or-50-20-8.cnf.gz.no_w.cnf', 'or-50-20-8-UC-10.cnf.gz.no_w.cnf', 'or-50-20-8-UC-20.cnf.gz.no_w.cnf', 'or-50-20-8-UC-30.cnf.gz.no_w.cnf', 'or-50-20-8-UC-40.cnf.gz.no_w.cnf', 'or-50-20-9.cnf.gz.no_w.cnf', 'or-50-20-9-UC-10.cnf.gz.no_w.cnf', 'or-50-20-9-UC-20.cnf.gz.no_w.cnf', 'or-50-20-9-UC-30.cnf.gz.no_w.cnf', 'or-50-20-9-UC-40.cnf.gz.no_w.cnf', 'or-50-5-10.cnf.gz.no_w.cnf', 'or-50-5-10-UC-10.cnf.gz.no_w.cnf', 'or-50-5-10-UC-20.cnf.gz.no_w.cnf', 'or-50-5-10-UC-30.cnf.gz.no_w.cnf', 'or-50-5-10-UC-40.cnf.gz.no_w.cnf', 'or-50-5-1.cnf.gz.no_w.cnf', 'or-50-5-1-UC-10.cnf.gz.no_w.cnf', 'or-50-5-1-UC-20.cnf.gz.no_w.cnf', 'or-50-5-1-UC-30.cnf.gz.no_w.cnf', 'or-50-5-1-UC-40.cnf.gz.no_w.cnf', 'or-50-5-2.cnf.gz.no_w.cnf', 'or-50-5-2-UC-10.cnf.gz.no_w.cnf', 'or-50-5-2-UC-20.cnf.gz.no_w.cnf', 'or-50-5-2-UC-30.cnf.gz.no_w.cnf', 'or-50-5-2-UC-40.cnf.gz.no_w.cnf', 'or-50-5-3.cnf.gz.no_w.cnf', 'or-50-5-3-UC-10.cnf.gz.no_w.cnf', 'or-50-5-3-UC-20.cnf.gz.no_w.cnf', 
'or-50-5-3-UC-30.cnf.gz.no_w.cnf', 'or-50-5-3-UC-40.cnf.gz.no_w.cnf', 'or-50-5-4.cnf.gz.no_w.cnf', 'or-50-5-4-UC-10.cnf.gz.no_w.cnf', 'or-50-5-4-UC-20.cnf.gz.no_w.cnf', 'or-50-5-4-UC-30.cnf.gz.no_w.cnf', 'or-50-5-4-UC-40.cnf.gz.no_w.cnf', 'or-50-5-5.cnf.gz.no_w.cnf', 'or-50-5-5-UC-10.cnf.gz.no_w.cnf', 'or-50-5-5-UC-20.cnf.gz.no_w.cnf', 'or-50-5-5-UC-30.cnf.gz.no_w.cnf', 'or-50-5-5-UC-40.cnf.gz.no_w.cnf', 'or-50-5-6.cnf.gz.no_w.cnf', 'or-50-5-6-UC-10.cnf.gz.no_w.cnf', 'or-50-5-6-UC-20.cnf.gz.no_w.cnf', 'or-50-5-6-UC-30.cnf.gz.no_w.cnf', 'or-50-5-6-UC-40.cnf.gz.no_w.cnf', 'or-50-5-7.cnf.gz.no_w.cnf', 'or-50-5-7-UC-10.cnf.gz.no_w.cnf', 'or-50-5-7-UC-20.cnf.gz.no_w.cnf', 'or-50-5-7-UC-30.cnf.gz.no_w.cnf', 'or-50-5-7-UC-40.cnf.gz.no_w.cnf', 'or-50-5-8.cnf.gz.no_w.cnf', 'or-50-5-8-UC-10.cnf.gz.no_w.cnf', 'or-50-5-8-UC-20.cnf.gz.no_w.cnf', 'or-50-5-8-UC-30.cnf.gz.no_w.cnf', 'or-50-5-8-UC-40.cnf.gz.no_w.cnf', 'or-50-5-9.cnf.gz.no_w.cnf', 'or-50-5-9-UC-10.cnf.gz.no_w.cnf', 'or-50-5-9-UC-20.cnf.gz.no_w.cnf', 'or-50-5-9-UC-30.cnf.gz.no_w.cnf', 'or-50-5-9-UC-40.cnf.gz.no_w.cnf', 'or-60-10-10.cnf.gz.no_w.cnf', 'or-60-10-10-UC-10.cnf.gz.no_w.cnf', 'or-60-10-10-UC-20.cnf.gz.no_w.cnf', 'or-60-10-10-UC-30.cnf.gz.no_w.cnf', 'or-60-10-10-UC-40.cnf.gz.no_w.cnf', 'or-60-10-1.cnf.gz.no_w.cnf', 'or-60-10-1-UC-10.cnf.gz.no_w.cnf', 'or-60-10-1-UC-20.cnf.gz.no_w.cnf', 'or-60-10-1-UC-30.cnf.gz.no_w.cnf', 'or-60-10-1-UC-40.cnf.gz.no_w.cnf', 'or-60-10-2.cnf.gz.no_w.cnf', 'or-60-10-2-UC-10.cnf.gz.no_w.cnf', 'or-60-10-2-UC-20.cnf.gz.no_w.cnf', 'or-60-10-2-UC-30.cnf.gz.no_w.cnf', 'or-60-10-2-UC-40.cnf.gz.no_w.cnf', 'or-60-10-3.cnf.gz.no_w.cnf', 'or-60-10-3-UC-10.cnf.gz.no_w.cnf', 'or-60-10-3-UC-20.cnf.gz.no_w.cnf', 'or-60-10-3-UC-30.cnf.gz.no_w.cnf', 'or-60-10-3-UC-40.cnf.gz.no_w.cnf', 'or-60-10-4.cnf.gz.no_w.cnf', 'or-60-10-4-UC-10.cnf.gz.no_w.cnf', 'or-60-10-4-UC-20.cnf.gz.no_w.cnf', 'or-60-10-4-UC-30.cnf.gz.no_w.cnf', 'or-60-10-4-UC-40.cnf.gz.no_w.cnf', 'or-60-10-5.cnf.gz.no_w.cnf', 
'or-60-10-5-UC-10.cnf.gz.no_w.cnf', 'or-60-10-5-UC-20.cnf.gz.no_w.cnf', 'or-60-10-5-UC-30.cnf.gz.no_w.cnf', 'or-60-10-5-UC-40.cnf.gz.no_w.cnf', 'or-60-10-6.cnf.gz.no_w.cnf', 'or-60-10-6-UC-10.cnf.gz.no_w.cnf', 'or-60-10-6-UC-20.cnf.gz.no_w.cnf', 'or-60-10-6-UC-30.cnf.gz.no_w.cnf', 'or-60-10-6-UC-40.cnf.gz.no_w.cnf', 'or-60-10-7.cnf.gz.no_w.cnf', 'or-60-10-7-UC-10.cnf.gz.no_w.cnf', 'or-60-10-7-UC-20.cnf.gz.no_w.cnf', 'or-60-10-7-UC-30.cnf.gz.no_w.cnf', 'or-60-10-7-UC-40.cnf.gz.no_w.cnf', 'or-60-10-8.cnf.gz.no_w.cnf', 'or-60-10-8-UC-10.cnf.gz.no_w.cnf', 'or-60-10-8-UC-20.cnf.gz.no_w.cnf', 'or-60-10-8-UC-30.cnf.gz.no_w.cnf', 'or-60-10-8-UC-40.cnf.gz.no_w.cnf', 'or-60-10-9.cnf.gz.no_w.cnf', 'or-60-10-9-UC-10.cnf.gz.no_w.cnf', 'or-60-10-9-UC-20.cnf.gz.no_w.cnf', 'or-60-10-9-UC-30.cnf.gz.no_w.cnf', 'or-60-10-9-UC-40.cnf.gz.no_w.cnf', 'or-60-20-10.cnf.gz.no_w.cnf', 'or-60-20-10-UC-10.cnf.gz.no_w.cnf', 'or-60-20-10-UC-20.cnf.gz.no_w.cnf', 'or-60-20-10-UC-30.cnf.gz.no_w.cnf', 'or-60-20-10-UC-40.cnf.gz.no_w.cnf', 'or-60-20-1.cnf.gz.no_w.cnf', 'or-60-20-1-UC-10.cnf.gz.no_w.cnf', 'or-60-20-1-UC-20.cnf.gz.no_w.cnf', 'or-60-20-1-UC-30.cnf.gz.no_w.cnf', 'or-60-20-1-UC-40.cnf.gz.no_w.cnf', 'or-60-20-2.cnf.gz.no_w.cnf', 'or-60-20-2-UC-10.cnf.gz.no_w.cnf', 'or-60-20-2-UC-20.cnf.gz.no_w.cnf', 'or-60-20-2-UC-30.cnf.gz.no_w.cnf', 'or-60-20-2-UC-40.cnf.gz.no_w.cnf', 'or-60-20-3.cnf.gz.no_w.cnf', 'or-60-20-3-UC-10.cnf.gz.no_w.cnf', 'or-60-20-3-UC-20.cnf.gz.no_w.cnf', 'or-60-20-3-UC-30.cnf.gz.no_w.cnf', 'or-60-20-3-UC-40.cnf.gz.no_w.cnf', 'or-60-20-4.cnf.gz.no_w.cnf', 'or-60-20-4-UC-10.cnf.gz.no_w.cnf', 'or-60-20-4-UC-20.cnf.gz.no_w.cnf', 'or-60-20-4-UC-30.cnf.gz.no_w.cnf', 'or-60-20-4-UC-40.cnf.gz.no_w.cnf', 'or-60-20-5.cnf.gz.no_w.cnf', 'or-60-20-5-UC-10.cnf.gz.no_w.cnf', 'or-60-20-5-UC-20.cnf.gz.no_w.cnf', 'or-60-20-5-UC-30.cnf.gz.no_w.cnf', 'or-60-20-5-UC-40.cnf.gz.no_w.cnf', 'or-60-20-6.cnf.gz.no_w.cnf', 'or-60-20-6-UC-10.cnf.gz.no_w.cnf', 'or-60-20-6-UC-20.cnf.gz.no_w.cnf', 
'or-60-20-6-UC-30.cnf.gz.no_w.cnf', 'or-60-20-6-UC-40.cnf.gz.no_w.cnf', 'or-60-20-7.cnf.gz.no_w.cnf', 'or-60-20-7-UC-10.cnf.gz.no_w.cnf', 'or-60-20-7-UC-20.cnf.gz.no_w.cnf', 'or-60-20-7-UC-30.cnf.gz.no_w.cnf', 'or-60-20-7-UC-40.cnf.gz.no_w.cnf', 'or-60-20-8.cnf.gz.no_w.cnf', 'or-60-20-8-UC-10.cnf.gz.no_w.cnf', 'or-60-20-8-UC-20.cnf.gz.no_w.cnf', 'or-60-20-8-UC-30.cnf.gz.no_w.cnf', 'or-60-20-8-UC-40.cnf.gz.no_w.cnf', 'or-60-20-9.cnf.gz.no_w.cnf', 'or-60-20-9-UC-10.cnf.gz.no_w.cnf', 'or-60-20-9-UC-20.cnf.gz.no_w.cnf', 'or-60-20-9-UC-30.cnf.gz.no_w.cnf', 'or-60-20-9-UC-40.cnf.gz.no_w.cnf', 'or-60-5-10.cnf.gz.no_w.cnf', 'or-60-5-10-UC-10.cnf.gz.no_w.cnf', 'or-60-5-10-UC-20.cnf.gz.no_w.cnf', 'or-60-5-10-UC-30.cnf.gz.no_w.cnf', 'or-60-5-10-UC-40.cnf.gz.no_w.cnf', 'or-60-5-1.cnf.gz.no_w.cnf', 'or-60-5-1-UC-10.cnf.gz.no_w.cnf', 'or-60-5-1-UC-20.cnf.gz.no_w.cnf', 'or-60-5-1-UC-30.cnf.gz.no_w.cnf', 'or-60-5-1-UC-40.cnf.gz.no_w.cnf', 'or-60-5-2.cnf.gz.no_w.cnf', 'or-60-5-2-UC-10.cnf.gz.no_w.cnf', 'or-60-5-2-UC-20.cnf.gz.no_w.cnf', 'or-60-5-2-UC-30.cnf.gz.no_w.cnf', 'or-60-5-2-UC-40.cnf.gz.no_w.cnf', 'or-60-5-3.cnf.gz.no_w.cnf', 'or-60-5-3-UC-10.cnf.gz.no_w.cnf', 'or-60-5-3-UC-20.cnf.gz.no_w.cnf', 'or-60-5-3-UC-30.cnf.gz.no_w.cnf', 'or-60-5-3-UC-40.cnf.gz.no_w.cnf', 'or-60-5-4.cnf.gz.no_w.cnf', 'or-60-5-4-UC-10.cnf.gz.no_w.cnf', 'or-60-5-4-UC-20.cnf.gz.no_w.cnf', 'or-60-5-4-UC-30.cnf.gz.no_w.cnf', 'or-60-5-4-UC-40.cnf.gz.no_w.cnf', 'or-60-5-5.cnf.gz.no_w.cnf', 'or-60-5-5-UC-10.cnf.gz.no_w.cnf', 'or-60-5-5-UC-20.cnf.gz.no_w.cnf', 'or-60-5-5-UC-30.cnf.gz.no_w.cnf', 'or-60-5-5-UC-40.cnf.gz.no_w.cnf', 'or-60-5-6.cnf.gz.no_w.cnf', 'or-60-5-6-UC-10.cnf.gz.no_w.cnf', 'or-60-5-6-UC-20.cnf.gz.no_w.cnf', 'or-60-5-6-UC-30.cnf.gz.no_w.cnf', 'or-60-5-6-UC-40.cnf.gz.no_w.cnf', 'or-60-5-7.cnf.gz.no_w.cnf', 'or-60-5-7-UC-10.cnf.gz.no_w.cnf', 'or-60-5-7-UC-20.cnf.gz.no_w.cnf', 'or-60-5-7-UC-30.cnf.gz.no_w.cnf', 'or-60-5-7-UC-40.cnf.gz.no_w.cnf', 'or-60-5-8.cnf.gz.no_w.cnf', 
'or-60-5-8-UC-10.cnf.gz.no_w.cnf', 'or-60-5-8-UC-20.cnf.gz.no_w.cnf', 'or-60-5-8-UC-30.cnf.gz.no_w.cnf', 'or-60-5-8-UC-40.cnf.gz.no_w.cnf', 'or-60-5-9.cnf.gz.no_w.cnf', 'or-60-5-9-UC-10.cnf.gz.no_w.cnf', 'or-60-5-9-UC-20.cnf.gz.no_w.cnf', 'or-60-5-9-UC-30.cnf.gz.no_w.cnf', 'or-60-5-9-UC-40.cnf.gz.no_w.cnf', 'or-70-10-10.cnf.gz.no_w.cnf', 'or-70-10-10-UC-10.cnf.gz.no_w.cnf', 'or-70-10-10-UC-20.cnf.gz.no_w.cnf', 'or-70-10-10-UC-30.cnf.gz.no_w.cnf', 'or-70-10-10-UC-40.cnf.gz.no_w.cnf', 'or-70-10-1.cnf.gz.no_w.cnf', 'or-70-10-1-UC-10.cnf.gz.no_w.cnf', 'or-70-10-1-UC-20.cnf.gz.no_w.cnf', 'or-70-10-1-UC-30.cnf.gz.no_w.cnf', 'or-70-10-1-UC-40.cnf.gz.no_w.cnf', 'or-70-10-2.cnf.gz.no_w.cnf', 'or-70-10-2-UC-10.cnf.gz.no_w.cnf', 'or-70-10-2-UC-20.cnf.gz.no_w.cnf', 'or-70-10-2-UC-30.cnf.gz.no_w.cnf', 'or-70-10-2-UC-40.cnf.gz.no_w.cnf', 'or-70-10-3.cnf.gz.no_w.cnf', 'or-70-10-3-UC-10.cnf.gz.no_w.cnf', 'or-70-10-3-UC-20.cnf.gz.no_w.cnf', 'or-70-10-3-UC-30.cnf.gz.no_w.cnf', 'or-70-10-3-UC-40.cnf.gz.no_w.cnf', 'or-70-10-4.cnf.gz.no_w.cnf', 'or-70-10-4-UC-10.cnf.gz.no_w.cnf', 'or-70-10-4-UC-20.cnf.gz.no_w.cnf', 'or-70-10-4-UC-30.cnf.gz.no_w.cnf', 'or-70-10-4-UC-40.cnf.gz.no_w.cnf', 'or-70-10-5.cnf.gz.no_w.cnf', 'or-70-10-5-UC-10.cnf.gz.no_w.cnf', 'or-70-10-5-UC-20.cnf.gz.no_w.cnf', 'or-70-10-5-UC-30.cnf.gz.no_w.cnf', 'or-70-10-5-UC-40.cnf.gz.no_w.cnf', 'or-70-10-6.cnf.gz.no_w.cnf', 'or-70-10-6-UC-10.cnf.gz.no_w.cnf', 'or-70-10-6-UC-20.cnf.gz.no_w.cnf', 'or-70-10-6-UC-30.cnf.gz.no_w.cnf', 'or-70-10-6-UC-40.cnf.gz.no_w.cnf', 'or-70-10-7.cnf.gz.no_w.cnf', 'or-70-10-7-UC-10.cnf.gz.no_w.cnf', 'or-70-10-7-UC-20.cnf.gz.no_w.cnf', 'or-70-10-7-UC-30.cnf.gz.no_w.cnf', 'or-70-10-7-UC-40.cnf.gz.no_w.cnf', 'or-70-10-8.cnf.gz.no_w.cnf', 'or-70-10-8-UC-10.cnf.gz.no_w.cnf', 'or-70-10-8-UC-20.cnf.gz.no_w.cnf', 'or-70-10-8-UC-30.cnf.gz.no_w.cnf', 'or-70-10-8-UC-40.cnf.gz.no_w.cnf', 'or-70-10-9.cnf.gz.no_w.cnf', 'or-70-10-9-UC-10.cnf.gz.no_w.cnf', 'or-70-10-9-UC-20.cnf.gz.no_w.cnf', 
'or-70-10-9-UC-30.cnf.gz.no_w.cnf', 'or-70-10-9-UC-40.cnf.gz.no_w.cnf', 'or-70-20-10.cnf.gz.no_w.cnf', 'or-70-20-10-UC-10.cnf.gz.no_w.cnf', 'or-70-20-10-UC-20.cnf.gz.no_w.cnf', 'or-70-20-10-UC-30.cnf.gz.no_w.cnf', 'or-70-20-10-UC-40.cnf.gz.no_w.cnf', 'or-70-20-1.cnf.gz.no_w.cnf', 'or-70-20-1-UC-10.cnf.gz.no_w.cnf', 'or-70-20-1-UC-20.cnf.gz.no_w.cnf', 'or-70-20-1-UC-30.cnf.gz.no_w.cnf', 'or-70-20-1-UC-40.cnf.gz.no_w.cnf', 'or-70-20-2.cnf.gz.no_w.cnf', 'or-70-20-2-UC-10.cnf.gz.no_w.cnf', 'or-70-20-2-UC-20.cnf.gz.no_w.cnf', 'or-70-20-2-UC-30.cnf.gz.no_w.cnf', 'or-70-20-2-UC-40.cnf.gz.no_w.cnf', 'or-70-20-3.cnf.gz.no_w.cnf', 'or-70-20-3-UC-10.cnf.gz.no_w.cnf', 'or-70-20-3-UC-20.cnf.gz.no_w.cnf', 'or-70-20-3-UC-30.cnf.gz.no_w.cnf', 'or-70-20-3-UC-40.cnf.gz.no_w.cnf', 'or-70-20-4.cnf.gz.no_w.cnf', 'or-70-20-4-UC-10.cnf.gz.no_w.cnf', 'or-70-20-4-UC-20.cnf.gz.no_w.cnf', 'or-70-20-4-UC-30.cnf.gz.no_w.cnf', 'or-70-20-4-UC-40.cnf.gz.no_w.cnf', 'or-70-20-5.cnf.gz.no_w.cnf', 'or-70-20-5-UC-10.cnf.gz.no_w.cnf', 'or-70-20-5-UC-20.cnf.gz.no_w.cnf', 'or-70-20-5-UC-30.cnf.gz.no_w.cnf', 'or-70-20-5-UC-40.cnf.gz.no_w.cnf', 'or-70-20-6.cnf.gz.no_w.cnf', 'or-70-20-6-UC-10.cnf.gz.no_w.cnf', 'or-70-20-6-UC-20.cnf.gz.no_w.cnf', 'or-70-20-6-UC-30.cnf.gz.no_w.cnf', 'or-70-20-6-UC-40.cnf.gz.no_w.cnf', 'or-70-20-7.cnf.gz.no_w.cnf', 'or-70-20-7-UC-10.cnf.gz.no_w.cnf', 'or-70-20-7-UC-20.cnf.gz.no_w.cnf', 'or-70-20-7-UC-30.cnf.gz.no_w.cnf', 'or-70-20-7-UC-40.cnf.gz.no_w.cnf', 'or-70-20-8.cnf.gz.no_w.cnf', 'or-70-20-8-UC-10.cnf.gz.no_w.cnf', 'or-70-20-8-UC-20.cnf.gz.no_w.cnf', 'or-70-20-8-UC-30.cnf.gz.no_w.cnf', 'or-70-20-8-UC-40.cnf.gz.no_w.cnf', 'or-70-20-9.cnf.gz.no_w.cnf', 'or-70-20-9-UC-10.cnf.gz.no_w.cnf', 'or-70-20-9-UC-20.cnf.gz.no_w.cnf', 'or-70-20-9-UC-30.cnf.gz.no_w.cnf', 'or-70-20-9-UC-40.cnf.gz.no_w.cnf', 'or-70-5-10.cnf.gz.no_w.cnf', 'or-70-5-10-UC-10.cnf.gz.no_w.cnf', 'or-70-5-10-UC-20.cnf.gz.no_w.cnf', 'or-70-5-10-UC-30.cnf.gz.no_w.cnf', 'or-70-5-10-UC-40.cnf.gz.no_w.cnf', 
'or-70-5-1.cnf.gz.no_w.cnf', 'or-70-5-1-UC-10.cnf.gz.no_w.cnf', 'or-70-5-1-UC-20.cnf.gz.no_w.cnf', 'or-70-5-1-UC-30.cnf.gz.no_w.cnf', 'or-70-5-1-UC-40.cnf.gz.no_w.cnf', 'or-70-5-2.cnf.gz.no_w.cnf', 'or-70-5-2-UC-10.cnf.gz.no_w.cnf', 'or-70-5-2-UC-20.cnf.gz.no_w.cnf', 'or-70-5-2-UC-30.cnf.gz.no_w.cnf', 'or-70-5-2-UC-40.cnf.gz.no_w.cnf', 'or-70-5-3.cnf.gz.no_w.cnf', 'or-70-5-3-UC-10.cnf.gz.no_w.cnf', 'or-70-5-3-UC-20.cnf.gz.no_w.cnf', 'or-70-5-3-UC-30.cnf.gz.no_w.cnf', 'or-70-5-3-UC-40.cnf.gz.no_w.cnf', 'or-70-5-4.cnf.gz.no_w.cnf', 'or-70-5-4-UC-10.cnf.gz.no_w.cnf', 'or-70-5-4-UC-20.cnf.gz.no_w.cnf', 'or-70-5-4-UC-30.cnf.gz.no_w.cnf', 'or-70-5-4-UC-40.cnf.gz.no_w.cnf', 'or-70-5-5.cnf.gz.no_w.cnf', 'or-70-5-5-UC-10.cnf.gz.no_w.cnf', 'or-70-5-5-UC-20.cnf.gz.no_w.cnf', 'or-70-5-5-UC-30.cnf.gz.no_w.cnf', 'or-70-5-5-UC-40.cnf.gz.no_w.cnf', 'or-70-5-6.cnf.gz.no_w.cnf', 'or-70-5-6-UC-10.cnf.gz.no_w.cnf', 'or-70-5-6-UC-20.cnf.gz.no_w.cnf', 'or-70-5-6-UC-30.cnf.gz.no_w.cnf', 'or-70-5-6-UC-40.cnf.gz.no_w.cnf', 'or-70-5-7.cnf.gz.no_w.cnf', 'or-70-5-7-UC-10.cnf.gz.no_w.cnf', 'or-70-5-7-UC-20.cnf.gz.no_w.cnf', 'or-70-5-7-UC-30.cnf.gz.no_w.cnf', 'or-70-5-7-UC-40.cnf.gz.no_w.cnf', 'or-70-5-8.cnf.gz.no_w.cnf', 'or-70-5-8-UC-10.cnf.gz.no_w.cnf', 'or-70-5-8-UC-20.cnf.gz.no_w.cnf', 'or-70-5-8-UC-30.cnf.gz.no_w.cnf', 'or-70-5-8-UC-40.cnf.gz.no_w.cnf', 'or-70-5-9.cnf.gz.no_w.cnf', 'or-70-5-9-UC-10.cnf.gz.no_w.cnf', 'or-70-5-9-UC-20.cnf.gz.no_w.cnf', 'or-70-5-9-UC-30.cnf.gz.no_w.cnf', 'or-70-5-9-UC-40.cnf.gz.no_w.cnf', 'parity.sk_11_11.cnf.gz.no_w.cnf', 'partition.sk_22_155.cnf.gz.no_w.cnf', 'PhaseService.sk_14_27.cnf.gz.no_w.cnf', 'Pollard.sk_1_10.cnf.gz.no_w.cnf', 'polynomial.sk_7_25.cnf.gz.no_w.cnf', 'ProcessBean.sk_8_64.cnf.gz.no_w.cnf', 'prod-16.cnf.gz.no_w.cnf', 'prod-1s.cnf.gz.no_w.cnf', 'prod-20.cnf.gz.no_w.cnf', 'prod-24.cnf.gz.no_w.cnf', 'prod-28.cnf.gz.no_w.cnf', 'prod-2.cnf.gz.no_w.cnf', 'prod-2s.cnf.gz.no_w.cnf', 'prod-32.cnf.gz.no_w.cnf', 'prod-3s.cnf.gz.no_w.cnf', 
'prod-4.cnf.gz.no_w.cnf', 'prod-4s.cnf.gz.no_w.cnf', 'prod-8.cnf.gz.no_w.cnf', 'prod-8s.cnf.gz.no_w.cnf', 'ProjectService3.sk_12_55.cnf.gz.no_w.cnf', 'registerlesSwap.sk_3_10.cnf.gz.no_w.cnf', 'reverse.sk_11_258.cnf.gz.no_w.cnf', 's1196a_15_7.cnf.gz.no_w.cnf', 's1196a_3_2.cnf.gz.no_w.cnf', 's1196a_7_4.cnf.gz.no_w.cnf', 's1238a_15_7.cnf.gz.no_w.cnf', 's1238a_3_2.cnf.gz.no_w.cnf', 's1238a_7_4.cnf.gz.no_w.cnf', 's13207a_15_7.cnf.gz.no_w.cnf', 's13207a_3_2.cnf.gz.no_w.cnf', 's13207a_7_4.cnf.gz.no_w.cnf', 's1423a_15_7.cnf.gz.no_w.cnf', 's1423a_3_2.cnf.gz.no_w.cnf', 's1423a_7_4.cnf.gz.no_w.cnf', 's1488_15_7.cnf.gz.no_w.cnf', 's1488_3_2.cnf.gz.no_w.cnf', 's1488_7_4.cnf.gz.no_w.cnf', 's15850a_15_7.cnf.gz.no_w.cnf', 's15850a_3_2.cnf.gz.no_w.cnf', 's15850a_7_4.cnf.gz.no_w.cnf', 's27_15_7.cnf.gz.no_w.cnf', 's27_3_2.cnf.gz.no_w.cnf', 's27_7_4.cnf.gz.no_w.cnf', 's27_new_15_7.cnf.gz.no_w.cnf', 's27_new_3_2.cnf.gz.no_w.cnf', 's27_new_7_4.cnf.gz.no_w.cnf', 's298_15_7.cnf.gz.no_w.cnf', 's298_3_2.cnf.gz.no_w.cnf', 's298_7_4.cnf.gz.no_w.cnf', 's344_15_7.cnf.gz.no_w.cnf', 's344_3_2.cnf.gz.no_w.cnf', 's344_7_4.cnf.gz.no_w.cnf', 's349_15_7.cnf.gz.no_w.cnf', 's349_3_2.cnf.gz.no_w.cnf', 's349_7_4.cnf.gz.no_w.cnf', 's35932_15_7.cnf.gz.no_w.cnf', 's35932_3_2.cnf.gz.no_w.cnf', 's35932_7_4.cnf.gz.no_w.cnf', 's382_15_7.cnf.gz.no_w.cnf', 's382_3_2.cnf.gz.no_w.cnf', 's382_7_4.cnf.gz.no_w.cnf', 's38417_15_7.cnf.gz.no_w.cnf', 's38417_3_2.cnf.gz.no_w.cnf', 's38417_7_4.cnf.gz.no_w.cnf', 's38584_15_7.cnf.gz.no_w.cnf', 's38584_3_2.cnf.gz.no_w.cnf', 's38584_7_4.cnf.gz.no_w.cnf', 's420_15_7.cnf.gz.no_w.cnf', 's420_3_2.cnf.gz.no_w.cnf', 's420_7_4.cnf.gz.no_w.cnf', 's420_new1_15_7.cnf.gz.no_w.cnf', 's420_new1_3_2.cnf.gz.no_w.cnf', 's420_new_15_7.cnf.gz.no_w.cnf', 's420_new1_7_4.cnf.gz.no_w.cnf', 's420_new_3_2.cnf.gz.no_w.cnf', 's420_new_7_4.cnf.gz.no_w.cnf', 's444_15_7.cnf.gz.no_w.cnf', 's444_3_2.cnf.gz.no_w.cnf', 's444_7_4.cnf.gz.no_w.cnf', 's510_15_7.cnf.gz.no_w.cnf', 's510_3_2.cnf.gz.no_w.cnf', 
's510_7_4.cnf.gz.no_w.cnf', 's526_15_7.cnf.gz.no_w.cnf', 's526_3_2.cnf.gz.no_w.cnf', 's526_7_4.cnf.gz.no_w.cnf', 's526a_15_7.cnf.gz.no_w.cnf', 's526a_3_2.cnf.gz.no_w.cnf', 's526a_7_4.cnf.gz.no_w.cnf', 's5378a_15_7.cnf.gz.no_w.cnf', 's5378a_3_2.cnf.gz.no_w.cnf', 's5378a_7_4.cnf.gz.no_w.cnf', 's641_15_7.cnf.gz.no_w.cnf', 's641_3_2.cnf.gz.no_w.cnf', 's641_7_4.cnf.gz.no_w.cnf', 's713_15_7.cnf.gz.no_w.cnf', 's713_3_2.cnf.gz.no_w.cnf', 's713_7_4.cnf.gz.no_w.cnf', 's820a_15_7.cnf.gz.no_w.cnf', 's820a_3_2.cnf.gz.no_w.cnf', 's820a_7_4.cnf.gz.no_w.cnf', 's832a_15_7.cnf.gz.no_w.cnf', 's832a_3_2.cnf.gz.no_w.cnf', 's832a_7_4.cnf.gz.no_w.cnf', 's838_15_7.cnf.gz.no_w.cnf', 's838_3_2.cnf.gz.no_w.cnf', 's838_7_4.cnf.gz.no_w.cnf', 's9234a_15_7.cnf.gz.no_w.cnf', 's9234a_3_2.cnf.gz.no_w.cnf', 's9234a_7_4.cnf.gz.no_w.cnf', 's953a_15_7.cnf.gz.no_w.cnf', 's953a_3_2.cnf.gz.no_w.cnf', 's953a_7_4.cnf.gz.no_w.cnf', 'SetTest.sk_9_21.cnf.gz.no_w.cnf', 'signedAvg.sk_8_1020.cnf.gz.no_w.cnf', 'sort.sk_8_52.cnf.gz.no_w.cnf', 'tableBasedAddition.sk_240_1024.cnf.gz.no_w.cnf', 'tire-1.cnf.gz.no_w.cnf', 'tire-2.cnf.gz.no_w.cnf', 'tire-3.cnf.gz.no_w.cnf', 'tire-4.cnf.gz.no_w.cnf', 'tutorial1.sk_1_1.cnf.gz.no_w.cnf', 'tutorial2.sk_3_4.cnf.gz.no_w.cnf', 'tutorial3.sk_4_31.cnf.gz.no_w.cnf', 'UserServiceImpl.sk_8_32.cnf.gz.no_w.cnf', 'xpose.sk_6_134.cnf.gz.no_w.cnf']
# PROBLEM_NAMES = ['hypercube.cnf', 'hypercube1.cnf', 'hypercube2.cnf', 'hypercube3.cnf', 'hypercube4.cnf', 'hypercube5.cnf', 'hypercube6.cnf', 'hypercube7.cnf']
# these problems report a timeout when using marginals with random chunks, but their SAT time is under 100s and the one or two tested by hand seem fast
# PROBLEM_NAMES = ['90-26-10-q.cnf.gz.no_w.cnf', '90-42-7-q.cnf.gz.no_w.cnf', '50-20-9-q.cnf.gz.no_w.cnf', '75-25-4-q.cnf.gz.no_w.cnf', '75-22-10-q.cnf.gz.no_w.cnf', '75-21-4-q.cnf.gz.no_w.cnf', '75-23-2-q.cnf.gz.no_w.cnf', '75-22-1-q.cnf.gz.no_w.cnf', '90-50-3-q.cnf.gz.no_w.cnf', '90-26-4-q.cnf.gz.no_w.cnf', 's38417_7_4.cnf.gz.no_w.cnf', '90-25-1-q.cnf.gz.no_w.cnf', '75-21-8-q.cnf.gz.no_w.cnf', '90-30-1-q.cnf.gz.no_w.cnf', '90-34-1-q.cnf.gz.no_w.cnf', '90-21-10-q.cnf.gz.no_w.cnf', '75-21-5-q.cnf.gz.no_w.cnf', '75-24-6-q.cnf.gz.no_w.cnf', '75-23-3-q.cnf.gz.no_w.cnf', '90-22-5-q.cnf.gz.no_w.cnf', '75-25-10-q.cnf.gz.no_w.cnf', '90-26-5-q.cnf.gz.no_w.cnf', '75-25-9-q.cnf.gz.no_w.cnf', '75-23-10-q.cnf.gz.no_w.cnf', '90-46-5-q.cnf.gz.no_w.cnf', '90-42-5-q.cnf.gz.no_w.cnf', '90-34-2-q.cnf.gz.no_w.cnf', '90-42-9-q.cnf.gz.no_w.cnf', '90-46-9-q.cnf.gz.no_w.cnf', '90-30-2-q.cnf.gz.no_w.cnf', '75-26-3-q.cnf.gz.no_w.cnf', '90-50-1-q.cnf.gz.no_w.cnf', '75-22-3-q.cnf.gz.no_w.cnf', '75-25-6-q.cnf.gz.no_w.cnf', '75-24-5-q.cnf.gz.no_w.cnf', '75-21-6-q.cnf.gz.no_w.cnf', '75-24-9-q.cnf.gz.no_w.cnf', '90-38-10-q.cnf.gz.no_w.cnf', '90-25-3-q.cnf.gz.no_w.cnf', '75-20-9-q.cnf.gz.no_w.cnf', '90-26-6-q.cnf.gz.no_w.cnf', '90-23-5-q.cnf.gz.no_w.cnf', '90-42-4-q.cnf.gz.no_w.cnf', '50-20-6-q.cnf.gz.no_w.cnf', '75-17-6-q.cnf.gz.no_w.cnf', '75-24-10-q.cnf.gz.no_w.cnf', '90-30-3-q.cnf.gz.no_w.cnf', '90-34-3-q.cnf.gz.no_w.cnf', '90-23-8-q.cnf.gz.no_w.cnf', '75-22-2-q.cnf.gz.no_w.cnf', '90-30-10-q.cnf.gz.no_w.cnf', '75-26-2-q.cnf.gz.no_w.cnf', '75-23-1-q.cnf.gz.no_w.cnf', '75-21-7-q.cnf.gz.no_w.cnf', '75-24-4-q.cnf.gz.no_w.cnf', '75-20-4-q.cnf.gz.no_w.cnf', '75-25-7-q.cnf.gz.no_w.cnf', '75-20-8-q.cnf.gz.no_w.cnf', '75-24-8-q.cnf.gz.no_w.cnf', '90-30-4-q.cnf.gz.no_w.cnf', '90-34-4-q.cnf.gz.no_w.cnf', '90-38-4-q.cnf.gz.no_w.cnf', '90-38-8-q.cnf.gz.no_w.cnf', '90-19-2-q.cnf.gz.no_w.cnf', '90-42-3-q.cnf.gz.no_w.cnf', '90-30-8-q.cnf.gz.no_w.cnf', '75-26-9-q.cnf.gz.no_w.cnf', '75-22-9-q.cnf.gz.no_w.cnf', 
'90-25-5-q.cnf.gz.no_w.cnf', '75-24-3-q.cnf.gz.no_w.cnf', '90-25-9-q.cnf.gz.no_w.cnf', '75-22-5-q.cnf.gz.no_w.cnf', '75-26-5-q.cnf.gz.no_w.cnf', '75-23-6-q.cnf.gz.no_w.cnf', '90-34-5-q.cnf.gz.no_w.cnf', '90-30-5-q.cnf.gz.no_w.cnf', '75-19-6-q.cnf.gz.no_w.cnf', '90-38-5-q.cnf.gz.no_w.cnf', '75-18-9-q.cnf.gz.no_w.cnf', '90-34-10-q.cnf.gz.no_w.cnf', '90-46-2-q.cnf.gz.no_w.cnf', '90-30-9-q.cnf.gz.no_w.cnf', '90-34-9-q.cnf.gz.no_w.cnf', '75-22-8-q.cnf.gz.no_w.cnf', '75-20-10-q.cnf.gz.no_w.cnf', '90-25-4-q.cnf.gz.no_w.cnf', '90-24-7-q.cnf.gz.no_w.cnf', '75-25-1-q.cnf.gz.no_w.cnf', '75-20-2-q.cnf.gz.no_w.cnf', '75-21-1-q.cnf.gz.no_w.cnf', '75-23-7-q.cnf.gz.no_w.cnf', '75-26-4-q.cnf.gz.no_w.cnf', '90-50-6-q.cnf.gz.no_w.cnf', '75-18-6-q.cnf.gz.no_w.cnf', '90-38-6-q.cnf.gz.no_w.cnf', '90-30-6-q.cnf.gz.no_w.cnf', '90-34-6-q.cnf.gz.no_w.cnf', '90-46-1-q.cnf.gz.no_w.cnf', 's38417_3_2.cnf.gz.no_w.cnf', '90-24-4-q.cnf.gz.no_w.cnf', '90-22-2-q.cnf.gz.no_w.cnf', '75-23-8-q.cnf.gz.no_w.cnf', '90-50-9-q.cnf.gz.no_w.cnf', '75-22-7-q.cnf.gz.no_w.cnf', '90-50-5-q.cnf.gz.no_w.cnf', '75-26-7-q.cnf.gz.no_w.cnf', '75-23-4-q.cnf.gz.no_w.cnf', '50-18-8-q.cnf.gz.no_w.cnf', '75-21-2-q.cnf.gz.no_w.cnf', '75-24-1-q.cnf.gz.no_w.cnf', '75-25-2-q.cnf.gz.no_w.cnf', '90-22-10-q.cnf.gz.no_w.cnf', '90-38-7-q.cnf.gz.no_w.cnf', '90-34-7-q.cnf.gz.no_w.cnf', '90-30-7-q.cnf.gz.no_w.cnf', '75-21-10-q.cnf.gz.no_w.cnf', '90-42-10-q.cnf.gz.no_w.cnf', '90-25-6-q.cnf.gz.no_w.cnf', '90-26-3-q.cnf.gz.no_w.cnf', '90-22-3-q.cnf.gz.no_w.cnf', '90-25-10-q.cnf.gz.no_w.cnf', '75-23-9-q.cnf.gz.no_w.cnf', '50-18-9-q.cnf.gz.no_w.cnf', '75-23-5-q.cnf.gz.no_w.cnf', '75-22-6-q.cnf.gz.no_w.cnf', '90-50-4-q.cnf.gz.no_w.cnf', '90-24-9-q.cnf.gz.no_w.cnf', '75-21-3-q.cnf.gz.no_w.cnf']
for problem_name in PROBLEM_NAMES:
    for repeats_per_experiment in [2]:
        cur_spec = {
            'problem_name': problem_name,
            'repeats': repeats_per_experiment,
        }
        all_fireworks.append(Firework(RunSpecificExperimentBatch(), spec=cur_spec))
firework_dependencies = {}
workflow = Workflow(all_fireworks, firework_dependencies)
if TEST_LOCAL:
    launchpad.add_wf(workflow)
    rapidfire(launchpad, FWorker())
else:
    launchpad.add_wf(workflow)
    qadapter = CommonAdapter.from_file("%s/my_qadapter.yaml" % HOME_DIRECTORY)
    rapidfire(launchpad, FWorker(), qadapter, launch_dir='.', nlaunches='infinite', njobs_queue=NJOBS_QUEUE,
              njobs_block=500, sleep_time=None, reserve=False, strm_lvl='INFO', timeout=None,
              fill_mode=False)
def dsharp_call_from_python(problem_name, time_limit, problem_directory='/atlas/u/jkuck/approxmc/counting2/'):
    global SAT_SOLVER_TIME
    SAT_SOLVER_TIME = 0
    input_filename = '%s/%s' % (problem_directory, problem_name)
    t0 = resource.getrusage(resource.RUSAGE_CHILDREN).ru_utime
    time_out, solution_count = dsharp_count(formula=input_filename, time_limit=time_limit)
    t1 = resource.getrusage(resource.RUSAGE_CHILDREN).ru_utime
    dsharp_time = t1 - t0
    return time_out, solution_count, dsharp_time
def dsharp_count(formula, time_limit):
    """
    Count the number of solutions of a CNF formula by invoking dsharp.

    :param formula: The full pathname of the formula file
    :param time_limit: Maximum run time (in sec) allowed
    :return: a pair (t, n) such that:
        t: [boolean] has the time_limit been reached without finishing?
        n: number of solutions (if t is False)
    """
    sh_cmd = '{dsharp_exe} {form}'.format(form=formula,
                                          dsharp_exe=DSHARP_EXECUTABLE)
    n_sol, _ = execute_cmd(sh_cmd, time_limit, parse_dsharp_output,
                           plain_timeout=True, count_time=False)
    # NaN signals a timeout or unparsable output; math.isnan catches both
    # math.nan and np.nan, which an identity check against math.nan would miss
    timed_out = isinstance(n_sol, float) and math.isnan(n_sol)
    return timed_out, n_sol
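######################### illustration: NaN-as-timeout convention #########################
# The boolean returned above is simply "is the solution count NaN?". A tiny,
# self-contained sketch of that convention (_timed_out is illustrative, not
# part of the original module):
import math

def _timed_out(n_sol):
    # NaN (whether math.nan or np.nan) marks a timeout / unparsable output;
    # a genuine integer model count is never NaN
    return isinstance(n_sol, float) and math.isnan(n_sol)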
def parse_dsharp_output(output):
    """
    Parse the output of dsharp.

    If the number of solutions is found, it is reported in the first
    element of the returned pair; otherwise it is math.nan.
    The second element of the returned pair is 0, to conform with the
    protocol for the last element of the tuple returned by output parsers.

    :param output: The dsharp output. If None then a timeout occurred
    :return: A pair of values
    """
    if output is None:
        return math.nan, 0
    nsol = math.nan
    all_lines = output.split('\n')
    for line in all_lines:
        line = line.strip()
        if line.startswith('# of solutions:'):
            solution_count = line.split('\t')[-1]
            if solution_count in ['-nan', 'inf']:
                nsol = math.nan
            else:
                # Decimal handles counts printed in scientific notation
                nsol = int(decimal.Decimal(solution_count))
    return nsol, 0
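######################### illustration: parsing the '# of solutions:' line #########################
# Self-contained sketch of the parsing step above (a standalone copy for
# illustration, not the module's own function); the sample strings in the
# comments are fabricated.
import math
import decimal

def _parse_solution_count(output):
    # returns the reported model count, or NaN if absent / overflowed / timed out
    if output is None:
        return math.nan
    for line in output.split('\n'):
        line = line.strip()
        if line.startswith('# of solutions:'):
            count = line.split('\t')[-1]
            if count in ['-nan', 'inf']:
                return math.nan
            # Decimal handles counts printed in scientific notation, e.g. '1.2E+2'
            return int(decimal.Decimal(count))
    return math.nan

# e.g. _parse_solution_count("c solving...\n# of solutions:\t1.2E+2") -> 120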
if __name__=="__main__":
run_experiment()
######################### Fireworks info copied from another project #########################
# If the database thinks a firework is still running, but no jobs are running on the cluster, try:
# $ lpad detect_lostruns --time 1 --refresh
#
# If a firework fizzles and you are trying to find the error/output, note the fireworks fw_id
# in the online database, then search for this fw_id in the launcher block, e.g.:
# $ cd block_2017-11-01-07-30-53-457640
# $ pt 'fw_id: 34'
# or on atlas-ws-6 use silver searcher:
# $ ag 'fw_id: 34'
#
#Note, on Atlas before this script:
# start a krbscreen session:
# $ krbscreen #reattach using $ screen -rx
# $ reauth #important so that jobs can be submitted after logging out, enter password
#
# $ export PATH=/opt/rh/python27/root/usr/bin:$PATH
# $ export LD_LIBRARY_PATH=/opt/rh/python27/root/usr/lib64/:$LD_LIBRARY_PATH
# $ PACKAGE_DIR=/atlas/u/jkuck/software
# $ export PATH=$PACKAGE_DIR/anaconda2/bin:$PATH
# $ export LD_LIBRARY_PATH=$PACKAGE_DIR/anaconda2/local:$LD_LIBRARY_PATH
# $ source activate anaconda_venv
# $ cd /atlas/u/jkuck/rbpf_fireworks/
#
# To install anaconda packages run, e.g.:
# $ conda install -c matsci fireworks=1.3.9
#
#May need to run $ kinit -r 30d
#
# Add the following line to the file ~/.bashrc.user on Atlas:
# export PYTHONPATH="/atlas/u/jkuck/rbpf_fireworks:$PYTHONPATH"
# Weird, but to run commands like "lpad -l my_launchpad.yaml get_fws",
# add the following line to the file ~/.bashrc.user on Atlas:
# export PYTHONPATH="${PYTHONPATH}:/atlas/u/jkuck/rbpf_fireworks/KITTI_helpers/"
#
# To install cvxpy on atlas run (hopefully):
#
#$ export PATH=/opt/rh/python27/root/usr/bin:$PATH
#$ export LD_LIBRARY_PATH=/opt/rh/python27/root/usr/lib64/:$LD_LIBRARY_PATH
#$ pip install --user numpy
#$ pip install --user cvxpy
#
# Install pymatgen:
#$ pip install --user pymatgen
##########################################################################################
#
#Note, on Sherlock before this script:
#$ ml load python/2.7.5
#$ easy_install-2.7 --user pip
#$ export PATH=~/.local/bin:$PATH
#$ pip2.7 install --user fireworks #and others
#$ pip2.7 install --user filterpy
#$ pip2.7 install --user scipy --upgrade
#$ pip2.7 install --user munkres
#$ pip2.7 install --user pymatgen
#$ cd /scratch/users/kuck/rbpf_fireworks/
#
# Add the following line to the file ~/.bashrc on Sherlock:
# export PYTHONPATH="/scratch/users/kuck/rbpf_fireworks:$PYTHONPATH"
# Weird, but to run commands like "lpad -l my_launchpad.yaml get_fws",
# add the following line to the file ~/.bashrc on Sherlock:
# export PYTHONPATH="${PYTHONPATH}:/scratch/users/kuck/rbpf_fireworks/KITTI_helpers/"
#
#
# When setting up:
# - make cluster_config.py file
# - make my_qadapter.yaml file (look at fireworks workflow manager website for info)
#
# To install cvxpy on sherlock run:
# $ pip2.7 install --user cvxpy
######################### tasks/models/template/__init__.py (heolin123/funcrowd, MIT) #########################
from tasks.models.template.item_template import ItemTemplate
from tasks.models.template.item_template_field import ItemTemplateField
######################### src/icemac/ab/calendar/roles.py (icemac/icemac.ab.calendar, BSD-2-Clause) #########################
def editor_role(ignored):
    return 'icemac.ab.calendar.Editor'


def visitor_role(ignored):
    return 'icemac.ab.calendar.Visitor'
######################### retrieval/AIR-retriever/Graph_nodes.py (dair-iitd/ECQA, Apache-2.0) #########################
import numpy as np
import collections
from Overlap_analysis import (calculate_overlap, calculate_all_overlap, calculate_overlap_labels,
                              get_union, get_intersection, get_intersection_withIDF, calculate_kappa,
                              calculate_alignment_overlap, calculate_alignment_union)
# from Compute_F1 import mean_confidence_interval, meta_voting_ensemble, meta_voting_ensemble_BECKY
from itertools import combinations
from Compute_F1 import get_differences_list
import math
######################################## These functions enumerate the 2^n subgraph combinations and select the best one.
def get_all_combination_best_graph(pred_labels_over_runs, performance_runs):  ## Edge based model
    runs = list(pred_labels_over_runs.keys())
    meta_subgraphs = []
    for i in range(len(runs) - 1):
        meta_subgraphs += list(combinations(runs, i + 2))
    print("len of meta subgraphs is", len(meta_subgraphs))
    meta_graph_scores = []
    for meta_sub_graph1 in meta_subgraphs:
        current_subgraph_score = []
        for ik1, rk1 in enumerate(meta_sub_graph1[:-1]):
            for rk2 in meta_sub_graph1[ik1 + 1:]:
                # edge score: summed performance of the pair divided by their label overlap
                current_subgraph_score.append((performance_runs[rk1] + performance_runs[rk2]) / float(calculate_overlap_labels(pred_labels_over_runs[rk1], pred_labels_over_runs[rk2])))
        meta_graph_scores.append(sum(current_subgraph_score) / float(len(current_subgraph_score)))  ## taking the average of the edge scores
    print("the len of meta graph scores is:", len(meta_graph_scores))
    best_sub_graph_index = meta_graph_scores.index(max(meta_graph_scores))
    print("best subgraph is:", meta_subgraphs[best_sub_graph_index], max(meta_graph_scores))
    return meta_subgraphs[best_sub_graph_index]
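######################### illustration: edge-based scoring on a toy input #########################
# Self-contained toy run of the scheme above: enumerate every subgraph of
# size >= 2, score each edge as (perf_i + perf_j) / overlap, and keep the
# subgraph with the highest mean edge score. The runs, label vectors, and
# agreement-count overlap below are fabricated stand-ins for
# pred_labels_over_runs / calculate_overlap_labels.
from itertools import combinations

def _toy_overlap(a, b):
    # number of positions where the two label vectors agree (floored at 1)
    return max(1, sum(1 for x, y in zip(a, b) if x == y))

def _toy_best_subgraph(labels, perf):
    runs = list(labels.keys())
    subgraphs = []
    for i in range(len(runs) - 1):
        subgraphs += list(combinations(runs, i + 2))
    scores = []
    for sg in subgraphs:
        edges = [(perf[r1] + perf[r2]) / float(_toy_overlap(labels[r1], labels[r2]))
                 for k, r1 in enumerate(sg[:-1]) for r2 in sg[k + 1:]]
        scores.append(sum(edges) / float(len(edges)))
    return subgraphs[scores.index(max(scores))]

# two high-performing runs that rarely agree form the best-scoring "subgraph"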
def get_all_combination_Vikas_EdgeAPPROACH_withCoverage_best_graph(pred_labels_over_runs, performance_runs, gold_labels):
    runs = list(pred_labels_over_runs.keys())
    meta_subgraphs = []
    for i in range(len(runs) - 1):
        meta_subgraphs += list(combinations(runs, i + 2))
    print("len of meta subgraphs is", len(meta_subgraphs))
    meta_graph_scores = []
    meta_graph_coverage_scores = []
    for meta_sub_graph1 in meta_subgraphs:
        current_subgraph_score = []
        for ik1, rk1 in enumerate(meta_sub_graph1[:-1]):
            if ik1 == 0:  ## initializing the coverage list
                prediction_coverage = pred_labels_over_runs[rk1]
            for rk2 in meta_sub_graph1[ik1 + 1:]:
                current_subgraph_score.append((performance_runs[rk1] + performance_runs[rk2]) / float(calculate_overlap_labels(pred_labels_over_runs[rk1], pred_labels_over_runs[rk2])))
                prediction_coverage = get_union(prediction_coverage, pred_labels_over_runs[rk2])
        final_coverage = sum(get_intersection(prediction_coverage, gold_labels)) / float(sum(gold_labels))
        meta_graph_coverage_scores.append(final_coverage)
        meta_graph_scores.append((sum(current_subgraph_score) / float(len(current_subgraph_score))) * final_coverage)  ## average of the edge scores, weighted by coverage
    print("the len of meta graph scores is:", len(meta_graph_scores))
    best_sub_graph_index = meta_graph_scores.index(max(meta_graph_scores))
    print("best subgraph is:", meta_subgraphs[best_sub_graph_index], max(meta_graph_scores), meta_graph_coverage_scores[best_sub_graph_index])
    return meta_subgraphs[best_sub_graph_index]
def get_all_combination_STEVE_best_graph(pred_labels_over_runs, performance_runs):  ## Steve's suggestion
    runs = list(pred_labels_over_runs.keys())
    meta_subgraphs = []
    for i in range(len(runs) - 1):
        meta_subgraphs += list(combinations(runs, i + 2))
    print("len of meta subgraphs is", len(meta_subgraphs))
    meta_graph_scores = []
    for meta_sub_graph1 in meta_subgraphs:
        current_subgraph_overlap = []
        current_subgraph_perf = []
        for ik1, rk1 in enumerate(meta_sub_graph1[:-1]):
            current_subgraph_perf.append(performance_runs[rk1])
            for rk2 in meta_sub_graph1[ik1 + 1:]:
                current_subgraph_overlap.append(float(calculate_overlap_labels(pred_labels_over_runs[rk1], pred_labels_over_runs[rk2])))
        avg_score = sum(current_subgraph_perf) / float(len(current_subgraph_perf))
        avg_overlap = sum(current_subgraph_overlap) / float(len(current_subgraph_overlap))
        meta_graph_scores.append(avg_score / float(avg_overlap))  ## average performance divided by average pairwise overlap
    print("the len of meta graph scores is:", len(meta_graph_scores))
    best_sub_graph_index = meta_graph_scores.index(max(meta_graph_scores))
    print("best subgraph is:", meta_subgraphs[best_sub_graph_index], max(meta_graph_scores))
    return meta_subgraphs[best_sub_graph_index]
def get_all_combination_withCoverage_best_graph(KB_terms, performance_runs, Ques_terms, Ans_terms):  ## gold_labels_list is QA terms and pred_labels_over_runs is justification terms
    runs = list(performance_runs.keys())
    gold_labels = Ques_terms + Ans_terms
    meta_subgraphs = []
    # for i in range(len(runs)-1):
    #     meta_subgraphs += list(combinations(runs, i+2))
    meta_subgraphs += list(combinations(runs, 4))
    meta_graph_scores = []
    meta_graph_coverage_scores = []
    for meta_sub_graph1 in meta_subgraphs:
        current_subgraph_overlap = []
        current_subgraph_perf = []
        for ik1, rk1 in enumerate(meta_sub_graph1[:-1]):
            if ik1 == 0:  ## initializing the coverage list
                prediction_coverage = KB_terms[rk1]
            current_subgraph_perf.append(performance_runs[rk1])
            for rk2 in meta_sub_graph1[ik1 + 1:-1]:  ##### This is equivalent to M C 2
                current_subgraph_overlap.append(float(calculate_overlap(KB_terms[rk1], KB_terms[rk2])))
                prediction_coverage = get_union(prediction_coverage, KB_terms[rk2])
        avg_score = sum(current_subgraph_perf) / float(len(current_subgraph_perf))
        avg_overlap = sum(current_subgraph_overlap) / float(len(current_subgraph_overlap))
        final_coverage = len(get_intersection(prediction_coverage, gold_labels)) / float(len(gold_labels))
        meta_graph_coverage_scores.append(final_coverage)
        # meta_graph_scores.append((avg_score / float(avg_overlap + 1)) * final_coverage)
        meta_graph_scores.append(avg_score * final_coverage)  ## average performance weighted by coverage
    try:
        best_sub_graph_index = meta_graph_scores.index(max(meta_graph_scores))
        return meta_subgraphs[best_sub_graph_index]
    except ValueError:
        return "Crashed"
#######################################
def get_all_combination_withCoverage_best_graph_Cand_boost(KB_terms, performance_runs, Ques_terms, Ans_terms, subgraph_size):  ## gold_labels_list is QA terms and pred_labels_over_runs is justification terms
    runs = list(performance_runs.keys())
    gold_labels = Ques_terms + Ans_terms
    meta_subgraphs = []
    for i in range(subgraph_size - 2):
        meta_subgraphs += list(combinations(runs, i + 2))
    # for i in range(subgraph_size):  ## for taking the best subgraph amongst subgraphs of size 3, 4, 5
    #     meta_subgraphs += list(combinations(runs, i+3))
    # meta_subgraphs += list(combinations(runs, subgraph_size))
    meta_graph_scores = []
    meta_graph_coverage_scores = []
    for meta_sub_graph1 in meta_subgraphs:
        current_subgraph_overlap = []
        current_subgraph_perf = []
        for ik1, rk1 in enumerate(meta_sub_graph1[:-1]):
            if ik1 == 0:  ## initializing the coverage list
                prediction_coverage = KB_terms[rk1]
            current_subgraph_perf.append(performance_runs[rk1])
            for rk2 in meta_sub_graph1[ik1 + 1:-1]:  ##### This is equivalent to M C 2
                current_subgraph_overlap.append(float(calculate_overlap(KB_terms[rk1], KB_terms[rk2])))
                prediction_coverage = get_union(prediction_coverage, KB_terms[rk2])
        avg_score = sum(current_subgraph_perf) / float(len(current_subgraph_perf))
        avg_overlap = sum(current_subgraph_overlap) / float(max(1, len(current_subgraph_overlap)))
        final_query_coverage = len(get_intersection(prediction_coverage, Ques_terms)) / max(1, float(len(Ques_terms)))
        final_ans_coverage = len(get_intersection(prediction_coverage, Ans_terms)) / max(1, float(len(Ans_terms)))
        meta_graph_coverage_scores.append(final_query_coverage)
        # meta_graph_scores.append(avg_score * final_ans_coverage * final_query_coverage)
        # meta_graph_scores.append((avg_score / float(1 + avg_overlap)) * (1 + 1 * final_ans_coverage) * (1 + final_query_coverage))
        meta_graph_scores.append(avg_score * (1 + 12 * final_ans_coverage) * (1 + final_query_coverage))  ## performance boosted by answer and question coverage
    try:
        best_sub_graph_index = meta_graph_scores.index(max(meta_graph_scores))
        return meta_subgraphs[best_sub_graph_index]
    except ValueError:
        return "Crashed"
#######################################
#######################################
def get_all_combination_withCoverage_best_graph_Cand_boost_withIDF(KB_terms, performance_runs, Ques_terms, Ans_terms, subgraph_size, IDF_vals):  ## gold_labels_list is QA terms and pred_labels_over_runs is justification terms
    runs = list(performance_runs.keys())
    gold_labels = Ques_terms + Ans_terms
    meta_subgraphs = []
    for i in range(subgraph_size - 1):
        meta_subgraphs += list(combinations(runs, i + 2))
    # for i in range(subgraph_size):  ## for taking the best subgraph amongst subgraphs of size 3, 4, 5
    #     meta_subgraphs += list(combinations(runs, i+3))
    # meta_subgraphs += list(combinations(runs, subgraph_size))
    meta_graph_scores = []
    meta_graph_coverage_scores = []
    meta_graph_ans_coverage_scores = []
    meta_graph_overlap_scores = []
    for meta_sub_graph1 in meta_subgraphs:
        current_subgraph_overlap = []
        current_subgraph_perf = []
        for ik1, rk1 in enumerate(meta_sub_graph1[:-1]):
            if ik1 == 0:  ## initializing the coverage list
                prediction_coverage = KB_terms[rk1]
            current_subgraph_perf.append(performance_runs[rk1])
            for rk2 in meta_sub_graph1[ik1 + 1:-1]:  ##### This is equivalent to M C 2
                current_subgraph_overlap.append(float(calculate_overlap(KB_terms[rk1], KB_terms[rk2])))
                prediction_coverage = get_union(prediction_coverage, KB_terms[rk2])
        avg_score = sum(current_subgraph_perf) / float(len(current_subgraph_perf))
        avg_overlap = sum(current_subgraph_overlap) / float(max(1, len(current_subgraph_overlap)))
        final_query_coverage = get_intersection_withIDF(prediction_coverage, Ques_terms, IDF_vals) / max(1, float(len(Ques_terms)))
        final_ans_coverage = get_intersection_withIDF(prediction_coverage, Ans_terms, IDF_vals) / max(1, float(len(Ans_terms)))
        meta_graph_coverage_scores.append(final_query_coverage)
        meta_graph_ans_coverage_scores.append(final_ans_coverage)
        meta_graph_overlap_scores.append(avg_overlap)
        # meta_graph_scores.append(avg_score * final_ans_coverage * final_query_coverage)
        # meta_graph_scores.append((avg_score / float(1 + avg_overlap)) * (1 + 1 * final_ans_coverage) * (1 + final_query_coverage))
        meta_graph_scores.append((1 + avg_score / float(1 + avg_overlap)) * (1 + 1 * final_ans_coverage) * (1 + final_query_coverage))  ## performance damped by overlap, boosted by answer and question coverage
    try:
        best_sub_graph_index = meta_graph_scores.index(max(meta_graph_scores))
        return meta_subgraphs[best_sub_graph_index], meta_graph_overlap_scores[best_sub_graph_index], meta_graph_coverage_scores[best_sub_graph_index], meta_graph_ans_coverage_scores[best_sub_graph_index]
    except ValueError:
        return "Crashed"
#######################################
#####################################
def get_all_combination_withCoverage_Alignment_IDF(Justification_ans_scores, ans_IDF_mat, Justification_ques_scores, ques_IDF_mat, Justification_ques_ans_scores_together, performance_runs, Ques_terms, Ans_terms, subgraph_size, IDF_vals):  ## gold_labels_list is QA terms and pred_labels_over_runs is justification terms
    runs = list(performance_runs.keys())
    gold_labels = Ques_terms + Ans_terms
    meta_subgraphs = []
    for i in range(subgraph_size):
        meta_subgraphs += list(combinations(runs, i + 2))
    meta_graph_scores = []
    meta_graph_coverage_scores = []
    meta_graph_ans_coverage_scores = []
    meta_graph_overlap_scores = []
    for meta_sub_graph1 in meta_subgraphs:
        current_subgraph_overlap = []
        current_subgraph_perf = []
        for ik1, rk1 in enumerate(meta_sub_graph1[:-1]):
            if ik1 == 0:  ## initializing the coverage lists
                ques_coverage = Justification_ques_scores[rk1]
                ans_coverage = Justification_ans_scores[rk1]
            current_subgraph_perf.append(performance_runs[rk1])
            for rk2 in meta_sub_graph1[ik1 + 1:-1]:  ##### This is equivalent to M C 2
                if len(Justification_ques_ans_scores_together[rk1]) > 0 and len(Justification_ques_ans_scores_together[rk2]) > 0:
                    if len(Justification_ques_ans_scores_together[rk1]) == len(Justification_ques_ans_scores_together[rk2]):
                        current_subgraph_overlap.append(float(calculate_alignment_overlap(Justification_ques_ans_scores_together[rk1], Justification_ques_ans_scores_together[rk2])))
                        ques_coverage = calculate_alignment_union(ques_coverage, Justification_ques_scores[rk2])
                        ans_coverage = calculate_alignment_union(ans_coverage, Justification_ans_scores[rk2])
                    else:
                        print("find out why this is happening", len(Justification_ques_ans_scores_together[rk1]), len(Justification_ques_ans_scores_together[rk2]))
        avg_score = sum(current_subgraph_perf) / float(len(current_subgraph_perf))
        avg_overlap = sum(current_subgraph_overlap) / float(max(1, len(current_subgraph_overlap)))
        # weight each token's alignment coverage by its IDF value when the lengths line up
        if len(ques_coverage) == len(ques_IDF_mat):
            ques_coverage = [a * b for a, b in zip(ques_IDF_mat, ques_coverage)]
        if len(ans_coverage) == len(ans_IDF_mat):
            ans_coverage = [a * b for a, b in zip(ans_IDF_mat, ans_coverage)]
        final_query_coverage = sum(ques_coverage) / float(max(1, len(ques_coverage)))
        final_ans_coverage = sum(ans_coverage) / float(max(1, len(ans_coverage)))
        # final_query_coverage = math.log(max(1, sum(ques_coverage))) / float(max(1, len(ques_coverage)))
        # final_ans_coverage = math.log(max(1, sum(ans_coverage))) / float(max(1, len(ans_coverage)))
        meta_graph_coverage_scores.append(final_query_coverage)
        meta_graph_ans_coverage_scores.append(final_ans_coverage)
        meta_graph_overlap_scores.append(avg_overlap)
        meta_graph_scores.append(((1 + avg_score) / float(1 + 0)) * (1 + 1 * final_ans_coverage) * (1 + final_query_coverage))  ## performance boosted by answer and question coverage
    try:
        best_sub_graph_index = meta_graph_scores.index(max(meta_graph_scores))
        # among the 20 highest-scoring subgraphs, prefer the one whose member runs are closest together
        list_of_top_subgraphs = list(np.argsort(meta_graph_scores))[::-1]
        vicinity_scores = []
        for top_graph_ind in list_of_top_subgraphs[0:20]:
            vicinity_scores.append(sum(get_differences_list(meta_subgraphs[top_graph_ind])))
        return meta_subgraphs[list_of_top_subgraphs[vicinity_scores.index(min(vicinity_scores))]], meta_graph_overlap_scores[best_sub_graph_index], meta_graph_coverage_scores[best_sub_graph_index], meta_graph_ans_coverage_scores[best_sub_graph_index]
    except ValueError:
        return "Crashed"
#######################################
#####################################
def get_all_combination_withCoverage_Alignment_Regression(Justification_ans_scores, ans_IDF_mat, Justification_ques_scores, ques_IDF_mat, Justification_ques_ans_scores_together, performance_runs, Ques_terms, Ans_terms, subgraph_size, IDF_vals, n_top_ranked_sets): ## gold_labels_list is QA terms and pred_labels_over_runs is justification terms
runs = list(performance_runs.keys())
gold_labels = Ques_terms + Ans_terms
# print("the gold_labels list looks like: ", runs)
meta_subgraphs = []
for i in range(subgraph_size):
meta_subgraphs += list(combinations(runs, i+2))
# for i in range(subgraph_size): ## for taking best subgraph amongst subgraphs of size 3,4,5
# meta_subgraphs += list(combinations(runs, i+3))
# meta_subgraphs += list(combinations(runs, subgraph_size))
meta_graph_scores = []
meta_graph_coverage_scores = []
meta_graph_ans_coverage_scores = []
meta_graph_overlap_scores = []
for meta_sub_graph1 in meta_subgraphs:
current_subgraph_overlap = []
current_subgraph_perf = []
for ik1, rk1 in enumerate(meta_sub_graph1[:-1]):
if ik1 == 0: ## initializing the coverage list
ques_coverage = Justification_ques_scores[rk1]
ans_coverage = Justification_ans_scores[rk1]
current_subgraph_perf.append(performance_runs[rk1])
for rk2 in meta_sub_graph1[ik1 + 1:-1]: ##### This is equivalent to M C 2
if len(Justification_ques_ans_scores_together[rk1]) > 0 and len(Justification_ques_ans_scores_together[rk2]) > 0 :
# if len(Justification_ques_ans_scores_together[rk1])>0 and len(Justification_ques_ans_scores_together[rk2]) > 0:
if len(Justification_ques_ans_scores_together[rk1]) == len(Justification_ques_ans_scores_together[rk2]):
current_subgraph_overlap.append(float(calculate_alignment_overlap(Justification_ques_ans_scores_together[rk1], Justification_ques_ans_scores_together[rk2])))
ques_coverage = calculate_alignment_union(ques_coverage, Justification_ques_scores[rk2])
ans_coverage = calculate_alignment_union(ans_coverage, Justification_ans_scores[rk2])
else:
print ("alignment score vectors have mismatched lengths:", len(Justification_ques_ans_scores_together[rk1]), len(Justification_ques_ans_scores_together[rk2]))
else:
pass
avg_score = sum(current_subgraph_perf) / float(len(current_subgraph_perf))
avg_overlap = sum(current_subgraph_overlap) / float(max(1,len(current_subgraph_overlap)))
# final_query_coverage = get_intersection_withIDF(prediction_coverage, Ques_terms, IDF_vals) / max(1,float(len(Ques_terms)))
# final_ans_coverage = get_intersection_withIDF(prediction_coverage, Ans_terms, IDF_vals) / max(1,float(len(Ans_terms)))
# if len(ques_coverage) > 0 and len(ans_coverage) > 0 :
if len(ques_coverage) == len(ques_IDF_mat):
ques_coverage = [a*b for a,b in zip(ques_IDF_mat, ques_coverage)]
# print ("yes, we do come here ")
# else:
# print ("The len of cov vector and idf vector are different, checkout why", len(ques_coverage), len(ques_IDF_mat))
if len(ans_coverage) == len(ans_IDF_mat):
ans_coverage = [a*b for a,b in zip(ans_IDF_mat, ans_coverage)]
# else:
# print ("yep, we had these cases: ")
final_query_coverage = (sum(ques_coverage)) / float(max(1, len(ques_coverage)))
final_ans_coverage = (sum(ans_coverage)) / float(max(1, len(ans_coverage)))
# final_query_coverage = math.log(max(1,sum(ques_coverage)))/float(max(1,len(ques_coverage)))
# final_ans_coverage = math.log(max(1,sum(ans_coverage)))/float(max(1,len(ans_coverage)))
meta_graph_coverage_scores.append(final_query_coverage)
meta_graph_ans_coverage_scores.append(final_ans_coverage)
meta_graph_overlap_scores.append(avg_overlap)
# vicinity_score = sum(get_differences_list(meta_sub_graph1))/float(len(get_differences_list(meta_sub_graph1)))
# vicinity_score = min(get_differences_list(meta_sub_graph1))
meta_graph_scores.append( ((1+avg_score) / float(1+0)) * (1+1*final_ans_coverage) * (1+final_query_coverage) ) ## product of subgraph performance and coverage terms; the overlap penalty (1+avg_overlap) in the denominator is currently disabled (kept as 1+0)
# print ("the len of meta graph scores are : ", len(meta_graph_scores))
try:
best_sub_graph_index = meta_graph_scores.index(max(meta_graph_scores))
list_of_top_subgraphs = list(np.argsort(meta_graph_scores))[::-1]
"""
## adding vicinity_scores_here:
top_ranked_by_ROCC_meta_subgraphs = [meta_subgraphs[i1] for i1 in list_of_top_subgraphs[0:100]]
vicinity_scores = []
for top_graph_1 in top_ranked_by_ROCC_meta_subgraphs:
vicinity_scores.append(sum(get_differences_list(top_graph_1))/float(len(get_differences_list(top_graph_1))))
list_of_top_subgraphs_from_vicinity_scores = list(np.argsort(vicinity_scores))[::-1]
top_10percent_subgraphs = []
# for top_subgraph1 in list_of_top_subgraphs[0:math.floor(0.1*len(list_of_top_subgraphs))]:
for top_subgraph1 in list_of_top_subgraphs_from_vicinity_scores[0:n_top_ranked_sets]:
top_10percent_subgraphs.append(top_ranked_by_ROCC_meta_subgraphs[top_subgraph1])
"""
top_10percent_subgraphs = []
# for top_subgraph1 in list_of_top_subgraphs[0:math.floor(0.1*len(list_of_top_subgraphs))]:
for top_subgraph1 in list_of_top_subgraphs[0:n_top_ranked_sets]:
top_10percent_subgraphs.append(meta_subgraphs[top_subgraph1])
# print ("checking whether this returns any overlap val or not ", meta_graph_overlap_scores)
# print("The first calculated index was: ", best_sub_graph_index, "then", list_of_top_subgraphs[vicinity_scores.index(min(vicinity_scores))])
return meta_subgraphs[best_sub_graph_index], top_10percent_subgraphs, meta_graph_overlap_scores[best_sub_graph_index], meta_graph_coverage_scores[best_sub_graph_index], meta_graph_ans_coverage_scores[best_sub_graph_index]
except ValueError:
return "Crashed"
#####################################
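Both the function above and the one below weight the coverage vector element-wise by the IDF vector (when the lengths match) before averaging it into a single coverage score. A small sketch of that step with made-up numbers:

```python
ques_IDF_mat = [2.0, 0.5, 1.0]   # hypothetical per-term IDF weights
ques_coverage = [1.0, 0.0, 1.0]  # hypothetical per-term coverage flags

# Element-wise IDF weighting, as done above when len(coverage) == len(IDF vector).
weighted = [a * b for a, b in zip(ques_IDF_mat, ques_coverage)]

# Length-normalised coverage score; max(1, ...) guards against empty vectors.
final_query_coverage = sum(weighted) / float(max(1, len(weighted)))
```

Rare, high-IDF terms therefore contribute more to the coverage score than common ones.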
def get_all_combination_withCoverage_SOFT_Alignment_IDF(Justification_ans_scores, ans_IDF_mat, Justification_ques_scores, ques_IDF_mat, Justification_ques_ans_scores_together, performance_runs, Ques_terms, Ans_terms, subgraph_size, IDF_vals): ## Ques_terms/Ans_terms are the gold QA terms; the Justification_* arguments hold per-run justification alignment scores
runs = list(performance_runs.keys())
gold_labels = Ques_terms + Ans_terms
# print("the gold_labels list looks like: ", runs)
meta_subgraphs = []
for i in range(subgraph_size):
meta_subgraphs += list(combinations(runs, i+2))
# for i in range(subgraph_size): ## for taking best subgraph amongst subgraphs of size 3,4,5
# meta_subgraphs += list(combinations(runs, i+3))
# meta_subgraphs += list(combinations(runs, subgraph_size))
meta_graph_scores = []
meta_graph_coverage_scores = []
meta_graph_ans_coverage_scores = []
meta_graph_overlap_scores = []
for meta_sub_graph1 in meta_subgraphs:
current_subgraph_overlap = []
current_subgraph_perf = []
for ik1, rk1 in enumerate(meta_sub_graph1[:-1]):
if ik1 == 0: ## initializing the coverage list
ques_coverage = Justification_ques_scores[rk1]
ans_coverage = Justification_ans_scores[rk1]
current_subgraph_perf.append(performance_runs[rk1])
for rk2 in meta_sub_graph1[ik1 + 1:-1]: ##### This is equivalent to M C 2
if len(Justification_ques_ans_scores_together[rk1]) > 0 and len(Justification_ques_ans_scores_together[rk2]) > 0 :
if len(Justification_ques_ans_scores_together[rk1]) == len(Justification_ques_ans_scores_together[rk2]):
current_subgraph_overlap.append(float(calculate_alignment_overlap(Justification_ques_ans_scores_together[rk1], Justification_ques_ans_scores_together[rk2])))
ques_coverage = calculate_alignment_union(ques_coverage, Justification_ques_scores[rk2])
ans_coverage = calculate_alignment_union(ans_coverage, Justification_ans_scores[rk2])
else:
print ("alignment score vectors have mismatched lengths:", len(Justification_ques_ans_scores_together[rk1]), len(Justification_ques_ans_scores_together[rk2]))
else:
pass
avg_score = sum(current_subgraph_perf) / float(len(current_subgraph_perf))
avg_overlap = sum(current_subgraph_overlap) / float(max(1,len(current_subgraph_overlap)))
# final_query_coverage = get_intersection_withIDF(prediction_coverage, Ques_terms, IDF_vals) / max(1,float(len(Ques_terms)))
# final_ans_coverage = get_intersection_withIDF(prediction_coverage, Ans_terms, IDF_vals) / max(1,float(len(Ans_terms)))
# if len(ques_coverage) > 0 and len(ans_coverage) > 0 :
if len(ques_coverage) == len(ques_IDF_mat):
ques_coverage = [a*b for a,b in zip(ques_IDF_mat, ques_coverage)]
# print ("yes, we do come here ")
# else:
# print ("The len of cov vector and idf vector are different, checkout why", len(ques_coverage), len(ques_IDF_mat))
if len(ans_coverage) == len(ans_IDF_mat):
ans_coverage = [a*b for a,b in zip(ans_IDF_mat, ans_coverage)]
# else:
# print ("yep, we had these cases: ")
final_query_coverage = (sum(ques_coverage)) / float(max(1, len(ques_coverage)*len(current_subgraph_overlap)))
final_ans_coverage = (sum(ans_coverage)) / float(max(1, len(ans_coverage)*len(current_subgraph_overlap)))
# final_query_coverage = math.log(max(1,sum(ques_coverage)))/float(max(1,len(ques_coverage)))
# final_ans_coverage = math.log(max(1,sum(ans_coverage)))/float(max(1,len(ans_coverage)))
meta_graph_coverage_scores.append(final_query_coverage)
meta_graph_ans_coverage_scores.append(final_ans_coverage)
meta_graph_overlap_scores.append(avg_overlap)
# meta_graph_scores.append( avg_score * final_ans_coverage * final_query_coverage) ## taking average of subgraph scores
# if subgraph_size>2:
# print ("the avg score, overlap and coverage looks like: ", avg_score, avg_overlap, final_query_coverage, final_ans_coverage)
# meta_graph_scores.append( (1+avg_score) * (1+1*final_ans_coverage) * (1+final_query_coverage) ) ## taking average of subgraph scores
meta_graph_scores.append( ((1+avg_score)/float(1)) * (1+1*final_ans_coverage) * (1+final_query_coverage) ) ## product of subgraph performance and coverage terms; the overlap factor (1+avg_overlap) in the denominator is currently disabled
# print ("the len of meta graph scores are : ", len(meta_graph_scores))
try:
best_sub_graph_index = meta_graph_scores.index(max(meta_graph_scores))
# print ("checking whether this returns any overlap val or not ", meta_graph_overlap_scores)
return meta_subgraphs[best_sub_graph_index], meta_graph_overlap_scores[best_sub_graph_index], meta_graph_coverage_scores[best_sub_graph_index], meta_graph_ans_coverage_scores[best_sub_graph_index]
except ValueError:
return "Crashed"
#######################################
#######################################
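The function below relies on `get_union` and `get_intersection` helpers defined elsewhere in this file. A set-based sketch of how the coverage fraction is computed from them (names suffixed `_sketch` to avoid clashing with the real helpers, and the term lists are toy examples):

```python
def get_union_sketch(a, b):
    # Union of two term lists, deduplicated.
    return list(set(a) | set(b))

def get_intersection_sketch(a, b):
    # Terms shared by both lists.
    return list(set(a) & set(b))

# Accumulated coverage across two runs' justification terms.
prediction_coverage = get_union_sketch(["engine", "fuel"], ["fuel", "spark"])
Ques_terms = ["engine", "spark", "piston"]
# Fraction of question terms covered by the union, as in the function below.
final_query_coverage = len(get_intersection_sketch(prediction_coverage, Ques_terms)) / float(len(Ques_terms))
```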
def get_all_combination_withCoverage_best_graph_Cand_boost_ALL(KB_terms, performance_runs, Ques_terms, Ans_terms): ## Ques_terms/Ans_terms are the gold QA terms; KB_terms holds the per-run justification terms
runs = list(performance_runs.keys())
gold_labels = Ques_terms + Ans_terms
# print("the gold_labels list looks like: ", runs)
meta_subgraphs = []
for i in range(len(runs)-1):
meta_subgraphs += list(combinations(runs, i+2))
# meta_subgraphs += list(combinations(runs, subgraph_size))
meta_graph_scores = []
meta_graph_coverage_scores = []
for meta_sub_graph1 in meta_subgraphs:
current_subgraph_overlap = []
current_subgraph_perf = []
for ik1, rk1 in enumerate(meta_sub_graph1[:-1]):
if ik1 == 0: ## initializing the coverage list
prediction_coverage = KB_terms[rk1]
current_subgraph_perf.append(performance_runs[rk1])
for rk2 in meta_sub_graph1[ik1 + 1:-1]: ##### This is equivalent to M C 2
current_subgraph_overlap.append(float(calculate_overlap(KB_terms[rk1], KB_terms[rk2])))
prediction_coverage = get_union(prediction_coverage, KB_terms[rk2])
avg_score = sum(current_subgraph_perf) / float(len(current_subgraph_perf))
avg_overlap = sum(current_subgraph_overlap) / float(max(1,len(current_subgraph_overlap)))
final_query_coverage = len(get_intersection(prediction_coverage, Ques_terms)) / float(len(Ques_terms))
final_ans_coverage = len(get_intersection(prediction_coverage, Ans_terms)) / float(len(Ans_terms))
meta_graph_coverage_scores.append(final_query_coverage)
# meta_graph_scores.append( avg_score * final_ans_coverage * final_query_coverage) ## taking average of subgraph scores
# if subgraph_size>2:
# print ("the avg score, overlap and coverage looks like: ", avg_score, avg_overlap, final_query_coverage, final_ans_coverage)
meta_graph_scores.append( (avg_score/float(1+avg_overlap)) * (1+1*final_ans_coverage) * (1+final_query_coverage) ) ## taking average of subgraph scores
# print ("the len of meta graph scores are : ", len(meta_graph_scores))
try:
best_sub_graph_index = meta_graph_scores.index(max(meta_graph_scores))
return meta_subgraphs[best_sub_graph_index]
except ValueError:
return "Crashed"
#######################################
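Each function in this file enumerates candidate subgraphs as k-combinations of the run names; the `range(...)` loop over `combinations(runs, i+2)` yields sizes 2 upward. A tiny illustration of the counts involved (run names here are placeholders):

```python
from itertools import combinations

runs = ["r1", "r2", "r3", "r4"]
subgraph_size = 2
meta_subgraphs = []
for i in range(subgraph_size):
    meta_subgraphs += list(combinations(runs, i + 2))  # sizes 2 and 3
# C(4, 2) + C(4, 3) = 6 + 4 = 10 candidate subgraphs in total
```

Note the candidate count grows combinatorially with the number of runs, which is why the enumeration dominates the cost of these functions.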
def get_all_combination_forN_sizes_withCoverage_best_graph(pred_labels_over_runs, all_prediction_label_runs, performance_runs, gold_labels_list, BiNODE_overlap, mean_score):
runs = list(pred_labels_over_runs.keys())
best_subgraphs_diff_sizes = {} ### (P/O)*C
best_subgraphs_overlaps = {} ## just 1/O factor
best_subgraphs_Perf_Over = {} ## Just (P/O) factor, no coverage
gold_labels = list(range(len(gold_labels_list)))
all_subgraphs = []
feature_x = []
label_y = []
POC_score = 0
POC_subgraph = []
for i in range(len(runs)-1):
meta_subgraphs = list(combinations(runs, i+2))
meta_graph_scores = [] ### same sequence as above
meta_graph_Overlap_scores = []
meta_graph_PERF_Overlap_scores = []
meta_graph_coverage_scores = []
for meta_sub_graph1 in meta_subgraphs:
current_subgraph_overlap = []
current_subgraph_perf = []
for ik1, rk1 in enumerate(meta_sub_graph1[:-1]):
if ik1 == 0: ## initializing the coverage list
prediction_coverage = pred_labels_over_runs[rk1]
current_subgraph_perf.append(performance_runs[rk1])
for rk2 in meta_sub_graph1[ik1+1:]:
current_subgraph_overlap.append (BiNODE_overlap[str(rk1)+str(rk2)] )
prediction_coverage = get_union(prediction_coverage, pred_labels_over_runs[rk2])
avg_score = sum(current_subgraph_perf)/float(len(current_subgraph_perf))
avg_overlap = sum(current_subgraph_overlap)/float(len(current_subgraph_overlap))
final_coverage = len(get_intersection(prediction_coverage, gold_labels))/float(len(gold_labels))
meta_graph_coverage_scores.append(final_coverage)
meta_graph_scores.append( (avg_score/float(avg_overlap)) * final_coverage ) ## taking average of subgraph scores
############### for linear regression statistics and feature generation
# feature_x.append([avg_score, 1/float(avg_overlap), final_coverage, avg_score/float(avg_overlap),(avg_score/float(avg_overlap))*final_coverage, avg_score*final_coverage])
feature_x.append([avg_score, avg_overlap, final_coverage])
best_subgraph_preds = {mn1: all_prediction_label_runs[mn1] for mn1 in meta_sub_graph1}
subgraph_ensemble_performance = meta_voting_ensemble(best_subgraph_preds, gold_labels_list, math.ceil(len(meta_sub_graph1) / 2))
# print("the subgraph ensemble performance looks like: ", subgraph_ensemble_performance)
label_y.append(subgraph_ensemble_performance - mean_score)
all_subgraphs.append(meta_sub_graph1)
###################
meta_graph_Overlap_scores.append(1/float(avg_overlap))
meta_graph_PERF_Overlap_scores.append(avg_score/float(avg_overlap))
# print ("the len of meta graph scores are : ", len(meta_graph_scores))
best_sub_graph_index = meta_graph_scores.index(max(meta_graph_scores))
if max(meta_graph_scores)>POC_score:
POC_score = max(meta_graph_scores)
POC_subgraph = meta_subgraphs[best_sub_graph_index]
print ("best subgraph is: ", meta_subgraphs[best_sub_graph_index], max(meta_graph_scores),meta_graph_coverage_scores[best_sub_graph_index])
best_subgraphs_diff_sizes.update({i+2: meta_subgraphs[best_sub_graph_index]})
best_subgraphs_overlaps.update({i+2: meta_subgraphs[meta_graph_Overlap_scores.index(max(meta_graph_Overlap_scores))]})
best_subgraphs_Perf_Over.update({i+2:meta_subgraphs[meta_graph_PERF_Overlap_scores.index(max(meta_graph_PERF_Overlap_scores))]})
return best_subgraphs_diff_sizes, best_subgraphs_overlaps, best_subgraphs_Perf_Over, feature_x, label_y, all_subgraphs, POC_subgraph
########################################
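The function below calls `meta_voting_ensemble` (defined elsewhere in this file) with `ceil(|nodes| / 2)` as the vote threshold. A hedged sketch of plain per-example majority voting, which is the idea behind that call; this is an illustration, not the actual implementation:

```python
from collections import Counter

def majority_vote_sketch(predictions_by_run):
    # predictions_by_run: {run_name: [predicted label for each example]}
    runs = list(predictions_by_run.values())
    voted = []
    for i in range(len(runs[0])):
        # Most common label across runs for example i wins the vote.
        votes = Counter(run[i] for run in runs)
        voted.append(votes.most_common(1)[0][0])
    return voted

labels = majority_vote_sketch({"a": [1, 0, 1], "b": [1, 1, 1], "c": [0, 0, 1]})
```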
def get_ensemble_perf_based_best_subgraph(meta_nodes, prediction_label_runs, gold_labels, best_performance, final_best_graph):
best_subgraph_preds = {mn1: prediction_label_runs[mn1] for mn1 in meta_nodes}
meta_ensemble_performance = meta_voting_ensemble(best_subgraph_preds, gold_labels, math.ceil(len(meta_nodes) / 2))
# print("the final meta ensemble performance is: ", meta_ensemble_performance)
if meta_ensemble_performance > best_performance:
final_best_graph = meta_nodes
best_performance = meta_ensemble_performance
return best_performance, final_best_graph
########################################
def get_complete_graph(pred_labels_over_runs, performance_runs, subgraph_size=4):
runs = list(pred_labels_over_runs.keys())
meta_subgraphs = list(combinations(runs, subgraph_size))
print ("number of meta subgraphs: ", len(meta_subgraphs))
meta_graph_scores = []
for meta_sub_graph1 in meta_subgraphs:
current_subgraph_score = []
for ik1, rk1 in enumerate(meta_sub_graph1[:-1]):
for rk2 in meta_sub_graph1[ik1+1:]:
current_subgraph_score.append ( performance_runs[rk1]*performance_runs[rk2] / float(calculate_overlap_labels(pred_labels_over_runs[rk1], pred_labels_over_runs[rk2]) ) )
meta_graph_scores.append(sum(current_subgraph_score)/float(len(current_subgraph_score)))
print ("number of meta graph scores: ", len(meta_graph_scores))
best_sub_graph_index = meta_graph_scores.index(max(meta_graph_scores))
print ("best subgraph is: ", meta_subgraphs[best_sub_graph_index], max(meta_graph_scores))
return meta_subgraphs[best_sub_graph_index]
def get_unsupervised_sub_graph(pred_labels_over_runs, subgraph_size=4):
runs = list(pred_labels_over_runs.keys())
meta_subgraphs = list(combinations(runs, subgraph_size))
print ("number of meta subgraphs: ", len(meta_subgraphs))
meta_graph_scores = []
for meta_sub_graph1 in meta_subgraphs:
current_subgraph_score = []
for ik1, rk1 in enumerate(meta_sub_graph1[:-1]):
for rk2 in meta_sub_graph1[ik1+1:]:
current_subgraph_score.append ( 1 / float(calculate_overlap_labels(pred_labels_over_runs[rk1], pred_labels_over_runs[rk2]) ) )
meta_graph_scores.append(sum(current_subgraph_score))
print ("number of meta graph scores: ", len(meta_graph_scores))
best_sub_graph_index = meta_graph_scores.index(max(meta_graph_scores))
print ("best subgraph is: ", meta_subgraphs[best_sub_graph_index], max(meta_graph_scores))
return meta_subgraphs[best_sub_graph_index]
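Both `get_complete_graph` and `get_unsupervised_sub_graph` score a candidate subgraph by averaging a quantity over all node pairs. A self-contained sketch of the pairwise performance/overlap score used in `get_complete_graph` (the dictionaries here are toy stand-ins for `performance_runs` and the computed pairwise overlaps):

```python
def pairwise_subgraph_score(performance, overlap):
    # performance: {run: score}; overlap: {(run_i, run_j): label overlap of the pair}
    scores = []
    runs = sorted(performance)
    for ik1, rk1 in enumerate(runs[:-1]):
        for rk2 in runs[ik1 + 1:]:
            # High individual performance and low mutual overlap score best.
            scores.append(performance[rk1] * performance[rk2] / float(overlap[(rk1, rk2)]))
    return sum(scores) / float(len(scores))

score = pairwise_subgraph_score({"a": 1.0, "b": 0.5, "c": 0.5},
                                {("a", "b"): 1.0, ("a", "c"): 2.0, ("b", "c"): 1.0})
```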
## dummy function - complete this later (made self-contained: takes its inputs as arguments and returns the pair-score dict)
def get_node_pair_score(runs, pred_labels_over_runs):
    meta_graph = {}
    for ik1, rk1 in enumerate(runs[:-1]):
        for rk2 in runs[ik1+1:]:
            meta_graph.update({rk1 + " " + rk2: calculate_overlap_labels(pred_labels_over_runs[rk1], pred_labels_over_runs[rk2])})
    return meta_graph
| 51.789948 | 345 | 0.706362 | 5,354 | 40,189 | 4.904744 | 0.039597 | 0.05655 | 0.055979 | 0.036253 | 0.926504 | 0.907845 | 0.891584 | 0.870373 | 0.8623 | 0.857312 | 0 | 0.014063 | 0.189629 | 40,189 | 775 | 346 | 51.856774 | 0.79225 | 0.24651 | 0 | 0.784689 | 0 | 0 | 0.019296 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035885 | false | 0.007177 | 0.014354 | 0 | 0.100478 | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
ea62811b648ea3b6ef5104ab32f082f06010a937 | 11,077 | py | Python | Python/Classification Evaluation/printPDF.py | DrMoe/Evaluation-of-satellite-imagery-based-crop-classification | ca7324ee6e5c399ea08d2c3ac11497e4ed95f473 | [
"MIT"
] | 9 | 2018-01-07T14:51:19.000Z | 2021-05-06T18:58:13.000Z | Python/Classification Evaluation/printPDF.py | DrMoe/Evaluation-of-satellite-imagery-based-crop-classification | ca7324ee6e5c399ea08d2c3ac11497e4ed95f473 | [
"MIT"
] | null | null | null | Python/Classification Evaluation/printPDF.py | DrMoe/Evaluation-of-satellite-imagery-based-crop-classification | ca7324ee6e5c399ea08d2c3ac11497e4ed95f473 | [
"MIT"
] | 5 | 2017-05-31T15:01:42.000Z | 2019-12-27T07:27:44.000Z | import timeit
import os
import shutil
import numpy as np
from pylatex import Document, Section, Subsection, Description, Tabular
class printPDF:
def __init__(self, metrics_dict):
self.metrics_dict = metrics_dict
def create_pdf(self, test_spec_dict, test_description, path, name):
array_dimension = self.metrics_dict['con_matrix'].shape
row_dimensions = array_dimension[0]
array = self.metrics_dict['TpFpFnTn'][0]
code = self.metrics_dict['class_code'][0]
doc = Document("metrics")
with doc.create(Section('Test description')):
doc.append(test_description)
with doc.create(Description()) as desc:
for key, value in test_spec_dict.iteritems():
desc.add_item(key, value)
section = Section('Metrics overview')
test1 = Subsection('Rate matrix')
# Create TN, TP, FP, FN table
table1 = Tabular('cccccc')
table1.add_hline()
table1.add_row(("Class",'True-Positive','False-Positive','False-Negative','True-Negative', 'Accuracy'))
table1.add_hline()
for x in range(0, row_dimensions):
table1.add_row([self.metrics_dict['class_code'][x], self.metrics_dict['TpFpFnTn'][x][0],
self.metrics_dict['TpFpFnTn'][x][1], self.metrics_dict['TpFpFnTn'][x][2],
self.metrics_dict['TpFpFnTn'][x][3], self.metrics_dict['Acc_Indi'][x]])
table1.add_hline()
test1.append(table1)
test3 = Subsection('Class metrics')
table3 = Tabular('ccccc')
table3.add_hline()
table3.add_row(("Class",'True-Positive Rate (TPR)','Precision','True-Negative Rate (TNR)','F1-Score'))
table3.add_hline()
for x in range(0, row_dimensions):
table3.add_row([self.metrics_dict['class_code'][x], self.metrics_dict['recall_all'][x],
self.metrics_dict['precision_all'][x], self.metrics_dict['TNR'][x],
self.metrics_dict['f1_score_all'][x]])
table3.add_hline()
test3.append(table3)
test2 = Subsection('Other')
table2 = Tabular('cc')
table2.add_hline()
table2.add_row(("Class", "Value"))
table2.add_hline()
table2.add_row(["F1 Micro (Globally)", self.metrics_dict['f1_score_micro']])
table2.add_row(["F1 Macro (Each label)", self.metrics_dict['f1_score_macro']])
table2.add_row(["F1 Weighted (Each label)", self.metrics_dict['f1_score_weighted']])
table2.add_row(["F1 Micro (Globally) Std", self.metrics_dict['f1_score_micro_std']])
table2.add_hline()
table2.add_row(["Recall Micro (Globally)", self.metrics_dict['recall_micro']])
table2.add_row(["Recall Macro (Each label)", self.metrics_dict['recall_macro']])
table2.add_row(["Recall Weighted (Each label)", self.metrics_dict['recall_weighted']])
table2.add_hline()
table2.add_row(["Precision Micro (Globally)", self.metrics_dict['precision_micro']])
table2.add_row(["Precision Macro (Each label)", self.metrics_dict['precision_macro']])
table2.add_row(["Precision Weighted (Each label)", self.metrics_dict['precision_weighted']])
table2.add_hline()
table2.add_row(["Kappa", self.metrics_dict['kappa_all']])
table2.add_row(["Kappa (Linear weighted)", self.metrics_dict['kappa_linear']])
table2.add_row(["Kappa (Quadratic weighted)", self.metrics_dict['kappa_quadratic']])
table2.add_row(["Kappa Std", self.metrics_dict['kappa_all_std']])
table2.add_hline()
table2.add_row(["Accuracy (Correct classified)", self.metrics_dict['accuracy_all']])
table2.add_row(["Accuracy (Normalized)", self.metrics_dict['accuracy_normalized']])
table2.add_row(["Accuracy (Normalized) Std", self.metrics_dict['accuracy_normalized_std']])
table2.add_row(["Confidence Level(95%)", self.metrics_dict['confidence_level']])
table2.add_hline()
table2.add_row(["Jaccard (Sum)", self.metrics_dict['jaccard_all']])
table2.add_row(["Jaccard (Average)", self.metrics_dict['jaccard_normalized']])
table2.add_hline()
table2.add_row(["Zero-one classification loss (Misclassifications)", self.metrics_dict['zero_one_all']])
table2.add_row(["Zero-one classification loss (Fraction of misclassifications)", self.metrics_dict['zero_one_normalize']])
table2.add_hline()
table2.add_row(["Hamming loss", self.metrics_dict['hamming_loss']])
table2.add_hline()
table2.add_row(["Run Time (MSec)", self.metrics_dict['Run Time(MSec)']])
test2.append(table2)
section.append(test1)
section.append(test3)
section.append(test2)
doc.append(section)
try:
doc.generate_pdf(name + '_' + 'Metrics', compiler='pdflatex')
except Exception as err:
    print "pdflatex reported an error:", err
shutil.move(name + '_' + 'Metrics' + '.pdf', path)
for ext in ('.tex', '.log', '.aux'):
    try:
        os.remove(name + '_' + 'Metrics' + ext)
    except OSError:
        pass
return
def create_pdf_indi(self, test_spec_dict, test_description, path, name):
array_dimension = self.metrics_dict['con_matrix'].shape
row_dimensions = array_dimension[0]
array = self.metrics_dict['TpFpFnTn'][0]
code = self.metrics_dict['class_code'][0]
doc = Document("metrics")
with doc.create(Section('Test description')):
doc.append(test_description)
with doc.create(Description()) as desc:
for key, value in test_spec_dict.iteritems():
desc.add_item(key, value)
section = Section('Metrics overview')
test4 = Subsection('Confusion Matrix')
crop_array = np.array(
['Spring Barley(1)', 'Winter Barley(10)', 'Winter Wheat(11)', 'Winter Rape(22)', 'Maize(216)'])
# Create TN, TP, FP, FN table
table4 = Tabular('cccccc')
table4.add_hline()
table4.add_row(('', 'Spring Barley', 'Winter Barley', 'Winter Wheat', 'Winter Rape', 'Maize'))
table4.add_hline()
for x in range(0, row_dimensions):
table4.add_row([crop_array[x], self.metrics_dict['con_matrix'][x][0],
self.metrics_dict['con_matrix'][x][1], self.metrics_dict['con_matrix'][x][2],
self.metrics_dict['con_matrix'][x][3], self.metrics_dict['con_matrix'][x][4]])
table4.add_hline()
test4.append(table4)
test1 = Subsection('Rate matrix')
# Create TN, TP, FP, FN table
table1 = Tabular('cccccc')
table1.add_hline()
table1.add_row(("Class", 'True-Positive', 'False-Positive', 'False-Negative', 'True-Negative', 'Accuracy'))
table1.add_hline()
for x in range(0, row_dimensions):
table1.add_row([self.metrics_dict['class_code'][x], self.metrics_dict['TpFpFnTn'][x][0],
self.metrics_dict['TpFpFnTn'][x][1], self.metrics_dict['TpFpFnTn'][x][2],
self.metrics_dict['TpFpFnTn'][x][3], self.metrics_dict['Acc_Indi'][x]])
table1.add_hline()
test1.append(table1)
test3 = Subsection('Class metrics')
table3 = Tabular('ccccc')
table3.add_hline()
table3.add_row(("Class", 'True-Positive Rate (TPR)', 'Precision', 'True-Negative Rate (TNR)', 'F1-Score'))
table3.add_hline()
for x in range(0, row_dimensions):
table3.add_row([self.metrics_dict['class_code'][x], self.metrics_dict['recall_all'][x],
self.metrics_dict['precision_all'][x], self.metrics_dict['TNR'][x],
self.metrics_dict['f1_score_all'][x]])
table3.add_hline()
test3.append(table3)
test2 = Subsection('Other')
table2 = Tabular('cc')
table2.add_hline()
table2.add_row(("Class", "Value"))
table2.add_hline()
table2.add_row(["F1 Micro (Globally)", self.metrics_dict['f1_score_micro']])
table2.add_row(["F1 Macro (Each label)", self.metrics_dict['f1_score_macro']])
table2.add_row(["F1 Weighted (Each label)", self.metrics_dict['f1_score_weighted']])
table2.add_hline()
table2.add_row(["Recall Micro (Globally)", self.metrics_dict['recall_micro']])
table2.add_row(["Recall Macro (Each label)", self.metrics_dict['recall_macro']])
table2.add_row(["Recall Weighted (Each label)", self.metrics_dict['recall_weighted']])
table2.add_hline()
table2.add_row(["Precision Micro (Globally)", self.metrics_dict['precision_micro']])
table2.add_row(["Precision Macro (Each label)", self.metrics_dict['precision_macro']])
table2.add_row(["Precision Weighted (Each label)", self.metrics_dict['precision_weighted']])
table2.add_hline()
table2.add_row(["Kappa", self.metrics_dict['kappa_all']])
table2.add_row(["Kappa (Linear weighted)", self.metrics_dict['kappa_linear']])
table2.add_row(["Kappa (Quadratic weighted)", self.metrics_dict['kappa_quadratic']])
table2.add_hline()
table2.add_row(["Accuracy (Correct classified)", self.metrics_dict['accuracy_all']])
table2.add_row(["Accuracy (Normalized)", self.metrics_dict['accuracy_normalized']])
table2.add_hline()
table2.add_row(["Jaccard (Sum)", self.metrics_dict['jaccard_all']])
table2.add_row(["Jaccard (Average)", self.metrics_dict['jaccard_normalized']])
table2.add_hline()
table2.add_row(["Zero-one classification loss (Misclassifications)", self.metrics_dict['zero_one_all']])
table2.add_row(
["Zero-one classification loss (Fraction of misclassifications)", self.metrics_dict['zero_one_normalize']])
table2.add_hline()
table2.add_row(["Hamming loss", self.metrics_dict['hamming_loss']])
test2.append(table2)
section.append(test4)
section.append(test1)
section.append(test3)
section.append(test2)
doc.append(section)
try:
doc.generate_pdf(name + '_' + 'Metrics', compiler='pdflatex')
except Exception as err:
    print "pdflatex reported an error:", err
shutil.move(name + '_' + 'Metrics' + '.pdf', path)
for ext in ('.tex', '.log', '.aux'):
    try:
        os.remove(name + '_' + 'Metrics' + ext)
    except OSError:
        pass
return | 43.439216 | 130 | 0.614787 | 1,314 | 11,077 | 4.959665 | 0.119483 | 0.133344 | 0.17953 | 0.058309 | 0.895351 | 0.87571 | 0.842719 | 0.841798 | 0.841798 | 0.836735 | 0 | 0.021058 | 0.236887 | 11,077 | 255 | 131 | 43.439216 | 0.749911 | 0.007493 | 0 | 0.838863 | 0 | 0 | 0.246838 | 0.002093 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.028436 | 0.047393 | null | null | 0.014218 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
ea7352330f9dc1ff03b3f68222b9167f70a01f45 | 5,145 | py | Python | tests/modules/test_firewall.py | The-Cracker-Technology/CANToolz | 1773cf8b7ef906da245461f0768007e43e4bc02d | [
"Apache-2.0"
] | 194 | 2017-08-17T06:51:30.000Z | 2022-03-23T09:01:29.000Z | tests/modules/test_firewall.py | The-Cracker-Technology/CANToolz | 1773cf8b7ef906da245461f0768007e43e4bc02d | [
"Apache-2.0"
] | 32 | 2017-08-17T06:23:19.000Z | 2022-03-03T14:44:39.000Z | tests/modules/test_firewall.py | The-Cracker-Technology/CANToolz | 1773cf8b7ef906da245461f0768007e43e4bc02d | [
"Apache-2.0"
] | 42 | 2017-08-19T10:22:41.000Z | 2022-02-23T04:34:16.000Z | import time
from ..utils import TestCANToolz
class TestFirewall(TestCANToolz):
def test_blocked_body_hex(self):
self.CANEngine.load_config('tests/configurations/conf_analyze.py')
self.CANEngine.edit_module(2, {'pipe': 2, 'hex_black_body': ['0102030605']})
self.CANEngine.start_loop()
index = 3
self.CANEngine.call_module(0, 't 4:6:010203060505') # pass
time.sleep(1)
mod = self.CANEngine.actions[index][1].CANList
self.assertFalse(len(mod) == 0, "We should find message in PIPE")
self.assertTrue(mod[-1].frame_id == 4, "We should be able to find ID 4")
self.CANEngine.actions[index][1].CANList = []
self.CANEngine.call_module(0, 't 4:5:0102030605') # blocked
mod = self.CANEngine.actions[index][1].CANList
self.assertTrue(len(mod) == 0, "We should NOT find message in PIPE")
self.CANEngine.actions[index][1].CANList = []
self.CANEngine.edit_module(2, {'pipe': 2, 'hex_white_body': ['0102030605']})
self.CANEngine.call_module(0, 't 4:5:0102030605') # pass
time.sleep(1)
mod = self.CANEngine.actions[index][1].CANList
self.assertFalse(len(mod) == 0, "We should find message in PIPE")
self.assertTrue(mod[-1].frame_id == 4, "We should be able to find ID 4")
self.CANEngine.actions[index][1].CANList = []
self.CANEngine.call_module(0, 't 4:6:010203060505') # blocked
mod = self.CANEngine.actions[index][1].CANList
self.assertTrue(len(mod) == 0, "We should NOT find message in PIPE")
self.CANEngine.actions[index][1].CANList = []
def test_blocked_body(self):
self.CANEngine.load_config('tests/configurations/conf_analyze.py')
self.CANEngine.edit_module(2, {'pipe': 2, 'black_body': [[1, 2, 3, 6, 5]]})
self.CANEngine.start_loop()
index = 3
self.CANEngine.call_module(0, 't 4:6:010203060505') # pass
time.sleep(1)
mod = self.CANEngine.actions[index][1].CANList
self.assertFalse(len(mod) == 0, "We should find message in PIPE")
self.assertTrue(mod[-1].frame_id == 4, "We should be able to find ID 4")
self.CANEngine.actions[index][1].CANList = []
self.CANEngine.call_module(0, 't 4:5:0102030605') # blocked
mod = self.CANEngine.actions[index][1].CANList
self.assertTrue(len(mod) == 0, "We should NOT find message in PIPE")
self.CANEngine.actions[index][1].CANList = []
self.CANEngine.edit_module(2, {'pipe': 2, 'white_body': [[1, 2, 3, 6, 5]]})
self.CANEngine.call_module(0, 't 4:5:0102030605') # pass
time.sleep(1)
mod = self.CANEngine.actions[index][1].CANList
self.assertFalse(len(mod) == 0, "We should find message in PIPE")
self.assertTrue(mod[-1].frame_id == 4, "We should be able to find ID 4")
self.CANEngine.actions[index][1].CANList = []
self.CANEngine.call_module(0, 't 4:6:010203060505') # blocked
mod = self.CANEngine.actions[index][1].CANList
self.assertTrue(len(mod) == 0, "We should NOT find message in PIPE")
self.CANEngine.actions[index][1].CANList = []
def test_blocked_id(self):
self.CANEngine.load_config('tests/configurations/conf_analyze.py')
self.CANEngine.edit_module(2, {'pipe': 2, 'black_list': [1, 2, 3, 6, 5]})
self.CANEngine.start_loop()
index = 3
self.CANEngine.call_module(0, 't 4:4:11223344') # pass
time.sleep(1)
mod = self.CANEngine.actions[index][1].CANList
self.assertFalse(len(mod) == 0, "We should find message in PIPE")
self.assertTrue(mod[-1].frame_id == 4, "We should be able to find ID 4")
self.CANEngine.actions[index][1].CANList = []
self.CANEngine.call_module(0, 't 1:4:11223344')
time.sleep(1)
mod = self.CANEngine.actions[index][1].CANList
self.assertFalse(len(mod) > 0, "Message number 1 should not pass")
self.CANEngine.actions[index][1].CANList = []
self.CANEngine.call_module(0, 't 7:4:11223344') # pass
time.sleep(1)
mod = self.CANEngine.actions[index][1].CANList
self.assertTrue(mod[-1].frame_id == 7, "We should be able to find ID 7")
self.CANEngine.actions[index][1].CANList = []
self.CANEngine.call_module(0, 't 1:4:11223344')
time.sleep(1)
mod = self.CANEngine.actions[index][1].CANList
self.assertFalse(len(mod) > 0, "Message number 1 should not pass")
self.CANEngine.actions[index][1].CANList = []
self.CANEngine.call_module(0, 't 1:8:1122334411223344')
time.sleep(1)
mod = self.CANEngine.actions[index][1].CANList
self.assertFalse(len(mod) > 0, "Message number 1 should not pass")
self.CANEngine.actions[index][1].CANList = []
self.CANEngine.call_module(0, 't 4:4:11223344') # pass
time.sleep(1)
mod = self.CANEngine.actions[index][1].CANList
self.assertTrue(mod[-1].frame_id == 4, "We should be able to find ID 4")
self.CANEngine.actions[index][1].CANList = []
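# The tests above drive the engine with CANToolz-style 't <id>:<dlc>:<hexdata>'
# transmit commands. A minimal sketch of how such a command decomposes follows;
# the parse_t_command helper is hypothetical, for illustration only, and is not
# part of the framework under test.

```python
def parse_t_command(cmd):
    """Split a 't <id>:<dlc>:<hexdata>' command into its three fields.

    Hypothetical helper illustrating the frame syntax passed to
    call_module in the tests above (id is hexadecimal).
    """
    kind, rest = cmd.split(' ', 1)
    if kind != 't':
        raise ValueError('not a transmit command: %r' % cmd)
    fid, dlc, data = rest.split(':')
    return int(fid, 16), int(dlc), bytes.fromhex(data)


frame_id, dlc, body = parse_t_command('t 4:5:0102030605')
print(frame_id, dlc, list(body))  # 4 5 [1, 2, 3, 6, 5]
```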

# ---- File: rnnmodels.py (repo: georgeyiasemis/Recurrent-Neural-Networks-from-scratch-in-Pytorch, MIT license) ----
import torch
import torch.nn as nn
from torch.autograd import Variable
from rnncells import LSTMCell, GRUCell, RNNCell
class SimpleRNN(nn.Module):
def __init__(self, input_size, hidden_size, num_layers, bias, output_size, activation='tanh'):
super(SimpleRNN, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.num_layers = num_layers
self.bias = bias
self.output_size = output_size
self.rnn_cell_list = nn.ModuleList()
if activation == 'tanh':
self.rnn_cell_list.append(RNNCell(self.input_size,
self.hidden_size,
self.bias,
"tanh"))
for l in range(1, self.num_layers):
self.rnn_cell_list.append(RNNCell(self.hidden_size,
self.hidden_size,
self.bias,
"tanh"))
elif activation == 'relu':
self.rnn_cell_list.append(RNNCell(self.input_size,
self.hidden_size,
self.bias,
"relu"))
for l in range(1, self.num_layers):
self.rnn_cell_list.append(RNNCell(self.hidden_size,
self.hidden_size,
self.bias,
"relu"))
else:
raise ValueError("Invalid activation.")
self.fc = nn.Linear(self.hidden_size, self.output_size)
def forward(self, input, hx=None):
        # Input of shape (batch_size, sequence_length, input_size)
#
# Output of shape (batch_size, output_size)
if hx is None:
            # Allocate the initial hidden state on the same device as the input
            h0 = torch.zeros(self.num_layers, input.size(0), self.hidden_size,
                             device=input.device)
else:
h0 = hx
outs = []
hidden = list()
for layer in range(self.num_layers):
hidden.append(h0[layer, :, :])
for t in range(input.size(1)):
for layer in range(self.num_layers):
if layer == 0:
hidden_l = self.rnn_cell_list[layer](input[:, t, :], hidden[layer])
else:
hidden_l = self.rnn_cell_list[layer](hidden[layer - 1],hidden[layer])
                hidden[layer] = hidden_l
outs.append(hidden_l)
# Take only last time step. Modify for seq to seq
out = outs[-1].squeeze()
out = self.fc(out)
return out
class LSTM(nn.Module):
def __init__(self, input_size, hidden_size, num_layers, bias, output_size):
super(LSTM, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.num_layers = num_layers
self.bias = bias
self.output_size = output_size
self.rnn_cell_list = nn.ModuleList()
self.rnn_cell_list.append(LSTMCell(self.input_size,
self.hidden_size,
self.bias))
for l in range(1, self.num_layers):
self.rnn_cell_list.append(LSTMCell(self.hidden_size,
self.hidden_size,
self.bias))
self.fc = nn.Linear(self.hidden_size, self.output_size)
def forward(self, input, hx=None):
        # Input of shape (batch_size, sequence_length, input_size)
#
# Output of shape (batch_size, output_size)
if hx is None:
            # Allocate the initial hidden state on the same device as the input
            h0 = torch.zeros(self.num_layers, input.size(0), self.hidden_size,
                             device=input.device)
else:
h0 = hx
outs = []
hidden = list()
for layer in range(self.num_layers):
hidden.append((h0[layer, :, :], h0[layer, :, :]))
for t in range(input.size(1)):
for layer in range(self.num_layers):
if layer == 0:
hidden_l = self.rnn_cell_list[layer](
input[:, t, :],
(hidden[layer][0],hidden[layer][1])
)
else:
hidden_l = self.rnn_cell_list[layer](
hidden[layer - 1][0],
(hidden[layer][0], hidden[layer][1])
)
hidden[layer] = hidden_l
outs.append(hidden_l[0])
out = outs[-1].squeeze()
out = self.fc(out)
return out
class GRU(nn.Module):
def __init__(self, input_size, hidden_size, num_layers, bias, output_size):
super(GRU, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.num_layers = num_layers
self.bias = bias
self.output_size = output_size
self.rnn_cell_list = nn.ModuleList()
self.rnn_cell_list.append(GRUCell(self.input_size,
self.hidden_size,
self.bias))
for l in range(1, self.num_layers):
self.rnn_cell_list.append(GRUCell(self.hidden_size,
self.hidden_size,
self.bias))
self.fc = nn.Linear(self.hidden_size, self.output_size)
def forward(self, input, hx=None):
        # Input of shape (batch_size, sequence_length, input_size)
#
# Output of shape (batch_size, output_size)
if hx is None:
            # Allocate the initial hidden state on the same device as the input
            h0 = torch.zeros(self.num_layers, input.size(0), self.hidden_size,
                             device=input.device)
else:
h0 = hx
outs = []
hidden = list()
for layer in range(self.num_layers):
hidden.append(h0[layer, :, :])
for t in range(input.size(1)):
for layer in range(self.num_layers):
if layer == 0:
hidden_l = self.rnn_cell_list[layer](input[:, t, :], hidden[layer])
else:
hidden_l = self.rnn_cell_list[layer](hidden[layer - 1],hidden[layer])
                hidden[layer] = hidden_l
outs.append(hidden_l)
# Take only last time step. Modify for seq to seq
out = outs[-1].squeeze()
out = self.fc(out)
return out
class BidirRecurrentModel(nn.Module):
def __init__(self, mode, input_size, hidden_size, num_layers, bias, output_size):
super(BidirRecurrentModel, self).__init__()
self.mode = mode
self.input_size = input_size
self.hidden_size = hidden_size
self.num_layers = num_layers
self.bias = bias
self.output_size = output_size
self.rnn_cell_list = nn.ModuleList()
if mode == 'LSTM':
self.rnn_cell_list.append(LSTMCell(self.input_size,
self.hidden_size,
self.bias))
for l in range(1, self.num_layers):
self.rnn_cell_list.append(LSTMCell(self.hidden_size,
self.hidden_size,
self.bias))
elif mode == 'GRU':
self.rnn_cell_list.append(GRUCell(self.input_size,
self.hidden_size,
self.bias))
for l in range(1, self.num_layers):
self.rnn_cell_list.append(GRUCell(self.hidden_size,
self.hidden_size,
self.bias))
elif mode == 'RNN_TANH':
self.rnn_cell_list.append(RNNCell(self.input_size,
self.hidden_size,
self.bias,
"tanh"))
for l in range(1, self.num_layers):
self.rnn_cell_list.append(RNNCell(self.hidden_size,
self.hidden_size,
self.bias,
"tanh"))
elif mode == 'RNN_RELU':
self.rnn_cell_list.append(RNNCell(self.input_size,
self.hidden_size,
self.bias,
"relu"))
for l in range(1, self.num_layers):
self.rnn_cell_list.append(RNNCell(self.hidden_size,
self.hidden_size,
self.bias,
"relu"))
else:
raise ValueError("Invalid RNN mode selected.")
self.fc = nn.Linear(self.hidden_size * 2, self.output_size)
def forward(self, input, hx=None):
# Input of shape (batch_size, sequence length, input_size)
#
# Output of shape (batch_size, output_size)
        if hx is None:
            h0 = torch.zeros(self.num_layers, input.size(0), self.hidden_size,
                             device=input.device)
        else:
            h0 = hx
        # Independent zero state for the backward pass
        hT = torch.zeros(self.num_layers, input.size(0), self.hidden_size,
                         device=input.device)
outs = []
outs_rev = []
hidden_forward = list()
for layer in range(self.num_layers):
if self.mode == 'LSTM':
hidden_forward.append((h0[layer, :, :], h0[layer, :, :]))
else:
hidden_forward.append(h0[layer, :, :])
hidden_backward = list()
for layer in range(self.num_layers):
if self.mode == 'LSTM':
hidden_backward.append((hT[layer, :, :], hT[layer, :, :]))
else:
hidden_backward.append(hT[layer, :, :])
for t in range(input.shape[1]):
for layer in range(self.num_layers):
if self.mode == 'LSTM':
# If LSTM
if layer == 0:
# Forward net
h_forward_l = self.rnn_cell_list[layer](
input[:, t, :],
(hidden_forward[layer][0], hidden_forward[layer][1])
)
# Backward net
h_back_l = self.rnn_cell_list[layer](
input[:, -(t + 1), :],
(hidden_backward[layer][0], hidden_backward[layer][1])
)
else:
# Forward net
h_forward_l = self.rnn_cell_list[layer](
hidden_forward[layer - 1][0],
(hidden_forward[layer][0], hidden_forward[layer][1])
)
# Backward net
h_back_l = self.rnn_cell_list[layer](
hidden_backward[layer - 1][0],
(hidden_backward[layer][0], hidden_backward[layer][1])
)
else:
# If RNN{_TANH/_RELU} / GRU
if layer == 0:
# Forward net
h_forward_l = self.rnn_cell_list[layer](input[:, t, :], hidden_forward[layer])
# Backward net
h_back_l = self.rnn_cell_list[layer](input[:, -(t + 1), :], hidden_backward[layer])
else:
# Forward net
h_forward_l = self.rnn_cell_list[layer](hidden_forward[layer - 1], hidden_forward[layer])
# Backward net
h_back_l = self.rnn_cell_list[layer](hidden_backward[layer - 1], hidden_backward[layer])
hidden_forward[layer] = h_forward_l
hidden_backward[layer] = h_back_l
if self.mode == 'LSTM':
outs.append(h_forward_l[0])
outs_rev.append(h_back_l[0])
else:
outs.append(h_forward_l)
outs_rev.append(h_back_l)
# Take only last time step. Modify for seq to seq
out = outs[-1].squeeze()
out_rev = outs_rev[0].squeeze()
out = torch.cat((out, out_rev), 1)
out = self.fc(out)
return out
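# The layer-by-layer, time-step-by-time-step unrolling that every forward pass
# above shares can be sketched in plain NumPy. This is a minimal illustration
# with random weights (W_ih, W_hh are placeholders, not trained parameters):

```python
import numpy as np

# Sketch of the SimpleRNN recurrence: h_t = tanh(x_t @ W_ih.T + h_{t-1} @ W_hh.T)
rng = np.random.default_rng(0)
input_size, hidden_size, seq_len, batch = 3, 4, 5, 2
W_ih = rng.standard_normal((hidden_size, input_size)) * 0.1
W_hh = rng.standard_normal((hidden_size, hidden_size)) * 0.1
x = rng.standard_normal((batch, seq_len, input_size))

h = np.zeros((batch, hidden_size))  # initial hidden state, as h0 above
outs = []
for t in range(seq_len):
    h = np.tanh(x[:, t, :] @ W_ih.T + h @ W_hh.T)
    outs.append(h)

out = outs[-1]  # take only the last time step, as the models above do
print(out.shape)  # (2, 4)
```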

# ---- File: qipr/registry/migrations/0001_initial.py (repo: ctsit/qipr, Apache-2.0 license) ----
# -*- coding: utf-8 -*-
# Generated by Django 1.10.5 on 2017-01-17 20:10
from __future__ import unicode_literals
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import registry.utils
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='AccessLog',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('gatorlink', models.CharField(max_length=50, null=True)),
('http_verb', models.CharField(max_length=10)),
('ip', models.GenericIPAddressField()),
('request_body', models.TextField(null=True)),
('response_code', models.IntegerField(null=True)),
('time_requested', models.DateTimeField(auto_now_add=True)),
('time_responded', models.DateTimeField(auto_now=True)),
('url', models.TextField()),
('previous_log', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='next_log', to='registry.AccessLog')),
],
),
migrations.CreateModel(
name='Address',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True)),
('last_modified', models.DateTimeField(auto_now=True)),
('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
('address1', models.CharField(max_length=50)),
('address2', models.CharField(max_length=50)),
('city', models.CharField(max_length=50)),
('zip_code', models.CharField(blank=True, max_length=10, null=True)),
('state', models.CharField(blank=True, max_length=2, null=True)),
('country', models.CharField(blank=True, max_length=2, null=True)),
('created_by', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
('last_modified_by', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='AuditTrail',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('datetime', models.DateTimeField(auto_now=True)),
('json_before', models.TextField(null=True)),
('json_after', models.TextField(null=True)),
('user', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='audit', to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='BigAim',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True)),
('last_modified', models.DateTimeField(auto_now=True)),
('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
('name', models.CharField(max_length=400)),
('description', models.CharField(max_length=400, null=True)),
('sort_order', models.IntegerField(null=True)),
('created_by', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
('last_modified_by', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Category',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=50)),
('description', models.CharField(max_length=100, null=True)),
('created', models.DateTimeField(auto_now_add=True)),
('last_modified', models.DateTimeField(auto_now=True)),
('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
('created_by', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
('last_modified_by', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='ClinicalArea',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=50)),
('description', models.CharField(max_length=100, null=True)),
('created', models.DateTimeField(auto_now_add=True)),
('last_modified', models.DateTimeField(auto_now=True)),
('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
('created_by', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
('last_modified_by', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='ClinicalDepartment',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=50)),
('description', models.CharField(max_length=100, null=True)),
('created', models.DateTimeField(auto_now_add=True)),
('last_modified', models.DateTimeField(auto_now=True)),
('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
('sort_order', models.IntegerField(null=True)),
('created_by', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
('last_modified_by', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='ClinicalSetting',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=50)),
('description', models.CharField(max_length=100, null=True)),
('created', models.DateTimeField(auto_now_add=True)),
('last_modified', models.DateTimeField(auto_now=True)),
('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
('created_by', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
('last_modified_by', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Descriptor',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True)),
('last_modified', models.DateTimeField(auto_now=True)),
('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
('date_added', models.DateField(null=True)),
('major_revision_date', models.DateField(null=True)),
('ui', models.CharField(max_length=10)),
('cas_registry_number', models.CharField(max_length=40, null=True)),
('descriptor_class', models.CharField(max_length=1, null=True)),
('descriptor_entry_version', models.CharField(max_length=100, null=True)),
('descriptor_sort_version', models.CharField(max_length=300, null=True)),
('major_descriptor_date', models.DateField(null=True)),
('mesh_heading', models.CharField(max_length=150)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Entry',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True)),
('last_modified', models.DateTimeField(auto_now=True)),
('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
('name', models.CharField(max_length=50, null=True)),
('pipe_separated', models.CharField(max_length=300, null=True)),
('created_by', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
('last_modified_by', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Expertise',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=50)),
('description', models.CharField(max_length=100, null=True)),
('created', models.DateTimeField(auto_now_add=True)),
('last_modified', models.DateTimeField(auto_now=True)),
('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
('created_by', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
('last_modified_by', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='FocusArea',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=50)),
('description', models.CharField(max_length=100, null=True)),
('created', models.DateTimeField(auto_now_add=True)),
('last_modified', models.DateTimeField(auto_now=True)),
('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
('sort_order', models.IntegerField(null=True)),
('created_by', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
('last_modified_by', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Keyword',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=50)),
('description', models.CharField(max_length=100, null=True)),
('created', models.DateTimeField(auto_now_add=True)),
('last_modified', models.DateTimeField(auto_now=True)),
('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
('created_by', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
('last_modified_by', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='MeshTreeNumber',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True)),
('last_modified', models.DateTimeField(auto_now=True)),
('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
('value', models.CharField(max_length=100)),
('created_by', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
('last_modified_by', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Organization',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True)),
('last_modified', models.DateTimeField(auto_now=True)),
('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
('org_name', models.CharField(max_length=400)),
('created_by', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
('last_modified_by', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Person',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True)),
('last_modified', models.DateTimeField(auto_now=True)),
('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
('account_expiration_time', models.DateTimeField(null=True)),
('business_phone', models.CharField(max_length=50, null=True)),
('contact_phone', models.CharField(max_length=50, null=True)),
('email_address', models.CharField(max_length=100, null=True)),
('first_name', models.CharField(max_length=30)),
('gatorlink', models.CharField(max_length=50, null=True)),
('last_login_time', models.DateTimeField(null=True)),
('last_name', models.CharField(max_length=30)),
('training', models.CharField(max_length=50, null=True)),
('webpage_url', models.CharField(max_length=50, null=True)),
('title', models.CharField(max_length=50, null=True)),
('department', models.CharField(max_length=50, null=True)),
('qi_required', models.SmallIntegerField(null=True)),
('other_self_classification', models.CharField(max_length=100, null=True)),
('is_admin', models.BooleanField(default=False)),
('clinical_area', models.ManyToManyField(to='registry.ClinicalArea')),
('created_by', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
('expertise', models.ManyToManyField(to='registry.Expertise')),
('last_modified_by', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
('organization', models.ManyToManyField(to='registry.Organization')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='PharmacologicalAction',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True)),
('last_modified', models.DateTimeField(auto_now=True)),
('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
('name', models.CharField(max_length=250)),
('created_by', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
('last_modified_by', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Position',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=50)),
('description', models.CharField(max_length=100, null=True)),
('created', models.DateTimeField(auto_now_add=True)),
('last_modified', models.DateTimeField(auto_now=True)),
('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
('created_by', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
('last_modified_by', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Project',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(auto_now_add=True)),
('last_modified', models.DateTimeField(auto_now=True)),
('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
('approval_date', models.DateTimeField(null=True)),
('archived', models.BooleanField(default=False)),
('description', models.TextField(null=True)),
('measures', models.TextField(null=True)),
('overall_goal', models.TextField(null=True)),
('proposed_end_date', models.DateTimeField(null=True)),
('proposed_start_date', models.DateTimeField(null=True)),
('title', models.CharField(max_length=300)),
('advisor', models.ManyToManyField(related_name='advised_projects', to='registry.Person')),
('big_aim', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='projects', to='registry.BigAim')),
('category', models.ManyToManyField(related_name='projects', to='registry.Category')),
('clinical_area', models.ManyToManyField(related_name='projects', to='registry.ClinicalArea')),
('clinical_setting', models.ManyToManyField(related_name='projects', to='registry.ClinicalSetting')),
('collaborator', models.ManyToManyField(related_name='collaborations', to='registry.Person')),
('created_by', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
('last_modified_by', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
('owner', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='projects', to='registry.Person')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='QI_Interest',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=50)),
('description', models.CharField(max_length=100, null=True)),
('created', models.DateTimeField(auto_now_add=True)),
('last_modified', models.DateTimeField(auto_now=True)),
('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
('created_by', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
                ('last_modified_by', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
            ],
            options={
                'abstract': False,
            },
        ),
        migrations.CreateModel(
            name='Qualifier',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('created', models.DateTimeField(auto_now_add=True)),
                ('last_modified', models.DateTimeField(auto_now=True)),
                ('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
                ('date_added', models.DateField(null=True)),
                ('major_revision_date', models.DateField(null=True)),
                ('ui', models.CharField(max_length=10)),
                ('qualifier_established', models.CharField(max_length=25, null=True)),
                ('abbreviation', models.CharField(max_length=2)),
                ('sub_heading', models.CharField(max_length=50)),
                ('created_by', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
                ('last_modified_by', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
            ],
            options={
                'abstract': False,
            },
        ),
        migrations.CreateModel(
            name='RegistryNumber',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('created', models.DateTimeField(auto_now_add=True)),
                ('last_modified', models.DateTimeField(auto_now=True)),
                ('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
                ('name', models.CharField(max_length=200)),
                ('created_by', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
                ('last_modified_by', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
            ],
            options={
                'abstract': False,
            },
        ),
        migrations.CreateModel(
            name='SCR',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('created', models.DateTimeField(auto_now_add=True)),
                ('last_modified', models.DateTimeField(auto_now=True)),
                ('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
                ('date_added', models.DateField(null=True)),
                ('major_revision_date', models.DateField(null=True)),
                ('ui', models.CharField(max_length=10)),
                ('cas_registry_number', models.CharField(max_length=40, null=True)),
                ('frequency', models.IntegerField(null=True)),
                ('note', models.TextField()),
                ('substance_name', models.CharField(max_length=300, null=True)),
                ('substance_name_term_thesaurus', models.CharField(max_length=40, null=True)),
                ('created_by', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
                ('heading_mapped_to', models.ManyToManyField(related_name='scr', to='registry.Descriptor')),
                ('indexing_information', models.ManyToManyField(related_name='scr_indexing', to='registry.Descriptor')),
                ('last_modified_by', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
                ('pharmacological_action', models.ManyToManyField(to='registry.PharmacologicalAction')),
                ('related_registry_number', models.ManyToManyField(to='registry.RegistryNumber')),
            ],
            options={
                'abstract': False,
            },
        ),
        migrations.CreateModel(
            name='Self_Classification',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('created', models.DateTimeField(auto_now_add=True)),
                ('last_modified', models.DateTimeField(auto_now=True)),
                ('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
                ('name', models.CharField(max_length=400)),
                ('description', models.CharField(max_length=400, null=True)),
                ('sort_order', models.IntegerField(null=True)),
                ('created_by', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
                ('last_modified_by', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
            ],
            options={
                'abstract': False,
            },
        ),
        migrations.CreateModel(
            name='SemanticType',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('created', models.DateTimeField(auto_now_add=True)),
                ('last_modified', models.DateTimeField(auto_now=True)),
                ('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
                ('value', models.CharField(max_length=10)),
                ('description', models.CharField(max_length=50, null=True)),
                ('created_by', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
                ('last_modified_by', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
            ],
            options={
                'abstract': False,
            },
        ),
        migrations.CreateModel(
            name='Source',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('created', models.DateTimeField(auto_now_add=True)),
                ('last_modified', models.DateTimeField(auto_now=True)),
                ('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
                ('name', models.CharField(max_length=200)),
                ('created_by', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
                ('last_modified_by', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
            ],
            options={
                'abstract': False,
            },
        ),
        migrations.CreateModel(
            name='Speciality',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name', models.CharField(max_length=50)),
                ('description', models.CharField(max_length=100, null=True)),
                ('created', models.DateTimeField(auto_now_add=True)),
                ('last_modified', models.DateTimeField(auto_now=True)),
                ('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
                ('created_by', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
                ('last_modified_by', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
            ],
            options={
                'abstract': False,
            },
        ),
        migrations.CreateModel(
            name='Suffix',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name', models.CharField(max_length=50)),
                ('description', models.CharField(max_length=100, null=True)),
                ('created', models.DateTimeField(auto_now_add=True)),
                ('last_modified', models.DateTimeField(auto_now=True)),
                ('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
                ('created_by', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
                ('last_modified_by', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
            ],
            options={
                'abstract': False,
            },
        ),
        migrations.CreateModel(
            name='Synonym',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('created', models.DateTimeField(auto_now_add=True)),
                ('last_modified', models.DateTimeField(auto_now=True)),
                ('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
                ('name', models.CharField(max_length=50, null=True)),
                ('pipe_separated', models.CharField(max_length=400, null=True)),
                ('created_by', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
                ('last_modified_by', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
            ],
            options={
                'abstract': False,
            },
        ),
        migrations.CreateModel(
            name='Training',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('created', models.DateTimeField(auto_now_add=True)),
                ('last_modified', models.DateTimeField(auto_now=True)),
                ('guid', models.CharField(default=registry.utils.get_guid, editable=False, max_length=32)),
                ('name', models.CharField(max_length=200)),
                ('description', models.CharField(max_length=200, null=True)),
                ('created_by', models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
                ('last_modified_by', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL)),
            ],
            options={
                'abstract': False,
            },
        ),
        migrations.CreateModel(
            name='UserAgent',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('ua_string', models.TextField()),
                ('ua_hash', models.CharField(editable=False, max_length=32)),
            ],
        ),
        migrations.AlterUniqueTogether(
            name='useragent',
            unique_together=set([('id', 'ua_hash')]),
        ),
        migrations.AddField(
            model_name='scr',
            name='semantic_type',
            field=models.ManyToManyField(to='registry.SemanticType'),
        ),
        migrations.AddField(
            model_name='scr',
            name='source',
            field=models.ManyToManyField(to='registry.Source'),
        ),
        migrations.AddField(
            model_name='scr',
            name='synonym',
            field=models.ManyToManyField(to='registry.Synonym'),
        ),
        migrations.AddField(
            model_name='person',
            name='position',
            field=models.ManyToManyField(to='registry.Position'),
        ),
        migrations.AddField(
            model_name='person',
            name='qi_interest',
            field=models.ManyToManyField(to='registry.QI_Interest'),
        ),
        migrations.AddField(
            model_name='person',
            name='self_classification',
            field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='person', to='registry.Self_Classification'),
        ),
        migrations.AddField(
            model_name='person',
            name='speciality',
            field=models.ManyToManyField(to='registry.Speciality'),
        ),
        migrations.AddField(
            model_name='person',
            name='suffix',
            field=models.ManyToManyField(to='registry.Suffix'),
        ),
        migrations.AddField(
            model_name='descriptor',
            name='allowable_qualifiers',
            field=models.ManyToManyField(related_name='_descriptor_allowable_qualifiers_+', to='registry.Qualifier'),
        ),
        migrations.AddField(
            model_name='descriptor',
            name='created_by',
            field=models.ForeignKey(editable=False, on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL),
        ),
        migrations.AddField(
            model_name='descriptor',
            name='entry',
            field=models.ManyToManyField(related_name='descriptor', to='registry.Entry'),
        ),
        migrations.AddField(
            model_name='descriptor',
            name='forward_reference',
            field=models.ManyToManyField(related_name='_descriptor_forward_reference_+', to='registry.Descriptor'),
        ),
        migrations.AddField(
            model_name='descriptor',
            name='last_modified_by',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to=settings.AUTH_USER_MODEL),
        ),
        migrations.AddField(
            model_name='descriptor',
            name='mesh_tree_number',
            field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='descriptor', to='registry.MeshTreeNumber'),
        ),
        migrations.AddField(
            model_name='descriptor',
            name='pharmacological_action',
            field=models.ManyToManyField(to='registry.PharmacologicalAction'),
        ),
        migrations.AddField(
            model_name='descriptor',
            name='projects',
            field=models.ManyToManyField(null=True, related_name='mesh_keyword', to='registry.Project'),
        ),
        migrations.AddField(
            model_name='descriptor',
            name='related_registry_number',
            field=models.ManyToManyField(to='registry.RegistryNumber'),
        ),
        migrations.AddField(
            model_name='descriptor',
            name='semantic_type',
            field=models.ManyToManyField(to='registry.SemanticType'),
        ),
        migrations.AddField(
            model_name='address',
            name='organization',
            field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='org_address', to='registry.Organization'),
        ),
        migrations.AddField(
            model_name='address',
            name='person',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='business_address', to='registry.Person'),
        ),
        migrations.AddField(
            model_name='accesslog',
            name='user_agent',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to='registry.UserAgent'),
        ),
    ]
# --- tests/test_respawnTracker.py (fenugrec/dirt-rally-time-recorder; licenses: CC-BY-3.0, Apache-2.0, MIT) ---
import unittest
from timerecorder.respawnTracker import RespawnTracker

fieldCount = 66


class TestRespawnTracker(unittest.TestCase):

    def __init__(self, methodName):
        unittest.TestCase.__init__(self, methodName)

    def setUp(self):
        self.thing = RespawnTracker()

    def tearDown(self):
        pass

    def testNoRespawnForFirstStats(self):
        stats = [0] * fieldCount
        stats[4] = 100.0
        self.thing.track(stats)
        self.assertFalse(self.thing.isRecover() or self.thing.isRestart())

    def testNoRespawnForLowXDeltas(self):
        stats = [0] * fieldCount
        stats[4] = 100.0
        self.thing.track(stats)
        stats[4] = 101.1
        self.thing.track(stats)
        self.assertFalse(self.thing.isRecover() or self.thing.isRestart())
        stats[4] = 100.8
        self.thing.track(stats)
        self.assertFalse(self.thing.isRecover() or self.thing.isRestart())
        stats[4] = 99.9
        self.thing.track(stats)
        self.assertFalse(self.thing.isRecover() or self.thing.isRestart())

    def testNoRespawnForLowYDeltas(self):
        stats = [0] * fieldCount
        stats[5] = 100.0
        self.thing.track(stats)
        stats[5] = 101.1
        self.thing.track(stats)
        self.assertFalse(self.thing.isRecover() or self.thing.isRestart())
        stats[5] = 100.8
        self.thing.track(stats)
        self.assertFalse(self.thing.isRecover() or self.thing.isRestart())
        stats[5] = 99.9
        self.thing.track(stats)
        self.assertFalse(self.thing.isRecover() or self.thing.isRestart())

    def testNoRespawnForCombinedDeltas(self):
        stats = [0] * fieldCount
        stats[4] = 100.0
        stats[5] = 100.0
        self.thing.track(stats)
        stats[4] = 101.1
        stats[5] = 101.0
        self.thing.track(stats)
        self.assertFalse(self.thing.isRecover() or self.thing.isRestart())
        stats[4] = 100.8
        stats[5] = 102.2
        self.thing.track(stats)
        self.assertFalse(self.thing.isRecover() or self.thing.isRestart())

    def testSmallDeltaIsRecover(self):
        stats = [0] * fieldCount
        stats[4] = 100.0
        stats[5] = 100.0
        self.thing.track(stats)
        stats[4] = 95.0
        stats[5] = 100.0
        self.thing.track(stats)
        self.assertTrue(self.thing.isRecover())
        self.assertFalse(self.thing.isRestart())
        stats[4] = 96.8
        stats[5] = 99.9
        self.thing.track(stats)
        self.assertFalse(self.thing.isRecover())
        self.assertFalse(self.thing.isRestart())
        stats[4] = 97.0
        stats[5] = 105.0
        self.thing.track(stats)
        self.assertTrue(self.thing.isRecover())
        self.assertFalse(self.thing.isRestart())

    def testLargeDeltaIsRestartForDistanceValueNearZero(self):
        stats = [0] * fieldCount
        stats[2] = 13
        stats[4] = 100.0
        stats[5] = 100.0
        self.thing.track(stats)
        stats[2] = 5
        stats[4] = 20.0
        stats[5] = 100.0
        self.thing.track(stats)
        self.assertFalse(self.thing.isRecover())
        self.assertTrue(self.thing.isRestart())
        stats[4] = 10.0
        stats[5] = 15.0
        self.thing.track(stats)
        self.assertFalse(self.thing.isRecover())
        self.assertTrue(self.thing.isRestart())

    def testLargeDeltaIsRecoverForHigherDistanceValue(self):
        stats = [0] * fieldCount
        stats[2] = 25
        stats[4] = 100.0
        stats[5] = 100.0
        self.thing.track(stats)
        stats[4] = 20.0
        stats[5] = 100.0
        self.thing.track(stats)
        self.assertTrue(self.thing.isRecover())
        self.assertFalse(self.thing.isRestart())
        stats[4] = 10.0
        stats[5] = 15.0
        self.thing.track(stats)
        self.assertTrue(self.thing.isRecover())
        self.assertFalse(self.thing.isRestart())


if __name__ == '__main__':
    unittest.main()

# --- cryptonic.py (Septillioner/cryptonic; license: MIT) ---
# -*- coding: utf-8 -*-
import random
import shlex
from base64 import b64encode,b64decode
import string
import os
import md5
import threading
import json
import zlib
# CRYPTONIC WRITTEN BY SEPTILLIONER
# THANKS FOR THE SUPPORT, HeykLog && Рнаитом
# BINARY
def charToBin(data):
    # Return the bit string of a single character; pass non-characters through.
    try:
        return bin(ord(data))[2:]
    except TypeError:
        return data

def strToBin(data):
    # Multi-character strings become space-separated bit groups.
    if(len(data) > 1):
        str_data = str()
        for i in data:
            str_data += bin(ord(i))[2:]+" "
        str_data = str_data[:-1]
        return str_data
    else:
        # Single character (the original `elif(len(data) > 1)` branch was unreachable).
        return charToBin(data)

def binToChar(data):
    return chr(int(data,2))

def binToStr(data):
    # Inverse of strToBin: decode space-separated bit groups.
    if(len(data) > 1):
        str_data = str()
        for i in data.split(" "):
            str_data += chr(int(i,2))
        return str_data
    else:
        return binToChar(data)
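# Example: strToBin and binToStr invert each other over
# space-separated bit groups.
#   >>> strToBin("Hi")
#   '1001000 1101001'
#   >>> binToStr('1001000 1101001')
#   'Hi'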
#
# OCTAL
#
def charToOct(data):
    # Python 2 oct() keeps the leading '0', e.g. 'H' -> '0110'.
    return oct(ord(data))

def strToOct(data):
    if(len(data) > 1):
        str_data = str()
        for i in data:
            str_data += oct(ord(i))+" "
        str_data = str_data[:-1]
        return str_data
    else:
        # Single character (the original `elif(len(data) > 1)` branch was unreachable).
        return charToOct(data)

def octToChar(data):
    return chr(int(data,8))

def octToStr(data):
    if(len(data) > 1):
        str_data = str()
        for i in data.split(" "):
            str_data += chr(int(i,8))
        return str_data
    else:
        return octToChar(data)
#
# DECIMAL
#
def charToDec(data):
    # Returns the code point as an int.
    return ord(data)

def strToDec(data):
    if(len(data) > 1):
        str_data = str()
        for i in data:
            str_data += str(ord(i))+" "
        str_data = str_data[:-1]
        return str_data
    else:
        # Single character: wrap in str() so the return type matches the
        # multi-character branch (the original `elif` branch was unreachable).
        return str(charToDec(data))

def decToChar(data):
    return chr(int(data,10))

def decToStr(data):
    if(len(data) > 1):
        str_data = str()
        for i in data.split(" "):
            str_data += chr(int(i,10))
        return str_data
    else:
        return decToChar(data)
#
# HEXDECIMAL
#
def charToHex(data):
    # Was `ord(data)`; return the hex digits so it matches strToHex's format.
    return hex(ord(data))[2:]

def strToHex(data):
    if(len(data) > 1):
        str_data = str()
        for i in data:
            str_data += hex(ord(i))[2:]+" "
        str_data = str_data[:-1]
        return str_data
    else:
        # Single character; the original called charToDec here by mistake.
        return charToHex(data)

def hexToChar(data):
    return chr(int(data,16))

def hexToStr(data):
    if(len(data) > 1):
        str_data = str()
        for i in data.split(" "):
            str_data += chr(int(i,16))
        return str_data
    else:
        # The original called decToChar here by mistake.
        return hexToChar(data)
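# Example: the hex helpers use the lowercase, space-separated
# format produced by hex()[2:].
#   >>> strToHex("Hi")
#   '48 69'
#   >>> hexToStr('48 69')
#   'Hi'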
#
# TORS
#
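# Each tor table below rescrambles the standard base64 alphabet via
# string.maketrans; entries are keyed by the bit string of their index
# (tor0 -> "0", tor2 -> "10", ...). Typical use on base64 text, e.g.
# with enc = b64encode("secret"):
#   scrambled = enc.translate(tors["0"]["tencode"])
#   restored  = scrambled.translate(tors["0"]["tdecode"])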
tors = {
"0":{"codec":"tor0","bit":"0","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","QAycFbRL=H4mMdr5vIZlDEGzKqxhWw+a6JC891iok7eUNp23/0YuBgtVOXTSfjnsP"),"tdecode":string.maketrans("QAycFbRL=H4mMdr5vIZlDEGzKqxhWw+a6JC891iok7eUNp23/0YuBgtVOXTSfjnsP","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"1":{"codec":"tor1","bit":"1","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","FUDVsSHnM90NuEakxG7ZYjoz4IJbXBmlhvCOQTfg5qwP8dWRp=t2e/63icA1+rLyK"),"tdecode":string.maketrans("FUDVsSHnM90NuEakxG7ZYjoz4IJbXBmlhvCOQTfg5qwP8dWRp=t2e/63icA1+rLyK","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"10":{"codec":"tor2","bit":"10","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","DiGegOX9sJ536KCtL8dlVUu/M+rwynjQmR7AqSpzxZ104caFN2bvWHBoYTfkI=EhP"),"tdecode":string.maketrans("DiGegOX9sJ536KCtL8dlVUu/M+rwynjQmR7AqSpzxZ104caFN2bvWHBoYTfkI=EhP","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"11":{"codec":"tor3","bit":"11","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","hsN3c8vRGd6ED27B9JOq0pgUVmuQtLAw/aMPfFSI+YioXKzTjlHenZC15rb4xy=kW"),"tdecode":string.maketrans("hsN3c8vRGd6ED27B9JOq0pgUVmuQtLAw/aMPfFSI+YioXKzTjlHenZC15rb4xy=kW","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"100":{"codec":"tor4","bit":"100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","C9flBx2a=bY+ry8Dk5jHi1Mt/QULsTuPn6RJKz0ecvwmZSXpIoGAO73FqNEhgWVd4"),"tdecode":string.maketrans("C9flBx2a=bY+ry8Dk5jHi1Mt/QULsTuPn6RJKz0ecvwmZSXpIoGAO73FqNEhgWVd4","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"101":{"codec":"tor5","bit":"101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","VSmJAzhO96xy0EjL7CBWNoFPQrGeRapul5qTvYc3HbI=g/Xkf8iZMtK+ds1UD4wn2"),"tdecode":string.maketrans("VSmJAzhO96xy0EjL7CBWNoFPQrGeRapul5qTvYc3HbI=g/Xkf8iZMtK+ds1UD4wn2","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"110":{"codec":"tor6","bit":"110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","ed1zqbXRWlCsf7BVrZkAnQHS=LOIvtcMgF4mGoDUJ8jKx6E053/ihNYpuP2Tw+ya9"),"tdecode":string.maketrans("ed1zqbXRWlCsf7BVrZkAnQHS=LOIvtcMgF4mGoDUJ8jKx6E053/ihNYpuP2Tw+ya9","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"111":{"codec":"tor7","bit":"111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","7lER3iTm=s91NOwjPG6xZfgMLSQ2+ud0yqXBeVp5hUzJIokDKbnH/4YC8artvAcWF"),"tdecode":string.maketrans("7lER3iTm=s91NOwjPG6xZfgMLSQ2+ud0yqXBeVp5hUzJIokDKbnH/4YC8artvAcWF","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"1000":{"codec":"tor8","bit":"1000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","ORTw+vFePkpzJG5Bo4nC7fxl=Wbrc206hqd3mjI8D1KNHi9syZMALXYQVSt/EaguU"),"tdecode":string.maketrans("ORTw+vFePkpzJG5Bo4nC7fxl=Wbrc206hqd3mjI8D1KNHi9syZMALXYQVSt/EaguU","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"1001":{"codec":"tor9","bit":"1001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","pOXrPM=yFsG/JIqC1B0wR5j6v+hlkmbW9idSTZg73t4AVcYn8zoULxNDHeuE2QKfa"),"tdecode":string.maketrans("pOXrPM=yFsG/JIqC1B0wR5j6v+hlkmbW9idSTZg73t4AVcYn8zoULxNDHeuE2QKfa","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"1010":{"codec":"tor10","bit":"1010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","M/bR9o=8VT+fHOcz3gjkGAKEXlNIrZ2ytJ6aixvWs41U7DFCwqu5YdeLmPpBQSnh0"),"tdecode":string.maketrans("M/bR9o=8VT+fHOcz3gjkGAKEXlNIrZ2ytJ6aixvWs41U7DFCwqu5YdeLmPpBQSnh0","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"1011":{"codec":"tor11","bit":"1011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","Vcz+sIWC8=DLq39fyuvl1J5Pd/NjobKARepxBS0m7FO2GagTEiMk4UZnQHY6hrtwX"),"tdecode":string.maketrans("Vcz+sIWC8=DLq39fyuvl1J5Pd/NjobKARepxBS0m7FO2GagTEiMk4UZnQHY6hrtwX","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"1100":{"codec":"tor12","bit":"1100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","uGtP+nEv2b9lNOfAz8B1X47UM0CoJIxWyK=VFwDicLgTdkhsjQe3aq/Rp6H5rYZmS"),"tdecode":string.maketrans("uGtP+nEv2b9lNOfAz8B1X47UM0CoJIxWyK=VFwDicLgTdkhsjQe3aq/Rp6H5rYZmS","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"1101":{"codec":"tor13","bit":"1101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","KTDniQ1f/Mvw5RJ8YAegrpL+WaO64dVzXkbG0Im72q=3jsuEZF9xBUCoctNPySlhH"),"tdecode":string.maketrans("KTDniQ1f/Mvw5RJ8YAegrpL+WaO64dVzXkbG0Im72q=3jsuEZF9xBUCoctNPySlhH","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"1110":{"codec":"tor14","bit":"1110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","XJiVzofHCISpGk+uKM4n86Ra=NDjYF715g3PUmxlwyEcWbLvdQrBet9ATsZ/O0qh2"),"tdecode":string.maketrans("XJiVzofHCISpGk+uKM4n86Ra=NDjYF715g3PUmxlwyEcWbLvdQrBet9ATsZ/O0qh2","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"1111":{"codec":"tor15","bit":"1111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","+CZouAeIEzO/25aLUk8NMBDmlFR34=wtbn7qSHvG0TspQygJhWcY1Kx6rdiPf9VXj"),"tdecode":string.maketrans("+CZouAeIEzO/25aLUk8NMBDmlFR34=wtbn7qSHvG0TspQygJhWcY1Kx6rdiPf9VXj","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"10000":{"codec":"tor16","bit":"10000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","KDZ1m9v5CraeEd8hzcwX07IknxRH=oAp6WtsBj/S3i+Lul4UqTfOV2bFgQYGMyNJP"),"tdecode":string.maketrans("KDZ1m9v5CraeEd8hzcwX07IknxRH=oAp6WtsBj/S3i+Lul4UqTfOV2bFgQYGMyNJP","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"10001":{"codec":"tor17","bit":"10001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","UgrNk+wE75bKX1QG2PmSYMtc0A9TavypB3sdHCILzxq6OZoWV=efRJDi4nh/8ulFj"),"tdecode":string.maketrans("UgrNk+wE75bKX1QG2PmSYMtc0A9TavypB3sdHCILzxq6OZoWV=efRJDi4nh/8ulFj","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"10010":{"codec":"tor18","bit":"10010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","Dbj/Kci+INXaU7B69RsSn=Qu0CrL1dO4APZpFgzMvkVofHxGYq8lh5mJeWyt32EwT"),"tdecode":string.maketrans("Dbj/Kci+INXaU7B69RsSn=Qu0CrL1dO4APZpFgzMvkVofHxGYq8lh5mJeWyt32EwT","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"10011":{"codec":"tor19","bit":"10011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","AZ9mtVJvb=jS6BhqfXyHNKnrPckITua7W0D4glMCwo8QEGd+5eOUFi1x2z3/LRpYs"),"tdecode":string.maketrans("AZ9mtVJvb=jS6BhqfXyHNKnrPckITua7W0D4glMCwo8QEGd+5eOUFi1x2z3/LRpYs","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"10100":{"codec":"tor20","bit":"10100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","/KVZPESW+jB5UIX48oxaYF=bqOzw9sRQMgTDNec3npdiykAHmvuhCJ06Ll2t7rG1f"),"tdecode":string.maketrans("/KVZPESW+jB5UIX48oxaYF=bqOzw9sRQMgTDNec3npdiykAHmvuhCJ06Ll2t7rG1f","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"10101":{"codec":"tor21","bit":"10101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","trRmpbsMWxuSo6TfZ9Ah2Nj/EgPGIFvOH1QBez5i3y8UwqJdKCX4n0caYD=Lkl7+V"),"tdecode":string.maketrans("trRmpbsMWxuSo6TfZ9Ah2Nj/EgPGIFvOH1QBez5i3y8UwqJdKCX4n0caYD=Lkl7+V","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"10110":{"codec":"tor22","bit":"10110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","cpgL0v7+DYFbE8maN1XtqCSreoI9n/dQf5JlAzZPKwsTVy4HhO6GiWxR=3BjMUuk2"),"tdecode":string.maketrans("cpgL0v7+DYFbE8maN1XtqCSreoI9n/dQf5JlAzZPKwsTVy4HhO6GiWxR=3BjMUuk2","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"10111":{"codec":"tor23","bit":"10111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","HE7W2nlPGkY4cyMOQdX1R9vhTqF+ZeJKo6V5/w3IAabUCiB0LmrxufjtDsNz8gp=S"),"tdecode":string.maketrans("HE7W2nlPGkY4cyMOQdX1R9vhTqF+ZeJKo6V5/w3IAabUCiB0LmrxufjtDsNz8gp=S","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"11000":{"codec":"tor24","bit":"11000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","sdIOVzq0QK2wm3MGiF7lgbrRnySTkhvoDUN+=4PxCYXH8t1Af/LjaZpeu96WBEJc5"),"tdecode":string.maketrans("sdIOVzq0QK2wm3MGiF7lgbrRnySTkhvoDUN+=4PxCYXH8t1Af/LjaZpeu96WBEJc5","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"11001":{"codec":"tor25","bit":"11001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","n/0TMxphEIcJvLBisfetkR9WKlQHD=qSZg+N4YXOu6o2dmAaj835UGyFw1CzV7brP"),"tdecode":string.maketrans("n/0TMxphEIcJvLBisfetkR9WKlQHD=qSZg+N4YXOu6o2dmAaj835UGyFw1CzV7brP","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"11010":{"codec":"tor26","bit":"11010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","FGvmpSfHECaNnzOiWu/eY1BbIUJd6q+rg2l=t9Zk8M0Vs75Xx3jywKoRcQPAhDTL4"),"tdecode":string.maketrans("FGvmpSfHECaNnzOiWu/eY1BbIUJd6q+rg2l=t9Zk8M0Vs75Xx3jywKoRcQPAhDTL4","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"11011":{"codec":"tor27","bit":"11011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","GgY8541lLFT/yIMh0jpZNrJk3qE=Ce9DcxUX27nRKfvdsAmiWHb6atzoVSu+PwOBQ"),"tdecode":string.maketrans("GgY8541lLFT/yIMh0jpZNrJk3qE=Ce9DcxUX27nRKfvdsAmiWHb6atzoVSu+PwOBQ","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"11100":{"codec":"tor28","bit":"11100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","5zq43xK/R6fhndtQiXTU1vPercg9LVj7M8By2OEmaGpsHZuSJIwAClkFN+0Y=boWD"),"tdecode":string.maketrans("5zq43xK/R6fhndtQiXTU1vPercg9LVj7M8By2OEmaGpsHZuSJIwAClkFN+0Y=boWD","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"11101":{"codec":"tor29","bit":"11101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","HeVBCrw5yvXk7LfcQgqKJYPDiZpMan/1GIUo8A0t3lbW96SsmRux2zjhF4dN+=OET"),"tdecode":string.maketrans("HeVBCrw5yvXk7LfcQgqKJYPDiZpMan/1GIUo8A0t3lbW96SsmRux2zjhF4dN+=OET","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"11110":{"codec":"tor30","bit":"11110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","uGBamw1K4EH03pRk5lPhSWgJv/Vji7LIzYcfbro6AZNFXe2T=y+qsCDUMxQtdO9n8"),"tdecode":string.maketrans("uGBamw1K4EH03pRk5lPhSWgJv/Vji7LIzYcfbro6AZNFXe2T=y+qsCDUMxQtdO9n8","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"11111":{"codec":"tor31","bit":"11111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","NzGnS8h4VHYTiJyMaLPQ3BDeoxsbp+vdOCFfwlE=u/5t6grq1mIXkj0RcUA972WKZ"),"tdecode":string.maketrans("NzGnS8h4VHYTiJyMaLPQ3BDeoxsbp+vdOCFfwlE=u/5t6grq1mIXkj0RcUA972WKZ","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"100000":{"codec":"tor32","bit":"100000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","Fs=LgzHGmN9qb7Ky2YE4/hZ5fUDdjkCuxV8JBrIpo0XQ6tPMAW3lnRaOi1Secvw+T"),"tdecode":string.maketrans("Fs=LgzHGmN9qb7Ky2YE4/hZ5fUDdjkCuxV8JBrIpo0XQ6tPMAW3lnRaOi1Secvw+T","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"100001":{"codec":"tor33","bit":"100001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","NPG1TMzY0uJlhF+7oiH8Sd5peABKEx6j4tLkX3myw9QDaIW/R=UrsvbVqnOgfZCc2"),"tdecode":string.maketrans("NPG1TMzY0uJlhF+7oiH8Sd5peABKEx6j4tLkX3myw9QDaIW/R=UrsvbVqnOgfZCc2","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"100010":{"codec":"tor34","bit":"100010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","L1lTFsDnjN4RowS5EOfyC9d3qeWmMi0ckH/bGYJz28ZB6uIKPVhvUA=7r+XptQgax"),"tdecode":string.maketrans("L1lTFsDnjN4RowS5EOfyC9d3qeWmMi0ckH/bGYJz28ZB6uIKPVhvUA=7r+XptQgax","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"100011":{"codec":"tor35","bit":"100011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","GpfI+=9W8Tuw75blkXQSz6PqvoDFJZOcgAU0Bi2HaVys1jdRhN/eM4xmYrL3nKECt"),"tdecode":string.maketrans("GpfI+=9W8Tuw75blkXQSz6PqvoDFJZOcgAU0Bi2HaVys1jdRhN/eM4xmYrL3nKECt","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"100100":{"codec":"tor36","bit":"100100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","69=hVAyJ35YWmb704TjdoOsGE2/HXIZMicqg1rSRaQKn8FDeu+vNtlfzkpwxULCPB"),"tdecode":string.maketrans("69=hVAyJ35YWmb704TjdoOsGE2/HXIZMicqg1rSRaQKn8FDeu+vNtlfzkpwxULCPB","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"100101":{"codec":"tor37","bit":"100101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","0eVdoQmhXvZW375CupafAJjcrlUyGngPqHE8KTF/SIstkMDBL1Yi+z24x=NwRbO69"),"tdecode":string.maketrans("0eVdoQmhXvZW375CupafAJjcrlUyGngPqHE8KTF/SIstkMDBL1Yi+z24x=NwRbO69","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"100110":{"codec":"tor38","bit":"100110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","fejB6qiNUmOgAVXWKQ7pR4bJo0a+cYEl8ZHT25dPySxt=Lw193nsMCGvhFIuDkz/r"),"tdecode":string.maketrans("fejB6qiNUmOgAVXWKQ7pR4bJo0a+cYEl8ZHT25dPySxt=Lw193nsMCGvhFIuDkz/r","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"100111":{"codec":"tor39","bit":"100111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","3bXFjSkmOr9T8tJB5Knxh4+1ZgaD2upyU7IAHozvNd6=RlWMfYVGLE0cweCQPs/qi"),"tdecode":string.maketrans("3bXFjSkmOr9T8tJB5Knxh4+1ZgaD2upyU7IAHozvNd6=RlWMfYVGLE0cweCQPs/qi","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"101000":{"codec":"tor40","bit":"101000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","/raItw5P6G0NBUjHzOSD=eQh1dc3KM9nXE8LCxybRJ2Vfk+YZ4uFviAWTlmpsqo7g"),"tdecode":string.maketrans("/raItw5P6G0NBUjHzOSD=eQh1dc3KM9nXE8LCxybRJ2Vfk+YZ4uFviAWTlmpsqo7g","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"101001":{"codec":"tor41","bit":"101001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","C+apx9JPihzTsNmOtV8c1LHYEyRUdb72Zgv0qfl/oD5GXMwIFn4WKu6QjrB3eSA=k"),"tdecode":string.maketrans("C+apx9JPihzTsNmOtV8c1LHYEyRUdb72Zgv0qfl/oD5GXMwIFn4WKu6QjrB3eSA=k","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"101010":{"codec":"tor42","bit":"101010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","NRUBbk0fGOKa/c8zMWni74vsJ9hX3Ho=dIeY6DZ1LPutwAlTCpg+V5rxQmSy2jFEq"),"tdecode":string.maketrans("NRUBbk0fGOKa/c8zMWni74vsJ9hX3Ho=dIeY6DZ1LPutwAlTCpg+V5rxQmSy2jFEq","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"101011":{"codec":"tor43","bit":"101011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","ALjQTfsX9+y6IGnCbpF=aoBdP5U3evktm712cqh0rHK8VZDRSEz/ulwYxW4MiOJNg"),"tdecode":string.maketrans("ALjQTfsX9+y6IGnCbpF=aoBdP5U3evktm712cqh0rHK8VZDRSEz/ulwYxW4MiOJNg","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"101100":{"codec":"tor44","bit":"101100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","CPqJ94upslT7x1Dyjf5gLkRS=nWOI/odHU8Kb6aehZFwic0Q+tGBmMrEAN3VzX2vY"),"tdecode":string.maketrans("CPqJ94upslT7x1Dyjf5gLkRS=nWOI/odHU8Kb6aehZFwic0Q+tGBmMrEAN3VzX2vY","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"101101":{"codec":"tor45","bit":"101101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","n2eQok8BqaOuyhr4pmjizl=t61HRLPDAg/7dWNUwbfxIZYvSC35+MFTJ9cEVsXG0K"),"tdecode":string.maketrans("n2eQok8BqaOuyhr4pmjizl=t61HRLPDAg/7dWNUwbfxIZYvSC35+MFTJ9cEVsXG0K","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"101110":{"codec":"tor46","bit":"101110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","3rvnqX2VbcAikPUOCJ8/7lMSh+gN9wj0xLFya6WTBZuRE5=IY1tDopmHsdzKfGQe4"),"tdecode":string.maketrans("3rvnqX2VbcAikPUOCJ8/7lMSh+gN9wj0xLFya6WTBZuRE5=IY1tDopmHsdzKfGQe4","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"101111":{"codec":"tor47","bit":"101111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","sPl3xmKhGO/rNCnzdjQ9iguBbtw2cyYWUMFDp6ZAEHI087T5ka=LV+J1qeRf4oSvX"),"tdecode":string.maketrans("sPl3xmKhGO/rNCnzdjQ9iguBbtw2cyYWUMFDp6ZAEHI087T5ka=LV+J1qeRf4oSvX","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},
"110000":{"codec":"tor48","bit":"110000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","EdAGYWZKSBo7hVw/xy3iv5jJt+sTbeC9HR21zDOgfkFLp6aUQqXm84lu0rPMNn=Ic"),"tdecode":string.maketrans("EdAGYWZKSBo7hVw/xy3iv5jJt+sTbeC9HR21zDOgfkFLp6aUQqXm84lu0rPMNn=Ic","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=
")},"110001":{"codec":"tor49","bit":"110001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","ujZz8YiUJGKP+QqavxRldAWhVcIOCb1nfN/gpFmDMy=2rwEt6o4eHksB5793LSX0T"),"tdecode":string.maketrans("ujZz8YiUJGKP+QqavxRldAWhVcIOCb1nfN/gpFmDMy=2rwEt6o4eHksB5793LSX0T","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"110010":{"codec":"tor50","bit":"110010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","mLguy9CIxsv2Rtb1TN/pzn8MJArEodwaH034DZX=kifYOeVjqcQF+6lGPUB5SKh7W"),"tdecode":string.maketrans("mLguy9CIxsv2Rtb1TN/pzn8MJArEodwaH034DZX=kifYOeVjqcQF+6lGPUB5SKh7W","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"110011":{"codec":"tor51","bit":"110011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","L3lX8A9+5Iq=jkRrxM7fiVNsKyJndOT2mBCuYDhHtGpFSecgoQwUZazW/vP601b4E"),"tdecode":string.maketrans("L3lX8A9+5Iq=jkRrxM7fiVNsKyJndOT2mBCuYDhHtGpFSecgoQwUZazW/vP601b4E","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"110100":{"codec":"tor52","bit":"110100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","SmLXIv92ZsgO3U0+uo=n5x8QJjAbzPWN7K4/1YtfDFTldwhyBkpHrCM6eEVRGiqca"),"tdecode":string.maketrans("SmLXIv92ZsgO3U0+uo=n5x8QJjAbzPWN7K4/1YtfDFTldwhyBkpHrCM6eEVRGiqca","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"110101":{"codec":"tor53","bit":"110101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","=ayp4u1KRkL0/qADHzVGtBPbUE3o7STvWwmQcxler8fnsiC5hgFZI9N6OYjXJM+2d"),"tdecode":string.maketrans("=ayp4u1KRkL0/qADHzVGtBPbUE3o7STvWwmQcxler8fnsiC5hgFZI9N6OYjXJM+2d","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"110110":{"codec":"tor54","bit":"110110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","m6juSVPQK
Na7Zqrh45Hp+UBCWM1Fiy/IczLoflndvDb3GX9YRe8TswAxt=kg0O2JE"),"tdecode":string.maketrans("m6juSVPQKNa7Zqrh45Hp+UBCWM1Fiy/IczLoflndvDb3GX9YRe8TswAxt=kg0O2JE","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"110111":{"codec":"tor55","bit":"110111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","NQR536unGtIq08rBgjkA9mslUVcTW1do4DwpyF=XiSvfH/LbeYxaCMZEhJz+KOP27"),"tdecode":string.maketrans("NQR536unGtIq08rBgjkA9mslUVcTW1do4DwpyF=XiSvfH/LbeYxaCMZEhJz+KOP27","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"111000":{"codec":"tor56","bit":"111000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","jnwyFqfMX6zt=R1/kAp+QLD4iT2Svs7PEalegGIUVroubBOJWhYZdH3c9Nmx05CK8"),"tdecode":string.maketrans("jnwyFqfMX6zt=R1/kAp+QLD4iT2Svs7PEalegGIUVroubBOJWhYZdH3c9Nmx05CK8","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"111001":{"codec":"tor57","bit":"111001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","3V2qELNhB=dQ+MYvGJSDbskcOf/8l6mx75T9XnPp4Hiroy1FKewgZzAtaRjCUu0IW"),"tdecode":string.maketrans("3V2qELNhB=dQ+MYvGJSDbskcOf/8l6mx75T9XnPp4Hiroy1FKewgZzAtaRjCUu0IW","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"111010":{"codec":"tor58","bit":"111010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","zNZjGm1c2kTwq86uEL/KJCaM7Sbr5AoXU+sBegxnfVid4PhvDRQtpHl9Iy30YO=FW"),"tdecode":string.maketrans("zNZjGm1c2kTwq86uEL/KJCaM7Sbr5AoXU+sBegxnfVid4PhvDRQtpHl9Iy30YO=FW","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"111011":{"codec":"tor59","bit":"111011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","9WjMbiN5Q7r=ocSVpZxz+4JTaf8CDPEu3thIURKXv/nYks0mgqAOH21dwGy6lBeLF"),"tdecode":string.maketrans("9WjMbiN5Q7r=ocSVpZxz+4JTaf8CDPEu3thIURKXv/nYks0mgqAOH21dwGy6lBe
LF","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"111100":{"codec":"tor60","bit":"111100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","C3JHIhaNMo8FdZriUQj=V9LsOG2eRy75DxY0wPfvKWuz61XlnSmBA+tkcE/g4pbqT"),"tdecode":string.maketrans("C3JHIhaNMo8FdZriUQj=V9LsOG2eRy75DxY0wPfvKWuz61XlnSmBA+tkcE/g4pbqT","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"111101":{"codec":"tor61","bit":"111101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","GvjzoBlk3cs/ptLD+O0Tq=19dXRNEh7QVuU6nIfxMFPeHb24SWJAyKrmC8w5gYiZa"),"tdecode":string.maketrans("GvjzoBlk3cs/ptLD+O0Tq=19dXRNEh7QVuU6nIfxMFPeHb24SWJAyKrmC8w5gYiZa","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"111110":{"codec":"tor62","bit":"111110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","W6CNjS=Km+b71zelhsQv/0n48irVUOHfcTIMBpkux29GdD5qYPawLAoEJyZgtRXF3"),"tdecode":string.maketrans("W6CNjS=Km+b71zelhsQv/0n48irVUOHfcTIMBpkux29GdD5qYPawLAoEJyZgtRXF3","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"111111":{"codec":"tor63","bit":"111111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","/9psmzXZKUqcYNEiJBwOy4G2FLr8kSIWuealgH1hPC+3obD05tRjfvMATn7dQV6=x"),"tdecode":string.maketrans("/9psmzXZKUqcYNEiJBwOy4G2FLr8kSIWuealgH1hPC+3obD05tRjfvMATn7dQV6=x","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1000000":{"codec":"tor64","bit":"1000000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","+rHRPYL1=pUwacMkbe/GQt85EhFvKg3xIBDdu07mVjS9TiXzoNf4Zy2nlqOJ6WCsA"),"tdecode":string.maketrans("+rHRPYL1=pUwacMkbe/GQt85EhFvKg3xIBDdu07mVjS9TiXzoNf4Zy2nlqOJ6WCsA","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1000001":{"codec":"tor65","bit":"1000001","tencode":string.maketrans("ABC
DEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","TXxPeJzsYfm0WyCupiIbjZLtRgBVNDv+aGShH/cOU7l6K3d524kE=M1w8F9qroAQn"),"tdecode":string.maketrans("TXxPeJzsYfm0WyCupiIbjZLtRgBVNDv+aGShH/cOU7l6K3d524kE=M1w8F9qroAQn","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1000010":{"codec":"tor66","bit":"1000010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","ZRA6C4Y+Ti3fVswkoy9GFb/=evrmDt5K1gQPIJnSj8zUEWBMN0apdlOucLx2hHX7q"),"tdecode":string.maketrans("ZRA6C4Y+Ti3fVswkoy9GFb/=evrmDt5K1gQPIJnSj8zUEWBMN0apdlOucLx2hHX7q","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1000011":{"codec":"tor67","bit":"1000011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","XRdjO=3U7GvJNiyPLrp9SlxDhAZzFaBI2kW5/b64wQcH1TmCtYfMgVK+oen0E8qus"),"tdecode":string.maketrans("XRdjO=3U7GvJNiyPLrp9SlxDhAZzFaBI2kW5/b64wQcH1TmCtYfMgVK+oen0E8qus","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1000100":{"codec":"tor68","bit":"1000100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","Vgq2KZ8mvxdeOkUNs04t+Ap57C6HP3SiYbjBca=hDuQFMIfTywGzXl1nrL/JWRo9E"),"tdecode":string.maketrans("Vgq2KZ8mvxdeOkUNs04t+Ap57C6HP3SiYbjBca=hDuQFMIfTywGzXl1nrL/JWRo9E","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1000101":{"codec":"tor69","bit":"1000101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","KSG760aJnkFeUZ+tqY95VNxsW28fziX/Eb43wBdPlCyLhmjuc1gMDTpvrOoA=RIHQ"),"tdecode":string.maketrans("KSG760aJnkFeUZ+tqY95VNxsW28fziX/Eb43wBdPlCyLhmjuc1gMDTpvrOoA=RIHQ","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1000110":{"codec":"tor70","bit":"1000110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","e8kqSs6uL42lKHNcxwzXWTyM0IroVRpZtD+/3dOJmCYAfaiUEGb975Bvnh1j=gQFP"),"tdecod
e":string.maketrans("e8kqSs6uL42lKHNcxwzXWTyM0IroVRpZtD+/3dOJmCYAfaiUEGb975Bvnh1j=gQFP","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1000111":{"codec":"tor71","bit":"1000111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","qkUWjFcLdVtBufEgvPN/I672ai0R59ZorCJHD3GTsyxeYlAm8=pnQwhb1+z4OXKSM"),"tdecode":string.maketrans("qkUWjFcLdVtBufEgvPN/I672ai0R59ZorCJHD3GTsyxeYlAm8=pnQwhb1+z4OXKSM","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1001000":{"codec":"tor72","bit":"1001000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","k0NhORBtyx/32vp=W1I54rf76H9TGXC+PqL8mKiAwbnEasSFjuzZcgYVQeUdlMDoJ"),"tdecode":string.maketrans("k0NhORBtyx/32vp=W1I54rf76H9TGXC+PqL8mKiAwbnEasSFjuzZcgYVQeUdlMDoJ","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1001001":{"codec":"tor73","bit":"1001001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","AV2TjOuSqZ0UpEkBgntJ9P+w4=LKQN1HCzx/ioRehbarfXvWF63YdlIsDMG7y5cm8"),"tdecode":string.maketrans("AV2TjOuSqZ0UpEkBgntJ9P+w4=LKQN1HCzx/ioRehbarfXvWF63YdlIsDMG7y5cm8","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1001010":{"codec":"tor74","bit":"1001010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","c7P2oYkzDZEjiwU0SIlnaWF8J4rCugVNtqhdexGHOBXAQ/K1vRb+M9yLm=s56Tfp3"),"tdecode":string.maketrans("c7P2oYkzDZEjiwU0SIlnaWF8J4rCugVNtqhdexGHOBXAQ/K1vRb+M9yLm=s56Tfp3","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1001011":{"codec":"tor75","bit":"1001011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","io/waWZ0NuxVk9cD2qPzvYeT=IAh68p5LQ+mGsFg3rXKdMnEtJyfOb4ljC1BURSH7"),"tdecode":string.maketrans("io/waWZ0NuxVk9cD2qPzvYeT=IAh68p5LQ+mGsFg3rXKdMnEtJyfOb4ljC1BURSH7","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxy
z0123456789+/=")},"1001100":{"codec":"tor76","bit":"1001100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","z8dJIfkloK/9XN4DtZpvH7ETFwAxPVRq1+WCQiMajbrueY6BOn3c5hULGy2s=m0Sg"),"tdecode":string.maketrans("z8dJIfkloK/9XN4DtZpvH7ETFwAxPVRq1+WCQiMajbrueY6BOn3c5hULGy2s=m0Sg","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1001101":{"codec":"tor77","bit":"1001101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","kAjucILXSb16f92MzTBpEZgHWe3oFh0nxD4mOsK/7=YCPRUa+ilQ8rGVNtJv5ydqw"),"tdecode":string.maketrans("kAjucILXSb16f92MzTBpEZgHWe3oFh0nxD4mOsK/7=YCPRUa+ilQ8rGVNtJv5ydqw","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1001110":{"codec":"tor78","bit":"1001110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","/l3OfpoyTG0aEWs61MQnCZ+UruxAPIg5cL7Yz98RNVmdDHhjtbqwvBikFXS4=J2Ke"),"tdecode":string.maketrans("/l3OfpoyTG0aEWs61MQnCZ+UruxAPIg5cL7Yz98RNVmdDHhjtbqwvBikFXS4=J2Ke","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1001111":{"codec":"tor79","bit":"1001111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","3hfeowYayICMrpblPL=DSgkNX+vVq8A19TcuRFdxjQHEJUt2Wz5OGnsK47ZB6m0i/"),"tdecode":string.maketrans("3hfeowYayICMrpblPL=DSgkNX+vVq8A19TcuRFdxjQHEJUt2Wz5OGnsK47ZB6m0i/","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1010000":{"codec":"tor80","bit":"1010000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","=hMiGpg7SXDPekms5rvu16RfzI0BQqE9t8Ha+CjA2lU/OKy3JNnLVoZFxbcdYw4WT"),"tdecode":string.maketrans("=hMiGpg7SXDPekms5rvu16RfzI0BQqE9t8Ha+CjA2lU/OKy3JNnLVoZFxbcdYw4WT","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1010001":{"codec":"tor81","bit":"1010001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxy
z0123456789+/=","r6qZuXvYtg2m7x4N5wa+K=j/ceFhzREWo9dikpOL0V3DQJnGCB1sASfbTyMUHlP8I"),"tdecode":string.maketrans("r6qZuXvYtg2m7x4N5wa+K=j/ceFhzREWo9dikpOL0V3DQJnGCB1sASfbTyMUHlP8I","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1010010":{"codec":"tor82","bit":"1010010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","ZpNSQC/tyUi68Iux7johef43lHVBzLrFmkcaTJX=GAWRnwqdvP0+15gEDMY29sbOK"),"tdecode":string.maketrans("ZpNSQC/tyUi68Iux7johef43lHVBzLrFmkcaTJX=GAWRnwqdvP0+15gEDMY29sbOK","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1010011":{"codec":"tor83","bit":"1010011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","sYZTDCwqM9VIfmUaoc56g/Sp+z7AtJnPQKWkOj2=8RB40rGvNlF3bh1LdyiHuxeXE"),"tdecode":string.maketrans("sYZTDCwqM9VIfmUaoc56g/Sp+z7AtJnPQKWkOj2=8RB40rGvNlF3bh1LdyiHuxeXE","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1010100":{"codec":"tor84","bit":"1010100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","7O6FjBJYbyHnRIi9WU=cQuoaVCXNTqlK4kPze3m/rvELGthAp+8ZMx0D2s5wdSgf1"),"tdecode":string.maketrans("7O6FjBJYbyHnRIi9WU=cQuoaVCXNTqlK4kPze3m/rvELGthAp+8ZMx0D2s5wdSgf1","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1010101":{"codec":"tor85","bit":"1010101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","Vz+ra8wWp1NgMqF=OAXjRvePlh5f0BdZc4mCiJQx2YuUDEStIk3Tn6o7HG9/bLKys"),"tdecode":string.maketrans("Vz+ra8wWp1NgMqF=OAXjRvePlh5f0BdZc4mCiJQx2YuUDEStIk3Tn6o7HG9/bLKys","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1010110":{"codec":"tor86","bit":"1010110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","3+McSewFYs/piVL4kvEIjZKDtG8AgTo6uJ72HUrbyf5qdxON9mW0hlXPz1RQaCnB="),"tdecode":string.maketrans("3+McSewFYs/piVL4kvEIjZKDtG8
AgTo6uJ72HUrbyf5qdxON9mW0hlXPz1RQaCnB=","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1010111":{"codec":"tor87","bit":"1010111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","p17lKbhNEcaJXkzo4wDx9V2qfgn3OLCS0BF=iRZ8P6Tje+I5YUtrAMuvmdsGW/QyH"),"tdecode":string.maketrans("p17lKbhNEcaJXkzo4wDx9V2qfgn3OLCS0BF=iRZ8P6Tje+I5YUtrAMuvmdsGW/QyH","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1011000":{"codec":"tor88","bit":"1011000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","unS8vRVeCOMGA5zQK6FDdg3BThptf=9lI2biU4yajEPrHW07xm1LJ/cZXwkoq+NsY"),"tdecode":string.maketrans("unS8vRVeCOMGA5zQK6FDdg3BThptf=9lI2biU4yajEPrHW07xm1LJ/cZXwkoq+NsY","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1011001":{"codec":"tor89","bit":"1011001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","H97Za2DT5uVtPk3dFXQK6hIJvSicnpEAWjUb40+ox=smGRwBLNq8rzYM/CeOyfgl1"),"tdecode":string.maketrans("H97Za2DT5uVtPk3dFXQK6hIJvSicnpEAWjUb40+ox=smGRwBLNq8rzYM/CeOyfgl1","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1011010":{"codec":"tor90","bit":"1011010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","tO6ldNBT1krJ50pEfGjgemx+vzLZcCXosIh72qAaYuW/3VyQ89biPFD=RHMn4KwUS"),"tdecode":string.maketrans("tO6ldNBT1krJ50pEfGjgemx+vzLZcCXosIh72qAaYuW/3VyQ89biPFD=RHMn4KwUS","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1011011":{"codec":"tor91","bit":"1011011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","OK+t8sevH7MVgrCGPE=5xpfF1cnTN4DRoLhaYI2wAi3BWdm6jJqz0/9SkyQXuZblU"),"tdecode":string.maketrans("OK+t8sevH7MVgrCGPE=5xpfF1cnTN4DRoLhaYI2wAi3BWdm6jJqz0/9SkyQXuZblU","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1011100":{"codec":"tor92","bi
t":"1011100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","XWRpP+xFD3HqUZj=b4MSmiO7QGy8Yn2whBdCzLtT96kNo510uAalvsEfI/KVgJrce"),"tdecode":string.maketrans("XWRpP+xFD3HqUZj=b4MSmiO7QGy8Yn2whBdCzLtT96kNo510uAalvsEfI/KVgJrce","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1011101":{"codec":"tor93","bit":"1011101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","T5=gSiGro3cfU0EpORlIzstMD4789VQdKavCe+m2wFyhPZNuWJLjnAYkx61XbHqB/"),"tdecode":string.maketrans("T5=gSiGro3cfU0EpORlIzstMD4789VQdKavCe+m2wFyhPZNuWJLjnAYkx61XbHqB/","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1011110":{"codec":"tor94","bit":"1011110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","4FMYfpnjCJVBvrH3UudK6o2kSGt8cxNEXsZ7g95a0=LbAqe1WO+/hzmwDlyiQIRPT"),"tdecode":string.maketrans("4FMYfpnjCJVBvrH3UudK6o2kSGt8cxNEXsZ7g95a0=LbAqe1WO+/hzmwDlyiQIRPT","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1011111":{"codec":"tor95","bit":"1011111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","YZhb9leXPtB+MkVma5ScLuIG0/6NQOKUgvJjHoyA81T4sF72fEirDpRdqz=nWwx3C"),"tdecode":string.maketrans("YZhb9leXPtB+MkVma5ScLuIG0/6NQOKUgvJjHoyA81T4sF72fEirDpRdqz=nWwx3C","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1100000":{"codec":"tor96","bit":"1100000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","R3Wg/KkUr=TZpslbFI1x+nEfatu824odLeAiy0vNqOBcwJ6jmGPQDVhM7X5CHYzS9"),"tdecode":string.maketrans("R3Wg/KkUr=TZpslbFI1x+nEfatu824odLeAiy0vNqOBcwJ6jmGPQDVhM7X5CHYzS9","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1100001":{"codec":"tor97","bit":"1100001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","m1/Mc9YtkpSw6CKIE+3Fu8ZlDJOQqy2
rTAHhfdXV7U=oG0v5ajzngbiLBxRNes4WP"),"tdecode":string.maketrans("m1/Mc9YtkpSw6CKIE+3Fu8ZlDJOQqy2rTAHhfdXV7U=oG0v5ajzngbiLBxRNes4WP","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1100010":{"codec":"tor98","bit":"1100010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","YIDLUuT5bzG9eoaiywPKO3pSJNgcFHC20jkWfdxR64lB=mvQnVsME7+/Ar81ZhtXq"),"tdecode":string.maketrans("YIDLUuT5bzG9eoaiywPKO3pSJNgcFHC20jkWfdxR64lB=mvQnVsME7+/Ar81ZhtXq","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1100011":{"codec":"tor99","bit":"1100011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","VB1n9dhEfYIASD7L0jcytHmQRZ5MguwFr2NUlTOi6qk=+eoK4vGbaJ3ps/xC8zWXP"),"tdecode":string.maketrans("VB1n9dhEfYIASD7L0jcytHmQRZ5MguwFr2NUlTOi6qk=+eoK4vGbaJ3ps/xC8zWXP","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1100100":{"codec":"tor100","bit":"1100100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","r3lwbGqT172hpoLuEM0CQZBiI+jJUHYNR5S6DfVtnPA4ydxeFXkags=WOKzc/89vm"),"tdecode":string.maketrans("r3lwbGqT172hpoLuEM0CQZBiI+jJUHYNR5S6DfVtnPA4ydxeFXkags=WOKzc/89vm","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1100101":{"codec":"tor101","bit":"1100101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","CnE9xlHiZMd+=GTouFsKrahXJ1cY0yNI8jqgOb6LARBmVfetD24wk7U/5QvS3zpPW"),"tdecode":string.maketrans("CnE9xlHiZMd+=GTouFsKrahXJ1cY0yNI8jqgOb6LARBmVfetD24wk7U/5QvS3zpPW","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1100110":{"codec":"tor102","bit":"1100110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","gFBvweQc1Iu859Ttqj=OHJiR6KUDSbL2pPznsC0VrAo4aGmh3MY/Xx+yZfEl7WNkd"),"tdecode":string.maketrans("gFBvweQc1Iu859Ttqj=OHJiR6KUDSbL2pPznsC0VrAo4aGmh3MY/Xx+yZfEl7WNkd","ABCD
EFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1100111":{"codec":"tor103","bit":"1100111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","zGTB71Xyo4f5UZjxhRWOKecMaI0id=6SN92t3QbLED8gv+nmY/kqHwVsACuFJlrpP"),"tdecode":string.maketrans("zGTB71Xyo4f5UZjxhRWOKecMaI0id=6SN92t3QbLED8gv+nmY/kqHwVsACuFJlrpP","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1101000":{"codec":"tor104","bit":"1101000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","lgZR8/Svs4bKIyEN3pWFn7=PC9HzTocdBUXr+xYLAJDmkVOqwf2G0Qha1i6utMj5e"),"tdecode":string.maketrans("lgZR8/Svs4bKIyEN3pWFn7=PC9HzTocdBUXr+xYLAJDmkVOqwf2G0Qha1i6utMj5e","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1101001":{"codec":"tor105","bit":"1101001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","vBmIPDElrOWcMT+zy/8=YNZxu4tJUXonHG9dfkbViRLK2S6h3q0AeawCpjsQF1g75"),"tdecode":string.maketrans("vBmIPDElrOWcMT+zy/8=YNZxu4tJUXonHG9dfkbViRLK2S6h3q0AeawCpjsQF1g75","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1101010":{"codec":"tor106","bit":"1101010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","qT9dOEzI3+DcgJABKhlnmNuyZRV61bMf8ax5svGYHjUXSe4k=CtPQ0L7w/rpioF2W"),"tdecode":string.maketrans("qT9dOEzI3+DcgJABKhlnmNuyZRV61bMf8ax5svGYHjUXSe4k=CtPQ0L7w/rpioF2W","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1101011":{"codec":"tor107","bit":"1101011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","CNEJFo49hi0uaxcKg35RjZdMSD/Ip8fPb6QVXAGHtn1B=2WrlYkqT+szvw7UOmyeL"),"tdecode":string.maketrans("CNEJFo49hi0uaxcKg35RjZdMSD/Ip8fPb6QVXAGHtn1B=2WrlYkqT+szvw7UOmyeL","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1101100":{"codec":"tor108","bit":"1101100","tencode":string.maketrans
("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","YR+DiJc=UdlKQTsvLwozhIguF63CWNyMS1He27XVt4qf/Ba50EGmbnZrxpP9A8kOj"),"tdecode":string.maketrans("YR+DiJc=UdlKQTsvLwozhIguF63CWNyMS1He27XVt4qf/Ba50EGmbnZrxpP9A8kOj","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1101101":{"codec":"tor109","bit":"1101101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","6EFtT/=YLDxHwubN0iI1JVMR3ek5q8KcmWjUB7oCGnO+sv4AyXz9fQpSP2ldhraZg"),"tdecode":string.maketrans("6EFtT/=YLDxHwubN0iI1JVMR3ek5q8KcmWjUB7oCGnO+sv4AyXz9fQpSP2ldhraZg","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1101110":{"codec":"tor110","bit":"1101110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","6tNZ7Y3vw2OJ4LWz9on8c/TdmXklMBbHyr0FDVseKuqihIUAEa+RQpSj51fC=gPxG"),"tdecode":string.maketrans("6tNZ7Y3vw2OJ4LWz9on8c/TdmXklMBbHyr0FDVseKuqihIUAEa+RQpSj51fC=gPxG","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1101111":{"codec":"tor111","bit":"1101111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","cutrkWKf1nogHj2qaZUNS/hXJOYVsbE0pxCLR=QMi57Bye+Gdm8zvFIP493lTwAD6"),"tdecode":string.maketrans("cutrkWKf1nogHj2qaZUNS/hXJOYVsbE0pxCLR=QMi57Bye+Gdm8zvFIP493lTwAD6","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1110000":{"codec":"tor112","bit":"1110000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","5lghHsN/+0t3jVuxMUBonJwrimZDOLWYzE2GXec79S8qKQbpIa6yRfvTP41dkC=FA"),"tdecode":string.maketrans("5lghHsN/+0t3jVuxMUBonJwrimZDOLWYzE2GXec79S8qKQbpIa6yRfvTP41dkC=FA","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1110001":{"codec":"tor113","bit":"1110001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","dc4x+6geX7H0RmW9qNfQwY1rGphJk8jnPDsiZtKFyvVoUCz3luEM/ITL5=BaSAOb2
"),"tdecode":string.maketrans("dc4x+6geX7H0RmW9qNfQwY1rGphJk8jnPDsiZtKFyvVoUCz3luEM/ITL5=BaSAOb2","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1110010":{"codec":"tor114","bit":"1110010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","rwxWat40M61O8y2UDYvspjf/zSKdHeGB7loqkumVZc+b9gn5IALX3iNJ=FTPCQERh"),"tdecode":string.maketrans("rwxWat40M61O8y2UDYvspjf/zSKdHeGB7loqkumVZc+b9gn5IALX3iNJ=FTPCQERh","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1110011":{"codec":"tor115","bit":"1110011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","hA7wRskDF5OH83M4f69YPgNjVTKWbyoSmLGpBluaU/Z+I=eXxEc2C1nvtQqrJi0zd"),"tdecode":string.maketrans("hA7wRskDF5OH83M4f69YPgNjVTKWbyoSmLGpBluaU/Z+I=eXxEc2C1nvtQqrJi0zd","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1110100":{"codec":"tor116","bit":"1110100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","WMuY0HTjg+zrbkp1dAFsxvwn64P=UoNeZtV3EfD7yQ295ImcGRaJO/KhliS8XBLCq"),"tdecode":string.maketrans("WMuY0HTjg+zrbkp1dAFsxvwn64P=UoNeZtV3EfD7yQ295ImcGRaJO/KhliS8XBLCq","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1110101":{"codec":"tor117","bit":"1110101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","yucqnBop8erKzJC9O25lf7bjt/UXDY+Em10wHVsZPRdM3WITiGvxQNgSLaFA6k4h="),"tdecode":string.maketrans("yucqnBop8erKzJC9O25lf7bjt/UXDY+Em10wHVsZPRdM3WITiGvxQNgSLaFA6k4h=","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1110110":{"codec":"tor118","bit":"1110110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","zPhWLKlvkAYpX1w/+9NJ4otRDsUd83bCjraMy60eqQVcZg52IiFx7SB=EGTmnfHuO"),"tdecode":string.maketrans("zPhWLKlvkAYpX1w/+9NJ4otRDsUd83bCjraMy60eqQVcZg52IiFx7SB=EGTmnfHuO","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghij
klmnopqrstuvwxyz0123456789+/=")},"1110111":{"codec":"tor119","bit":"1110111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","utw9r2QFioeAqHZSJdx8BlgWfjY/nbzPIhDpNC+T=a64McK71RXOskGyE50LUVvm3"),"tdecode":string.maketrans("utw9r2QFioeAqHZSJdx8BlgWfjY/nbzPIhDpNC+T=a64McK71RXOskGyE50LUVvm3","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1111000":{"codec":"tor120","bit":"1111000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","q87BXoLPy=i3G5FWjm6VvhKM/w1R0QIn4pa9ONbSeD2YHsJUltufgCTErczd+kxAZ"),"tdecode":string.maketrans("q87BXoLPy=i3G5FWjm6VvhKM/w1R0QIn4pa9ONbSeD2YHsJUltufgCTErczd+kxAZ","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1111001":{"codec":"tor121","bit":"1111001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","kl7BI9iDyUprEf8QX4F3uPH/6sMj1mAht0CbTKZdagc+Ln5qROxNJW=wovVSY2zGe"),"tdecode":string.maketrans("kl7BI9iDyUprEf8QX4F3uPH/6sMj1mAht0CbTKZdagc+Ln5qROxNJW=wovVSY2zGe","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1111010":{"codec":"tor122","bit":"1111010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","7BkW6GY3TlfrEL+QPvOJt5eg41hiRyI0NmAFpZwqM9/xXbKUSaucjD2ods8=CHVnz"),"tdecode":string.maketrans("7BkW6GY3TlfrEL+QPvOJt5eg41hiRyI0NmAFpZwqM9/xXbKUSaucjD2ods8=CHVnz","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1111011":{"codec":"tor123","bit":"1111011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","WlAR0bEeQ8JPjxNUB+KZ/coYngk23sHXGd4FzqrMp=VC9ivILD5S1yTmO7hu6wfta"),"tdecode":string.maketrans("WlAR0bEeQ8JPjxNUB+KZ/coYngk23sHXGd4FzqrMp=VC9ivILD5S1yTmO7hu6wfta","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1111100":{"codec":"tor124","bit":"1111100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcd
efghijklmnopqrstuvwxyz0123456789+/=","Gzo+217PBjE/es=DTHWI5LnqymuiM9fgN6ZYtJvbFV8RrCcxkd34plXAhaOSKQwU0"),"tdecode":string.maketrans("Gzo+217PBjE/es=DTHWI5LnqymuiM9fgN6ZYtJvbFV8RrCcxkd34plXAhaOSKQwU0","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1111101":{"codec":"tor125","bit":"1111101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","wXHpe365U+l9LWyF4xd1nKJVrAT8fuNItkgDR7MEaz=mhS/bPqC0GsOvQZo2ciBYj"),"tdecode":string.maketrans("wXHpe365U+l9LWyF4xd1nKJVrAT8fuNItkgDR7MEaz=mhS/bPqC0GsOvQZo2ciBYj","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1111110":{"codec":"tor126","bit":"1111110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","DaVWzKtXCi6gx2mr70O1EHRwy5kfh3/jc94NFupsYJU=oBIveZblATdS+qP8nQMLG"),"tdecode":string.maketrans("DaVWzKtXCi6gx2mr70O1EHRwy5kfh3/jc94NFupsYJU=oBIveZblATdS+qP8nQMLG","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"1111111":{"codec":"tor127","bit":"1111111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","q4yXMm12ROnCpi0x3fTKjFaDLwlY+=W9k/U6cbgQ8vuzPBor7I5EAsdHeZSGhVJtN"),"tdecode":string.maketrans("q4yXMm12ROnCpi0x3fTKjFaDLwlY+=W9k/U6cbgQ8vuzPBor7I5EAsdHeZSGhVJtN","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10000000":{"codec":"tor128","bit":"10000000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","LnhM5/XcZSzR0eTvqsrmwWlu6J8Db9FNkQYAUGfK47=yVICOoPj2H1ixgaB3td+pE"),"tdecode":string.maketrans("LnhM5/XcZSzR0eTvqsrmwWlu6J8Db9FNkQYAUGfK47=yVICOoPj2H1ixgaB3td+pE","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10000001":{"codec":"tor129","bit":"10000001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","LFr6K4gPInj1RzHta3o7dkYXDuUQqNO8/9=TSmxip2sZyWfbcJ+EAeCG0VBl5hvMw"),"tdecode":string.maketran
s("LFr6K4gPInj1RzHta3o7dkYXDuUQqNO8/9=TSmxip2sZyWfbcJ+EAeCG0VBl5hvMw","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10000010":{"codec":"tor130","bit":"10000010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","0tmjDOBErXP6eR7xNVJ+dFLS83sMhqbATCuGZ=oyf95kzn4iH/paKgYW12IQwcvUl"),"tdecode":string.maketrans("0tmjDOBErXP6eR7xNVJ+dFLS83sMhqbATCuGZ=oyf95kzn4iH/paKgYW12IQwcvUl","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10000011":{"codec":"tor131","bit":"10000011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","aAe6fRcVho=qFC214tsjEk895PDn/MZgWvdOLUHxJlbBNSzmXpIYrG3u+yw0i7TQK"),"tdecode":string.maketrans("aAe6fRcVho=qFC214tsjEk895PDn/MZgWvdOLUHxJlbBNSzmXpIYrG3u+yw0i7TQK","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10000100":{"codec":"tor132","bit":"10000100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","WjpdA=R9yxmLX3+EH1vZhcIGbl2rUCJSOKf5qt0/TYVia8znPuF7swDkN4MBo6geQ"),"tdecode":string.maketrans("WjpdA=R9yxmLX3+EH1vZhcIGbl2rUCJSOKf5qt0/TYVia8znPuF7swDkN4MBo6geQ","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10000101":{"codec":"tor133","bit":"10000101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","=/bFhKP3DwBJuLAg9pZR5VUNoryq+mTfxSEeCldI4GH6Xvj82WsQMiOnacztk701Y"),"tdecode":string.maketrans("=/bFhKP3DwBJuLAg9pZR5VUNoryq+mTfxSEeCldI4GH6Xvj82WsQMiOnacztk701Y","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10000110":{"codec":"tor134","bit":"10000110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","rw0sYKe8abkDd=Xxu+NVmE/pTPU296CLG3IcFAytZnOSgj4BMhz5q1WvJlRfHQoi7"),"tdecode":string.maketrans("rw0sYKe8abkDd=Xxu+NVmE/pTPU296CLG3IcFAytZnOSgj4BMhz5q1WvJlRfHQoi7","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz01
23456789+/=")},"10000111":{"codec":"tor135","bit":"10000111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","Q8XPzf4L9nsqbZAytHx/TkoE06jGeOpDBJ3Ra=Y5WVdSUl1FhI2MCwNv+K7icgumr"),"tdecode":string.maketrans("Q8XPzf4L9nsqbZAytHx/TkoE06jGeOpDBJ3Ra=Y5WVdSUl1FhI2MCwNv+K7icgumr","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10001000":{"codec":"tor136","bit":"10001000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","XNK+VbJtqz=P/w1C0GlnygH4FmT5jx3hko8DpI9LQ2ef7BOsWEZSidvYM6AUuRrac"),"tdecode":string.maketrans("XNK+VbJtqz=P/w1C0GlnygH4FmT5jx3hko8DpI9LQ2ef7BOsWEZSidvYM6AUuRrac","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10001001":{"codec":"tor137","bit":"10001001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","slbyGQxtRCqwd6kJgBZMf52K1TPoO+A0eprumNnh7L4cViSDWU8X/EHz3vYa9IF=j"),"tdecode":string.maketrans("slbyGQxtRCqwd6kJgBZMf52K1TPoO+A0eprumNnh7L4cViSDWU8X/EHz3vYa9IF=j","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10001010":{"codec":"tor138","bit":"10001010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","wf/eoq+74DFCVyAJRIdcSkXEs5ZHjMbh0aTn1UWiGrQK92NvgLxpzPmBYul6t=8O3"),"tdecode":string.maketrans("wf/eoq+74DFCVyAJRIdcSkXEs5ZHjMbh0aTn1UWiGrQK92NvgLxpzPmBYul6t=8O3","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10001011":{"codec":"tor139","bit":"10001011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","FzrcmMhv1CSwO70U2ANeuKpaJonHTIy5=Vq/PdigQBY48GXsLZDkj3xf+EtR6b9Wl"),"tdecode":string.maketrans("FzrcmMhv1CSwO70U2ANeuKpaJonHTIy5=Vq/PdigQBY48GXsLZDkj3xf+EtR6b9Wl","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10001100":{"codec":"tor140","bit":"10001100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghij
klmnopqrstuvwxyz0123456789+/=","mtLEd0BUTF62PGlA7u=ZqhegcI3/XSkf+vz5KRY9xCaNWoiMJDHQjOVn4b18ypwrs"),"tdecode":string.maketrans("mtLEd0BUTF62PGlA7u=ZqhegcI3/XSkf+vz5KRY9xCaNWoiMJDHQjOVn4b18ypwrs","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10001101":{"codec":"tor141","bit":"10001101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","I5otliM/THeJzcv2SGVy6O1UPnFks3a7B+mKNfjQb49ZprARYgEdD8wu0CWqh=XLx"),"tdecode":string.maketrans("I5otliM/THeJzcv2SGVy6O1UPnFks3a7B+mKNfjQb49ZprARYgEdD8wu0CWqh=XLx","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10001110":{"codec":"tor142","bit":"10001110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","m4unYVtDIw9U+X52qT7=PKgkENM1jRx8feAyZOobs/aJLG306BdCvzWFcprQlShiH"),"tdecode":string.maketrans("m4unYVtDIw9U+X52qT7=PKgkENM1jRx8feAyZOobs/aJLG306BdCvzWFcprQlShiH","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10001111":{"codec":"tor143","bit":"10001111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","QeKFGxpfrHqT=9Ew1MDtBZWv6UOsojCgl3SP0uXR8k+25zcbJyiNLV47YhAdanmI/"),"tdecode":string.maketrans("QeKFGxpfrHqT=9Ew1MDtBZWv6UOsojCgl3SP0uXR8k+25zcbJyiNLV47YhAdanmI/","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10010000":{"codec":"tor144","bit":"10010000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","nxLM4ITNWHpXl1bBua27Es0oUKSmQ=FiPetCZAJ5RVy8w6zdfDGYrc/qjOvk9+gh3"),"tdecode":string.maketrans("nxLM4ITNWHpXl1bBua27Es0oUKSmQ=FiPetCZAJ5RVy8w6zdfDGYrc/qjOvk9+gh3","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10010001":{"codec":"tor145","bit":"10010001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","Ka=CvBie1EVqH3bWtXGOyApZFx2PoMzlfY5QngR7Jkhm8SI09DUu6sTj4NdL/wcr+"),"tdecode":string.maketran
s("Ka=CvBie1EVqH3bWtXGOyApZFx2PoMzlfY5QngR7Jkhm8SI09DUu6sTj4NdL/wcr+","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10010010":{"codec":"tor146","bit":"10010010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","dmlEYIFgT5OKV6kAtN+p3Cujw=UM97Do4hsLfRaB01x8PXncZz/vqS2ibrQGyWeJH"),"tdecode":string.maketrans("dmlEYIFgT5OKV6kAtN+p3Cujw=UM97Do4hsLfRaB01x8PXncZz/vqS2ibrQGyWeJH","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10010011":{"codec":"tor147","bit":"10010011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","FhGPQc0AZgEi+yTm/Nr9x3J8dCu5=6bnstzplDSRMo4YqBfaXHvKe1LVkwj27WOIU"),"tdecode":string.maketrans("FhGPQc0AZgEi+yTm/Nr9x3J8dCu5=6bnstzplDSRMo4YqBfaXHvKe1LVkwj27WOIU","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10010100":{"codec":"tor148","bit":"10010100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","l32b0NTEoHk5YDmJgpSBv=FV9LtZPxyMwKAz7GRf6r/dac1sCW8IeXjqun+QhiUO4"),"tdecode":string.maketrans("l32b0NTEoHk5YDmJgpSBv=FV9LtZPxyMwKAz7GRf6r/dac1sCW8IeXjqun+QhiUO4","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10010101":{"codec":"tor149","bit":"10010101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","dtcQ+OxLyIFT4PMjepBsnh/zGm2gAw=kEVDruH6qJfK5obClU07ZN9WYR13SaXi8v"),"tdecode":string.maketrans("dtcQ+OxLyIFT4PMjepBsnh/zGm2gAw=kEVDruH6qJfK5obClU07ZN9WYR13SaXi8v","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10010110":{"codec":"tor150","bit":"10010110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","=kNeXcirVg9Yy6AfsZFD2JwbIM73TnRqSjL8vPEx5+mWhQUdluBC/40KHa1tOGzpo"),"tdecode":string.maketrans("=kNeXcirVg9Yy6AfsZFD2JwbIM73TnRqSjL8vPEx5+mWhQUdluBC/40KHa1tOGzpo","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz01
23456789+/=")},"10010111":{"codec":"tor151","bit":"10010111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","u4l1GroK+2W6iRO3JwZYL=xUnsakByj7/FP9TSXvqHMf0EAptmbNIgzCVdeh5QD8c"),"tdecode":string.maketrans("u4l1GroK+2W6iRO3JwZYL=xUnsakByj7/FP9TSXvqHMf0EAptmbNIgzCVdeh5QD8c","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10011000":{"codec":"tor152","bit":"10011000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","LhB1itMNIQoZFugvrD9fUmKWwP04AaYV+T5J8cSksGqy=/dzlCEXHe32pjROx6nb7"),"tdecode":string.maketrans("LhB1itMNIQoZFugvrD9fUmKWwP04AaYV+T5J8cSksGqy=/dzlCEXHe32pjROx6nb7","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10011001":{"codec":"tor153","bit":"10011001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","trFTZVXGhBs=+v8RYAqa9Nn4bSxOcflimD3WPU/e1dQ26ugwjyJE5MKpLIH70okCz"),"tdecode":string.maketrans("trFTZVXGhBs=+v8RYAqa9Nn4bSxOcflimD3WPU/e1dQ26ugwjyJE5MKpLIH70okCz","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10011010":{"codec":"tor154","bit":"10011010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","4uqVG5DsO2+0QifY7xtlcyMSvdHZaENnPA=whbJ1RkoF/IjgKzUTr8X9WpeC63BmL"),"tdecode":string.maketrans("4uqVG5DsO2+0QifY7xtlcyMSvdHZaENnPA=whbJ1RkoF/IjgKzUTr8X9WpeC63BmL","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10011011":{"codec":"tor155","bit":"10011011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","3EKBDJ=Gbd/junUCHToLxzMNXykqQPr0IsmWghapR1FVZf8tiO495S+A6wvlce7Y2"),"tdecode":string.maketrans("3EKBDJ=Gbd/junUCHToLxzMNXykqQPr0IsmWghapR1FVZf8tiO495S+A6wvlce7Y2","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10011100":{"codec":"tor156","bit":"10011100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghij
klmnopqrstuvwxyz0123456789+/=","ZN0YJkFuDdvryg6CG3Ka8ml/f71x+9UA52TwWIQbzjOMhSVRBXoE4ipHPLsnqcte="),"tdecode":string.maketrans("ZN0YJkFuDdvryg6CG3Ka8ml/f71x+9UA52TwWIQbzjOMhSVRBXoE4ipHPLsnqcte=","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10011101":{"codec":"tor157","bit":"10011101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","EenfD6Lp/k7z1XF43OSCGJ9dN=Qsyaj2MV+cRo5HrlbPu0ZwWxtTmiIKYghqBUA8v"),"tdecode":string.maketrans("EenfD6Lp/k7z1XF43OSCGJ9dN=Qsyaj2MV+cRo5HrlbPu0ZwWxtTmiIKYghqBUA8v","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10011110":{"codec":"tor158","bit":"10011110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","7YjDy8+z3f0qAJGKdCBxwEPTcIN6rQRUpvXW5atgZMmbo9/4HusnS=l1OeVLFhi2k"),"tdecode":string.maketrans("7YjDy8+z3f0qAJGKdCBxwEPTcIN6rQRUpvXW5atgZMmbo9/4HusnS=l1OeVLFhi2k","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10011111":{"codec":"tor159","bit":"10011111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","InXQTJK+=HSUVOBoR7lcgWZh0t8GpM3yAxj5ksePmYuiCF4Df26aEbz/wv9qLr1Nd"),"tdecode":string.maketrans("InXQTJK+=HSUVOBoR7lcgWZh0t8GpM3yAxj5ksePmYuiCF4Df26aEbz/wv9qLr1Nd","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10100000":{"codec":"tor160","bit":"10100000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","tgZWmdCLuUpAv+5Fz1ifYjob2sRMSelcq/OT6H7xaJ38Ik=yKwDGXBrVPQ09nhN4E"),"tdecode":string.maketrans("tgZWmdCLuUpAv+5Fz1ifYjob2sRMSelcq/OT6H7xaJ38Ik=yKwDGXBrVPQ09nhN4E","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10100001":{"codec":"tor161","bit":"10100001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","6jmhJ+eNfU2b9FYtQRqdX7WBMa=8rVuvp5wzECOoAikg0n3S/14cLxZlDHyPTKIsG"),"tdecode":string.maketran
s("6jmhJ+eNfU2b9FYtQRqdX7WBMa=8rVuvp5wzECOoAikg0n3S/14cLxZlDHyPTKIsG","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10100010":{"codec":"tor162","bit":"10100010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","iDqeJtcAByz+nb4P=8wx3gFVpKZY/aE7kNRXQG6Ul5LIr1dv0T9oMfWhOHSsmCj2u"),"tdecode":string.maketrans("iDqeJtcAByz+nb4P=8wx3gFVpKZY/aE7kNRXQG6Ul5LIr1dv0T9oMfWhOHSsmCj2u","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10100011":{"codec":"tor163","bit":"10100011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","lv1sWo+tZA/3bz9jUQCJfuhFTDBREgnpOy4x5m7IrXNGHacqKki=6MLYS8wV2e0dP"),"tdecode":string.maketrans("lv1sWo+tZA/3bz9jUQCJfuhFTDBREgnpOy4x5m7IrXNGHacqKki=6MLYS8wV2e0dP","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10100100":{"codec":"tor164","bit":"10100100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","NFh4VAO7U6D8yQ2qB+5r0/=ZcYfWTG9xluJeMsza13LXovwjCgSRPEtkmpbidnKIH"),"tdecode":string.maketrans("NFh4VAO7U6D8yQ2qB+5r0/=ZcYfWTG9xluJeMsza13LXovwjCgSRPEtkmpbidnKIH","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10100101":{"codec":"tor165","bit":"10100101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","cY3GeVx1KZoEmQpO+R6frTi2NHjlFMwuIP54B9yJ7bUWvdn8/CLqzkAsXgh=Sta0D"),"tdecode":string.maketrans("cY3GeVx1KZoEmQpO+R6frTi2NHjlFMwuIP54B9yJ7bUWvdn8/CLqzkAsXgh=Sta0D","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10100110":{"codec":"tor166","bit":"10100110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","wEK3Tif2WHPsYoLdc6qyu7lFgrRejI0NhB8AtOnCS+Zzxp/9MJVXGbUQ4=ak1Dmv5"),"tdecode":string.maketrans("wEK3Tif2WHPsYoLdc6qyu7lFgrRejI0NhB8AtOnCS+Zzxp/9MJVXGbUQ4=ak1Dmv5","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz01
23456789+/=")},"10100111":{"codec":"tor167","bit":"10100111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","evcr+VSglENG2Y1jRI6TWk3t4Hwoxdn=Jb/Oim0zZMs8h7UyuXqLBPKCQDA9p5Faf"),"tdecode":string.maketrans("evcr+VSglENG2Y1jRI6TWk3t4Hwoxdn=Jb/Oim0zZMs8h7UyuXqLBPKCQDA9p5Faf","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10101000":{"codec":"tor168","bit":"10101000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","9jCXqoM841ExFrIH0fmkRhiATKpzSBca=/+l52ZPt6eOubnw7LgWYQGyUDvsVNJd3"),"tdecode":string.maketrans("9jCXqoM841ExFrIH0fmkRhiATKpzSBca=/+l52ZPt6eOubnw7LgWYQGyUDvsVNJd3","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10101001":{"codec":"tor169","bit":"10101001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","wLQkiZW/8PBDbEyecS+6xIYNopnzHj2RfqAsVCuJ0MUG4mF=aOX197TKlhg3dtv5r"),"tdecode":string.maketrans("wLQkiZW/8PBDbEyecS+6xIYNopnzHj2RfqAsVCuJ0MUG4mF=aOX197TKlhg3dtv5r","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10101010":{"codec":"tor170","bit":"10101010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","XxeqOgnRVS4Z2GQyBltbNzH9KcW=jDCkrL0hasfAi+Uv/d7FIM8T1ouJp3w6EYmP5"),"tdecode":string.maketrans("XxeqOgnRVS4Z2GQyBltbNzH9KcW=jDCkrL0hasfAi+Uv/d7FIM8T1ouJp3w6EYmP5","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10101011":{"codec":"tor171","bit":"10101011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","DRCMxhbedgZpP=Nk4BuXH/26tWVnm+zsGyISoK0qc8rvQAUaL3jF57lEiTYO9wJf1"),"tdecode":string.maketrans("DRCMxhbedgZpP=Nk4BuXH/26tWVnm+zsGyISoK0qc8rvQAUaL3jF57lEiTYO9wJf1","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10101100":{"codec":"tor172","bit":"10101100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghij
klmnopqrstuvwxyz0123456789+/=","JOl9n2c/0wpdaL4xDbqSht=K5Qk1IWPXosr3vgiBA7myZjeN8FzHVGRMCu+UYET6f"),"tdecode":string.maketrans("JOl9n2c/0wpdaL4xDbqSht=K5Qk1IWPXosr3vgiBA7myZjeN8FzHVGRMCu+UYET6f","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10101101":{"codec":"tor173","bit":"10101101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","2qB/QfzSxeIhsZy1gPMw3EdOCGT0Ap7D+tcrKkob4uFR8Yjv=Jm9VaUNilWL6HX5n"),"tdecode":string.maketrans("2qB/QfzSxeIhsZy1gPMw3EdOCGT0Ap7D+tcrKkob4uFR8Yjv=Jm9VaUNilWL6HX5n","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10101110":{"codec":"tor174","bit":"10101110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","4F/sBh7vb=gTODcUlCtLS9Zy5RWwXn1NMjiuozHdqkYmeGQ6p2rKEPfIJx8a3A0+V"),"tdecode":string.maketrans("4F/sBh7vb=gTODcUlCtLS9Zy5RWwXn1NMjiuozHdqkYmeGQ6p2rKEPfIJx8a3A0+V","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10101111":{"codec":"tor175","bit":"10101111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","R=jgQGsHe+cJyAqxlZrwM9U3YL2WXO/oVSIt5C70vn1dhNmp4f8kKDFEPu6zaBibT"),"tdecode":string.maketrans("R=jgQGsHe+cJyAqxlZrwM9U3YL2WXO/oVSIt5C70vn1dhNmp4f8kKDFEPu6zaBibT","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10110000":{"codec":"tor176","bit":"10110000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","omi8l/pBnEXWTUCcwrVkZHvS6KRPLdNGj1qe7f4QuA+IO50MYDyz9atg3=sbhxFJ2"),"tdecode":string.maketrans("omi8l/pBnEXWTUCcwrVkZHvS6KRPLdNGj1qe7f4QuA+IO50MYDyz9atg3=sbhxFJ2","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10110001":{"codec":"tor177","bit":"10110001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","Xmu+YWTNr0d4MRjA6D5yeoS3x/2fzG87tPiKsbQZBhF=OLlwgqIVcCEvpH91UkJan"),"tdecode":string.maketran
s("Xmu+YWTNr0d4MRjA6D5yeoS3x/2fzG87tPiKsbQZBhF=OLlwgqIVcCEvpH91UkJan","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10110010":{"codec":"tor178","bit":"10110010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","H7y634cVDPsFRAfZdb80eXm1vlwUtuWIkj2orh+K5SMOnCQiq9/aYJ=pTgzxEBNLG"),"tdecode":string.maketrans("H7y634cVDPsFRAfZdb80eXm1vlwUtuWIkj2orh+K5SMOnCQiq9/aYJ=pTgzxEBNLG","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10110011":{"codec":"tor179","bit":"10110011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","=zdeD1XV/Rb2PwCr7vtLoK46WGB5aYAxiTIclyk3FjHS+sugmQ8NJ9h0pZEUqnfOM"),"tdecode":string.maketrans("=zdeD1XV/Rb2PwCr7vtLoK46WGB5aYAxiTIclyk3FjHS+sugmQ8NJ9h0pZEUqnfOM","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10110100":{"codec":"tor180","bit":"10110100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","2R8ZnlruxUONzVkGBhtmFAWIqse9KEwg0aLQ5=PS/Tp4JjYi3f6XH+cMvdy7Co1bD"),"tdecode":string.maketrans("2R8ZnlruxUONzVkGBhtmFAWIqse9KEwg0aLQ5=PS/Tp4JjYi3f6XH+cMvdy7Co1bD","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10110101":{"codec":"tor181","bit":"10110101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","=RcU8CB01FVTXMZJxEyGl3gvYotd7PfHKhaLIAqnDSWwjsN+6b/2r9O4pzQekmi5u"),"tdecode":string.maketrans("=RcU8CB01FVTXMZJxEyGl3gvYotd7PfHKhaLIAqnDSWwjsN+6b/2r9O4pzQekmi5u","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10110110":{"codec":"tor182","bit":"10110110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","WV7h/r+R0ALDmapkqouXzT5C=HwvGJbI28s9dcyU3MlgxFKn6SNQfBPeO4tj1YiZE"),"tdecode":string.maketrans("WV7h/r+R0ALDmapkqouXzT5C=HwvGJbI28s9dcyU3MlgxFKn6SNQfBPeO4tj1YiZE","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz01
23456789+/=")},"10110111":{"codec":"tor183","bit":"10110111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","wD2coQJGljxLS=EY6pN3Wg1Mkqb4vVHRy7FU58OuztTPZnC0idKhrsmIXBaf9e/+A"),"tdecode":string.maketrans("wD2coQJGljxLS=EY6pN3Wg1Mkqb4vVHRy7FU58OuztTPZnC0idKhrsmIXBaf9e/+A","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10111000":{"codec":"tor184","bit":"10111000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","A1D6mfzOGtFZrBuM/9J+iqVCKlRPLcpo4=W5e2dY0XgNnxkHvjI7SEyTa8hsQU3wb"),"tdecode":string.maketrans("A1D6mfzOGtFZrBuM/9J+iqVCKlRPLcpo4=W5e2dY0XgNnxkHvjI7SEyTa8hsQU3wb","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10111001":{"codec":"tor185","bit":"10111001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","6ivnlAL/kDf4WuPhOoq5y0z7RaIecKJdG3X8rQpjmst1S=wNbVUTMZCYBxHEF+9g2"),"tdecode":string.maketrans("6ivnlAL/kDf4WuPhOoq5y0z7RaIecKJdG3X8rQpjmst1S=wNbVUTMZCYBxHEF+9g2","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10111010":{"codec":"tor186","bit":"10111010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","FLtXE+e62shdnOVIrwgmTvQbZS51P8oC39y/GHxWkc04zADqpURlKN7=uYiMBJjaf"),"tdecode":string.maketrans("FLtXE+e62shdnOVIrwgmTvQbZS51P8oC39y/GHxWkc04zADqpURlKN7=uYiMBJjaf","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10111011":{"codec":"tor187","bit":"10111011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","TVkyswGaUflzjMeWqpK2H+DS3NBbc7in4PErF61LYo5CRuhJ=xvZQ/8XO9Ag0tmdI"),"tdecode":string.maketrans("TVkyswGaUflzjMeWqpK2H+DS3NBbc7in4PErF61LYo5CRuhJ=xvZQ/8XO9Ag0tmdI","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10111100":{"codec":"tor188","bit":"10111100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghij
klmnopqrstuvwxyz0123456789+/=","ZJQIYbB8yrVLXj6eldz1auxE97tHFAUnNkqCfvDoPRM2iK+pg0Os3/SG5m4w=hWcT"),"tdecode":string.maketrans("ZJQIYbB8yrVLXj6eldz1auxE97tHFAUnNkqCfvDoPRM2iK+pg0Os3/SG5m4w=hWcT","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10111101":{"codec":"tor189","bit":"10111101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","urLjh=nVWUzHiJoKq8s5b+S1teAvBmIyFgxQYkalPpRNDXZ43/Tcw7CMd9GEf062O"),"tdecode":string.maketrans("urLjh=nVWUzHiJoKq8s5b+S1teAvBmIyFgxQYkalPpRNDXZ43/Tcw7CMd9GEf062O","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10111110":{"codec":"tor190","bit":"10111110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","9bN30m6cfZMi75unPSpshCr+HETxFIy/QjqU8YRaWB=Lew4OKkld2XvDtgJAGz1oV"),"tdecode":string.maketrans("9bN30m6cfZMi75unPSpshCr+HETxFIy/QjqU8YRaWB=Lew4OKkld2XvDtgJAGz1oV","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"10111111":{"codec":"tor191","bit":"10111111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","SrRZYl8kwVfBK+DGC2nAjLQEyWu4UzoOmv/eMJtxaPhiN1qH0csXb536IpF9=Tgd7"),"tdecode":string.maketrans("SrRZYl8kwVfBK+DGC2nAjLQEyWu4UzoOmv/eMJtxaPhiN1qH0csXb536IpF9=Tgd7","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11000000":{"codec":"tor192","bit":"11000000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","0YgX2B61adv5IcH3Lwql=tmD9yuGoki4CsWpK8ARbN/fZhxrzO+TJUSPEVeQjFM7n"),"tdecode":string.maketrans("0YgX2B61adv5IcH3Lwql=tmD9yuGoki4CsWpK8ARbN/fZhxrzO+TJUSPEVeQjFM7n","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11000001":{"codec":"tor193","bit":"11000001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","JDMo3ERajXZ1T08r7/CgvKskQ=AuVH9GSqWnLOifpmUb2eYlIFdch4xywt+Bz6N5P"),"tdecode":string.maketran
s("JDMo3ERajXZ1T08r7/CgvKskQ=AuVH9GSqWnLOifpmUb2eYlIFdch4xywt+Bz6N5P","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11000010":{"codec":"tor194","bit":"11000010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","DX=jaGcQoC/WAItS+Y6k2mxRHgJNuB7EP1eFLUMsZqO4rlTVKdw59nzyh3fv8pbi0"),"tdecode":string.maketrans("DX=jaGcQoC/WAItS+Y6k2mxRHgJNuB7EP1eFLUMsZqO4rlTVKdw59nzyh3fv8pbi0","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11000011":{"codec":"tor195","bit":"11000011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","PNCvr/bFy95kUElRonW0dwM8qSO7cVYBmLHzeA2T6gQh3=1aJIDspt+Xi4uxGZKfj"),"tdecode":string.maketrans("PNCvr/bFy95kUElRonW0dwM8qSO7cVYBmLHzeA2T6gQh3=1aJIDspt+Xi4uxGZKfj","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11000100":{"codec":"tor196","bit":"11000100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","/DVcnJCAlt0ZQvhELedbqRHsN9agpTK824WuU3k+oSxjiY1OIrFM65fm7z=ByPGwX"),"tdecode":string.maketrans("/DVcnJCAlt0ZQvhELedbqRHsN9agpTK824WuU3k+oSxjiY1OIrFM65fm7z=ByPGwX","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11000101":{"codec":"tor197","bit":"11000101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","sxNdUg4z73J+DRwFmfL2CovHpVe/nlyTS=XE5G0rZAaOk8iWq69QtbhBPMYjcu1KI"),"tdecode":string.maketrans("sxNdUg4z73J+DRwFmfL2CovHpVe/nlyTS=XE5G0rZAaOk8iWq69QtbhBPMYjcu1KI","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11000110":{"codec":"tor198","bit":"11000110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","8RbEldaC/cUn4sM0pNYKOujL6GSZomQ2ADyhtT=HFX3fiI5+gzeJ1BwvPkq7rxVW9"),"tdecode":string.maketrans("8RbEldaC/cUn4sM0pNYKOujL6GSZomQ2ADyhtT=HFX3fiI5+gzeJ1BwvPkq7rxVW9","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz01
23456789+/=")},"11000111":{"codec":"tor199","bit":"11000111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","XIQEzbNYeVuwCgcpdGU7Kxi5h=P64sJHqF8RMatyW+SOTrB1/Llnof2jmA9Zk0v3D"),"tdecode":string.maketrans("XIQEzbNYeVuwCgcpdGU7Kxi5h=P64sJHqF8RMatyW+SOTrB1/Llnof2jmA9Zk0v3D","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11001000":{"codec":"tor200","bit":"11001000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","VX/pS=YK+d6xZPo07gRNEOq8Ujuw2D3BHCJcAtL4WfaGhl1zIbreFyvminsQT9kM5"),"tdecode":string.maketrans("VX/pS=YK+d6xZPo07gRNEOq8Ujuw2D3BHCJcAtL4WfaGhl1zIbreFyvminsQT9kM5","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11001001":{"codec":"tor201","bit":"11001001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","Qel=raXiZRkV4hMKyI10TL9JvAs+S53wfNzB8pjOon27cFYGCgDxqbWUuP/tHEdm6"),"tdecode":string.maketrans("Qel=raXiZRkV4hMKyI10TL9JvAs+S53wfNzB8pjOon27cFYGCgDxqbWUuP/tHEdm6","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11001010":{"codec":"tor202","bit":"11001010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","uW8SFL5J=DxPqVZn6eGftc/wTiHAMNRgCvsXO27b0rjdyahBp1IQ9lU3zomEkY+4K"),"tdecode":string.maketrans("uW8SFL5J=DxPqVZn6eGftc/wTiHAMNRgCvsXO27b0rjdyahBp1IQ9lU3zomEkY+4K","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11001011":{"codec":"tor203","bit":"11001011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","4ejIDO0wF5vxRPiNTWl2sBzKnSVH7QauU/pcCfgkAMrG+hX98=JqLYdZo3bt1m6yE"),"tdecode":string.maketrans("4ejIDO0wF5vxRPiNTWl2sBzKnSVH7QauU/pcCfgkAMrG+hX98=JqLYdZo3bt1m6yE","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11001100":{"codec":"tor204","bit":"11001100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghij
klmnopqrstuvwxyz0123456789+/=","6WNnwfqligSEboh1+s5H9t3Dkc7xvY/dmA2QXGIVKeaPpR8LFTzuJM0Z=Oy4BCjUr"),"tdecode":string.maketrans("6WNnwfqligSEboh1+s5H9t3Dkc7xvY/dmA2QXGIVKeaPpR8LFTzuJM0Z=Oy4BCjUr","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11001101":{"codec":"tor205","bit":"11001101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","YcLiZ+Ehg7Jx=KXzonA6eyM9dmua1fSGvCPjtNQ0THV/ORIr4psUl3wWqkBDF528b"),"tdecode":string.maketrans("YcLiZ+Ehg7Jx=KXzonA6eyM9dmua1fSGvCPjtNQ0THV/ORIr4psUl3wWqkBDF528b","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11001110":{"codec":"tor206","bit":"11001110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","8wtPgvbmu=N6d3cJyRY5x+VDr4T0GkAjXo/1FE7ML9nlB2zUHWhQiKSpOeZfqsIaC"),"tdecode":string.maketrans("8wtPgvbmu=N6d3cJyRY5x+VDr4T0GkAjXo/1FE7ML9nlB2zUHWhQiKSpOeZfqsIaC","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11001111":{"codec":"tor207","bit":"11001111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","T3qzLS5nwY7UyAQWPkDf4Mb6=jlgBuGtihHJ8o09ERVFXZOmId12+re/cspKNaxCv"),"tdecode":string.maketrans("T3qzLS5nwY7UyAQWPkDf4Mb6=jlgBuGtihHJ8o09ERVFXZOmId12+re/cspKNaxCv","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11010000":{"codec":"tor208","bit":"11010000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","sYpLAnj4iGHXz9yRMtPD5f0hUQwFBC+eOrqbVSWlm16Zcg3ITu7/JKoxakd8=NEv2"),"tdecode":string.maketrans("sYpLAnj4iGHXz9yRMtPD5f0hUQwFBC+eOrqbVSWlm16Zcg3ITu7/JKoxakd8=NEv2","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11010001":{"codec":"tor209","bit":"11010001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","sGuCayqHF31LJlv2Kr+edj8p7kYBMog5x6DSQTwtA4/Xn9fm=PNIW0RcVEhbOzZUi"),"tdecode":string.maketran
s("sGuCayqHF31LJlv2Kr+edj8p7kYBMog5x6DSQTwtA4/Xn9fm=PNIW0RcVEhbOzZUi","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11010010":{"codec":"tor210","bit":"11010010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","3diUWH7Of1yL5cSu9=K2XP4J+jeTw6ohmZRxVlrMQDGba8AEkgvtNzpI0/nqsCBFY"),"tdecode":string.maketrans("3diUWH7Of1yL5cSu9=K2XP4J+jeTw6ohmZRxVlrMQDGba8AEkgvtNzpI0/nqsCBFY","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11010011":{"codec":"tor211","bit":"11010011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","EZ3Q9pvAVPe=doGM8/RkbCSXK0qFgJxzmrDInw6Hau2hjLB5+UNO4YclWf71iytTs"),"tdecode":string.maketrans("EZ3Q9pvAVPe=doGM8/RkbCSXK0qFgJxzmrDInw6Hau2hjLB5+UNO4YclWf71iytTs","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11010100":{"codec":"tor212","bit":"11010100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","iTvzRasdSq+85ykVYLf4l6/j2CXOJPwWEcobZIng031hxU9NMe7GA=rDpFtHuKBQm"),"tdecode":string.maketrans("iTvzRasdSq+85ykVYLf4l6/j2CXOJPwWEcobZIng031hxU9NMe7GA=rDpFtHuKBQm","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11010101":{"codec":"tor213","bit":"11010101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","7VW0UkMnOpsljc/PhTqvrRmQ=zYwbdZ2f+LxIKBGyNS9D4XFt6gHEo35i8CeJ1uaA"),"tdecode":string.maketrans("7VW0UkMnOpsljc/PhTqvrRmQ=zYwbdZ2f+LxIKBGyNS9D4XFt6gHEo35i8CeJ1uaA","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11010110":{"codec":"tor214","bit":"11010110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","jhx7V8ADnrYMlK6pvOy2e4gI0mo3LqGJFb/BHSNWka+1uZitz9EcPdw5TQfRUsXC="),"tdecode":string.maketrans("jhx7V8ADnrYMlK6pvOy2e4gI0mo3LqGJFb/BHSNWka+1uZitz9EcPdw5TQfRUsXC=","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz01
23456789+/=")},"11010111":{"codec":"tor215","bit":"11010111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","n3u6E5GeUCMVQhosAryvS01FLwabWDOtdRxNKzmYgqI=p9fPl/+j4BJi8X2T7cZkH"),"tdecode":string.maketrans("n3u6E5GeUCMVQhosAryvS01FLwabWDOtdRxNKzmYgqI=p9fPl/+j4BJi8X2T7cZkH","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11011000":{"codec":"tor216","bit":"11011000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","OzWQdcPbRpV3MgH5AXylkKjroN7a=JnZiftGvBFx6mEq41hC9T0uew/Ys8+I2SLUD"),"tdecode":string.maketrans("OzWQdcPbRpV3MgH5AXylkKjroN7a=JnZiftGvBFx6mEq41hC9T0uew/Ys8+I2SLUD","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11011001":{"codec":"tor217","bit":"11011001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","ptgq4G1uAFEimYd6hJ8TQ0b3asMKeIBcSyxR/PwNzZX7L=C5W2ljfvUnO+rVHo9kD"),"tdecode":string.maketrans("ptgq4G1uAFEimYd6hJ8TQ0b3asMKeIBcSyxR/PwNzZX7L=C5W2ljfvUnO+rVHo9kD","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11011010":{"codec":"tor218","bit":"11011010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","10kuG+NjbtIKcS3Va2ABLT5HnC=vsRJow8i7hQmdYDUzyFfpM4lqgO9ExZWPe6/Xr"),"tdecode":string.maketrans("10kuG+NjbtIKcS3Va2ABLT5HnC=vsRJow8i7hQmdYDUzyFfpM4lqgO9ExZWPe6/Xr","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11011011":{"codec":"tor219","bit":"11011011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","y7DJ81q6AOoiZItPl9eHCNGb/znK=Wgk5rdpwMThvUY4mQBXVfuj2Fx30LSRascE+"),"tdecode":string.maketrans("y7DJ81q6AOoiZItPl9eHCNGb/znK=Wgk5rdpwMThvUY4mQBXVfuj2Fx30LSRascE+","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11011100":{"codec":"tor220","bit":"11011100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghij
klmnopqrstuvwxyz0123456789+/=","zYU/MkjycKWEIFieJ8Qp=+qxClZS0tLvGVmBnP7A26TO5RXgobNduahs3rDw4H19f"),"tdecode":string.maketrans("zYU/MkjycKWEIFieJ8Qp=+qxClZS0tLvGVmBnP7A26TO5RXgobNduahs3rDw4H19f","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11011101":{"codec":"tor221","bit":"11011101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","NLckTgM8DA5RChG/wvqm03JEyzlVprbU2xY+Xij7fIn1ZHsQ=WPFKS4uaBOto9de6"),"tdecode":string.maketrans("NLckTgM8DA5RChG/wvqm03JEyzlVprbU2xY+Xij7fIn1ZHsQ=WPFKS4uaBOto9de6","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11011110":{"codec":"tor222","bit":"11011110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","BtrvY7CkpHEydGsFZun8LVl+4Q106PaqJKhie=9oSM/TA52fxXRgc3bWjDImOUzwN"),"tdecode":string.maketrans("BtrvY7CkpHEydGsFZun8LVl+4Q106PaqJKhie=9oSM/TA52fxXRgc3bWjDImOUzwN","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11011111":{"codec":"tor223","bit":"11011111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","G5Mp/v4QUBAegrCRYIlbkTnX+jSJW1=KuxiHNFfds2myLqw36Eth8Dc0aPzo7OVZ9"),"tdecode":string.maketrans("G5Mp/v4QUBAegrCRYIlbkTnX+jSJW1=KuxiHNFfds2myLqw36Eth8Dc0aPzo7OVZ9","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11100000":{"codec":"tor224","bit":"11100000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","Ins1j2w3NC8ZkDMtr0/OTH+Yp=xlVfFodAyUzePJLb5R4mi76caSKGuEWhqXgQ9vB"),"tdecode":string.maketrans("Ins1j2w3NC8ZkDMtr0/OTH+Yp=xlVfFodAyUzePJLb5R4mi76caSKGuEWhqXgQ9vB","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11100001":{"codec":"tor225","bit":"11100001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","lma5PpdcNxiF8MOjn69CZr4zBH0/XqhJStWkAoV=IRG2bsQuU3YfEvTKgDy7e+wL1"),"tdecode":string.maketran
s("lma5PpdcNxiF8MOjn69CZr4zBH0/XqhJStWkAoV=IRG2bsQuU3YfEvTKgDy7e+wL1","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11100010":{"codec":"tor226","bit":"11100010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","FD0Aly4aTmN=bRh/r7UEqdewYk2OcoIKsXgZ5Wn6J+QpxfPLB3GMuViHzt9Cjv8S1"),"tdecode":string.maketrans("FD0Aly4aTmN=bRh/r7UEqdewYk2OcoIKsXgZ5Wn6J+QpxfPLB3GMuViHzt9Cjv8S1","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11100011":{"codec":"tor227","bit":"11100011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","UtbnKPp4GICO1MNjXVfWzqa+Jwm09Bh=v3DyFkdZHoilA7exL86g2TQE/5cSYuRrs"),"tdecode":string.maketrans("UtbnKPp4GICO1MNjXVfWzqa+Jwm09Bh=v3DyFkdZHoilA7exL86g2TQE/5cSYuRrs","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11100100":{"codec":"tor228","bit":"11100100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","WvgXOx3dJDCGmjQTz/pkI1KPsRL2FlE8A+Y=Voqca0NiByehwUr967S45nfHtubMZ"),"tdecode":string.maketrans("WvgXOx3dJDCGmjQTz/pkI1KPsRL2FlE8A+Y=Voqca0NiByehwUr967S45nfHtubMZ","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11100101":{"codec":"tor229","bit":"11100101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","4hwNSDou+WPji3nKxvm0MQZV7JGq5dRy9baFUk=/ItABceYHLCXsEg8lOrzTpf216"),"tdecode":string.maketrans("4hwNSDou+WPji3nKxvm0MQZV7JGq5dRy9baFUk=/ItABceYHLCXsEg8lOrzTpf216","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11100110":{"codec":"tor230","bit":"11100110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","RzNr/cMmVjo=lCgkxvZWU5b4pSsYwTKQOIieP13aLutE0dfnHDXAy2h+9JBF6G78q"),"tdecode":string.maketrans("RzNr/cMmVjo=lCgkxvZWU5b4pSsYwTKQOIieP13aLutE0dfnHDXAy2h+9JBF6G78q","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz01
23456789+/=")},"11100111":{"codec":"tor231","bit":"11100111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","9WA6gDGqQR4B0Fe+tpb8xCHaMjoTmrUJIi=7wP/d51LXNhSsVOEukvYcyf2zlZ3nK"),"tdecode":string.maketrans("9WA6gDGqQR4B0Fe+tpb8xCHaMjoTmrUJIi=7wP/d51LXNhSsVOEukvYcyf2zlZ3nK","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11101000":{"codec":"tor232","bit":"11101000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","=F4otpyZCPObDAh5kn8j+YilvRTL1MBw2r/IU0WdzXGEVHSuefQ6mgq79NK3sJxac"),"tdecode":string.maketrans("=F4otpyZCPObDAh5kn8j+YilvRTL1MBw2r/IU0WdzXGEVHSuefQ6mgq79NK3sJxac","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11101001":{"codec":"tor233","bit":"11101001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","439cPaVTQOMNZpisBgRCvfJyKhbY1Gwm+WSnrFeqlk67Lz=dDoX5/jI2tAHxU8Eu0"),"tdecode":string.maketrans("439cPaVTQOMNZpisBgRCvfJyKhbY1Gwm+WSnrFeqlk67Lz=dDoX5/jI2tAHxU8Eu0","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11101010":{"codec":"tor234","bit":"11101010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","mB9SIqTAoHpRCgEUtahL4zD5uZMPx1F8wWl62iGX3sKJk7fQj+vYVdO/0ye=bcrnN"),"tdecode":string.maketrans("mB9SIqTAoHpRCgEUtahL4zD5uZMPx1F8wWl62iGX3sKJk7fQj+vYVdO/0ye=bcrnN","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11101011":{"codec":"tor235","bit":"11101011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","oYWIf0=hmFHTutNapBscS1JLjyzR+v9ieU84QxDVZP3KdXGl/2Mq7Cb6kwAE5rnOg"),"tdecode":string.maketrans("oYWIf0=hmFHTutNapBscS1JLjyzR+v9ieU84QxDVZP3KdXGl/2Mq7Cb6kwAE5rnOg","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11101100":{"codec":"tor236","bit":"11101100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghij
klmnopqrstuvwxyz0123456789+/=","+AzxtZis9o2SjFJ6HOXYcaeyUw7q/Pg1=LfCWDdEK0lMpIVNu5RQrTmvGB3hn48bk"),"tdecode":string.maketrans("+AzxtZis9o2SjFJ6HOXYcaeyUw7q/Pg1=LfCWDdEK0lMpIVNu5RQrTmvGB3hn48bk","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11101101":{"codec":"tor237","bit":"11101101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","QkSXfZuzbHpGVtR5CTJ218YFrgWx0D=anNP7vj4Ld9B+oEOyhmIMcAsUK/wlei6q3"),"tdecode":string.maketrans("QkSXfZuzbHpGVtR5CTJ218YFrgWx0D=anNP7vj4Ld9B+oEOyhmIMcAsUK/wlei6q3","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11101110":{"codec":"tor238","bit":"11101110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","vzlShky0W+rVgcbdJ2Ye6NP7IGU1FXODan54pHA3T9EtB8jsCu=LiZQ/owRMqKxmf"),"tdecode":string.maketrans("vzlShky0W+rVgcbdJ2Ye6NP7IGU1FXODan54pHA3T9EtB8jsCu=LiZQ/owRMqKxmf","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11101111":{"codec":"tor239","bit":"11101111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","UnJRHKib0XI3qvPy+pg2Na/hr=wETmQM8F15zSoAtZWkG7esufDcxLVBCYO469djl"),"tdecode":string.maketrans("UnJRHKib0XI3qvPy+pg2Na/hr=wETmQM8F15zSoAtZWkG7esufDcxLVBCYO469djl","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11110000":{"codec":"tor240","bit":"11110000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","wIdQm2pl0PanLrRqHXYx13E75BCyhgzJTFOWNjst6DM9cZUViGKuek/vSAb4=8fo+"),"tdecode":string.maketrans("wIdQm2pl0PanLrRqHXYx13E75BCyhgzJTFOWNjst6DM9cZUViGKuek/vSAb4=8fo+","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11110001":{"codec":"tor241","bit":"11110001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","AHuJ+D60MpyYv4zF9VdQITX23eiRjPxWgmOfS7cqlnNbwUZ/1sKEB58Gtaoh=LCrk"),"tdecode":string.maketran
s("AHuJ+D60MpyYv4zF9VdQITX23eiRjPxWgmOfS7cqlnNbwUZ/1sKEB58Gtaoh=LCrk","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11110010":{"codec":"tor242","bit":"11110010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","Di1IK3d7kVoWfmCPSpeg+yJ5jR0NxhnLzsA6OGlbqYr/8H9avwuMBTXEQcFUt2Z=4"),"tdecode":string.maketrans("Di1IK3d7kVoWfmCPSpeg+yJ5jR0NxhnLzsA6OGlbqYr/8H9avwuMBTXEQcFUt2Z=4","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11110011":{"codec":"tor243","bit":"11110011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","Aoprsv3IHu5T+cCn6qNhV7/b8iUdQyxezk1GR0W4YmO2wL9aSgMZKl=JPFfjDBtXE"),"tdecode":string.maketrans("Aoprsv3IHu5T+cCn6qNhV7/b8iUdQyxezk1GR0W4YmO2wL9aSgMZKl=JPFfjDBtXE","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11110100":{"codec":"tor244","bit":"11110100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","P6IZnR387KvDsfLHo+j1YUGWQceJaEF4xBptuqlN0Tz=Xk2yd9MgriwmCAOVbh5S/"),"tdecode":string.maketrans("P6IZnR387KvDsfLHo+j1YUGWQceJaEF4xBptuqlN0Tz=Xk2yd9MgriwmCAOVbh5S/","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11110101":{"codec":"tor245","bit":"11110101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","BXU3r1qKau/4jATFLhcbgQVStiOxz9H7Znk+fYpRCJmyG0D65=8sNoMdlwEWIveP2"),"tdecode":string.maketrans("BXU3r1qKau/4jATFLhcbgQVStiOxz9H7Znk+fYpRCJmyG0D65=8sNoMdlwEWIveP2","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11110110":{"codec":"tor246","bit":"11110110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","l8wB5qdGprRYLPQfOEnAjc=FgMXkS0Ivbt7o9azs2JVHZC6NxKe4uU/D+W13iTmyh"),"tdecode":string.maketrans("l8wB5qdGprRYLPQfOEnAjc=FgMXkS0Ivbt7o9azs2JVHZC6NxKe4uU/D+W13iTmyh","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz01
23456789+/=")},"11110111":{"codec":"tor247","bit":"11110111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","qTAiBFMf=GIEHWv1gckQ6S/4Z+K7PpYRlO3XexjdzJCy9nm2sNVaburth0Lo5wU8D"),"tdecode":string.maketrans("qTAiBFMf=GIEHWv1gckQ6S/4Z+K7PpYRlO3XexjdzJCy9nm2sNVaburth0Lo5wU8D","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11111000":{"codec":"tor248","bit":"11111000","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","OWGt7HdxfwabeiRqX+cDMoI9hLnNsQ3PuvzKr5TES4AUgj8l602FmpB1yCJk/YZV="),"tdecode":string.maketrans("OWGt7HdxfwabeiRqX+cDMoI9hLnNsQ3PuvzKr5TES4AUgj8l602FmpB1yCJk/YZV=","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11111001":{"codec":"tor249","bit":"11111001","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","mHItF/8sGn5RPUiEywVxDvWBh3Y19lCAduaZr2fo0MSJkQg6eKLTc7pO+N=Xzb4qj"),"tdecode":string.maketrans("mHItF/8sGn5RPUiEywVxDvWBh3Y19lCAduaZr2fo0MSJkQg6eKLTc7pO+N=Xzb4qj","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11111010":{"codec":"tor250","bit":"11111010","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","1KI=MURO6fk7mPL5CXygodGZnE48c3zqQTJ0wiW+2YVurHN9sSA/DpthlejbBvxaF"),"tdecode":string.maketrans("1KI=MURO6fk7mPL5CXygodGZnE48c3zqQTJ0wiW+2YVurHN9sSA/DpthlejbBvxaF","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11111011":{"codec":"tor251","bit":"11111011","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","uvp0eb58nURy4XPAC=Oh7qZQfcjrS/kVmsagH1K2xLBINEF63oDTJdtwYiM9lGz+W"),"tdecode":string.maketrans("uvp0eb58nURy4XPAC=Oh7qZQfcjrS/kVmsagH1K2xLBINEF63oDTJdtwYiM9lGz+W","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11111100":{"codec":"tor252","bit":"11111100","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghij
klmnopqrstuvwxyz0123456789+/=","MWQ/g8OriNl+9XL7pxCS5mfuZVtnoGcHdqKv41y=eDRIa3bwhF20Bk6jJAzTEUYsP"),"tdecode":string.maketrans("MWQ/g8OriNl+9XL7pxCS5mfuZVtnoGcHdqKv41y=eDRIa3bwhF20Bk6jJAzTEUYsP","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11111101":{"codec":"tor253","bit":"11111101","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","STNnMzFIGpwE5R8qecKflrY3OUy7v=JaxPLZu1oDBiQCkdgs9mH2h4/A0b+tW6XjV"),"tdecode":string.maketrans("STNnMzFIGpwE5R8qecKflrY3OUy7v=JaxPLZu1oDBiQCkdgs9mH2h4/A0b+tW6XjV","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11111110":{"codec":"tor254","bit":"11111110","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","CvzWTrKsY/Vpqy4UoZ=mwhNPJMcGFE8ueIg+7ODjRtlHQ9a5iLdbAB10xXkfn6S23"),"tdecode":string.maketrans("CvzWTrKsY/Vpqy4UoZ=mwhNPJMcGFE8ueIg+7ODjRtlHQ9a5iLdbAB10xXkfn6S23","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")},"11111111":{"codec":"tor255","bit":"11111111","tencode":string.maketrans("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=","2bh04fuzwiU3rNDJF+WIM9cyX61ARBTsgnd=H8xYl/KS7Cq5jQvZPOkmVtpLeEaoG"),"tdecode":string.maketrans("2bh04fuzwiU3rNDJF+WIM9cyX61ARBTsgnd=H8xYl/KS7Cq5jQvZPOkmVtpLeEaoG","ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=")}}
def randEncode(data):
global tors
randBit = list(tors)[random.randint(0,len(tors)-1)]
return string.translate(data,tors[randBit]["tencode"]),binToStr(randBit)
def randByteChar():
global tors
return binToChar(list(tors)[random.randint(0,len(tors)-1)])
def torEncode(text,eChar):
global tors
return string.translate(text,tors[charToBin(eChar)]["tencode"])
def torDecode(text,eChar):
global tors
return string.translate(text,tors[charToBin(eChar)]["tdecode"])
def md5_(data):
md_ = md5.md5()
md_.update(str(data))
return md_.hexdigest()
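Each `tors` entry above pairs a `tencode` table with its exact inverse `tdecode` over the 65-character base64 alphabet. A minimal, self-contained sketch of that invariant, written in Python 3 syntax (`str.maketrans` stands in for the Python 2 `string.maketrans` used here; the helper names are illustrative, not part of the tool):

```python
import random
import string

# The 65-character alphabet the tables above are built over.
BASE = string.ascii_uppercase + string.ascii_lowercase + string.digits + "+/="

def make_codec(rng):
    # Shuffle the alphabet and build the forward/inverse translation tables.
    shuffled = list(BASE)
    rng.shuffle(shuffled)
    shuffled = "".join(shuffled)
    return str.maketrans(BASE, shuffled), str.maketrans(shuffled, BASE)

tencode, tdecode = make_codec(random.Random(0))
msg = "SGVsbG8gd29ybGQ="
# Decoding undoes encoding character-for-character.
assert msg.translate(tencode).translate(tdecode) == msg
```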
minLen = 16
class cryptonic:
def __init__(self,pattern=False,patternFilename=False,cryptonicLen=False):
if(pattern != False):
self.pattern = pattern
elif(patternFilename != False):
self.pattern = patternRead(patternFilename)
elif(cryptonicLen != False):
self.pattern = createCryptonicKey(codecLen = cryptonicLen)
def encode(self,data=""):
        if(data is not None):
            if(len(data) == 0):
                data = "Niye dosyayi bos birakiyosun dosya israfi degilmi utan...Program bozuluyo sonra"  # placeholder text substituted for empty input
pattern = b64decode(self.pattern)
encodedData = b64encode(data)
for item in pattern:
encodedData = torEncode(encodedData,item)
return True,b64encode(encodedData)
else:
return False,False
def decode(self,data=""):
if(len(data) > 0):
pattern = b64decode(self.pattern)[::-1]
decodedData = b64decode(data)
for eChar in pattern:
decodedData = torDecode(decodedData,eChar)
try:
return True,b64decode(decodedData)
except:
return False,None
else:
return False,None
def encodeFile(self,fname):
with open(fname,"rb") as fileVarR:
with open("%s.cryptonicted"%(fname),"wb") as fileVar:
fileVar.write("-----BEGIN CRYPTONIC DATA-----\n%s\n-----END CRYPTONIC DATA-----"%(self.encode(data=fileVarR.read())[1]))
return "%s.cryptonicted"%(fname)
def decodeFile(self,fname,nt=True):
try:
if(nt):
with open("%s.cryptonicted" % fname, "rb") as fileVar:
state, decodedFileData = self.decode(
data=str(fileVar.read().split("-----")[1:-1][1].split("\n")[1:-1]))
if (not state):
return False, None
                with open("(decoded)%s"%(fname),"wb") as fileVar:
                    fileVar.write("%s"%(decodedFileData))
                return True,"(decoded)%s"%(fname)
else:
with open("%s" % fname, "rb") as fileVar:
data = str(fileVar.read().split("-----")[1:-1][1].split("\n")[1:-1])
state, decodedFileData = self.decode(data=data)
if (not state):
return False, None
with open(".".join(fname.split(".")[:-1]),"wb") as fileVar:
fileVar.write("%s"%(decodedFileData))
                return True,".".join(fname.split(".")[:-1])
except IOError:
return False,None
except IndexError:
return False,None
        except Exception:
return False,None
def patternSave(self,fname):
if(len(self.pattern) > 4):
if(os.path.isfile("%s.cton"%(fname))):
question = raw_input("%s.cton adli dosya uzerine yazilsinmi ?(e/h)"%(fname))
if(question.lower() == "e"):
with open("%s.cton"%(fname),"wb") as fileVar:
fileVar.write("-----BEGIN CRYPTONIC KEY-----\n%s\n-----END CRYPTONIC KEY-----"%(self.pattern))
elif(question.lower() == "h"):
i = 0
while True:
if(not os.path.isfile("%s(%s).cton"%(fname,i))):
with open("%s(%s).cton"%(fname,i),"wb") as fileVar:
fileVar.write("-----BEGIN CRYPTONIC KEY-----\n%s\n-----END CRYPTONIC KEY-----"%(self.pattern))
break
i += 1
else:
with open("%s.cton"%(fname),"wb") as fileVar:
fileVar.write("-----BEGIN CRYPTONIC KEY-----\n%s\n-----END CRYPTONIC KEY-----"%(self.pattern))
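The `encode`/`decode` pair above layers one substitution per pattern character on top of base64, and decoding undoes the layers in reverse order. A self-contained Python 3 sketch of that layering (the seeded codec below is only a stand-in for the `tors` lookup; all names are hypothetical):

```python
import base64
import random
import string

_BASE = string.ascii_uppercase + string.ascii_lowercase + string.digits + "+/="

def _codec(seed):
    # Stand-in for one tors entry: a seeded shuffle of the base64 alphabet.
    shuffled = list(_BASE)
    random.Random(seed).shuffle(shuffled)
    s = "".join(shuffled)
    return str.maketrans(_BASE, s), str.maketrans(s, _BASE)

def layered_encode(data, pattern):
    text = base64.b64encode(data).decode()
    for ch in pattern:            # one substitution layer per pattern char
        text = text.translate(_codec(ch)[0])
    return text

def layered_decode(text, pattern):
    for ch in reversed(pattern):  # undo the layers in reverse order
        text = text.translate(_codec(ch)[1])
    return base64.b64decode(text)

assert layered_decode(layered_encode(b"hi", "abc"), "abc") == b"hi"
```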
class fusion(object):
"""docstring for fusion"""
fsrule = "***"
def __init__(self, filename):
self.filename = filename
def triggerFusion(self,fileTree):
with open(self.filename,"wb+") as fp:
for path in fileTree:
for fpath in fileTree[path]:
try:
with open("%s\\%s"%(path,fpath),"rb") as fpa:
fp.write("%s%s%s"%(self.fsrule,b64encode(json.dumps({"filename":"%s\\%s"%(path,fpath),"data":self.encode(fpa.read())})),self.fsrule))
                    except IOError:
print "Permission Denied"
def encode(self,data):
return b64encode(zlib.compress(data))
def decode(self,data):
return zlib.decompress(b64decode(data))
def triggerFission(self):
with open(self.filename,"rb") as fp:
fname = os.path.split(self.filename)[1]
for i in fp.read().split(self.fsrule):
if(i != ""):
djson = json.loads(b64decode(i))
if not os.path.exists(os.path.split(djson["filename"])[0]):
os.makedirs(os.path.split(djson["filename"])[0])
with open(djson["filename"],"wb") as fpa:
fpa.write(self.decode(djson["data"]))
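`triggerFusion` writes each file as zlib-compressed, base64-wrapped JSON between `***` delimiters, and `triggerFission` splits on the same delimiter to restore the files. An in-memory sketch of that container format (the `pack`/`unpack` helper names are illustrative):

```python
import base64
import json
import zlib

FSRULE = "***"  # same record delimiter as fusion.fsrule

def pack(files):
    # files: dict mapping filename -> raw bytes
    blob = ""
    for name, data in files.items():
        payload = base64.b64encode(zlib.compress(data)).decode()
        record = json.dumps({"filename": name, "data": payload})
        blob += FSRULE + base64.b64encode(record.encode()).decode() + FSRULE
    return blob

def unpack(blob):
    out = {}
    for chunk in blob.split(FSRULE):
        if chunk:  # skip the empty strings between back-to-back delimiters
            record = json.loads(base64.b64decode(chunk))
            out[record["filename"]] = zlib.decompress(base64.b64decode(record["data"]))
    return out

assert unpack(pack({"a.txt": b"hello", "b.bin": b"\x00\x01"})) == {"a.txt": b"hello", "b.bin": b"\x00\x01"}
```

Splitting on `***` is safe because the base64 alphabet never contains `*`.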
def createCryptonicKey(codecLen = 4):
    # Both branches of the original did identical work; only the effective
    # length differed, so enforce the minimum once and keep a single loop.
    codecLen = max(codecLen, minLen)
    lastcodec = ""
    pattern = ""
    eChar = randByteChar()
    lastcodec = eChar
    for item in xrange(codecLen):
        while True:
            eChar = randByteChar()
            if(lastcodec != eChar):
                pattern += eChar
                break
        lastcodec = eChar
    return b64encode(pattern)
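The key generation above is rejection sampling: draw random codec characters and redraw whenever the candidate equals the previous one, so no two adjacent pattern characters select the same substitution table. A self-contained Python 3 sketch of that rule (byte values stand in for `randByteChar()`; names are illustrative):

```python
import random

def make_pattern(length, rng):
    # Redraw whenever the candidate equals the previous character, so no two
    # adjacent pattern bytes select the same substitution table.
    pattern = []
    last = None
    while len(pattern) < length:
        candidate = rng.randrange(256)
        if candidate != last:
            pattern.append(candidate)
            last = candidate
    return bytes(pattern)

key = make_pattern(32, random.Random(42))
assert len(key) == 32
assert all(key[i] != key[i + 1] for i in range(len(key) - 1))
```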
def getFileData(fname):
with open(fname,"rb") as fileVar:
return fileVar.read()
def patternRead(fname):
try:
with open("%s.cton"%(fname),"rb") as fileVar:
return str(fileVar.read().split("-----")[1:-1][1].split("\n")[1:-1])
except:
        print "%s.cton not found, so a new pattern was created and saved to %s.cton"%(fname,fname)
patternWriteInFile(fname,createPattern(1024))
return patternRead(fname)
def consoleApplication():
global tors
nowPattern = ""
while True:
cmd = raw_input("Cryptonic > ")
for commander in cmd.split(";"):
commands=shlex.split(commander)
if(commands[0] == "pattern"):
if(commands[1] == "set"):
if(commands[2] == "create"):
try:
nowPattern = createCryptonicKey(int(commands[3]))
pattern = cryptonic(pattern=nowPattern)
print "Successfully Loaded at cache"
except IndexError:
print "Please enter integer of create method"
elif(commands[2] == "ctonfile"):
try:
nowPattern = patternRead(commands[3])
pattern = cryptonic(pattern=nowPattern)
print "Successfully Loaded at cache"
except IndexError:
print "Please enter string."
elif(commands[2] == "seton"):
try:
nowPattern = b64encode(commands[3])
pattern = cryptonic(pattern=nowPattern)
print "Successfully Loaded at cache"
except TypeError:
print "Base64 Error !"
elif(commands[2] == "getfile"):
try:
ffname = commands[3]
if(os.path.isfile(ffname)):
with open(ffname,"rb") as fileVar:
nowPattern = b64encode(fileVar.read())
pattern = cryptonic(pattern=nowPattern)
                                print "Successfully created pattern with %s. Loaded at cache"%(ffname)
else:
print "File not found."
except:
print "Please enter filename."
else:
                        print "Wrong command usage."
elif(commands[1] == "get"):
if(commands[2] == "key"):
if(len(nowPattern) < 1024*5 and len(nowPattern) != 0):
print "-----BEGIN CRYPTONIC KEY-----\n%s\n-----END CRYPTONIC KEY-----"%(nowPattern)
elif(len(nowPattern) > 1024*5):
                            print "Pattern size is larger than 5 KB, so it was not printed."
else:
                            print "Pattern not loaded."
elif(commands[2] == "size"):
if(len(nowPattern) != 0):
print "Pattern Size : %s Byte"%(len(b64decode(nowPattern)))
else:
                            print "Pattern not loaded."
elif(commands[2] == "md5"):
if(len(nowPattern) != 0):
print "Pattern MD5 Hash : %s"%(md5_(b64decode(nowPattern)))
else:
                            print "Pattern not loaded."
elif(commands[2] == "info"):
if(len(nowPattern) != 0):
print "Pattern Size : %s Byte"%(len(b64decode(nowPattern)))
print "Pattern MD5 Hash : %s"%(md5_(b64decode(nowPattern)))
else:
                            print "Pattern not loaded."
elif(commands[1] == "save"):
if(len(nowPattern) != 0):
if(os.path.isfile(commands[2])):
                            print "This file already exists."
else:
try:
pattern.patternSave(commands[2])
                                print "Successfully created cryptonic key file > %s.cton"%(commands[2])
except:
print "Error on saving."
else:
print "Please load pattern key."
else:
print "Please enter valid command."
elif(commands[0] == "encode"):
if(commands[1] == "file"):
if(len(nowPattern) != 0):
if(os.path.isfile(commands[2])):
fname = pattern.encodeFile(commands[2])
print "Encoded File Name : %s"%(fname)
else:
print "File not found."
else:
print "Please load pattern key."
elif(commands[1] == "text"):
if(len(nowPattern) != 0):
try:
encodedText = pattern.encode(data=commands[2])
print "\nPattern MD5 : %s\nEncoded Text : %s\nRaw Text : %s\n"%(md5_(nowPattern),encodedText[1],commands[2])
except:
                            print "Error occurred while encoding the text."
else:
print "Please load cryptonic key."
elif(commands[1] == "allpath"):
if(len(nowPattern) != 0):
folderName = commands[2]
fileTree = getFolderTree(folderName)
for i in fileTree:
for j in fileTree[i]:
filename = "%s\\%s"%(i,j)
fname = pattern.encodeFile(filename)
os.remove(filename)
os.rename(folderName,"%s-cryptonicted"%(folderName))
else:
print "Please load cryptonic key."
elif(commands[0] == "decode"):
if(commands[1] == "file"):
if(len(nowPattern) != 0):
if(os.path.isfile(commands[2])):
try:
state,fname = pattern.decodeFile(commands[2])
if(state):
print "\nDecoded Filename : %s\nEncoded File Name : %s"%(fname,commands[2])
else:
print "Failed."
except:
print "Cryptonic key not valid."
else:
print "File not found."
else:
print "Please load pattern key."
elif(commands[1] == "text"):
if(len(nowPattern) != 0):
try:
state,decodedText = pattern.decode(data=commands[2])
if(state):
print "\nPattern MD5 : %s\nDecoded Text : %s\nEncoded Text : %s\n"%(md5_(nowPattern),decodedText,commands[2])
else:
print "Failed\n"
except:
                            print "Error occurred while decoding the text."
else:
print "Please load pattern key."
elif(commands[1] == "allpath"):
if(len(nowPattern) != 0):
folderName = commands[2]
fileTree = getFolderTree(folderName)
for i in fileTree:
for j in fileTree[i]:
if(j.split(".")[-1] == "cryptonicted"):
filename = "%s\\%s"%(i,j)
state,fname = pattern.decodeFile(fname=filename,nt=False)
if(state == False):
print "Failed."
break
os.remove(filename)
else:
pass
if(folderName.split("-")[-1] == "cryptonicted"):
os.rename(folderName,"-".join(folderName.split("-")[:-1]))
else:
pass
else:
print "Please load pattern key."
elif(commands[0] == "cry"):
if(commands[1] == "compress"):
try:
path = commands[2]
except:
print "Please enter path."
continue
fs = fusion(raw_input("Please enter file name > "))
fs.triggerFusion(getFolderTree(path))
elif(commands[1] == "decompress"):
try:
filename = commands[2]
except:
print "Please enter filename."
continue
fs = fusion(filename)
fs.triggerFission()
elif(commands[0] == "help"):
print getHelp()
else:
                print "Please enter a valid command."
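The console loop above splits each input line on `;` and tokenizes every command with `shlex.split`, so quoted arguments (for example filenames with spaces) survive as single tokens. A quick illustration of that parsing:

```python
import shlex

line = 'pattern set create 16; encode file "my file.txt"'
parsed = [shlex.split(part) for part in line.split(";")]
assert parsed == [["pattern", "set", "create", "16"],
                  ["encode", "file", "my file.txt"]]
```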
#
# MAIN
#
def getHelp():
return """
TR---------
pattern Bölümü-Section
    set Bölümü-Section
        create - Verilen boyuta göre Cryptonic key oluşturur.
            PatternSize
        ctonfile - Key dosyasını yükler.
            Filename
        getfile - Girdiğiniz dosyanın binary içeriğine göre pattern oluşturur.
            Filename
        seton - Komut isteminde base64 kodu girerek manuel key yükleyin.
            Base64PatternKey
    get
        key - Dosya 5 KB'den büyük değilse base64 içeriğini yazdırır.
        size - Cryptonic key'in boyutunu yazdırır.
        md5 - Cryptonic key'in MD5 hashini yazdırır.
        info - Dosyanın özelliklerini yazdırır.
    save - Verdiğiniz isme .cton uzantısını ekleyerek patterni kaydeder.
        Filename
encode
    text - Konsolda girdiğiniz veriyi encode eder.
        string
    file - Girdiğiniz dosyayı encode eder. Dizin girebilirsiniz.
        Filename
    allpath - Girdiğiniz klasörün içindeki dosyaların hepsini şifreler.
        Folder
decode
    text - Konsolda girdiğiniz veriyi decode eder.
        string
    file - Girdiğiniz dosyayı decode eder. Dizin girebilirsiniz.
        Filename
    allpath - Girdiğiniz klasörün içindeki şifrelenmiş verilerin hepsini çözer.
        Folder
cry
compress
path - Girdiğiniz klasörün içindeki dosyalar ve klasörler doğrudan sizin belirlediğiniz dosyanın içine sıkıştırılır.
decompress
path - Girdiğiniz dosyanın içindeki dosyalar ve klasörler doğrudan sizin önceden belirlediğiniz klasör içine yerleştirilir.
EN---------
pattern
set - set patterns for encoding,decoding.(cache from import)
create - create a pattern of the desired length
Argument-Input - pattern length(Integer)
ctonfile
Argument-Input - cton Filename(String)
getfile - creating pattern with according any file
Argument-Input - Filename(String)
seton - setting pattern with entered key
Argument-Input - Pattern-base64(String)
get - Getting pattern information
    key - Prints the base64 key unless it is larger than 5 KB
Console-Output - Pattern-base64(String)
size - Pattern key size
Console-Output - Pattern-size(String)
md5 - Returns pattern md5
Console-Output - Pattern-MD5-Hexdigest(String)
info - Returns pattern md5 and size
Console-Output - Pattern-info(String)
save - Saves last loaded pattern as file
Argument-Input - Filename(String)
encode - Encoding with last loaded pattern
text - Encoding simple text
Argument-Input - Text(String)
file - Encoding file with entered file path
Argument-Input - Filename(String)
allpath - Encoding all files and folders with entered folder path
Argument-Input - Folder Path(String)
decode - Decoding with the last loaded pattern
    text - Decodes simple encrypted text (requires the pattern key used for encoding)
        Argument-Input - Encrypted text(String)
    file - Decodes the file at the entered path (requires the pattern key used for encoding)
        Argument-Input - Filename(String)
    allpath - Decodes all encrypted files under the entered folder path (requires the pattern key used for encoding)
        Argument-Input - Folder path(String)
cry - Compressing paths
compress - Compressing all paths and files with entered file path
Argument-Input - Folder path(String)
Console-Input - Filename for keeping files and folders(String)
decompress - Decompressing all paths and files with entered compress file name
Argument-Input - Filename for decompressing files and folders(String)
"""
def getFolderTree(path):
fileTree = dict()
for root, dirs, files in os.walk(path):
fileTree[root] = files
return fileTree
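`getFolderTree` maps each directory visited by `os.walk` to the plain files it contains; subdirectories show up as their own keys rather than being nested. A hypothetical usage sketch against a temporary directory:

```python
import os
import shutil
import tempfile

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "sub"))
open(os.path.join(root, "a.txt"), "w").close()
open(os.path.join(root, "sub", "b.txt"), "w").close()

# Same shape getFolderTree builds: {directory: [files in that directory]}
tree = {r: files for r, dirs, files in os.walk(root)}
assert tree[root] == ["a.txt"]
assert tree[os.path.join(root, "sub")] == ["b.txt"]

shutil.rmtree(root)
```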
def main():
consoleApplication()
if(__name__ == "__main__"):
main()
| 209.957672 | 95,391 | 0.783101 | 7,675 | 119,046 | 12.15114 | 0.241433 | 0.08235 | 0.06039 | 0.230581 | 0.056927 | 0.046719 | 0.040693 | 0.039921 | 0.03531 | 0.033187 | 0 | 0.134458 | 0.08107 | 119,046 | 566 | 95,392 | 210.328622 | 0.716925 | 0.001117 | 0 | 0.449814 | 0 | 0.005576 | 0.700451 | 0.564391 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0.003717 | 0.018587 | null | null | 0.092937 | 0 | 0 | 1 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
a4aa543b312725f6f816bb0eb7e529e4069903d7 | 605 | py | Python | tests/mock/s3.py | Wambosa/tetrauniversal-functions | 4d63ab7b45afbdb67f569e2bc513cb91feaa0e17 | [
"MIT"
] | 3 | 2020-01-30T21:25:35.000Z | 2020-02-26T21:05:05.000Z | tests/mock/s3.py | Wambosa/tetrauniversal-functions | 4d63ab7b45afbdb67f569e2bc513cb91feaa0e17 | [
"MIT"
] | null | null | null | tests/mock/s3.py | Wambosa/tetrauniversal-functions | 4d63ab7b45afbdb67f569e2bc513cb91feaa0e17 | [
"MIT"
] | null | null | null | from box import Box
class VoidS3:
def get_object(self, Bucket='', Key=''):
def read():
return b'1234567,http://web.uk,left,2019-01-01T00:01:000Z,87646675465\n8901234,https://web.com,right,2020-01-01T00:00:000Z,99999999999'
return {
'Body': Box({
'read': read
})
}
class DiffDelimiterS3:
def get_object(self, Bucket='', Key=''):
def read():
return b'1234567!http://web.uk!left!2019-01-01T00:01:000Z!87646675465\n8901234!https://web.com!right!2020-01-01T00:00:000Z!99999999999'
return {
'Body': Box({
'read': read
})
} | 21.607143 | 141 | 0.609917 | 82 | 605 | 4.47561 | 0.402439 | 0.076294 | 0.065395 | 0.087193 | 0.871935 | 0.871935 | 0.871935 | 0.871935 | 0.871935 | 0.871935 | 0 | 0.280922 | 0.21157 | 605 | 28 | 142 | 21.607143 | 0.48847 | 0 | 0 | 0.631579 | 0 | 0.105263 | 0.438944 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.210526 | false | 0 | 0.052632 | 0.105263 | 0.578947 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 11 |
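Code under test only needs `get_object(...)["Body"].read()` to return the raw CSV bytes, which is why thin objects like the mocks above suffice. A dict-based stand-in (no `Box` dependency; the class and payload here are illustrative) exercising the same contract:

```python
class TinyMockS3:
    def get_object(self, Bucket="", Key=""):
        # Mirror boto3's response shape: a dict with a readable "Body".
        class _Body:
            @staticmethod
            def read():
                return b"1234567,http://web.uk,left\n8901234,https://web.com,right"
        return {"Body": _Body()}

body = TinyMockS3().get_object(Bucket="bkt", Key="key")["Body"].read()
rows = [line.split(",") for line in body.decode().split("\n")]
assert rows[0][2] == "left" and rows[1][2] == "right"
```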
a4d338d42d94c49c2b517211d1d1347eacded247 | 30,918 | py | Python | tests/reg_tests/test_DVGeometryESP.py | jrram/pygeo | ed15c848703a90055d38130b6d05cef8080a9d68 | [
"Apache-2.0"
] | 41 | 2019-04-18T00:49:42.000Z | 2022-03-27T10:06:47.000Z | tests/reg_tests/test_DVGeometryESP.py | jrram/pygeo | ed15c848703a90055d38130b6d05cef8080a9d68 | [
"Apache-2.0"
] | 90 | 2019-05-01T19:08:26.000Z | 2022-03-28T15:27:12.000Z | tests/reg_tests/test_DVGeometryESP.py | jrram/pygeo | ed15c848703a90055d38130b6d05cef8080a9d68 | [
"Apache-2.0"
] | 35 | 2019-04-30T19:06:42.000Z | 2022-03-18T14:26:57.000Z | import unittest
import os
import numpy as np
from stl import mesh
from baseclasses import BaseRegTest
from baseclasses.utils import Error
from parameterized import parameterized_class
import time
try:
    from mpi4py import MPI
except ImportError:
    MPI = None

if MPI:
    try:
        import pyOCSM
        from pygeo import DVGeometryESP
    except ImportError:
        pyOCSM = None
test_params = [{"N_PROCS": 1, "name": "serial"}, {"N_PROCS": 4, "name": "parallel_4procs"}]
@unittest.skipUnless(MPI and pyOCSM, "MPI and pyOCSM are required.")
@parameterized_class(test_params)
class TestPyGeoESP_BasicCube(unittest.TestCase):
    # to be tested in serial and parallel automatically
    N_PROCS = 1

    def setUp(self):
        # Store the path where this current script lives
        # All paths in the script are relative to this path
        # This is needed to support testflo running directories and files as inputs
        self.input_path = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

    def setup_cubemodel(self):
        # load the box model and build the box model
        csmFile = os.path.join(self.input_path, "../input_files/esp/box.csm")
        DVGeo = DVGeometryESP(csmFile)
        self.assertIsNotNone(DVGeo)

        # add a point set on the surface
        vertex1 = np.array([-2.0, -2.0, -2.0])
        vertex2 = np.array([1.5, 1.5, 1.5])
        left = np.array([-2.0, -1.1, -1.1])
        right = np.array([1.5, -1.2, -0.1])
        front = np.array([0.25, 1.5, 0.3])
        back = np.array([1.2, -2.0, -0.3])
        top = np.array([0.0, 0.1, 1.5])
        bottom = np.array([-1.9, -1.1, -2.0])
        initpts = np.vstack([vertex1, vertex2, left, right, front, back, top, bottom, left, right])
        distglobal = DVGeo.addPointSet(initpts, "mypts", cache_projections=False)
        self.assertAlmostEqual(distglobal, 0.0, 8)

        # evaluate the points and check that they match
        DVGeo._updateESPModel()
        DVGeo._updateProjectedPts()
        self.assertTrue(DVGeo.pointSetUpToDate)
        self.assertAlmostEqual(np.linalg.norm(initpts - DVGeo.pointSets["mypts"].proj_pts), 0.0, 10)
        return DVGeo, initpts
    def setup_cubemodel_analytic_jac(self):
        jacpt0 = np.array(
            [[1.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]]  # x # y
        )  # z
        jacpt1 = np.array(
            [[1.0, 0.0, 0.0, 1.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 1.0, 0.0, 0.0, 1.0]]  # x # y
        )  # z
        jacpt2 = np.array(
            [
                [1.0, 0.0, 0.0, 0.0, 0.0, 0.0],  # x
                [0.0, 1.0, 0.0, 0.0, 0.9 / 3.5, 0.0],  # y
                [0.0, 0.0, 1.0, 0.0, 0.0, 0.9 / 3.5],
            ]
        )  # z
        jacpt3 = np.array(
            [
                [1.0, 0.0, 0.0, 1.0, 0.0, 0.0],  # x
                [0.0, 1.0, 0.0, 0.0, 0.8 / 3.5, 0.0],  # y
                [0.0, 0.0, 1.0, 0.0, 0.0, 1.9 / 3.5],
            ]
        )  # z
        jacpt4 = np.array(
            [
                [1.0, 0.0, 0.0, 2.25 / 3.50, 0.0, 0.0],  # x
                [0.0, 1.0, 0.0, 0.0, 1.0, 0.0],  # y
                [0.0, 0.0, 1.0, 0.0, 0.0, 2.30 / 3.50],
            ]
        )  # z
        jacpt5 = np.array(
            [
                [1.0, 0.0, 0.0, 3.20 / 3.50, 0.0, 0.0],  # x
                [0.0, 1.0, 0.0, 0.0, 0.0, 0.0],  # y
                [0.0, 0.0, 1.0, 0.0, 0.0, 1.70 / 3.50],
            ]
        )  # z
        jacpt6 = np.array(
            [
                [1.0, 0.0, 0.0, 2.0 / 3.5, 0.0, 0.0],  # x
                [0.0, 1.0, 0.0, 0.0, 2.1 / 3.5, 0.0],  # y
                [0.0, 0.0, 1.0, 0.0, 0.0, 1.0],
            ]
        )  # z
        jacpt7 = np.array(
            [
                [1.0, 0.0, 0.0, 0.1 / 3.5, 0.0, 0.0],  # x
                [0.0, 1.0, 0.0, 0.0, 0.9 / 3.5, 0.0],  # y
                [0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
            ]
        )  # z
        ordered_analytic_jac = np.concatenate(
            [jacpt0, jacpt1, jacpt2, jacpt3, jacpt4, jacpt5, jacpt6, jacpt7, jacpt2, jacpt3], axis=0
        ).reshape(10, 3, 6)
        return ordered_analytic_jac
    def test_load_a_model(self):
        # load the box model and build the box model
        csmFile = os.path.join(self.input_path, "../input_files/esp/box.csm")
        DVGeometryESP(csmFile)

    def test_save_cadfile(self):
        write_fullpath = os.path.join(self.input_path, "reg_tests/fullpath_" + str(self.N_PROCS) + ".step")
        DVGeo, initpts = self.setup_cubemodel()
        if DVGeo.comm.rank == 0:
            try:
                os.remove(write_fullpath)
            except OSError:
                pass
        DVGeo.writeCADFile(write_fullpath)
        DVGeo.comm.barrier()
        time.sleep(0.1)
        self.assertTrue(os.path.exists(write_fullpath))
        # check that bad file extension raises a Python error
        with self.assertRaises(IOError):
            DVGeo.writeCADFile("relpath.wrongext")

    def test_write_csmfile(self):
        DVGeo, initpts = self.setup_cubemodel()
        write_fullpath = os.path.join(self.input_path, "reg_tests/fullpath_" + str(self.N_PROCS) + ".csm")
        if DVGeo.comm.rank == 0:
            try:
                os.remove(write_fullpath)
            except OSError:
                pass
        DVGeo.writeCSMFile(write_fullpath)
        DVGeo.comm.barrier()
        time.sleep(0.1)
        self.assertTrue(os.path.exists(write_fullpath))
        # check that bad file extension raises a Python error
        with self.assertRaises(IOError):
            DVGeo.writeCSMFile("relpath.wrongext")

    def test_add_desvars(self):
        # load the box model and build the box model
        csmFile = os.path.join(self.input_path, "../input_files/esp/box.csm")
        DVGeo = DVGeometryESP(csmFile)
        self.assertIsNotNone(DVGeo)
        # add variables with a mix of optional arguments
        DVGeo.addVariable("cubex0", lower=np.array([-10.0]), upper=np.array([10.0]), scale=0.1, dh=0.0001)
        self.assertEqual(DVGeo.getNDV(), 1)
        DVGeo.addVariable("cubey0")
        self.assertEqual(DVGeo.getNDV(), 2)
        DVGeo.addVariable("cubez0", lower=np.array([-10.0]), upper=np.array([10.0]))
        self.assertEqual(DVGeo.getNDV(), 3)
        # try to add a variable that isn't in the CSM file
        with self.assertRaises(Error):
            DVGeo.addVariable("cubew0")

    def test_add_pointset(self):
        DVGeo, initpts = self.setup_cubemodel()

    def test_updated_points(self):
        DVGeo, initpts = self.setup_cubemodel()
        DVGeo.addVariable("cubey0")
        DVGeo.setDesignVars({"cubey0": np.array([4.2000])}, updateJacobian=False)
        npts = initpts.shape[0]
        self.assertAlmostEqual(np.sum(DVGeo.pointSets["mypts"].proj_pts[:, 1] - initpts[:, 1]) / npts, 6.2, 10)
        DVGeo.addVariable("cubedz")
        DVGeo.setDesignVars({"cubedz": np.array([9.5])}, updateJacobian=False)
        self.assertAlmostEqual(DVGeo.pointSets["mypts"].proj_pts[1, 2], 7.5)
        self.assertAlmostEqual(DVGeo.pointSets["mypts"].proj_pts[0, 2], -2.0)

    def test_finite_precision(self):
        DVGeo, initpts = self.setup_cubemodel()
        DVGeo.addVariable("cubey0")
        DVGeo.setDesignVars({"cubey0": np.array([4.2 + 1e-12])}, updateJacobian=False)
        self.assertAlmostEqual(DVGeo.pointSets["mypts"].proj_pts[0, 1] - 4.2, 1e-12, 15)
        DVGeo.addVariable("cubedz")
        DVGeo.setDesignVars({"cubedz": np.array([9.5 - 1e-12])}, updateJacobian=False)
        self.assertAlmostEqual(DVGeo.pointSets["mypts"].proj_pts[1, 2] - 7.5, -1e-12, 15)

    def test_serial_finite_difference(self):
        # this test checks the underlying jacobian itself, not the public API
        # TODO write tests for the public API
        DVGeo, initpts = self.setup_cubemodel()
        for designvarname in ["cubex0", "cubey0", "cubez0", "cubedx", "cubedy", "cubedz"]:
            DVGeo.addVariable(designvarname)

        # check the FD derivatives
        initpts_cache = initpts.copy()
        dvdict_cache = DVGeo.DVs.copy()
        self.assertFalse(DVGeo.updatedJac["mypts"])
        DVGeo._computeSurfJacobian(fd=True)
        self.assertTrue(DVGeo.updatedJac["mypts"])
        npts = initpts.shape[0]
        ndvs = DVGeo.getNDV()

        # check the jacobian results match analytic result
        testjac = DVGeo.pointSets["mypts"].jac.reshape(npts, 3, ndvs)
        analyticjac = self.setup_cubemodel_analytic_jac()
        for ipt in range(npts):
            self.assertAlmostEqual(np.sum(np.abs(testjac[ipt, :, :] - analyticjac[ipt, :, :])), 0)

        # check that the point set hasn't changed after running the FDs
        self.assertAlmostEqual(np.sum(np.abs(initpts_cache - DVGeo.pointSets["mypts"].proj_pts)), 0.0)
        # check that the DV dict hasn't changed
        for key in dvdict_cache:
            self.assertAlmostEqual(np.sum(np.abs(DVGeo.DVs[key].value - dvdict_cache[key].value)), 0.0)

    def test_jacobian_arbitrary_added_order(self):
        # this test checks the underlying jacobian itself, not the public API
        DVGeo, initpts = self.setup_cubemodel()
        # switch up the order of DVs added
        for designvarname in ["cubey0", "cubedx", "cubedy", "cubex0", "cubedz", "cubez0"]:
            DVGeo.addVariable(designvarname)

        # check the FD derivatives
        DVGeo._computeSurfJacobian(fd=True)
        npts = initpts.shape[0]
        ndvs = DVGeo.getNDV()

        # check the jacobian results match analytic result
        testjac = DVGeo.pointSets["mypts"].jac.reshape(npts, 3, ndvs)
        ordered_analyticjac = self.setup_cubemodel_analytic_jac()
        analyticjac = np.zeros((npts, 3, ndvs))
        # get original variable ordering
        orig_var_order = ["cubex0", "cubey0", "cubez0", "cubedx", "cubedy", "cubedz"]
        # reorder the analytic jacobian
        for idv, designvarname in enumerate(orig_var_order):
            dv_ind = DVGeo.DVs[designvarname].globalStartInd
            analyticjac[:, :, dv_ind] = ordered_analyticjac[:, :, idv]
            self.assertNotEqual(dv_ind, idv)
        for ipt in range(npts):
            self.assertAlmostEqual(np.sum(np.abs(testjac[ipt, :, :] - analyticjac[ipt, :, :])), 0)
@unittest.skipUnless(MPI and pyOCSM, "MPI and pyOCSM are required.")
class TestPyGeoESP_BasicCube_Distributed(unittest.TestCase):
    N_PROCS = 3

    def setUp(self):
        # Store the path where this current script lives
        # All paths in the script are relative to this path
        # This is needed to support testflo running directories and files as inputs
        self.input_path = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
        self.comm = MPI.COMM_WORLD
    def setup_cubemodel(self):
        # load the box model and build the box model
        csmFile = os.path.join(self.input_path, "../input_files/esp/box.csm")
        DVGeo = DVGeometryESP(csmFile)
        self.assertIsNotNone(DVGeo)

        # add a point set on the surface
        vertex1 = np.array([-2.0, -2.0, -2.0])
        vertex2 = np.array([1.5, 1.5, 1.5])
        left = np.array([-2.0, -1.1, -1.1])
        right = np.array([1.5, -1.2, -0.1])
        front = np.array([0.25, 1.5, 0.3])
        back = np.array([1.2, -2.0, -0.3])
        top = np.array([0.0, 0.1, 1.5])
        bottom = np.array([-1.9, -1.1, -2.0])

        # distribute the pointset
        if self.comm.rank == 0:
            initpts = np.vstack([vertex1, vertex2, left, right])
        elif self.comm.rank == 1:
            initpts = np.vstack([front, back, top])
        elif self.comm.rank == 2:
            initpts = np.vstack([bottom, left, right])
        else:
            raise ValueError("Too many procs")
        distglobal = DVGeo.addPointSet(initpts, "mypts", cache_projections=False)
        self.assertAlmostEqual(distglobal, 0.0, 8)

        # evaluate the points and check that they match
        DVGeo._updateESPModel()
        DVGeo._updateProjectedPts()
        self.assertTrue(DVGeo.pointSetUpToDate)
        self.assertAlmostEqual(np.linalg.norm(initpts - DVGeo.pointSets["mypts"].proj_pts), 0.0, 10)
        return DVGeo, initpts
    def setup_cubemodel_analytic_jac(self):
        jacpt0 = np.array(
            [[1.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]]  # x # y
        )  # z
        jacpt1 = np.array(
            [[1.0, 0.0, 0.0, 1.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 1.0, 0.0, 0.0, 1.0]]  # x # y
        )  # z
        jacpt2 = np.array(
            [
                [1.0, 0.0, 0.0, 0.0, 0.0, 0.0],  # x
                [0.0, 1.0, 0.0, 0.0, 0.9 / 3.5, 0.0],  # y
                [0.0, 0.0, 1.0, 0.0, 0.0, 0.9 / 3.5],
            ]
        )  # z
        jacpt3 = np.array(
            [
                [1.0, 0.0, 0.0, 1.0, 0.0, 0.0],  # x
                [0.0, 1.0, 0.0, 0.0, 0.8 / 3.5, 0.0],  # y
                [0.0, 0.0, 1.0, 0.0, 0.0, 1.9 / 3.5],
            ]
        )  # z
        jacpt4 = np.array(
            [
                [1.0, 0.0, 0.0, 2.25 / 3.50, 0.0, 0.0],  # x
                [0.0, 1.0, 0.0, 0.0, 1.0, 0.0],  # y
                [0.0, 0.0, 1.0, 0.0, 0.0, 2.30 / 3.50],
            ]
        )  # z
        jacpt5 = np.array(
            [
                [1.0, 0.0, 0.0, 3.20 / 3.50, 0.0, 0.0],  # x
                [0.0, 1.0, 0.0, 0.0, 0.0, 0.0],  # y
                [0.0, 0.0, 1.0, 0.0, 0.0, 1.70 / 3.50],
            ]
        )  # z
        jacpt6 = np.array(
            [
                [1.0, 0.0, 0.0, 2.0 / 3.5, 0.0, 0.0],  # x
                [0.0, 1.0, 0.0, 0.0, 2.1 / 3.5, 0.0],  # y
                [0.0, 0.0, 1.0, 0.0, 0.0, 1.0],
            ]
        )  # z
        jacpt7 = np.array(
            [
                [1.0, 0.0, 0.0, 0.1 / 3.5, 0.0, 0.0],  # x
                [0.0, 1.0, 0.0, 0.0, 0.9 / 3.5, 0.0],  # y
                [0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
            ]
        )  # z
        if self.comm.rank == 0:
            ordered_analytic_jac = np.concatenate([jacpt0, jacpt1, jacpt2, jacpt3], axis=0).reshape(4, 3, 6)
        elif self.comm.rank == 1:
            ordered_analytic_jac = np.concatenate([jacpt4, jacpt5, jacpt6], axis=0).reshape(3, 3, 6)
        elif self.comm.rank == 2:
            ordered_analytic_jac = np.concatenate([jacpt7, jacpt2, jacpt3], axis=0).reshape(3, 3, 6)
        return ordered_analytic_jac
    def test_load_a_model(self):
        # load the box model and build the box model
        csmFile = os.path.join(self.input_path, "../input_files/esp/box.csm")
        DVGeometryESP(csmFile)

    def test_add_desvars(self):
        # load the box model and build the box model
        csmFile = os.path.join(self.input_path, "../input_files/esp/box.csm")
        DVGeo = DVGeometryESP(csmFile)
        self.assertIsNotNone(DVGeo)
        # add variables with a mix of optional arguments
        DVGeo.addVariable("cubex0", lower=np.array([-10.0]), upper=np.array([10.0]), scale=0.1, dh=0.0001)
        self.assertEqual(DVGeo.getNDV(), 1)
        DVGeo.addVariable("cubey0")
        self.assertEqual(DVGeo.getNDV(), 2)
        DVGeo.addVariable("cubez0", lower=np.array([-10.0]), upper=np.array([10.0]))
        self.assertEqual(DVGeo.getNDV(), 3)
        # try to add a variable that isn't in the CSM file
        with self.assertRaises(Error):
            DVGeo.addVariable("cubew0")

    def test_add_pointset(self):
        DVGeo, initpts = self.setup_cubemodel()

    def test_updated_points(self):
        DVGeo, initpts = self.setup_cubemodel()
        DVGeo.addVariable("cubey0")
        DVGeo.setDesignVars({"cubey0": np.array([4.2000])}, updateJacobian=False)
        npts = initpts.shape[0]
        self.assertAlmostEqual(np.sum(DVGeo.pointSets["mypts"].proj_pts[:, 1] - initpts[:, 1]) / npts, 6.2, 10)
        DVGeo.addVariable("cubedz")
        DVGeo.setDesignVars({"cubedz": np.array([9.5])}, updateJacobian=False)
        if self.comm.rank == 0:
            self.assertAlmostEqual(DVGeo.pointSets["mypts"].proj_pts[1, 2], 7.5)
            self.assertAlmostEqual(DVGeo.pointSets["mypts"].proj_pts[0, 2], -2.0)

    def test_parallel_finite_difference(self):
        # this test checks the underlying jacobian itself, not the public API
        # TODO write tests for the public API
        DVGeo, initpts = self.setup_cubemodel()
        for designvarname in ["cubex0", "cubey0", "cubez0", "cubedx", "cubedy", "cubedz"]:
            DVGeo.addVariable(designvarname)

        # check the FD derivatives
        initpts_cache = initpts.copy()
        dvdict_cache = DVGeo.DVs.copy()
        self.assertFalse(DVGeo.updatedJac["mypts"])
        DVGeo._computeSurfJacobian(fd=True)
        self.assertTrue(DVGeo.updatedJac["mypts"])
        npts = initpts.shape[0]
        ndvs = DVGeo.getNDV()

        # check the jacobian results match analytic result
        testjac = DVGeo.pointSets["mypts"].jac.reshape(npts, 3, ndvs)
        analyticjac = self.setup_cubemodel_analytic_jac()
        for ipt in range(npts):
            self.assertAlmostEqual(np.sum(np.abs(testjac[ipt, :, :] - analyticjac[ipt, :, :])), 0)

        # check that the point set hasn't changed after running the FDs
        self.assertAlmostEqual(np.sum(np.abs(initpts_cache - DVGeo.pointSets["mypts"].proj_pts)), 0.0)
        # check that the DV dict hasn't changed
        for key in dvdict_cache:
            self.assertAlmostEqual(np.sum(np.abs(DVGeo.DVs[key].value - dvdict_cache[key].value)), 0.0)

    def test_jacobian_arbitrary_added_order(self):
        # this test checks the underlying jacobian itself, not the public API
        DVGeo, initpts = self.setup_cubemodel()
        # switch up the order of DVs added
        for designvarname in ["cubey0", "cubedx", "cubedy", "cubex0", "cubedz", "cubez0"]:
            DVGeo.addVariable(designvarname)

        # check the FD derivatives
        DVGeo._computeSurfJacobian(fd=True)
        npts = initpts.shape[0]
        ndvs = DVGeo.getNDV()

        # check the jacobian results match analytic result
        testjac = DVGeo.pointSets["mypts"].jac.reshape(npts, 3, ndvs)
        ordered_analyticjac = self.setup_cubemodel_analytic_jac()
        analyticjac = np.zeros((npts, 3, ndvs))
        # get original variable ordering
        orig_var_order = ["cubex0", "cubey0", "cubez0", "cubedx", "cubedy", "cubedz"]
        # reorder the analytic jacobian
        for idv, designvarname in enumerate(orig_var_order):
            dv_ind = DVGeo.DVs[designvarname].globalStartInd
            analyticjac[:, :, dv_ind] = ordered_analyticjac[:, :, idv]
            self.assertNotEqual(dv_ind, idv)
        for ipt in range(npts):
            self.assertAlmostEqual(np.sum(np.abs(testjac[ipt, :, :] - analyticjac[ipt, :, :])), 0)
@unittest.skipUnless(MPI and pyOCSM, "MPI and pyOCSM are required.")
class TestPyGeoESP_BasicCube_Distributed_OneProcBlank(unittest.TestCase):
    N_PROCS = 4

    def setUp(self):
        # Store the path where this current script lives
        # All paths in the script are relative to this path
        # This is needed to support testflo running directories and files as inputs
        self.input_path = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
        self.comm = MPI.COMM_WORLD
    def setup_cubemodel(self):
        # load the box model and build the box model
        csmFile = os.path.join(self.input_path, "../input_files/esp/box.csm")
        DVGeo = DVGeometryESP(csmFile)
        self.assertIsNotNone(DVGeo)

        # add a point set on the surface
        vertex1 = np.array([-2.0, -2.0, -2.0])
        vertex2 = np.array([1.5, 1.5, 1.5])
        left = np.array([-2.0, -1.1, -1.1])
        right = np.array([1.5, -1.2, -0.1])
        front = np.array([0.25, 1.5, 0.3])
        back = np.array([1.2, -2.0, -0.3])
        top = np.array([0.0, 0.1, 1.5])
        bottom = np.array([-1.9, -1.1, -2.0])

        # distribute the pointset
        if self.comm.rank == 0:
            initpts = np.vstack([vertex1, vertex2, left, right])
        elif self.comm.rank == 1:
            initpts = np.vstack([front, back, top])
        elif self.comm.rank == 2:
            initpts = np.array([]).reshape((0, 3))
        elif self.comm.rank == 3:
            initpts = np.vstack([bottom, left, right])
        else:
            raise ValueError("Too many procs")
        distglobal = DVGeo.addPointSet(initpts, "mypts", cache_projections=False)
        self.assertAlmostEqual(distglobal, 0.0, 8)

        # evaluate the points and check that they match
        DVGeo._updateESPModel()
        DVGeo._updateProjectedPts()
        self.assertTrue(DVGeo.pointSetUpToDate)
        self.assertAlmostEqual(np.linalg.norm(initpts - DVGeo.pointSets["mypts"].proj_pts), 0.0, 10)
        return DVGeo, initpts
    def setup_cubemodel_analytic_jac(self):
        jacpt0 = np.array(
            [[1.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]]  # x # y
        )  # z
        jacpt1 = np.array(
            [[1.0, 0.0, 0.0, 1.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 1.0, 0.0, 0.0, 1.0]]  # x # y
        )  # z
        jacpt2 = np.array(
            [
                [1.0, 0.0, 0.0, 0.0, 0.0, 0.0],  # x
                [0.0, 1.0, 0.0, 0.0, 0.9 / 3.5, 0.0],  # y
                [0.0, 0.0, 1.0, 0.0, 0.0, 0.9 / 3.5],
            ]
        )  # z
        jacpt3 = np.array(
            [
                [1.0, 0.0, 0.0, 1.0, 0.0, 0.0],  # x
                [0.0, 1.0, 0.0, 0.0, 0.8 / 3.5, 0.0],  # y
                [0.0, 0.0, 1.0, 0.0, 0.0, 1.9 / 3.5],
            ]
        )  # z
        jacpt4 = np.array(
            [
                [1.0, 0.0, 0.0, 2.25 / 3.50, 0.0, 0.0],  # x
                [0.0, 1.0, 0.0, 0.0, 1.0, 0.0],  # y
                [0.0, 0.0, 1.0, 0.0, 0.0, 2.30 / 3.50],
            ]
        )  # z
        jacpt5 = np.array(
            [
                [1.0, 0.0, 0.0, 3.20 / 3.50, 0.0, 0.0],  # x
                [0.0, 1.0, 0.0, 0.0, 0.0, 0.0],  # y
                [0.0, 0.0, 1.0, 0.0, 0.0, 1.70 / 3.50],
            ]
        )  # z
        jacpt6 = np.array(
            [
                [1.0, 0.0, 0.0, 2.0 / 3.5, 0.0, 0.0],  # x
                [0.0, 1.0, 0.0, 0.0, 2.1 / 3.5, 0.0],  # y
                [0.0, 0.0, 1.0, 0.0, 0.0, 1.0],
            ]
        )  # z
        jacpt7 = np.array(
            [
                [1.0, 0.0, 0.0, 0.1 / 3.5, 0.0, 0.0],  # x
                [0.0, 1.0, 0.0, 0.0, 0.9 / 3.5, 0.0],  # y
                [0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
            ]
        )  # z
        if self.comm.rank == 0:
            ordered_analytic_jac = np.concatenate([jacpt0, jacpt1, jacpt2, jacpt3], axis=0).reshape(4, 3, 6)
        elif self.comm.rank == 1:
            ordered_analytic_jac = np.concatenate([jacpt4, jacpt5, jacpt6], axis=0).reshape(3, 3, 6)
        elif self.comm.rank == 2:
            ordered_analytic_jac = np.array([]).reshape(0, 3, 6)
        elif self.comm.rank == 3:
            ordered_analytic_jac = np.concatenate([jacpt7, jacpt2, jacpt3], axis=0).reshape(3, 3, 6)
        return ordered_analytic_jac
    def test_add_pointset(self):
        DVGeo, initpts = self.setup_cubemodel()

    def test_updated_points(self):
        DVGeo, initpts = self.setup_cubemodel()
        DVGeo.addVariable("cubey0")
        DVGeo.setDesignVars({"cubey0": np.array([4.2000])}, updateJacobian=False)
        npts = initpts.shape[0]
        if self.comm.rank != 2:
            self.assertAlmostEqual(np.sum(DVGeo.pointSets["mypts"].proj_pts[:, 1] - initpts[:, 1]) / npts, 6.2, 10)
        DVGeo.addVariable("cubedz")
        DVGeo.setDesignVars({"cubedz": np.array([9.5])}, updateJacobian=False)
        if self.comm.rank == 0:
            self.assertAlmostEqual(DVGeo.pointSets["mypts"].proj_pts[1, 2], 7.5)
            self.assertAlmostEqual(DVGeo.pointSets["mypts"].proj_pts[0, 2], -2.0)
        elif self.comm.rank == 1:
            self.assertAlmostEqual(DVGeo.pointSets["mypts"].proj_pts[0, 2], -2.0 + (0.3 + 2.0) * (9.5 / 3.5))
            self.assertAlmostEqual(DVGeo.pointSets["mypts"].proj_pts[1, 2], -2.0 + (-0.3 + 2.0) * (9.5 / 3.5))
        elif self.comm.rank == 3:
            self.assertAlmostEqual(DVGeo.pointSets["mypts"].proj_pts[0, 2], -2.0)
            self.assertAlmostEqual(DVGeo.pointSets["mypts"].proj_pts[1, 2], -2.0 + (-1.1 + 2.0) * (9.5 / 3.5))

    def test_parallel_finite_difference(self):
        # this test checks the underlying jacobian itself, not the public API
        # TODO write tests for the public API
        DVGeo, initpts = self.setup_cubemodel()
        for designvarname in ["cubex0", "cubey0", "cubez0", "cubedx", "cubedy", "cubedz"]:
            DVGeo.addVariable(designvarname)

        # check the FD derivatives
        initpts_cache = initpts.copy()
        dvdict_cache = DVGeo.DVs.copy()
        self.assertFalse(DVGeo.updatedJac["mypts"])
        DVGeo._computeSurfJacobian(fd=True)
        self.assertTrue(DVGeo.updatedJac["mypts"])
        npts = initpts.shape[0]
        ndvs = DVGeo.getNDV()

        # check the jacobian results match analytic result
        testjac = DVGeo.pointSets["mypts"].jac.reshape(npts, 3, ndvs)
        analyticjac = self.setup_cubemodel_analytic_jac()
        if self.comm.rank != 2:
            for ipt in range(npts):
                self.assertAlmostEqual(np.sum(np.abs(testjac[ipt, :, :] - analyticjac[ipt, :, :])), 0)

        # check that the point set hasn't changed after running the FDs
        self.assertAlmostEqual(np.sum(np.abs(initpts_cache - DVGeo.pointSets["mypts"].proj_pts)), 0.0)
        # check that the DV dict hasn't changed
        for key in dvdict_cache:
            self.assertAlmostEqual(np.sum(np.abs(DVGeo.DVs[key].value - dvdict_cache[key].value)), 0.0)
@unittest.skipUnless(MPI and pyOCSM, "MPI and pyOCSM are required.")
@parameterized_class(test_params)
class TestPyGeoESP_NACAFoil(unittest.TestCase):
    # serial and parallel handled automatically
    N_PROCS = 1

    def setUp(self):
        # Store the path where this current script lives
        # All paths in the script are relative to this path
        # This is needed to support testflo running directories and files as inputs
        self.input_path = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
        self.comm = MPI.COMM_WORLD
    def setup_airfoilmodel(self, kulfan=False, projtol=0.01):
        # load the csm file and pointset file
        if kulfan:
            csmFile = os.path.join(self.input_path, "../input_files/esp/naca0012_kulfan.csm")
            max_dist_tol = 2
        else:
            csmFile = os.path.join(self.input_path, "../input_files/esp/naca0012.csm")
            max_dist_tol = 3
        stlFile = os.path.join(self.input_path, "../input_files/esp/naca0012_esp.stl")
        DVGeo = DVGeometryESP(csmFile, projTol=projtol)
        self.assertIsNotNone(DVGeo)
        testobj = mesh.Mesh.from_file(stlFile)
        # test mesh dim 0 is triangle index
        # dim 1 is each vertex of the triangle
        # dim 2 is x, y, z dimension
        p0 = testobj.vectors[:, 0, :]
        p1 = testobj.vectors[:, 1, :]
        p2 = testobj.vectors[:, 2, :]
        distglobal1 = DVGeo.addPointSet(p0, "airfoil_p0")
        distglobal2 = DVGeo.addPointSet(p1, "airfoil_p1")
        distglobal3 = DVGeo.addPointSet(p2, "airfoil_p2")
        distglobal = np.max(np.array([distglobal1, distglobal2, distglobal3]))
        self.assertAlmostEqual(distglobal, 0.0, max_dist_tol)

        # evaluate the points and check that they match
        DVGeo._updateESPModel()
        DVGeo._updateProjectedPts()
        self.assertTrue(DVGeo.pointSetUpToDate)
        updated_dist_max = np.max(np.sqrt(np.sum((p0 - DVGeo.pointSets["airfoil_p0"].proj_pts) ** 2, axis=1)))
        self.assertAlmostEqual(updated_dist_max, 0.0, max_dist_tol)
        updated_dist_max = np.max(np.sqrt(np.sum((p1 - DVGeo.pointSets["airfoil_p1"].proj_pts) ** 2, axis=1)))
        self.assertAlmostEqual(updated_dist_max, 0.0, max_dist_tol)
        updated_dist_max = np.max(np.sqrt(np.sum((p2 - DVGeo.pointSets["airfoil_p2"].proj_pts) ** 2, axis=1)))
        self.assertAlmostEqual(updated_dist_max, 0.0, max_dist_tol)
        return DVGeo, [p0, p1, p2]
    def test_add_pointset(self):
        DVGeo, initpts = self.setup_airfoilmodel()

    def test_add_pointset_tighter_tolerance(self):
        with self.assertRaises(ValueError):
            DVGeo, initpts = self.setup_airfoilmodel(projtol=1e-5)

    def test_add_desvars(self):
        DVGeo, initpts = self.setup_airfoilmodel()
        DVGeo.addVariable("nacacode", lower=np.array([8]), upper=np.array([15]), scale=1, dh=0.001)
        self.assertEqual(DVGeo.getNDV(), 1)

    def test_point_mismatch(self):
        # load the wrong pointset on purpose
        csmFile = os.path.join(self.input_path, "../input_files/esp/naca0010.csm")
        stlFile = os.path.join(self.input_path, "../input_files/esp/naca0012_esp.stl")
        DVGeo = DVGeometryESP(csmFile)
        self.assertIsNotNone(DVGeo)
        testobj = mesh.Mesh.from_file(stlFile)
        # test mesh dim 0 is triangle index
        # dim 1 is each vertex of the triangle
        # dim 2 is x, y, z dimension
        p0 = testobj.vectors[:, 0, :]
        with self.assertRaises(ValueError):
            distglobal1 = DVGeo.addPointSet(p0, "airfoil_p0")
            self.assertGreater(distglobal1, 0.01)

    def test_parallel_finite_difference(self, train=False):
        np.random.seed(1)
        DVGeo, initpts = self.setup_airfoilmodel(kulfan=True)
        DVGeo.addVariable("cst_u", lower=np.zeros((13,)), upper=np.ones((13,)), scale=1, dh=0.0001)
        DVGeo.addVariable("cst_l", lower=-np.ones((13,)), upper=np.zeros((13,)), scale=1, dh=0.0001)
        refFile = os.path.join(self.input_path, "reg_tests/ref/test_DVGeometryESP_01.ref")
        pointset_names = ["airfoil_p0", "airfoil_p1", "airfoil_p2"]
        for pointset_name in pointset_names:
            self.assertFalse(DVGeo.updatedJac[pointset_name])
        DVGeo._computeSurfJacobian(fd=True)
        for pointset_name in pointset_names:
            self.assertTrue(DVGeo.updatedJac[pointset_name])
        with BaseRegTest(refFile, train=train) as handler:
            handler.root_print("ESP NACA 0012 derivative test")
            npts = initpts[0].shape[0]
            dIdpt = np.random.rand(1, npts, 3)
            for pointset_name in pointset_names:
                dIdx = DVGeo.totalSensitivity(dIdpt, pointset_name)
                handler.root_add_dict("dIdx_" + pointset_name, dIdx, rtol=1e-7, atol=1e-7)

    # TODO test pointset caching?
    # TODO test total derivative API on an actual distributed pointset?


if __name__ == "__main__":
    unittest.main()
| 42.586777 | 118 | 0.571124 | 4,426 | 30,918 | 3.910303 | 0.078626 | 0.068643 | 0.073843 | 0.071416 | 0.889178 | 0.872306 | 0.85451 | 0.850581 | 0.846536 | 0.83914 | 0 | 0.07368 | 0.281842 | 30,918 | 725 | 119 | 42.645517 | 0.705774 | 0.114949 | 0 | 0.744604 | 0 | 0 | 0.050858 | 0.014368 | 0 | 0 | 0 | 0.001379 | 0.131295 | 1 | 0.061151 | false | 0.003597 | 0.023381 | 0 | 0.111511 | 0.001799 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
a4dd05067c5e06aa16fbddac6dca5d3cb39fa85b | 158 | py | Python | rfvision/core/visualizer3d/__init__.py | mvig-robotflow/rfvision | cc662f213dfe5a3e8864a6b5685a668a4436e397 | [
"Apache-2.0"
] | 6 | 2021-09-25T03:53:06.000Z | 2022-02-19T03:25:11.000Z | rfvision/core/visualizer3d/__init__.py | mvig-robotflow/rfvision | cc662f213dfe5a3e8864a6b5685a668a4436e397 | [
"Apache-2.0"
] | 1 | 2021-07-21T13:14:54.000Z | 2021-07-21T13:14:54.000Z | rfvision/core/visualizer3d/__init__.py | mvig-robotflow/rfvision | cc662f213dfe5a3e8864a6b5685a668a4436e397 | [
"Apache-2.0"
] | 2 | 2021-07-16T03:25:04.000Z | 2021-11-22T06:04:01.000Z | from .show_result import show_result, show_multi_modality_result, show_seg_result
__all__ = ['show_result', 'show_multi_modality_result', 'show_seg_result']
| 39.5 | 81 | 0.835443 | 23 | 158 | 5 | 0.347826 | 0.347826 | 0.243478 | 0.330435 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0 | 0 | 0 | 0.075949 | 158 | 3 | 82 | 52.666667 | 0.787671 | 0 | 0 | 0 | 0 | 0 | 0.329114 | 0.164557 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 10 |
a4e03099090ba2580d2b433e7bd11deaaffef236 | 167 | py | Python | pyclesperanto_prototype/_tier0/_set_wait_for_kernel_finish.py | haesleinhuepf/pyclesperanto_prototype | 65bc3035d3b2b61a2722c93b95bae310bfbd190e | [
"BSD-3-Clause"
] | 1 | 2021-01-15T15:32:19.000Z | 2021-01-15T15:32:19.000Z | pyclesperanto_prototype/_tier0/_set_wait_for_kernel_finish.py | haesleinhuepf/pyclesperanto_prototype | 65bc3035d3b2b61a2722c93b95bae310bfbd190e | [
"BSD-3-Clause"
] | null | null | null | pyclesperanto_prototype/_tier0/_set_wait_for_kernel_finish.py | haesleinhuepf/pyclesperanto_prototype | 65bc3035d3b2b61a2722c93b95bae310bfbd190e | [
"BSD-3-Clause"
] | null | null | null | def set_wait_for_kernel_finish(wait_for_kernel_finish : bool = None):
from ._pycl import OCLProgram
OCLProgram._wait_for_kernel_finish = wait_for_kernel_finish | 55.666667 | 69 | 0.832335 | 25 | 167 | 4.96 | 0.48 | 0.225806 | 0.419355 | 0.612903 | 0.612903 | 0.612903 | 0.612903 | 0.612903 | 0 | 0 | 0 | 0 | 0.11976 | 167 | 3 | 70 | 55.666667 | 0.843537 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
35105e6cc65130ace58750043b7e6ddf91c30ad3 | 15,515 | py | Python | tests/snapshots/snap_test_holidata/test_holidata_produces_holidays_for_locale_and_year[es_ES-2016] 1.py | gour/holidata | 89c7323f9c5345a3ecbf5cd5a835b0e08cfebc13 | [
"MIT"
] | 32 | 2019-04-12T08:01:34.000Z | 2022-02-28T04:41:50.000Z | tests/snapshots/snap_test_holidata/test_holidata_produces_holidays_for_locale_and_year[es_ES-2016] 1.py | gour/holidata | 89c7323f9c5345a3ecbf5cd5a835b0e08cfebc13 | [
"MIT"
] | 74 | 2019-07-09T16:35:20.000Z | 2022-03-09T16:41:34.000Z | tests/snapshots/snap_test_holidata/test_holidata_produces_holidays_for_locale_and_year[es_ES-2016] 1.py | gour/holidata | 89c7323f9c5345a3ecbf5cd5a835b0e08cfebc13 | [
"MIT"
] | 20 | 2019-01-28T07:41:02.000Z | 2022-02-16T02:38:57.000Z | [
{
'date': '2016-01-01',
'description': 'Año Nuevo',
'locale': 'es-ES',
'notes': '',
'region': '',
'type': 'NF'
},
{
'date': '2016-01-06',
'description': 'Epifanía del Señor',
'locale': 'es-ES',
'notes': '',
'region': '',
'type': 'NRF'
},
{
'date': '2016-02-29',
'description': 'Día de Andalucía',
'locale': 'es-ES',
'notes': '',
'region': 'AN',
'type': 'F'
},
{
'date': '2016-03-01',
'description': 'Día de las Illes Balears',
'locale': 'es-ES',
'notes': '',
'region': 'IB',
'type': 'F'
},
{
'date': '2016-03-19',
'description': 'San José',
'locale': 'es-ES',
'notes': '',
'region': 'MC',
'type': 'RF'
},
{
'date': '2016-03-19',
'description': 'San José',
'locale': 'es-ES',
'notes': '',
'region': 'ML',
'type': 'RF'
},
{
'date': '2016-03-19',
'description': 'San José',
'locale': 'es-ES',
'notes': '',
'region': 'VC',
'type': 'RF'
},
{
'date': '2016-03-24',
'description': 'Jueves Santo',
'locale': 'es-ES',
'notes': '',
'region': 'AN',
'type': 'RV'
},
{
'date': '2016-03-24',
'description': 'Jueves Santo',
'locale': 'es-ES',
'notes': '',
'region': 'AR',
'type': 'RV'
},
{
'date': '2016-03-24',
'description': 'Jueves Santo',
'locale': 'es-ES',
'notes': '',
'region': 'AS',
'type': 'RV'
},
{
'date': '2016-03-24',
'description': 'Jueves Santo',
'locale': 'es-ES',
'notes': '',
'region': 'CB',
'type': 'RV'
},
{
'date': '2016-03-24',
'description': 'Jueves Santo',
'locale': 'es-ES',
'notes': '',
'region': 'CE',
'type': 'RV'
},
{
'date': '2016-03-24',
'description': 'Jueves Santo',
'locale': 'es-ES',
'notes': '',
'region': 'CL',
'type': 'RV'
},
{
'date': '2016-03-24',
'description': 'Jueves Santo',
'locale': 'es-ES',
'notes': '',
'region': 'CM',
'type': 'RV'
},
{
'date': '2016-03-24',
'description': 'Jueves Santo',
'locale': 'es-ES',
'notes': '',
'region': 'CN',
'type': 'RV'
},
{
'date': '2016-03-24',
'description': 'Jueves Santo',
'locale': 'es-ES',
'notes': '',
'region': 'EX',
'type': 'RV'
},
{
'date': '2016-03-24',
'description': 'Jueves Santo',
'locale': 'es-ES',
'notes': '',
'region': 'GA',
'type': 'RV'
},
{
'date': '2016-03-24',
'description': 'Jueves Santo',
'locale': 'es-ES',
'notes': '',
'region': 'IB',
'type': 'RV'
},
{
'date': '2016-03-24',
'description': 'Jueves Santo',
'locale': 'es-ES',
'notes': '',
'region': 'MC',
'type': 'RV'
},
{
'date': '2016-03-24',
'description': 'Jueves Santo',
'locale': 'es-ES',
'notes': '',
'region': 'MD',
'type': 'RV'
},
{
'date': '2016-03-24',
'description': 'Jueves Santo',
'locale': 'es-ES',
'notes': '',
'region': 'ML',
'type': 'RV'
},
{
'date': '2016-03-24',
'description': 'Jueves Santo',
'locale': 'es-ES',
'notes': '',
'region': 'NC',
'type': 'RV'
},
{
'date': '2016-03-24',
'description': 'Jueves Santo',
'locale': 'es-ES',
'notes': '',
'region': 'PV',
'type': 'RV'
},
{
'date': '2016-03-24',
'description': 'Jueves Santo',
'locale': 'es-ES',
'notes': '',
'region': 'RI',
'type': 'RV'
},
{
'date': '2016-03-24',
'description': 'Jueves Santo',
'locale': 'es-ES',
'notes': '',
'region': 'VC',
'type': 'RV'
},
{
'date': '2016-03-25',
'description': 'Viernes Santo',
'locale': 'es-ES',
'notes': '',
'region': '',
'type': 'NRV'
},
{
'date': '2016-03-27',
'description': 'Pascua',
'locale': 'es-ES',
'notes': '',
'region': '',
'type': 'NRV'
},
{
'date': '2016-03-28',
'description': 'Lunes de Pascua',
'locale': 'es-ES',
'notes': '',
'region': 'CT',
'type': 'RV'
},
{
'date': '2016-03-28',
'description': 'Lunes de Pascua',
'locale': 'es-ES',
'notes': '',
'region': 'IB',
'type': 'RV'
},
{
'date': '2016-03-28',
'description': 'Lunes de Pascua',
'locale': 'es-ES',
'notes': '',
'region': 'NC',
'type': 'RV'
},
{
'date': '2016-03-28',
'description': 'Lunes de Pascua',
'locale': 'es-ES',
'notes': '',
'region': 'PV',
'type': 'RV'
},
{
'date': '2016-03-28',
'description': 'Lunes de Pascua',
'locale': 'es-ES',
'notes': '',
'region': 'RI',
'type': 'RV'
},
{
'date': '2016-03-28',
'description': 'Lunes de Pascua',
'locale': 'es-ES',
'notes': '',
'region': 'VC',
'type': 'RV'
},
{
'date': '2016-04-23',
'description': 'Fiesta de Castilla y León',
'locale': 'es-ES',
'notes': '',
'region': 'CL',
'type': 'F'
},
{
'date': '2016-04-23',
'description': 'San Jorge / Día de Aragón',
'locale': 'es-ES',
'notes': '',
'region': 'AR',
'type': 'RF'
},
{
'date': '2016-05-01',
'description': 'Fiesta del Trabajo',
'locale': 'es-ES',
'notes': '',
'region': '',
'type': 'NF'
},
{
'date': '2016-05-02',
'description': 'Lunes siguiente a la Fiesta del Trabajo',
'locale': 'es-ES',
'notes': '',
'region': 'AN',
'type': 'F'
},
{
'date': '2016-05-02',
'description': 'Lunes siguiente a la Fiesta del Trabajo',
'locale': 'es-ES',
'notes': '',
'region': 'AR',
'type': 'F'
},
{
'date': '2016-05-02',
'description': 'Lunes siguiente a la Fiesta del Trabajo',
'locale': 'es-ES',
'notes': '',
'region': 'AS',
'type': 'F'
},
{
'date': '2016-05-02',
'description': 'Lunes siguiente a la Fiesta del Trabajo',
'locale': 'es-ES',
'notes': '',
'region': 'CL',
'type': 'F'
},
{
'date': '2016-05-02',
'description': 'Lunes siguiente a la Fiesta del Trabajo',
'locale': 'es-ES',
'notes': '',
'region': 'CN',
'type': 'F'
},
{
'date': '2016-05-02',
'description': 'Lunes siguiente a la Fiesta del Trabajo',
'locale': 'es-ES',
'notes': '',
'region': 'EX',
'type': 'F'
},
{
'date': '2016-05-02',
'description': 'Lunes siguiente a la Fiesta del Trabajo',
'locale': 'es-ES',
'notes': '',
'region': 'MD',
'type': 'F'
},
{
'date': '2016-05-16',
'description': 'Lunes de Pascua Granada',
'locale': 'es-ES',
'notes': '',
'region': 'CT',
'type': 'F'
},
{
'date': '2016-05-17',
'description': 'Día de las Letras Gallegas',
'locale': 'es-ES',
'notes': '',
'region': 'GA',
'type': 'F'
},
{
'date': '2016-05-26',
'description': 'Corpus Christi',
'locale': 'es-ES',
'notes': '',
'region': 'CM',
'type': 'RV'
},
{
'date': '2016-05-30',
'description': 'Día de Canarias',
'locale': 'es-ES',
'notes': '',
'region': 'CN',
'type': 'F'
},
{
'date': '2016-05-31',
'description': 'Día de Castilla-La Mancha',
'locale': 'es-ES',
'notes': '',
'region': 'CM',
'type': 'F'
},
{
'date': '2016-06-09',
'description': 'Día de la Región de Murcia',
'locale': 'es-ES',
'notes': '',
'region': 'MC',
'type': 'F'
},
{
'date': '2016-06-09',
'description': 'Día de La Rioja',
'locale': 'es-ES',
'notes': '',
'region': 'RI',
'type': 'F'
},
{
'date': '2016-06-24',
'description': 'San Juan',
'locale': 'es-ES',
'notes': '',
'region': 'CT',
'type': 'RF'
},
{
'date': '2016-06-24',
'description': 'San Juan',
'locale': 'es-ES',
'notes': '',
'region': 'GA',
'type': 'RF'
},
{
'date': '2016-07-25',
'description': 'Santiago Apóstol',
'locale': 'es-ES',
'notes': '',
'region': 'MD',
'type': 'RF'
},
{
'date': '2016-07-25',
'description': 'Santiago Apóstol',
'locale': 'es-ES',
'notes': '',
'region': 'NC',
'type': 'RF'
},
{
'date': '2016-07-25',
'description': 'Santiago Apóstol',
'locale': 'es-ES',
'notes': '',
'region': 'PV',
'type': 'RF'
},
{
'date': '2016-07-25',
'description': 'Santiago Apóstol',
'locale': 'es-ES',
'notes': '',
'region': 'RI',
'type': 'RF'
},
{
'date': '2016-07-25',
'description': 'Santiago Apóstol / Día Nacional de Galicia',
'locale': 'es-ES',
'notes': '',
'region': 'GA',
'type': 'RF'
},
{
'date': '2016-07-28',
'description': 'Día de las Instituciones de Cantabria',
'locale': 'es-ES',
'notes': '',
'region': 'CB',
'type': 'F'
},
{
'date': '2016-08-15',
'description': 'Asunción de la Virgen',
'locale': 'es-ES',
'notes': '',
'region': '',
'type': 'NRF'
},
{
'date': '2016-09-02',
'description': 'Día de Ceuta',
'locale': 'es-ES',
'notes': '',
'region': 'CE',
'type': 'F'
},
{
'date': '2016-09-08',
'description': 'Día de Asturias',
'locale': 'es-ES',
'notes': '',
'region': 'AS',
'type': 'F'
},
{
'date': '2016-09-08',
'description': 'Día de Extremadura',
'locale': 'es-ES',
'notes': '',
'region': 'EX',
'type': 'F'
},
{
'date': '2016-09-12',
'description': 'Fiesta del Sacrificio (Aid El Kebir)',
'locale': 'es-ES',
'notes': '',
'region': 'ML',
'type': 'RV'
},
{
'date': '2016-09-12',
'description': 'Fiesta del Sacrificio (Eidul Adha)',
'locale': 'es-ES',
'notes': '',
'region': 'CE',
'type': 'RV'
},
{
'date': '2016-09-15',
'description': 'La Bien Aparecida',
'locale': 'es-ES',
'notes': '',
'region': 'CB',
'type': 'RF'
},
{
'date': '2016-10-07',
'description': '80º aniversario del primer Gobierno Vasco',
'locale': 'es-ES',
'notes': '',
'region': 'PV',
'type': 'F'
},
{
'date': '2016-10-12',
'description': 'Fiesta Nacional de España',
'locale': 'es-ES',
'notes': '',
'region': '',
'type': 'NF'
},
{
'date': '2016-11-01',
'description': 'Todos los Santos',
'locale': 'es-ES',
'notes': '',
'region': '',
'type': 'NRF'
},
{
'date': '2016-12-06',
'description': 'Día de la Constitución Española',
'locale': 'es-ES',
'notes': '',
'region': '',
'type': 'NF'
},
{
'date': '2016-12-08',
'description': 'Inmaculada Concepción',
'locale': 'es-ES',
'notes': '',
'region': '',
'type': 'NRF'
},
{
'date': '2016-12-25',
'description': 'Natividad del Señor',
'locale': 'es-ES',
'notes': '',
'region': '',
'type': 'NRF'
},
{
'date': '2016-12-26',
'description': 'San Esteban',
'locale': 'es-ES',
'notes': '',
'region': 'AN',
'type': 'RF'
},
{
'date': '2016-12-26',
'description': 'San Esteban',
'locale': 'es-ES',
'notes': '',
'region': 'AR',
'type': 'RF'
},
{
'date': '2016-12-26',
'description': 'San Esteban',
'locale': 'es-ES',
'notes': '',
'region': 'AS',
'type': 'RF'
},
{
'date': '2016-12-26',
'description': 'San Esteban',
'locale': 'es-ES',
'notes': '',
'region': 'CB',
'type': 'RF'
},
{
'date': '2016-12-26',
'description': 'San Esteban',
'locale': 'es-ES',
'notes': '',
'region': 'CE',
'type': 'RF'
},
{
'date': '2016-12-26',
'description': 'San Esteban',
'locale': 'es-ES',
'notes': '',
'region': 'CL',
'type': 'RF'
},
{
'date': '2016-12-26',
'description': 'San Esteban',
'locale': 'es-ES',
'notes': '',
'region': 'CM',
'type': 'RF'
},
{
'date': '2016-12-26',
'description': 'San Esteban',
'locale': 'es-ES',
'notes': '',
'region': 'CT',
'type': 'RF'
},
{
'date': '2016-12-26',
'description': 'San Esteban',
'locale': 'es-ES',
'notes': '',
'region': 'EX',
'type': 'RF'
},
{
'date': '2016-12-26',
'description': 'San Esteban',
'locale': 'es-ES',
'notes': '',
'region': 'IB',
'type': 'RF'
},
{
'date': '2016-12-26',
'description': 'San Esteban',
'locale': 'es-ES',
'notes': '',
'region': 'MC',
'type': 'RF'
},
{
'date': '2016-12-26',
'description': 'San Esteban',
'locale': 'es-ES',
'notes': '',
'region': 'MD',
'type': 'RF'
},
{
'date': '2016-12-26',
'description': 'San Esteban',
'locale': 'es-ES',
'notes': '',
'region': 'ML',
'type': 'RF'
},
{
'date': '2016-12-26',
'description': 'San Esteban',
'locale': 'es-ES',
'notes': '',
'region': 'NC',
'type': 'RF'
},
{
'date': '2016-12-26',
'description': 'San Esteban',
'locale': 'es-ES',
'notes': '',
'region': 'VC',
'type': 'RF'
}
]
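Each record above follows a fixed schema (`date`, `description`, `locale`, `notes`, `region`, `type`), where an empty `region` marks a nationwide holiday (types prefixed `N`, e.g. `NF`, `NRF`, `NRV`) and a two-letter code marks a regional one. A minimal sketch of filtering by region — the list name `holidays` and the helper are illustrative, not part of the original module:

```python
# A small sample in the same schema as the records above.
holidays = [
    {'date': '2016-03-24', 'description': 'Jueves Santo',
     'locale': 'es-ES', 'notes': '', 'region': 'CB', 'type': 'RV'},
    {'date': '2016-05-01', 'description': 'Fiesta del Trabajo',
     'locale': 'es-ES', 'notes': '', 'region': '', 'type': 'NF'},
    {'date': '2016-07-25', 'description': 'Santiago Apóstol',
     'locale': 'es-ES', 'notes': '', 'region': 'GA', 'type': 'RF'},
]

def holidays_for_region(records, region):
    """Entries with an empty region apply nationwide; others must match."""
    return [r for r in records if r['region'] in ('', region)]

print(len(holidays_for_region(holidays, 'GA')))  # -> 2
```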
# yawhois/parser/jobswhois_verisign_grs_com.py (huyphan/pyyawhois, MIT license)
from .base_verisign import VerisignParserBase
class JobswhoisVerisignGrsComParser(VerisignParserBase):
    pass
# checkio/Codeship/Mono Captcha/test_mono_captcha.py (KenMercusLai/checkio, MIT license)
import unittest
from mono_captcha import checkio
class Tests(unittest.TestCase):
TESTS = {
"Basics": [
{
"input": [
[0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0],
[0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
[0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0],
[0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0],
[0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0],
],
"answer": 394,
"explanation": "",
},
{
"input": [
[0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0],
[0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
[0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0],
[0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0],
[0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0],
],
"answer": 394,
"explanation": " 3,1 3,5 0,10 ",
},
],
"Clear": [
{
"input": [
[0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0],
[0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0],
[0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0],
[0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0],
[0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0],
],
"answer": 123,
"explanation": "",
},
{
"input": [
[0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0],
[0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0],
[0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0],
[0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0],
[0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0],
],
"answer": 456,
"explanation": "",
},
{
"input": [
[0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0],
[0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
[0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0],
[0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0],
[0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0],
],
"answer": 789,
"explanation": "",
},
{
"input": [
[0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0],
[0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0],
[0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0],
[0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0],
[0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0],
],
"answer": 1034,
"explanation": "",
},
{
"input": [
[0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0],
[0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0],
[0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0],
[0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0],
[0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0],
],
"answer": 52678,
"explanation": "",
},
{
"input": [
[0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0],
[0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0],
[0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0],
[0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0],
[0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0],
],
"answer": 911,
"explanation": "",
},
{
"input": [
[0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0],
[0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0],
[0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0],
[0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0],
[0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0],
],
"answer": 777,
"explanation": "",
},
{
"input": [
[0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0],
[0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0],
[0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0],
[0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0],
[0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0],
],
"answer": 21312,
"explanation": "",
},
{
"input": [
[0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0],
[0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
[0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0],
[0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
[0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0],
],
"answer": 80808,
"explanation": "",
},
],
"Noise": [
{
"input": [
[0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0],
[0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0],
[0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0],
[0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0],
[0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0],
],
"answer": 123,
"explanation": " 1,3 1,5 2,11 ",
},
{
"input": [
[0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0],
[0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0],
[0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0],
[0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0],
[0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0],
],
"answer": 456,
"explanation": " 4,2 3,5 3,9 ",
},
{
"input": [
[0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0],
[0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0],
[0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0],
[0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0],
[0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0],
],
"answer": 789,
"explanation": " 2,2 2,6 1,10 ",
},
{
"input": [
[0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0],
[0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0],
[0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0],
[0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0],
[0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0],
],
"answer": 1034,
"explanation": " 1,1 4,7 1,10 2,14 ",
},
{
"input": [
[0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0],
[0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0],
[0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0],
[0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
[0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0],
],
"answer": 52678,
"explanation": " 2,1 2,5 1,9 3,15 2,17 ",
},
{
"input": [
[0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0],
[0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0],
[0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0],
[0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
],
"answer": 911,
"explanation": " 4,1 4,6 0,10 ",
},
{
"input": [
[0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0],
[0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0],
[0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0],
[0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0],
[0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0],
],
"answer": 777,
"explanation": " 1,2 1,5 2,9 ",
},
{
"input": [
[0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0],
[0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0],
[0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0],
[0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0],
[0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0],
],
"answer": 21312,
"explanation": " 3,3 3,5 0,11 1,14 4,17 ",
},
{
"input": [
[0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0],
[0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
[0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
[0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
[0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0],
],
"answer": 80808,
"explanation": " 1,2 4,5 3,10 0,15 2,18 ",
},
],
}
def test_Basics(self):
for i in self.TESTS['Basics']:
assert checkio(i['input']) == i['answer']
def test_Clear(self):
for i in self.TESTS['Clear']:
assert checkio(i['input']) == i['answer']
def test_Noise(self):
for i in self.TESTS['Noise']:
assert checkio(i['input']) == i['answer']
if __name__ == "__main__": # pragma: no cover
unittest.main()
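The `explanation` strings in the Noise and Basics cases list the `row,col` coordinates of pixels flipped relative to a clean glyph grid (flipping `3,1 3,5 0,10` in the second Basics grid reproduces the first). A small sketch of parsing and undoing that noise — `parse_noise` and `strip_noise` are illustrative helpers, not part of `mono_captcha`:

```python
def parse_noise(explanation):
    """Parse an explanation string like ' 3,1 3,5 0,10 ' into (row, col) tuples."""
    return [tuple(map(int, pair.split(','))) for pair in explanation.split()]

def strip_noise(grid, explanation):
    """Return a copy of the pixel grid with each listed noise pixel flipped."""
    clean = [row[:] for row in grid]
    for row, col in parse_noise(explanation):
        clean[row][col] ^= 1
    return clean
```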
# nidaqmx/_task_modules/triggering/start_trigger.py (stafak/nidaqmx-python, MIT license)
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import ctypes
import numpy
from nidaqmx._lib import (
lib_importer, wrapped_ndpointer, ctypes_byte_str, c_bool32)
from nidaqmx.system.physical_channel import PhysicalChannel
from nidaqmx.errors import (
check_for_error, is_string_buffer_too_small, is_array_buffer_too_small)
from nidaqmx.constants import (
Coupling, DigitalPatternCondition, DigitalWidthUnits, Edge, Slope,
TriggerType, WindowTriggerCondition1)
class StartTrigger(object):
"""
Represents the start trigger configurations for a DAQmx task.
"""
def __init__(self, task_handle):
self._handle = task_handle
@property
def anlg_edge_coupling(self):
"""
:class:`nidaqmx.constants.Coupling`: Specifies the coupling for
the source signal of the trigger if the source is a terminal
rather than a virtual channel.
"""
val = ctypes.c_int()
cfunc = lib_importer.windll.DAQmxGetAnlgEdgeStartTrigCoupling
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.POINTER(ctypes.c_int)]
error_code = cfunc(
self._handle, ctypes.byref(val))
check_for_error(error_code)
return Coupling(val.value)
@anlg_edge_coupling.setter
def anlg_edge_coupling(self, val):
val = val.value
cfunc = lib_importer.windll.DAQmxSetAnlgEdgeStartTrigCoupling
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_int]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@anlg_edge_coupling.deleter
def anlg_edge_coupling(self):
cfunc = lib_importer.windll.DAQmxResetAnlgEdgeStartTrigCoupling
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
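Every accessor above repeats the same shape: check `cfunc.argtypes`, take a lock, re-check, set the signature once, then call. A standalone sketch of that double-checked initialization — `LazyCFunc` is illustrative, and libc's `labs` stands in for a DAQmx entry point:

```python
import ctypes
import threading

class LazyCFunc:
    """Set a ctypes function's argtypes once, under a lock, before calling.

    Mirrors the pattern used by every accessor in this module: the
    unlocked check keeps the hot path cheap, and the second check inside
    the lock guards against a racing thread setting argtypes first.
    """

    def __init__(self, cfunc, argtypes, restype=None):
        self._cfunc = cfunc
        self._argtypes = argtypes
        self._lock = threading.Lock()
        if restype is not None:
            self._cfunc.restype = restype

    def __call__(self, *args):
        if self._cfunc.argtypes is None:
            with self._lock:
                if self._cfunc.argtypes is None:  # re-check under the lock
                    self._cfunc.argtypes = self._argtypes
        return self._cfunc(*args)

# labs from libc stands in for a DAQmx call here (POSIX only).
libc = ctypes.CDLL(None)
labs = LazyCFunc(libc.labs, [ctypes.c_long], restype=ctypes.c_long)
print(labs(-42))  # -> 42
```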
@property
def anlg_edge_dig_fltr_enable(self):
"""
bool: Specifies whether to apply a digital filter to the digital
output of the analog triggering circuitry (the Analog
Comparison Event). When enabled, the analog signal must stay
above or below the trigger level for the minimum pulse width
before being recognized. Use filtering for noisy trigger
signals that transition in and out of the hysteresis window
rapidly.
"""
val = c_bool32()
cfunc = lib_importer.windll.DAQmxGetAnlgEdgeStartTrigDigFltrEnable
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.POINTER(c_bool32)]
error_code = cfunc(
self._handle, ctypes.byref(val))
check_for_error(error_code)
return val.value
@anlg_edge_dig_fltr_enable.setter
def anlg_edge_dig_fltr_enable(self, val):
cfunc = lib_importer.windll.DAQmxSetAnlgEdgeStartTrigDigFltrEnable
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, c_bool32]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@anlg_edge_dig_fltr_enable.deleter
def anlg_edge_dig_fltr_enable(self):
cfunc = lib_importer.windll.DAQmxResetAnlgEdgeStartTrigDigFltrEnable
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def anlg_edge_dig_fltr_min_pulse_width(self):
"""
float: Specifies in seconds the minimum pulse width the filter
recognizes.
"""
val = ctypes.c_double()
cfunc = (lib_importer.windll.
DAQmxGetAnlgEdgeStartTrigDigFltrMinPulseWidth)
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle,
ctypes.POINTER(ctypes.c_double)]
error_code = cfunc(
self._handle, ctypes.byref(val))
check_for_error(error_code)
return val.value
@anlg_edge_dig_fltr_min_pulse_width.setter
def anlg_edge_dig_fltr_min_pulse_width(self, val):
cfunc = (lib_importer.windll.
DAQmxSetAnlgEdgeStartTrigDigFltrMinPulseWidth)
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_double]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@anlg_edge_dig_fltr_min_pulse_width.deleter
def anlg_edge_dig_fltr_min_pulse_width(self):
cfunc = (lib_importer.windll.
DAQmxResetAnlgEdgeStartTrigDigFltrMinPulseWidth)
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def anlg_edge_dig_fltr_timebase_rate(self):
"""
float: Specifies in hertz the rate of the digital filter
timebase. NI-DAQmx uses this value to compute settings for
the filter.
"""
val = ctypes.c_double()
cfunc = (lib_importer.windll.
DAQmxGetAnlgEdgeStartTrigDigFltrTimebaseRate)
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle,
ctypes.POINTER(ctypes.c_double)]
error_code = cfunc(
self._handle, ctypes.byref(val))
check_for_error(error_code)
return val.value
@anlg_edge_dig_fltr_timebase_rate.setter
def anlg_edge_dig_fltr_timebase_rate(self, val):
cfunc = (lib_importer.windll.
DAQmxSetAnlgEdgeStartTrigDigFltrTimebaseRate)
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_double]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@anlg_edge_dig_fltr_timebase_rate.deleter
def anlg_edge_dig_fltr_timebase_rate(self):
cfunc = (lib_importer.windll.
DAQmxResetAnlgEdgeStartTrigDigFltrTimebaseRate)
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def anlg_edge_dig_fltr_timebase_src(self):
"""
str: Specifies the terminal of the signal to use as the timebase
of the digital filter.
"""
cfunc = (lib_importer.windll.
DAQmxGetAnlgEdgeStartTrigDigFltrTimebaseSrc)
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_char_p,
ctypes.c_uint]
temp_size = 0
while True:
val = ctypes.create_string_buffer(temp_size)
size_or_code = cfunc(
self._handle, val, temp_size)
if is_string_buffer_too_small(size_or_code):
# Buffer size must have changed between calls; check again.
temp_size = 0
elif size_or_code > 0 and temp_size == 0:
# Buffer size obtained, use to retrieve data.
temp_size = size_or_code
else:
break
check_for_error(size_or_code)
return val.value.decode('ascii')
@anlg_edge_dig_fltr_timebase_src.setter
def anlg_edge_dig_fltr_timebase_src(self, val):
cfunc = (lib_importer.windll.
DAQmxSetAnlgEdgeStartTrigDigFltrTimebaseSrc)
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes_byte_str]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@anlg_edge_dig_fltr_timebase_src.deleter
def anlg_edge_dig_fltr_timebase_src(self):
cfunc = (lib_importer.windll.
DAQmxResetAnlgEdgeStartTrigDigFltrTimebaseSrc)
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def anlg_edge_dig_sync_enable(self):
"""
bool: Specifies whether to synchronize recognition of
transitions in the signal to the internal timebase of the
device.
"""
val = c_bool32()
cfunc = lib_importer.windll.DAQmxGetAnlgEdgeStartTrigDigSyncEnable
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.POINTER(c_bool32)]
error_code = cfunc(
self._handle, ctypes.byref(val))
check_for_error(error_code)
return val.value
@anlg_edge_dig_sync_enable.setter
def anlg_edge_dig_sync_enable(self, val):
cfunc = lib_importer.windll.DAQmxSetAnlgEdgeStartTrigDigSyncEnable
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, c_bool32]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@anlg_edge_dig_sync_enable.deleter
def anlg_edge_dig_sync_enable(self):
cfunc = lib_importer.windll.DAQmxResetAnlgEdgeStartTrigDigSyncEnable
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def anlg_edge_hyst(self):
"""
float: Specifies a hysteresis level in the units of the
measurement or generation. If **anlg_edge_slope** is
**Slope1.RISING**, the trigger does not deassert until the
source signal passes below **anlg_edge_lvl** minus the
hysteresis. If **anlg_edge_slope** is **Slope1.FALLING**,
the trigger does not deassert until the source signal passes
above **anlg_edge_lvl** plus the hysteresis. Hysteresis is
always enabled. Set this property to a non-zero value to use
hysteresis.
"""
val = ctypes.c_double()
cfunc = lib_importer.windll.DAQmxGetAnlgEdgeStartTrigHyst
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle,
ctypes.POINTER(ctypes.c_double)]
error_code = cfunc(
self._handle, ctypes.byref(val))
check_for_error(error_code)
return val.value
@anlg_edge_hyst.setter
def anlg_edge_hyst(self, val):
cfunc = lib_importer.windll.DAQmxSetAnlgEdgeStartTrigHyst
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_double]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@anlg_edge_hyst.deleter
def anlg_edge_hyst(self):
cfunc = lib_importer.windll.DAQmxResetAnlgEdgeStartTrigHyst
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def anlg_edge_lvl(self):
"""
float: Specifies at what threshold in the units of the
measurement or generation to start acquiring or generating
samples. Use **anlg_edge_slope** to specify on which slope
to trigger on this threshold.
"""
val = ctypes.c_double()
cfunc = lib_importer.windll.DAQmxGetAnlgEdgeStartTrigLvl
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle,
ctypes.POINTER(ctypes.c_double)]
error_code = cfunc(
self._handle, ctypes.byref(val))
check_for_error(error_code)
return val.value
@anlg_edge_lvl.setter
def anlg_edge_lvl(self, val):
cfunc = lib_importer.windll.DAQmxSetAnlgEdgeStartTrigLvl
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_double]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@anlg_edge_lvl.deleter
def anlg_edge_lvl(self):
cfunc = lib_importer.windll.DAQmxResetAnlgEdgeStartTrigLvl
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def anlg_edge_slope(self):
"""
:class:`nidaqmx.constants.Slope`: Specifies on which slope of
the trigger signal to start acquiring or generating samples.
"""
val = ctypes.c_int()
cfunc = lib_importer.windll.DAQmxGetAnlgEdgeStartTrigSlope
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.POINTER(ctypes.c_int)]
error_code = cfunc(
self._handle, ctypes.byref(val))
check_for_error(error_code)
return Slope(val.value)
@anlg_edge_slope.setter
def anlg_edge_slope(self, val):
val = val.value
cfunc = lib_importer.windll.DAQmxSetAnlgEdgeStartTrigSlope
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_int]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@anlg_edge_slope.deleter
def anlg_edge_slope(self):
cfunc = lib_importer.windll.DAQmxResetAnlgEdgeStartTrigSlope
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def anlg_edge_src(self):
"""
str: Specifies the name of a virtual channel or terminal where
there is an analog signal to use as the source of the Start
Trigger.
"""
cfunc = lib_importer.windll.DAQmxGetAnlgEdgeStartTrigSrc
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_char_p,
ctypes.c_uint]
temp_size = 0
while True:
val = ctypes.create_string_buffer(temp_size)
size_or_code = cfunc(
self._handle, val, temp_size)
if is_string_buffer_too_small(size_or_code):
# Buffer size must have changed between calls; check again.
temp_size = 0
elif size_or_code > 0 and temp_size == 0:
# Buffer size obtained, use to retrieve data.
temp_size = size_or_code
else:
break
check_for_error(size_or_code)
return val.value.decode('ascii')
@anlg_edge_src.setter
def anlg_edge_src(self, val):
cfunc = lib_importer.windll.DAQmxSetAnlgEdgeStartTrigSrc
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes_byte_str]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@anlg_edge_src.deleter
def anlg_edge_src(self):
cfunc = lib_importer.windll.DAQmxResetAnlgEdgeStartTrigSrc
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def anlg_win_btm(self):
"""
float: Specifies the lower limit of the window. Specify this
value in the units of the measurement or generation.
"""
val = ctypes.c_double()
cfunc = lib_importer.windll.DAQmxGetAnlgWinStartTrigBtm
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle,
ctypes.POINTER(ctypes.c_double)]
error_code = cfunc(
self._handle, ctypes.byref(val))
check_for_error(error_code)
return val.value
@anlg_win_btm.setter
def anlg_win_btm(self, val):
cfunc = lib_importer.windll.DAQmxSetAnlgWinStartTrigBtm
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_double]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@anlg_win_btm.deleter
def anlg_win_btm(self):
cfunc = lib_importer.windll.DAQmxResetAnlgWinStartTrigBtm
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def anlg_win_coupling(self):
"""
:class:`nidaqmx.constants.Coupling`: Specifies the coupling for
the source signal of the trigger if the source is a terminal
rather than a virtual channel.
"""
val = ctypes.c_int()
cfunc = lib_importer.windll.DAQmxGetAnlgWinStartTrigCoupling
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.POINTER(ctypes.c_int)]
error_code = cfunc(
self._handle, ctypes.byref(val))
check_for_error(error_code)
return Coupling(val.value)
@anlg_win_coupling.setter
def anlg_win_coupling(self, val):
val = val.value
cfunc = lib_importer.windll.DAQmxSetAnlgWinStartTrigCoupling
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_int]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@anlg_win_coupling.deleter
def anlg_win_coupling(self):
cfunc = lib_importer.windll.DAQmxResetAnlgWinStartTrigCoupling
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def anlg_win_dig_fltr_enable(self):
"""
bool: Specifies whether to apply a digital filter to the digital
output of the analog triggering circuitry (the Analog
Comparison Event). When enabled, the analog signal must stay
within the trigger window for the minimum pulse width before
being recognized. Use filtering for noisy trigger signals
that transition in and out of the window rapidly.
"""
val = c_bool32()
cfunc = lib_importer.windll.DAQmxGetAnlgWinStartTrigDigFltrEnable
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.POINTER(c_bool32)]
error_code = cfunc(
self._handle, ctypes.byref(val))
check_for_error(error_code)
return val.value
@anlg_win_dig_fltr_enable.setter
def anlg_win_dig_fltr_enable(self, val):
cfunc = lib_importer.windll.DAQmxSetAnlgWinStartTrigDigFltrEnable
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, c_bool32]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@anlg_win_dig_fltr_enable.deleter
def anlg_win_dig_fltr_enable(self):
cfunc = lib_importer.windll.DAQmxResetAnlgWinStartTrigDigFltrEnable
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def anlg_win_dig_fltr_min_pulse_width(self):
"""
float: Specifies in seconds the minimum pulse width the filter
recognizes.
"""
val = ctypes.c_double()
cfunc = (lib_importer.windll.
DAQmxGetAnlgWinStartTrigDigFltrMinPulseWidth)
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle,
ctypes.POINTER(ctypes.c_double)]
error_code = cfunc(
self._handle, ctypes.byref(val))
check_for_error(error_code)
return val.value
@anlg_win_dig_fltr_min_pulse_width.setter
def anlg_win_dig_fltr_min_pulse_width(self, val):
cfunc = (lib_importer.windll.
DAQmxSetAnlgWinStartTrigDigFltrMinPulseWidth)
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_double]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@anlg_win_dig_fltr_min_pulse_width.deleter
def anlg_win_dig_fltr_min_pulse_width(self):
cfunc = (lib_importer.windll.
DAQmxResetAnlgWinStartTrigDigFltrMinPulseWidth)
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def anlg_win_dig_fltr_timebase_rate(self):
"""
float: Specifies in hertz the rate of the digital filter
timebase. NI-DAQmx uses this value to compute settings for
the filter.
"""
val = ctypes.c_double()
cfunc = (lib_importer.windll.
DAQmxGetAnlgWinStartTrigDigFltrTimebaseRate)
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle,
ctypes.POINTER(ctypes.c_double)]
error_code = cfunc(
self._handle, ctypes.byref(val))
check_for_error(error_code)
return val.value
@anlg_win_dig_fltr_timebase_rate.setter
def anlg_win_dig_fltr_timebase_rate(self, val):
cfunc = (lib_importer.windll.
DAQmxSetAnlgWinStartTrigDigFltrTimebaseRate)
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_double]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@anlg_win_dig_fltr_timebase_rate.deleter
def anlg_win_dig_fltr_timebase_rate(self):
cfunc = (lib_importer.windll.
DAQmxResetAnlgWinStartTrigDigFltrTimebaseRate)
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def anlg_win_dig_fltr_timebase_src(self):
"""
str: Specifies the terminal of the signal to use as the timebase
of the digital filter.
"""
cfunc = (lib_importer.windll.
DAQmxGetAnlgWinStartTrigDigFltrTimebaseSrc)
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_char_p,
ctypes.c_uint]
temp_size = 0
while True:
val = ctypes.create_string_buffer(temp_size)
size_or_code = cfunc(
self._handle, val, temp_size)
if is_string_buffer_too_small(size_or_code):
# Buffer size must have changed between calls; check again.
temp_size = 0
elif size_or_code > 0 and temp_size == 0:
# Buffer size obtained; use it to retrieve the data.
temp_size = size_or_code
else:
break
check_for_error(size_or_code)
return val.value.decode('ascii')
@anlg_win_dig_fltr_timebase_src.setter
def anlg_win_dig_fltr_timebase_src(self, val):
cfunc = (lib_importer.windll.
DAQmxSetAnlgWinStartTrigDigFltrTimebaseSrc)
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes_byte_str]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@anlg_win_dig_fltr_timebase_src.deleter
def anlg_win_dig_fltr_timebase_src(self):
cfunc = (lib_importer.windll.
DAQmxResetAnlgWinStartTrigDigFltrTimebaseSrc)
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def anlg_win_dig_sync_enable(self):
"""
bool: Specifies whether to synchronize recognition of
transitions in the signal to the internal timebase of the
device.
"""
val = c_bool32()
cfunc = lib_importer.windll.DAQmxGetAnlgWinStartTrigDigSyncEnable
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.POINTER(c_bool32)]
error_code = cfunc(
self._handle, ctypes.byref(val))
check_for_error(error_code)
return val.value
@anlg_win_dig_sync_enable.setter
def anlg_win_dig_sync_enable(self, val):
cfunc = lib_importer.windll.DAQmxSetAnlgWinStartTrigDigSyncEnable
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, c_bool32]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@anlg_win_dig_sync_enable.deleter
def anlg_win_dig_sync_enable(self):
cfunc = lib_importer.windll.DAQmxResetAnlgWinStartTrigDigSyncEnable
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def anlg_win_src(self):
"""
str: Specifies the name of a virtual channel or terminal where
there is an analog signal to use as the source of the Start
Trigger.
"""
cfunc = lib_importer.windll.DAQmxGetAnlgWinStartTrigSrc
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_char_p,
ctypes.c_uint]
temp_size = 0
while True:
val = ctypes.create_string_buffer(temp_size)
size_or_code = cfunc(
self._handle, val, temp_size)
if is_string_buffer_too_small(size_or_code):
# Buffer size must have changed between calls; check again.
temp_size = 0
elif size_or_code > 0 and temp_size == 0:
# Buffer size obtained; use it to retrieve the data.
temp_size = size_or_code
else:
break
check_for_error(size_or_code)
return val.value.decode('ascii')
@anlg_win_src.setter
def anlg_win_src(self, val):
cfunc = lib_importer.windll.DAQmxSetAnlgWinStartTrigSrc
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes_byte_str]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@anlg_win_src.deleter
def anlg_win_src(self):
cfunc = lib_importer.windll.DAQmxResetAnlgWinStartTrigSrc
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def anlg_win_top(self):
"""
float: Specifies the upper limit of the window. Specify this
value in the units of the measurement or generation.
"""
val = ctypes.c_double()
cfunc = lib_importer.windll.DAQmxGetAnlgWinStartTrigTop
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle,
ctypes.POINTER(ctypes.c_double)]
error_code = cfunc(
self._handle, ctypes.byref(val))
check_for_error(error_code)
return val.value
@anlg_win_top.setter
def anlg_win_top(self, val):
cfunc = lib_importer.windll.DAQmxSetAnlgWinStartTrigTop
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_double]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@anlg_win_top.deleter
def anlg_win_top(self):
cfunc = lib_importer.windll.DAQmxResetAnlgWinStartTrigTop
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def anlg_win_trig_when(self):
"""
:class:`nidaqmx.constants.WindowTriggerCondition1`: Specifies
whether the task starts acquiring or generating samples when
the signal enters or leaves the window you specify with
**anlg_win_btm** and **anlg_win_top**.
"""
val = ctypes.c_int()
cfunc = lib_importer.windll.DAQmxGetAnlgWinStartTrigWhen
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.POINTER(ctypes.c_int)]
error_code = cfunc(
self._handle, ctypes.byref(val))
check_for_error(error_code)
return WindowTriggerCondition1(val.value)
@anlg_win_trig_when.setter
def anlg_win_trig_when(self, val):
val = val.value
cfunc = lib_importer.windll.DAQmxSetAnlgWinStartTrigWhen
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_int]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@anlg_win_trig_when.deleter
def anlg_win_trig_when(self):
cfunc = lib_importer.windll.DAQmxResetAnlgWinStartTrigWhen
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def delay(self):
"""
float: Specifies an amount of time to wait after the Start
Trigger is received before acquiring or generating the first
sample. This value is in the units you specify with
**delay_units**.
"""
val = ctypes.c_double()
cfunc = lib_importer.windll.DAQmxGetStartTrigDelay
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle,
ctypes.POINTER(ctypes.c_double)]
error_code = cfunc(
self._handle, ctypes.byref(val))
check_for_error(error_code)
return val.value
@delay.setter
def delay(self, val):
cfunc = lib_importer.windll.DAQmxSetStartTrigDelay
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_double]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@delay.deleter
def delay(self):
cfunc = lib_importer.windll.DAQmxResetStartTrigDelay
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def delay_units(self):
"""
:class:`nidaqmx.constants.DigitalWidthUnits`: Specifies the
units of **delay**.
"""
val = ctypes.c_int()
cfunc = lib_importer.windll.DAQmxGetStartTrigDelayUnits
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.POINTER(ctypes.c_int)]
error_code = cfunc(
self._handle, ctypes.byref(val))
check_for_error(error_code)
return DigitalWidthUnits(val.value)
@delay_units.setter
def delay_units(self, val):
val = val.value
cfunc = lib_importer.windll.DAQmxSetStartTrigDelayUnits
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_int]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@delay_units.deleter
def delay_units(self):
cfunc = lib_importer.windll.DAQmxResetStartTrigDelayUnits
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def dig_edge_dig_fltr_enable(self):
"""
bool: Specifies whether to apply a digital filter to the trigger
signal.
"""
val = c_bool32()
cfunc = lib_importer.windll.DAQmxGetDigEdgeStartTrigDigFltrEnable
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.POINTER(c_bool32)]
error_code = cfunc(
self._handle, ctypes.byref(val))
check_for_error(error_code)
return val.value
@dig_edge_dig_fltr_enable.setter
def dig_edge_dig_fltr_enable(self, val):
cfunc = lib_importer.windll.DAQmxSetDigEdgeStartTrigDigFltrEnable
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, c_bool32]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@dig_edge_dig_fltr_enable.deleter
def dig_edge_dig_fltr_enable(self):
cfunc = lib_importer.windll.DAQmxResetDigEdgeStartTrigDigFltrEnable
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def dig_edge_dig_fltr_min_pulse_width(self):
"""
float: Specifies in seconds the minimum pulse width the filter
recognizes.
"""
val = ctypes.c_double()
cfunc = (lib_importer.windll.
DAQmxGetDigEdgeStartTrigDigFltrMinPulseWidth)
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle,
ctypes.POINTER(ctypes.c_double)]
error_code = cfunc(
self._handle, ctypes.byref(val))
check_for_error(error_code)
return val.value
@dig_edge_dig_fltr_min_pulse_width.setter
def dig_edge_dig_fltr_min_pulse_width(self, val):
cfunc = (lib_importer.windll.
DAQmxSetDigEdgeStartTrigDigFltrMinPulseWidth)
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_double]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@dig_edge_dig_fltr_min_pulse_width.deleter
def dig_edge_dig_fltr_min_pulse_width(self):
cfunc = (lib_importer.windll.
DAQmxResetDigEdgeStartTrigDigFltrMinPulseWidth)
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def dig_edge_dig_fltr_timebase_rate(self):
"""
float: Specifies in hertz the rate of the pulse width filter
timebase. NI-DAQmx uses this value to compute settings for
the filter.
"""
val = ctypes.c_double()
cfunc = (lib_importer.windll.
DAQmxGetDigEdgeStartTrigDigFltrTimebaseRate)
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle,
ctypes.POINTER(ctypes.c_double)]
error_code = cfunc(
self._handle, ctypes.byref(val))
check_for_error(error_code)
return val.value
@dig_edge_dig_fltr_timebase_rate.setter
def dig_edge_dig_fltr_timebase_rate(self, val):
cfunc = (lib_importer.windll.
DAQmxSetDigEdgeStartTrigDigFltrTimebaseRate)
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_double]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@dig_edge_dig_fltr_timebase_rate.deleter
def dig_edge_dig_fltr_timebase_rate(self):
cfunc = (lib_importer.windll.
DAQmxResetDigEdgeStartTrigDigFltrTimebaseRate)
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def dig_edge_dig_fltr_timebase_src(self):
"""
str: Specifies the input terminal of the signal to use as the
timebase of the pulse width filter.
"""
cfunc = (lib_importer.windll.
DAQmxGetDigEdgeStartTrigDigFltrTimebaseSrc)
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_char_p,
ctypes.c_uint]
temp_size = 0
while True:
val = ctypes.create_string_buffer(temp_size)
size_or_code = cfunc(
self._handle, val, temp_size)
if is_string_buffer_too_small(size_or_code):
# Buffer size must have changed between calls; check again.
temp_size = 0
elif size_or_code > 0 and temp_size == 0:
# Buffer size obtained; use it to retrieve the data.
temp_size = size_or_code
else:
break
check_for_error(size_or_code)
return val.value.decode('ascii')
@dig_edge_dig_fltr_timebase_src.setter
def dig_edge_dig_fltr_timebase_src(self, val):
cfunc = (lib_importer.windll.
DAQmxSetDigEdgeStartTrigDigFltrTimebaseSrc)
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes_byte_str]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@dig_edge_dig_fltr_timebase_src.deleter
def dig_edge_dig_fltr_timebase_src(self):
cfunc = (lib_importer.windll.
DAQmxResetDigEdgeStartTrigDigFltrTimebaseSrc)
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def dig_edge_dig_sync_enable(self):
"""
bool: Specifies whether to synchronize recognition of
transitions in the signal to the internal timebase of the
device. If you set this property to True, the device does
not recognize and act upon the trigger until the next pulse
of the internal timebase.
"""
val = c_bool32()
cfunc = lib_importer.windll.DAQmxGetDigEdgeStartTrigDigSyncEnable
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.POINTER(c_bool32)]
error_code = cfunc(
self._handle, ctypes.byref(val))
check_for_error(error_code)
return val.value
@dig_edge_dig_sync_enable.setter
def dig_edge_dig_sync_enable(self, val):
cfunc = lib_importer.windll.DAQmxSetDigEdgeStartTrigDigSyncEnable
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, c_bool32]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@dig_edge_dig_sync_enable.deleter
def dig_edge_dig_sync_enable(self):
cfunc = lib_importer.windll.DAQmxResetDigEdgeStartTrigDigSyncEnable
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def dig_edge_edge(self):
"""
:class:`nidaqmx.constants.Edge`: Specifies on which edge of a
digital pulse to start acquiring or generating samples.
"""
val = ctypes.c_int()
cfunc = lib_importer.windll.DAQmxGetDigEdgeStartTrigEdge
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.POINTER(ctypes.c_int)]
error_code = cfunc(
self._handle, ctypes.byref(val))
check_for_error(error_code)
return Edge(val.value)
@dig_edge_edge.setter
def dig_edge_edge(self, val):
val = val.value
cfunc = lib_importer.windll.DAQmxSetDigEdgeStartTrigEdge
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_int]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@dig_edge_edge.deleter
def dig_edge_edge(self):
cfunc = lib_importer.windll.DAQmxResetDigEdgeStartTrigEdge
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def dig_edge_src(self):
"""
str: Specifies the name of a terminal where there is a digital
signal to use as the source of the Start Trigger.
"""
cfunc = lib_importer.windll.DAQmxGetDigEdgeStartTrigSrc
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_char_p,
ctypes.c_uint]
temp_size = 0
while True:
val = ctypes.create_string_buffer(temp_size)
size_or_code = cfunc(
self._handle, val, temp_size)
if is_string_buffer_too_small(size_or_code):
# Buffer size must have changed between calls; check again.
temp_size = 0
elif size_or_code > 0 and temp_size == 0:
# Buffer size obtained; use it to retrieve the data.
temp_size = size_or_code
else:
break
check_for_error(size_or_code)
return val.value.decode('ascii')
@dig_edge_src.setter
def dig_edge_src(self, val):
cfunc = lib_importer.windll.DAQmxSetDigEdgeStartTrigSrc
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes_byte_str]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@dig_edge_src.deleter
def dig_edge_src(self):
cfunc = lib_importer.windll.DAQmxResetDigEdgeStartTrigSrc
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def dig_pattern_pattern(self):
"""
str: Specifies the digital pattern that must be met for the
Start Trigger to occur.
"""
cfunc = lib_importer.windll.DAQmxGetDigPatternStartTrigPattern
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_char_p,
ctypes.c_uint]
temp_size = 0
while True:
val = ctypes.create_string_buffer(temp_size)
size_or_code = cfunc(
self._handle, val, temp_size)
if is_string_buffer_too_small(size_or_code):
# Buffer size must have changed between calls; check again.
temp_size = 0
elif size_or_code > 0 and temp_size == 0:
# Buffer size obtained; use it to retrieve the data.
temp_size = size_or_code
else:
break
check_for_error(size_or_code)
return val.value.decode('ascii')
@dig_pattern_pattern.setter
def dig_pattern_pattern(self, val):
cfunc = lib_importer.windll.DAQmxSetDigPatternStartTrigPattern
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes_byte_str]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@dig_pattern_pattern.deleter
def dig_pattern_pattern(self):
cfunc = lib_importer.windll.DAQmxResetDigPatternStartTrigPattern
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def dig_pattern_src(self):
"""
:class:`nidaqmx.system.physical_channel.PhysicalChannel`:
Specifies the physical channels to use for pattern matching.
The order of the physical channels determines the order of
the pattern. If a port is included, the order of the
physical channels within the port is in ascending order.
"""
cfunc = lib_importer.windll.DAQmxGetDigPatternStartTrigSrc
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_char_p,
ctypes.c_uint]
temp_size = 0
while True:
val = ctypes.create_string_buffer(temp_size)
size_or_code = cfunc(
self._handle, val, temp_size)
if is_string_buffer_too_small(size_or_code):
# Buffer size must have changed between calls; check again.
temp_size = 0
elif size_or_code > 0 and temp_size == 0:
# Buffer size obtained; use it to retrieve the data.
temp_size = size_or_code
else:
break
check_for_error(size_or_code)
return PhysicalChannel(val.value.decode('ascii'))
@dig_pattern_src.setter
def dig_pattern_src(self, val):
val = val.name
cfunc = lib_importer.windll.DAQmxSetDigPatternStartTrigSrc
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes_byte_str]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@dig_pattern_src.deleter
def dig_pattern_src(self):
cfunc = lib_importer.windll.DAQmxResetDigPatternStartTrigSrc
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def dig_pattern_trig_when(self):
"""
:class:`nidaqmx.constants.DigitalPatternCondition`: Specifies
whether the Start Trigger occurs when the physical channels
specified with **dig_pattern_src** match or differ from the
digital pattern specified with **dig_pattern_pattern**.
"""
val = ctypes.c_int()
cfunc = lib_importer.windll.DAQmxGetDigPatternStartTrigWhen
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.POINTER(ctypes.c_int)]
error_code = cfunc(
self._handle, ctypes.byref(val))
check_for_error(error_code)
return DigitalPatternCondition(val.value)
@dig_pattern_trig_when.setter
def dig_pattern_trig_when(self, val):
val = val.value
cfunc = lib_importer.windll.DAQmxSetDigPatternStartTrigWhen
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_int]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@dig_pattern_trig_when.deleter
def dig_pattern_trig_when(self):
cfunc = lib_importer.windll.DAQmxResetDigPatternStartTrigWhen
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def retriggerable(self):
"""
bool: Specifies whether a finite task resets and waits for
another Start Trigger after the task completes. When you set
this property to True, the device performs a finite
acquisition or generation each time the Start Trigger occurs
until the task stops. The device ignores a trigger if it is
in the process of acquiring or generating signals.
"""
val = c_bool32()
cfunc = lib_importer.windll.DAQmxGetStartTrigRetriggerable
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.POINTER(c_bool32)]
error_code = cfunc(
self._handle, ctypes.byref(val))
check_for_error(error_code)
return val.value
@retriggerable.setter
def retriggerable(self, val):
cfunc = lib_importer.windll.DAQmxSetStartTrigRetriggerable
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, c_bool32]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@retriggerable.deleter
def retriggerable(self):
cfunc = lib_importer.windll.DAQmxResetStartTrigRetriggerable
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
@property
def term(self):
"""
str: Indicates the name of the internal Start Trigger terminal
for the task. This property does not return the name of the
trigger source terminal.
"""
cfunc = lib_importer.windll.DAQmxGetStartTrigTerm
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_char_p,
ctypes.c_uint]
temp_size = 0
while True:
val = ctypes.create_string_buffer(temp_size)
size_or_code = cfunc(
self._handle, val, temp_size)
if is_string_buffer_too_small(size_or_code):
# Buffer size must have changed between calls; check again.
temp_size = 0
elif size_or_code > 0 and temp_size == 0:
# Buffer size obtained; use it to retrieve the data.
temp_size = size_or_code
else:
break
check_for_error(size_or_code)
return val.value.decode('ascii')
@property
def trig_type(self):
"""
:class:`nidaqmx.constants.TriggerType`: Specifies the type of
trigger to use to start a task.
"""
val = ctypes.c_int()
cfunc = lib_importer.windll.DAQmxGetStartTrigType
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.POINTER(ctypes.c_int)]
error_code = cfunc(
self._handle, ctypes.byref(val))
check_for_error(error_code)
return TriggerType(val.value)
@trig_type.setter
def trig_type(self, val):
val = val.value
cfunc = lib_importer.windll.DAQmxSetStartTrigType
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes.c_int]
error_code = cfunc(
self._handle, val)
check_for_error(error_code)
@trig_type.deleter
def trig_type(self):
cfunc = lib_importer.windll.DAQmxResetStartTrigType
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
def cfg_anlg_edge_start_trig(
self, trigger_source="", trigger_slope=Slope.RISING,
trigger_level=0.0):
"""
Configures the task to start acquiring or generating samples
when an analog signal crosses the level you specify.
Args:
trigger_source (Optional[str]): Is the name of a virtual
channel or terminal where there is an analog signal to
use as the source of the trigger.
trigger_slope (Optional[nidaqmx.constants.Slope]): Specifies
on which slope of the signal to start acquiring or
generating samples when the signal crosses
**trigger_level**.
trigger_level (Optional[float]): Specifies at what threshold
to start acquiring or generating samples. Specify this
value in the units of the measurement or generation. Use
**trigger_slope** to specify on which slope to trigger
at this threshold.
"""
cfunc = lib_importer.windll.DAQmxCfgAnlgEdgeStartTrig
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes_byte_str,
ctypes.c_int, ctypes.c_double]
error_code = cfunc(
self._handle, trigger_source, trigger_slope.value, trigger_level)
check_for_error(error_code)
def cfg_anlg_window_start_trig(
self, window_top, window_bottom, trigger_source="",
trigger_when=WindowTriggerCondition1.ENTERING_WINDOW):
"""
Configures the task to start acquiring or generating samples
when an analog signal enters or leaves a range you specify.
Args:
window_top (float): Is the upper limit of the window.
Specify this value in the units of the measurement or
generation.
window_bottom (float): Is the lower limit of the window.
Specify this value in the units of the measurement or
generation.
trigger_source (Optional[str]): Is the name of a virtual
channel or terminal where there is an analog signal to
use as the source of the trigger.
trigger_when (Optional[nidaqmx.constants.WindowTriggerCondition1]):
Specifies whether the task starts measuring or
generating samples when the signal enters the window or
when it leaves the window. Use **window_bottom** and
**window_top** to specify the limits of the window.
"""
cfunc = lib_importer.windll.DAQmxCfgAnlgWindowStartTrig
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes_byte_str,
ctypes.c_int, ctypes.c_double, ctypes.c_double]
error_code = cfunc(
self._handle, trigger_source, trigger_when.value, window_top,
window_bottom)
check_for_error(error_code)
def cfg_dig_edge_start_trig(
self, trigger_source, trigger_edge=Edge.RISING):
"""
Configures the task to start acquiring or generating samples on
a rising or falling edge of a digital signal.
Args:
trigger_source (str): Specifies the name of a terminal where
there is a digital signal to use as the source of the
trigger.
trigger_edge (Optional[nidaqmx.constants.Edge]): Specifies
on which edge of the digital signal to start acquiring
or generating samples.
"""
cfunc = lib_importer.windll.DAQmxCfgDigEdgeStartTrig
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes_byte_str,
ctypes.c_int]
error_code = cfunc(
self._handle, trigger_source, trigger_edge.value)
check_for_error(error_code)
def cfg_dig_pattern_start_trig(
self, trigger_source, trigger_pattern,
trigger_when=DigitalPatternCondition.PATTERN_MATCHES):
"""
Configures a task to start acquiring or generating samples when
a digital pattern is matched.
Args:
trigger_source (str): Specifies the physical channels to use
for pattern matching. The order of the physical channels
determines the order of the pattern. If a port is
included, the order of the physical channels within the
port is in ascending order.
trigger_pattern (str): Specifies the digital pattern that
must be met for the trigger to occur.
trigger_when (Optional[nidaqmx.constants.DigitalPatternCondition]):
Specifies the condition under which the trigger occurs.
"""
cfunc = lib_importer.windll.DAQmxCfgDigPatternStartTrig
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle, ctypes_byte_str,
ctypes_byte_str, ctypes.c_int]
error_code = cfunc(
self._handle, trigger_source, trigger_pattern, trigger_when.value)
check_for_error(error_code)
def disable_start_trig(self):
"""
Configures the task to start acquiring or generating samples
immediately upon starting the task.
"""
cfunc = lib_importer.windll.DAQmxDisableStartTrig
if cfunc.argtypes is None:
with cfunc.arglock:
if cfunc.argtypes is None:
cfunc.argtypes = [
lib_importer.task_handle]
error_code = cfunc(
self._handle)
check_for_error(error_code)
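Each wrapper above initializes `cfunc.argtypes` with a double-checked lock, so the ctypes signature is written exactly once even when several threads race into the same call. A minimal, library-free sketch of that pattern (the names `LazyCFunc` and `ensure_argtypes` are hypothetical, not part of nidaqmx):

```python
import ctypes
import threading

class LazyCFunc:
    """Stand-in for a ctypes function pointer with a lazily-set signature."""
    def __init__(self):
        self.argtypes = None
        self.arglock = threading.Lock()

def ensure_argtypes(cfunc, argtypes):
    # Fast path: once argtypes is set, skip the lock entirely.
    if cfunc.argtypes is None:
        with cfunc.arglock:
            # Re-check under the lock: another thread may have won the race.
            if cfunc.argtypes is None:
                cfunc.argtypes = list(argtypes)
    return cfunc.argtypes

f = LazyCFunc()
ensure_argtypes(f, [ctypes.c_int, ctypes.c_double])
```

The outer `None` check keeps the common case lock-free; the inner one guarantees only the first thread through the lock assigns the signature.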
# microsoft_atp/komand_microsoft_atp/actions/get_file_id_from_alert_id/schema.py
# GENERATED BY KOMAND SDK - DO NOT EDIT
import komand
import json
class Component:
DESCRIPTION = "Retrieve the file ID related to an alert"
class Input:
ALERT_ID = "alert_id"
class Output:
FILE_INFORMATION = "file_information"
class GetFileIdFromAlertIdInput(komand.Input):
schema = json.loads("""
{
"type": "object",
"title": "Variables",
"properties": {
"alert_id": {
"type": "string",
"title": "Alert ID",
"description": "Alert ID to get files from",
"order": 1
}
},
"required": [
"alert_id"
]
}
""")
def __init__(self):
super(self.__class__, self).__init__(self.schema)
class GetFileIdFromAlertIdOutput(komand.Output):
schema = json.loads("""
{
"type": "object",
"title": "Variables",
"properties": {
"file_information": {
"$ref": "#/definitions/file_information",
"title": "File Information",
"description": "The file ID related to the given alert ID",
"order": 1
}
},
"required": [
"file_information"
],
"definitions": {
"file_information": {
"type": "object",
"title": "file_information",
"properties": {
"@odata.context": {
"type": "string",
"title": "OData Context",
"description": "OData context",
"order": 2
},
"file_list": {
"type": "array",
"title": "File List",
"description": "List of file information entities",
"items": {
"$ref": "#/definitions/file_list_entry"
},
"order": 1
}
},
"definitions": {
"file_list_entry": {
"type": "object",
"title": "file_list_entry",
"properties": {
"fileProductName": {
"type": "string",
"title": "File Product Name",
"description": "File product name",
"order": 1
},
"filePublisher": {
"type": "string",
"title": "File Publisher",
"description": "File publisher",
"order": 2
},
"fileType": {
"type": "string",
"title": "File Type",
"description": "File type",
"order": 3
},
"globalFirstObserved": {
"type": "string",
"title": "Global First Observed",
"description": "Global first observed",
"order": 4
},
"globalLastObserved": {
"type": "string",
"title": "Global Last Observed",
"description": "Global last observed",
"order": 5
},
"globalPrevalence": {
"type": "integer",
"title": "Global Prevalence",
"description": "Global prevalence",
"order": 6
},
"isPeFile": {
"type": "boolean",
"title": "Is PE File",
"description": "Is PE file",
"order": 7
},
"isValidCertificate": {
"type": "boolean",
"title": "Is Valid Certificate",
"description": "Is valid certificate",
"order": 8
},
"issuer": {
"type": "string",
"title": "Issuer",
"description": "Issuer",
"order": 9
},
"md5": {
"type": "string",
"title": "MD5",
"description": "MD5",
"order": 10
},
"sha1": {
"type": "string",
"title": "SHA1",
"description": "SHA1",
"order": 11
},
"sha256": {
"type": "string",
"title": "SHA256",
"description": "SHA256",
"order": 12
},
"signer": {
"type": "string",
"title": "Signer",
"description": "Signer",
"order": 13
},
"signerHash": {
"type": "string",
"title": "Signer Hash",
"description": "Signer hash",
"order": 14
},
"size": {
"type": "integer",
"title": "Size",
"description": "Size",
"order": 15
},
"windowsDefenderAVThreatName": {
"type": "string",
"title": "Windows Defender AV Threat Name",
"description": "Windows Defender AV threat name",
"order": 16
}
}
}
}
},
"file_list_entry": {
"type": "object",
"title": "file_list_entry",
"properties": {
"fileProductName": {
"type": "string",
"title": "File Product Name",
"description": "File product name",
"order": 1
},
"filePublisher": {
"type": "string",
"title": "File Publisher",
"description": "File publisher",
"order": 2
},
"fileType": {
"type": "string",
"title": "File Type",
"description": "File type",
"order": 3
},
"globalFirstObserved": {
"type": "string",
"title": "Global First Observed",
"description": "Global first observed",
"order": 4
},
"globalLastObserved": {
"type": "string",
"title": "Global Last Observed",
"description": "Global last observed",
"order": 5
},
"globalPrevalence": {
"type": "integer",
"title": "Global Prevalence",
"description": "Global prevalence",
"order": 6
},
"isPeFile": {
"type": "boolean",
"title": "Is PE File",
"description": "Is PE file",
"order": 7
},
"isValidCertificate": {
"type": "boolean",
"title": "Is Valid Certificate",
"description": "Is valid certificate",
"order": 8
},
"issuer": {
"type": "string",
"title": "Issuer",
"description": "Issuer",
"order": 9
},
"md5": {
"type": "string",
"title": "MD5",
"description": "MD5",
"order": 10
},
"sha1": {
"type": "string",
"title": "SHA1",
"description": "SHA1",
"order": 11
},
"sha256": {
"type": "string",
"title": "SHA256",
"description": "SHA256",
"order": 12
},
"signer": {
"type": "string",
"title": "Signer",
"description": "Signer",
"order": 13
},
"signerHash": {
"type": "string",
"title": "Signer Hash",
"description": "Signer hash",
"order": 14
},
"size": {
"type": "integer",
"title": "Size",
"description": "Size",
"order": 15
},
"windowsDefenderAVThreatName": {
"type": "string",
"title": "Windows Defender AV Threat Name",
"description": "Windows Defender AV threat name",
"order": 16
}
}
}
}
}
""")
def __init__(self):
super(self.__class__, self).__init__(self.schema)
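The input schema above only enforces a required `alert_id` string. A stdlib-only sketch of how such a required-field check might run against an instance (real Komand plugins validate through their own machinery; this is purely illustrative):

```python
import json

input_schema = json.loads("""
{
  "type": "object",
  "required": ["alert_id"],
  "properties": {"alert_id": {"type": "string"}}
}
""")

def missing_required(instance, schema):
    # Return the required keys that are absent from the instance.
    return [key for key in schema.get("required", []) if key not in instance]

assert missing_required({"alert_id": "123"}, input_schema) == []
```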
# tests/tools/assigner/actions/fixtures.py
import argparse
from kafka.tools.assigner.models.broker import Broker
from kafka.tools.assigner.models.cluster import Cluster
from kafka.tools.assigner.models.topic import Topic
def set_up_cluster():
cluster = Cluster()
cluster.add_broker(Broker(1, "brokerhost1.example.com"))
cluster.add_broker(Broker(2, "brokerhost2.example.com"))
cluster.brokers[1].rack = "a"
cluster.brokers[2].rack = "b"
cluster.add_topic(Topic("testTopic1", 2))
cluster.add_topic(Topic("testTopic2", 2))
partition = cluster.topics['testTopic1'].partitions[0]
partition.add_replica(cluster.brokers[1], 0)
partition.add_replica(cluster.brokers[2], 1)
partition = cluster.topics['testTopic1'].partitions[1]
partition.add_replica(cluster.brokers[2], 0)
partition.add_replica(cluster.brokers[1], 1)
partition = cluster.topics['testTopic2'].partitions[0]
partition.add_replica(cluster.brokers[2], 0)
partition.add_replica(cluster.brokers[1], 1)
partition = cluster.topics['testTopic2'].partitions[1]
partition.add_replica(cluster.brokers[1], 0)
partition.add_replica(cluster.brokers[2], 1)
return cluster
def set_up_cluster_4broker():
cluster = Cluster()
cluster.add_broker(Broker(1, "brokerhost1.example.com"))
cluster.add_broker(Broker(2, "brokerhost2.example.com"))
cluster.add_broker(Broker(3, "brokerhost3.example.com"))
cluster.add_broker(Broker(4, "brokerhost4.example.com"))
cluster.brokers[1].rack = "a"
cluster.brokers[2].rack = "a"
cluster.brokers[3].rack = "b"
cluster.brokers[4].rack = "b"
cluster.add_topic(Topic("testTopic1", 4))
cluster.add_topic(Topic("testTopic2", 4))
cluster.add_topic(Topic("testTopic3", 4))
partition = cluster.topics['testTopic1'].partitions[0]
partition.add_replica(cluster.brokers[1], 0)
partition.add_replica(cluster.brokers[2], 1)
partition = cluster.topics['testTopic1'].partitions[1]
partition.add_replica(cluster.brokers[2], 0)
partition.add_replica(cluster.brokers[3], 1)
partition = cluster.topics['testTopic1'].partitions[2]
partition.add_replica(cluster.brokers[2], 0)
partition.add_replica(cluster.brokers[3], 1)
partition = cluster.topics['testTopic1'].partitions[3]
partition.add_replica(cluster.brokers[4], 0)
partition.add_replica(cluster.brokers[1], 1)
partition = cluster.topics['testTopic2'].partitions[0]
partition.add_replica(cluster.brokers[4], 0)
partition.add_replica(cluster.brokers[3], 1)
partition = cluster.topics['testTopic2'].partitions[1]
partition.add_replica(cluster.brokers[2], 0)
partition.add_replica(cluster.brokers[4], 1)
partition = cluster.topics['testTopic2'].partitions[2]
partition.add_replica(cluster.brokers[2], 0)
partition.add_replica(cluster.brokers[1], 1)
partition = cluster.topics['testTopic2'].partitions[3]
partition.add_replica(cluster.brokers[3], 0)
partition.add_replica(cluster.brokers[1], 1)
partition = cluster.topics['testTopic3'].partitions[0]
partition.add_replica(cluster.brokers[3], 0)
partition.add_replica(cluster.brokers[2], 1)
partition = cluster.topics['testTopic3'].partitions[1]
partition.add_replica(cluster.brokers[4], 0)
partition.add_replica(cluster.brokers[2], 1)
partition = cluster.topics['testTopic3'].partitions[2]
partition.add_replica(cluster.brokers[1], 0)
partition.add_replica(cluster.brokers[2], 1)
partition = cluster.topics['testTopic3'].partitions[3]
partition.add_replica(cluster.brokers[3], 0)
partition.add_replica(cluster.brokers[4], 1)
return cluster
def set_up_subparser():
aparser = argparse.ArgumentParser(prog='kafka-assigner', description='Rejigger Kafka cluster partitions')
subparsers = aparser.add_subparsers(help='Select manipulation module to use')
return (aparser, subparsers)
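`set_up_subparser` hands back an `argparse` parser plus its subparsers object so each test module can attach its own sub-command. A small usage sketch of that pattern (the `'clone'` module name and `--brokers` flag are hypothetical):

```python
import argparse

parser = argparse.ArgumentParser(prog='kafka-assigner',
                                 description='Rejigger Kafka cluster partitions')
subparsers = parser.add_subparsers(dest='module',
                                   help='Select manipulation module to use')
clone = subparsers.add_parser('clone')  # hypothetical manipulation module
clone.add_argument('--brokers', type=int, nargs='+')

args = parser.parse_args(['clone', '--brokers', '1', '2'])
```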
# selfdrive/car/hyundai/values.py
from dataclasses import dataclass
from typing import Dict, List, Union
from cereal import car
from common.conversions import Conversions as CV
from selfdrive.car import dbc_dict
from selfdrive.car.docs_definitions import CarInfo, Harness
from common.params import Params
Ecu = car.CarParams.Ecu
# Steer torque limits
class CarControllerParams:
ACCEL_MIN = -4.0 # m/s^2
ACCEL_MAX = 2.0 # m/s^2
def __init__(self, CP):
self.STEER_MAX = int(Params().get("SteerMaxAdj", encoding="utf8")) # default 384
self.STEER_DELTA_UP = int(Params().get("SteerDeltaUpAdj", encoding="utf8")) # default 3
self.STEER_DELTA_DOWN = int(Params().get("SteerDeltaDownAdj", encoding="utf8")) # default 7
self.STEER_DRIVER_ALLOWANCE = 50
self.STEER_DRIVER_MULTIPLIER = 2
self.STEER_DRIVER_FACTOR = 1
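`STEER_DELTA_UP` and `STEER_DELTA_DOWN` bound how fast the commanded torque may rise or fall per control step, while `STEER_MAX` clamps its magnitude. A simplified, stand-alone sketch of that rate limit (openpilot's real limiter also accounts for measured driver torque; this only shows the ramp/clamp idea):

```python
def rate_limit_steer(desired, last, steer_max=384, delta_up=3, delta_down=7):
    # Clamp the request to the absolute torque limit first.
    desired = max(-steer_max, min(steer_max, desired))
    if desired > last:
        # Ramping up: move at most delta_up per control step.
        return min(desired, last + delta_up)
    # Ramping down (or reversing): allowed to back off by delta_down per step.
    return max(desired, last - delta_down)
```

Called once per control frame with the previous frame's output as `last`, this produces a torque command that can never jump by more than the configured deltas.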
class CAR:
# HYUNDAI
AVANTE_AD = "HYUNDAI AVANTE (AD)"
AVANTE_CN7 = "HYUNDAI AVANTE (CN7)"
AVANTE_HEV_CN7 = "HYUNDAI AVANTE HYBRID (CN7)"
I30_PD = "HYUNDAI I30 (PD)"
SONATA_DN8 = "HYUNDAI SONATA (DN8)"
SONATA_HEV_DN8 = "HYUNDAI SONATA HYBRID (DN8)"
SONATA_LF = "HYUNDAI SONATA (LF)"
SONATA_TURBO_LF = "HYUNDAI SONATA TURBO (LF)"
SONATA_HEV_LF = "HYUNDAI SONATA HYBRID (LF)"
KONA_OS = "HYUNDAI KONA (OS)"
KONA_EV_OS = "HYUNDAI KONA EV (OS)"
KONA_HEV_OS = "HYUNDAI KONA HYBRID (OS)"
IONIQ_EV_AE = "HYUNDAI IONIQ ELECTRIC (AE)"
IONIQ_HEV_AE = "HYUNDAI IONIQ HYBRID (AE)"
SANTAFE_TM = "HYUNDAI SANTAFE (TM)"
SANTAFE_HEV_TM = "HYUNDAI SANTAFE HYBRID (TM)"
PALISADE_LX2 = "HYUNDAI PALISADE (LX2)"
VELOSTER_JS = "HYUNDAI VELOSTER (JS)"
GRANDEUR_IG = "HYUNDAI GRANDEUR (IG)"
GRANDEUR_HEV_IG = "HYUNDAI GRANDEUR HYBRID (IG)"
GRANDEUR_FL_IG = "HYUNDAI GRANDEUR FL (IG)"
GRANDEUR_HEV_FL_IG = "HYUNDAI GRANDEUR HYBRID FL (IG)"
TUCSON_TL = "HYUNDAI TUCSON (TL)"
NEXO_FE = "HYUNDAI NEXO (FE)"
# KIA
KIA_FORTE = "KIA FORTE E 2018 & GT 2021"
K3_BD = "KIA K3 (BD)"
K5_JF = "KIA K5 (JF)"
K5_HEV_JF = "KIA K5 HYBRID (JF)"
K5_DL3 = "KIA K5 (DL3)"
SPORTAGE_QL = "KIA SPORTAGE (QL)"
SORENTO_UM = "KIA SORENTO (UM)"
STINGER_CK = "KIA STINGER (CK)"
NIRO_EV_DE = "KIA NIRO EV (DE)"
NIRO_HEV_DE = "KIA NIRO HYBRID (DE)"
K7_YG = "KIA K7 (YG)"
K7_HEV_YG = "KIA K7 HYBRID (YG)"
SELTOS_SP2 = "KIA SELTOS (SP2)"
SOUL_EV_SK3 = "KIA SOUL EV (SK3)"
MOHAVE_HM = "KIA MOHAVE (HM)"
# GENESIS
GENESIS_DH = "GENESIS (DH)"
GENESIS_G70_IK = "GENESIS G70 (IK)"
GENESIS_G70_2020 = "GENESIS G70 2020"
GENESIS_G80_DH = "GENESIS G80 (DH)"
GENESIS_G90_HI = "GENESIS G90 (HI)"
GENESIS_EQ900_HI = "GENESIS EQ900 (HI)"
@dataclass
class HyundaiCarInfo(CarInfo):
package: str = "SCC + LKAS"
good_torque: bool = True
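`HyundaiCarInfo` leans on dataclass field defaults so most `CAR_INFO` entries only override `package` or `harness`. A minimal stand-alone illustration of that pattern (`CarInfoSketch` is a hypothetical stand-in, not the real `CarInfo` base class):

```python
from dataclasses import dataclass

@dataclass
class CarInfoSketch:
    name: str
    package: str = "SCC + LKAS"   # default shared by most entries
    good_torque: bool = True

sonata = CarInfoSketch("Hyundai Sonata 2020-22", package="All")
avante = CarInfoSketch("Hyundai Avante")
```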
CAR_INFO: Dict[str, Union[HyundaiCarInfo, List[HyundaiCarInfo]]] = {
# hyundai
CAR.AVANTE_AD: HyundaiCarInfo("Hyundai Avante", video_link="https://youtu.be/_EdYQtV52-c"),
CAR.AVANTE_CN7: HyundaiCarInfo("Hyundai Avante 2021", video_link="https://youtu.be/_EdYQtV52-c"),
CAR.AVANTE_HEV_CN7: HyundaiCarInfo("Hyundai Avante Hybrid 2021"),
CAR.I30_PD: HyundaiCarInfo("Hyundai I30", "All"),
CAR.SONATA_DN8: HyundaiCarInfo("Hyundai Sonata 2020-22", "All", video_link="https://www.youtube.com/watch?v=ix63r9kE3Fw", harness=Harness.hyundai_a),
CAR.SONATA_HEV_DN8: HyundaiCarInfo("Hyundai Sonata Hybrid 2021-22", "All", harness=Harness.hyundai_a),
CAR.SONATA_LF: HyundaiCarInfo("Hyundai LF Sonata"),
CAR.SONATA_TURBO_LF: HyundaiCarInfo("Hyundai LF Sonata Turbo"),
CAR.SONATA_HEV_LF: HyundaiCarInfo("Hyundai LF Sonata Hybrid"),
CAR.KONA_OS: HyundaiCarInfo("Hyundai Kona 2020", harness=Harness.hyundai_b),
CAR.KONA_EV_OS: HyundaiCarInfo("Hyundai Kona Electric 2018-19", harness=Harness.hyundai_g),
CAR.KONA_HEV_OS: HyundaiCarInfo("Hyundai Kona Hybrid 2020", video_link="https://youtu.be/_EdYQtV52-c", harness=Harness.hyundai_i),
CAR.IONIQ_EV_AE: HyundaiCarInfo("Hyundai Ioniq Electric 2019", "All", harness=Harness.hyundai_c),
CAR.IONIQ_HEV_AE: HyundaiCarInfo("Hyundai Ioniq Hybrid 2020-22", "SCC + LFA", harness=Harness.hyundai_h),
CAR.SANTAFE_TM: HyundaiCarInfo("Hyundai Santa Fe 2019-20", "All", harness=Harness.hyundai_d),
CAR.SANTAFE_HEV_TM: HyundaiCarInfo("Hyundai Santa Fe Hybrid 2022", "All", harness=Harness.hyundai_l),
CAR.PALISADE_LX2: [
HyundaiCarInfo("Hyundai Palisade 2020-21", "All", video_link="https://youtu.be/TAnDqjF4fDY?t=456", harness=Harness.hyundai_h),
HyundaiCarInfo("Kia Telluride 2020", harness=Harness.hyundai_h),
],
CAR.VELOSTER_JS: HyundaiCarInfo("Hyundai Veloster 2019-20", "All", min_enable_speed=5. * CV.MPH_TO_MS, harness=Harness.hyundai_e),
CAR.GRANDEUR_IG: HyundaiCarInfo("Hyundai Grandeur IG", "All", harness=Harness.hyundai_c),
CAR.GRANDEUR_HEV_IG: HyundaiCarInfo("Hyundai Grandeur IG Hybrid", "All", harness=Harness.hyundai_c),
CAR.GRANDEUR_FL_IG: HyundaiCarInfo("Hyundai Grandeur IG FL", "All", harness=Harness.hyundai_k),
CAR.GRANDEUR_HEV_FL_IG: HyundaiCarInfo("Hyundai Grandeur IG FL Hybrid", "All", harness=Harness.hyundai_k),
CAR.TUCSON_TL: HyundaiCarInfo("Hyundai Tucson", "All"),
CAR.NEXO_FE: HyundaiCarInfo("Hyundai Nexo", "All"),
# Kia
CAR.KIA_FORTE: [
HyundaiCarInfo("Kia Forte 2018", harness=Harness.hyundai_b),
HyundaiCarInfo("Kia Forte 2019-21", harness=Harness.hyundai_g),
],
CAR.K3_BD: HyundaiCarInfo("Kia K3 2018-21"),
CAR.K5_JF: HyundaiCarInfo("Kia K5 2021-22", "SCC + LFA", harness=Harness.hyundai_a),
CAR.K5_HEV_JF: HyundaiCarInfo("Kia K5 Hybrid 2017"),
CAR.K5_DL3: HyundaiCarInfo("Kia K5 2021"),
CAR.SPORTAGE_QL: HyundaiCarInfo("Kia Sportage"),
CAR.SORENTO_UM: HyundaiCarInfo("Kia Sorento 2018-19", video_link="https://www.youtube.com/watch?v=Fkh3s6WHJz8"),
CAR.STINGER_CK: HyundaiCarInfo("Kia Stinger 2018", video_link="https://www.youtube.com/watch?v=MJ94qoofYw0", harness=Harness.hyundai_c),
CAR.NIRO_EV_DE: HyundaiCarInfo("Kia Niro Electric 2019-22", "All", video_link="https://www.youtube.com/watch?v=lT7zcG6ZpGo"),
CAR.NIRO_HEV_DE: HyundaiCarInfo("Kia Niro Plug-In Hybrid 2019", min_enable_speed=10. * CV.MPH_TO_MS, harness=Harness.hyundai_c),
CAR.K7_YG: HyundaiCarInfo("Kia K7 2016-19"),
CAR.K7_HEV_YG: HyundaiCarInfo("Kia K7 Hybrid 2016-19"),
CAR.SELTOS_SP2: HyundaiCarInfo("Kia Seltos 2021", harness=Harness.hyundai_a),
CAR.SOUL_EV_SK3: HyundaiCarInfo("Kia Soul EV 2019"),
CAR.MOHAVE_HM: HyundaiCarInfo("Kia Mohave 2019"),
# genesis
CAR.GENESIS_DH: HyundaiCarInfo("Genesis 2015-2016", min_enable_speed=19 * CV.MPH_TO_MS, harness=Harness.hyundai_j),
CAR.GENESIS_G70_IK: HyundaiCarInfo("Genesis G70 2018", "All", harness=Harness.hyundai_f),
CAR.GENESIS_G70_2020: HyundaiCarInfo("Genesis G70 2020", "All", harness=Harness.hyundai_f),
CAR.GENESIS_G80_DH: HyundaiCarInfo("Genesis G80 2017", "All", harness=Harness.hyundai_h),
CAR.GENESIS_G90_HI: HyundaiCarInfo("Genesis G90 2017", "All", harness=Harness.hyundai_c),
CAR.GENESIS_EQ900_HI: HyundaiCarInfo("Genesis EQ900", "All"),
}
class Buttons:
NONE = 0
RES_ACCEL = 1
SET_DECEL = 2
GAP_DIST = 3
CANCEL = 4
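The fingerprint tables below map CAN message address to DLC (payload length) per model. A hedged sketch of how an observed set of messages could be matched against one candidate fingerprint (openpilot's actual matcher is more involved; `fingerprint_matches` is illustrative only):

```python
def fingerprint_matches(observed, candidate):
    # Every observed (address, length) pair must agree with the candidate table.
    return all(candidate.get(addr) == dlc for addr, dlc in observed.items())

genesis_dh_fp = {67: 8, 68: 8, 304: 8, 320: 8, 339: 8, 356: 4}  # abbreviated
assert fingerprint_matches({67: 8, 356: 4}, genesis_dh_fp)
assert not fingerprint_matches({67: 4}, genesis_dh_fp)
```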
FINGERPRINTS = {
# genesis
CAR.GENESIS_DH: [{
67: 8, 68: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 7, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 5, 897: 8, 902: 8, 903: 6, 916: 8, 1024: 2, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1168: 7, 1170: 8, 1173: 8, 1184: 8, 1265: 4, 1280: 1, 1287: 4, 1292: 8, 1312: 8, 1322: 8, 1331: 8, 1332: 8, 1333: 8, 1334: 8, 1335: 8, 1342: 6, 1345: 8, 1363: 8, 1369: 8, 1370: 8, 1371: 8, 1378: 4, 1384: 5, 1407: 8, 1419: 8, 1427: 6, 1434: 2, 1456: 4
},{
67: 8, 68: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 7, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 5, 897: 8, 902: 8, 903: 6, 916: 8, 1024: 2, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1168: 7, 1170: 8, 1173: 8, 1184: 8, 1265: 4, 1280: 1, 1281: 3, 1287: 4, 1292: 8, 1312: 8, 1322: 8, 1331: 8, 1332: 8, 1333: 8, 1334: 8, 1335: 8, 1345: 8, 1363: 8, 1369: 8, 1370: 8, 1378: 4, 1379: 8, 1384: 5, 1407: 8, 1419: 8, 1427: 6, 1434: 2, 1456: 4
},{
67: 8, 68: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 7, 593: 8, 608: 8, 688: 5, 809: 8, 854: 7, 870: 7, 871: 8, 872: 5, 897: 8, 902: 8, 903: 6, 912: 7, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1168: 7, 1170: 8, 1173: 8, 1184: 8, 1265: 4, 1268: 8, 1280: 1, 1281: 3, 1287: 4, 1292: 8, 1312: 8, 1322: 8, 1331: 8, 1332: 8, 1333: 8, 1334: 8, 1335: 8, 1345: 8, 1363: 8, 1369: 8, 1370: 8, 1371: 8, 1378: 4, 1384: 5, 1407: 8, 1419: 8, 1427: 6, 1434: 2, 1437: 8, 1456: 4
},{
67: 8, 68: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 7, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 5, 897: 8, 902: 8, 903: 6, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1168: 7, 1170: 8, 1173: 8, 1184: 8, 1265: 4, 1280: 1, 1287: 4, 1292: 8, 1312: 8, 1322: 8, 1331: 8, 1332: 8, 1333: 8, 1334: 8, 1335: 8, 1345: 8, 1363: 8, 1369: 8, 1370: 8, 1378: 4, 1379: 8, 1384: 5, 1407: 8, 1425: 2, 1427: 6, 1437: 8, 1456: 4
},{
67: 8, 68: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 7, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 5, 897: 8, 902: 8, 903: 6, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1168: 7, 1170: 8, 1173: 8, 1184: 8, 1265: 4, 1280: 1, 1287: 4, 1292: 8, 1312: 8, 1322: 8, 1331: 8, 1332: 8, 1333: 8, 1334: 8, 1335: 8, 1345: 8, 1363: 8, 1369: 8, 1370: 8, 1371: 8, 1378: 4, 1384: 5, 1407: 8, 1419: 8, 1425: 2, 1427: 6, 1437: 8, 1456: 4
}],
CAR.GENESIS_G70_IK: [{
67: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 358: 6, 544: 8, 576: 8, 593: 8, 608: 8, 688: 5, 809: 8, 832:8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1064: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1156: 8, 1168: 7, 1170: 8, 1173:8, 1184: 8, 1186: 2, 1191: 2, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1379: 8, 1384: 8, 1407: 8, 1419:8, 1427: 6, 1456: 4, 1470: 8, 1988: 8, 1996: 8, 2000: 8, 2004: 8, 2008: 8, 2012: 8, 2015: 8
}],
CAR.GENESIS_G80_DH: [{
67: 8, 68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 358: 6, 544: 8, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 916: 8, 1024: 2, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1156: 8, 1168: 7, 1170: 8, 1173: 8, 1184: 8, 1191: 2, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1370: 8, 1371: 8, 1378: 4, 1384: 8, 1407: 8, 1419: 8, 1425: 2, 1427: 6, 1434: 2, 1456: 4, 1470: 8
},{
67: 8, 68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 358: 6, 359: 8, 544: 8, 546: 8, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1064: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1156: 8, 1157: 4, 1168: 7, 1170: 8, 1173: 8, 1184: 8, 1265: 4, 1280: 1, 1281: 3, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1370: 8, 1371: 8, 1378: 4, 1384: 8, 1407: 8, 1419: 8, 1425: 2, 1427: 6, 1434: 2, 1437: 8, 1456: 4, 1470: 8
},{
67: 8, 68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 358: 6, 544: 8, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1064: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1156: 8, 1157: 4, 1162: 8, 1168: 7, 1170: 8, 1173: 8, 1184: 8, 1193: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1371: 8, 1378: 4, 1384: 8, 1407: 8, 1419: 8, 1425: 2, 1427: 6, 1437: 8, 1456: 4, 1470: 8
}],
CAR.GENESIS_G90_HI: [{
67: 8, 68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 358: 6, 359: 8, 544: 8, 593: 8, 608: 8, 688: 5, 809: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1162: 4, 1168: 7, 1170: 8, 1173: 8, 1184: 8, 1265: 4, 1280: 1, 1281: 3, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1370: 8, 1371: 8, 1378: 4, 1384: 8, 1407: 8, 1419: 8, 1425: 2, 1427: 6, 1434: 2, 1456: 4, 1470: 8, 1988: 8, 2000: 8, 2003: 8, 2004: 8, 2005: 8, 2008: 8, 2011: 8, 2012: 8, 2013: 8
},{
67: 8, 68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 358: 6, 359: 8, 544: 8, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1064: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1156: 8, 1157: 4, 1162: 4, 1168: 7, 1170: 8, 1173: 8, 1184: 8, 1265: 4, 1280: 1, 1281: 3, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1370: 8, 1371: 8, 1378: 4, 1384: 8, 1407: 8, 1419: 8, 1425: 2, 1427: 6, 1434: 2, 1456: 4, 1470: 8
}],
# hyundai
CAR.AVANTE_CN7: [{
66: 8, 67: 8, 68: 8, 127: 8, 273: 8, 274: 8, 275: 8, 339: 8, 356: 4, 399: 8, 512: 6, 544: 8, 593: 8, 608: 8, 688: 5, 790: 8, 809: 8, 897: 8, 832: 8, 899: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1170: 8, 1265: 4, 1280: 1, 1282: 4, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1314: 8, 1322: 8, 1345: 8, 1349: 8, 1351: 8, 1353: 8, 1363: 8, 1366: 8, 1367: 8, 1369: 8, 1407: 8, 1415: 8, 1419: 8, 1425: 2, 1427: 6, 1440: 8, 1456: 4, 1472: 8, 1486: 8, 1487: 8, 1491: 8, 1530: 8, 1532: 5, 2001: 8, 2003: 8, 2004: 8, 2009: 8, 2012: 8, 2016: 8, 2017: 8, 2024: 8, 2025: 8
}],
CAR.I30_PD: [{
66: 8, 67: 8, 68: 8, 127: 8, 128: 8, 129: 8, 273: 8, 274: 8, 275: 8, 339: 8, 354: 3, 356: 4, 399: 8, 512: 6, 544: 8, 593: 8, 608: 8, 688: 5, 790: 8, 809: 8, 884: 8, 897: 8, 899: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1151: 6, 1168: 7, 1170: 8, 1193: 8, 1265: 4, 1280: 1, 1282: 4, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1345: 8, 1348: 8, 1349: 8, 1351: 8, 1353: 8, 1356: 8, 1363: 8, 1365: 8, 1366: 8, 1367: 8, 1369: 8, 1407: 8, 1414: 3, 1415: 8, 1427: 6, 1440: 8, 1456: 4, 1470: 8, 1486: 8, 1487: 8, 1491: 8, 1530: 8, 1952: 8, 1960: 8, 1988: 8, 2000: 8, 2001: 8, 2005: 8, 2008: 8, 2009: 8, 2013: 8, 2017: 8, 2025: 8
},{
66: 8, 67: 8, 68: 8, 127: 8, 128: 8, 129: 8, 273: 8, 274: 8, 275: 8, 339: 8, 354: 3, 356: 4, 399: 8, 512: 6, 544: 8, 593: 8, 608: 8, 688: 5, 790: 8, 809: 8, 832: 8, 897: 8, 899: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1151: 6, 1168: 7, 1170: 8, 1265: 4, 1280: 1, 1282: 4, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1349: 8, 1351: 8, 1353: 8, 1356: 8, 1363: 8, 1366: 8, 1367: 8, 1369: 8, 1407: 8, 1414: 3, 1415: 8, 1419: 8, 1440: 8, 1456: 4, 1470: 8, 1486: 8, 1487: 8, 1491: 8, 1530: 8
},{
66: 8, 67: 8, 68: 8, 127: 8, 128: 8, 129: 8, 273: 8, 274: 8, 275: 8, 339: 8, 354: 3, 356: 4, 399: 8, 512: 6, 544: 8, 593: 8, 608: 8, 688: 5, 790: 8, 809: 8, 832: 8, 897: 8, 899: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1151: 6, 1168: 7, 1170: 8, 1265: 4, 1280: 1, 1282: 4, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1349: 8, 1351: 8, 1353: 8, 1356: 8, 1363: 8, 1366: 8, 1367: 8, 1369: 8, 1407: 8, 1414: 3, 1419: 8, 1427: 6, 1440: 8, 1456: 4, 1470: 8, 1486: 8, 1487: 8, 1491: 8, 1960: 8, 1990: 8, 1998: 8, 2000: 8, 2001: 8, 2004: 8, 2005: 8, 2008: 8, 2009: 8, 2012: 8, 2013: 8, 2015: 8, 2016: 8, 2017: 8, 2024: 8, 2025: 8
},{
67: 8, 68: 8, 127: 8, 128: 8, 129: 8, 273: 8, 274: 8, 275: 8, 339: 8, 354: 3, 356: 4, 399: 8, 512: 6, 544: 8, 608: 8, 790: 8, 809: 8, 832: 8, 899: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1151: 6, 1168: 7, 1170: 8, 1265: 4, 1280: 1, 1282: 4, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1349: 8, 1351: 8, 1353: 8, 1356: 8, 1363: 8, 1365: 8, 1366: 8, 1367: 8, 1369: 8, 1407: 8, 1414: 3, 1415: 8, 1419: 8, 1427: 6, 1440: 8, 1456: 4, 1470: 8, 1486: 8, 1487: 8, 1491: 8, 1530: 8
}],
CAR.SONATA_DN8: [{
67: 8, 68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 8, 545: 8, 546: 8, 547: 8, 548: 8, 549: 8, 550: 8, 576: 8, 593: 8, 608: 8, 688: 6, 809: 8, 832: 8, 854: 8, 865: 8, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 905: 8, 908: 8, 909: 8, 912: 7, 913: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1089: 5, 1107: 5, 1108: 8, 1114: 8, 1136: 8, 1145: 8, 1151: 8, 1155: 8, 1156: 8, 1157: 4, 1162: 8, 1164: 8, 1168: 8, 1170: 8, 1173: 8, 1180: 8, 1183: 8, 1184: 8, 1186: 2, 1191: 2, 1193: 8, 1210: 8, 1225: 8, 1227: 8, 1265: 4, 1268: 8, 1280: 8, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1330: 8, 1339: 8, 1342: 6, 1343: 8, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1371: 8, 1378: 8, 1384: 8, 1394: 8, 1407: 8, 1419: 8, 1427: 6, 1446: 8, 1456: 4, 1460: 8, 1470: 8, 1485: 8, 1504: 3
},{
67: 8, 68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 8, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1156: 8, 1162: 4, 1168: 7, 1170: 8, 1173: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1371: 8, 1384: 8, 1407: 8, 1419: 8, 1427: 6, 1444: 8, 1456: 4, 1470: 8
},{
66: 8, 67: 8, 68: 8, 127: 8, 273: 8, 274: 8, 275: 8, 339: 8, 356: 4, 399: 8, 512: 6, 544: 8, 608: 8, 790: 8, 809: 8, 832: 8, 899: 8, 902: 8, 903: 6, 912: 7, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1151: 6, 1168: 7, 1170: 8, 1265: 4, 1268: 8, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1312: 8, 1314: 8, 1322: 8, 1331: 8, 1332: 8, 1333: 8, 1345: 8, 1348: 8, 1349: 8, 1351: 8, 1353: 8, 1363: 8, 1366: 8, 1367: 8, 1369: 8, 1407: 8, 1415: 8, 1419: 8, 1427: 6, 1440: 8, 1456: 4, 1460: 8, 1470: 8, 1472: 8, 1491: 8, 1530: 8
},{
66: 8, 67: 8, 68: 8, 127: 8, 128: 8, 129: 8, 273: 8, 274: 8, 275: 8, 339: 8, 354: 3, 356: 4, 399: 8, 512: 6, 544: 8, 608: 8, 790: 8, 809: 8, 832: 8, 899: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1170: 8, 1265: 4, 1280: 1, 1282: 4, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1314: 8, 1322: 8, 1345: 8, 1349: 8, 1351: 8, 1353: 8, 1356: 8, 1363: 8, 1365: 8, 1366: 8, 1367: 8, 1369: 8, 1407: 8, 1427: 6, 1440: 8, 1456: 4, 1472: 8, 1491: 8, 1530: 8
}],
CAR.SONATA_LF: [{
66: 8, 67: 8, 68: 8, 127: 8, 273: 8, 274: 8, 275: 8, 339: 8, 356: 4, 399: 8, 512: 6, 544: 8, 593: 8, 608: 8, 688: 5, 790: 8, 809: 8, 832: 8, 897: 8, 899: 8, 902: 8, 903: 6, 912: 7, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1151: 6, 1168: 7, 1170: 8, 1265: 4, 1268: 8, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1312: 8, 1314: 8, 1322: 8, 1331: 8, 1332: 8, 1333: 8, 1345: 8, 1348: 8, 1349: 8, 1351: 8, 1353: 8, 1363: 8, 1365: 8, 1366: 8, 1367: 8, 1369: 8, 1407: 8, 1414: 3, 1415: 8, 1419: 8, 1427: 6, 1440: 8, 1456: 4, 1460: 8, 1470: 8, 1471: 8, 1472: 8, 1491: 8, 1530: 8, 1532: 5, 2016: 8, 2024: 8
},{
64: 8, 66: 8, 67: 8, 68: 8, 127: 8, 273: 8, 274: 8, 275: 8, 339: 8, 356: 4, 399: 8, 447: 8, 512: 6, 544: 8, 593: 8, 608: 8, 688: 5, 790: 8, 809: 8, 832: 8, 884: 8, 897: 8, 899: 8, 902: 8, 903: 6, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1151: 6, 1168: 7, 1170: 8, 1253: 8, 1254: 8, 1255: 8, 1265: 4, 1280: 1, 1282: 4, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1314: 8, 1322: 8, 1331: 8, 1332: 8, 1333: 8, 1342: 6, 1345: 8, 1348: 8, 1349: 8, 1351: 8, 1353: 8, 1363: 8, 1365: 8, 1366: 8, 1367: 8, 1369: 8, 1371: 8, 1407: 8, 1414: 3, 1415: 8, 1419: 8, 1425: 2, 1427: 6, 1440: 8, 1456: 4, 1460: 8, 1470: 8, 1472: 8, 1486: 8, 1487: 8, 1491: 8, 1530: 8, 2015: 8, 2024: 8
},{
64: 8, 66: 8, 67: 8, 68: 8, 127: 8, 273: 8, 274: 8, 275: 8, 339: 8, 356: 4, 399: 8, 512: 6, 544: 8, 593: 8, 608: 8, 625: 8, 688: 5, 790: 8, 809: 8, 832: 8, 897: 8, 899: 8, 902: 8, 903: 6, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1151: 6, 1168: 7, 1170: 8, 1265: 4, 1280: 1, 1282: 4, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1314: 8, 1322: 8, 1331: 8, 1332: 8, 1333: 8, 1342: 6, 1345: 8, 1348: 8, 1349: 8, 1351: 8, 1353: 8, 1363: 8, 1366: 8, 1367: 8, 1369: 8, 1371: 8, 1407: 8, 1415: 8, 1419: 8, 1425: 2, 1427: 6, 1440: 8, 1456: 4, 1460: 8, 1470: 8, 1472: 8, 1491: 8, 1530: 8, 1990: 8, 1998: 8, 2016: 8, 2024: 8
},{
64: 8, 66: 8, 67: 8, 68: 8, 127: 8, 273: 8, 274: 8, 275: 8, 339: 8, 356: 4, 399: 8, 447: 8, 512: 6, 544: 8, 593: 8, 608: 8, 688: 5, 790: 8, 809: 8, 832: 8, 884: 8, 897: 8, 899: 8, 902: 8, 903: 6, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1151: 6, 1168: 7, 1170: 8, 1253: 8, 1254: 8, 1255: 8, 1265: 4, 1280: 1, 1282: 4, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1314: 8, 1322: 8, 1331: 8, 1332: 8, 1333: 8, 1342: 6, 1345: 8, 1348: 8, 1349: 8, 1351: 8, 1353: 8, 1363: 8, 1365: 8, 1366: 8, 1367: 8, 1369: 8, 1407: 8, 1414: 3, 1415: 8, 1419: 8, 1425: 2, 1427: 6, 1440: 8, 1456: 4, 1460: 8, 1470: 8, 1472: 8, 1486: 8, 1487: 8, 1491: 8, 1530: 8, 1905: 8, 1913: 8, 1990: 8, 1998: 8, 2006: 8, 2014: 8, 2016: 8, 2017: 8, 2024: 8, 2025: 8
},{
66: 8, 67: 8, 68: 8, 127: 8, 273: 8, 274: 8, 275: 8, 339: 8, 356: 4, 399: 8, 447: 8, 512: 6, 544: 8, 593: 8, 608: 8, 688: 5, 790: 8, 809: 8, 832: 8, 884: 8, 897: 8, 899: 8, 902: 8, 903: 6, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1151: 6, 1168: 7, 1170: 8, 1253: 8, 1254: 8, 1255: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1314: 8, 1322: 8, 1331: 8, 1332: 8, 1333: 8, 1342: 6, 1345: 8, 1348: 8, 1349: 8, 1351: 8, 1353: 8, 1363: 8, 1365: 8, 1366: 8, 1367: 8, 1369: 8, 1397: 8, 1407: 8, 1415: 8, 1419: 8, 1425: 2, 1427: 6, 1440: 8, 1456: 4, 1470: 8, 1472: 8, 1486: 8, 1487: 8, 1491: 8, 1530: 8, 1532: 5, 2000: 8, 2001: 8, 2004: 8, 2005: 8, 2008: 8, 2009: 8, 2012: 8, 2013: 8, 2014: 8, 2016: 8, 2017: 8, 2024: 8, 2025: 8
}],
CAR.SONATA_HEV_DN8: [{
127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 546: 8, 548: 8, 576: 8, 593: 8, 688: 6, 757: 2, 832: 8, 865: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 913: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1102: 8, 1108: 8, 1114: 8, 1136: 6, 1138: 5, 1151: 8, 1155: 8, 1156: 8, 1157: 4, 1162: 8, 1164: 8, 1168: 8, 1173: 8, 1180: 8, 1184: 8, 1186: 2, 1191: 2, 1193: 8, 1210: 8, 1225: 8, 1227: 8, 1265: 4, 1268: 8, 1280: 8, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1330: 8, 1339: 8, 1342: 6, 1343: 8, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1371: 8, 1378: 8, 1407: 8, 1419: 8, 1427: 6, 1429: 8, 1430: 8, 1446: 8, 1448: 8, 1456: 4, 1460: 8, 1470: 8, 1476: 8, 1535: 8
},{
127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 576: 8, 593: 8, 688: 6, 757: 2, 832: 8, 865: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 913: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1102: 8, 1108: 8, 1114: 8, 1136: 6, 1138: 5, 1151: 8, 1155: 8, 1156: 8, 1157: 4, 1162: 8, 1164: 8, 1168: 8, 1173: 8, 1180: 8, 1184: 8, 1186: 2, 1191: 2, 1193: 8, 1210: 8, 1225: 8, 1227: 8, 1265: 4, 1268: 8, 1280: 8, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1330: 8, 1339: 8, 1342: 6, 1343: 8, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1371: 8, 1378: 8, 1407: 8, 1419: 8, 1427: 6, 1429: 8, 1430: 8, 1446: 8, 1448: 8, 1456: 4, 1460: 8, 1470: 8, 1476: 8, 1535: 8
},{
68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 7, 593: 8, 688: 5, 832: 7, 881: 8, 882: 8, 897: 8, 902: 8, 903: 6, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 6, 1151: 6, 1168: 7, 1173: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1322: 8, 1331: 8, 1332: 8, 1333: 8, 1345: 8, 1355: 8, 1363: 8, 1369: 8, 1407: 8, 1419: 8, 1427: 6, 1429: 8, 1430: 8, 1448: 8, 1456: 4, 1470: 8, 1476: 8
}],
CAR.SONATA_TURBO_LF: [{
66: 8, 67: 8, 68: 8, 127: 8, 273: 8, 274: 8, 275: 8, 339: 8, 356: 4, 399: 8, 512: 6, 544: 8, 593: 8, 608: 8, 688: 5, 790: 8, 809: 8, 832: 8, 897: 8, 899: 8, 902: 8, 903: 6, 912: 7, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1151: 6, 1168: 7, 1170: 8, 1265: 4, 1268: 8, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1312: 8, 1314: 8, 1322: 8, 1331: 8, 1332: 8, 1333: 8, 1345: 8, 1348: 8, 1349: 8, 1351: 8, 1353: 8, 1363: 8, 1365: 8, 1366: 8, 1367: 8, 1369: 8, 1407: 8, 1414: 3, 1415: 8, 1419: 8, 1427: 6, 1440: 8, 1456: 4, 1460: 8, 1470: 8, 1471: 8, 1472: 8, 1491: 8, 1530: 8, 1532: 5, 2016: 8, 2024: 8
},{
64: 8, 66: 8, 67: 8, 68: 8, 127: 8, 273: 8, 274: 8, 275: 8, 339: 8, 356: 4, 399: 8, 447: 8, 512: 6, 544: 8, 593: 8, 608: 8, 688: 5, 790: 8, 809: 8, 832: 8, 884: 8, 897: 8, 899: 8, 902: 8, 903: 6, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1151: 6, 1168: 7, 1170: 8, 1253: 8, 1254: 8, 1255: 8, 1265: 4, 1280: 1, 1282: 4, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1314: 8, 1322: 8, 1331: 8, 1332: 8, 1333: 8, 1342: 6, 1345: 8, 1348: 8, 1349: 8, 1351: 8, 1353: 8, 1363: 8, 1365: 8, 1366: 8, 1367: 8, 1369: 8, 1371: 8, 1407: 8, 1414: 3, 1415: 8, 1419: 8, 1425: 2, 1427: 6, 1440: 8, 1456: 4, 1460: 8, 1470: 8, 1472: 8, 1486: 8, 1487: 8, 1491: 8, 1530: 8, 2015: 8, 2024: 8
},{
64: 8, 66: 8, 67: 8, 68: 8, 127: 8, 273: 8, 274: 8, 275: 8, 339: 8, 356: 4, 399: 8, 447: 8, 512: 6, 544: 8, 593: 8, 608: 8, 688: 5, 790: 8, 809: 8, 832: 8, 884: 8, 897: 8, 899: 8, 902: 8, 903: 6, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1151: 6, 1168: 7, 1170: 8, 1253: 8, 1254: 8, 1255: 8, 1265: 4, 1280: 1, 1282: 4, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1314: 8, 1322: 8, 1331: 8, 1332: 8, 1333: 8, 1342: 6, 1345: 8, 1348: 8, 1349: 8, 1351: 8, 1353: 8, 1363: 8, 1365: 8, 1366: 8, 1367: 8, 1369: 8, 1407: 8, 1414: 3, 1415: 8, 1419: 8, 1425: 2, 1427: 6, 1440: 8, 1456: 4, 1460: 8, 1470: 8, 1472: 8, 1486: 8, 1487: 8, 1491: 8, 1530: 8, 1905: 8, 1913: 8, 1990: 8, 1998: 8, 2006: 8, 2014: 8, 2016: 8, 2017: 8, 2024: 8, 2025: 8
}],
CAR.KONA_OS: [{
67: 8, 127: 8, 304: 8, 320: 8, 339: 8, 354: 3, 356: 4, 544: 8, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 909: 8, 916: 8, 1040: 8, 1078: 4, 1107: 5, 1136: 8, 1156: 8, 1170: 8, 1173: 8, 1191: 2, 1265: 4, 1280: 1, 1287: 4, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1384: 8, 1394: 8, 1407: 8, 1414: 3, 1419: 8, 1427: 6, 1456: 4, 1470: 8, 2004: 8, 2009: 8, 2012: 8
},{
67: 8, 127: 8, 304: 8, 320: 8, 339: 8, 354: 3, 356: 4, 544: 8, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 909: 8, 916: 8, 1040: 8, 1064: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1156: 8, 1170: 8, 1173: 8, 1186: 2, 1191: 2, 1265: 4, 1280: 1, 1287: 4, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1384: 8, 1394: 8, 1407: 8, 1414: 3, 1419: 8, 1427: 6, 1456: 4, 1470: 8, 1988: 8, 1990: 8, 1998: 8, 2001: 8, 2004: 8, 2009: 8, 2012: 8, 2015: 8
},{
67: 8, 127: 8, 304: 8, 320: 8, 339: 8, 354: 3, 356: 4, 544: 8, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1107: 5, 1136: 8, 1156: 8, 1170: 8, 1173: 8, 1186: 2, 1191: 2, 1193: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1384: 8, 1394: 8, 1407: 8, 1414: 3, 1419: 8, 1427: 6, 1456: 4, 1470: 8
}],
CAR.KONA_EV_OS: [{
127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 549: 8, 593: 8, 688: 5, 832: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 8, 1151: 6, 1168: 7, 1173: 8, 1183: 8, 1186: 2, 1191: 2, 1225: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1307: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1378: 4, 1407: 8, 1419: 8, 1426: 8, 1427: 6, 1429: 8, 1430: 8, 1456: 4, 1470: 8, 1473: 8, 1507: 8, 1535: 8, 2000: 8, 2004: 8, 2008: 8, 2012: 8
},{
127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 593: 8, 688: 5, 832: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 8, 1151: 6, 1157: 4, 1168: 7, 1173: 8, 1183: 8, 1186: 2, 1191: 2, 1225: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1378: 4, 1407: 8, 1419: 8, 1426: 8, 1427: 6, 1429: 8, 1430: 8, 1456: 4, 1470: 8, 1473: 8, 1507: 8, 1535: 8
},{
127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 546: 8, 593: 8, 688: 5, 832: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 8, 1151: 6, 1157: 4, 1168: 7, 1173: 8, 1183: 8, 1186: 2, 1191: 2, 1225: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1378: 4, 1407: 8, 1419: 8, 1426: 8, 1427: 6, 1429: 8, 1430: 8, 1456: 4, 1470: 8, 1473: 8, 1507: 8, 1535: 8
},{
127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 546: 8, 593: 8, 688: 5, 832: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 8, 1151: 6, 1157: 4, 1168: 7, 1173: 8, 1183: 8, 1186: 2, 1191: 2, 1225: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1407: 8, 1419: 8, 1426: 8, 1427: 6, 1429: 8, 1430: 8, 1456: 4, 1470: 8, 1473: 8, 1507: 8, 1535: 8
},{
127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 546: 8, 547: 8, 548: 8, 593: 8, 688: 5, 832: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 8, 1151: 6, 1157: 4, 1168: 7, 1173: 8, 1183: 8, 1186: 2, 1191: 2, 1193: 8, 1225: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1307: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1378: 4, 1379: 8, 1407: 8, 1419: 8, 1426: 8, 1427: 6, 1429: 8, 1430: 8, 1456: 4, 1470: 8, 1473: 8, 1507: 8, 1535: 8
},{
127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 546: 8, 547: 8, 548: 8, 549: 8, 593: 8, 688: 5, 832: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 8, 1151: 6, 1157: 4, 1168: 7, 1173: 8, 1183: 8, 1186: 2, 1191: 2, 1225: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1307: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1378: 4, 1407: 8, 1419: 8, 1426: 8, 1427: 6, 1429: 8, 1430: 8, 1456: 4, 1470: 8, 1473: 8, 1535: 8, 1988: 8, 1996: 8, 2000: 8, 2004: 8, 2008: 8, 2012: 8
},{
127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 546: 8, 593: 8, 688: 5, 832: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 8, 1151: 6, 1157: 4, 1168: 7, 1173: 8, 1183: 8, 1186: 2, 1191: 2, 1193: 8, 1225: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1378: 4, 1379: 8, 1407: 8, 1419: 8, 1426: 8, 1427: 6, 1429: 8, 1430: 8, 1456: 4, 1470: 8, 1473: 8, 1507: 8, 1535: 8
},{
127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 546: 8, 547: 8, 548: 8, 549: 8, 593: 8, 688: 5, 832: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 8, 1151: 6, 1157: 4, 1168: 7, 1173: 8, 1183: 8, 1186: 2, 1191: 2, 1225: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1378: 4, 1407: 8, 1419: 8, 1426: 8, 1427: 6, 1429: 8, 1430: 8, 1456: 4, 1470: 8, 1473: 8, 1507: 8, 1535: 8
}],
CAR.KONA_HEV_OS: [{
68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 576: 8, 832: 8, 881: 8, 882: 8, 902: 8, 903: 8, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 6, 1173: 8, 1225: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1407: 8, 1419: 8, 1427: 6, 1429: 8, 1430: 8, 1448: 8, 1456: 4, 1470: 8, 1476: 8, 1535: 8
},{
67: 8, 127: 8, 304: 8, 320: 8, 339: 8, 354: 3, 356: 4, 544: 8, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 909: 8, 916: 8, 1040: 8, 1064: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1156: 8, 1170: 8, 1173: 8, 1186: 2, 1191: 2, 1265: 4, 1280: 1, 1287: 4, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1384: 8, 1394: 8, 1407: 8, 1414: 3, 1419: 8, 1427: 6, 1456: 4, 1470: 8, 1988: 8, 1990: 8, 1998: 8, 2001: 8, 2004: 8, 2009: 8, 2012: 8, 2015: 8
},{
68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 546: 8, 547: 8, 548: 8, 549: 8, 576: 8, 593: 8, 688: 5, 832: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 6, 1138: 4, 1151: 6, 1155: 8, 1157: 4, 1164: 8, 1168: 7, 1173: 8, 1183: 8, 1186: 2, 1191: 2, 1193: 8, 1225: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1378: 8, 1379: 8, 1407: 8, 1419: 8, 1427: 6, 1429: 8, 1430: 8, 1448: 8, 1456: 4, 1470: 8, 1476: 8, 1535: 8
},{
68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 546: 8, 547: 8, 548: 8, 549: 8, 576: 8, 593: 8, 688: 5, 832: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 6, 1138: 4, 1151: 6, 1155: 8, 1157: 4, 1164: 8, 1168: 7, 1173: 8, 1183: 8, 1186: 2, 1188: 8, 1191: 2, 1193: 8, 1225: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1378: 8, 1379: 8, 1407: 8, 1419: 8, 1427: 6, 1429: 8, 1430: 8, 1448: 8, 1456: 4, 1470: 8, 1476: 8, 1535: 8
},{
68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 549: 8, 576: 8, 593: 8, 688: 5, 832: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 6, 1138: 4, 1151: 6, 1155: 8, 1157: 4, 1164: 8, 1168: 7, 1173: 8, 1183: 8, 1186: 2, 1191: 2, 1193: 8, 1225: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1378: 8, 1379: 8, 1407: 8, 1419: 8, 1427: 6, 1429: 8, 1430: 8, 1448: 8, 1456: 4, 1470: 8, 1476: 8, 1535: 8, 1988: 8, 2000: 8, 2004: 8, 2008: 8, 2012: 8
},{
68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 546: 8, 547: 8, 548: 8, 549: 8, 576: 8, 593: 8, 688: 5, 832: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 6, 1138: 4, 1151: 6, 1155: 8, 1157: 4, 1164: 8, 1168: 7, 1173: 8, 1183: 8, 1186: 2, 1191: 2, 1193: 8, 1225: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1378: 8, 1379: 8, 1407: 8, 1419: 8, 1427: 6, 1429: 8, 1430: 8, 1448: 8, 1456: 4, 1470: 8, 1476: 8, 1535: 8, 1988: 8, 1996: 8, 2000: 8, 2004: 8, 2008: 8, 2012: 8
},{
68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 546: 8, 547: 8, 548: 8, 549: 8, 576: 8, 593: 8, 688: 5, 832: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 6, 1138: 4, 1151: 6, 1155: 8, 1157: 4, 1164: 8, 1168: 7, 1173: 8, 1183: 8, 1186: 2, 1188: 8, 1191: 2, 1193: 8, 1225: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1378: 8, 1379: 8, 1407: 8, 1419: 8, 1427: 6, 1429: 8, 1430: 8, 1448: 8, 1456: 4, 1470: 8, 1476: 8, 1535: 8, 1988: 8, 1996: 8, 2000: 8, 2004: 8, 2008: 8, 2012: 8
}],
CAR.IONIQ_HEV_AE: [{
68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 576: 8, 593: 8, 688: 5, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 6, 1173: 8, 1225: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1322: 8, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1407: 8, 1419: 8, 1427: 6, 1429: 8, 1430: 8, 1448: 8, 1456: 4, 1470: 8, 1476: 8, 1535: 8
},{
68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 576: 8, 593: 8, 688: 5, 832: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 6, 1173: 8, 1225: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1407: 8, 1427: 6, 1429: 8, 1430: 8, 1448: 8, 1456: 4, 1470: 8, 1476: 8, 1535: 8
},{
68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 544: 8, 576: 8, 832: 8, 881: 8, 882: 8, 902: 8, 903: 8, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 6, 1173: 8, 1225: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1407: 8, 1427: 6, 1429: 8, 1430: 8, 1448: 8, 1456: 4, 1470: 8, 1476: 8, 1535: 8
}],
CAR.IONIQ_EV_AE: [{
127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 7, 546: 8, 593: 8, 688: 5, 832: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 8, 1151: 6, 1155: 8, 1156: 8, 1157: 4, 1164: 8, 1168: 7, 1173: 8, 1183: 8, 1186: 2, 1191: 2, 1225: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1379: 8, 1407: 8, 1419: 8, 1426: 8, 1427: 6, 1429: 8, 1430: 8, 1456: 4, 1470: 8, 1473: 8, 1507: 8, 1535: 8, 2015: 8
},{
127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 7, 593: 8, 688: 5, 832: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 8, 1151: 6, 1168: 7, 1173: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1407: 8, 1419: 8, 1425: 2, 1426: 8, 1427: 6, 1429: 8, 1430: 8, 1456: 4, 1470: 8, 1507: 8, 1535: 8
},{
127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 7, 545: 8, 546: 8, 548: 8, 549: 8, 593: 8, 688: 5, 832: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 8, 1151: 6, 1168: 7, 1173: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1322: 8, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1407: 8, 1419: 8, 1426: 8, 1427: 6, 1429: 8, 1430: 8, 1456: 4, 1470: 8, 1507: 8, 1535: 8
},{
127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 7, 546: 8, 832: 8, 881: 8, 882: 8, 902: 8, 903: 8, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 8, 1151: 6, 1168: 7, 1173: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1322: 8, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1407: 8, 1419: 8, 1426: 8, 1427: 6, 1429: 8, 1430: 8, 1456: 4, 1470: 8, 1507: 8
}],
CAR.SANTAFE_TM: [{
67: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 8, 593: 8, 608: 8, 688: 6, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1155: 8, 1156: 8, 1162: 8, 1164: 8, 1168: 7, 1170: 8, 1173: 8, 1183: 8, 1186: 2, 1191: 2, 1227: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1379: 8, 1384: 8, 1407: 8, 1414: 3, 1419: 8, 1427: 6, 1456: 4, 1470: 8
},{
67: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 8, 593: 8, 608: 8, 688: 6, 764: 8, 809: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1064: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1155: 8, 1162: 8, 1164: 8, 1168: 7, 1170: 8, 1173: 8, 1180: 8, 1183: 8, 1186: 2, 1227: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1371: 8, 1378: 8, 1384: 8, 1407: 8, 1414: 3, 1419: 8, 1427: 6, 1456: 4, 1470: 8, 1988: 8, 2000: 8, 2004: 8, 2008: 8, 2012: 8
},{
67: 8, 68: 8, 80: 4, 160: 8, 161: 8, 272: 8, 288: 4, 339: 8, 356: 8, 357: 8, 399: 8, 544: 8, 608: 8, 672: 8, 688: 5, 704: 1, 790: 8, 809: 8, 848: 8, 880: 8, 898: 8, 900: 8, 901: 8, 904: 8, 1056: 8, 1064: 8, 1065: 8, 1072: 8, 1075: 8, 1087: 8, 1088: 8, 1151: 8, 1200: 8, 1201: 8, 1232: 4, 1264: 8, 1265: 8, 1266: 8, 1296: 8, 1306: 8, 1312: 8, 1322: 8, 1331: 8, 1332: 8, 1333: 8, 1348: 8, 1349: 8, 1369: 8, 1370: 8, 1371: 8, 1407: 8, 1415: 8, 1419: 8, 1440: 8, 1442: 4, 1461: 8, 1470: 8
},{
67: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 8, 593: 8, 608: 8, 688: 6, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1064: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1155: 8, 1156: 8, 1168: 7, 1170: 8, 1173: 8, 1183: 8, 1186: 2, 1191: 2, 1227: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1379: 8, 1384: 8, 1407: 8, 1427: 6, 1456: 4, 1470: 8
},{
67: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 8, 593: 8, 608: 8, 688: 6, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 912: 7, 1040: 8, 1042: 8, 1064: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1156: 8, 1162: 8, 1164: 8, 1168: 7, 1170: 8, 1173: 8, 1180: 8, 1183: 8, 1191: 2, 1227: 8, 1260: 8, 1265: 4, 1268: 8, 1280: 1, 1287: 4, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1371: 8, 1378: 8, 1384: 8, 1407: 8, 1419: 8, 1425: 2, 1427: 6, 1456: 4, 1470: 8, 1628: 8, 1629: 8, 1630: 8, 1631: 8, 1674: 8, 1675: 8, 1676: 8, 1677: 8, 1791: 8
},{
67: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 8, 593: 8, 608: 8, 688: 6, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 912: 7, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1064: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1155: 8, 1156: 8, 1162: 8, 1164: 8, 1168: 7, 1170: 8, 1173: 8, 1180: 8, 1183: 8, 1186: 2, 1191: 2, 1227: 8, 1265: 4, 1268: 8, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1371: 8, 1378: 8, 1384: 8, 1407: 8, 1419: 8, 1425: 2, 1427: 6, 1456: 4, 1470: 8, 1628: 8, 1629: 8, 1630: 8, 1631: 8, 1674: 8, 1675: 8, 1676: 8, 1677: 8, 1791: 8, 2015: 8
},{
67: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 8, 546: 8, 593: 8, 608: 8, 688: 6, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1064: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1155: 8, 1156: 8, 1157: 4, 1162: 8, 1164: 8, 1168: 7, 1170: 8, 1173: 8, 1183: 8, 1186: 2, 1191: 2, 1227: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1378: 8, 1379: 8, 1384: 8, 1407: 8, 1419: 8, 1425: 2, 1427: 6, 1456: 4, 1470: 8, 1479: 8
},{
67: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 8, 546: 8, 547: 8, 548: 8, 593: 8, 608: 8, 688: 6, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1064: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1155: 8, 1156: 8, 1157: 4, 1162: 8, 1164: 8, 1168: 7, 1170: 8, 1173: 8, 1183: 8, 1186: 2, 1191: 2, 1227: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1379: 8, 1384: 8, 1407: 8, 1419: 8, 1427: 6, 1456: 4, 1470: 8, 1479: 8
},{
67: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 8, 593: 8, 608: 8, 688: 6, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1155: 8, 1156: 8, 1157: 4, 1162: 8, 1164: 8, 1168: 7, 1170: 8, 1173: 8, 1180: 8, 1183: 8, 1186: 2, 1191: 2, 1210: 8, 1227: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1371: 8, 1384: 8, 1407: 8, 1414: 3, 1419: 8, 1427: 6, 1456: 4, 1470: 8, 1911: 8
}],
CAR.PALISADE_LX2: [{
67: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 8, 549: 8, 576: 8, 593: 8, 608: 8, 688: 6, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 913: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1064: 8, 1078: 4, 1107: 5, 1123: 8, 1136: 8, 1151: 6, 1155: 8, 1156: 8, 1157: 4, 1162: 8, 1164: 8, 1168: 7, 1170: 8, 1173: 8, 1180: 8, 1186: 2, 1191: 2, 1193: 8, 1210: 8, 1225: 8, 1227: 8, 1265: 4, 1280: 8, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1371: 8, 1378: 8, 1384: 8, 1407: 8, 1419: 8, 1427: 6, 1456: 4, 1470: 8, 2000: 8, 2005: 8, 2008: 8
},{
67: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 8, 576: 8, 593: 8, 608: 8, 688: 6, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 913: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1064: 8, 1078: 4, 1107: 5, 1123: 8, 1136: 8, 1151: 6, 1155: 8, 1156: 8, 1157: 4, 1162: 8, 1164: 8, 1168: 7, 1170: 8, 1173: 8, 1180: 8, 1186: 2, 1191: 2, 1193: 8, 1210: 8, 1225: 8, 1227: 8, 1265: 4, 1280: 8, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1371: 8, 1378: 8, 1384: 8, 1407: 8, 1419: 8, 1427: 6, 1456: 4, 1470: 8
},{
67: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 8, 546: 8, 576: 8, 593: 8, 608: 8, 688: 6, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 913: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1064: 8, 1078: 4, 1107: 5, 1123: 8, 1136: 8, 1151: 6, 1155: 8, 1156: 8, 1157: 4, 1162: 8, 1164: 8, 1168: 7, 1170: 8, 1173: 8, 1180: 8, 1186: 2, 1191: 2, 1193: 8, 1210: 8, 1225: 8, 1227: 8, 1265: 4, 1280: 8, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1371: 8, 1378: 8, 1384: 8, 1407: 8, 1419: 8, 1427: 6, 1456: 4, 1470: 8
}],
CAR.VELOSTER_JS: [{
64: 8, 66: 8, 67: 8, 68: 8, 127: 8, 128: 8, 129: 8, 273: 8, 274: 8, 275: 8, 339: 8, 354: 3, 356: 4, 399: 8, 512: 6, 544: 8, 558: 8, 593: 8, 608: 8, 688: 5, 790: 8, 809: 8, 832: 8, 884: 8, 897: 8, 899: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1170: 8, 1181: 5, 1186: 2, 1191: 2, 1265: 4, 1280: 1, 1282: 4, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1349: 8, 1351: 8, 1353: 8, 1356: 8, 1363: 8, 1365: 8, 1366: 8, 1367: 8, 1369: 8, 1378: 4, 1407: 8, 1414: 3, 1415: 8, 1419: 8, 1427: 6, 1440: 8, 1456: 4, 1470: 8, 1486: 8, 1487: 8, 1491: 8, 1530: 8, 1532: 5, 1872: 8, 1988: 8, 1996: 8, 2000: 8, 2001: 8, 2004: 8, 2008: 8, 2009: 8, 2012: 8, 2015: 8, 2016: 8, 2017: 8, 2024: 8, 2025: 8
}],
CAR.GRANDEUR_IG: [{
67: 8, 68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 8, 593: 8, 608: 8, 688: 5, 809: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 8, 1151: 6, 1162: 4, 1168: 7, 1170: 8, 1173: 8, 1185: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1371: 8, 1378: 4, 1384: 8, 1407: 8, 1419: 8, 1425: 2, 1427: 6, 1456: 4, 1470: 8
},{
67: 8, 68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 8, 546: 8, 547: 8, 549: 8, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 8, 1151: 6, 1156: 8, 1157: 4, 1162: 4, 1168: 7, 1170: 8, 1173: 8, 1185: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1384: 8, 1407: 8, 1419: 8, 1427: 6, 1456: 4, 1470: 8
},{
67: 8, 68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 8, 546: 8, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 8, 1151: 6, 1156: 8, 1157: 4, 1162: 4, 1168: 7, 1170: 8, 1173: 8, 1185: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1371: 8, 1378: 4, 1384: 8, 1407: 8, 1419: 8, 1425: 2, 1427: 6, 1456: 4, 1470: 8
},{
67: 8, 68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 8, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 8, 1151: 6, 1156: 8, 1157: 4, 1162: 4, 1168: 7, 1170: 8, 1173: 8, 1185: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1371: 8, 1378: 4, 1384: 8, 1407: 8, 1419: 8, 1425: 2, 1427: 6, 1456: 4, 1470: 8
},{
67: 8, 68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 8, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 8, 1151: 6, 1156: 8, 1162: 4, 1168: 7, 1170: 8, 1173: 8, 1185: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1371: 8, 1378: 4, 1384: 8, 1407: 8, 1419: 8, 1425: 2, 1427: 6, 1456: 4, 1470: 8
}],
CAR.GRANDEUR_HEV_IG: [{
68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 546: 8, 576: 8, 593: 8, 688: 5, 832: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 6, 1151: 6, 1156: 8, 1157: 4, 1168: 7, 1173: 8, 1185: 8, 1225: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1371: 8, 1378: 4, 1407: 8, 1419: 8, 1425: 2, 1427: 6, 1429: 8, 1430: 8, 1448: 8, 1456: 4, 1470: 8, 1476: 8, 1535: 8
},{
68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 546: 8, 576: 8, 593: 8, 688: 5, 832: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 6, 1151: 6, 1156: 8, 1157: 4, 1168: 7, 1173: 8, 1185: 8, 1225: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1379: 8, 1407: 8, 1419: 8, 1425: 2, 1427: 6, 1429: 8, 1430: 8, 1448: 8, 1456: 4, 1470: 8, 1476: 8, 1535: 8
}],
CAR.GRANDEUR_FL_IG: [{
67: 8, 68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 516: 8, 524: 8, 528: 8, 532: 8, 544: 8, 576: 8, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 8, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 913: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 8, 1155: 8, 1156: 8, 1157: 4, 1162: 8, 1164: 8, 1168: 8, 1170: 8, 1173: 8, 1180: 8, 1186: 2, 1191: 2, 1193: 8, 1210: 8, 1225: 8, 1227: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 8, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1371: 8, 1378: 8, 1384: 8, 1407: 8, 1419: 8, 1427: 6, 1456: 4, 1470: 8
}],
CAR.GRANDEUR_HEV_FL_IG: [{
127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 576: 8, 593: 8, 688: 5, 832: 8, 865: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 913: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1108: 8, 1136: 6, 1138: 5, 1151: 8, 1155: 8, 1156: 8, 1157: 4, 1162: 8, 1164: 8, 1168: 8, 1173: 8, 1180: 8, 1186: 2, 1191: 2, 1193: 8, 1210: 8, 1225: 8, 1227: 8, 1265: 4, 1268: 8, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 8, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1371: 8, 1378: 8, 1407: 8, 1419: 8, 1427: 6, 1429: 8, 1430: 8, 1448: 8, 1456: 4, 1470: 8, 1476: 8, 1535: 8
},{
127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 516: 8, 544: 8, 576: 8, 593: 8, 688: 5, 832: 8, 865: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 913: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1108: 8, 1136: 6, 1138: 5, 1151: 8, 1155: 8, 1156: 8, 1157: 4, 1162: 8, 1164: 8, 1168: 8, 1173: 8, 1180: 8, 1186: 2, 1191: 2, 1193: 8, 1210: 8, 1225: 8, 1227: 8, 1265: 4, 1268: 8, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 8, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1371: 8, 1378: 8, 1407: 8, 1419: 8, 1427: 6, 1429: 8, 1430: 8, 1448: 8, 1456: 4, 1470: 8, 1476: 8, 1535: 8
}],
CAR.NEXO_FE: [{
127: 8, 145: 8, 146: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 512: 6, 544: 8, 593: 8, 688: 5, 832: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 905: 8, 908: 8, 909: 8, 912: 7, 916: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 8, 1151: 8, 1155: 8, 1156: 8, 1157: 4, 1162: 8, 1164: 8, 1168: 7, 1173: 8, 1174: 8, 1180: 8, 1183: 8, 1186: 2, 1191: 2, 1192: 8, 1193: 8, 1210: 8, 1219: 8, 1220: 8, 1222: 6, 1223: 8, 1224: 8, 1227: 8, 1230: 6, 1231: 6, 1265: 4, 1268: 8, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1297: 8, 1298: 8, 1305: 8, 1312: 8, 1315: 8, 1316: 8, 1322: 8, 1324: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1371: 8, 1407: 8, 1419: 8, 1427: 6, 1429: 8, 1430: 8, 1437: 8, 1456: 4, 1460: 8, 1470: 8, 1484: 8, 1507: 8, 1520: 8, 1535: 8
},{
127: 8, 145: 8, 146: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 512: 6, 544: 8, 546: 8, 593: 8, 688: 5, 832: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 905: 8, 908: 8, 909: 8, 912: 7, 916: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 8, 1151: 8, 1155: 8, 1156: 8, 1157: 4, 1162: 8, 1164: 8, 1168: 7, 1173: 8, 1174: 8, 1180: 8, 1183: 8, 1186: 2, 1191: 2, 1192: 8, 1193: 8, 1210: 8, 1219: 8, 1220: 8, 1222: 6, 1223: 8, 1224: 8, 1227: 8, 1230: 6, 1231: 6, 1265: 4, 1268: 8, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1297: 8, 1298: 8, 1305: 8, 1312: 8, 1315: 8, 1316: 8, 1322: 8, 1324: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1371: 8, 1407: 8, 1419: 8, 1427: 6, 1429: 8, 1430: 8, 1437: 8, 1456: 4, 1460: 8, 1470: 8, 1484: 8, 1507: 8, 1520: 8, 1535: 8
}],
# kia
CAR.K3_BD: [{
67: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 8, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1078: 4, 1107: 5, 1136: 8, 1156: 8, 1170: 8, 1173: 8, 1191: 2, 1225: 8, 1265: 4, 1280: 4, 1287: 4, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1384: 8, 1394: 8, 1407: 8, 1427: 6, 1456: 4, 1470: 8
}],
CAR.K5_JF: [{
64: 8, 66: 8, 67: 8, 68: 8, 127: 8, 273: 8, 274: 8, 275: 8, 339: 8, 356: 4, 399: 8, 447: 8, 512: 6, 544: 8, 593: 8, 608: 8, 688: 5, 790: 8, 809: 8, 832: 8, 884: 8, 897: 8, 899: 8, 902: 8, 903: 6, 909: 8, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1151: 6, 1168: 7, 1170: 8, 1186: 2, 1191: 2, 1253: 8, 1254: 8, 1255: 8, 1265: 4, 1280: 1, 1282: 4, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1331: 8, 1332: 8, 1333: 8, 1342: 6, 1345: 8, 1348: 8, 1349: 8, 1351: 8, 1353: 8, 1363: 8, 1365: 8, 1366: 8, 1367: 8, 1369: 8, 1407: 8, 1414: 3, 1415: 8, 1419: 8, 1425: 2, 1427: 6, 1440: 8, 1456: 4, 1470: 8, 1472: 8, 1486: 8, 1487: 8, 1491: 8, 1530: 8, 1532: 5, 1952: 8, 1960: 8, 1988: 8, 1996: 8, 2001: 8, 2004: 8, 2008: 8, 2009: 8, 2012: 8, 2016: 8, 2017: 8, 2024: 8, 2025: 8
},{
64: 8, 66: 8, 67: 8, 68: 8, 127: 8, 128: 8, 129: 8, 273: 8, 274: 8, 275: 8, 339: 8, 354: 3, 356: 4, 399: 8, 512: 6, 544: 8, 593: 8, 608: 8, 688: 5, 790: 8, 809: 8, 832: 8, 897: 8, 899: 8, 902: 8, 903: 6, 912: 7, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1151: 6, 1168: 7, 1170: 8, 1265: 4, 1268: 8, 1280: 1, 1282: 4, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1331: 8, 1332: 8, 1333: 8, 1342: 6, 1345: 8, 1348: 8, 1349: 8, 1351: 8, 1353: 8, 1356: 8, 1363: 8, 1365: 8, 1366: 8, 1367: 8, 1369: 8, 1407: 8, 1419: 8, 1425: 2, 1427: 6, 1440: 8, 1456: 4, 1470: 8, 1472: 8, 1491: 8, 1492: 8
},{
64: 8, 66: 8, 67: 8, 68: 8, 127: 8, 128: 8, 129: 8, 273: 8, 274: 8, 275: 8, 339: 8, 354: 3, 356: 4, 399: 8, 512: 6, 544: 8, 608: 8, 790: 8, 809: 8, 832: 8, 899: 8, 902: 8, 903: 6, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1151: 6, 1168: 7, 1170: 8, 1265: 4, 1280: 1, 1282: 4, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1331: 8, 1332: 8, 1333: 8, 1342: 6, 1345: 8, 1348: 8, 1349: 8, 1351: 8, 1353: 8, 1356: 8, 1363: 8, 1365: 8, 1366: 8, 1367: 8, 1369: 8, 1407: 8, 1415: 8, 1419: 8, 1427: 6, 1440: 8, 1456: 4, 1470: 8, 1472: 8, 1491: 8, 1492: 8, 1905: 8, 1913: 8, 2001: 8, 2009: 8, 2015: 8, 2016: 8, 2017: 8, 2024: 8, 2025: 8
},{
64: 8, 66: 8, 67: 8, 68: 8, 127: 8, 273: 8, 274: 8, 275: 8, 339: 8, 356: 4, 399: 8, 512: 6, 544: 8, 608: 8, 790: 8, 809: 8, 832: 8, 899: 8, 902: 8, 903: 6, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1151: 6, 1168: 7, 1170: 8, 1236: 2, 1265: 4, 1280: 1, 1282: 4, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1331: 8, 1332: 8, 1333: 8, 1342: 6, 1345: 8, 1348: 8, 1349: 8, 1351: 8, 1353: 8, 1363: 8, 1366: 8, 1367: 8, 1369: 8, 1371: 8, 1407: 8, 1415: 8, 1419: 8, 1427: 6, 1440: 8, 1456: 4, 1470: 8, 1472: 8, 1491: 8, 1492: 8, 2015: 8, 2024: 8, 2025: 8
},{
64: 8, 66: 8, 67: 8, 68: 8, 127: 8, 273: 8, 274: 8, 275: 8, 339: 8, 356: 4, 399: 8, 512: 6, 544: 8, 608: 8, 625: 8, 790: 8, 809: 8, 832: 8, 899: 8, 902: 8, 903: 6, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1151: 6, 1168: 7, 1170: 8, 1236: 2, 1265: 4, 1280: 1, 1282: 4, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1331: 8, 1332: 8, 1333: 8, 1342: 6, 1345: 8, 1348: 8, 1349: 8, 1351: 8, 1353: 8, 1363: 8, 1366: 8, 1367: 8, 1369: 8, 1371: 8, 1407: 8, 1415: 8, 1419: 8, 1427: 6, 1440: 8, 1456: 4, 1470: 8, 1472: 8, 1491: 8, 1492: 8, 2015: 8, 2024: 8, 2025: 8
},{
64: 8, 66: 8, 67: 8, 68: 8, 127: 8, 128: 8, 129: 8, 273: 8, 274: 8, 275: 8, 339: 8, 354: 3, 356: 4, 399: 8, 447: 8, 512: 6, 544: 8, 558: 8, 593: 8, 608: 8, 640: 8, 688: 5, 790: 8, 809: 8, 832: 8, 884: 8, 897: 8, 899: 8, 902: 8, 903: 6, 909: 8, 912: 7, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1151: 6, 1168: 7, 1170: 8, 1186: 2, 1191: 2, 1253: 8, 1254: 8, 1255: 8, 1265: 4, 1268: 8, 1280: 1, 1282: 4, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1331: 8, 1332: 8, 1333: 8, 1342: 6, 1345: 8, 1348: 8, 1349: 8, 1351: 8, 1353: 8, 1356: 8, 1363: 8, 1365: 8, 1366: 8, 1367: 8, 1369: 8, 1407: 8, 1414: 3, 1415: 8, 1419: 8, 1425: 2, 1427: 6, 1440: 8, 1456: 4, 1470: 8, 1472: 8, 1486: 8, 1487: 8, 1491: 8, 1492: 8, 1530: 8, 1532: 5, 1792: 8, 1872: 8, 1937: 8, 1953: 8, 1968: 8, 1988: 8, 1996: 8, 2000: 8, 2001: 8, 2004: 8, 2008: 8, 2009: 8, 2012: 8, 2015: 8, 2016: 8, 2017: 8, 2024: 8, 2025: 8
}],
CAR.K5_HEV_JF: [{
68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 593: 8, 688: 5, 832: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 6, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 6, 1151: 6, 1168: 7, 1173: 8, 1236: 2, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1322: 8, 1331: 8, 1332: 8, 1333: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1371: 8, 1407: 8, 1419: 8, 1427: 6, 1429: 8, 1430: 8, 1448: 8, 1456: 4, 1470: 8, 1476: 8, 1535: 8
},{
68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 576: 8, 593: 8, 688: 5, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 909: 8, 912: 7, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 6, 1151: 6, 1168: 7, 1173: 8, 1180: 8, 1186: 2, 1191: 2, 1265: 4, 1268: 8, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1371: 8, 1407: 8, 1419: 8, 1420: 8, 1425: 2, 1427: 6, 1429: 8, 1430: 8, 1448: 8, 1456: 4, 1470: 8, 1476: 8, 1535: 8
}],
CAR.SPORTAGE_QL: [{
67: 8, 68: 8, 127: 8, 273: 8, 274: 8, 275: 8, 339: 8, 356: 4, 399: 8, 447: 8, 512: 6, 544: 8, 593: 8, 608: 8, 688: 5, 790: 8, 809: 8, 832: 8, 884: 8, 897: 8, 899: 8, 902: 8, 903: 6, 909: 8, 916: 8, 1040: 8, 1078: 4, 1170: 8, 1191: 2, 1253: 8, 1254: 8, 1255: 8, 1265: 4, 1280: 1, 1282: 4, 1287: 4, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1349: 8, 1351: 8, 1353: 8, 1363: 8, 1365: 8, 1366: 8, 1367: 8, 1369: 8, 1407: 8, 1419: 8, 1427: 6, 1440: 8, 1456: 4, 1470: 8, 1472: 8, 1486: 8, 1487: 8, 1491: 8, 1492: 8, 1530: 8
}],
CAR.SORENTO_UM: [{
67: 8, 68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 8, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1064: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1168: 7, 1170: 8, 1173: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1331: 8, 1332: 8, 1333: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1370: 8, 1371: 8, 1384: 8, 1407: 8, 1411: 8, 1419: 8, 1425: 2, 1427: 6, 1444: 8, 1456: 4, 1470: 8, 1489: 1
},{
67: 8, 68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 8, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1156: 8, 1157: 4, 1162: 8, 1168: 7, 1170: 8, 1173: 8, 1186: 2, 1191: 2, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1384: 8, 1407: 8, 1419: 8, 1427: 6, 1456: 4, 1470: 8, 1479: 8
},{
67: 8, 68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 8, 546: 8, 548: 8, 550: 8, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1156: 8, 1157: 4, 1162: 8, 1168: 7, 1170: 8, 1173: 8, 1186: 2, 1191: 2, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1384: 8, 1407: 8, 1419: 8, 1427: 6, 1456: 4, 1470: 8, 1479: 8
},{
67: 8, 68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 8, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1156: 8, 1157: 4, 1162: 8, 1168: 7, 1170: 8, 1173: 8, 1186: 2, 1191: 2, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1371: 8, 1378: 8, 1384: 8, 1407: 8, 1419: 8, 1427: 6, 1456: 4, 1470: 8, 1479: 8
},{
67: 8, 68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 7, 608: 8, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 5, 902: 8, 903: 6, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1168: 7, 1170: 8, 1173: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1322: 8, 1331: 8, 1332: 8, 1333: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1370: 8, 1384: 5, 1407: 8, 1411: 8, 1419: 8, 1427: 6, 1437: 8, 1444: 8, 1456: 4, 1470: 8, 1489: 1, 1990: 8, 1998: 8
}],
CAR.STINGER_CK: [{
67: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 358: 6, 359: 8, 544: 8, 576: 8, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 913: 8, 916: 8, 1027: 8, 1028: 8, 1040: 8, 1042: 8, 1053: 8, 1054: 8, 1055: 8, 1056: 8, 1057: 8, 1064: 8, 1078: 4, 1102: 8, 1107: 5, 1136: 8, 1151: 8, 1155: 8, 1156: 8, 1157: 4, 1162: 8, 1164: 8, 1168: 7, 1170: 8, 1173: 8, 1180: 8, 1183: 8, 1184: 8, 1186: 2, 1191: 2, 1193: 8, 1210: 8, 1225: 8, 1227: 8, 1265: 4, 1280: 1, 1281: 4, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 8, 1343: 8, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1370: 8, 1371: 8, 1378: 8, 1384: 8, 1407: 8, 1419: 8, 1425: 2, 1427: 6, 1437: 8, 1456: 4, 1460: 8, 1470: 8, 1485: 8
},{
67: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 358: 6, 359: 8, 544: 8, 576: 8, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1064: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1168: 7, 1170: 8, 1173: 8, 1184: 8, 1265: 4, 1280: 1, 1281: 4, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1371: 8, 1378: 4, 1379: 8, 1384: 8, 1407: 8, 1419: 8, 1425: 2, 1427: 6, 1456: 4, 1470: 8
},{
67: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 358: 6, 359: 8, 544: 8, 576: 8, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1064: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1157: 4, 1168: 7, 1170: 8, 1173: 8, 1184: 8, 1265: 4, 1280: 1, 1281: 4, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1371: 8, 1378: 4, 1384: 8, 1407: 8, 1419: 8, 1427: 6, 1437: 8, 1456: 4, 1470: 8
},{
67: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 358: 6, 359: 8, 544: 8, 576: 8, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1064: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1168: 7, 1170: 8, 1173: 8, 1184: 8, 1265: 4, 1280: 1, 1281: 4, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1378: 4, 1379: 8, 1384: 8, 1407: 8, 1419: 8, 1425: 2, 1427: 6, 1456: 4, 1470: 8
}],
CAR.NIRO_EV_DE: [{
127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 593: 8, 688: 5, 832: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 8, 1151: 6, 1157: 4, 1168: 7, 1173: 8, 1186: 2, 1191: 2, 1225: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1407: 8, 1419: 8, 1426: 8, 1427: 6, 1429: 8, 1430: 8, 1456: 4, 1470: 8, 1473: 8, 1507: 8, 1535: 8
},{
127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 546: 8, 593: 8, 688: 5, 832: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 8, 1151: 6, 1157: 4, 1168: 7, 1173: 8, 1186: 2, 1191: 2, 1225: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1407: 8, 1419: 8, 1426: 8, 1427: 6, 1429: 8, 1430: 8, 1456: 4, 1470: 8, 1473: 8, 1507: 8, 1535: 8
}],
CAR.NIRO_HEV_DE: [{
68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 576: 8, 593: 8, 688: 5, 832: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 6, 1173: 8, 1225: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1419: 8, 1427: 6, 1429: 8, 1430: 8, 1448: 8, 1456: 4, 1470: 8, 1476: 8, 1535: 8
},{
68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 576: 8, 593: 8, 688: 5, 832: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 6, 1173: 8, 1225: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1407: 8, 1419: 8, 1427: 6, 1429: 8, 1430: 8, 1448: 8, 1456: 4, 1470: 8, 1476: 8, 1535: 8
},{
68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 546: 8, 576: 8, 832: 8, 881: 8, 882: 8, 902: 8, 903: 8, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 6, 1173: 8, 1225: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1407: 8, 1419: 8, 1427: 6, 1429: 8, 1430: 8, 1448: 8, 1456: 4, 1470: 8, 1476: 8, 1535: 8
}],
CAR.K7_YG: [{
67: 8, 68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 8, 546: 8, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1156: 8, 1162: 4, 1168: 7, 1170: 8, 1173: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1378: 4, 1384: 8, 1407: 8, 1419: 8, 1427: 6, 1444: 8, 1456: 4, 1470: 8
},{
67: 8, 68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 8, 546: 8, 608: 8, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 902: 8, 903: 8, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1156: 8, 1162: 4, 1168: 7, 1170: 8, 1173: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1378: 4, 1384: 8, 1407: 8, 1419: 8, 1427: 6, 1444: 8, 1456: 4, 1470: 8
},{
67: 8, 68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 8, 546: 8, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 7, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 903: 8, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1078: 4, 1107: 5, 1136: 8, 1151: 6, 1156: 8, 1157: 4, 1162: 4, 1168: 7, 1170: 8, 1173: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1371: 8, 1378: 4, 1384: 8, 1407: 8, 1419: 8, 1427: 6, 1444: 8, 1456: 4, 1470: 8
}],
CAR.K7_HEV_YG: [{
68: 8, 127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 576: 8, 593: 8, 688: 5, 832: 8, 865: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 913: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1096: 8, 1102: 8, 1108: 8, 1136: 6, 1138: 5, 1151: 8, 1155: 8, 1156: 8, 1157: 4, 1162: 8, 1164: 8, 1168: 7, 1173: 8, 1180: 8, 1186: 2, 1191: 2, 1210: 8, 1227: 8, 1265: 4, 1268: 8, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1343: 8, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1371: 8, 1378: 8, 1379: 8, 1407: 8, 1419: 8, 1427: 6, 1429: 8, 1430: 8, 1448: 8, 1456: 4, 1470: 8, 1476: 8, 1535: 8
}],
CAR.SELTOS_SP2: [{
67: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 524: 8, 544: 8, 593: 8, 608: 8, 688: 6, 809: 8, 832: 8, 854: 8, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 905: 8, 909: 8, 910: 5, 911: 5, 913: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1064: 8, 1078: 4, 1102: 8, 1107: 5, 1114: 8, 1136: 8, 1145: 8, 1151: 8, 1155: 8, 1156: 8, 1157: 4, 1162: 8, 1164: 8, 1168: 8, 1170: 8, 1173: 8, 1186: 2, 1191: 2, 1225: 8, 1265: 4, 1280: 8, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1379: 8, 1384: 8, 1394: 8, 1407: 8, 1419: 8, 1427: 6, 1446: 8, 1456: 4, 1470: 8, 1485: 8, 1988: 8, 1996: 8, 2000: 8, 2004: 8, 2008: 8, 2012: 8, 2015: 8
}],
CAR.SOUL_EV_SK3: [{
127: 8, 304: 8, 320: 8, 339: 8, 352: 8, 356: 4, 544: 8, 546: 8, 548: 8, 549: 8, 593: 8, 688: 6, 832: 8, 881: 8, 882: 8, 897: 8, 902: 8, 903: 8, 905: 8, 909: 8, 913: 8, 916: 8, 1040: 8, 1042: 8, 1056: 8, 1057: 8, 1078: 4, 1136: 8, 1151: 6, 1155: 8, 1156: 8, 1157: 4, 1162: 8, 1164: 8, 1168: 7, 1173: 8, 1186: 2, 1191: 2, 1193: 8, 1225: 8, 1227: 8, 1265: 4, 1280: 1, 1287: 4, 1290: 8, 1291: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1355: 8, 1363: 8, 1369: 8, 1378: 8, 1379: 8, 1407: 8, 1419: 8, 1426: 8, 1427: 6, 1429: 8, 1430: 8, 1456: 4, 1470: 8, 1473: 8, 1507: 8, 1535: 8
}],
CAR.MOHAVE_HM: [{
67: 8, 127: 8, 304: 8, 320: 8, 339: 8, 356: 4, 544: 8, 593: 8, 608: 8, 688: 5, 809: 8, 832: 8, 854: 8, 870: 7, 871: 8, 872: 8, 897: 8, 902: 8, 905: 8, 909: 8, 913: 8, 916: 8, 1040: 8, 1056: 8, 1057: 8, 1064: 8, 1078: 4, 1107: 5, 1123: 8, 1136: 8, 1145: 8, 1151: 8, 1155: 8, 1156: 8, 1157: 4, 1162: 8, 1164: 8, 1168: 8, 1170: 8, 1173: 8, 1180: 8, 1186: 2, 1191: 2, 1193: 8, 1210: 8, 1225: 8, 1227: 8, 1265: 4, 1280: 8, 1287: 4, 1290: 8, 1292: 8, 1294: 8, 1312: 8, 1322: 8, 1342: 6, 1345: 8, 1348: 8, 1363: 8, 1369: 8, 1371: 8, 1378: 8, 1384: 8, 1407: 8, 1419: 8, 1427: 6, 1456: 4, 1470: 8, 1479: 8
}]
}
if Params().get_bool("FingerprintTwoSet"):
FW_VERSIONS = {
# genesis
CAR.GENESIS_G70_IK: {
(Ecu.fwdRadar, 0x7d0, None): [b'\xf1\x00IK__ SCC F-CUP 1.00 1.02 96400-G9100 \xf1\xa01.02',],
(Ecu.esp, 0x7d1, None): [b'\xf1\x00\x00\x00\x00\x00\x00\x00',],
(Ecu.engine, 0x7e0, None): [b'\xf1\x81640F0051\x00\x00\x00\x00\x00\x00\x00\x00',],
(Ecu.eps, 0x7d4, None): [b'\xf1\x00IK MDPS R 1.00 1.06 57700-G9420 4I4VL106',],
(Ecu.fwdCamera, 0x7c4, None): [b'\xf1\x00IK MFC AT USA LHD 1.00 1.01 95740-G9000 170920',],
(Ecu.transmission, 0x7e1, None): [b'\xf1\x87VDJLT17895112DN4\x88fVf\x99\x88\x88\x88\x87fVe\x88vhwwUFU\x97eFex\x99\xff\xb7\x82\xf1\x81E25\x00\x00\x00\x00\x00\x00\x00\xf1\x00bcsh8p54 E25\x00\x00\x00\x00\x00\x00\x00SIK0T33NB2\x11\x1am\xda',],
},
CAR.GENESIS_G70_2020: {
(Ecu.eps, 0x7d4, None): [
b'\xf1\x00IK MDPS R 1.00 1.07 57700-G9220 4I2VL107',
b'\xf1\x00IK MDPS R 1.00 1.07 57700-G9420 4I4VL107',
b'\xf1\x00IK MDPS R 1.00 1.08 57700-G9420 4I4VL108',
],
(Ecu.transmission, 0x7e1, None): [
b'\xf1\x87VCJLP18407832DN3\x88vXfvUVT\x97eFU\x87d7v\x88eVeveFU\x89\x98\x7f\xff\xb2\xb0\xf1\x81E25\x00\x00\x00',
b'\x00\x00\x00\x00\xf1\x00bcsh8p54 E25\x00\x00\x00\x00\x00\x00\x00SIK0T33NB4\xecE\xefL',
b'\xf1\x87VDKLT18912362DN4wfVfwefeveVUwfvw\x88vWfvUFU\x89\xa9\x8f\xff\x87w\xf1\x81E25\x00\x00\x00\x00\x00\x00\x00\xf1\x00bcsh8p54 E25\x00\x00\x00\x00\x00\x00\x00SIK0T33NB4\xecE\xefL',
b'\xf1\x87VDJLC18480772DK9\x88eHfwfff\x87eFUeDEU\x98eFe\x86T5DVyo\xff\x87s\xf1\x81E25\x00\x00\x00\x00\x00\x00\x00\xf1\x00bcsh8p54 E25\x00\x00\x00\x00\x00\x00\x00SIK0T33KB5\x9f\xa5&\x81',
],
(Ecu.fwdRadar, 0x7d0, None): [
b'\xf1\x00IK__ SCC F-CUP 1.00 1.02 96400-G9100 ',
b'\xf1\x00IK__ SCC F-CUP 1.00 1.02 96400-G9100 \xf1\xa01.02',
b'\xf1\x00IK__ SCC FHCUP 1.00 1.02 96400-G9000 ',
],
(Ecu.fwdCamera, 0x7c4, None): [
b'\xf1\x00IK MFC AT USA LHD 1.00 1.01 95740-G9000 170920',
b'\xf1\x00IK MFC AT KOR LHD 1.00 1.01 95740-G9000 170920',
],
(Ecu.engine, 0x7e0, None): [
b'\xf1\x81640J0051\x00\x00\x00\x00\x00\x00\x00\x00',
b'\xf1\x81640H0051\x00\x00\x00\x00\x00\x00\x00\x00',
],
},
# hyundai
CAR.AVANTE_CN7: {
(Ecu.fwdRadar, 0x7d0, None): [
b'\xf1\x00CN7_ SCC F-CUP 1.00 1.01 99110-AA000 ',
b'\xf1\x00CN7_ SCC FHCUP 1.00 1.01 99110-AA000 ',
b'\xf1\x8799110AA000\xf1\x00CN7_ SCC FHCUP 1.00 1.01 99110-AA000 ',
b'\xf1\x8799110AA000\xf1\x00CN7_ SCC F-CUP 1.00 1.01 99110-AA000 ',
],
(Ecu.eps, 0x7d4, None): [
b'\xf1\x87\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xf1\x00CN7 MDPS C 1.00 1.06 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 4CNDC106',
b'\xf1\x8756310/AA070\xf1\x00CN7 MDPS C 1.00 1.06 56310/AA070 4CNDC106',
b'\xf1\x8756310AA050\x00\xf1\x00CN7 MDPS C 1.00 1.06 56310AA050\x00 4CNDC106',
],
(Ecu.fwdCamera, 0x7c4, None): [
b'\xf1\x00CN7 MFC AT USA LHD 1.00 1.00 99210-AB000 200819',
b'\xf1\x00CN7 MFC AT USA LHD 1.00 1.03 99210-AA000 200819',
b'\xf1\x00CN7 MFC AT USA LHD 1.00 1.01 99210-AB000 210205',
],
(Ecu.esp, 0x7d1, None): [
b'\xf1\x00CN ESC \t 101 \x10\x03 58910-AB800',
b'\xf1\x8758910-AA800\xf1\x00CN ESC \t 104 \x08\x03 58910-AA800',
b'\xf1\x8758910-AB800\xf1\x00CN ESC \t 101 \x10\x03 58910-AB800',
],
(Ecu.transmission, 0x7e1, None): [
b'\xf1\x00HT6WA280BLHT6VA640A1CCN0N20NS5\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
b'\xf1\x00HT6WA280BLHT6VA640A1CCN0N20NS5\x00\x00\x00\x00\x00\x00\xe8\xba\xce\xfa',
b'\xf1\x87CXMQFM2135005JB2E\xb9\x89\x98W\xa9y\x97h\xa9\x98\x99wxvwh\x87\177\xffx\xff\xff\xff,,\xf1\x89HT6VA640A1\xf1\x82CCN0N20NS5\x00\x00\x00\x00\x00\x00',
b'\xf1\x87CXMQFM1916035JB2\x88vvgg\x87Wuwgev\xa9\x98\x88\x98h\x99\x9f\xffh\xff\xff\xff\xa5\xee\xf1\x89HT6VA640A1\xf1\x82CCN0N20NS5\x00\x00\x00\x00\x00\x00',
b'\xf1\x87CXLQF40189012JL2f\x88\x86\x88\x88vUex\xb8\x88\x88\x88\x87\x88\x89fh?\xffz\xff\xff\xff\x08z\xf1\x89HT6VA640A1\xf1\x82CCN0N20NS5\x00\x00\x00\x00\x00\x00',
b'\xf1\x87CXMQFM2728305JB2E\x97\x87xw\x87vwgw\x84x\x88\x88w\x89EI\xbf\xff{\xff\xff\xff\xe6\x0e\xf1\x89HT6VA640A1\xf1\x82CCN0N20NS5\x00\x00\x00\x00\x00\x00',
b'\xf1\x87CXMQFM3806705JB2\x89\x87wwx\x88g\x86\x99\x87\x86xwwv\x88yv\x7f\xffz\xff\xff\xffV\x15\xf1\x89HT6VA640A1\xf1\x82CCN0N20NS5\x00\x00\x00\x00\x00\x00',
],
(Ecu.engine, 0x7e0, None): [
b'\xf1\x82CNCWD0AMFCXCSFFA',
b'\xf1\x82CNCWD0AMFCXCSFFB',
b'\xf1\x82CNCVD0AMFCXCSFFB',
b'\xf1\x870\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xf1\x82CNDWD0AMFCXCSG8A',
],
},
CAR.I30_PD: {
(Ecu.fwdRadar, 0x7d0, None): [b'\xf1\x00PD__ SCC F-CUP 1.00 1.01 99110-G3100 ',],
(Ecu.esp, 0x7d1, None): [b'\xf1\x00PD ESC \x11 100 \a\x03 58910-G3AC0',],
(Ecu.engine, 0x7e0, None): [b'\x01TPD-1A506F000H00',],
(Ecu.eps, 0x7d4, None): [b'\xf1\x00PDu MDPS C 1.00 1.01 56310/G3690 4PDUC101',],
(Ecu.fwdCamera, 0x7c4, None): [b'\xf1\x00PDP LKAS AT AUS RHD 1.00 1.01 99211-G4000 v60',],
(Ecu.transmission, 0x7e1, None): [b'\xf1\x816U2VA051\x00\x00\xf1\x006U2V0_C2\x00\x006U2VA051\x00\x00DPD0H16US0\x00\x00\x00\x00',],
},
CAR.SONATA_DN8: {
(Ecu.fwdRadar, 0x7d0, None): [
b'\xf1\x00DN8_ SCC FHCUP 1.00 1.01 99110-L1000 ',
b'\xf1\x00DN8_ SCC FHCUP 1.00 1.00 99110-L0000 ',
b'\xf1\x00DN8_ SCC F-CU- 1.00 1.00 99110-L0000 ',
],
(Ecu.esp, 0x7d1, None): [
b'\xf1\x00DN ESC \x01 102\x19\x04\x13 58910-L1300\xf1\xa01.02',
b'\xf1\x8758910-L0100\xf1\x00DN ESC \x06 104\x19\x08\x01 58910-L0100\xf1\xa01.04',
],
(Ecu.engine, 0x7e0, None): [
b'HM6M2_0a0_BD0',
b'\xf1\x87391162M003\xf1\xa0000F',
b'\xf1\x87391162M003\xf1\xa00240',
],
(Ecu.eps, 0x7d4, None): [
b'\xf1\x8756310-L1010\xf1\x00DN8 MDPS C 1.00 1.03 56310-L1010 4DNDC103\xf1\xa01.03',
b'\xf1\x8756310L0010\x00\xf1\x00DN8 MDPS C 1.00 1.01 56310L0010\x00 4DNAC101\xf1\xa01.01',
b'\xf1\x8756310-L0010\xf1\x00DN8 MDPS C 1.00 1.01 56310-L0010 4DNAC101\xf1\xa01.01',
],
(Ecu.fwdCamera, 0x7c4, None): [
b'\xf1\x00DN8 MFC AT KOR LHD 1.00 1.02 99211-L1000 190422',
b'\xf1\x00DN8 MFC AT USA LHD 1.00 1.00 99211-L0000 190716',
b'\xf1\x00DN8 MFC AT USA LHD 1.00 1.01 99211-L0000 191016',
],
(Ecu.transmission, 0x7e1, None): [
b'\xf1\x00HT6TA260BLHT6TA800A1TDN8C20KS4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
b'\xf1\x00bcsh8p54 U903\x00\x00\x00\x00\x00\x00SDN8T16NB0z{\xd4v',
],
},
CAR.SONATA_HEV_DN8: {
(Ecu.fwdRadar, 0x7d0, None): [b'\xf1\x00DNhe SCC FHCUP 1.00 1.02 99110-L5000 ',],
(Ecu.esp, 0x7d1, None): [b'\xf1\x8758910-L0100\xf1\x00DN ESC \x06 104\x19\x08\x01 58910-L0100\xf1\xa01.04',],
(Ecu.engine, 0x7e0, None): [b'\xf1\x87391062J002\xf1\xa0000P',],
(Ecu.eps, 0x7d4, None): [b'\xf1\x8756310-L5500\xf1\x00DN8 MDPS C 1.00 1.02 56310-L5500 4DNHC102\xf1\xa01.02',],
(Ecu.fwdCamera, 0x7c4, None): [b'\xf1\x00DN8HMFC AT USA LHD 1.00 1.04 99211-L1000 191016',],
(Ecu.transmission, 0x7e1, None): [b'\xf1\x00PSBG2323 E09\x00\x00\x00\x00\x00\x00\x00TDN2H20SA5\x97R\x88\x9e',],
},
CAR.KONA_HEV_OS: {
(Ecu.esp, 0x7d1, None): [b'\xf1\x00OS IEB \x01 104 \x11 58520-CM000\xf1\xa01.04',],
(Ecu.fwdRadar, 0x7d0, None): [b'\xf1\x00OShe SCC FNCUP 1.00 1.01 99110-CM000 \xf1\xa01.01',],
(Ecu.eps, 0x7d4, None): [b'\xf1\x00OS MDPS C 1.00 1.00 56310CM030\x00 4OHDC100',],
(Ecu.fwdCamera, 0x7c4, None): [b'\xf1\x00OSH LKAS AT KOR LHD 1.00 1.01 95740-CM000 l31',],
(Ecu.transmission, 0x7e1, None): [b'\xf1\x816U3J9051\x00\x00\xf1\x006U3H1_C2\x00\x006U3J9051\x00\x00HOS0G16DS1\x16\xc7\xb0\xd9',],
(Ecu.engine, 0x7e0, None): [b'\xf1\x816H6F6051\x00\x00\x00\x00\x00\x00\x00\x00',],
},
CAR.KONA_OS: {
(Ecu.fwdRadar, 0x7d0, None): [b'\xf1\x00OS__ SCC F-CUP 1.00 1.00 95655-J9200 \xf1\xa01.00',],
(Ecu.esp, 0x7d1, None): [b'\xf1\x816V5RAK00018.ELF\xf1\x00\x00\x00\x00\x00\x00\x00\xf1\xa01.05',],
(Ecu.engine, 0x7e0, None): [b'"\x01TOS-0NU06F301J02',],
(Ecu.eps, 0x7d4, None): [b'\xf1\x00OS MDPS C 1.00 1.05 56310J9030\x00 4OSDC105',],
(Ecu.fwdCamera, 0x7c4, None): [b'\xf1\x00OS9 LKAS AT USA LHD 1.00 1.00 95740-J9300 g21',],
(Ecu.transmission, 0x7e1, None): [b'\xf1\x816U2VE051\x00\x00\xf1\x006U2V0_C2\x00\x006U2VE051\x00\x00DOS4T16NS3\x00\x00\x00\x00',],
},
CAR.KONA_EV_OS: {
(Ecu.fwdRadar, 0x7D0, None): [b'\xf1\x00OSev SCC FNCUP 1.00 1.01 99110-K4000 \xf1\xa01.01',],
(Ecu.esp, 0x7D1, None): [b'\xf1\xa02.06',],
(Ecu.eps, 0x7D4, None): [
b'\xf1\x00OS MDPS C 1.00 1.04 56310K4000\x00 4OEDC104',
b'\xf1\x00OS MDPS C 1.00 1.04 56310K4050\x00 4OEDC104',
],
(Ecu.fwdCamera, 0x7C4, None): [b'\xf1\x00OSE LKAS AT KOR LHD 1.00 1.00 95740-K4100 W40',],
},
CAR.IONIQ_HEV_AE: {
(Ecu.fwdRadar, 0x7d0, None): [
b'\xf1\x00AEhe SCC F-CUP 1.00 1.00 99110-G2200 ',
b'\xf1\x00AEhe SCC H-CUP 1.01 1.01 96400-G2000 ',
],
(Ecu.engine, 0x7e0, None): [
b'\xf1\x816H6F6051\x00\x00\x00\x00\x00\x00\x00\x00',
b'\xf1\x816H6F2051\x00\x00\x00\x00\x00\x00\x00\x00',
],
(Ecu.eps, 0x7D4, None): [
b'\xf1\x00AE MDPS C 1.00 1.07 56310/G2301 4AEHC107',
b'\xf1\x00AE MDPS C 1.00 1.01 56310/G2310 4APHC101',
],
(Ecu.fwdCamera, 0x7c4, None): [
b'\xf1\x00AEH MFC AT EUR LHD 1.00 1.01 95740-G2600 190819',
b'\xf1\x00AEH MFC AT EUR LHD 1.00 1.00 95740-G2400 180222',
],
(Ecu.transmission, 0x7e1, None): [
b'\xf1\x816U3J8051\x00\x00\xf1\x006U3H1_C2\x00\x006U3J8051\x00\x00HAE0G16UL0Nd\xed:',
b'\xf1\x816U3H1051\x00\x00\xf1\x006U3H0_C2\x00\x006U3H1051\x00\x00HAE0G16US2\x95\xa2^$',
],
},
CAR.SANTAFE_TM: {
(Ecu.fwdRadar, 0x7d0, None): [b'\xf1\x00TM__ SCC F-CUP 1.00 1.02 99110-S2000 \xf1\xa01.02',],
(Ecu.esp, 0x7d1, None): [b'\xf1\x00TM ESC \x02 100\x18\x030 58910-S2600\xf1\xa01.00',],
(Ecu.engine, 0x7e0, None): [b'\xf1\x81606EA051\x00\x00\x00\x00\x00\x00\x00\x00',],
(Ecu.eps, 0x7d4, None): [b'\xf1\x00TM MDPS C 1.00 1.00 56340-S2000 8409',],
(Ecu.fwdCamera, 0x7c4, None): [b'\xf1\x00TM MFC AT USA LHD 1.00 1.00 99211-S2000 180409',],
(Ecu.transmission, 0x7e1, None): [b'\xf1\x87SBJWAA6562474GG0ffvgeTeFx\x88\x97\x88ww\x87www\x87w\x84o\xfa\xff\x87fO\xff\xc2 \xf1\x816W3C2051\x00\x00\xf1\x006W351_C2\x00\x006W3C2051\x00\x00TTM2G24NS1\x00\x00\x00\x00',],
},
CAR.PALISADE_LX2: {
(Ecu.fwdRadar, 0x7d0, None): [
b'\xf1\x00LX2_ SCC FHCUP 1.00 1.04 99110-S8100 \xf1\xa01.04',
b'\xf1\x00LX2 SCC FHCUP 1.00 1.04 99110-S8100 \xf1\xa01.04',
],
(Ecu.esp, 0x7d1, None): [
b'\xf1\x00LX ESC \v 102\x19\x05\a 58910-S8330\xf1\xa01.02',
b'\xf1\x00LX ESC \v 103\x19\t\x10 58910-S8360\xf1\xa01.03',
b'\xf1\x00LX ESC \x01 103\x19\t\x10 58910-S8360\xf1\xa01.03',
b'\xf1\x00LX ESC \x0b 102\x19\x05\x07 58910-S8330',
],
(Ecu.engine, 0x7e0, None): [
b'\xf1\x81640J0051\x00\x00\x00\x00\x00\x00\x00\x00',
b'\xf1\x81640K0051\x00\x00\x00\x00\x00\x00\x00\x00',
],
(Ecu.eps, 0x7d4, None): [
b'\xf1\x00LX2 MDPS C 1.00 1.03 56310-S8020 4LXDC103',
],
(Ecu.fwdCamera, 0x7c4, None): [
b'\xf1\x00LX2 MFC AT USA LHD 1.00 1.03 99211-S8100 190125',
b'\xf1\x00LX2 MFC AT USA LHD 1.00 1.05 99211-S8100 190909',
],
(Ecu.transmission, 0x7e1, None): [
b'\xf1\x87LBLUFN650868KF36\xa9\x98\x89\x88\xa8\x88\x88\x88h\x99\xa6\x89fw\x86gw\x88\x97x\xaa\x7f\xf6\xff\xbb\xbb\x8f\xff+\x82\xf1\x81U891\x00\x00\x00\x00\x00\x00\xf1\x00bcsh8p54 U891\x00\x00\x00\x00\x00\x00SLX2G38NB3\xd1\xc3\xf8\xa8',
b'\xf1\x87LDKVBN424201KF26\xba\xaa\x9a\xa9\x99\x99\x89\x98\x89\x99\xa8\x99\x88\x99\x98\x89\x88\x99\xa8\x89v\x7f\xf7\xffwf_\xffq\xa6\xf1\x81U891\x00\x00\x00\x00\x00\x00\xf1\x00bcsh8p54 U891\x00\x00\x00\x00\x00\x00SLX4G38NB2\xafL]\xe7',
b'\xf1\x87LDLVBN560098KF26\x86fff\x87vgfg\x88\x96xfw\x86gfw\x86g\x95\xf6\xffeU_\xff\x92c\xf1\x81U891\x00\x00\x00\x00\x00\x00\xf1\x00bcsh8p54 U891\x00\x00\x00\x00\x00\x00SLX4G38NB2\xafL]\xe7',
b'\xf1\x87LDLVBN5600981KF26\x86fff\x87vgfg\x88\x96xfw\x86gfw\x86g\x95\xf6\xffeU_\xff\x92c\xf1\x81U891\x00\x00\x00\x00\x00\x00\xf1\x00bcsh8p54 U891\x00\x00\x00\x00\x00\x00SLX4G38NB2\xafL]\xe7',
b'\xf1\x87LBLUFN655162KF36\x98\x88\x88\x88\x98\x88\x88\x88x\x99\xa7\x89x\x99\xa7\x89x\x99\x97\x89g\xf7\xffwU_\xff\xe9!\xf1\x81U891\x00\x00\x00\x00\x00\x00\xf1\x00bcsh8p54 U891\x00\x00\x00\x00\x00\x00SLX2G38NB3\xd1\xc3\xf8\xa8',
],
},
CAR.VELOSTER_JS: {
(Ecu.fwdRadar, 0x7d0, None): [b'\xf1\x00JS__ SCC H-CUP 1.00 1.02 95650-J3200 ',],
(Ecu.esp, 0x7d1, None): [b'\xf1\x00\x00\x00\x00\x00\x00\x00',],
(Ecu.engine, 0x7e0, None): [b'\x01TJS-JNU06F200H0A',],
(Ecu.eps, 0x7d4, None): [b'\xf1\x00JSL MDPS C 1.00 1.03 56340-J3000 8308',],
(Ecu.fwdCamera, 0x7c4, None): [b'\xf1\x00JS LKAS AT USA LHD 1.00 1.02 95740-J3000 K32',],
(Ecu.transmission, 0x7e1, None): [b'\xf1\x816U2V8051\x00\x00\xf1\x006U2V0_C2\x00\x006U2V8051\x00\x00DJS0T16NS1\xba\x02\xb8\x80',],
},
# kia
CAR.KIA_FORTE: {
(Ecu.eps, 0x7D4, None): [
b'\xf1\x00BD MDPS C 1.00 1.02 56310-XX000 4BD2C102',
b'\xf1\x00BD MDPS C 1.00 1.08 56310/M6300 4BDDC108',
b'\xf1\x00BD MDPS C 1.00 1.08 56310M6300\x00 4BDDC108',
],
(Ecu.fwdCamera, 0x7C4, None): [
b'\xf1\x00BD LKAS AT USA LHD 1.00 1.04 95740-M6000 J33',
],
(Ecu.fwdRadar, 0x7D0, None): [
b'\xf1\x00BD__ SCC H-CUP 1.00 1.02 99110-M6000 ',
],
(Ecu.engine, 0x7e0, None): [
b'\x01TBDM1NU06F200H01',
b'391182B945\x00',
],
(Ecu.esp, 0x7d1, None): [
b'\xf1\x816VGRAH00018.ELF\xf1\x00\x00\x00\x00\x00\x00\x00',
],
(Ecu.transmission, 0x7e1, None): [
b'\xf1\x816U2VC051\x00\x00\xf1\x006U2V0_C2\x00\x006U2VC051\x00\x00DBD0T16SS0\x00\x00\x00\x00',
b"\xf1\x816U2VC051\x00\x00\xf1\x006U2V0_C2\x00\x006U2VC051\x00\x00DBD0T16SS0\xcf\x1e'\xc3",
],
},
CAR.K5_JF: {
(Ecu.fwdRadar, 0x7d0, None): [b'\xf1\x00JF__ SCC F-CUP 1.00 1.00 96400-D4110 ',],
(Ecu.esp, 0x7d1, None): [b'\xf1\x00JF ESC \v 11 \x18\x030 58920-D5180',],
(Ecu.engine, 0x7e0, None): [b'\x01TJFAJNU06F201H03',],
(Ecu.eps, 0x7d4, None): [b'\xf1\x00TM MDPS C 1.00 1.00 56340-S2000 8409',],
(Ecu.fwdCamera, 0x7c4, None): [b'\xf1\x00JFA LKAS AT USA LHD 1.00 1.02 95895-D5000 h31',],
(Ecu.transmission, 0x7e1, None): [b'\xf1\x816U2V8051\x00\x00\xf1\x006U2V0_C2\x00\x006U2V8051\x00\x00DJF0T16NL0\t\xd2GW',],
},
CAR.K5_HEV_JF: {
(Ecu.fwdRadar, 0x7d0, None): [
b'\xf1\x00DEhe SCC H-CUP 1.01 1.02 96400-G5100 ',
b'\xf1\x00JFhe SCC F-CUP 1.00 1.00 96400-A8000 ',
],
(Ecu.engine, 0x7e0, None): [
b'\xf1\x816H6F4051\x00\x00\x00\x00\x00\x00\x00\x00',
b'\xf1\x816H673051\x00\x00\x00\x00\x00\x00\x00\x00',
],
(Ecu.eps, 0x7d4, None): [
b'\xf1\x00DE MDPS C 1.00 1.09 56310G5301\x00 4DEHC109',
b'\xf1\x00JF MDPS C 1.00 1.02 56310-XX000\x00 4JFHC102',
],
(Ecu.fwdCamera, 0x7c4, None): [
b'\xf1\x00DEP MFC AT USA LHD 1.00 1.01 95740-G5010 170424',
b'\xf1\x00JFP MFC AT EUR LHD 1.00 1.03 95895-A8100 180608',
],
(Ecu.transmission, 0x7e1, None): [
b"\xf1\x816U3J2051\x00\x00\xf1\x006U3H0_C2\x00\x006U3J2051\x00\x00PDE0G16NS2\xf4'\\\x91",
b"\xf1\x816T7B0051\x00\x00\xf1\x006T7B0_C2\x00\x006T7B0051\x00\x00TJF2H20KA0\xf4'\\\x91",
],
},
CAR.STINGER_CK: {
(Ecu.fwdRadar, 0x7d0, None): [ b'\xf1\x00CK__ SCC F_CUP 1.00 1.01 96400-J5100 \xf1\xa01.01',],
(Ecu.engine, 0x7e0, None): [ b'\xf1\x81640E0051\x00\x00\x00\x00\x00\x00\x00\x00',],
(Ecu.eps, 0x7d4, None): [b'\xf1\x00CK MDPS R 1.00 1.04 57700-J5420 4C4VL104',],
(Ecu.fwdCamera, 0x7c4, None): [b'\xf1\x00CK MFC AT USA LHD 1.00 1.03 95740-J5000 170822',],
(Ecu.transmission, 0x7e1, None): [
b'\xf1\x87VDHLG17118862DK2\x8awWwgu\x96wVfUVwv\x97xWvfvUTGTx\x87o\xff\xc9\xed\xf1\x81E21\x00\x00\x00\x00\x00\x00\x00\xf1\x00bcsh8p54 E21\x00\x00\x00\x00\x00\x00\x00SCK0T33NB0\x88\xa2\xe6\xf0',
b'\xf1\x87VDHLG17000192DK2xdFffT\xa5VUD$DwT\x86wveVeeD&T\x99\xba\x8f\xff\xcc\x99\xf1\x81E21\x00\x00\x00\x00\x00\x00\x00\xf1\x00bcsh8p54 E21\x00\x00\x00\x00\x00\x00\x00SCK0T33NB0\x88\xa2\xe6\xf0',
],
},
CAR.NIRO_EV_DE: {
(Ecu.fwdRadar, 0x7D0, None): [
b'\xf1\x00DEev SCC F-CUP 1.00 1.03 96400-Q4100 \xf1\xa01.03',
b'\xf1\x00DEev SCC F-CUP 1.00 1.00 99110-Q4000 \xf1\xa01.00',
],
(Ecu.esp, 0x7D1, None): [
b'\xf1\xa01.06',
b'\xf1\xa01.07',
],
(Ecu.eps, 0x7D4, None): [
b'\xf1\x00DE MDPS C 1.00 1.05 56310Q4000\x00 4DEEC105',
b'\xf1\x00DE MDPS C 1.00 1.05 56310Q4100\x00 4DEEC105',
],
(Ecu.fwdCamera, 0x7C4, None): [
b'\xf1\x00DEE MFC AT USA LHD 1.00 1.03 95740-Q4000 180821',
b'\xf1\x00DEE MFC AT EUR LHD 1.00 1.00 99211-Q4000 191211',
],
},
CAR.SELTOS_SP2: {
(Ecu.fwdRadar, 0x7d0, None): [b'\xf1\x8799110Q5100\xf1\000SP2_ SCC FHCUP 1.01 1.05 99110-Q5100 \xf1\xa01.05',],
(Ecu.esp, 0x7d1, None): [
b'\xf1\x8758910-Q5450\xf1\000SP ESC \a 101\031\t\005 58910-Q5450\xf1\xa01.01',
b'\xf1\x8758910-Q5450\xf1\000SP ESC \t 101\031\t\005 58910-Q5450\xf1\xa01.01',
],
(Ecu.engine, 0x7e0, None): [
b'\xf1\x81616D2051\000\000\000\000\000\000\000\000',
b'\001TSP2KNL06F100J0K',
],
(Ecu.eps, 0x7d4, None): [b'\xf1\000SP2 MDPS C 1.00 1.04 56300Q5200 ',],
(Ecu.fwdCamera, 0x7c4, None): [b'\xf1\000SP2 MFC AT USA LHD 1.00 1.04 99210-Q5000 191114',],
(Ecu.transmission, 0x7e1, None): [
b'\xf1\x87CZLUB49370612JF7h\xa8y\x87\x99\xa7hv\x99\x97fv\x88\x87x\x89x\x96O\xff\x88\xff\xff\xff.@\xf1\x816V2C2051\000\000\xf1\0006V2B0_C2\000\0006V2C2051\000\000CSP4N20NS3\000\000\000\000',
b'\xf1\x87954A22D200\xf1\x81T01950A1 \xf1\000T0190XBL T01950A1 DSP2T16X4X950NS6\xd30\xa5\xb9',
],
},
}
CHECKSUM = {
"crc8": [CAR.SANTAFE_TM, CAR.SONATA_DN8, CAR.PALISADE_LX2, CAR.SONATA_HEV_DN8, CAR.SELTOS_SP2, CAR.AVANTE_CN7, CAR.SOUL_EV_SK3, CAR.AVANTE_HEV_CN7, CAR.SANTAFE_HEV_TM, CAR.K5_DL3],
"6B": [CAR.SORENTO_UM, CAR.GENESIS_DH],
}
FEATURES = {
# Use Cluster for Gear Selection, rather than Transmission
"use_cluster_gears": {CAR.AVANTE_AD, CAR.KONA_OS, CAR.I30_PD, CAR.K7_YG, CAR.GRANDEUR_IG, CAR.GRANDEUR_FL_IG},
# Use TCU Message for Gear Selection
"use_tcu_gears": {CAR.K5_JF, CAR.SONATA_LF, CAR.VELOSTER_JS, CAR.SONATA_TURBO_LF, CAR.STINGER_CK},
# Use E_GEAR Message for Gear Selection
"use_elect_gears": {CAR.SONATA_HEV_DN8, CAR.SONATA_HEV_LF, CAR.KONA_EV_OS, CAR.KONA_HEV_OS, CAR.IONIQ_EV_AE, CAR.IONIQ_HEV_AE, CAR.GRANDEUR_HEV_IG, CAR.GRANDEUR_HEV_FL_IG, CAR.NEXO_FE,
CAR.K5_HEV_JF, CAR.K7_HEV_YG, CAR.NIRO_EV_DE, CAR.NIRO_HEV_DE, CAR.SOUL_EV_SK3, CAR.AVANTE_HEV_CN7, CAR.SANTAFE_HEV_TM},
  # Send the LFA MFA message for newer HKG models.
  # Add your car here if you want the LFA icon turned on.
  # Cars modded from an LKAS camera to an LFA camera also need to be added here.
"send_lfahda_mfa": {CAR.GRANDEUR_HEV_FL_IG, CAR.GRANDEUR_FL_IG, CAR.SONATA_DN8, CAR.PALISADE_LX2, CAR.SONATA_HEV_DN8, CAR.SANTAFE_TM, CAR.KONA_EV_OS, CAR.NIRO_EV_DE, CAR.KONA_HEV_OS,
CAR.SELTOS_SP2, CAR.SOUL_EV_SK3, CAR.NEXO_FE, CAR.MOHAVE_HM, CAR.STINGER_CK, CAR.AVANTE_CN7, CAR.AVANTE_HEV_CN7, CAR.K5_DL3, CAR.SANTAFE_HEV_TM, CAR.GENESIS_G70_IK},
"send_hda_mfa": {CAR.GRANDEUR_IG, CAR.GRANDEUR_HEV_IG},
# these cars use the FCA11 message for the AEB and FCW signals, all others use SCC12
  # Add your car here if you see a forward-collision error on your cluster.
"use_fca": {CAR.GRANDEUR_HEV_FL_IG, CAR.GRANDEUR_FL_IG, CAR.SONATA_DN8, CAR.AVANTE_CN7, CAR.I30_PD, CAR.PALISADE_LX2, CAR.GENESIS_G70_IK, CAR.GENESIS_G70_2020, CAR.GENESIS_G90_HI, CAR.KONA_HEV_OS, CAR.KONA_EV_OS, CAR.SELTOS_SP2, CAR.MOHAVE_HM, CAR.KIA_FORTE},
}
HYBRID_CAR = {CAR.K5_HEV_JF, CAR.IONIQ_HEV_AE, CAR.SONATA_HEV_DN8, CAR.SONATA_HEV_LF, CAR.K7_HEV_YG, CAR.GRANDEUR_HEV_IG, CAR.GRANDEUR_HEV_FL_IG, CAR.NIRO_HEV_DE, CAR.KONA_HEV_OS, CAR.AVANTE_HEV_CN7}
EV_CAR = {CAR.IONIQ_EV_AE, CAR.KONA_EV_OS, CAR.NIRO_EV_DE, CAR.NEXO_FE, CAR.SOUL_EV_SK3}
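The membership tables above (`CHECKSUM`, `FEATURES`, `HYBRID_CAR`, `EV_CAR`) drive per-car behavior by simple set lookups. A hypothetical stand-alone sketch of that pattern — the `Car` stand-ins and `checksum_kind` helper below are illustrative, not this fork's actual lookup code:

```python
from enum import Enum

class Car(Enum):
    # Illustrative stand-ins for the CAR constants used above.
    SONATA_DN8 = "SONATA_DN8"
    GENESIS_DH = "GENESIS_DH"
    K5_JF = "K5_JF"

CHECKSUM_TABLE = {"crc8": {Car.SONATA_DN8}, "6B": {Car.GENESIS_DH}}

def checksum_kind(candidate):
    # Cars not listed in any group fall back to the default HKG checksum.
    for kind, cars in CHECKSUM_TABLE.items():
        if candidate in cars:
            return kind
    return "default"

print(checksum_kind(Car.SONATA_DN8))  # crc8
print(checksum_kind(Car.K5_JF))       # default
```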
if Params().get_bool("UseRadarTrack"):
DBC = {
# genesis
CAR.GENESIS_DH: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.GENESIS_G70_IK: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.GENESIS_G70_2020: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.GENESIS_G80_DH: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.GENESIS_G90_HI: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.GENESIS_EQ900_HI: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
# hyundai
CAR.AVANTE_AD: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.AVANTE_CN7: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.AVANTE_HEV_CN7: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.I30_PD: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.SONATA_DN8: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.SONATA_HEV_DN8: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.SONATA_LF: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.SONATA_TURBO_LF: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.SONATA_HEV_LF: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.KONA_OS: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.KONA_EV_OS: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.KONA_HEV_OS: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.IONIQ_EV_AE: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.IONIQ_HEV_AE: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.SANTAFE_TM: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.PALISADE_LX2: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.VELOSTER_JS: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.GRANDEUR_IG: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.GRANDEUR_HEV_IG: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.GRANDEUR_FL_IG: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.GRANDEUR_HEV_FL_IG: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.TUCSON_TL: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.NEXO_FE: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
# kia
CAR.KIA_FORTE: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.K3_BD: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.K5_JF: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.K5_HEV_JF: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.K5_DL3: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.SPORTAGE_QL: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.SORENTO_UM: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.STINGER_CK: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.NIRO_EV_DE: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.NIRO_HEV_DE: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.K7_YG: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.K7_HEV_YG: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.SELTOS_SP2: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.SOUL_EV_SK3: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
CAR.MOHAVE_HM: dbc_dict('hyundai_kia_generic', 'hyundai_kia_mando_front_radar'),
}
else:
DBC = {
# genesis
CAR.GENESIS_DH: dbc_dict('hyundai_kia_generic', None),
CAR.GENESIS_G70_IK: dbc_dict('hyundai_kia_generic', None),
  CAR.GENESIS_G70_2020: dbc_dict('hyundai_kia_generic', None),  # was 'hyundai_kia_mando_front_radar'
CAR.GENESIS_G80_DH: dbc_dict('hyundai_kia_generic', None),
CAR.GENESIS_G90_HI: dbc_dict('hyundai_kia_generic', None),
CAR.GENESIS_EQ900_HI: dbc_dict('hyundai_kia_generic', None),
# hyundai
CAR.AVANTE_AD: dbc_dict('hyundai_kia_generic', None),
CAR.AVANTE_CN7: dbc_dict('hyundai_kia_generic', None),
CAR.AVANTE_HEV_CN7: dbc_dict('hyundai_kia_generic', None),
CAR.I30_PD: dbc_dict('hyundai_kia_generic', None),
CAR.SONATA_DN8: dbc_dict('hyundai_kia_generic', None),
CAR.SONATA_HEV_DN8: dbc_dict('hyundai_kia_generic', None),
CAR.SONATA_LF: dbc_dict('hyundai_kia_generic', None),
CAR.SONATA_TURBO_LF: dbc_dict('hyundai_kia_generic', None),
CAR.SONATA_HEV_LF: dbc_dict('hyundai_kia_generic', None),
CAR.KONA_OS: dbc_dict('hyundai_kia_generic', None),
CAR.KONA_EV_OS: dbc_dict('hyundai_kia_generic', None),
CAR.KONA_HEV_OS: dbc_dict('hyundai_kia_generic', None),
CAR.IONIQ_EV_AE: dbc_dict('hyundai_kia_generic', None),
CAR.IONIQ_HEV_AE: dbc_dict('hyundai_kia_generic', None),
CAR.SANTAFE_TM: dbc_dict('hyundai_kia_generic', None),
CAR.PALISADE_LX2: dbc_dict('hyundai_kia_generic', None),
CAR.VELOSTER_JS: dbc_dict('hyundai_kia_generic', None),
CAR.GRANDEUR_IG: dbc_dict('hyundai_kia_generic', None),
CAR.GRANDEUR_HEV_IG: dbc_dict('hyundai_kia_generic', None),
CAR.GRANDEUR_FL_IG: dbc_dict('hyundai_kia_generic', None),
CAR.GRANDEUR_HEV_FL_IG: dbc_dict('hyundai_kia_generic', None),
CAR.TUCSON_TL: dbc_dict('hyundai_kia_generic', None),
CAR.NEXO_FE: dbc_dict('hyundai_kia_generic', None),
# kia
CAR.KIA_FORTE: dbc_dict('hyundai_kia_generic', None),
CAR.K3_BD: dbc_dict('hyundai_kia_generic', None),
CAR.K5_JF: dbc_dict('hyundai_kia_generic', None),
CAR.K5_HEV_JF: dbc_dict('hyundai_kia_generic', None),
CAR.K5_DL3: dbc_dict('hyundai_kia_generic', None),
CAR.SPORTAGE_QL: dbc_dict('hyundai_kia_generic', None),
CAR.SORENTO_UM: dbc_dict('hyundai_kia_generic', None),
CAR.STINGER_CK: dbc_dict('hyundai_kia_generic', None),
CAR.NIRO_EV_DE: dbc_dict('hyundai_kia_generic', None),
CAR.NIRO_HEV_DE: dbc_dict('hyundai_kia_generic', None),
CAR.K7_YG: dbc_dict('hyundai_kia_generic', None),
CAR.K7_HEV_YG: dbc_dict('hyundai_kia_generic', None),
CAR.SELTOS_SP2: dbc_dict('hyundai_kia_generic', None),
CAR.SOUL_EV_SK3: dbc_dict('hyundai_kia_generic', None),
CAR.MOHAVE_HM: dbc_dict('hyundai_kia_generic', None),
}
STEER_THRESHOLD = int(Params().get("SteerThreshold", encoding="utf8"))
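Note that `Params().get(...)` returns `None` when the key has never been set, so the `int(...)` call above would raise `TypeError` on a fresh install. A minimal defensive sketch — the fallback value of 150 is an assumption for illustration, not this fork's actual default:

```python
def read_int_param(raw, default=150):
    """Parse a Params().get(...) result, falling back when unset or garbled."""
    try:
        return int(raw)
    except (TypeError, ValueError):
        # raw was None (key unset) or not a valid integer string.
        return default

print(read_int_param(None))   # 150
print(read_int_param("200"))  # 200
print(read_int_param("bad"))  # 150
```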
# File: runs/batchshipyard/snake/3d/scripts/visit_database_views.py
# Repo: mesnardo/FlyingSnake2Cloud (CC-BY-4.0 / BSD-3-Clause)
"""
Predefined VisIt 3D view attribute presets, applied to a View3DAttributes object by name.
"""
def set_view3d_attributes(View3DAtts, name):
if name == 'domain':
View3DAtts.viewNormal = (-0.31, 0.41, 0.86)
View3DAtts.focus = (0, 0, 1.6)
View3DAtts.viewUp = (0.24, 0.91, -0.34)
View3DAtts.viewAngle = 30
View3DAtts.parallelScale = 21
View3DAtts.nearPlane = -42.1555
View3DAtts.farPlane = 42.1555
View3DAtts.imagePan = (-0.06, -0.014)
View3DAtts.imageZoom = 5.56
View3DAtts.perspective = 1
View3DAtts.eyeAngle = 2
View3DAtts.centerOfRotationSet = 0
View3DAtts.centerOfRotation = (0.0146802, 0, 1.6)
View3DAtts.axis3DScaleFlag = 0
View3DAtts.axis3DScales = (1, 1, 1)
View3DAtts.shear = (0, 0, 1)
View3DAtts.windowValid = 1
elif name == 'crop':
View3DAtts.viewNormal = (-0.31, 0.41, 0.86)
View3DAtts.focus = (0, 0, 1.6)
View3DAtts.viewUp = (0.24, 0.91, -0.34)
View3DAtts.viewAngle = 30
View3DAtts.parallelScale = 21
View3DAtts.nearPlane = -42.1555
View3DAtts.farPlane = 42.1555
View3DAtts.imagePan = (0.06, -0.014)
View3DAtts.imageZoom = 1.2
View3DAtts.perspective = 1
View3DAtts.eyeAngle = 2
View3DAtts.centerOfRotationSet = 0
View3DAtts.centerOfRotation = (0.0146802, 0, 1.6)
View3DAtts.axis3DScaleFlag = 0
View3DAtts.axis3DScales = (1, 1, 1)
View3DAtts.shear = (0, 0, 1)
View3DAtts.windowValid = 1
return
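These presets are intended for VisIt's Python CLI, where the attribute object comes from `View3DAttributes()` and is applied with `SetView3D(...)`. A self-contained sketch of the pattern, using an abridged copy of the `'crop'` branch above and `types.SimpleNamespace` standing in for VisIt's attribute object:

```python
from types import SimpleNamespace

def set_view3d_attributes(view_atts, name):
    # Abridged copy of the 'crop' preset above; enough to show the pattern.
    if name == 'crop':
        view_atts.imagePan = (0.06, -0.014)
        view_atts.imageZoom = 1.2

# Inside VisIt's CLI this would be (sketch):
#   atts = View3DAttributes()
#   set_view3d_attributes(atts, 'crop')
#   SetView3D(atts)
view = SimpleNamespace()
set_view3d_attributes(view, 'crop')
print(view.imageZoom)  # 1.2
```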
# File: python/testData/codeInsight/controlflow/assertfalse.py
# Repo: teddywest32/intellij-community (Apache-2.0)
assert false
print("Unreachable")
assert False
print("Unreachable2")

# File: sdk/python/pulumi_rancher2/project.py
# Repo: mitchellmaler/pulumi-rancher2 (ECL-2.0 / Apache-2.0)
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import json
import warnings
import pulumi
import pulumi.runtime
from typing import Union
from . import utilities, tables
class Project(pulumi.CustomResource):
annotations: pulumi.Output[dict]
"""
Annotations for Node Pool object (map)
"""
cluster_id: pulumi.Output[str]
"""
The cluster id where create project (string)
"""
container_resource_limit: pulumi.Output[dict]
"""
Default containers resource limits on project (List maxitem:1)
* `limitsCpu` (`str`) - Limit for limits cpu in project (string)
* `limitsMemory` (`str`) - Limit for limits memory in project (string)
* `requestsCpu` (`str`) - Limit for requests cpu in project (string)
* `requestsMemory` (`str`) - Limit for requests memory in project (string)
"""
description: pulumi.Output[str]
"""
A project description (string)
"""
enable_project_monitoring: pulumi.Output[bool]
"""
Enable built-in project monitoring. Default `false` (bool)
"""
labels: pulumi.Output[dict]
"""
Labels for Node Pool object (map)
"""
name: pulumi.Output[str]
"""
The name of the project (string)
"""
pod_security_policy_template_id: pulumi.Output[str]
"""
Default Pod Security Policy ID for the project (string)
"""
project_monitoring_input: pulumi.Output[dict]
"""
Project monitoring config. Any parameter defined in [rancher-monitoring charts](https://github.com/rancher/system-charts/tree/dev/charts/rancher-monitoring) could be configured (list maxitems:1)
* `answers` (`dict`) - Key/value answers for monitor input (map)
"""
resource_quota: pulumi.Output[dict]
"""
Resource quota for project. Rancher v2.1.x or higher (list maxitems:1)
* `namespaceDefaultLimit` (`dict`) - Default resource quota limit for namespaces in project (list maxitems:1)
* `configMaps` (`str`) - Limit for config maps in project (string)
* `limitsCpu` (`str`) - Limit for limits cpu in project (string)
* `limitsMemory` (`str`) - Limit for limits memory in project (string)
* `persistentVolumeClaims` (`str`) - Limit for persistent volume claims in project (string)
* `pods` (`str`) - Limit for pods in project (string)
* `replicationControllers` (`str`) - Limit for replication controllers in project (string)
* `requestsCpu` (`str`) - Limit for requests cpu in project (string)
* `requestsMemory` (`str`) - Limit for requests memory in project (string)
* `requestsStorage` (`str`) - Limit for requests storage in project (string)
* `secrets` (`str`) - Limit for secrets in project (string)
* `services` (`str`)
* `servicesLoadBalancers` (`str`) - Limit for services load balancers in project (string)
* `servicesNodePorts` (`str`) - Limit for services node ports in project (string)
* `projectLimit` (`dict`) - Resource quota limit for project (list maxitems:1)
* `configMaps` (`str`) - Limit for config maps in project (string)
* `limitsCpu` (`str`) - Limit for limits cpu in project (string)
* `limitsMemory` (`str`) - Limit for limits memory in project (string)
* `persistentVolumeClaims` (`str`) - Limit for persistent volume claims in project (string)
* `pods` (`str`) - Limit for pods in project (string)
* `replicationControllers` (`str`) - Limit for replication controllers in project (string)
* `requestsCpu` (`str`) - Limit for requests cpu in project (string)
* `requestsMemory` (`str`) - Limit for requests memory in project (string)
* `requestsStorage` (`str`) - Limit for requests storage in project (string)
* `secrets` (`str`) - Limit for secrets in project (string)
* `services` (`str`)
* `servicesLoadBalancers` (`str`) - Limit for services load balancers in project (string)
* `servicesNodePorts` (`str`) - Limit for services node ports in project (string)
"""
wait_for_cluster: pulumi.Output[bool]
"""
Wait for cluster becomes active. Default `false` (bool)
"""
def __init__(__self__, resource_name, opts=None, annotations=None, cluster_id=None, container_resource_limit=None, description=None, enable_project_monitoring=None, labels=None, name=None, pod_security_policy_template_id=None, project_monitoring_input=None, resource_quota=None, wait_for_cluster=None, __props__=None, __name__=None, __opts__=None):
"""
Provides a Rancher v2 Project resource. This can be used to create projects for Rancher v2 environments and retrieve their information.
> This content is derived from https://github.com/terraform-providers/terraform-provider-rancher2/blob/master/website/docs/r/project.html.markdown.
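        ## Example Usage

        The nested arguments are plain Python dicts. A hedged sketch with
        illustrative values (the quantities below are made up for the example,
        in the style of the provider docs):

        ```python
        # Illustrative argument shapes for rancher2.Project; all values are
        # examples only, and cluster_id would come from a real cluster.
        container_resource_limit = {
            "limitsCpu": "20m",
            "limitsMemory": "20Mi",
            "requestsCpu": "1m",
            "requestsMemory": "1Mi",
        }
        resource_quota = {
            "projectLimit": {
                "limitsCpu": "2000m",
                "limitsMemory": "2000Mi",
                "requestsStorage": "2Gi",
            },
            "namespaceDefaultLimit": {
                "limitsCpu": "500m",
                "limitsMemory": "500Mi",
                "requestsStorage": "1Gi",
            },
        }
        # These would then be passed when constructing the resource:
        #   rancher2.Project("foo", cluster_id=cluster_id,
        #                    resource_quota=resource_quota,
        #                    container_resource_limit=container_resource_limit)
        print(resource_quota["projectLimit"]["requestsStorage"])  # 2Gi
        ```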
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[dict] annotations: Annotations for Node Pool object (map)
:param pulumi.Input[str] cluster_id: The cluster id where create project (string)
:param pulumi.Input[dict] container_resource_limit: Default containers resource limits on project (List maxitem:1)
:param pulumi.Input[str] description: A project description (string)
:param pulumi.Input[bool] enable_project_monitoring: Enable built-in project monitoring. Default `false` (bool)
:param pulumi.Input[dict] labels: Labels for Node Pool object (map)
:param pulumi.Input[str] name: The name of the project (string)
:param pulumi.Input[str] pod_security_policy_template_id: Default Pod Security Policy ID for the project (string)
:param pulumi.Input[dict] project_monitoring_input: Project monitoring config. Any parameter defined in [rancher-monitoring charts](https://github.com/rancher/system-charts/tree/dev/charts/rancher-monitoring) could be configured (list maxitems:1)
:param pulumi.Input[dict] resource_quota: Resource quota for project. Rancher v2.1.x or higher (list maxitems:1)
:param pulumi.Input[bool] wait_for_cluster: Wait for cluster becomes active. Default `false` (bool)
The **container_resource_limit** object supports the following:
* `limitsCpu` (`pulumi.Input[str]`) - Limit for limits cpu in project (string)
* `limitsMemory` (`pulumi.Input[str]`) - Limit for limits memory in project (string)
* `requestsCpu` (`pulumi.Input[str]`) - Limit for requests cpu in project (string)
* `requestsMemory` (`pulumi.Input[str]`) - Limit for requests memory in project (string)
The **project_monitoring_input** object supports the following:
* `answers` (`pulumi.Input[dict]`) - Key/value answers for monitor input (map)
The **resource_quota** object supports the following:
* `namespaceDefaultLimit` (`pulumi.Input[dict]`) - Default resource quota limit for namespaces in project (list maxitems:1)
* `configMaps` (`pulumi.Input[str]`) - Limit for config maps in project (string)
* `limitsCpu` (`pulumi.Input[str]`) - Limit for limits cpu in project (string)
* `limitsMemory` (`pulumi.Input[str]`) - Limit for limits memory in project (string)
* `persistentVolumeClaims` (`pulumi.Input[str]`) - Limit for persistent volume claims in project (string)
* `pods` (`pulumi.Input[str]`) - Limit for pods in project (string)
* `replicationControllers` (`pulumi.Input[str]`) - Limit for replication controllers in project (string)
* `requestsCpu` (`pulumi.Input[str]`) - Limit for requests cpu in project (string)
* `requestsMemory` (`pulumi.Input[str]`) - Limit for requests memory in project (string)
* `requestsStorage` (`pulumi.Input[str]`) - Limit for requests storage in project (string)
* `secrets` (`pulumi.Input[str]`) - Limit for secrets in project (string)
* `services` (`pulumi.Input[str]`)
* `servicesLoadBalancers` (`pulumi.Input[str]`) - Limit for services load balancers in project (string)
* `servicesNodePorts` (`pulumi.Input[str]`) - Limit for services node ports in project (string)
* `projectLimit` (`pulumi.Input[dict]`) - Resource quota limit for project (list maxitems:1)
* `configMaps` (`pulumi.Input[str]`) - Limit for config maps in project (string)
* `limitsCpu` (`pulumi.Input[str]`) - Limit for limits cpu in project (string)
* `limitsMemory` (`pulumi.Input[str]`) - Limit for limits memory in project (string)
* `persistentVolumeClaims` (`pulumi.Input[str]`) - Limit for persistent volume claims in project (string)
* `pods` (`pulumi.Input[str]`) - Limit for pods in project (string)
* `replicationControllers` (`pulumi.Input[str]`) - Limit for replication controllers in project (string)
* `requestsCpu` (`pulumi.Input[str]`) - Limit for requests cpu in project (string)
* `requestsMemory` (`pulumi.Input[str]`) - Limit for requests memory in project (string)
* `requestsStorage` (`pulumi.Input[str]`) - Limit for requests storage in project (string)
* `secrets` (`pulumi.Input[str]`) - Limit for secrets in project (string)
* `services` (`pulumi.Input[str]`)
* `servicesLoadBalancers` (`pulumi.Input[str]`) - Limit for services load balancers in project (string)
* `servicesNodePorts` (`pulumi.Input[str]`) - Limit for services node ports in project (string)
"""
if __name__ is not None:
warnings.warn("explicit use of __name__ is deprecated", DeprecationWarning)
resource_name = __name__
if __opts__ is not None:
warnings.warn("explicit use of __opts__ is deprecated, use 'opts' instead", DeprecationWarning)
opts = __opts__
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = dict()
__props__['annotations'] = annotations
if cluster_id is None:
raise TypeError("Missing required property 'cluster_id'")
__props__['cluster_id'] = cluster_id
__props__['container_resource_limit'] = container_resource_limit
__props__['description'] = description
__props__['enable_project_monitoring'] = enable_project_monitoring
__props__['labels'] = labels
__props__['name'] = name
__props__['pod_security_policy_template_id'] = pod_security_policy_template_id
__props__['project_monitoring_input'] = project_monitoring_input
__props__['resource_quota'] = resource_quota
__props__['wait_for_cluster'] = wait_for_cluster
super(Project, __self__).__init__(
'rancher2:index/project:Project',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name, id, opts=None, annotations=None, cluster_id=None, container_resource_limit=None, description=None, enable_project_monitoring=None, labels=None, name=None, pod_security_policy_template_id=None, project_monitoring_input=None, resource_quota=None, wait_for_cluster=None):
"""
Get an existing Project resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param str id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[dict] annotations: Annotations for Node Pool object (map)
:param pulumi.Input[str] cluster_id: The cluster id where create project (string)
:param pulumi.Input[dict] container_resource_limit: Default containers resource limits on project (List maxitem:1)
:param pulumi.Input[str] description: A project description (string)
:param pulumi.Input[bool] enable_project_monitoring: Enable built-in project monitoring. Default `false` (bool)
:param pulumi.Input[dict] labels: Labels for Node Pool object (map)
:param pulumi.Input[str] name: The name of the project (string)
:param pulumi.Input[str] pod_security_policy_template_id: Default Pod Security Policy ID for the project (string)
:param pulumi.Input[dict] project_monitoring_input: Project monitoring config. Any parameter defined in [rancher-monitoring charts](https://github.com/rancher/system-charts/tree/dev/charts/rancher-monitoring) could be configured (list maxitems:1)
:param pulumi.Input[dict] resource_quota: Resource quota for project. Rancher v2.1.x or higher (list maxitems:1)
:param pulumi.Input[bool] wait_for_cluster: Wait for cluster becomes active. Default `false` (bool)
The **container_resource_limit** object supports the following:
* `limitsCpu` (`pulumi.Input[str]`) - Limit for limits cpu in project (string)
* `limitsMemory` (`pulumi.Input[str]`) - Limit for limits memory in project (string)
* `requestsCpu` (`pulumi.Input[str]`) - Limit for requests cpu in project (string)
* `requestsMemory` (`pulumi.Input[str]`) - Limit for requests memory in project (string)
The **project_monitoring_input** object supports the following:
* `answers` (`pulumi.Input[dict]`) - Key/value answers for monitor input (map)
The **resource_quota** object supports the following:
* `namespaceDefaultLimit` (`pulumi.Input[dict]`) - Default resource quota limit for namespaces in project (list maxitems:1)
* `configMaps` (`pulumi.Input[str]`) - Limit for config maps in project (string)
* `limitsCpu` (`pulumi.Input[str]`) - Limit for limits cpu in project (string)
* `limitsMemory` (`pulumi.Input[str]`) - Limit for limits memory in project (string)
* `persistentVolumeClaims` (`pulumi.Input[str]`) - Limit for persistent volume claims in project (string)
* `pods` (`pulumi.Input[str]`) - Limit for pods in project (string)
* `replicationControllers` (`pulumi.Input[str]`) - Limit for replication controllers in project (string)
* `requestsCpu` (`pulumi.Input[str]`) - Limit for requests cpu in project (string)
* `requestsMemory` (`pulumi.Input[str]`) - Limit for requests memory in project (string)
* `requestsStorage` (`pulumi.Input[str]`) - Limit for requests storage in project (string)
* `secrets` (`pulumi.Input[str]`) - Limit for secrets in project (string)
* `services` (`pulumi.Input[str]`) - Limit for services in project (string)
* `servicesLoadBalancers` (`pulumi.Input[str]`) - Limit for services load balancers in project (string)
* `servicesNodePorts` (`pulumi.Input[str]`) - Limit for services node ports in project (string)
* `projectLimit` (`pulumi.Input[dict]`) - Resource quota limit for project (list maxitems:1)
* `configMaps` (`pulumi.Input[str]`) - Limit for config maps in project (string)
* `limitsCpu` (`pulumi.Input[str]`) - Limit for limits cpu in project (string)
* `limitsMemory` (`pulumi.Input[str]`) - Limit for limits memory in project (string)
* `persistentVolumeClaims` (`pulumi.Input[str]`) - Limit for persistent volume claims in project (string)
* `pods` (`pulumi.Input[str]`) - Limit for pods in project (string)
* `replicationControllers` (`pulumi.Input[str]`) - Limit for replication controllers in project (string)
* `requestsCpu` (`pulumi.Input[str]`) - Limit for requests cpu in project (string)
* `requestsMemory` (`pulumi.Input[str]`) - Limit for requests memory in project (string)
* `requestsStorage` (`pulumi.Input[str]`) - Limit for requests storage in project (string)
* `secrets` (`pulumi.Input[str]`) - Limit for secrets in project (string)
* `services` (`pulumi.Input[str]`) - Limit for services in project (string)
* `servicesLoadBalancers` (`pulumi.Input[str]`) - Limit for services load balancers in project (string)
* `servicesNodePorts` (`pulumi.Input[str]`) - Limit for services node ports in project (string)
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = dict()
__props__["annotations"] = annotations
__props__["cluster_id"] = cluster_id
__props__["container_resource_limit"] = container_resource_limit
__props__["description"] = description
__props__["enable_project_monitoring"] = enable_project_monitoring
__props__["labels"] = labels
__props__["name"] = name
__props__["pod_security_policy_template_id"] = pod_security_policy_template_id
__props__["project_monitoring_input"] = project_monitoring_input
__props__["resource_quota"] = resource_quota
__props__["wait_for_cluster"] = wait_for_cluster
return Project(resource_name, opts=opts, __props__=__props__)
def translate_output_property(self, prop):
return tables._CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
def translate_input_property(self, prop):
return tables._SNAKE_TO_CAMEL_CASE_TABLE.get(prop) or prop
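A hypothetical illustration of the `resource_quota` shape documented in the docstring above (key casing follows the docstring; all quota values here are made up, not defaults):

```python
# Sketch of a resource_quota input dict matching the documented schema.
# "projectLimit" and "namespaceDefaultLimit" each take the same limit keys.
resource_quota = {
    "projectLimit": {
        "pods": "20",
        "services": "10",
        "requestsStorage": "50Gi",
    },
    "namespaceDefaultLimit": {
        "pods": "5",
    },
}
```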
| 64.530466 | 352 | 0.673906 | 2,108 | 18,004 | 5.600095 | 0.102467 | 0.102414 | 0.078272 | 0.090131 | 0.843456 | 0.825667 | 0.814994 | 0.811944 | 0.79551 | 0.787294 | 0 | 0.001846 | 0.217674 | 18,004 | 278 | 353 | 64.76259 | 0.83628 | 0.526938 | 0 | 0.027397 | 1 | 0 | 0.160429 | 0.056734 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054795 | false | 0.013699 | 0.082192 | 0.027397 | 0.342466 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f4ba0bfd41f7be174d9c3b88ba26f3cd4e291b58 | 212 | py | Python | dpckan/tests/__init__.py | danielfeloiola/dpckan | 9aea7aa1d7137dca5adf7ad95d8a6d148ab337e5 | [
"MIT"
] | 6 | 2021-07-04T08:53:12.000Z | 2022-01-27T21:53:05.000Z | dpckan/tests/__init__.py | danielfeloiola/dpckan | 9aea7aa1d7137dca5adf7ad95d8a6d148ab337e5 | [
"MIT"
] | 81 | 2021-06-22T17:01:23.000Z | 2022-01-31T20:41:45.000Z | dpckan/tests/__init__.py | danielfeloiola/dpckan | 9aea7aa1d7137dca5adf7ad95d8a6d148ab337e5 | [
"MIT"
] | 2 | 2021-10-07T14:42:36.000Z | 2022-01-27T14:43:48.000Z | from dpckan.tests.dpckan_test import clone_online_repo
from dpckan.tests.dpckan_test import get_file_name
from dpckan.tests.dpckan_test import get_file_path
from dpckan.tests.dpckan_test import get_ckan_instance
| 42.4 | 54 | 0.886792 | 36 | 212 | 4.888889 | 0.388889 | 0.227273 | 0.340909 | 0.477273 | 0.801136 | 0.801136 | 0.625 | 0.431818 | 0 | 0 | 0 | 0 | 0.075472 | 212 | 4 | 55 | 53 | 0.897959 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 8 |
761b3c4ef9f0a778f094b8c8be79a8f4f964a233 | 2,586 | py | Python | csc121/lab6/chapter9.py | rbranford/csc121 | 52aee4940fb01778670c25fb6180a8641e14949e | [
"CC0-1.0"
] | null | null | null | csc121/lab6/chapter9.py | rbranford/csc121 | 52aee4940fb01778670c25fb6180a8641e14949e | [
"CC0-1.0"
] | null | null | null | csc121/lab6/chapter9.py | rbranford/csc121 | 52aee4940fb01778670c25fb6180a8641e14949e | [
"CC0-1.0"
] | null | null | null | def print_10_stars():
    for _ in range(10):
        print('*', end=' ')
    print()


def print_5_stars():
    for _ in range(5):
        print('*', end=' ')
    print()


def print_20_stars():
    for _ in range(20):
        print('*', end=' ')
    print()


def problem_2():
    print_10_stars()
    print_5_stars()
    print_20_stars()


def problem_3():
    for _ in range(10):
        for _ in range(10):
            print('*', end=' ')
        print()


def problem_4():
    for _ in range(10):
        for _ in range(5):
            print('*', end=' ')
        print()


def problem_5():
    for _ in range(5):
        for _ in range(20):
            print('*', end=' ')
        print()


def problem_6():
    for _ in range(10):
        for i in range(10):
            print(i, end=' ')
        print()


def problem_7():
    for i in range(10):
        for _ in range(10):
            print(i, end=' ')
        print()


def problem_8():
    for i in range(10):
        for j in range(i + 1):
            print(j, end=' ')
        print()


def problem_9():
    for i in range(10):
        for j in range(i):
            print(' ', end=' ')
        for j in range(10 - i):
            print(j, end=' ')
        print()


def problem_10():
    for i in range(1, 10):
        for j in range(1, 10):
            if i * j < 10:
                print(' ', end=' ')
            print(i * j, end=' ')
        print()


def problem_11():
    for i in range(10):
        for j in range(10 - i):
            print(' ', end=' ')
        for j in range(1, i + 1):
            print(j, end=' ')
        for j in range(i - 1, 0, -1):
            print(j, end=' ')
        print()


def problem_12():
    for i in range(10):
        for j in range(10 - i):
            print(' ', end=' ')
        for j in range(1, i + 1):
            print(j, end=' ')
        for j in range(i - 1, 0, -1):
            print(j, end=' ')
        print()
    for i in range(10):
        for j in range(i + 2):
            print(' ', end=' ')
        for j in range(1, 9 - i):
            print(j, end=' ')
        print()


def problem_13():
    for i in range(10):
        for j in range(10 - i):
            print(' ', end=' ')
        for j in range(1, i + 1):
            print(j, end=' ')
        for j in range(i - 1, 0, -1):
            print(j, end=' ')
        print()
    for i in range(10):
        for j in range(i + 2):
            print(' ', end=' ')
        for j in range(1, 9 - i):
            print(j, end=' ')
        for j in range(7 - i, 0, -1):
            print(j, end=' ')
        print()
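The rectangle problems above (3 through 5) share one nested-loop shape: the outer loop walks rows, the inner loop prints the columns, and a bare `print()` ends the row. A small sketch (not part of the original assignment) that returns the same pattern as a string instead of printing it:

```python
def star_grid(rows, cols):
    # Each row is cols stars joined by spaces (mirroring print('*', end=' '));
    # rows are joined by newlines (mirroring the trailing print()).
    return '\n'.join(' '.join('*' for _ in range(cols)) for _ in range(rows))
```

`problem_3` corresponds to printing `star_grid(10, 10)`, `problem_4` to `star_grid(10, 5)`, and `problem_5` to `star_grid(5, 20)`.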
| 19.892308 | 35 | 0.41686 | 361 | 2,586 | 2.889197 | 0.072022 | 0.261745 | 0.16395 | 0.189837 | 0.882071 | 0.799616 | 0.780441 | 0.663471 | 0.537872 | 0.407478 | 0 | 0.068829 | 0.4157 | 2,586 | 129 | 36 | 20.046512 | 0.621443 | 0 | 0 | 0.752475 | 0 | 0 | 0.015486 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.148515 | false | 0 | 0 | 0 | 0.148515 | 0.485149 | 0 | 0 | 0 | null | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
76275a557d6b3344e79bbeb0277cef0f9f902b41 | 249 | py | Python | v3/Libraries/builtin/replace/inline replace characters.py | TheShellLand/python | a35e9b32bec3a3ff03d6f0f4c2c2cc891180e516 | [
"MIT"
] | null | null | null | v3/Libraries/builtin/replace/inline replace characters.py | TheShellLand/python | a35e9b32bec3a3ff03d6f0f4c2c2cc891180e516 | [
"MIT"
] | 1 | 2021-06-01T22:50:19.000Z | 2021-06-01T22:50:19.000Z | v3/Libraries/builtin/replace/inline replace characters.py | TheShellLand/python | a35e9b32bec3a3ff03d6f0f4c2c2cc891180e516 | [
"MIT"
] | null | null | null | #!/usr/bin/env python2.7
# -*- coding: utf8 -*-
# '0c a8 f0 d6 02 00 00 00 00 d0 1c d1 10 d2 00 d3 00 d7 01 d4 78 20 ff'.replace(' ', '').decode('hex')
print('0c a8 f0 d6 02 00 00 00 00 d0 1c d1 10 d2 00 d3 00 d7 01 d4 78 20 ff'.replace(' ', ''))
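The commented `.decode('hex')` form above only works on Python 2 (the shebang targets 2.7). On Python 3 the same space-stripped hex string can be decoded with `bytes.fromhex`, for example:

```python
packet = '0c a8 f0 d6 02 00 00 00 00 d0 1c d1 10 d2 00 d3 00 d7 01 d4 78 20 ff'
# bytes.fromhex replaces the Python 2 str.decode('hex') idiom
raw = bytes.fromhex(packet.replace(' ', ''))
```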
| 35.571429 | 103 | 0.590361 | 58 | 249 | 2.534483 | 0.5 | 0.163265 | 0.163265 | 0.108844 | 0.721088 | 0.721088 | 0.721088 | 0.721088 | 0.721088 | 0.721088 | 0 | 0.365079 | 0.240964 | 249 | 6 | 104 | 41.5 | 0.412698 | 0.586345 | 0 | 0 | 0 | 1 | 0.69 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 13 |
5202ba9389d59e26e377cfcc4b11bff8cde3a1a2 | 10,804 | py | Python | integration/data/test_ha.py | ywei88/longhorn-engine | 552d4b46cb8ae88f202b5697afc2d4590dc9f1cd | [
"Apache-2.0"
] | null | null | null | integration/data/test_ha.py | ywei88/longhorn-engine | 552d4b46cb8ae88f202b5697afc2d4590dc9f1cd | [
"Apache-2.0"
] | null | null | null | integration/data/test_ha.py | ywei88/longhorn-engine | 552d4b46cb8ae88f202b5697afc2d4590dc9f1cd | [
"Apache-2.0"
] | null | null | null | import cmd
import common
from common import grpc_controller, grpc_replica1, grpc_replica2 # NOQA
from common import grpc_backing_replica1, grpc_backing_replica2 # NOQA
from common import prepare_backup_dir, BACKUP_DIR # NOQA
from common import open_replica, get_blockdev, cleanup_replica
from common import verify_read, verify_data, verify_async, VOLUME_HEAD
from snapshot_tree import snapshot_tree_build, snapshot_tree_verify
def test_ha_single_replica_failure(grpc_controller,  # NOQA
                                   grpc_replica1, grpc_replica2):  # NOQA
    open_replica(grpc_replica1)
    open_replica(grpc_replica2)
    replicas = grpc_controller.replica_list()
    assert len(replicas) == 0
    v = grpc_controller.volume_start(replicas=[
        common.REPLICA1,
        common.REPLICA2
    ])
    assert v.replicaCount == 2
    replicas = grpc_controller.replica_list()
    assert len(replicas) == 2
    assert replicas[0].mode == "RW"
    assert replicas[1].mode == "RW"
    dev = get_blockdev()
    data = common.random_string(128)
    data_offset = 1024
    verify_data(dev, data_offset, data)
    cleanup_replica(grpc_replica2)
    verify_async(dev, 10, 128, 1)
    common.verify_replica_state(grpc_controller, 1, "ERR")
    verify_read(dev, data_offset, data)


def test_ha_single_replica_rebuild(grpc_controller,  # NOQA
                                   grpc_replica1, grpc_replica2):  # NOQA
    open_replica(grpc_replica1)
    open_replica(grpc_replica2)
    replicas = grpc_controller.replica_list()
    assert len(replicas) == 0
    v = grpc_controller.volume_start(replicas=[
        common.REPLICA1,
        common.REPLICA2
    ])
    assert v.replicaCount == 2
    replicas = grpc_controller.replica_list()
    assert len(replicas) == 2
    assert replicas[0].mode == "RW"
    assert replicas[1].mode == "RW"
    dev = get_blockdev()
    data = common.random_string(128)
    data_offset = 1024
    verify_data(dev, data_offset, data)
    # Cleanup replica2
    cleanup_replica(grpc_replica2)
    verify_async(dev, 10, 128, 1)
    common.verify_replica_state(grpc_controller, 1, "ERR")
    verify_read(dev, data_offset, data)
    grpc_controller.replica_delete(replicas[1].address)
    # Rebuild replica2
    open_replica(grpc_replica2)
    cmd.add_replica(common.REPLICA2)
    verify_async(dev, 10, 128, 1)
    common.verify_replica_state(grpc_controller, 1, "RW")
    verify_read(dev, data_offset, data)
    # WORKAROUND for unable to remove the parent of volume head
    newsnap = cmd.snapshot_create()
    info = cmd.snapshot_info()
    assert len(info) == 3
    sysnap = info[newsnap]["parent"]
    assert info[sysnap]["parent"] == ""
    assert newsnap in info[sysnap]["children"]
    assert info[sysnap]["usercreated"] is False
    assert info[sysnap]["removed"] is False
    cmd.snapshot_purge()
    info = cmd.snapshot_info()
    assert len(info) == 2
    assert info[newsnap] is not None
    assert info[VOLUME_HEAD] is not None


def test_ha_double_replica_rebuild(grpc_controller,  # NOQA
                                   grpc_replica1, grpc_replica2):  # NOQA
    open_replica(grpc_replica1)
    open_replica(grpc_replica2)
    replicas = grpc_controller.replica_list()
    assert len(replicas) == 0
    v = grpc_controller.volume_start(replicas=[
        common.REPLICA1,
        common.REPLICA2
    ])
    assert v.replicaCount == 2
    replicas = grpc_controller.replica_list()
    assert len(replicas) == 2
    assert replicas[0].mode == "RW"
    assert replicas[1].mode == "RW"
    dev = get_blockdev()
    data1 = common.random_string(128)
    data1_offset = 1024
    verify_data(dev, data1_offset, data1)
    # Close replica2
    r2 = grpc_replica2.replica_get()
    assert r2.revisionCounter == 1
    grpc_replica2.replica_close()
    verify_async(dev, 10, 128, 1)
    common.verify_replica_state(grpc_controller, 1, "ERR")
    verify_read(dev, data1_offset, data1)
    data2 = common.random_string(128)
    data2_offset = 512
    verify_data(dev, data2_offset, data2)
    # Close replica1
    r1 = grpc_replica1.replica_get()
    assert r1.revisionCounter == 12  # 1 + 10 + 1
    grpc_replica1.replica_close()
    # Restart volume
    common.cleanup_controller(grpc_controller)
    replicas = grpc_controller.replica_list()
    assert len(replicas) == 0
    # NOTE the order is reversed here
    v = grpc_controller.volume_start(replicas=[
        common.REPLICA2,
        common.REPLICA1
    ])
    assert v.replicaCount == 2
    # replica2 is out because of lower revision counter
    replicas = grpc_controller.replica_list()
    assert len(replicas) == 2
    assert replicas[0].mode == "ERR"
    assert replicas[1].mode == "RW"
    verify_read(dev, data1_offset, data1)
    verify_read(dev, data2_offset, data2)
    # Rebuild replica2
    r2 = grpc_replica2.replica_get()
    assert r2.revisionCounter == 1
    grpc_replica2.replica_close()
    grpc_controller.replica_delete(replicas[0].address)
    cmd.add_replica(common.REPLICA2)
    verify_async(dev, 10, 128, 1)
    common.verify_replica_state(grpc_controller, 1, "RW")
    verify_read(dev, data1_offset, data1)
    verify_read(dev, data2_offset, data2)
    r1 = grpc_replica1.replica_get()
    r2 = grpc_replica2.replica_get()
    assert r1.revisionCounter == 22  # 1 + 10 + 1 + 10
    assert r2.revisionCounter == 22  # must be in sync with r1


def test_ha_revision_counter_consistency(grpc_controller,  # NOQA
                                         grpc_replica1, grpc_replica2):  # NOQA
    open_replica(grpc_replica1)
    open_replica(grpc_replica2)
    replicas = grpc_controller.replica_list()
    assert len(replicas) == 0
    v = grpc_controller.volume_start(replicas=[
        common.REPLICA1,
        common.REPLICA2
    ])
    assert v.replicaCount == 2
    replicas = grpc_controller.replica_list()
    assert len(replicas) == 2
    assert replicas[0].mode == "RW"
    assert replicas[1].mode == "RW"
    dev = get_blockdev()
    common.verify_async(dev, 10, 128, 100)
    r1 = grpc_replica1.replica_get()
    r2 = grpc_replica2.replica_get()
    # kernel can merge requests so backend may not receive 1000 writes
    assert r1.revisionCounter > 0
    assert r1.revisionCounter == r2.revisionCounter


def test_snapshot_tree_rebuild(grpc_controller,  # NOQA
                               grpc_replica1, grpc_replica2):  # NOQA
    offset = 0
    length = 128
    open_replica(grpc_replica1)
    open_replica(grpc_replica2)
    replicas = grpc_controller.replica_list()
    assert len(replicas) == 0
    v = grpc_controller.volume_start(replicas=[
        common.REPLICA1,
        common.REPLICA2
    ])
    assert v.replicaCount == 2
    replicas = grpc_controller.replica_list()
    assert len(replicas) == 2
    assert replicas[0].mode == "RW"
    assert replicas[1].mode == "RW"
    dev = get_blockdev()
    snap, snap_data = snapshot_tree_build(dev, offset, length)
    data = common.random_string(128)
    data_offset = 1024
    verify_data(dev, data_offset, data)
    # Cleanup replica2
    cleanup_replica(grpc_replica2)
    verify_async(dev, 10, 128, 1)
    common.verify_replica_state(grpc_controller, 1, "ERR")
    verify_read(dev, data_offset, data)
    grpc_controller.replica_delete(replicas[1].address)
    # Rebuild replica2
    open_replica(grpc_replica2)
    cmd.add_replica(common.REPLICA2)
    verify_async(dev, 10, 128, 1)
    common.verify_replica_state(grpc_controller, 1, "RW")
    snapshot_tree_verify(dev, offset, length, snap, snap_data)


def test_ha_single_backing_replica_rebuild(grpc_controller,  # NOQA
                                           grpc_backing_replica1,  # NOQA
                                           grpc_backing_replica2):  # NOQA
    prepare_backup_dir(BACKUP_DIR)
    open_replica(grpc_backing_replica1)
    open_replica(grpc_backing_replica2)
    replicas = grpc_controller.replica_list()
    assert len(replicas) == 0
    v = grpc_controller.volume_start(replicas=[
        common.BACKED_REPLICA1,
        common.BACKED_REPLICA2
    ])
    assert v.replicaCount == 2
    replicas = grpc_controller.replica_list()
    assert len(replicas) == 2
    assert replicas[0].mode == "RW"
    assert replicas[1].mode == "RW"
    dev = get_blockdev()
    data = common.random_string(128)
    data_offset = 1024
    verify_data(dev, data_offset, data)
    # Cleanup replica2
    cleanup_replica(grpc_backing_replica2)
    verify_async(dev, 10, 128, 1)
    common.verify_replica_state(grpc_controller, 1, "ERR")
    verify_read(dev, data_offset, data)
    grpc_controller.replica_delete(replicas[1].address)
    # Rebuild replica2
    open_replica(grpc_backing_replica2)
    cmd.add_replica(common.BACKED_REPLICA2)
    verify_async(dev, 10, 128, 1)
    common.verify_replica_state(grpc_controller, 1, "RW")
    verify_read(dev, data_offset, data)
    # WORKAROUND for unable to remove the parent of volume head
    newsnap = cmd.snapshot_create()
    info = cmd.snapshot_info()
    assert len(info) == 3
    sysnap = info[newsnap]["parent"]
    assert info[sysnap]["parent"] == ""
    assert newsnap in info[sysnap]["children"]
    assert info[sysnap]["usercreated"] is False
    assert info[sysnap]["removed"] is False
    cmd.snapshot_purge()
    info = cmd.snapshot_info()
    assert len(info) == 2
    assert info[newsnap] is not None
    assert info[VOLUME_HEAD] is not None


def test_ha_remove_extra_disks(grpc_controller,  # NOQA
                               grpc_replica1, grpc_replica2):  # NOQA
    prepare_backup_dir(BACKUP_DIR)
    open_replica(grpc_replica1)
    replicas = grpc_controller.replica_list()
    assert len(replicas) == 0
    v = grpc_controller.volume_start(replicas=[
        common.REPLICA1,
    ])
    assert v.replicaCount == 1
    replicas = grpc_controller.replica_list()
    assert len(replicas) == 1
    assert replicas[0].mode == "RW"
    dev = get_blockdev()
    wasted_data = common.random_string(128)
    data_offset = 1024
    verify_data(dev, data_offset, wasted_data)
    # now replica1 contains extra data in a snapshot
    cmd.snapshot_create()
    common.cleanup_controller(grpc_controller)
    open_replica(grpc_replica2)
    replicas = grpc_controller.replica_list()
    assert len(replicas) == 0
    v = grpc_controller.volume_start(replicas=[
        common.REPLICA2,
    ])
    assert v.replicaCount == 1
    replicas = grpc_controller.replica_list()
    assert len(replicas) == 1
    assert replicas[0].mode == "RW"
    dev = get_blockdev()
    data = common.random_string(128)
    data_offset = 1024
    verify_data(dev, data_offset, data)
    r1 = grpc_replica1.replica_reload()
    print(r1)
    cmd.add_replica(common.REPLICA1)
    verify_data(dev, data_offset, data)
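The rebuild tests above hinge on one rule: when the volume restarts, the replica with the lower revision counter is marked ERR and rebuilt from the one with the higher counter. A toy distillation of that selection rule (names hypothetical, not longhorn-engine's actual API):

```python
def pick_sync_source(replicas):
    """Given (name, revision_counter) pairs, return the name of the
    most up-to-date replica, i.e. the one to rebuild the others from."""
    return max(replicas, key=lambda r: r[1])[0]
```

In `test_ha_double_replica_rebuild`, replica1 ends at counter 12 and replica2 at 1, so replica1 wins on restart even though the start order lists replica2 first.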
| 27.077694 | 79 | 0.68197 | 1,353 | 10,804 | 5.193644 | 0.093126 | 0.099616 | 0.065746 | 0.074285 | 0.84218 | 0.774441 | 0.762345 | 0.762345 | 0.749538 | 0.734737 | 0 | 0.038402 | 0.223899 | 10,804 | 398 | 80 | 27.145729 | 0.799642 | 0.056831 | 0 | 0.818868 | 0 | 0 | 0.013002 | 0 | 0 | 0 | 0 | 0 | 0.249057 | 1 | 0.026415 | false | 0 | 0.030189 | 0 | 0.056604 | 0.003774 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
521945c2757073bdf5c69a830c1b5b28c25afe20 | 38,049 | py | Python | tests/test_blobxfer_models_upload.py | temporaer/blobxfer | 8602006192c0f8f7bb078e3d6da20396c07f302a | [
"MIT"
] | null | null | null | tests/test_blobxfer_models_upload.py | temporaer/blobxfer | 8602006192c0f8f7bb078e3d6da20396c07f302a | [
"MIT"
] | null | null | null | tests/test_blobxfer_models_upload.py | temporaer/blobxfer | 8602006192c0f8f7bb078e3d6da20396c07f302a | [
"MIT"
] | null | null | null | # coding=utf-8
"""Tests for models upload"""
# stdlib imports
import hashlib
try:
    import unittest.mock as mock
except ImportError:  # noqa
    import mock
try:
    import pathlib2 as pathlib
except ImportError:  # noqa
    import pathlib
# non-stdlib imports
import bitstring
import pytest
# local imports
import blobxfer.models.azure as azmodels
import blobxfer.models.metadata as metadata
import blobxfer.models.options as options
import blobxfer.operations.azure as azops
import blobxfer.util as util
# module under test
import blobxfer.models.upload as upload
def test_vectorediodistributionmode():
    a = upload.VectoredIoDistributionMode('stripe')
    assert a == upload.VectoredIoDistributionMode.Stripe
    assert str(a) == 'stripe'


def test_localpath(tmpdir):
    tmpdir.join('a').write('zz')
    pp = pathlib.Path(str(tmpdir))
    rp = pathlib.Path('a')
    file = pp / rp
    stat = file.stat()
    lp = upload.LocalPath(pp, rp, use_stdin=True, view=None)
    assert lp.absolute_path == file
    assert lp.size == 0
    assert lp.total_size == 0
    assert lp.lmt == 0
    assert lp.mode.replace('o', '') == '00'
    assert lp.uid == 0
    assert lp.gid == 0
    lp = upload.LocalPath(pp, rp, use_stdin=False, view=None)
    assert lp.absolute_path == file
    assert lp.size == stat.st_size
    assert lp.total_size == stat.st_size
    assert lp.lmt == stat.st_mtime
    assert lp.mode.replace('o', '') == str(oct(stat.st_mode)).replace('o', '')
    assert lp.uid == stat.st_uid
    assert lp.gid == stat.st_gid
    lpview = upload.LocalPathView(
        fd_start=1,
        fd_end=2,
        slice_num=1,
        mode=upload.VectoredIoDistributionMode.Stripe,
        total_slices=2,
        next=None,
    )
    lp = upload.LocalPath(pp, rp, use_stdin=False, view=lpview)
    assert lp.absolute_path == file
    assert lp.size == 1
    assert lp.total_size == stat.st_size
    assert lp.lmt == stat.st_mtime
    assert lp.mode.replace('o', '') == str(oct(stat.st_mode)).replace('o', '')
    assert lp.uid == stat.st_uid
    assert lp.gid == stat.st_gid


def _resolve_pypath(path):
    return str(pathlib.Path(str(path)).resolve())


def test_localsourcepaths_files(tmpdir):
    tmpdir.mkdir('abc')
    tmpdir.join('moo.cow').write('z')
    abcpath = tmpdir.join('abc')
    abcpath.join('hello.txt').write('hello')
    abcpath.join('blah.x').write('x')
    abcpath.join('blah.y').write('x')
    abcpath.join('blah.z').write('x')
    abcpath.mkdir('def')
    defpath = abcpath.join('def')
    defpath.join('world.txt').write('world')
    defpath.join('moo.cow').write('y')
    a = upload.LocalSourcePath()
    a.add_includes('**')
    a.add_includes('*.txt')
    a.add_includes(('moo.cow', '*blah*'))
    with pytest.raises(ValueError):
        a.add_includes('**/**/*')
    a.add_excludes('**')
    a.add_excludes('**/blah.x')
    with pytest.raises(ValueError):
        a.add_excludes('**/**/blah.x')
    a.add_excludes(['world.txt'])
    a.add_path(str(tmpdir))
    a_set = set()
    for file in a.files(True):
        sfile = str(file.parent_path / file.relative_path)
        a_set.add(sfile)
    assert len(a._include) == 3
    assert len(a._exclude) == 2
    assert not a.can_rename()
    assert len(a.paths) == 1
    assert _resolve_pypath(abcpath.join('blah.x')) in a_set
    assert _resolve_pypath(defpath.join('world.txt')) in a_set
    assert _resolve_pypath(defpath.join('moo.cow')) not in a_set
    b = upload.LocalSourcePath()
    b.add_includes(['moo.cow', '*blah*'])
    b.add_includes('*.txt')
    b.add_excludes(('world.txt',))
    b.add_excludes('**/blah.x')
    b.add_paths([pathlib.Path(str(tmpdir))])
    for file in a.files(True):
        sfile = str(file.parent_path / file.relative_path)
        assert sfile in a_set
    assert upload.LocalSourcePath.is_stdin('-')
    assert upload.LocalSourcePath.is_stdin('/dev/stdin')
    assert not upload.LocalSourcePath.is_stdin('/')
    a = upload.LocalSourcePath()
    a.add_includes('z')
    a.add_path(str(tmpdir) + '/abc/hello.txt')
    a_set = set()
    for file in a.files(True):
        sfile = str(file.parent_path / file.relative_path)
        a_set.add(sfile)
    assert len(a_set) == 0
    c = upload.LocalSourcePath()
    c.add_path('-')
    for file in c.files(False):
        assert file.use_stdin
    d = upload.LocalSourcePath()
    d.add_path(str(tmpdir.join('moo.cow')))
    i = 0
    for file in d.files(True):
        assert str(file.parent_path.absolute()) == str(tmpdir)
        assert str(file.relative_path) == 'moo.cow'
        assert not file.use_stdin
        i += 1
    assert i == 1
    tmpdir.join('moo.cow2').ensure(file=True)
    d.add_path(str(tmpdir.join('moo.cow2')))
    i = 0
    for file in d.files(True):
        i += 1
    assert i == 2


def test_specification(tmpdir):
    lsp = upload.LocalSourcePath()
    lsp.add_paths(['-', '/dev/stdin'])
    with pytest.raises(ValueError):
        upload.Specification(
            upload_options=options.Upload(
                access_tier=None,
                chunk_size_bytes=4194304,
                delete_extraneous_destination=False,
                delete_only=False,
                mode=azmodels.StorageModes.Auto,
                one_shot_bytes=0,
                overwrite=True,
                recursive=True,
                rename=True,
                rsa_public_key=None,
                stdin_as_page_blob_size=0,
                store_file_properties=options.FileProperties(
                    attributes=True,
                    cache_control='cc',
                    content_type='ct',
                    lmt=None,
                    md5=True,
                ),
                strip_components=0,
                vectored_io=None,
            ),
            skip_on_options=options.SkipOn(
                filesize_match=True,
                lmt_ge=False,
                md5_match=True,
            ),
            local_source_path=lsp,
        )
    lsp = upload.LocalSourcePath()
    lsp.add_path(str(tmpdir))
    with pytest.raises(ValueError):
        upload.Specification(
            upload_options=options.Upload(
                access_tier=None,
                chunk_size_bytes=4194304,
                delete_extraneous_destination=False,
                delete_only=False,
                mode=azmodels.StorageModes.Auto,
                one_shot_bytes=0,
                overwrite=True,
                recursive=True,
                rename=True,
                rsa_public_key=None,
                stdin_as_page_blob_size=0,
                store_file_properties=options.FileProperties(
                    attributes=True,
                    cache_control='cc',
                    content_type='ct',
                    lmt=None,
                    md5=True,
                ),
                strip_components=0,
                vectored_io=None,
            ),
            skip_on_options=options.SkipOn(
                filesize_match=True,
                lmt_ge=False,
                md5_match=True,
            ),
            local_source_path=lsp,
        )
    lsp = upload.LocalSourcePath()
    lsp.add_path(str(tmpdir))
    with pytest.raises(ValueError):
        upload.Specification(
            upload_options=options.Upload(
                access_tier=None,
                chunk_size_bytes=-1,
                delete_extraneous_destination=False,
                delete_only=False,
                mode=azmodels.StorageModes.Auto,
                one_shot_bytes=0,
                overwrite=True,
                recursive=True,
                rename=False,
                rsa_public_key=None,
                stdin_as_page_blob_size=0,
                store_file_properties=options.FileProperties(
                    attributes=True,
                    cache_control='cc',
                    content_type='ct',
                    lmt=None,
                    md5=True,
                ),
                strip_components=0,
                vectored_io=None,
            ),
            skip_on_options=options.SkipOn(
                filesize_match=True,
                lmt_ge=False,
                md5_match=True,
            ),
            local_source_path=lsp,
        )
    with pytest.raises(ValueError):
        upload.Specification(
            upload_options=options.Upload(
                access_tier=None,
                chunk_size_bytes=upload._MAX_BLOCK_BLOB_CHUNKSIZE_BYTES + 1,
                delete_extraneous_destination=False,
                delete_only=False,
                mode=azmodels.StorageModes.Auto,
                one_shot_bytes=0,
                overwrite=True,
                recursive=True,
                rename=False,
                rsa_public_key=None,
                stdin_as_page_blob_size=0,
                store_file_properties=options.FileProperties(
                    attributes=True,
                    cache_control='cc',
                    content_type='ct',
                    lmt=None,
                    md5=True,
                ),
                strip_components=0,
                vectored_io=None,
            ),
            skip_on_options=options.SkipOn(
                filesize_match=True,
                lmt_ge=False,
                md5_match=True,
            ),
            local_source_path=lsp,
        )
    with pytest.raises(ValueError):
        upload.Specification(
            upload_options=options.Upload(
                access_tier=None,
                chunk_size_bytes=4194304,
                delete_extraneous_destination=False,
                delete_only=False,
                mode=azmodels.StorageModes.Auto,
                one_shot_bytes=-1,
                overwrite=True,
                recursive=True,
                rename=False,
                rsa_public_key=None,
                stdin_as_page_blob_size=0,
                store_file_properties=options.FileProperties(
                    attributes=True,
                    cache_control='cc',
                    content_type='ct',
                    lmt=None,
                    md5=True,
                ),
                strip_components=0,
                vectored_io=None,
            ),
            skip_on_options=options.SkipOn(
                filesize_match=True,
                lmt_ge=False,
                md5_match=True,
            ),
            local_source_path=lsp,
        )
    with pytest.raises(ValueError):
        upload.Specification(
            upload_options=options.Upload(
                access_tier=None,
                chunk_size_bytes=4194304,
                delete_extraneous_destination=False,
                delete_only=False,
                mode=azmodels.StorageModes.Auto,
                one_shot_bytes=upload._MAX_BLOCK_BLOB_ONESHOT_BYTES + 1,
                overwrite=True,
                recursive=True,
                rename=False,
                rsa_public_key=None,
                stdin_as_page_blob_size=0,
                store_file_properties=options.FileProperties(
                    attributes=True,
                    cache_control=None,
                    content_type=None,
                    lmt=None,
                    md5=True,
                ),
                strip_components=0,
                vectored_io=None,
            ),
            skip_on_options=options.SkipOn(
                filesize_match=True,
                lmt_ge=False,
                md5_match=True,
            ),
            local_source_path=lsp,
        )
    spec = upload.Specification(
        upload_options=options.Upload(
            access_tier=None,
            chunk_size_bytes=4194304,
            delete_extraneous_destination=False,
            delete_only=False,
            mode=azmodels.StorageModes.Auto,
            one_shot_bytes=0,
            overwrite=True,
            recursive=True,
            rename=False,
            rsa_public_key=None,
            stdin_as_page_blob_size=0,
            store_file_properties=options.FileProperties(
                attributes=True,
                cache_control=None,
                content_type=None,
                lmt=None,
                md5=True,
            ),
            strip_components=0,
            vectored_io=None,
        ),
        skip_on_options=options.SkipOn(
            filesize_match=True,
            lmt_ge=False,
            md5_match=True,
        ),
        local_source_path=lsp,
    )
    spec.add_azure_destination_path(azops.DestinationPath())
    assert len(spec.destinations) == 1


def test_descriptor(tmpdir):
    size = 32
    tmpdir.join('a').write('z' * size)
    lp = upload.LocalPath(pathlib.Path(str(tmpdir)), pathlib.Path('a'))
    opts = mock.MagicMock()
    opts.chunk_size_bytes = 8
    opts.one_shot_bytes = 0
    opts.store_file_properties.md5 = False
    opts.rsa_public_key = None
    ase = azmodels.StorageEntity('cont')
    ase._mode = azmodels.StorageModes.Block
    ase._name = 'name'
    ase._size = size
    ase._encryption = None
    ase2 = azmodels.StorageEntity('cont')
    ase2._mode = azmodels.StorageModes.Block
    ase2._name = 'name2'
    ase2._size = size
    ase2._encryption = None
    ase.replica_targets = [ase2]
    ud = upload.Descriptor(
        lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
    assert ud.hmac is None
    assert ud.md5 is None
    assert ud._outstanding_ops == 4 * 2
    assert ud._completed_chunks is not None
    assert ud._md5_cache is not None
    assert ud._replica_counters is not None
    assert ud.entity == ase
    assert not ud.must_compute_md5
    assert not ud.all_operations_completed
    assert ud.last_block_num == -1
    assert ud.is_resumable
    assert not ud.remote_is_file
    assert not ud.remote_is_page_blob
    assert not ud.remote_is_append_blob
    assert not ud.is_one_shot_block_blob
    assert ud.requires_put_block_list
    assert not ud.requires_non_encrypted_md5_put
    assert not ud.requires_set_file_properties_md5
    assert not ud.requires_access_tier_set
    assert ud.requires_resize() == (False, ud._offset)
    # test sym key
    ase = azmodels.StorageEntity('cont')
    ase._mode = azmodels.StorageModes.Block
    ase._name = 'name'
    ase._size = size
    ase._encryption = mock.MagicMock()
    opts.rsa_public_key = None
    with pytest.raises(RuntimeError):
        ud = upload.Descriptor(
            lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())


def test_descriptor_complete_offset_upload(tmpdir):
    tmpdir.join('a').write('z' * 32)
    lp = upload.LocalPath(pathlib.Path(str(tmpdir)), pathlib.Path('a'))
    opts = mock.MagicMock()
    opts.chunk_size_bytes = 16
    opts.one_shot_bytes = 0
    opts.store_file_properties.md5 = True
    opts.rsa_public_key = None
    ase = azmodels.StorageEntity('cont')
    ase._mode = azmodels.StorageModes.Block
    ase._name = 'name'
    ase._size = 32
    ase._encryption = None
    ase.replica_targets = [ase]
    ud = upload.Descriptor(
        lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
    ud._md5_cache[0] = 'md50'
    ud._md5_cache[1] = 'md51'
    ud.complete_offset_upload(0)
    assert ud._outstanding_ops == 3
    assert ud._replica_counters[0] == 0
    ud.complete_offset_upload(1)
    assert ud._outstanding_ops == 2
    assert ud._replica_counters[1] == 0
    # fill md5 cache with junk to trigger gc on next complete
    for i in range(-30, -1):
        ud._md5_cache[i] = ''
    ud.complete_offset_upload(0)
    assert ud._outstanding_ops == 1
    assert 0 not in ud._replica_counters
    assert len(ud._md5_cache) == 2
    ud.complete_offset_upload(1)
    assert ud._outstanding_ops == 0
    assert 1 not in ud._replica_counters
    assert len(ud._md5_cache) == 0


def test_descriptor_hmac_data(tmpdir):
    tmpdir.join('a').write('z' * 32)
    lp = upload.LocalPath(pathlib.Path(str(tmpdir)), pathlib.Path('a'))
    opts = mock.MagicMock()
    opts.chunk_size_bytes = 16
    opts.one_shot_bytes = 0
    opts.store_file_properties.md5 = True
    opts.rsa_public_key = None
    ase = azmodels.StorageEntity('cont')
    ase._mode = azmodels.StorageModes.Block
    ase._name = 'name'
    ase._size = 32
    ase._encryption = mock.MagicMock()
    ase._encryption.symmetric_key = 'abc'
    ase.replica_targets = [ase]
    ud = upload.Descriptor(
        lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
    assert ud.hmac is not None
    ud.hmac_data(b'\0')


def test_descriptor_initialize_encryption(tmpdir):
    tmpdir.join('a').write('z' * 32)
    lp = upload.LocalPath(pathlib.Path(str(tmpdir)), pathlib.Path('a'))
    opts = mock.MagicMock()
    opts.chunk_size_bytes = 16
    opts.one_shot_bytes = 0
    opts.store_file_properties.md5 = True
    opts.rsa_public_key = 'abc'
    ase = azmodels.StorageEntity('cont')
    ase._mode = azmodels.StorageModes.Block
    ase._name = 'name'
    ase._size = 32
    ud = upload.Descriptor(
        lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
    assert ud.hmac is not None
    assert ud.entity.is_encrypted


def test_descriptor_compute_remote_size(tmpdir):
    tmpdir.join('a').write('z' * 32)
    lp = upload.LocalPath(pathlib.Path(str(tmpdir)), pathlib.Path('a'))
    # encrypted remote size with replica
    opts = mock.MagicMock()
    opts.chunk_size_bytes = 16
    opts.one_shot_bytes = 0
    opts.store_file_properties.md5 = True
    opts.rsa_public_key = 'abc'
    ase = azmodels.StorageEntity('cont')
    ase._mode = azmodels.StorageModes.Block
    ase._name = 'name'
    ase._encryption = mock.MagicMock()
    ase._encryption.symmetric_key = 'abc'
    ase2 = azmodels.StorageEntity('cont')
    ase2._mode = azmodels.StorageModes.Block
    ase2._name = 'name2'
    ase.replica_targets = [ase2]
    ud = upload.Descriptor(
        lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
    ud._compute_remote_size(opts)
    assert ud.entity.size == 48
    for rt in ase.replica_targets:
        assert rt.size == ud.entity.size
    # remote size
    opts = mock.MagicMock()
    opts.chunk_size_bytes = 16
    opts.one_shot_bytes = 0
    opts.store_file_properties.md5 = True
    opts.rsa_public_key = None
    ase = azmodels.StorageEntity('cont')
    ase._mode = azmodels.StorageModes.Block
    ase._name = 'name'
    ase._encryption = None
    ud = upload.Descriptor(
        lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
    ud._compute_remote_size(opts)
    assert ud.entity.size == 32
    # remote size of zero
    tmpdir.join('b').ensure(file=True)
    lp = upload.LocalPath(pathlib.Path(str(tmpdir)), pathlib.Path('b'))
    ase = azmodels.StorageEntity('cont')
    ase._mode = azmodels.StorageModes.Block
    ase._name = 'name'
    ase._encryption = None
    ud = upload.Descriptor(
        lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
    ud._compute_remote_size(opts)
    assert ud.entity.size == 0
    # stdin as page, resize
    lp = upload.LocalPath(pathlib.Path('-'), pathlib.Path('-'), use_stdin=True)
    opts.stdin_as_page_blob_size = 0
    ase._mode = azmodels.StorageModes.Page
    ud = upload.Descriptor(
        lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
    ud._compute_remote_size(opts)
    assert ud.entity.size == upload._MAX_PAGE_BLOB_SIZE
    assert ud._needs_resize
# stdin as page, no resize
lp = upload.LocalPath(pathlib.Path('-'), pathlib.Path('-'), use_stdin=True)
opts.stdin_as_page_blob_size = 32
ase._mode = azmodels.StorageModes.Page
ud = upload.Descriptor(
lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
ud._compute_remote_size(opts)
assert ud.entity.size == 32
assert not ud._needs_resize
def test_descriptor_adjust_chunk_size(tmpdir):
tmpdir.join('a').ensure(file=True)
lp = upload.LocalPath(pathlib.Path(str(tmpdir)), pathlib.Path('a'))
opts = mock.MagicMock()
opts.chunk_size_bytes = 0
opts.one_shot_bytes = 0
opts.store_file_properties.md5 = True
opts.rsa_public_key = None
ase = azmodels.StorageEntity('cont')
ase._mode = azmodels.StorageModes.Block
ase._name = 'name'
ase._encryption = None
ud = upload.Descriptor(
lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
assert ud._chunk_size == 0
with mock.patch('blobxfer.models.upload._DEFAULT_AUTO_CHUNKSIZE_BYTES', 1):
with mock.patch(
'blobxfer.models.upload._MAX_BLOCK_BLOB_CHUNKSIZE_BYTES', 3):
with mock.patch('blobxfer.models.upload._MAX_NUM_CHUNKS', 2):
tmpdir.join('a').write('z' * 4)
lp = upload.LocalPath(
pathlib.Path(str(tmpdir)), pathlib.Path('a'))
ud = upload.Descriptor(
lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
assert ud._chunk_size == 2
lp = upload.LocalPath(
pathlib.Path(str(tmpdir)), pathlib.Path('-'), use_stdin=True)
ud = upload.Descriptor(
lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
assert ud._chunk_size == upload._MAX_NONBLOCK_BLOB_CHUNKSIZE_BYTES
tmpdir.join('a').write('z' * 32)
lp = upload.LocalPath(pathlib.Path(str(tmpdir)), pathlib.Path('a'))
ase = azmodels.StorageEntity('cont')
ase._mode = azmodels.StorageModes.Page
ase._name = 'name'
ase._encryption = None
ud = upload.Descriptor(
lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
assert ud._chunk_size == 32
ase = azmodels.StorageEntity('cont')
ase._mode = azmodels.StorageModes.Append
ase._name = 'name'
ase._encryption = None
opts.chunk_size_bytes = upload._MAX_NONBLOCK_BLOB_CHUNKSIZE_BYTES + 1
with mock.patch(
'blobxfer.models.upload._MAX_NONBLOCK_BLOB_CHUNKSIZE_BYTES', 4):
ud = upload.Descriptor(
lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
assert ud._chunk_size == 4
ase = azmodels.StorageEntity('cont')
ase._mode = azmodels.StorageModes.Block
ase._name = 'name'
ase._encryption = None
opts.chunk_size_bytes = 32
opts.one_shot_bytes = 32
ud = upload.Descriptor(
lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
assert ud._chunk_size == 32
opts.one_shot_bytes = 31
with mock.patch(
'blobxfer.models.upload._MAX_BLOCK_BLOB_CHUNKSIZE_BYTES', 4):
ud = upload.Descriptor(
lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
assert ud._chunk_size == 4
ase = azmodels.StorageEntity('cont')
ase._mode = azmodels.StorageModes.File
ase._name = 'name'
ase._encryption = None
opts.chunk_size_bytes = upload._MAX_NONBLOCK_BLOB_CHUNKSIZE_BYTES + 1
with mock.patch(
'blobxfer.models.upload._MAX_NONBLOCK_BLOB_CHUNKSIZE_BYTES', 4):
ud = upload.Descriptor(
lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
assert ud._chunk_size == 4
ase = azmodels.StorageEntity('cont')
ase._mode = azmodels.StorageModes.Page
ase._name = 'name'
ase._encryption = None
opts.chunk_size_bytes = upload._MAX_NONBLOCK_BLOB_CHUNKSIZE_BYTES + 1
with mock.patch(
'blobxfer.models.upload._MAX_NONBLOCK_BLOB_CHUNKSIZE_BYTES', 4):
ud = upload.Descriptor(
lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
assert ud._chunk_size == 4
with mock.patch('blobxfer.models.upload._MAX_PAGE_BLOB_SIZE', 4):
with pytest.raises(RuntimeError):
upload.Descriptor(
lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
def test_compute_total_chunks(tmpdir):
tmpdir.join('a').ensure(file=True)
lp = upload.LocalPath(pathlib.Path(str(tmpdir)), pathlib.Path('a'))
opts = mock.MagicMock()
opts.chunk_size_bytes = 0
opts.one_shot_bytes = 0
opts.store_file_properties.md5 = True
opts.rsa_public_key = None
ase = azmodels.StorageEntity('cont')
ase._mode = azmodels.StorageModes.Block
ase._name = 'name'
ase._encryption = None
ud = upload.Descriptor(
lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
ud.entity.size = upload._MAX_BLOCK_BLOB_CHUNKSIZE_BYTES
with pytest.raises(RuntimeError):
ud._compute_total_chunks(1)
ud = upload.Descriptor(
lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
ud.entity.size = upload._MAX_BLOCK_BLOB_CHUNKSIZE_BYTES
ud._chunk_size = upload._MAX_BLOCK_BLOB_CHUNKSIZE_BYTES
with pytest.raises(RuntimeError):
ud._compute_total_chunks(1)
ase._mode = azmodels.StorageModes.Append
ud = upload.Descriptor(
lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
ud.entity.size = upload._MAX_BLOCK_BLOB_CHUNKSIZE_BYTES
ud._chunk_size = upload._MAX_NONBLOCK_BLOB_CHUNKSIZE_BYTES
with pytest.raises(RuntimeError):
ud._compute_total_chunks(1)
def test_resume(tmpdir):
tmpdir.join('a').write('zz')
lp = upload.LocalPath(pathlib.Path(str(tmpdir)), pathlib.Path('a'))
opts = mock.MagicMock()
opts.chunk_size_bytes = 0
opts.one_shot_bytes = 0
opts.store_file_properties.md5 = True
opts.rsa_public_key = None
ase = azmodels.StorageEntity('cont')
ase._mode = azmodels.StorageModes.Block
ase._name = 'name'
ase._encryption = None
# test no resume
ud = upload.Descriptor(lp, ase, 'uid', opts, mock.MagicMock(), None)
assert ud._resume() is None
# check if path exists in resume db
resume = mock.MagicMock()
resume.get_record.return_value = None
ud = upload.Descriptor(lp, ase, 'uid', opts, mock.MagicMock(), resume)
assert ud._resume() is None
# check same lengths
bad = mock.MagicMock()
bad.length = 0
resume.get_record.return_value = bad
assert ud._resume() is None
# check completed resume
comp = mock.MagicMock()
comp.length = 2
comp.completed = True
comp.total_chunks = 1
comp.chunk_size = 2
comp.completed_chunks = 1
resume.get_record.return_value = comp
ud._completed_chunks = mock.MagicMock()
ud._src_ase = ase
assert ud._resume() == 2
ase.replica_targets = [ase]
ud = upload.Descriptor(lp, ase, 'uid', opts, mock.MagicMock(), resume)
ud._completed_chunks = mock.MagicMock()
ud._src_ase = ase
assert ud._resume() == 4
# check no resume when encryption is enabled
ase = azmodels.StorageEntity('cont')
ase._mode = azmodels.StorageModes.Block
ase._name = 'name'
opts.rsa_public_key = 'abc'
nc = mock.MagicMock()
nc.length = 16
nc.completed = False
nc.total_chunks = 2
nc.chunk_size = 1
nc.completed_chunks = 1
resume.get_record.return_value = nc
ud = upload.Descriptor(lp, ase, 'uid', opts, mock.MagicMock(), resume)
assert ud._resume() is None
# check resume record local path does not exist
ase = azmodels.StorageEntity('cont')
ase._mode = azmodels.StorageModes.Block
ase._name = 'name'
nc.length = 2
nc.local_path = pathlib.Path('yyy')
opts.rsa_public_key = None
resume.get_record.return_value = nc
ud = upload.Descriptor(lp, ase, 'uid', opts, mock.MagicMock(), resume)
assert ud._resume() is None
# check resume no md5
opts.store_file_properties.md5 = False
ase = azmodels.StorageEntity('cont')
ase._mode = azmodels.StorageModes.Block
ase._name = 'name'
nc = mock.MagicMock()
nc.length = 2
nc.completed = False
nc.total_chunks = 2
nc.chunk_size = 1
cc = bitstring.BitArray(length=nc.total_chunks)
cc.set(True, 0)
nc.completed_chunks = cc.int
nc.local_path = lp.absolute_path
resume.get_record.return_value = nc
ud = upload.Descriptor(lp, ase, 'uid', opts, mock.MagicMock(), resume)
assert ud._resume() == 1
# check resume with md5 mismatch
opts.store_file_properties.md5 = True
ase = azmodels.StorageEntity('cont')
ase._mode = azmodels.StorageModes.Block
ase._name = 'name'
nc = mock.MagicMock()
nc.length = 2
nc.completed = False
nc.total_chunks = 2
nc.chunk_size = 1
cc = bitstring.BitArray(length=nc.total_chunks)
cc.set(True, 0)
nc.completed_chunks = cc.int
nc.local_path = lp.absolute_path
resume.get_record.return_value = nc
ud = upload.Descriptor(lp, ase, 'uid', opts, mock.MagicMock(), resume)
assert ud._resume() is None
# check resume with md5 match
ase = azmodels.StorageEntity('cont')
ase._mode = azmodels.StorageModes.Block
ase._name = 'name'
nc = mock.MagicMock()
nc.length = 2
nc.completed = False
nc.total_chunks = 2
nc.chunk_size = 1
cc = bitstring.BitArray(length=nc.total_chunks)
cc.set(True, 0)
nc.completed_chunks = cc.int
nc.local_path = lp.absolute_path
md5 = hashlib.md5()
md5.update(b'z')
nc.md5hexdigest = md5.hexdigest()
resume.get_record.return_value = nc
ud = upload.Descriptor(lp, ase, 'uid', opts, mock.MagicMock(), resume)
assert ud._resume() == 1
def test_descriptor_next_offsets(tmpdir):
tmpdir.join('a').write('ab')
lp = upload.LocalPath(pathlib.Path(str(tmpdir)), pathlib.Path('a'))
opts = mock.MagicMock()
opts.chunk_size_bytes = 1
opts.one_shot_bytes = 0
opts.store_file_properties.md5 = True
opts.rsa_public_key = None
ase = azmodels.StorageEntity('cont')
ase._mode = azmodels.StorageModes.Block
ase._name = 'name'
ase._encryption = None
# test normal
ud = upload.Descriptor(
lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
ud._resume = mock.MagicMock()
ud._resume.return_value = None
offsets, rb = ud.next_offsets()
assert rb is None
assert offsets.chunk_num == 0
assert offsets.num_bytes == 1
assert offsets.range_start == 0
assert offsets.range_end == 0
assert not offsets.pad
assert ud._offset == 1
assert ud._chunk_num == 1
offsets, rb = ud.next_offsets()
assert rb is None
assert offsets.chunk_num == 1
assert offsets.num_bytes == 1
assert offsets.range_start == 1
assert offsets.range_end == 1
assert not offsets.pad
assert ud._offset == 2
assert ud._chunk_num == 2
offsets, rb = ud.next_offsets()
assert rb is None
assert offsets is None
# test chunk size exceeds size
lp = upload.LocalPath(pathlib.Path(str(tmpdir)), pathlib.Path('a'))
opts.chunk_size_bytes = 3
ud = upload.Descriptor(
lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
ud._chunk_size = 3
ud._resume = mock.MagicMock()
ud._resume.return_value = None
offsets, rb = ud.next_offsets()
assert rb is None
assert offsets.chunk_num == 0
assert offsets.num_bytes == 2
assert offsets.range_start == 0
assert offsets.range_end == 1
assert not offsets.pad
assert ud._offset == 2
assert ud._chunk_num == 1
# test encrypted
tmpdir.join('a').write('z' * 16)
lp = upload.LocalPath(pathlib.Path(str(tmpdir)), pathlib.Path('a'))
opts.chunk_size_bytes = 16
opts.rsa_public_key = 'abc'
ud = upload.Descriptor(
lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
ud._resume = mock.MagicMock()
ud._resume.return_value = None
offsets, rb = ud.next_offsets()
assert rb is None
assert offsets.chunk_num == 0
assert offsets.num_bytes == 16
assert offsets.range_start == 0
assert offsets.range_end == 15
assert not offsets.pad
assert ud._offset == 16
assert ud._chunk_num == 1
offsets, rb = ud.next_offsets()
assert rb is None
assert offsets.chunk_num == 1
assert offsets.num_bytes == 16
assert offsets.range_start == 16
assert offsets.range_end == 31
assert offsets.pad
assert ud._offset == 32
assert ud._chunk_num == 2
def test_descriptor_read_data(tmpdir):
tmpdir.join('a').write('ab')
lp = upload.LocalPath(pathlib.Path(str(tmpdir)), pathlib.Path('a'))
# test normal
opts = mock.MagicMock()
opts.chunk_size_bytes = 1
opts.one_shot_bytes = 0
opts.store_file_properties.md5 = True
opts.rsa_public_key = None
ase = azmodels.StorageEntity('cont')
ase._mode = azmodels.StorageModes.Block
ase._name = 'name'
ase._encryption = None
ud = upload.Descriptor(
lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
ud._resume = mock.MagicMock()
ud._resume.return_value = None
# test no data to read
mockoffsets = mock.MagicMock()
mockoffsets.num_bytes = 0
data, newoffset = ud.read_data(mockoffsets)
assert data is None
assert newoffset is None
# test normal data to read
offsets, rb = ud.next_offsets()
assert rb is None
data, newoffset = ud.read_data(offsets)
assert data == b'a'
assert newoffset is None
# test stdin
with mock.patch(
'blobxfer.STDIN', new_callable=mock.PropertyMock) as patched_stdin:
patched_stdin.read = mock.MagicMock()
patched_stdin.read.return_value = b'z'
ud.local_path.use_stdin = True
data, newoffset = ud.read_data(offsets)
assert data == b'z'
assert newoffset.chunk_num == 0
assert newoffset.num_bytes == 1
assert newoffset.range_start == 0
assert newoffset.range_end == 0
assert not newoffset.pad
assert ud._total_chunks == 3
assert ud._outstanding_ops == 3
assert ud._offset == 1
assert ud.entity.size == 2
with mock.patch(
'blobxfer.STDIN', new_callable=mock.PropertyMock) as patched_stdin:
patched_stdin.read = mock.MagicMock()
patched_stdin.read.return_value = None
ud.local_path.use_stdin = True
data, newoffset = ud.read_data(offsets)
assert data is None
assert newoffset is None
assert ud._total_chunks == 2
assert ud._outstanding_ops == 2
assert ud._chunk_num == 0
def test_descriptor_generate_metadata(tmpdir):
tmpdir.join('a').write('ab')
lp = upload.LocalPath(pathlib.Path(str(tmpdir)), pathlib.Path('a'))
# test nothing
opts = mock.MagicMock()
opts.chunk_size_bytes = 1
opts.one_shot_bytes = 0
opts.store_file_properties.attributes = False
opts.store_file_properties.md5 = False
opts.rsa_public_key = None
ase = azmodels.StorageEntity('cont')
ase._mode = azmodels.StorageModes.Block
ase._name = 'name'
ase._encryption = None
ud = upload.Descriptor(
lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
meta = ud.generate_metadata()
assert meta is None
# test page md5 align
opts = mock.MagicMock()
opts.chunk_size_bytes = 1
opts.one_shot_bytes = 0
opts.store_file_properties.attributes = False
opts.store_file_properties.md5 = True
opts.rsa_public_key = None
ase = azmodels.StorageEntity('cont')
ase._mode = azmodels.StorageModes.Page
ase._name = 'name'
ase._encryption = None
ase._size = 1
ud = upload.Descriptor(
lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
ud._offset = 1
ud.md5 = hashlib.md5()
ud.md5.update(b'z')
meta = ud.generate_metadata()
assert meta is None
md5 = hashlib.md5()
md5.update(b'z' + b'\0' * 511)
assert ud.md5.digest() == md5.digest()
# test fileattr meta
opts = mock.MagicMock()
opts.chunk_size_bytes = 1
opts.one_shot_bytes = 0
opts.store_file_properties.attributes = True
opts.store_file_properties.md5 = True
opts.rsa_public_key = None
ase = azmodels.StorageEntity('cont')
ase._mode = azmodels.StorageModes.Block
ase._name = 'name'
ase._encryption = None
# file attribute store is not available on Windows
if not util.on_windows():
ud = upload.Descriptor(
lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
meta = ud.generate_metadata()
assert metadata.JSON_KEY_BLOBXFER_METADATA in meta
assert metadata._JSON_KEY_FILE_ATTRIBUTES in meta[
metadata.JSON_KEY_BLOBXFER_METADATA]
# test enc meta
opts.store_file_properties.attributes = False
opts.store_file_properties.md5 = False
opts.rsa_public_key = 'abc'
ud = upload.Descriptor(
lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
ase.encryption_metadata = mock.MagicMock()
ase.encryption_metadata.convert_to_json_with_mac.return_value = {
'encmeta': 'encmeta'
}
meta = ud.generate_metadata()
assert 'encmeta' in meta
ud = upload.Descriptor(
lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
ud.hmac = None
ase.encryption_metadata = mock.MagicMock()
ase.encryption_metadata.convert_to_json_with_mac.return_value = {
'encmeta': 'encmeta'
}
meta = ud.generate_metadata()
assert 'encmeta' in meta
opts.store_file_properties.md5 = True
ud = upload.Descriptor(
lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
ase.encryption_metadata = mock.MagicMock()
ase.encryption_metadata.convert_to_json_with_mac.return_value = {
'encmeta': 'encmeta'
}
meta = ud.generate_metadata()
assert 'encmeta' in meta
# test vio meta
opts = mock.MagicMock()
opts.chunk_size_bytes = 1
opts.one_shot_bytes = 0
opts.store_file_properties.md5 = True
opts.rsa_public_key = None
ase = azmodels.StorageEntity('cont')
ase._mode = azmodels.StorageModes.Block
ase._name = 'name'
ase._encryption = None
lp.view = mock.MagicMock()
lp.view.mode = upload.VectoredIoDistributionMode.Stripe
ud = upload.Descriptor(
lp, ase, 'uid', opts, mock.MagicMock(), mock.MagicMock())
with mock.patch(
'blobxfer.models.metadata.generate_vectored_io_stripe_metadata',
return_value={'viometa': 'viometa'}):
meta = ud.generate_metadata()
assert metadata.JSON_KEY_BLOBXFER_METADATA in meta
assert 'viometa' in meta[metadata.JSON_KEY_BLOBXFER_METADATA]
| 31.628429 | 79 | 0.624931 | 4,750 | 38,049 | 4.792842 | 0.062316 | 0.065097 | 0.042563 | 0.038742 | 0.827769 | 0.784415 | 0.764781 | 0.746464 | 0.729992 | 0.698717 | 0 | 0.013171 | 0.263686 | 38,049 | 1,202 | 80 | 31.654742 | 0.799436 | 0.019764 | 0 | 0.741036 | 0 | 0 | 0.036075 | 0.012669 | 0 | 0 | 0 | 0 | 0.173307 | 1 | 0.015936 | false | 0 | 0.01494 | 0.000996 | 0.031873 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
# =============================================================================
# layint_api/apis/event_api.py
# (repo: LayeredInsight/layint_api_python, license: Apache-2.0)
# =============================================================================
"""
Layered Insight Assessment, Compliance, Witness & Control
LI Assessment & Compliance performs static vulnerability analysis, license and package compliance. LI Witness provides deep insight and analytics into containerized applications. Control provides dynamic runtime security and analytics for containerized applications. You can find out more about the Layered Insight Suite at [http://layeredinsight.com](http://layeredinsight.com).
OpenAPI spec version: 0.10
Contact: help@layeredinsight.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import sys
import os
import re
# python 2 and python 3 compatibility library
from six import iteritems
from ..configuration import Configuration
from ..api_client import ApiClient
class EventApi(object):
"""
NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
config = Configuration()
if api_client:
self.api_client = api_client
else:
if not config.api_client:
config.api_client = ApiClient()
self.api_client = config.api_client
def describe_event(self, event_id, **kwargs):
"""
Gets description about specific event
Describes an event in a manner that can be understood by humans.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.describe_event(event_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str event_id: hexadecimal ID of event to get description of (required)
:return: AlertEvents
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.describe_event_with_http_info(event_id, **kwargs)
else:
(data) = self.describe_event_with_http_info(event_id, **kwargs)
return data
def describe_event_with_http_info(self, event_id, **kwargs):
"""
Gets description about specific event
Describes an event in a manner that can be understood by humans.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.describe_event_with_http_info(event_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str event_id: hexadecimal ID of event to get description of (required)
:return: AlertEvents
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['event_id']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method describe_event" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'event_id' is set
if ('event_id' not in params) or (params['event_id'] is None):
raise ValueError("Missing the required parameter `event_id` when calling `describe_event`")
collection_formats = {}
path_params = {}
if 'event_id' in params:
path_params['eventID'] = params['event_id']
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = ['ApiKey']
return self.api_client.call_api('/Events/{eventID}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='AlertEvents',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_file_accessors(self, event_id, **kwargs):
"""
Get programs accessing a file
Get a list of programs attempting to access the file in this event
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_file_accessors(event_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str event_id: hexadecimal ID of event to get description of (required)
:return: list[str]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.get_file_accessors_with_http_info(event_id, **kwargs)
else:
(data) = self.get_file_accessors_with_http_info(event_id, **kwargs)
return data
def get_file_accessors_with_http_info(self, event_id, **kwargs):
"""
Get programs accessing a file
Get a list of programs attempting to access the file in this event
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_file_accessors_with_http_info(event_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str event_id: hexadecimal ID of event to get description of (required)
:return: list[str]
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['event_id']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_file_accessors" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'event_id' is set
if ('event_id' not in params) or (params['event_id'] is None):
raise ValueError("Missing the required parameter `event_id` when calling `get_file_accessors`")
collection_formats = {}
path_params = {}
if 'event_id' in params:
path_params['eventID'] = params['event_id']
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = ['ApiKey']
return self.api_client.call_api('/Events/{eventID}/FileAccessors', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[str]',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_file_executors(self, event_id, **kwargs):
"""
Get programs executing a file
Get a list of programs attempting to execute the file in this event
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_file_executors(event_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str event_id: hexadecimal ID of event to get description of (required)
:return: list[str]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.get_file_executors_with_http_info(event_id, **kwargs)
else:
(data) = self.get_file_executors_with_http_info(event_id, **kwargs)
return data
def get_file_executors_with_http_info(self, event_id, **kwargs):
"""
Get programs executing a file
Get a list of programs attempting to execute the file in this event
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_file_executors_with_http_info(event_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str event_id: hexadecimal ID of event to get description of (required)
:return: list[str]
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['event_id']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_file_executors" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'event_id' is set
if ('event_id' not in params) or (params['event_id'] is None):
raise ValueError("Missing the required parameter `event_id` when calling `get_file_executors`")
collection_formats = {}
path_params = {}
if 'event_id' in params:
path_params['eventID'] = params['event_id']
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = ['ApiKey']
return self.api_client.call_api('/Events/{eventID}/FileExecutors', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[str]',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_source_ip(self, event_id, **kwargs):
"""
Get IP address used in event
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_source_ip(event_id, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str event_id: hexadecimal ID of event to get description of (required)
:return: str
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.get_source_ip_with_http_info(event_id, **kwargs)
else:
(data) = self.get_source_ip_with_http_info(event_id, **kwargs)
return data
    def get_source_ip_with_http_info(self, event_id, **kwargs):
        """
        Get IP address used in event
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.get_source_ip_with_http_info(event_id, callback=callback_function)
        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str event_id: hexadecimal ID of the event to get the source IP of (required)
        :return: str
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['event_id']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')
        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_source_ip" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'event_id' is set
        if ('event_id' not in params) or (params['event_id'] is None):
            raise ValueError("Missing the required parameter `event_id` when calling `get_source_ip`")
        collection_formats = {}
        path_params = {}
        if 'event_id' in params:
            path_params['eventID'] = params['event_id']
        query_params = []
        header_params = {}
        form_params = []
        local_var_files = {}
        body_params = None
        # Authentication setting
        auth_settings = ['ApiKey']
        return self.api_client.call_api('/Events/{eventID}/SourceIP', 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='str',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
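# The generated methods above all follow the same sync/async dispatch convention:
# passing a `callback` keyword makes the call asynchronous (the real client hands
# back a request thread), while omitting it returns the response data directly.
# A minimal, hypothetical stand-alone sketch of that pattern (not part of the
# generated client; `dispatch` and the stubbed response value are illustrative):

```python
def dispatch(event_id, **kwargs):
    def do_request():
        # stand-in for the real HTTP call to /Events/{eventID}/SourceIP
        return "203.0.113.7"

    callback = kwargs.get('callback')
    if callback:
        # the real client runs this on a worker thread and returns the thread;
        # here we invoke the callback synchronously for clarity
        callback(do_request())
        return None
    return do_request()


# synchronous use: the data comes back as the return value
ip = dispatch("5d1e9f")

# "asynchronous" use: the data is delivered to the callback instead
results = []
dispatch("5d1e9f", callback=results.append)
```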
    def get_source_log(self, event_id, **kwargs):
        """
        Get log that resulted in an event
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.get_source_log(event_id, callback=callback_function)
        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str event_id: hexadecimal ID of the event to get the source log of (required)
        :return: ContainerLog
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        if kwargs.get('callback'):
            return self.get_source_log_with_http_info(event_id, **kwargs)
        else:
            (data) = self.get_source_log_with_http_info(event_id, **kwargs)
            return data
    def get_source_log_with_http_info(self, event_id, **kwargs):
        """
        Get log that resulted in an event
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please define a `callback` function
        to be invoked when receiving the response.
        >>> def callback_function(response):
        >>>     pprint(response)
        >>>
        >>> thread = api.get_source_log_with_http_info(event_id, callback=callback_function)
        :param callback function: The callback function
            for asynchronous request. (optional)
        :param str event_id: hexadecimal ID of the event to get the source log of (required)
        :return: ContainerLog
                 If the method is called asynchronously,
                 returns the request thread.
        """
        all_params = ['event_id']
        all_params.append('callback')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')
        params = locals()
        for key, val in iteritems(params['kwargs']):
            if key not in all_params:
                raise TypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_source_log" % key
                )
            params[key] = val
        del params['kwargs']
        # verify the required parameter 'event_id' is set
        if ('event_id' not in params) or (params['event_id'] is None):
            raise ValueError("Missing the required parameter `event_id` when calling `get_source_log`")
        collection_formats = {}
        path_params = {}
        if 'event_id' in params:
            path_params['eventID'] = params['event_id']
        query_params = []
        header_params = {}
        form_params = []
        local_var_files = {}
        body_params = None
        # Authentication setting
        auth_settings = ['ApiKey']
        return self.api_client.call_api('/Events/{eventID}/SourceLog', 'GET',
                                        path_params,
                                        query_params,
                                        header_params,
                                        body=body_params,
                                        post_params=form_params,
                                        files=local_var_files,
                                        response_type='ContainerLog',
                                        auth_settings=auth_settings,
                                        callback=params.get('callback'),
                                        _return_http_data_only=params.get('_return_http_data_only'),
                                        _preload_content=params.get('_preload_content', True),
                                        _request_timeout=params.get('_request_timeout'),
                                        collection_formats=collection_formats)
] | null | null | null | # This U-Net implementation is originally imported from zhixuhao's 'unet' GitHub repository and modified for
# 3D convolutions instead of 3D convolutions.
# https://github.com/zhixuhao/unet/blob/master/model.py
from tensorflow.keras.models import Model
from tensorflow.keras.layers import (
    Input,
    Conv3D,
    MaxPooling3D,
    UpSampling3D,
    Dropout,
    Conv3DTranspose,
    BatchNormalization,
    concatenate
)
def unet_3d_upsampling_dropout(input_size=(240, 240, 160, 4), unet_resize_factor=2, unet_dropout_rate=0.3, num_classes=4,
                               binary_model=False):
    """Constructs a U-Net 3D segmentation model with Dropout layers and UpSampling3D -> Conv3D layers.

    Args:
        input_size: (tuple) Keras model input shape is (height, width, length, channels) with
            'channels_last' data format (default: (240, 240, 160, 4)). Note: each spatial dimension
            must be a multiple of 16 since the network downsamples four times by a factor of 2.
            Source: 'data_format' parameter documentation: https://keras.io/api/layers/convolution_layers/convolution3d/
        unet_resize_factor: (int) Resize factor of the number of filters (channels) per Convolutional layer in the U-Net
            model (must be >= 1, such that 1 means retaining the original number of filters (channels)
            per Convolutional layer in the U-Net model) (default: 2 (half-size)).
        unet_dropout_rate: (float) Dropout rate for the Dropout layers in the U-Net model (default: 0.3).
        num_classes: (int) Number of classes in the training dataset (default: 4).
        binary_model: (boolean) If True, make the last layer have one filter with 'sigmoid' activation for a 3D binary
            segmentation model.
    """
    inputs = Input(shape=input_size)

    # Contractive path
    conv1 = Conv3D(filters=64 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(inputs)
    conv1 = Conv3D(filters=64 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(conv1)
    pool1 = MaxPooling3D(pool_size=(2, 2, 2))(conv1)
    conv2 = Conv3D(filters=128 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(pool1)
    conv2 = Conv3D(filters=128 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(conv2)
    pool2 = MaxPooling3D(pool_size=(2, 2, 2))(conv2)
    conv3 = Conv3D(filters=256 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(pool2)
    conv3 = Conv3D(filters=256 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(conv3)
    pool3 = MaxPooling3D(pool_size=(2, 2, 2))(conv3)
    conv4 = Conv3D(filters=512 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(pool3)
    conv4 = Conv3D(filters=512 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(conv4)
    drop4 = Dropout(rate=unet_dropout_rate)(conv4)
    pool4 = MaxPooling3D(pool_size=(2, 2, 2))(drop4)
    conv5 = Conv3D(filters=1024 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(pool4)
    conv5 = Conv3D(filters=1024 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(conv5)
    drop5 = Dropout(rate=unet_dropout_rate)(conv5)

    # Expansive path
    up6 = Conv3D(filters=512 // unet_resize_factor, kernel_size=2, activation='relu', padding='same', kernel_initializer='he_normal')(UpSampling3D(size=(2, 2, 2))(drop5))
    merge6 = concatenate([drop4, up6], axis=4)
    conv6 = Conv3D(filters=512 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(merge6)
    conv6 = Conv3D(filters=512 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(conv6)
    up7 = Conv3D(filters=256 // unet_resize_factor, kernel_size=2, activation='relu', padding='same', kernel_initializer='he_normal')(UpSampling3D(size=(2, 2, 2))(conv6))
    merge7 = concatenate([conv3, up7], axis=4)
    conv7 = Conv3D(filters=256 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(merge7)
    conv7 = Conv3D(filters=256 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(conv7)
    up8 = Conv3D(filters=128 // unet_resize_factor, kernel_size=2, activation='relu', padding='same', kernel_initializer='he_normal')(UpSampling3D(size=(2, 2, 2))(conv7))
    merge8 = concatenate([conv2, up8], axis=4)
    conv8 = Conv3D(filters=128 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(merge8)
    conv8 = Conv3D(filters=128 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(conv8)
    up9 = Conv3D(filters=64 // unet_resize_factor, kernel_size=2, activation='relu', padding='same', kernel_initializer='he_normal')(UpSampling3D(size=(2, 2, 2))(conv8))
    merge9 = concatenate([conv1, up9], axis=4)
    conv9 = Conv3D(filters=64 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(merge9)
    conv9 = Conv3D(filters=64 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(conv9)
    conv9 = Conv3D(filters=2, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(conv9)

    # Final layer
    if binary_model:
        conv10 = Conv3D(filters=1, kernel_size=1, activation="sigmoid")(conv9)
    else:
        conv10 = Conv3D(filters=num_classes, kernel_size=1, activation="softmax")(conv9)

    model = Model(inputs=inputs, outputs=conv10)

    return model
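# The docstrings above describe unet_resize_factor as uniformly shrinking the
# channel count of every convolutional block (64..1024 at the five contractive
# levels, as in the original U-Net). A small stand-alone sketch of that scaling
# (the helper name `scaled_filters` is illustrative, not part of this module):

```python
# Base channel widths of the five encoder levels in this U-Net
BASE_FILTERS = [64, 128, 256, 512, 1024]


def scaled_filters(base, unet_resize_factor):
    """Channel count per level after applying the resize factor (floor division,
    mirroring the `filters=N // unet_resize_factor` expressions in the model)."""
    return [f // unet_resize_factor for f in base]


# default factor of 2 halves every level; factor 1 keeps the original widths
half_size = scaled_filters(BASE_FILTERS, 2)
full_size = scaled_filters(BASE_FILTERS, 1)
```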
def unet_3d_conv3dtranspose_dropout(input_size=(240, 240, 160, 4), unet_resize_factor=2, unet_dropout_rate=0.3, num_classes=4,
                                    binary_model=False):
    """Constructs a U-Net 3D segmentation model with Dropout layers and Conv3DTranspose layers instead of
    UpSampling3D -> Conv3D layers.

    Args:
        input_size: (tuple) Keras model input shape is (height, width, length, channels) with
            'channels_last' data format (default: (240, 240, 160, 4)). Note: each spatial dimension
            must be a multiple of 16 since the network downsamples four times by a factor of 2.
            Source: 'data_format' parameter documentation: https://keras.io/api/layers/convolution_layers/convolution3d/
        unet_resize_factor: (int) Resize factor of the number of filters (channels) per Convolutional layer in the U-Net
            model (must be >= 1, such that 1 means retaining the original number of filters (channels)
            per Convolutional layer in the U-Net model) (default: 2 (half-size)).
        unet_dropout_rate: (float) Dropout rate for the Dropout layers in the U-Net model (default: 0.3).
        num_classes: (int) Number of classes in the training dataset (default: 4).
        binary_model: (boolean) If True, make the last layer have one filter with 'sigmoid' activation for a 3D binary
            segmentation model.
    """
    inputs = Input(shape=input_size)

    # Contractive path
    conv1 = Conv3D(filters=64 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(inputs)
    conv1 = Conv3D(filters=64 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(conv1)
    pool1 = MaxPooling3D(pool_size=(2, 2, 2))(conv1)
    conv2 = Conv3D(filters=128 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(pool1)
    conv2 = Conv3D(filters=128 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(conv2)
    pool2 = MaxPooling3D(pool_size=(2, 2, 2))(conv2)
    conv3 = Conv3D(filters=256 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(pool2)
    conv3 = Conv3D(filters=256 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(conv3)
    pool3 = MaxPooling3D(pool_size=(2, 2, 2))(conv3)
    conv4 = Conv3D(filters=512 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(pool3)
    conv4 = Conv3D(filters=512 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(conv4)
    drop4 = Dropout(rate=unet_dropout_rate)(conv4)
    pool4 = MaxPooling3D(pool_size=(2, 2, 2))(drop4)
    conv5 = Conv3D(filters=1024 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(pool4)
    conv5 = Conv3D(filters=1024 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(conv5)
    drop5 = Dropout(rate=unet_dropout_rate)(conv5)

    # Expansive path
    up6 = Conv3DTranspose(filters=512 // unet_resize_factor, kernel_size=(2, 2, 2), strides=(2, 2, 2), padding="same", kernel_initializer='he_normal')(drop5)
    merge6 = concatenate([drop4, up6], axis=4)
    conv6 = Conv3D(filters=512 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(merge6)
    conv6 = Conv3D(filters=512 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(conv6)
    up7 = Conv3DTranspose(filters=256 // unet_resize_factor, kernel_size=(2, 2, 2), strides=(2, 2, 2), padding="same", kernel_initializer='he_normal')(conv6)
    merge7 = concatenate([conv3, up7], axis=4)
    conv7 = Conv3D(filters=256 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(merge7)
    conv7 = Conv3D(filters=256 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(conv7)
    up8 = Conv3DTranspose(filters=128 // unet_resize_factor, kernel_size=(2, 2, 2), strides=(2, 2, 2), padding="same", kernel_initializer='he_normal')(conv7)
    merge8 = concatenate([conv2, up8], axis=4)
    conv8 = Conv3D(filters=128 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(merge8)
    conv8 = Conv3D(filters=128 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(conv8)
    up9 = Conv3DTranspose(filters=64 // unet_resize_factor, kernel_size=(2, 2, 2), strides=(2, 2, 2), padding="same", kernel_initializer='he_normal')(conv8)
    merge9 = concatenate([conv1, up9], axis=4)
    conv9 = Conv3D(filters=64 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(merge9)
    conv9 = Conv3D(filters=64 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(conv9)
    conv9 = Conv3D(filters=2, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(conv9)

    # Final layer
    if binary_model:
        conv10 = Conv3D(filters=1, kernel_size=1, activation="sigmoid")(conv9)
    else:
        conv10 = Conv3D(filters=num_classes, kernel_size=1, activation="softmax")(conv9)

    model = Model(inputs=inputs, outputs=conv10)

    return model
def unet_3d_upsampling_batchnormalization(input_size=(240, 240, 160, 4), unet_resize_factor=2, num_classes=4, binary_model=False):
    """Constructs a U-Net 3D segmentation model with BatchNormalization layers after each Conv3D layer instead of
    using Dropout layers in the expansive path, and with UpSampling3D -> Conv3D layers.

    Args:
        input_size: (tuple) Keras model input shape is (height, width, length, channels) with
            'channels_last' data format (default: (240, 240, 160, 4)). Note: each spatial dimension
            must be a multiple of 16 since the network downsamples four times by a factor of 2.
            Source: 'data_format' parameter documentation: https://keras.io/api/layers/convolution_layers/convolution3d/
        unet_resize_factor: (int) Resize factor of the number of filters (channels) per Convolutional layer in the U-Net
            model (must be >= 1, such that 1 means retaining the original number of filters (channels)
            per Convolutional layer in the U-Net model) (default: 2 (half-size)).
        num_classes: (int) Number of classes in the training dataset (default: 4).
        binary_model: (boolean) If True, make the last layer have one filter with 'sigmoid' activation for a 3D binary
            segmentation model.
    """
    inputs = Input(shape=input_size)

    # Contractive path
    conv1 = Conv3D(filters=64 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(inputs)
    bn1 = BatchNormalization()(conv1)
    conv1 = Conv3D(filters=64 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(bn1)
    bn1 = BatchNormalization()(conv1)
    pool1 = MaxPooling3D(pool_size=(2, 2, 2))(bn1)
    conv2 = Conv3D(filters=128 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(pool1)
    bn2 = BatchNormalization()(conv2)
    conv2 = Conv3D(filters=128 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(bn2)
    bn2 = BatchNormalization()(conv2)
    pool2 = MaxPooling3D(pool_size=(2, 2, 2))(bn2)
    conv3 = Conv3D(filters=256 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(pool2)
    bn3 = BatchNormalization()(conv3)
    conv3 = Conv3D(filters=256 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(bn3)
    bn3 = BatchNormalization()(conv3)
    pool3 = MaxPooling3D(pool_size=(2, 2, 2))(bn3)
    conv4 = Conv3D(filters=512 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(pool3)
    bn4 = BatchNormalization()(conv4)
    conv4 = Conv3D(filters=512 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(bn4)
    bn4 = BatchNormalization()(conv4)
    pool4 = MaxPooling3D(pool_size=(2, 2, 2))(bn4)
    conv5 = Conv3D(filters=1024 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(pool4)
    bn5 = BatchNormalization()(conv5)
    conv5 = Conv3D(filters=1024 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(bn5)
    bn5 = BatchNormalization()(conv5)

    # Expansive path
    up6 = Conv3D(filters=512 // unet_resize_factor, kernel_size=2, activation='relu', padding='same', kernel_initializer='he_normal')(UpSampling3D(size=(2, 2, 2))(bn5))
    bn6 = BatchNormalization()(up6)
    merge6 = concatenate([bn4, bn6], axis=4)
    conv6 = Conv3D(filters=512 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(merge6)
    bn6 = BatchNormalization()(conv6)
    conv6 = Conv3D(filters=512 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(bn6)
    bn6 = BatchNormalization()(conv6)
    up7 = Conv3D(filters=256 // unet_resize_factor, kernel_size=2, activation='relu', padding='same', kernel_initializer='he_normal')(UpSampling3D(size=(2, 2, 2))(bn6))
    bn7 = BatchNormalization()(up7)
    merge7 = concatenate([bn3, bn7], axis=4)
    conv7 = Conv3D(filters=256 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(merge7)
    bn7 = BatchNormalization()(conv7)
    conv7 = Conv3D(filters=256 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(bn7)
    bn7 = BatchNormalization()(conv7)
    up8 = Conv3D(filters=128 // unet_resize_factor, kernel_size=2, activation='relu', padding='same', kernel_initializer='he_normal')(UpSampling3D(size=(2, 2, 2))(bn7))
    bn8 = BatchNormalization()(up8)
    merge8 = concatenate([bn2, bn8], axis=4)
    conv8 = Conv3D(filters=128 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(merge8)
    bn8 = BatchNormalization()(conv8)
    conv8 = Conv3D(filters=128 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(bn8)
    bn8 = BatchNormalization()(conv8)
    up9 = Conv3D(filters=64 // unet_resize_factor, kernel_size=2, activation='relu', padding='same', kernel_initializer='he_normal')(UpSampling3D(size=(2, 2, 2))(bn8))
    bn9 = BatchNormalization()(up9)
    merge9 = concatenate([bn1, bn9], axis=4)
    conv9 = Conv3D(filters=64 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(merge9)
    bn9 = BatchNormalization()(conv9)
    conv9 = Conv3D(filters=64 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(bn9)
    bn9 = BatchNormalization()(conv9)
    conv9 = Conv3D(filters=2, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(bn9)
    bn9 = BatchNormalization()(conv9)

    # Final layer
    if binary_model:
        conv10 = Conv3D(filters=1, kernel_size=1, activation="sigmoid")(bn9)
    else:
        conv10 = Conv3D(filters=num_classes, kernel_size=1, activation="softmax")(bn9)

    model = Model(inputs=inputs, outputs=conv10)

    return model
def unet_3d_conv3dtranspose_batchnormalization(input_size=(240, 240, 160, 4), unet_resize_factor=2, num_classes=4, binary_model=False):
    """Constructs a U-Net 3D segmentation model with BatchNormalization layers after each Conv3D layer instead of
    using Dropout layers in the expansive path, and with Conv3DTranspose layers instead of UpSampling3D -> Conv3D
    layers.

    Args:
        input_size: (tuple) Keras model input shape is (height, width, length, channels) with
            'channels_last' data format (default: (240, 240, 160, 4)). Note: each spatial dimension
            must be a multiple of 16 since the network downsamples four times by a factor of 2.
            Source: 'data_format' parameter documentation: https://keras.io/api/layers/convolution_layers/convolution3d/
        unet_resize_factor: (int) Resize factor of the number of filters (channels) per Convolutional layer in the U-Net
            model (must be >= 1, such that 1 means retaining the original number of filters (channels)
            per Convolutional layer in the U-Net model) (default: 2 (half-size)).
        num_classes: (int) Number of classes in the training dataset (default: 4).
        binary_model: (boolean) If True, make the last layer have one filter with 'sigmoid' activation for a 3D binary
            segmentation model.
    """
    inputs = Input(shape=input_size)

    # Contractive path
    conv1 = Conv3D(filters=64 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(inputs)
    bn1 = BatchNormalization()(conv1)
    conv1 = Conv3D(filters=64 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(bn1)
    bn1 = BatchNormalization()(conv1)
    pool1 = MaxPooling3D(pool_size=(2, 2, 2))(bn1)
    conv2 = Conv3D(filters=128 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(pool1)
    bn2 = BatchNormalization()(conv2)
    conv2 = Conv3D(filters=128 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(bn2)
    bn2 = BatchNormalization()(conv2)
    pool2 = MaxPooling3D(pool_size=(2, 2, 2))(bn2)
    conv3 = Conv3D(filters=256 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(pool2)
    bn3 = BatchNormalization()(conv3)
    conv3 = Conv3D(filters=256 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(bn3)
    bn3 = BatchNormalization()(conv3)
    pool3 = MaxPooling3D(pool_size=(2, 2, 2))(bn3)
    conv4 = Conv3D(filters=512 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(pool3)
    bn4 = BatchNormalization()(conv4)
    conv4 = Conv3D(filters=512 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(bn4)
    bn4 = BatchNormalization()(conv4)
    pool4 = MaxPooling3D(pool_size=(2, 2, 2))(bn4)
    conv5 = Conv3D(filters=1024 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(pool4)
    bn5 = BatchNormalization()(conv5)
    conv5 = Conv3D(filters=1024 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(bn5)
    bn5 = BatchNormalization()(conv5)

    # Expansive path
    up6 = Conv3DTranspose(filters=512 // unet_resize_factor, kernel_size=(2, 2, 2), strides=(2, 2, 2), padding="same", kernel_initializer='he_normal')(bn5)
    bn6 = BatchNormalization()(up6)
    merge6 = concatenate([bn4, bn6], axis=4)
    conv6 = Conv3D(filters=512 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(merge6)
    bn6 = BatchNormalization()(conv6)
    conv6 = Conv3D(filters=512 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(bn6)
    bn6 = BatchNormalization()(conv6)
    up7 = Conv3DTranspose(filters=256 // unet_resize_factor, kernel_size=(2, 2, 2), strides=(2, 2, 2), padding="same", kernel_initializer='he_normal')(bn6)
    bn7 = BatchNormalization()(up7)
    merge7 = concatenate([bn3, bn7], axis=4)
    conv7 = Conv3D(filters=256 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(merge7)
    bn7 = BatchNormalization()(conv7)
    conv7 = Conv3D(filters=256 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(bn7)
    bn7 = BatchNormalization()(conv7)
    up8 = Conv3DTranspose(filters=128 // unet_resize_factor, kernel_size=(2, 2, 2), strides=(2, 2, 2), padding="same", kernel_initializer='he_normal')(bn7)
    bn8 = BatchNormalization()(up8)
    merge8 = concatenate([bn2, bn8], axis=4)
    conv8 = Conv3D(filters=128 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(merge8)
    bn8 = BatchNormalization()(conv8)
    conv8 = Conv3D(filters=128 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(bn8)
    bn8 = BatchNormalization()(conv8)
    up9 = Conv3DTranspose(filters=64 // unet_resize_factor, kernel_size=(2, 2, 2), strides=(2, 2, 2), padding="same", kernel_initializer='he_normal')(bn8)
    bn9 = BatchNormalization()(up9)
    merge9 = concatenate([bn1, bn9], axis=4)
    conv9 = Conv3D(filters=64 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(merge9)
    bn9 = BatchNormalization()(conv9)
    conv9 = Conv3D(filters=64 // unet_resize_factor, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(bn9)
    bn9 = BatchNormalization()(conv9)
    conv9 = Conv3D(filters=2, kernel_size=3, activation='relu', padding='same', kernel_initializer='he_normal')(bn9)
    bn9 = BatchNormalization()(conv9)

    # Final layer
    if binary_model:
        conv10 = Conv3D(filters=1, kernel_size=1, activation="sigmoid")(bn9)
    else:
        conv10 = Conv3D(filters=num_classes, kernel_size=1, activation="softmax")(bn9)

    model = Model(inputs=inputs, outputs=conv10)

    return model
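# All four constructors note that the input dimensions must be divisible by 16:
# each of the four MaxPooling3D levels halves every spatial axis, so 2**4 = 16
# must divide each axis for the upsampled tensors to line up with their skip
# connections. A small stand-alone check (the helper `valid_input_dims` is
# illustrative, not part of this module):

```python
def valid_input_dims(dims, num_pool_levels=4):
    """True when every spatial dimension survives `num_pool_levels` rounds of
    stride-2 pooling without truncation, i.e. is divisible by 2**num_pool_levels."""
    factor = 2 ** num_pool_levels
    return all(d % factor == 0 for d in dims)


# the default input_size (240, 240, 160) satisfies the constraint
assert valid_input_dims((240, 240, 160))
```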